• 2 Posts
  • 13 Comments
Joined 1 month ago
Cake day: September 18th, 2025





  • So, this is what I’ve understood so far:

    • A group of authors, including George R.R. Martin, sued OpenAI in 2023. They said the company used their books without permission to train ChatGPT and that the AI can produce content too similar to their original work.

    • In October 2025, a judge ruled that the lawsuit could move forward. This came after ChatGPT generated a detailed fake sequel to one of Martin’s books, complete with characters and world elements closely tied to his universe. The judge said a jury could see this as copyright infringement.

    • The court has not yet decided whether OpenAI’s use counts as fair use. That remains a key legal question.

    • This case is part of a bigger debate over whether AI companies can train on copyrighted books without asking or paying. In a similar case against Anthropic, a court suggested the training itself might be fair use, but the company still paid $1.5 billion to settle claims over the pirated copies it had used.

    • No final decision has been made here, and no trial date has been set.



  • > The fact that many human drivers are “distracted, drunk, tired, or just reckless” is a huge point in favor of self-driving cars. There’s no way to guarantee that a human driver is focused and not reckless, and experience can only be guaranteed for professional drivers.

    You’re right that many human drivers are distracted, drunk or reckless, and that’s a serious problem. But not everyone is like that. Millions of people drive sober, focused and carefully every day, following the rules and handling tough situations without issue.

    When we say self-driving cars are safer, we’re usually comparing them to all human drivers, including the worst ones, while testing the cars only in favorable conditions, such as good weather and well-mapped areas. They often avoid driving in rain, snow or complex environments where judgment and adaptability matter most.

    That doesn’t seem fair. If these vehicles are going to replace human drivers entirely, they should be at least as good as a responsible, attentive person, not just better than someone texting or drunk. Right now, they still make strange mistakes, like stopping for plastic bags, misreading signals or freezing in uncertain situations. A calm, experienced driver would usually handle those moments just fine.

    So while self-driving tech has promise, calling it “safer” today overlooks both the competence of good drivers and the limitations of the current systems.

    Plus, the way they fail is different from how human drivers fail, which makes them harder for other drivers to react to.

    Once again, I believe we’ll get there eventually, but it’s still a bit rough for today.


  • You’re right to bring that up. There was and still is some concern about Ventoy using a lot of precompiled binary files (called “blobs”) in its source code, rather than building everything from source during release. This makes it harder to verify that the binaries are safe and haven’t been tampered with, especially after incidents like the XZ Utils backdoor in 2024.

    The developer acknowledges this and has started listing all the blobs with their sources and checksums here:
    https://github.com/ventoy/Ventoy/blob/master/BLOB_List.md
    This file was created in response to issue #3224, which was opened specifically to address concerns about these blobs. It lists a description for each blob, where it came from, and its SHA256 hash so users can check it manually (see the sketch at the end of this comment). However, it doesn’t include automated build scripts, so verification still depends on manual effort.

    The discussion started in early 2024 in issue #2795:
    https://github.com/ventoy/Ventoy/issues/2795

    And as of May 2025, the maintainer proposed a plan to improve transparency by using GitHub CI to build the blobs from source in separate repositories:
    https://github.com/ventoy/Ventoy/issues/3224

    No major malicious activity has been found, but the lack of full reproducible builds means some trust is required. If you’re security-conscious, it’s worth verifying the hashes yourself or considering alternatives. The project remains open source and widely used, but this issue hasn’t been fully resolved yet.
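
    If you do want to check a blob by hand, here’s a minimal sketch in Python (the file path and expected hash below are placeholders, not values taken from the actual BLOB_List.md):

      import hashlib

      # Placeholders: substitute a real blob path from the Ventoy tree and the
      # SHA256 listed for it in BLOB_List.md.
      BLOB_PATH = "path/to/some_blob.bin"
      EXPECTED_SHA256 = "0123abcd..."  # the full 64-hex-digit hash from the list

      def sha256_of(path: str) -> str:
          # Hash in 1 MiB chunks so large blobs don't need to fit in memory.
          h = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  h.update(chunk)
          return h.hexdigest()

      actual = sha256_of(BLOB_PATH)
      print("OK" if actual == EXPECTED_SHA256 else f"MISMATCH: {actual}")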



  • GPT-OSS is borderline crap: it’s not that smart, not that great, and it’s pretty censored, but it can have niche uses for programming. The 20B in particular can be easier to run in some setups than competitors like Qwen3-30B. The 120B is quite heavy, and its cost-to-performance ratio isn’t good.

    Meta abandoned the open-source ideal after Llama 4; they’ve gone closed source.

    Older open-source versions of Grok are literally useless; no one should use them. Their closed-source cloud models are decent.

    DeepSeek and Alibaba’s models like Qwen are good.


  • Think of AI as a mirror of you: at best, it matches your skill level; it can’t be smarter or better. If you’re unsure or make mistakes, it will likely repeat them. Like people, it can get stuck on hard problems, and without a human to help it just can’t find a solution. So while it’s useful, don’t fully trust it, and always be ready to step in and think for yourself.


  • Stick to a small circle of trusted people and websites. Skip mainstream news. Small blogs, niche forums, and tiny YouTube channels are often more honest.

    Avoid Google for discovery. It’s not great anymore. Use DuckDuckGo, Qwant, or Yandex instead. For deeper but less precise results, try Mojeek or Marginalia. Google works okay only if you’re searching within one site, like site:reddit.com.

    Sometimes, searching in other languages helps find hidden gems with less junk. Use a translator if needed.


  • Tools like Turnitin or GPTZero don’t work well enough to trust. The real issue isn’t just detecting AI writing; it’s doing it without falsely accusing students. Even a 0.5% false positive rate is too high when someone’s academic future is on the line (see the sketch below). I’m more concerned about wrongly flagging human-written work than about missing AI use. These tools can’t explain why they suspect AI, and at best they only catch obvious cases, ones you’d likely notice yourself anyway.
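
    To put that 0.5% in perspective, here’s a quick back-of-the-envelope sketch in Python (the submission count and AI-use share are assumptions, not data):

      # Even a small false positive rate wrongly flags many students at scale.
      # All numbers here are illustrative assumptions.
      submissions = 10_000          # essays checked per semester (assumed)
      false_positive_rate = 0.005   # the 0.5% figure from above
      ai_share = 0.10               # fraction actually AI-written (assumed)

      innocent = submissions * (1 - ai_share)
      wrongly_flagged = innocent * false_positive_rate
      print(f"~{wrongly_flagged:.0f} human-written essays wrongly flagged")
      # -> ~45 human-written essays wrongly flagged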