
  • See, this is technically true. But that is not how (say) YouTube presents itself.

    They market professional creators, and algorithmically prioritize them. They set up extensive systems for them. They divert away from external linking, and create systems to explicitly keep people within their ad ecosystem. To regulators, YouTube argues that it’s still that same site to post “creative videos” to, like the cat video site it was a long time ago. Yet in the same breath, they turn around and do everything they can to crowd out professional journalism and media, to promote themselves across services, even viewing them as “attention competition.”

    They’re having their cake and eating it.

    Discord’s the same. They depict it as private chat for gamers and friend groups, when it’s really host to larger interest communities, and it’s eating similar sources alive.


    Hence I disagree.

    YouTube is setting the expectation for creators to make money, while arguing exactly what you’re arguing in court. And this:

    I would compare it to complaining that a service that teaches you how to knit is only sufficient for hobbyists and rarely allows one to build a successful company selling clothes.

    This is true! Yet YouTube wouldn’t be caught dead saying it, as it would cost them attention.

    And that’s not okay.







  • AFAIK there’s also the fact that Chinese companies have their own toolchains, and are releasing truly open-source, high-quality AI solutions.

    One interesting thing about the Chinese “AI Tigers” is the lack of Tech Bro evangelism.

    They see their models as tools. Not black-box magic oracles, not human replacements. And they train, structure, and productize them accordingly.

    But with AI you can use whatever tool is best value, and switch to the competition whenever you want.

    Big Tech is making this really hard, though.

    In the business world, there’s a lot of paranoia about using Chinese LLM weights. Which is totally bogus, but also understandably hard to explain.

    And OpenAI and such are working overtime to lock customers in. See: iOS being ChatGPT-only; no “pick your own API.” Or Disney using Sora when they should really be rolling their own finetune.
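    To illustrate the “pick your own API” point: most open-weight providers expose OpenAI-compatible chat-completions endpoints, so in principle switching vendors is a one-line config change. A minimal sketch of the idea — the provider names, URLs, and model IDs below are illustrative placeholders, not endorsements of specific endpoints:

```python
# Sketch: with OpenAI-compatible "/chat/completions" endpoints, swapping
# providers means swapping a base URL and model ID, nothing more.
# All URLs and model names here are hypothetical placeholders.

PROVIDERS = {
    "openai":   {"base_url": "https://api.openai.com/v1",  "model": "gpt-4o"},
    "deepseek": {"base_url": "https://api.example-cn.com/v1", "model": "deepseek-chat"},
    "local":    {"base_url": "http://localhost:8000/v1",   "model": "glm-4.5-air"},
}

def chat_request(provider: str, prompt: str) -> dict:
    """Build the request target and payload; only the provider key changes."""
    cfg = PROVIDERS[provider]
    return {
        "url": f"{cfg['base_url']}/chat/completions",
        "json": {
            "model": cfg["model"],
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# The same application code works against any of the three backends:
req = chat_request("local", "Summarize these logs.")
```

    Lock-in schemes (hardcoded single-vendor integrations, no configurable endpoint) break exactly this interchangeability.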



  • brucethemoose@lemmy.world to Comic Strips@lemmy.world · Debate · edited · 15 hours ago

    Yeah. This is something I keep realizing.

    So many people simply seek to ‘support’ their tribe/idols. Scientific debate isn’t the point; loyalty and conformity are.

    It’s a feature of a lot of religious culture. And, in an oddly similar way, influencer culture.

    And there is absolutely nothing you can do about it unless there’s a really huge personal connection/issue.


  • That’s what I’m saying.

    You have to use YouTube, TikTok, whatever. But at least be a little more self-aware and vocal about the platform’s pitfalls.

    And to be clear, some content creators already are. But it needs to be the majority. They need to remind every single viewer “this place is a trap, a bloodsucking leech, and we’re here because we have no choice.” Just subtly enough to avoid deranking, of course.







  • Yeah, that is a great application because you can eyeball your bash script and verify its functionality. It’s perfectly checkable. This is a very important distinction.

    It also doesn’t require “creativity” or speculation, so (I assume) you can use a very low temperature.
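    For context on what “low temperature” means mechanically: temperature divides the model’s logits before softmax sampling, so as it approaches zero, sampling collapses onto the single most likely token — exactly what you want for deterministic, checkable output like a script. A minimal sketch with toy logits (not a real model):

```python
import math
import random

def sample_token(logits: dict, temperature: float, rng: random.Random) -> str:
    """Softmax-sample a token after dividing each logit by the temperature."""
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    weights = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point edge cases

logits = {"ls": 2.0, "rm": 1.0, "cat": 0.5}  # toy next-token scores
rng = random.Random(0)

# Near-zero temperature: effectively greedy, always the top token.
cold = [sample_token(logits, 0.01, rng) for _ in range(100)]
# High temperature: the distribution flattens, riskier tokens appear.
hot = [sample_token(logits, 5.0, rng) for _ in range(100)]
```

    With `temperature=0.01`, every draw in `cold` is the top-scoring token; with `temperature=5.0`, `hot` mixes in the alternatives — which is why speculative, “creative” tasks tolerate high temperature but verifiable scripting tasks shouldn’t use it.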

    Contrast that with Red Hat’s examples.

    They’re feeding it a massive dump of context (basically all the system logs), and asking the LLM to reach into its own knowledge pool for an interpretation.

    Its assessment is long and not easily verifiable; note how the blog author even admitted “I’ll check if it works later.” It requires more “world knowledge.” And long context is hard for LLMs with few active parameters.

    Hence, you really want a model with more active parameters for that… Or, honestly, just reaching out to a free LLM API.


    Thing is, the Red Hat blogger could probably run GLM Air on his laptop and get a correct answer spit out, but it would be extremely finicky and time-consuming.