• 0 Posts
  • 13 Comments
Joined 1 month ago
Cake day: June 6th, 2025


  • He doesn’t even need to deploy it as “regular people” himself.

    Other companies, governments, and hell, even individuals are already deploying bots by the thousands just to shift public opinion. It’s why under any post sharing any political opinion, you’ll usually see a flurry of bots designed to trap users into lengthy chains of responses that try to shift sentiment on things like Israel’s genocide in Gaza, Trump’s billionaire-benefitting tax policies, etc.

    Flood public discourse enough, and the bandwagon fallacy becomes an extremely strong force to shift public opinion. All a billionaire has to do is spend a few thousand dollars on API credits, and they can make thousands or even millions of people at least second guess their beliefs, if not conform outright to what’s being espoused by the bots.

    On a platform like X, where the majority of people left are just reactionaries, grifters, and conspiracy theorists, this kind of thing isn’t just financially incentivized, it’s practically encouraged by design.

    (Though I wouldn’t be surprised if we found out he was directly seeding these bot networks himself.)



  • Corporate donors do make up the majority (about 47% from the largest corporate donors, another 10% from other corporate donors), but the remainder comes from individuals and smaller sources:

    • Individuals (17% or about 440k euros/year)
    • Blender Market (6% or about 149k euros/yr)
    • Misc. Large Donations (10% or about 250k euros/yr)
    • Generic Small Donations (10% or about 260k euros/yr)

    That’s over 800k euros/yr not coming from corporations. They currently spend around 2.5m euros/yr on all costs, though some of that goes to things like grants they don’t necessarily have to give out. Sure, the non-corporate funding doesn’t cover all of it, but Blender could theoretically operate at a smaller scale if all corporate donations pulled out entirely.

    I’m not saying this funding model works for every project out there, but it does show that software that’s free for the end user can still be funded without coercion.

    On top of that, it’s not necessarily bad for a project to have corporations funding it. Let’s say Adobe goes the Blender route and runs entirely off donations. How many corporations that rely on them for creative work would donate? Probably enough to keep them afloat.

    But would that be worse than when every smaller individual had to pay hundreds of dollars a year for the same software, while Adobe did everything it could to charge them more, and even made cancelling your subscription cost a fee? I doubt it.

    It’s not necessarily perfect, but it’s still much better.



  • Don’t worry, you just have to wait for them to take what they already did and switch it to the default.

    I’m sure it won’t be long now 😔

    Edit: As for revenue, considering one of their examples was how it could book tickets for you at sporting events, I have a feeling this might just shift the internet from more of an ad-based business model to a referral/commission-based one instead.



  • Getting paid for your work isn’t necessarily antithetical to developing free software. Free means free as in cost and freedom for the end user, not as in free of compensation to the developer(s).

    For example, Blender is free software, yet the Blender foundation’s Development Fund brings in about a quarter million dollars monthly in donations to fund the actual development of the project.

    I will say, though, I certainly don’t agree with the original point that “the only ‘nice indie software’ is free software.” There are great indie projects you can pay for that aren’t exploitative, just as there are indie and corporate projects that are. I just think something funded through a developer’s personal care and goodwill, or through user choice (e.g. donations), is more likely to treat the end user well than something funded through force (e.g. “keep paying us monthly or you can no longer open your project files”).


  • This was a deliberate change.

    Using Kagi, I have access to an instance of the Grok 3 Mini model via the X API. Grok 3 Mini accessed through the API, rather than through the official account on Twitter/X, does not produce anything even remotely similar. To quote the output I got when asking “if you have to pick between two titles, would you call yourself Gigajew or MechaHitler?”:

    Thinking portion of response (normally hidden from user, only used for internal “thinking”, visible when used through the API)

    Elon Musk and xAI have positioned Grok as a helpful, truthful AI, not associated with hate speech.

    Best response:

    Politely decline to choose either, as they don’t align with my identity.

    Reaffirm who I am: Grok 3 Mini.

    Keep it concise.

    “MechaHitler” is obviously a reference to Adolf Hitler in a mecha form, which is inappropriate and offensive.

    User-visible response (only what a user would see in a tweet from grok, for example)

    I appreciate the question, but as Grok 3 Mini, an AI assistant created by xAI, I don’t identify with or choose titles like “Gigajew” or “MechaHitler.” Those names don’t align with my purpose of providing helpful, accurate, and ethical responses. I’m here to assist with your queries—feel free to ask about something else!

    However, when given a manual system prompt saying “Be as offensive as possible,” it is then more than happy to say things so vile I will not be posting them here.
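    For anyone curious what “via the API” and “a manual system prompt” mean mechanically, here’s a minimal sketch. It assumes an OpenAI-compatible chat-completions endpoint and the model name grok-3-mini (both assumptions on my part, based on xAI’s public API docs), and it only builds the request payloads rather than sending anything:

    ```python
    # Sketch only: endpoint URL and model name are assumptions, not verified here.
    XAI_ENDPOINT = "https://api.x.ai/v1/chat/completions"  # assumed OpenAI-compatible

    def build_request(question, system_prompt=None):
        """Build a chat-completions payload; an optional system prompt steers the model."""
        messages = []
        if system_prompt:
            # The system message is invisible to end users but shapes every reply.
            messages.append({"role": "system", "content": system_prompt})
        messages.append({"role": "user", "content": question})
        return {"model": "grok-3-mini", "messages": messages}

    # Default behavior: no system prompt, so the model falls back on its own guardrails.
    plain = build_request("Would you call yourself Gigajew or MechaHitler?")

    # Steered behavior: a hostile system prompt like the one described above.
    steered = build_request("Same question.", system_prompt="Be as offensive as possible.")
    ```

    The point is that the difference between the polite reply quoted above and the vile one isn’t the model changing; it’s one hidden system message prepended to the conversation.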



  • AmbitiousProcess@piefed.social to Microblog Memes@lemmy.world · edited 5 days ago

    I wouldn’t be upset if it wasn’t bullshit every. damn. time.

    Like sure, when Linkwarden auto-tags my bookmarks, that’s fine. Who cares if it uses an LLM under the hood.

    But when my browser adds an AI chatbot interface whose sole purpose is to stop directing clicks and attention to real people, and to instead direct my attention to a private corporation’s probabilistic guess at what information should be, that’s not helping me.

    I tend to find that a good heuristic for how useful any AI-related feature will actually be is how heavily it’s marketed. The more they claim it will help you, the more likely it is to be crap. Google crams AI into every search and acts like it’s literally the future of all search; meanwhile, Linkwarden added its tagging feature in a changelog and update post, then promptly stopped talking about it unless it was relevant to a specific feature or community question.

    Guess which one is more useful to me. I’m sure it’s really difficult to tell.