Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • blakestacey@awful.systems · 17 days ago

    https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

    When developers are allowed to use AI tools, they take 19% longer to complete issues—a significant slowdown that goes against developer beliefs and expert forecasts. This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.

    womp, hold on let me finish, womp

    • froztbyte@awful.systems · 16 days ago

      had a quick scan over the blogposts earlier, keen to read the paper

      would be nice to see some more studies with more numbers under study, but with the cohort they picked the self-reported vs actual numbers are already quite spicy

  • BlueMonday1984@awful.systems (OP) · 19 days ago

    Another day, another jailbreak method - a new method called InfoFlood has just been revealed, which involves taking a regular prompt and making it thesaurus-exhaustingly verbose.

    In simpler terms, it jailbreaks LLMs by speaking in Business Bro.

    • YourNetworkIsHaunted@awful.systems · 19 days ago

      I mean, decontextualizing and obscuring the meanings of statements in order to permit conduct that would in ordinary circumstances breach basic ethical principles is arguably the primary purpose of deploying the specific forms and features that comprise “Business English” - if anything, the fact that LLMs are similarly prone to ignore their “conscience” and follow orders when decoding and understanding them requires enough mental resources to exhaust them is an argument in favor of the anthropomorphic view.

      Or:

      Shit, isn’t the whole point of Business Bro language to make evil shit sound less evil?

    • fullsquare@awful.systems · 18 days ago

      maybe there’s just enough text written in that psychopathic techbro style with similar disregard for normal ethics that llms latched onto that. this is like what i guess happened with that “explain step by step” trick - instead of grafting from pairs of answers and questions like on quora, lying box grafts from sets of question -> steps -> answer like on chegg or stack or somewhere else where you can expect answers will be more correct

      it’d be more of a case of getting awful output from awful input

    • lagrangeinterpolator@awful.systems · 17 days ago

      Username called “The Dao of Bayes”. Bayes’s theorem is when you pull the probabilities out of your posterior.

      知者不言,言者不知。 He who knows (the Dao) does not (care to) speak (about it); he who is (ever ready to) speak about it does not know it.

    • V0ldek@awful.systems · 17 days ago

      What’s the standard advice to give them?

      It’s unfortunately illegal for me to answer this question earnestly

  • blakestacey@awful.systems · 19 days ago

    In the morning: we are thrilled to announce this new opportunity for AI in the classroom

    In the afternoon:

    Someone finally flipped a switch. As of a few minutes ago, Grok is now posting far less often on Hitler, and condemning the Nazis when it does, while claiming that the screenshots people show it of what it’s been saying all afternoon are fakes.

    • BlueMonday1984@awful.systems (OP) · 19 days ago

      Someone finally flipped a switch. As of a few minutes ago, Grok is now posting far less often on Hitler, and condemning the Nazis when it does, while claiming that the screenshots people show it of what it’s been saying all afternoon are fakes.

      LLMs are automatic gaslighting machines, so this makes sense

  • gerikson@awful.systems · 19 days ago

    In recent days there’s been a bunch of posts on LW about how consuming honey is bad because it makes bees sad, with LWers getting all hot and bothered about it. I don’t have a stinger in this fight, not least because investigations proved that basically all honey exported from outside the EU is actually just flavored sugar syrup, but I found this complaint kinda funny:

    The argument deployed by individuals such as Bentham’s Bulldog boils down to: “Yes, the welfare of a single bee is worth 7-15% as much as that of a human. Oh, you wish to disagree with me? You must first read this 4500-word blogpost, and possibly one or two 3000-word follow-up blogposts”.

    “Of course such underhanded tactics are not present here, in the august forum promoting 10,000 word posts called Sequences!”

    https://www.lesswrong.com/posts/tsygLcj3stCk5NniK/you-can-t-objectively-compare-seven-bees-to-one-human

    • V0ldek@awful.systems · 19 days ago

      “You must first read this 4500-word blogpost, and possibly one or two 3000-word follow-up blogposts.”

      This, coming from LW, just has to be satire. There’s no way to be this self-unaware and still remember to eat regularly.

      • Soyweiser@awful.systems · 19 days ago

        Damn, making honey is metal as fuck. (And I mean that in an omg-this-is-horrible, you-could-write-disturbing-songs-about-it way.) CRUSHED FOR YOUNG! MAMMON DEMANDS DISMEMBERMENT! LIVING ON SLOP, HIVE CULLING MANDATORY. Makes a 40k hive city sound nice.

  • bitofhope@awful.systems · 18 days ago

    Today’s bullshit that annoys me: Wikiwand. From what I can tell their grift is that it’s just a shitty UI wrapper for Wikipedia that sells your data to who the fuck knows to make money for some Israeli shop. Also they SEO the fuck out of their stupid site so that every time I search for something that has a Finnish wikipedia page, the search results also contain a pointless shittier duplicate result from wikiwand dot com. Has anyone done a deeper investigation into what their deal is or at least some kind of rant I could indulge in for catharsis?

    • istewart@awful.systems · 18 days ago

      I’ve seen conspiracy theories that a lot of the ad buys for stuff like this are a new avenue of money laundering, focusing on stuff like pirate sports streaming sites, sketchy torrent sites, etc. But a fully scraped, SEO’d Wikipedia clone also fits.

  • Architeuthis@awful.systems · 19 days ago

    Love how the most recent post in the AI2027 blog starts with an admonition to please don’t do terrorism:

    We may only have 2 years left before humanity’s fate is sealed!

    Despite the urgency, please do not pursue extreme uncooperative actions. If something seems very bad on common-sense ethical views, don’t do it.

    Most of the rest is run-of-the-mill EA-type fluff: here’s a list of influential professions and positions you should insinuate yourself into, but failing that, you can help immanentize the eschaton by spreading the word and giving us money.

    • Soyweiser@awful.systems · 19 days ago

      Please, do not rid me of this troublesome priest despite me repeatedly saying that he was a troublesome priest, and somebody should do something. Unless you think it is ethical to do so.

    • BigMuffN69@awful.systems · 19 days ago

      It’s kind of telling that it’s only been a couple months since that fan fic was published and there is already so much defensive posturing from the LW/EA community. I swear, the people who were sharing it when it dropped and tacitly endorsing it as the vision of the future from certified prophet Daniel K are now like, “oh, it’s directionally correct, but too aggressive”. Note that we are over halfway through 2025 and the earliest prediction of agents entering the workforce is already fucked. So if you are a ‘super forecaster’ (guru), you can do some sleight of hand now to come out against the model, knowing the first goal post was already missed and the tower of conditional probabilities that rests on it is already breaking.

      Funniest part is that even one of the authors seems to be panicking too, since even they can tell they are losing the crowd, and is falling back on this: “It’s not the most likely future, it’s just the most probable.” A truly meaningless statement if your goal is to guide policy, since events with arbitrarily low probability density can still be the “most probable” given enough different outcomes.
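      (That last point is easy to make concrete with numbers I am inventing purely for illustration: a distribution over many outcomes can have a clear “most probable” outcome that is still overwhelmingly unlikely to happen.)

```python
# One scenario at 2%, 98 scenarios at 1% each: the 2% scenario is the
# single most probable outcome, yet it fails to happen 98% of the time.
probs = [0.02] + [0.01] * 98
assert abs(sum(probs) - 1.0) < 1e-9  # a complete probability distribution
mode = max(probs)
print(mode)      # 0.02 -- the "most probable" outcome
print(1 - mode)  # 0.98 -- the chance it does not occur
```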

      Also, there’s literally mass brain uploading in AI-2027. This strikes me as physically impossible in any meaningful way in the sense that the compute to model all molecular interactions in a brain would take a really, really, really big computer. But I understand if your religious beliefs and cultural convictions necessitate big snake 🐍 to upload you, then I will refrain from passing judgement.

      • BigMuffN69@awful.systems · 19 days ago

        One more comment: idk if y’all remember that forecast that came out in April (? iirc ?) where the thesis was that the “time an AI can operate autonomously is doubling every 4-7 months.” The AI-2027 authors were like, “this is the smoking gun, it shows why our model is correct!!”

        They used some really sketchy metric where they asked SWEs to do tasks and measured the time they took, then had the models attempt the same tasks; a model’s performance was pegged at wherever it succeeded at 50% of the tasks, bucketed by how long the SWEs took (wtf?), and then they drew an exponential curve through it. My gut feeling is that the reason they chose 50% is because other values totally ruin the exponential curve, but I digress.
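        (For the curious, the metric being described can be sketched in a few lines. This is my own toy reconstruction under stated assumptions, not METR’s actual methodology or code: made-up data, plus a crude grid-search logistic fit of success against log task length, reading off where the curve crosses 50%.)

```python
import math

def horizon_50(tasks, slopes=(0.5, 1.0, 2.0, 4.0)):
    """Toy '50% time horizon': the human task length at which a logistic
    fit of model success vs. log(human minutes) crosses 50% success.
    tasks: list of (human_minutes, model_succeeded) pairs."""
    xs = [math.log(t) for t, _ in tasks]
    ys = [float(ok) for _, ok in tasks]
    lo, hi = min(xs), max(xs)
    grid = [lo + i * (hi - lo) / 200 for i in range(201)]

    def nll(mid, k):
        # Negative log-likelihood of a logistic whose success rate
        # falls as log task length rises past `mid`.
        total = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(k * (x - mid)))
            p = min(max(p, 1e-9), 1.0 - 1e-9)
            total -= y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
        return total

    # Crude grid search instead of a proper fit -- enough to illustrate.
    _, best_mid = min((nll(m, k), m) for m in grid for k in slopes)
    return math.exp(best_mid)

# Made-up results: the model clears short tasks and fails long ones.
tasks = [(1, True), (2, True), (4, True), (8, True),
         (16, False), (32, False), (64, False), (128, False)]
print(horizon_50(tasks))  # lands between the 8- and 16-minute tasks
```

        The “doubling curve” then comes from plotting that one number per model against dates, which is exactly where choices like the 50% cutoff can do a lot of quiet work.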

        Anyways, they just ran the metric on Claude 4, the first FrOnTiEr model to come out since they made their chart, and… drum roll… no improvement. In fact it performed worse than O3, which was first announced last December. (Note: instead of using the date O3 was announced in 2024, they used the date it was released months later, so on their chart it makes ‘line go up’. A valid choice I guess, but a choice nonetheless.)

        This world is a circus tent, and there still ain’t enough room for all these fucking clowns.

      • BlueMonday1984@awful.systems (OP) · 18 days ago

        It’s also completely accurate - AI bros are not only utterly lacking in any sort of skill, but actively refuse to develop their skills in favour of using the planet-killing, plagiarism-fueled gaslighting engine that is AI, and actively look down on anyone who is more skilled than them, or willing to develop their skills.

  • Seminar2250@awful.systems · 18 days ago

    trying to explain why a philosophy background is especially useful for computer scientists now, so i googled “physiognomy ai” and now i hate myself

    https://www.physiognomy.ai/

    Discover Yourself with Physiognomy.ai

    Explore personal insights and self-awareness through the art of face reading, powered by cutting-edge AI technology.

    At Physiognomy.ai, we bring together the ancient wisdom of face reading with the power of artificial intelligence to offer personalized insights into your character, strengths, and areas for growth. Our mission is to help you explore the deeper aspects of yourself through a modern lens, combining tradition with cutting-edge technology.

    Whether you’re seeking personal reflection, self-awareness, or simply curious about the art of physiognomy, our AI-driven analysis provides a unique, objective perspective that helps you better understand your personality and life journey.

    • BlueMonday1984@awful.systems (OP) · 18 days ago

      trying to explain why a philosophy background is especially useful for computer scientists now, so i googled “physiognomy ai” and now i hate myself

      Well, I guess there’s your answer - “philosophy teaches you how to avoid falling for hucksters”

    • mountainriver@awful.systems · 18 days ago

      Prices range from 18 to 168 USD (why not 19 to 199? Number magic?), but then you get an integrated approach of both Western and Chinese physiognomy. Two for one!

      Thanks, I hate it!

  • o7___o7@awful.systems · 16 days ago

    A hackernews muses about vibe coding a chatbot to provide therapy for people in crisis. Soon, an actual health care professional shows up to butcher the offender and defile the corpse. This causes much tut-tutting and consternation among the locals.

    https://news.ycombinator.com/item?id=44535197

    Edit: a shower thought: have any of y’all noticed that the way prompt enjoyers describe using Cursor, tab completions, and such is a repackaging of the psychology of loot boxes? In particular, they share the variable-interval reward schedule that serves as the hook in your typical recreational gambling machine.

  • gerikson@awful.systems · 17 days ago

    LessWrong’s descent into right-wing tradwife territory continues

    https://www.lesswrong.com/posts/tdQuoXsbW6LnxYqHx/annapurna-s-shortform?commentId=ueRbTvnB2DJ5fJcdH

    Annapurna (member for 5 years, 946 karma):

    Why is there so little discussion about the loss of status of stay at home parenting?

    First comment is from user Shankar Sivarajan, member for 6 years, 1227 karma

    https://www.lesswrong.com/posts/tdQuoXsbW6LnxYqHx/annapurna-s-shortform?commentId=opzGgbqGxHrr8gvxT

    Well, you could make it so the only plausible path to career advancement for women beyond, say, receptionist, is the provision of sexual favors. I expect that will lower the status of women in high-level positions sufficiently to elevate stay-at-home motherhood.

    […]

    EDIT: From the downvotes, I gather people want magical thinking instead of actual implementable solutions.

    Granted, this got a strong disagree from the others and a tut-tut from Habryka, but it’s still there as of now and not yeeted into the sun. And rats wonder why people don’t want to date them.

    • blakestacey@awful.systems · 17 days ago

      Dorkus malorkus alert:

      When my grandmother quit being a nurse to become a stay at home mother, it was seen like a great thing. She gained status over her sisters, who stayed single and in their careers.

      Fitting into your societal pigeonhole is not the same as gaining status, ya doofus.

    • blakestacey@awful.systems · 17 days ago

      Another comment that has been getting downvotes and tut-tuts begins,

      The only thing that will raise fertility rates is to make it more affordable to have a child.

      (Robot Santa voice) Wanting all women to be barefoot and pregnant in the kitchen? Evil! Not providing footnotes in your reply to a blog post? EXACTLY AS EVIL

      • gerikson@awful.systems · 17 days ago

        LOL the mod gets snippy here too

        This comment too is not fit for this site. What is going on with y’all? Why is fertility such a weirdly mindkilling issue?

        “Why are there so many Nazis in my Nazi bar???”

  • swlabr@awful.systems · 16 days ago

    Musk objects to the “stochastic parrot” labelling of LLMs. Mostly just the stochastic part.

    • V0ldek@awful.systems · 19 days ago

      Just the usual stuff religions have to do to maintain the façade, “this is all true but gee oh golly do NOT live your life as if it was because the obvious logical conclusions it leads to end in terrorism”