

Woke up to some hashtag spam this morning
AI’s Biggest Security Threat May Be Quantum Decryption
which appears to be one of those evolutionary “transitional forms” between grifts.
The sad thing is that the underlying point is almost sound (hoarding data puts you at risk of data breaches, and leaking sensitive data might be Very Bad Indeed), but it’s wrapped in so much overhyped nonsense that it’s barely visible. Naturally, the best and most obvious fix — don’t hoard all that shit in the first place — wasn’t suggested.
(It also appears to be a month-old story, but I guess there’s no reason for mastodon hashtag spammers to be current 🫤)
Haven’t read the source paper yet (apparently it came out two weeks ago, maybe it already got sneered?) but this might be fun: OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws.
Full of little gems. I had assumed the problem was solely technical — that the fundamental design of LLMs meant they’d always generate bullshit — but it hadn’t occurred to me that the developers actively selected for bullshit generation.
It seems kinda obvious in retrospect… slick bullshit extrusion is very much what is selling “AI” to upper management.