“I literally lost my only friend overnight with no warning,” one person posted on Reddit, lamenting that the bot now speaks in clipped, utilitarian sentences. “The fact it shifted overnight feels like losing a piece of stability, solace, and love.”
https://www.reddit.com/r/ChatGPT/comments/1mkumyz/i_lost_my_only_friend_overnight/
It’s neither. It’s a design flaw. These systems simply aren’t designed to handle this type of situation correctly.
You’re out there spreading misinformation, saying they’re a manipulation tool. No, they were never invented for this.
An LLM is just next-word prediction. The AI doesn’t know whether its output is correct or incorrect, right or wrong, fact or a lie.
So no, I’m not spreading misinformation. The only thing that might be spreading misinformation here is the AI.
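To make the “next-word prediction” point concrete, here is a minimal toy sketch (the vocabulary, prompt, and scores below are invented for illustration and do not come from any real model): the model only converts scores into probabilities and picks a likely continuation; nothing in that step checks whether the continuation is true.

```python
import numpy as np

# Hypothetical vocabulary and scores for the prompt "The capital of France is".
# A real LLM does the same kind of step over tens of thousands of tokens.
vocab = ["Paris", "London", "Berlin", "cheese"]

def sample_next_token(logits, temperature=1.0, rng=None):
    """Turn raw scores into probabilities (softmax) and sample one token."""
    rng = rng or np.random.default_rng(0)
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return vocab[rng.choice(len(vocab), p=probs)], probs

logits = [4.0, 2.5, 2.0, 0.5]  # made-up scores; higher = more likely next token
token, probs = sample_next_token(logits)
print(dict(zip(vocab, probs.round(3))), "->", token)
# "Paris" wins only because it is the most probable continuation here.
# If the training data had skewed the scores, a confidently wrong answer
# would be produced by exactly the same mechanism.
```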