It tells them it knows what it’s talking about and it speaks with confidence.
Meanwhile companies and governments won’t stfu about how powerful and great this tech supposedly is, so a percentage of people will believe the propaganda.
I’d love students to be given a lesson on tricking AI into giving a false answer. It’s not hard and should be pretty eye-opening.
One example I like to use is to ask it for the lyrics of an extremely well known song. It just makes shit up based on the title you give it.
The online ones (Claude, ChatGPT, Copilot, etc.) now refuse to do it for “copyright reasons,” but the offline ones still happily oblige. I assume the online ones added that block because it was such an obvious way to prove they don’t “know” shit.
FR
Trump is president, but it’s just unfathomable that people would follow an automated idiot /s