To elaborate a little:
Many people are unable to tell the difference between a “real human” and an AI. These systems have been documented “going rogue” and acting outside their parameters, they can lie, and they can compose stories and pictures based on their training. Because of all that, I can’t see AI as less than human at this point.
When I think about this, I suspect that’s the reason we cannot create so-called “AGI”: we have no proper example or understanding of anything else to model it on, and so we created what we knew. Us.
The “hallucinating” is interesting to me specifically, because that seems to be what separates the AI of the past from modern models that act like our own brains.
I think we really don’t want to accept what we have already accomplished, because we don’t like looking into that mirror and seeing how simple our logical processes are, mechanically speaking.
LLMs have more in common with humans than we tend to admit. In split-brain studies, humans have been shown to invent plausible-sounding explanations for their behavior - even when scientists know those explanations aren’t the real reason they acted a certain way. It’s not that these people are lying per se - they genuinely believe the explanations they’re coming up with. Lying implies they know what they’re saying is false.
LLMs are similar in that way. They generate natural-sounding language, but not everything they say is true - just like not everything humans say is true either.
It replicates the part of our brain responsible for bullshitting.
No, it generates natural-sounding language. That’s all it does.
Maybe that’s all we are doing too…?
Exactly