To elaborate a little:

Many people are unable to tell the difference between a “real human” and an AI. These models have been documented “going rogue” and acting outside of their parameters; they can lie, and they can compose stories and pictures based on the training they received. Because of those points, I can’t see AI as less than human at this point.

When I think about this, I suspect that’s the reason we cannot create so-called “AGI”: we have no proper example or understanding from which to create it, and so we created what we knew. Us.

The “hallucinating” is interesting to me specifically, because that seems to be the difference between the AI of the past and modern models that act like our own brains.

I think we really don’t want to accept what we have already accomplished, because we don’t like looking into that mirror and seeing how simple our logical processes are, mechanically speaking.

  • NaibofTabr@infosec.pub · 2 days ago

    The term “hallucinate” is a euphemism being pushed by the AI peddlers.

    It’s a computer program. It doesn’t “hallucinate”, it has errors.

    In all cases of ML models being sold by companies, what you are actually looking at is poorly tested software that is not fit for purpose and has far less actual capability than what the marketing promises.

    “Hallucination” in the context of LLMs is marketing bullshit designed to deflect from the reality that none of these programs have been properly quality checked and are extremely error prone.

    If Excel gave bad answers for calculations 20% of the time it wouldn’t be “hallucinating”, it would just be broken, buggy software that requires more development time before distribution as a useful product.

    • Opinionhaver@feddit.uk · 2 days ago

      LLM “hallucinations” are only errors from a user expectations perspective. The actual purpose of these models is to generate natural-sounding language, not to provide factual answers. We often forget that - they were never designed as knowledge engines or reasoning tools.

      The fact that they often get things right isn’t because they “know” anything - it’s a side effect of being trained on data that contains a lot of correct information. So when they get things wrong, it’s not a bug in the traditional sense - it’s just the model doing what it was designed to do: predict likely word sequences, not truth. Calling that a “hallucination” isn’t marketing spin - it’s a useful way to describe confident output that isn’t grounded in reality.

      • Paradachshund@lemmy.today · 2 days ago

        Whatever they were designed for, they are currently being sold as the solution to nearly every problem. You can’t expect a layperson to look further than that, and it’s completely reasonable to judge what they do against the claims being used to sell them.

        You can blame the marketing departments, but that original purpose you mention is no longer a major talking point (even if it should be).

    • vrighter@discuss.tchncs.de · 2 days ago

      They aren’t even errors. They are the system working as designed. The system is built with randomness in mind so that the model can hallucinate, intentionally. It can’t ever be made reliable, not without some sort of paradigm shift.