Just as some people believe the information they get from ChatGPT and the like is true, how many of those who ask an AI for images will believe the images depict a reality the AI had access to, thanks to some supposed omniscience these people falsely attribute to it?

Can you imagine? A person convinced that the fake images they create are real in some twisted way? I’m guessing it will be a very small number, but I doubt it will be zero.

  • SoupBrick@pawb.social · 1 day ago

    I would love to believe that true AI is here, but the technology we currently have is just not there yet. Until I see irrefutable evidence that LLMs are sentient, I am going to remain skeptical.

    Believing that what we currently have is sentient, or possibly a new form of life, is falling for the marketing ploys of corporations trying to make massive amounts of money off investors.

    https://algocademy.com/blog/why-ai-can-follow-logic-but-cant-create-it-the-limitations-of-artificial-intelligence/

    AI systems are fundamentally limited by their training data. They cannot truly create logic that goes beyond what they’ve been exposed to during training. While they can combine existing patterns in new ways, giving the appearance of creativity, they cannot make the kind of intuitive leaps that characterize human innovation.

    • Opinionhaver@feddit.uk · 1 day ago

      LLMs are AI. While they're not generally intelligent, they still fall under the umbrella of artificial intelligence; AGI (Artificial General Intelligence) is a subset of AI. Sentience, on the other hand, has nothing to do with it. It's entirely conceivable that an AGI system could lack any form of subjective experience while still outperforming humans on most, if not all, cognitive tasks.