Even though it's my property?

  • dandelion (she/her)@lemmy.blahaj.zone · 17 days ago

    “AI” is a misnomer, ChatGPT and other “AI” are actually LLMs

    here’s a decent video by 3Blue1Brown explaining how LLMs work:

    https://www.youtube.com/watch?v=LPZh9BOjkQs

    and here’s a rather lucid explanation of how to think about LLMs:

    https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

    I think what is crucial to understand here is that we're talking about a computer program that attempts to generate text, in particular by guessing what the best next word is. This is like the little word-suggestion tool on your smartphone's keyboard, and it's actually similar to translation tools like Google Translate.

    This technology isn't new; what's new is the accumulation of much larger datasets, plus the hardware and ability to train LLMs on that much data. This just makes the predictive text generation more typical of the training data.
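The next-word guessing described above can be sketched as a toy: count which word follows each word in a small corpus, then always predict the most frequent successor. This is an illustrative bigram counter, not how an LLM actually works - LLMs learn these statistics with neural networks over vastly larger contexts and corpora, but the underlying task is the same.

```python
from collections import Counter, defaultdict

# Tiny toy corpus; a real model trains on billions of words.
corpus = "the cat sat on the mat the cat ran".split()

# Count which word follows each word.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" (follows "the" twice, vs "mat" once)
```

An LLM replaces the raw counts with a learned probability distribution over its whole vocabulary, conditioned on thousands of preceding tokens rather than one word.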

    So “AI” doesn’t need to be shackled because “AI” isn’t an intelligence and has no agency or control over anything.

    LLMs generate text, that’s all they do - they can’t control robots or “think” or “do” anything else.

    • Aerosol3215@piefed.ca · 17 days ago

      “So “AI” doesn’t need to be shackled because “AI” isn’t an intelligence and has no agency or control over anything.”

      Except that lots of people are giving their fancy word guessing machine agency and control over different things.

      Yay, agentic AI! /s

      • dandelion (she/her)@lemmy.blahaj.zone · 17 days ago

        I don’t think we would be worried about needing to shackle an encyclopedia because people might learn things from it and that might have an impact or influence in the world, right?

        Or maybe a better comparison would be a search engine … OP implies agency and a sense of an independent “person” or intelligence is at play, and that’s specifically what I’m trying to challenge.

        Pointing out that the text generated by a program that generates text has influence misses my point - my point is that there is no “person”, not that the text that is generated has no impact on anything.

        • Aerosol3215@piefed.ca · 17 days ago

          Understood. Cory Doctorow says something along the lines of "improving your LLM and expecting it to become sentient is like breeding horses to be faster and expecting one to give birth to a locomotive."

          • dandelion (she/her)@lemmy.blahaj.zone · edited · 17 days ago

            https://en.wikipedia.org/wiki/Cory_Doctorow

            thanks for introducing me to him, he seems like a cool dude!

            and yeah, that quote is spot on - LLMs are just not going to produce human-like sentience, lol

            the neural networks underlying LLMs might be used to that end, though! but I’m pretty sure predictive text generation is not a way neural networks might bring about something like sentience.

            Still, it’s a neat trick because lots of people will confuse sufficiently human-like text generation with there being an actual mind on the other side.

    • theunknownmuncher@lemmy.world · edited · 17 days ago

      “AI” is a misnomer, ChatGPT and other “AI” are actually LLMs

      It’s weird to so often see people be pedantic about this terminology while also being completely wrong. LLMs are AI, which is not a “misnomer”.

      So “AI” doesn’t need to be shackled because “AI” isn’t an intelligence and has no agency or control over anything.

      LLMs generate text, that’s all they do - they can’t control robots or “think” or “do” anything else.

      Proof by counterexample: https://en.wikipedia.org/wiki/OpenClaw

      • dandelion (she/her)@lemmy.blahaj.zone · 17 days ago

        I think what's relevant here is that we haven't created an artificial intelligence that has a mind like a person's. LLMs are "artificial intelligence" only in a loose sense: because they generate text the way a human might, they get called "AI". The misconception is that there's actually a human-like, autonomous intelligence underlying them, and that's just not true.

        Regarding OpenClaw, I’m not entirely sure how it functions under the hood, but it’s not really a counter-example to my point about LLMs because it’s not an LLM (even if it integrates with and uses LLMs).

        • theunknownmuncher@lemmy.world · edited · 17 days ago

          we haven’t generated an artificial intelligence that has a mind like a person.

          Okay, but that isn’t what AI means. It seems you’re the one with misconceptions about the definition of AI.

          Regarding OpenClaw, I’m not entirely sure how it functions under the hood, but it’s not really a counter-example to my point about LLMs because it’s not an LLM (even if it integrates with and uses LLMs).

          so, let me understand correctly, you believe an LLM can't have agency, and if we give an LLM agency, actually we haven't because now it's no longer an LLM because it has agency? Or maybe you were just wrong… Hmmmmm

          • dandelion (she/her)@lemmy.blahaj.zone · edited · 17 days ago

            I’m attempting to start with OP’s concept of AI, which implies a human-like intelligence. It’s fine to make these distinctions and reclaim AI as a term, but we need to be clear about what that means.

            I don’t disagree that LLMs are generally called AI because they can do something that generally human intelligence is required to do (like generate realistic text and dialogue like a human would), but it still doesn’t help OP get clear.

            How would you recommend we better approach this learning opportunity?

            so, let me understand correctly, you believe an LLM can't have agency, and if we give an LLM agency, actually we haven't because now it's no longer an LLM because it has agency? Or maybe you were just wrong… Hmmmmm

            OpenClaw doesn’t “give an LLM agency” - the underlying program that interfaces with the LLM is presumably the “agentic” part, the LLM is still a separate program that generates text and is non-agentic.

            I’m happy to be wrong, but I just don’t see how OpenClaw “gives agency” to an LLM, it sounds like it adds an LLM to allow an agentic AI to generate text. How does the agentic AI make decisions, and how is the LLM used in relationship to that process? I don’t know as much about how OpenClaw works, tbh - so maybe it’s reasonable to say an agentic AI layer on top of an LLM is a way to “give agency” to an LLM, I’m just doubtful and not clear on the details.

            • theunknownmuncher@lemmy.world · 17 days ago

              How would you recommend we approach this learning opportunity?

              I’d recommend looking for some beginner and introductory resources into Artificial Intelligence, especially before you try to comment on a topic that you are not familiar with. It helps to at least understand the definition of the word you want to “reclaim”.

              I’m happy to be wrong,

              That’s convenient for you!

              but I just don’t see how OpenClaw “gives agency” to an LLM, it sounds like it adds an LLM to an agentic AI. How does the agentic AI make decisions, and how is the LLM used in relationship to that process? I don’t know as much about how OpenClaw works, tbh.

              OpenClaw is actually extremely simple and thin. It works by just prompting the LLM in a continuous loop while providing it tool calls that are standard to LLMs. There isn’t anything more to it than that, besides customizing the prompt and tools that are available. The LLM is the agentic AI that makes the decisions and calls the tools. I guess next you’ll continue to try to save face with another non-point like “the computer, not the LLM, is the one that does things when the tools are called”
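The loop pattern described above - prompt the model, execute any tool it requests, feed the result back in, repeat - can be sketched roughly like this. This is not OpenClaw's actual code: `call_llm` stands in for whatever chat-completion API is in use, and the dict-shaped replies are a simplification of real structured tool-call responses.

```python
# Minimal sketch of an LLM agent loop (illustrative, not OpenClaw's code).
# `call_llm(messages, tool_names)` is a hypothetical stand-in for a real
# chat-completion API; replies are simplified to dicts with "tool",
# "args", and "content" keys.

def agent_loop(task, call_llm, tools, max_steps=10):
    """Prompt the LLM in a loop, executing any tool it asks for."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages, list(tools))
        if reply.get("tool") is None:  # plain text answer: we're done
            return reply["content"]
        result = tools[reply["tool"]](*reply["args"])  # run the tool
        messages.append({"role": "tool", "content": str(result)})
    return None  # gave up after max_steps
```

The "agency" here is just this loop: the model's text output selects a tool, the surrounding program executes it and appends the result, and the model is prompted again until it emits a plain answer.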

              • dandelion (she/her)@lemmy.blahaj.zone · 17 days ago

                I'm not sure a prompt loop sufficiently grants an LLM what I would consider "agency" when the relevant discussion is about whether an LLM has agency in the way humans do (i.e. human-like intelligence, a mind, personhood, etc.).

                I’d recommend looking for some beginner and introductory resources into Artificial Intelligence, especially before you try to comment on a topic that you are not familiar with. It helps to at least understand the definition of the word you want to “reclaim”.

                telling me to look at beginner resources on AI isn't a helpful response when I asked how to better explain to OP that "AI" isn't a human-like intelligence - it ignores my question and then puts me down by implying I don't have the first clue what I'm talking about.

                Your tone is rude and unhelpful; I'm done talking to you. 🫤

                If your goal is really to help correct misinformation (and not just to put people down), you might need to adjust how you approach conversation with others in the future.