• Architeuthis@awful.systems · 3 months ago

    Liuson told managers that AI “should be part of your holistic reflections on an individual’s performance and impact.”

    who talks like this

    • Saleh@feddit.org · 3 months ago

      By creating a language only they can speak and interpret, the managerial class protects its own existence and self-reproduction, keeping people of other classes out, or letting them in only after they pass through a proper reeducation camp, e.g. an MBA program.

  • HedyL@awful.systems · 3 months ago

    FWIW, I work in a field that is mostly related to law and accounting. Unlike with coding, there are no simple “tests” to check whether an AI’s answer is correct. Of course, you could test it in court, but that is not something I would recommend (lol).

    In my experience, chatbots such as Copilot are less than useless in a context like ours. For complex and unique questions (which make up most of the questions we deal with every day), they simply make up smart-sounding BS (including a lot of nonexistent laws etc.). In the rare cases where a clear answer is already available in the legal commentaries, we want to quote it verbatim from the most reputable source, just to be on the safe side. We don’t want an LLM to rephrase it, hide its sources and possibly introduce new errors. We don’t need “plausible deniability” regarding plagiarism or anything like that.

    Yet, we are being pushed to “embrace AI” as well; we are being told we need to “learn to prompt” etc. This is frustrating. My biggest fear isn’t being replaced by an LLM, or even by someone who is a “prompting genius” or whatever. My biggest fear is being replaced by a person who pretends the AI’s output is smart (rather than filled with potentially hazardous legal errors), because in some workplaces, that is apparently what’s expected.

    • paequ2@lemmy.today · 3 months ago

      I work in a field that is mostly related to law and accounting… My biggest fear is to be replaced by a person who pretends that the AI’s output is smart

      Aaaaaah. I know this person. He’s an accountant. He recently learned about AI and is starting to use it more at work. He’s not technical. I told him about hallucinations. He said the AI is rarely wrong. When he’s not 100% convinced, he asks the AI to cite its source… 🤦 I told him it can hallucinate the source! … And then we went back to “it’s rarely wrong though.”

      • HedyL@awful.systems · 3 months ago

        And then we went back to “it’s rarely wrong though.”

        I often wonder whether the people who claim that LLMs are “rarely wrong” somehow have access to an entirely different chatbot. The chatbots I tried were rarely correct about anything except the most basic questions (to which the answers could be found everywhere on the internet).

        I’m not a programmer myself, but for some reason, I got the chatbot to fail even in that area. I took a perfectly valid JSON file, deliberately removed one comma, and asked the chatbot to fix it. The chatbot came up with a number of things that were supposedly “wrong” with the file. Not one word about the missing comma, though.
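        For comparison, any plain JSON parser catches this kind of error instantly and points to the exact position. A minimal Python sketch (note that JSON’s separator is the comma; the object and its keys here are made up for illustration):

```python
import json

# A valid object with the comma between the two key/value pairs
# deliberately removed, mirroring the experiment described above.
broken = '{"name": "example" "version": 1}'

try:
    json.loads(broken)
except json.JSONDecodeError as e:
    # A dumb parser pinpoints the problem precisely, with no
    # made-up diagnoses: it reports the line, column, and what
    # it expected to find there.
    print(f"line {e.lineno}, column {e.colno}: {e.msg}")
```

        This is exactly the kind of tricky-but-verifiable question the comment below describes: the correct answer is trivially checkable, so a confident wrong answer is easy to expose.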

        I wonder how many people either never ask the chatbots any tricky questions (with verifiable answers) or, alternatively, never bother to verify the chatbots’ output at all.

        • David Gerard@awful.systems (OP, mod) · 3 months ago

          AI fans are people who literally cannot tell good from bad. They cannot see the defects that are obvious to everyone else. They do not believe there is such a thing as quality, they think it’s a scam. When you claim you can tell good from bad, they think you’re lying.

          • sturger@sh.itjust.works · 3 months ago

            • They string words together based on the probability of one word following another.
            • They are heavily promoted by people who don’t know what they’re doing.
            • They’re wrong 70% of the time but present everything they say as truth.
            • Average people have a hard time telling when they’re wrong.

            In other words, AIs are BS automated BS artists… being promoted breathlessly by BS artists.
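            The first bullet, stringing words together by next-word probability, can be illustrated with a toy bigram model. This is a deliberate simplification (real LLMs use neural networks over subword tokens, and the corpus here is invented), but the generate-by-sampling-the-next-token loop is the same basic idea:

```python
import random
from collections import defaultdict

# Tiny invented corpus: count which word follows which.
corpus = "the model predicts the next word and the next word only".split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=8, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    random.seed(seed)
    words = [start]
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:
            break  # dead end: no observed continuation
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

            The output is locally fluent, since every word pair was seen in the corpus, but nothing in the loop checks whether the result is true. That gap between fluency and truth is the point being made here.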