• slazer2au@lemmy.world
    20 hours ago

    The more artificial intelligence is used within a law firm, the more lawyers are needed to vet the technology’s outputs.

    I mean, trust but verify is a thing for a reason.

    • lagrangeinterpolator@awful.systems
      2 hours ago

      You cannot honestly call it “trust” if you still have to go through the output with a magnifying glass and make sure it didn’t tell anyone to put glue on their pizza.

      When any other technology fails to achieve its stated purpose, we call it flawed and unreliable. But AI is so magical! It receives credit for everything it happens to get right, and it’s my fault when it gets something wrong.

    • [deleted]@piefed.world
      19 hours ago

      The fact that "trust but verify" needs repeating is confirmation that AI output is dogshit that cannot be trusted. Using AI as anything more than a starting point, the way search engine results are treated, is dangerous for anything where accuracy matters.