• RobotToaster@mander.xyz · 9 months ago

    That seems like a stupid argument?

    Even if a human employee did that, aren’t organisations normally vicariously liable?

    • atx_aquarian@lemmy.world · 9 months ago

      That’s what I thought of at first. Interestingly, the judge instead went with the angle that the chatbot is part of their website, and they’re responsible for the information it gives out. When they tried to argue that the bot had linked to a page with contradictory info, the judge said users can’t be expected to check one part of the site against another to determine which is more accurate. Still works in favor of the common person, just a different approach than the one I had in mind.

      • Carighan Maconar@lemmy.world · 9 months ago

        I like this. LLMs are powerful tools, but rebranding them as “AI” and cramming them into ~everything is just bullshit.

        The more rulings like this, where the deploying entity is held responsible for accuracy (or the lack of it), the better. At some point they’ll notice they cannot guarantee that the correct information is the only information provided, because that’s not how LLMs work: they’re stochastic parrots, as the sketch below illustrates. Then they’ll stop using them for a lot of things. Hopefully sooner rather than later.
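
        To make the “stochastic parrot” point concrete, here is a minimal Python sketch. The next-token probabilities are invented for illustration; a real model’s distribution comes from its weights, but the failure mode is the same: sampling can surface a wrong answer some fraction of the time.

        ```python
        import random

        # Hypothetical next-token distribution for a customer-service prompt
        # (the tokens and the numbers are made up for illustration).
        next_token_probs = {
            "refundable": 0.25,
            "non-refundable": 0.60,
            "decided case by case": 0.15,
        }

        def sample_answer(probs: dict[str, float]) -> str:
            """Draw one completion in proportion to its probability mass."""
            answers, weights = zip(*probs.items())
            return random.choices(answers, weights=weights, k=1)[0]

        # The same prompt, asked repeatedly, can yield contradictory answers;
        # nothing short of probability 1.0 guarantees only the correct one.
        for _ in range(5):
            print(sample_answer(next_token_probs))
        ```

        Run it a few times and the contradiction shows up on its own, which is exactly the property a deployer cannot disclaim away without constraining the model’s output.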

        • lad@programming.dev · 9 months ago

          This is actually a very good outcome, if it’s achievable: leave LLMs to be used where nothing important is on the line, or have humans supervising them.