• hoshikarakitaridia@sh.itjust.works · 1 year ago

    If I were a student who wrote a text that was rejected because of this tool, do I have a case against either my institution, the professor who threw it out or OpenAI?

    Defamation is what I keep coming back to, but I don’t know whether the rejection is actually defamatory in itself. It would only be defensible if the professor or school had done due diligence confirming the tool is fit for this use, and there were already reports that it was not.

    • experbia@kbin.social · 1 year ago

      do I have a case against either my institution, the professor who threw it out or OpenAI?

      This is all such recent technology that I cannot imagine this question being answerable except via the long way: a courtroom. I suspect it would take someone actually trying, in order to set precedent.

    • ReallyKinda@kbin.social · 1 year ago

      Turnitin isn’t AI technology, but I assume it has similar legal ramifications, and a lot of schools require teachers to run everything through Turnitin (usually by having students submit online). It just spits out a percentage so that the prof can take a closer look. Real quotes count towards the percentage displayed. Maybe with AI you’d have a bit more of a case against the company, because you might claim you trusted it to be accurate or something?
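
      As a rough sketch of that “just a percentage” output (my own toy n-gram overlap, not Turnitin’s actual matching; the function name and parameters are invented for illustration):

      ```python
      # Toy similarity percentage (NOT Turnitin's algorithm): the share of the
      # submission's word 5-grams that also appear in a known source.
      def overlap_percentage(submission: str, source: str, n: int = 5) -> float:
          def ngrams(text: str) -> set:
              words = text.lower().split()
              return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

          sub = ngrams(submission)
          return 100.0 * len(sub & ngrams(source)) / len(sub) if sub else 0.0
      ```

      A correctly quoted passage still produces matching n-grams, which is why quotes can inflate the number unless the tool is told to skip them, and why a human still has to look at what the percentage actually consists of.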

      • Saik0@lemmy.saik0.com · 1 year ago

        Real quotes count towards the percentage displayed.

        TII can be configured to ignore properly quoted texts.

    • kava@lemmy.world · 1 year ago

      There’s a similar issue in chess with cheating detection. Sites use statistical analysis to see if someone’s moves are too good: computers play at a much higher level than humans, and you can measure how “accurate” a move is.

      It doesn’t mean much for a few moves or even one or two games, but with more data you get more confidence about whether someone is cheating.
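
      To make the “more data, more confidence” point concrete, here’s a toy sketch in Python (my own illustration, not Chess.com’s actual method; the expected match rate is a made-up number): treat each move as either matching the engine’s top choice or not, and check how far the observed match rate sits above what you’d expect for a player of that strength.

      ```python
      # Toy statistical cheat-detection sketch (NOT Chess.com's real algorithm).
      # Idea: a player of a given strength matches the engine's top move at some
      # expected rate; a z-score says how surprising the observed rate is.
      from math import sqrt

      def suspicion_z_score(engine_matches: int, total_moves: int,
                            expected_match_rate: float) -> float:
          """Standard deviations by which the observed match rate exceeds expectation."""
          observed = engine_matches / total_moves
          std_err = sqrt(expected_match_rate * (1 - expected_match_rate) / total_moves)
          return (observed - expected_match_rate) / std_err

      # Same 72% engine-match rate, very different confidence:
      print(suspicion_z_score(18, 25, 0.55))     # one game's worth of moves -> ~1.7 (weak evidence)
      print(suspicion_z_score(720, 1000, 0.55))  # many games -> ~10.8 (hard to explain away)
      ```

      Real systems are far more sophisticated (they model move difficulty, time usage, rating history, and so on), but the shape is the same: the evidence is probabilistic and only becomes damning in aggregate.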

      Chess.com released a rather infamous report last year about a high-profile chess player who was cheating on their site. They never directly said “he is cheating”, but simply stated that “his games triggered our anti-cheating algorithms”.

      One is debatable; the other is a simple fact. The truth is an absolute defense to defamation. Hans attempted to sue Chess.com for defamation and, from what I understand, the case was recently dismissed.

      I’d imagine these AI detectors for schools use similar wording to avoid legal risk: “high probability of AI” rather than “AI-written”. In that case, you may have very little case for defamation.

      However, I’m not a lawyer. I’m just guessing that the companies offering this analysis to colleges have lawyers and have spent time shielding themselves from legal liability.