AI companies have all kinds of arguments against paying for copyrighted content: The companies building generative AI tools like ChatGPT say updated copyright laws could interfere with their ability to train capable AI models. Here are comments from OpenAI, StabilityAI, Meta, Google, Microsoft and more.

  • theluddite@lemmy.ml · 1 year ago

    Copyright is broken, but that’s not an argument to let these companies do whatever they want. They’re functionally arguing that copyright should remain broken, but that they should be exempt from it. That’s the worst of both worlds.

    • abhibeckert@lemmy.world · edited · 1 year ago

      Who said anything about “do whatever they want”? They should obviously comply with the law.

      When a human reads a comment here on Lemmy and learns something they didn’t know before, copyright law doesn’t stop them from using that knowledge. The same rule should apply to AI.

      In my opinion, if you don’t want AI to learn from your work, then you shouldn’t allow humans to learn from it either. That’s fine - everyone has the right to keep their work private if they choose to… but if you make it publicly available, then you don’t get to control who learns from it.

      You can control who makes exact replicas of it, and if an AI is doing that, then sure - charge the company with copyright infringement. But that’s generally not how these systems work: they don’t produce exact copies, except for highly structured content where there isn’t much creative flexibility (and that kind of content tends not to be protected by copyright anyway - it would fall under patents).

      • theluddite@lemmy.ml · 1 year ago

        Computers aren’t people. AI “learning” is a metaphorical use of the word. Human learning is a complex mystery we’ve barely begun to understand, whereas we know exactly what these computer systems are doing; though we use the word “learning” for both, they are fundamentally different processes. Conflating the two is fine for normal conversation, but for technical questions like this, it’s silly.

        It’s perfectly consistent to decide that computer “learning” breaks the rules while human learning doesn’t, because they’re different things. Computer “learning” is a new thing, and it’s a lot more like creating replicas than human learning is. I think we should treat it as such.
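
        To make that concrete, here’s a toy sketch of what machine “learning” amounts to mechanically. It’s an illustrative character-bigram counter, not an LLM - real models fit billions of parameters by gradient descent instead of counting pairs - but the process is just as knowable and mechanical:

        ```python
        # A toy illustration (not an LLM): "learning" here is just counting
        # which character follows which in the training text, then sampling
        # from those statistics to generate new text.
        from collections import Counter, defaultdict
        import random

        training_text = "the cat sat on the mat"  # stand-in for scraped web text

        # "Training": tally how often each character follows each other character.
        follow_counts = defaultdict(Counter)
        for current_char, next_char in zip(training_text, training_text[1:]):
            follow_counts[current_char][next_char] += 1

        # "Generation": repeatedly sample a statistically likely next character.
        char = "t"
        output = [char]
        for _ in range(15):
            counts = follow_counts[char]
            char = random.choices(list(counts), weights=list(counts.values()))[0]
            output.append(char)

        print("".join(output))  # recombined fragments of the training text
        ```

        Everything the “model” knows is statistics extracted from the training text, and its output recombines them - which is part of why the replica comparison fits better than the human-learning one.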

        • BURN@lemmy.world · 1 year ago

          I’m so fed up with trying to explain this to people. People think LLMs are real AGI and are treating them as such.

          Computers do not learn like humans do. They cannot, and should not, be regulated in the same way.

          • theluddite@lemmy.ml · 1 year ago

            Yes, 100%. Once you drop the false equivalence, the argument boils down to “X does Y, therefore Z should be allowed to do Y,” which obviously doesn’t follow, because sometimes we need different rules for different things.