AI companies have all kinds of arguments against paying for copyrighted content

The companies building generative AI tools like ChatGPT say updated copyright laws could interfere with their ability to train capable AI models. Here are comments from OpenAI, StabilityAI, Meta, Google, Microsoft and more.

  • Mnemnosyne@sh.itjust.works · 1 year ago

    The way I see it, if training on copyrighted content is forbidden, then that should apply universally.

    Since all people mix together ideas they’ve learned from their own input to create new things, just like AI does, all people-produced content should also be inherently uncopyrightable, unless produced by a person who has never been exposed to copyrighted content.

    Oh, also all copyrighted content should lose its copyright. The only copyrighted content should be the original cave paintings by the first cavemen to develop art, since all art since then draws on their influence.

    And if this sounds ridiculous, then it’s no less so than arguments that AI shouldn’t be allowed to learn.

    • theluddite@lemmy.ml · 1 year ago

      Copyright is broken, but that’s not an argument to let these companies do whatever they want. They’re functionally arguing that copyright should remain broken but also they should be exempt. That’s the worst of both worlds.

      • abhibeckert@lemmy.world · 1 year ago

        Who said anything about “do whatever they want”? They should obviously comply with the law.

        When a human reads a comment here on Lemmy and learns something they didn’t know before - copyright law doesn’t stop them from using that knowledge. The same rule should apply to AI.

        In my opinion if you don’t want AI to learn from your work, then you shouldn’t allow humans to learn from it either. That’s fine - everyone has the right to keep their work private if they choose to do so… but if you make it publicly available, then you don’t get to control who learns from it.

        You can control who makes exact replicas of it, and if AI is doing that then sure - charge the company with copyright infringement - but generally that’s not how these systems work. They generally don’t produce exact copies except for highly structured content where there isn’t much creative flexibility (and those tend to not be protected under copyright by the way - they would be protected by patents).

        • theluddite@lemmy.ml · 1 year ago

          Computers aren’t people. AI “learning” is a metaphorical usage of that word. Human learning is a complex mystery we’ve barely begun to understand, whereas we know exactly what these computer systems are doing; though we use the word “learning” for both, it is a fundamentally different process. Conflating the two is fine for normal conversation, but for technical questions like this, it’s silly.

          It’s perfectly consistent to decide that computers “learning” breaks the rules but human learning doesn’t, because they’re different things. Computer “learning” is a new thing, and it’s a lot more like creating replicas than human learning is. I think we should treat it as such.

          • BURN@lemmy.world · 1 year ago

            I’m so fed up trying to explain this to people. People think LLMs are real AGI and are treating them as such.

            Computers do not learn like humans. They cannot, and should not, be regulated in the same way.

            • theluddite@lemmy.ml · 1 year ago

              Yes 100%. Once you drop the false equivalence, the argument boils down to X does Y and therefore Z should be able to do Y, which is obviously not true, because sometimes we need different rules for different things.

    • HelloThere@sh.itjust.works · 1 year ago

      Since all people mix together ideas they’ve learned from their own input to create new things, just like AI does, all people-produced content should also be inherently uncopyrightable, unless produced by a person who has never been exposed to copyrighted content.

      While copyright and IP law at present is massively broken, this is a very poor interpretation of the core argument at play.

      Let me break it down:

      • Yes, all human-created art takes significant influence - purposefully and accidentally - from work which has come before it
      • To have been influenced by that piece, legally, the human will have had to pay the copyright holder: go to the cinema, buy the Blu-ray, see the performance, go to the gallery, etc. Works out of copyright obviously don’t apply here.
      • To be trained in a discipline, the human likely pays for teaching by others, and those others have also paid copyright holders to view the media that influenced them as well
      • Even though the vast majority of art is influenced by all other art, humans are capable of novel invention - i.e. things which have not come before - but GenAI fundamentally isn’t.

      Separately but relatedly, see the arguments the Pirate Parties used to make about personal piracy being OK, which were fundamentally down to an argument of scale:

      • A teenager pirating some films to watch cos they are interested in cinema, and being inspired to go to film school, is very limited in scope. Even if they pirate hundreds of films, it can’t be argued that it’s hundreds of lost sales, because the person may have never bought them anyway.
      • A GenAI company consuming literally all artistic output of humanity, with no payment to the artists whatsoever, “learning” to create “new” art without paying for teaching, and spitting out whatever is asked of it, is massive copyright infringement on the consumption side, and an existential threat to the arts on the generation side

      That’s the reason people are complaining, cos they aren’t being paid today, and they won’t be paid tomorrow.

      • 𝙲𝚑𝚊𝚒𝚛𝚖𝚊𝚗 𝙼𝚎𝚘𝚠@programming.dev · 1 year ago

        • To have been influenced by that piece, legally, the human will have had to pay the copyright holder: go to the cinema, buy the Blu-ray, see the performance, go to the gallery, etc. Works out of copyright obviously don’t apply here.
        • To be trained in a discipline, the human likely pays for teaching by others, and those others have also paid copyright holders to view the media that influenced them as well

        Neither of these are necessarily true, and the first one is even demonstrably false given the amount of copyrighted content that can be freely accessed online.

        • Even though the vast majority of art is influenced by all other art, humans are capable of novel invention - i.e. things which have not come before - but GenAI fundamentally isn’t.

        That depends highly on your definition of “novel invention”. Given that GenAI can be given randomised noise as input to create something from, it’s highly debatable if GenAI is truly “incapable” of novel invention. And even then, it’s possible to provide prompts describing a novel style (e.g. “oil painting with thick, vibrant streaks of colour” or something), so a human + GenAI together may well be capable of novel invention. I don’t recall the last time a human was able to create something that could not be expressed in previously existing words at all. You can describe Van Gogh without using his name, or describe a Picasso without using named art styles. Yet we consider their works novel, no?
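
        To make the noise-as-input point concrete, here is a toy sketch. Everything in it is made up for illustration - a frozen random matrix stands in for a trained generator, and it is not any real model’s architecture - but it shows the mechanism: latent noise goes in, and each new seed yields an output nobody drew.

```python
import numpy as np

# Toy stand-in for a generative model: a frozen linear "decoder"
# that maps a random latent vector to an 8x8 grayscale "image".
# Illustrative sketch only - not a real diffusion or GAN design.
rng = np.random.default_rng(0)
decoder = rng.normal(size=(64, 16))  # pretend these are learned weights

def generate(seed: int) -> np.ndarray:
    """Sample latent noise for a seed and decode it into pixels."""
    latent = np.random.default_rng(seed).normal(size=16)
    pixels = np.tanh(decoder @ latent)  # squash values into [-1, 1]
    return pixels.reshape(8, 8)

# Different seeds give different outputs; the same seed reproduces.
a, b = generate(1), generate(2)
print(np.allclose(a, generate(1)), np.allclose(a, b))  # True False
```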

        That’s the reason people are complaining, cos they aren’t being paid today, and they won’t be paid tomorrow.

        Even if AI only trained on non-copyrighted art, this would still be true. It might set the AI companies back a year or two, but AI art generation is here to stay and will threaten artists’ incomes. These lawsuits are only really stalling tactics to delay the inevitable.

        I can’t predict if they’re going to win their lawsuit or not, nor do I know if they should. But the artists’ salvation won’t lie in copyright law, I know that much.

        • HelloThere@sh.itjust.works · 1 year ago

          I don’t recall the last time a human was able to create something that could not be expressed in previously existing words at all.

          It’s called outsider art.

          Even if AI only trained on non-copyrighted art, this would still be true. It might set the AI companies back a year or two

          If this is true, then they have no excuse to continue to consume copyrighted content. Given the extreme pushback from the companies involved, I think it is clear that this isn’t true.

          • Outsider art can be explained using words. It’s certainly strange art, but not necessarily something that’s “unpromptable”.

            AI companies mostly push back because dealing with copyright is very expensive, not because it would necessarily take a very long time. Google and Microsoft likely already have a sizeable library of copyright-free art they could use, but using everything is just more efficient and much, much cheaper.

    • echo64@lemmy.world · 1 year ago

      AI legally can’t create its own copyrightable content. Indeed, it cannot learn. It can only produce models that we tune on datasets, those datasets being copyrighted content. I’m a little tired of the anthropomorphizing of AIs. They are statistical models, not children.

      No sir, I didn’t copy this book. I trained ten thousand ants to eat cereal, but only after running them through an inkwell and then a maze that I got them to move through in a way that deposits the ink where I need it to be in order to copy this book.
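
      For what it’s worth, the “statistical model tuned on a dataset” point is easy to see in miniature. Below is a hypothetical toy - a word-level Markov chain, nothing like a transformer in scale or mechanism - where the “model” is nothing but co-occurrence counts harvested from the training text, and its output is a remix of that text.

```python
import random
from collections import defaultdict

# Toy word-level Markov chain: the "model" is just statistics
# extracted from the training text. A deliberately tiny stand-in
# for an LLM, used only to illustrate "tuned on a dataset".
corpus = "the cat sat on the mat and the cat ran off the mat"

model = defaultdict(list)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    model[prev].append(nxt)  # record which words follow which

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))  # sample from the counts
    return " ".join(out)

print(generate("the"))  # e.g. "the mat and the cat sat on the"
```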

      • abhibeckert@lemmy.world · 1 year ago

        The AI isn’t being accused of copyright infringement. Nothing is being anthropomorphized.

        Whether you write a copy of a book with a pen, or type it into a keyboard, or photograph every page, or scan it with a machine learning model is completely irrelevant. The question is - did you (the human using the pen/keyboard/camera/AI model) break the law?

        I’d argue no, but other people disagree. It’ll be interesting to see where the courts side on it. And perhaps more importantly, whether new legislation is written to change copyright law.

      • Mnemnosyne@sh.itjust.works · 1 year ago

        It can only produce models that we tune on datasets, those datasets being copyrighted content.

        That’s called learning. You learn by taking in information, then you use that information to produce something new.

        • echo64@lemmy.world · 1 year ago

          It isn’t. Statistical models do not learn. That’s just how we anthropomorphize them. They bias.

            • echo64@lemmy.world · 1 year ago

              no, you literally cannot. Maybe you could if you were a techbro who doesn’t really understand how the underlying systems work, but has seen sci-fi and wants to use that to describe the current state of technology.

              but you’re still wrong if you try.

              • Bgugi@lemmy.world · 1 year ago

                Yes, you literally can. At the very deepest level, neural networks work in essentially the same way actual neurons do. All “learning,” artificial or not, is adjusting the interconnections and firing rates between nodes, “biasing” them toward desired outputs.

                Humans are a lot more complicated in terms of size and architecture. Our processing has many more layers of abstraction and processing (understanding, emotion, and who knows what else). But fundamentally the same process is occurring: inputs + rewards = biases. Inputs + biases = outputs.
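
                That “inputs + rewards = biases” loop can be written out directly. Here is a minimal sketch - a single artificial neuron learning the AND function by gradient descent, a toy rather than a claim about how biological neurons or production networks are built:

```python
import numpy as np

# One artificial neuron learning AND by gradient descent:
# an error signal (the "reward") nudges the weights and bias,
# after which inputs + learned biases produce the outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])  # AND truth table

w = np.zeros(2)  # connection weights
b = 0.0          # bias term
lr = 0.5         # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    pred = sigmoid(X @ w + b)        # inputs + current biases -> outputs
    err = pred - y                   # mismatch with desired outputs
    w -= lr * (X.T @ err) / len(X)   # nudge the interconnections...
    b -= lr * err.mean()             # ...and the bias toward the target

print(np.round(sigmoid(X @ w + b)))  # -> [0. 0. 0. 1.]
```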

                • echo64@lemmy.world · 1 year ago

                  At the very deepest level, neural networks work in essentially the same way actual neurons do.

                  They do not. Neural networks were inspired by neurons, but it’s a wild oversimplification of both neural networks and neurons to state that they work the same way. This is the kind of thing the sci-fi-watching tech bros will say, but it’s incorrect.