They frame it as though it’s for user content; more likely it’s to train AI, but in fact it gives them the right to do almost anything they want, up to (but not including) stealing the content outright.

  • MaggiWuerze@feddit.de · +104 / −1 · 9 months ago

    So, they want to create AI-written and AI-narrated audiobooks that use the voices of well-known voice actors without paying them for the privilege? How is that supposed to stand up in court?

    • givesomefucks@lemmy.world · +32 / −9 · 9 months ago

      It wouldn’t be to save on the relatively cheap cost of a voice actor.

      It’s so they can play the audio to their AI for free without having to say it was fed copyrighted text. It would also get better at telling stories, depending on the quality of what it was fed.

      But the main advantage is training it to follow a long verbal narrative, and to decide whether it’s better to transcribe the whole thing for full reference, or just build a summary as the story goes and risk missing an important bit.

      Then it can repeat the story in the AI’s “own words”. This would create a huge loophole for exploiting famous authors: if you feed the AI the text, the author can argue it was trained on it; but if the AI just listens to it, builds a summary, and remembers the structure, derivative works can be claimed to be no different from a human emulating popular authors they had read.

      They’re just trying to find a way around using the full text, and reading it aloud might be enough.
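
      (To make that trade-off concrete, here is a minimal Python sketch of the “summarize as the story goes” idea. The llm_summarize function is a hypothetical stand-in for whatever model would actually do the summarizing; nothing here is based on an actual Spotify pipeline.)

```python
# Rough sketch of the "running summary instead of full transcript" approach.
# llm_summarize is hypothetical -- it stands in for whatever model does the work.

def rolling_summary(transcript_chunks, llm_summarize, max_summary_chars=4000):
    """Fold a long narrative into one running summary rather than keeping the full text."""
    summary = ""
    for chunk in transcript_chunks:
        # Merge the new passage into the story-so-far summary.
        summary = llm_summarize(
            f"Story so far:\n{summary}\n\nNew passage:\n{chunk}\n\n"
            "Update the summary, keeping plot structure and character details."
        )
        # The risk mentioned above: trimming the summary can drop an important bit.
        summary = summary[:max_summary_chars]
    return summary
```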

      • hedgehog@ttrpg.network · +14 / −2 · edited · 9 months ago

        That’s some wild speculation there.

        What you described would be a contrived and inefficient workaround that would have little to no impact on its legality compared to just using the underlying texts as part of a training corpus.

        Not sure why you think Spotify wouldn’t want to eliminate the cost of voice actors and production. If you’re self-publishing, recording and producing an audiobook traditionally is a substantial expense. If Spotify can offer something like Google’s Auto-Narrated Audiobooks to authors, then that would enable them to bring those authors to Spotify (potentially exclusively).

        Spotify’s goal also isn’t necessarily to imitate the voices from existing audiobooks. There is a lot that goes into making an audiobook successful, and copying the voice alone wouldn’t convey that: for example, pairing tone and cadence changes with what’s being narrated, techniques for conveying dialogue (particularly between different characters), and so on. How you speak is just as important as your raw voice.

        That would allow Spotify to create audiobooks using those techniques without using the voice of anyone who hadn’t signed away rights to it. However, I would argue that some of the techniques they would likely use are integral to a person’s voice.
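
        (As a rough illustration of pairing tone and cadence with what’s being narrated: many TTS engines accept SSML, a real W3C markup standard, and a narration style can be expressed roughly like the Python sketch below. The specific tag values are invented for illustration; Spotify hasn’t said what, if anything, it uses.)

```python
# Illustrative only: building SSML-style markup that slows and lowers the voice
# for narration, pauses, then shifts delivery for a character's line of dialogue.

def narrate(narration: str, dialogue: str) -> str:
    return (
        "<speak>"
        f'<prosody rate="95%" pitch="-1st">{narration}</prosody>'
        '<break time="400ms"/>'
        f'<prosody rate="110%" pitch="+2st">{dialogue}</prosody>'
        "</speak>"
    )

print(narrate("The door creaked open, and she whispered,", "“Is anyone there?”"))
```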

        It’s also feasible that Spotify wants to be able to take an existing audiobook and make it available with a different voice. This wouldn’t require the audiobook ever to have been trained on; they would just replace the existing voice with another while preserving the pauses, tone shifts, etc. (possibly adjusting them to suit the new voice).

        More closely aligned with the specific derivative work they mentioned would be implementing something like Kindle/Audible’s Whispersync, potentially in collaboration with a non-Amazon ebook retailer like Barnes & Noble or Kobo.
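
        (A Whispersync-style feature mostly comes down to keeping an ebook position and an audio position in sync. Here’s a minimal sketch of that mapping; the timestamps and offsets are invented, since the real alignment data obviously isn’t public.)

```python
import bisect

# Hypothetical alignment data: (audio_time_seconds, character_offset_in_ebook)
# for a handful of words, as some forced-alignment step might produce.
alignment = [(0.0, 0), (0.4, 4), (0.9, 10), (1.5, 17), (2.2, 25)]

def audio_to_text_offset(seconds):
    """Return the ebook character offset reached at a given audio timestamp."""
    times = [t for t, _ in alignment]
    i = bisect.bisect_right(times, seconds) - 1
    return alignment[max(i, 0)][1]

def text_to_audio_time(offset):
    """Return the audio timestamp for a given ebook character offset."""
    offsets = [o for _, o in alignment]
    i = bisect.bisect_right(offsets, offset) - 1
    return alignment[max(i, 0)][0]

print(audio_to_text_offset(1.0))  # pick up reading where you stopped listening
print(text_to_audio_time(17))     # pick up listening where you stopped reading
```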

        • theneverfox@pawb.social · +4 · 9 months ago

          This is a much better take.

          Intonation is huge, and it’s something general models tend to have trouble with, especially with something like an audiobook. Narration is contextual in a way almost no other form of communication is, and through dialogue it even encapsulates every other form of context.

          And not only that: a lot of audiobooks have versions by multiple voice actors. They might change a word here or there, but it’s highly structured data. It’s truly a treasure trove.

          I’d go a step further and say they really want access to the dataset, not just for audiobooks, but because it’s a fantastic dataset for training very context-aware (and silky-smooth) text-to-voice.

          Spotify probably doesn’t have the chops to do this themselves, but they might be trying to leverage the dataset. I’m not sure whether they could sell it wholesale, but if nothing else they could “partner” with Microsoft or Google to train text-to-voice capabilities into multi-modal LLMs (a pitch with all the buzzwords to make investors need to change their underwear).
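
          (For a sense of what that dataset looks like in practice, here’s a tiny sketch: each ebook sentence paired with the span of audiobook audio a professional narrator produced for it. The sentences and timings are invented for illustration.)

```python
# Sketch of why paired audiobook + text data is attractive for training TTS:
# every sentence comes with the exact audio a professional narrator produced for it.

from dataclasses import dataclass

@dataclass
class TrainingPair:
    text: str           # sentence from the ebook
    audio_start: float  # seconds into the audiobook
    audio_end: float

def build_pairs(sentences, timestamps):
    """Zip ebook sentences with their (start, end) times in the audiobook."""
    return [TrainingPair(t, s, e) for t, (s, e) in zip(sentences, timestamps)]

pairs = build_pairs(
    ["The door creaked open.", "“Is anyone there?” she whispered."],
    [(0.0, 1.8), (1.8, 4.1)],
)
```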

      • The_v@lemmy.world · +2 · 9 months ago

        Make the policy change, see if they can get it to hold up in the courts. AKA normal business practices for corporate America.

    • BrianTheeBiscuiteer@lemmy.world · +20 / −2 · 9 months ago

      Voices can’t be protected by copyright, but there may be a legal avenue for someone like Morgan Freeman to sue if a voice is clearly a knock-off of his AND he can make a case that it damages his “brand”.

      I’d be impressed, though, if AI could write a novel without directly referencing a fictional person, place, or thing that someone else made up. Stable Diffusion, for example, can make a picture of a dog wearing a tracksuit running on the side of a skyscraper made of pudding in the middle of a noodle hurricane. But it didn’t invent any of those individual components; it just combined them.
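
      (For reference, that kind of prompt-driven recombination looks roughly like this with the open-source diffusers library; the model name and settings are just common defaults, not anything tied to this discussion.)

```python
# Rough sketch: generating the "dog in a tracksuit on a pudding skyscraper" image
# with Stable Diffusion via Hugging Face's diffusers library (requires a GPU).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = (
    "a dog wearing a tracksuit running on the side of a skyscraper made of pudding, "
    "in the middle of a noodle hurricane"
)
image = pipe(prompt).images[0]  # combines familiar components; invents none of them
image.save("dog_pudding_noodles.png")
```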

      • gedaliyah@lemmy.world (OP) · +2 · 9 months ago

        This is why we need laws for likeness rights. Every person should own exclusive commercial rights to their own face, voice, etc.

        • Plopp@lemmy.world · +4 · edited · 9 months ago

          Jesus, that’s dark.

          Edit: oh, my eyes skipped the word “image”

          • Agrivar@lemmy.world · +2 · 9 months ago

            “Now I want that of the dog framed and hanging in my house.”

            Are ya sure your brain didn’t skip a few more words?

            ;-P

      • Hamartiogonic@sopuli.xyz · +3 / −5 · 9 months ago

        What about when a talented comedian speaks in the voice of someone else? Should we just write a law that humans are allowed to do it, but machines aren’t?

        • nintendiator@feddit.cl · +5 / −2 · 9 months ago

          Tell me you don’t understand the difference between human creative work and “AI” work without telling me you don’t understand the difference between human creative work and “AI” work.

          • afraid_of_zombies@lemmy.world · +2 · 9 months ago

            I don’t. What exactly is the difference between me making a remix of someone’s voice using software I don’t understand, and me telling software I don’t understand to do slightly more of that?

    • kromem@lemmy.world · +3 · 9 months ago

      No. This is very likely about translations.

      The idea that they’ll be creating an unofficial sequel to your audiobook and selling it without your permission or something is a pretty ridiculous leap that would be very unlikely to actually hold up in court.