According to SAG-AFTRA, the deal will “enable Replica to engage SAG-AFTRA members under a fair, ethical agreement to safely create and license a digital replica of their voice. Licensed voices can be used in video game development and other interactive media projects from pre-production to final release.”

The deal reportedly includes minimum terms and the requirement for performers’ consent to use their voice for AI.

However, several prominent video game voice actors were quick to respond on X, specifically to a portion of the statement which claims the deal was approved by “affected members of the union’s voiceover performer community.”

Apex Legends voice actor Erika Ishii wrote: “Approved by… WHO exactly?? Was any one of the ‘affected members’ who signed off on this a working voice actor?”

  • SmoothLiquidation@lemmy.world · 6 months ago

    This is one sector where I am actually happy for AI to be available. I want to play a game where the NPCs can say my character’s name.

    That being said, I also want the voice actors to be compensated fairly. Maybe the guilds can set up a deal where using someone’s voice for training data is included.

    • Tetra@kbin.social · 6 months ago

      I feel like the solution is pretty simple: if you want to use AI to copy someone’s voice and put it in your project, you have to hire and pay them as normal, and they have to give consent to let the AI use their likeness.

      Otherwise it’s theft.

      • deweydecibel@lemmy.world · 6 months ago

        And this has to be on a per-game basis, too. Studios licensing a voice in perpetuity will eventually bring us back to the same issues.

        For AI to truly be a net benefit to our society, it should be used as a tool by artists to augment their own output. It shouldn’t be a way of replacing them.

        If a voice actor’s job goes from recording each and every line to recording samples for the AI and helping to tweak the output, that’s fine. But the compensation stays the same.

        That’s how it improves our world: it makes the human’s job easier without replacing them or affecting their compensation.

        The way it’s currently on track to be used is how it improves the lives of the wealthiest at the expense of everyone else. No amount of futurist techno-jerking should distract from that. These are not tools for us to benefit from in any significant sense.

        • Rolder@reddthat.com · 6 months ago

          I’ve been trying to find the actual text of the deal to see whether it fucks over the actors or not, but I can’t find it, just articles referencing it.

      • R0cket_M00se@lemmy.world · 6 months ago

        Sure, it’s just that this specific text-to-speech voice is created by an AI trained on voice samples.

        AI is more than just ChatGPT; it’s a class of algorithms that can be applied to a lot of different things.

        • TheQuietCroc@lemmy.world · 6 months ago

          You don’t need AI to do that; that kind of system can be built without it. It’s just not worth doing for this one use case versus using it for a whole voice.

          • SmoothLiquidation@lemmy.world · 6 months ago

            Honestly, the problem is that “AI” is a dumb term that is way overused in these situations. Outside of science fiction, “AI” has generally been used to describe whatever “the next big thing” in computing happens to be.

            Using a term like “large language model” to refer to ChatGPT explains what it actually does, or “deep-learning text-to-image model” for the image generators.

            I remember playing around with TTS on an Apple ][ Plus as a kid; there is nothing new about that. Using statistical models to make a synthesized voice imitate a specific person is new, but lumping it all together as “artificial intelligence” is just dumb.
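
To illustrate the distinction the thread is drawing, here is a minimal sketch of the old-fashioned, non-neural kind of TTS: speaking a player-chosen character name with pyttsx3, a Python wrapper around the operating system’s built-in speech engines (SAPI5, NSSpeechSynthesizer, eSpeak). The function name and greeting text are hypothetical placeholders; no voice cloning or trained voice model is involved.

    # Minimal sketch: a game speaking a player-chosen character name without any
    # neural voice cloning. pyttsx3 drives the OS's classic speech engines
    # (SAPI5 on Windows, NSSpeechSynthesizer on macOS, eSpeak on Linux).
    import pyttsx3

    def greet_character(character_name: str) -> None:
        # 'greet_character' and the greeting text are illustrative examples only.
        engine = pyttsx3.init()                  # default system voice, no training data
        engine.setProperty("rate", 160)          # speaking rate in words per minute
        engine.say(f"Welcome back, {character_name}.")
        engine.runAndWait()                      # block until the utterance finishes

    if __name__ == "__main__":
        greet_character("Aria")                  # any runtime string works

This says arbitrary text in a generic synthetic voice; what the new agreements cover is the separate step of making such a voice sound like a specific, identifiable performer.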