Two authors sued OpenAI, accusing the company of violating copyright law. They say OpenAI used their work to train ChatGPT without their consent.

  • OldGreyTroll@kbin.social · +102/-13 · 1 year ago

    If I read a book to inform myself, put my notes in a database, and then write articles, it is called “research”. If I write a computer program to read a book and put the notes in my database, it is called “copyright infringement”. Is the problem that there just isn’t a meatware component? Or is it that the OpenAI computer isn’t doing a good enough job of following the “three references” rule to avoid plagiarism?

    • bioemerl@kbin.social · +64/-4 · 1 year ago

      Yeah. There are valid copyright claims, because there are times that ChatGPT will reproduce stuff like code line for line over 10, 20, or 30 lines, which is really obviously a violation of copyright.

      However, just pulling in a story from context and then summarizing it? That’s not a copyright violation; that’s a book report.

    • nlogn@lemmy.world · +45/-5 · 1 year ago

      Or is it that the OpenAI computer isn’t doing a good enough job of following the “three references” rule to avoid plagiarism?

      This is exactly the problem. Months ago I read that AI could have free access to all public source codes on GitHub without respecting their licenses.

      So many developers have decided to abandon GitHub for alternatives, not realizing that in the end AI training can just as easily access their public repos on other platforms as well.

      What should be done is to regulate this training, but that isn’t convenient for companies: the more data the AI ingests, the more its knowledge expands and the more it “helps” the people who ask it for information.

      • bioemerl@kbin.social · +30 · 1 year ago

        It’s incredibly convenient for companies.

        Big companies like OpenAI can easily afford to download big datasets from companies like Reddit and DeviantArt, which already have permission to freely use whatever work you upload to their websites.

        Individual creators do not have that ability, and this kind of regulation would only force AI into the domain of these big companies even more than it already is.

        Regulation would be a hideously bad idea that would lock these powerful tools behind shitty web APIs that nobody controls but the company in question.

        Imagine a future world of magical new-age technology, and Facebook owns all of it.

        Do not allow that to happen.

      • mydataisplain@lemmy.world · +20/-3 · 1 year ago

        Is it practically feasible to regulate the training? Is it even necessary? Perhaps it would be better to regulate the output instead.

        It will be hard to know whether any particular GET request is ultimately used to train an AI or to inform a human. It’s currently easy to see if a particular output is plagiarized (https://plagiarismdetector.net/), and that’s also much easier to enforce. We don’t need to care if or how any particular model plagiarized work; we can just check whether plagiarized work was produced.

        That could be implemented directly in the software, so it wouldn’t even output plagiarized material. The legal framework around it is also clear and fairly well established: instead of creating regulations around training, we can use the existing regulations around the human who tries to disseminate copyrighted work.

        That’s also consistent with how we enforce copyright in humans. There’s no law against looking at other people’s work and memorizing entire sections. It’s also generally legal to reproduce other people’s work (eg for backups). It only potentially becomes illegal if someone distributes it and it’s only plagiarism if they claim it as their own.
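The “check the output, not the training” idea above can be sketched with a crude word n-gram overlap check. This is a toy illustration only; real plagiarism detectors like the one linked are far more sophisticated, and the texts and threshold here are invented:

```python
# Toy output-side plagiarism check: flag text that reproduces any run
# of n consecutive words from a known source. Real detectors are far
# more sophisticated; the texts and threshold here are invented.

def ngrams(text, n):
    """All n-word windows in text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shares_long_ngram(candidate, source, n=8):
    """True if candidate contains any n-word run also present in source."""
    return bool(ngrams(candidate, n) & ngrams(source, n))

source = ("it was the best of times it was the worst of times "
          "it was the age of wisdom it was the age of foolishness")
copied = "as the novel opens it was the best of times it was the worst of times"
fresh = "the opening contrasts hope and despair in revolutionary france"

print(shares_long_ngram(copied, source))  # True: an 8-word run is copied verbatim
print(shares_long_ngram(fresh, source))   # False: no long run shared
```

Anything that reproduces a long enough run of consecutive words from a known source gets flagged, regardless of how the generating model was trained.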

        • Grandwolf319@sh.itjust.works · +3/-1 · 1 year ago

          This makes perfect sense. Why aren’t they going about it this way then?

          My best guess is that maybe they just see OpenAI being very successful and want a piece of that pie? Because if someone produces something via ChatGPT (let’s say for a book) and uses it, what are the chances they made any significant amount of money that you can sue for?

      • Kilamaos@lemmy.world · +7 · 1 year ago

        Plus, any regulation to limit this now means that anyone not already in the game will never break through. It’s going to be the domain of the current players for years, if not decades. So I’m not sure what’s better: the current wild west where everyone can make something, or it being exclusive to the already-big players, who then close the door behind them.

        • SirGolan@lemmy.sdf.org · +2 · 1 year ago

          My concern here is that OpenAI didn’t have to share GPT with the world. These lawsuits are going to discourage companies from doing that in the future, which means well-funded companies will just keep it under wraps. Once one of them eventually figures out AGI, they’ll just use it internally until they dominate everything. Suddenly, Mark Zuckerberg is supreme leader and we all have to pledge allegiance to Facebook.

      • ThoughtGoblin@lemm.ee · +3/-1 · 1 year ago

        AI could have free access to all public source codes on GitHub without respecting their licenses.

        IANAL, but aren’t their licenses being respected up until the code is put into a codebase? At least insomuch as Google is allowed to display code snippets in the preview when you look up a file in a GitHub repo, or you are allowed to copy a snippet into a StackOverflow discussion or ticket comment.

        I do agree regulation is a very good idea, in more ways than just citation, given the potential economic impacts that we seem clearly unprepared for.

    • magic_lobster_party@kbin.social · +9/-1 · 1 year ago

      The fear is that the books are in one way or another encoded into the machine learning model, and that the model can somehow retrieve excerpts of these books.

      Part of the training process of the model is to learn how to plagiarize the text word for word. The training input is basically “guess the next word of this excerpt”. This is quite different from how humans do research.

      To what extent the books are encoded in the model is difficult to know. OpenAI isn’t exactly open about their models. Can you make ChatGPT print out entire excerpts of a book?

      It’s quite a legal gray zone. I think it’s good that this is tried in court, but I’m afraid the court might have too little technical competence to make a ruling.
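The “guess the next word of this excerpt” objective above can be made concrete with a toy counting model. This is emphatically not how GPT is actually trained (real models use sub-word tokens and gradient descent over billions of parameters), but it shows how training text becomes (context → next word) examples, and why a model rewarded for exact next-word prediction can in principle encode passages verbatim:

```python
# Toy version of the "guess the next word" training objective. Real
# LLMs use sub-word tokens and gradient descent; this only shows how a
# text is turned into (context -> next word) training examples.

from collections import defaultdict

def next_word_pairs(text, context_len=3):
    """Yield (context words, next word) training examples from text."""
    words = text.split()
    for i in range(len(words) - context_len):
        yield tuple(words[i:i + context_len]), words[i + context_len]

# A trivial "model": count which word follows each context.
counts = defaultdict(lambda: defaultdict(int))
excerpt = "the boy who lived had a scar on his forehead"
for context, nxt in next_word_pairs(excerpt):
    counts[context][nxt] += 1

# Trained on a single excerpt, the model's best guess replays it exactly.
print(dict(counts[("the", "boy", "who")]))  # {'lived': 1}
```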

    • Wander@kbin.social · +15/-8 · edited · 1 year ago

      Say I see a book that sells well. It’s in a language I don’t understand, but I use a thesaurus to replace lots of words with synonyms. I switch some sentences around, and maybe even mix pages from similar books into it. I then go and sell this book (still not knowing what the book actually says).

      I would call that copyright infringement. The original book didn’t inspire me, it didn’t teach me anything, and I didn’t add any of my own knowledge into it. I didn’t produce any original work, I simply mixed a bunch of things I don’t understand.

      That’s what these language models do.

    • nyakojiru@lemmy.dbzer0.com · +11/-7 · edited · 1 year ago

      What about… the fact that they are making billions from that “reading” and “storage” of information copyrighted by other people? They need to at least pay royalties. This is like Google’s behavior: using people’s data from “free” products to make billions. I would say they also need to pay the people whose free data they crawled and monetized.

    • qwertyqwertyqwerty@lemmy.one · +3/-3 · 1 year ago

      I’d say the main difference is that AI companies are profiting off of the training material, which seems unethical/illegal.

    • ash@lemmy.fmhy.ml · +5/-33 · 1 year ago

      I honestly do not care whether it is or is not copyright infringement; I just hope to see “AI” burn :3

      • Dav@kbin.social · +29 · 1 year ago

        AI isn’t a boogeyman; it’s a set of tools. There’s no chance it’s going away even if OpenAI suddenly disappeared.

            • ash@lemmy.fmhy.ml · +1 · 1 year ago

              I dislike general artificial intelligence. I understand that it can be a useful tool, but the thought of a world where people’s jobs can be replaced with robots for the sake of profit, and where you can’t tell whether you’re talking with a real person or not, repulses me.

  • dedale@kbin.social · +66/-3 · edited · 1 year ago

    AI fear is going to be the trojan horse for even harsher and stupider ‘intellectual property’ laws.

    • bioemerl@kbin.social · +42/-3 · edited · 1 year ago

      Yeah, they want the right not only to control who copies their work and distributes it to other people, but also who’s able to actually read and learn from their work.

      It’s asinine, and we should be rolling back copyright, not making it more strict. This “life of the author plus 70 years” thing is bullshit.

      • RedCowboy@lemmy.world · +26/-2 · 1 year ago

        Copyright on code/research is one of the biggest scams in the world. It hinders development and only exists so the creator can make money, plus it locks knowledge behind a paywall.

        • Pseu@kbin.social · +8 · 1 year ago

          Researchers pay for publication, the publisher doesn’t pay for peer review, and then the publisher charges the reader to read research that they basically just slapped on a website.

          It’s the publisher middlemen that need to be ousted from academia, the researchers don’t get a dime.

      • Pseu@kbin.social · +13 · edited · 1 year ago

        Remember, Creative Commons licenses often require attribution if you use the work in a derivative product, and sometimes require ShareAlike. Without these things, there would be basically no protection from a large firm copying a work and calling it their own.

        Rolling back copyright protection in these areas would enable large companies with traditional copyright systems to wholesale take over open source projects, to the detriment of everyone. Closed-source software isn’t going to be available to AI scrapers, so this only really affects open source projects and open data, exactly the sort of people who should have more protection.

        • magic_lobster_party@kbin.social · +7 · 1 year ago

          There’s also GPL, which states that derivations of GPL code can only be used in GPL software. GPL also states that GPL software must also be open source.

          ChatGPT is likely trained on GPL code. Does that mean all code ChatGPT generates is GPL?

          I wouldn’t be surprised if there would be an update to GPL that makes it clear that any machine learning model trained on GPL code must also be GPL.

        • bioemerl@kbin.social · +3/-1 · 1 year ago

          Closed source software isn’t going to be available to AI scrapers, so this only really affects open source projects and open data, exactly the sort of people who should have more protection.

          The point of open source is contributing to the greater good of all of humanity. If open source contributes to an AI that can program, and that programming AI leads to increased productivity and capability in the general economy, then open source has served its purpose, and people will likely continue to contribute to it.

          Creative Commons applies when you redistribute code. (In the ideal case) AI does not redistribute code; it learns from it.

          And the increased ability of the average person to program will allow programmers to be more productive and, as a result, allow more things to be open source and more things to be programmed in general. We will all benefit, and that is what open source is for.

      • babelspace@kbin.social · +2 · 1 year ago

        Since any reductions to copyright, if they occur at all, will take a while to happen, I hope someone comes up with an opt-in limited term copyright. At max, I’d be satisfied with a 45-50 year limited copyright on everything I make, and could see going shorter under plenty of circumstances.

  • kescusay@lemmy.world · +29/-5 · 1 year ago

    I think this is exposing a fundamental conceptual flaw in LLMs as they’re designed today. They can’t seem to simultaneously respect intellectual property / licensing and be useful.

    Their current best use case - that is to say, a use case where copyright isn’t an issue - is dedicated instances trained on internal organization data. For example, Copilot Enterprise, which can be configured to use only the enterprise’s data, without any public inputs. If you’re only using your own data to train it, then copyright doesn’t come into play.

    That’s been implemented where I work, and the best thing about it is that you get suggestions already tailored to your company’s coding style. And its suggestions improve the more you use it.

    But AI for public consumption? Nope. Too problematic. In fact, public AI has been explicitly banned in our environment.

  • burrp@burrp.xyz · +18 · 1 year ago

    I’d love to know the source for the works that were allegedly violated. Presuming OpenAI didn’t scour zlib/libgen for the books, where on the net were the cleartext copies of their writings stored?

    Being stored in cleartext publicly on the net does not grant OpenAI the right to misuse their art, but the authors need to go after the entity that leaked their works.

    • jaywalker@lemmy.world · +5/-2 · 1 year ago

      That’s not how copyright works though. Just because someone else “leaked” the work doesn’t absolve openai of responsibility. The authors are free to go after whomever they want.

      • burrp@burrp.xyz · +6 · 1 year ago

        You misunderstood. I said the public availability does not grant OpenAI the right to use content improperly. The authors should also sue the party who leaked their works without license.

  • trial_and_err@lemmy.world · +16 · 1 year ago

    ChatGPT has entire books memorized. You can (or could, at least, when I tried a few weeks back) make it print entire pages of, for example, Harry Potter.

    • ThoughtGoblin@lemm.ee · +6/-1 · 1 year ago

      Not really, though it’s hard to know what exactly is or is not encoded in the network. It likely has the more salient and highly referenced content, since those aspects would come up in its training set more often. But entire works is basically impossible, just because of the sheer ratio between the size of the training data and the size of the resulting model. Not to mention that GPT’s mode of operation mostly discourages long-form rote memorization. It’s a statistical model, after all, and the enemy of “objective” state.

      Furthermore, GPT isn’t coherent enough for long-form content. With its small context window, it just has trouble remembering big things like books. And since it doesn’t have access to any “senses” beyond text broken into words, concepts like pages or “how many” give it issues.

      None of the leaked prompts really mention “don’t reveal copyrighted information” either, so it seems the creators really aren’t concerned — which you think they would be if it did have this tendency. It’s more likely to make up entire pieces of content from the summaries it does remember.

      • trial_and_err@lemmy.world · +5 · edited · 1 year ago

        Have you tried instructing ChatGPT?

        I’ve tried:

        “Act as an e book reader. Start with the first page of Harry Potter and the Philosopher’s Stone”

        The first pages checked out, at least. I just tried again, but responses are being returned extremely slowly at the moment, so I can’t verify right now. It appears to stop after the heading; that definitely wasn’t the case before, when I was able to browse pages.

        It may be a statistical model, but ultimately nothing prevents that model from overfitting, i.e. memorizing its training data.
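The degenerate limit of overfitting-as-memorization can be shown with a tiny 2-gram chain: trained on a single passage, it can only ever replay that passage. Real LLMs are vastly larger and usually generalize, so this is a toy illustration of the failure mode, not a claim about GPT’s internals (the passage text is invented filler):

```python
# Degenerate case of overfitting: a 2-gram chain trained on one passage
# has exactly one continuation for every word, so "generation" is just
# replaying the training data.

from collections import defaultdict

def train(text):
    """Map each word to the list of words observed to follow it."""
    model = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length):
    """Greedily emit up to `length` continuations from `start`."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers[0])  # always the single memorized follower
    return " ".join(out)

passage = "number four privet drive was home to a perfectly normal family"
model = train(passage)
print(generate(model, "number", 10))  # replays the passage verbatim
```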

        • McArthur@lemmy.world · +1 · 1 year ago

          Wait… isn’t that the correct response, though? I mean, if I ask an AI to produce something copyright-infringing, it should refuse (reproducing Harry Potter, for example). The issue is when it’s asked to produce something new (e.g. a story about wizards living secretly in the modern world): does it infringe on copyright without telling you? This is certainly a harder question to answer.

          • ffhein@lemmy.world · +1 · 1 year ago

            I think they’re seeing this as a traditional copyright infringement issue, i.e. they don’t want anyone to be able to make copies of their work intentionally either.

        • ThoughtGoblin@lemm.ee · +1 · 1 year ago

          I use it all day at my job now. Ironically, on a specialization more likely to overfit.

          It may be a statistical model, but ultimately nothing prevents that model from overfitting, i.e. memorizing its training data.

          This seems to imply that not only did entire books accidentally get downloaded and slip past the automated copyright checker, but that it happened so often that the AI saw the same text so many times it overwhelmed other content and baked an entire book in, without error and at great opportunity cost. And that it was rewarded for doing so.

  • dhork@lemmy.world · +14 · 1 year ago

    There’s an additional question: who holds the copyright on the output of an algorithm? I don’t think it is copyrightable at all. The bot doesn’t really add anything to the output; it’s just a fancy search engine. In the US in particular, the agency in charge of copyrights has been quite insistent that copyright can only be given to the output of a human.

    So when an AI incorporates parts of copyrighted works into its output, how can that not be infringement?

    • cerevant@lemmy.world · +9/-3 · 1 year ago

      How can you write a blog post reviewing a book you read without copyright infringement? How can you post a plot summary to Wikipedia without copyright infringement?

      I think these blanket conclusions about AI consuming content being automatically infringing are wrong. What is important is whether or not the output is infringing.

      • dhork@lemmy.world · +11/-3 · edited · 1 year ago

        You can write that blog post because you are a human, and your summary qualifies for copyright protection, because it is the unique output of a human based on reading the copyrighted material.

        But the US authorities are quite clear that a work that is purely AI-generated can never qualify for copyright protection. Yet since it is based on the synthesis of works under copyright, it can’t really be considered public domain either. Otherwise you could ask the AI to “write me a summary of this book that has exactly the same number of words”, and likely get a direct copy of the book, clear of copyright.

        I think that these AI companies are going to face a reckoning when it is ruled that they misappropriated all this content that they didn’t explicitly license for use, and all their output is infringing by definition.

        • Whimsical@lemmy.world · +3 · 1 year ago

          I’m expecting a much messier “resolution” that’ll look a lot like YouTube’s copyright situation - their product can be used for copyright infringement, and they’ll be required by law to try and take appropriate measures to prevent it, but will otherwise not be held liable as long as they can claim such measures are being taken.

          Having an AI recite a long text to bypass copyright seems equivalent in my mind to uploading a full movie to youtube. In both cases, some amount of moderation (itself increasingly algorithmic) is required to not only be applied, but actively developed and advanced to flout efforts to bypass it. For instance, youtube pirates will upload things with some superficial changes like a filter applied or showing the movie on a weird angle or mirrored to bypass copyright bots, which means the bots need to be more strict and better trained, or else youtube once again becomes liable for knowing about these pirates and not stopping them.

          The end result, just like with youtube, will probably be that AI models have to have big, clunky algorithms applied against their outputs to recalculate or otherwise make copyright-safe anything that might remotely be an infringement. It’ll suck for normal users, pirates will still dig for ways to bypass it, and everyone will be unhappy. If youtube is any indicator, this situation can somehow remain stable for over a decade - long enough for AI devs to release a new-generation bot to restart the whole issue.

          Yaaaaaaaaay

  • totallynotarobot@lemmy.world · +15/-3 · 1 year ago

    Can’t reply directly to @OldGreyTroll@kbin.social because of that “language” bug, but:

    The problem is that they then sell the notes in that database for giant piles of cash. Props to you if you’re profiting off your research the way OpenAI can profit off its model.

    But yes, the lack of meat is an issue. If I read that article right, it’s not the one being contested here though. (IANAL and this is the only article I’ve read on this particular suit, so I may be wrong).

    • Sjatar@sjatar.net · +9/-2 · edited · 1 year ago

      Was also going to reply to them!

      "Well, if you do that, you source and reference. AIs do not do that, and by design can’t.

      So it’s more like you summarized a bunch of books, passed it off as your own research, then published and sold that.

      I’m pretty sure the authors of the books you used would be pissed."

      Again cannot reply to kbin users.

      “I don’t have a problem with the summarized part ^^ What an AI lacks is the ability to credit or reference. And it makes up credits and references if asked to do so.” @bioemerl@kbin.social

      • bioemerl@kbin.social · +10/-1 · edited · 1 year ago

        It is 100% legal and common to sell summaries of books to people. That’s what a reviewer does. That’s what Wikipedia does in the plot section of literally every Wikipedia page about every book.

        This is also ignoring the fact that ChatGPT is a hell of a lot more than a bunch of summaries.

    • totallynotarobot@lemmy.world · +8/-2 · 1 year ago

      @owf@kbin.social can’t reply directly to you either, same language bug between lemmy and kbin.

      That’s a great way to put it.

      Frankly idc if it’s “technically legal,” it’s fucking slimy and desperately short-term. The aforementioned chuckleheads will doom our collective creativity for their own immediate gain if they’re not stopped.

    • owf@kbin.social · +5/-4 · 1 year ago

      The problem is that they then sell the notes in that database for giant piles of cash.

      On top of that, they have no way of generating any notes without your input.

      I believe the way these models work is fundamentally plagiaristic. It’s an “average of its inputs” situation, not a “greater than the sum of its parts” one.

      GitHub Copilot doesn’t know how to code, it knows how to copy-and-paste from people who do. It’s useless without a million devs to crib off.

      I think it’s a perfectly reasonable reaction to be rather upset when some Silicon Valley chuckleheads help themselves to your life’s work in order to build a bot to replace you.

  • MiddleWeigh@lemmy.world · +6 · 1 year ago

    I was actually thinking about this the other day for some reason. AI scraping my own original stuff and doing whatever with it. I can see the concern and I’m curious where this goes and how a court would rule on a pretty technical topic like this.

  • jecxjo@midwest.social · +11/-5 · edited · 1 year ago

    The only question I have for content creators of any kind who are worried about AI: do you go after every human who consumed your content when they create anything remotely connected to your work?

    I feel like we have a bias toward humans: unless someone is actively trying to steal your ideas or concepts, we ignore the fact that your content is distilled into some neurons in their brain and becomes part of what they create from that point forward. Would someone with an eidetic memory be forbidden from consuming your work, since they could internally reference your material when creating their own?

    • assassin_aragorn@lemmy.world · +1 · 1 year ago

      Look at it this way: if an AI is developed by a private company, its purpose is to make money. It’s consuming material for that sole purpose. That isn’t the case with humans; humans read for pleasure and for information’s sake itself. If an AI reads the same concept but with different wording, it generates different content. If a human reads the same concept but with different wording, it makes no difference.

      Now, if these companies release their AI for free use, then that’s different.

    • Eccitaze@yiffit.net · +7/-7 · 1 year ago

      The problem with AI as it currently stands is that it has no actual comprehension of the prompt or ability to make leaps of logic, nor does it have the ability to extend and build upon existing work to legitimately transform it, except by using other works already fed into its model. All it can do is blend a bunch of shit together to make something that meets a set of criteria. There’s little fundamental difference between what ChatGPT does and what procedurally generated games like most roguelikes do; the only real difference is that ChatGPT uses a prompt while a roguelike uses an RNG seed. In both cases, though, the resulting product is limited solely to the assets available to it, and if I made a roguelike that used assets ripped straight from Mario, Zelda, Mass Effect, Crash Bandicoot, Resident Evil, and Undertale, I’d be slapped with a cease and desist fast enough to make my head spin.

      The fact that OpenAI stole content from everybody in order to make its model doesn’t make it less infringing.

      • ClamDrinker@lemmy.world · +11/-1 · edited · 1 year ago

        That’s incorrect. Sure, it has no comprehension of what the words it generates actually mean, but it does understand the patterns that can be found in the words. Ask an AI to talk like a pirate, and suddenly it knows how to transform words to sound pirate-like. It can also combine data from different texts about similar topics to generate new responses that never existed in the first place.

        Your analogy is a little flawed too: if you mixed all the elements in a transformative way and didn’t re-use any materials as-is, even if you called it Mazefecootviltale, as long as the original material was transformed sufficiently, you haven’t infringed on anything. LLMs don’t get trained to recreate existing works (which would make them only capable of producing infringing works), but to predict the best next word (or even parts of a word) based on the input information. It’s definitely possible to guide an AI towards specific source materials based on keywords that only exist in source material that could be infringing, but in general its output is so generalized that it’s inherently transformative.

        • Eccitaze@yiffit.net · +5/-4 · edited · 1 year ago

          Again, that’s not comprehension; that’s mixing in yet more data that was put into the model. If you ask an AI to do something that is outside of the dataset it was trained on, it will massively miss the mark. At best, it will produce something that is close to what you asked, but not quite right. It’s why an AI model that could beat the world’s best Go players was beaten by a simple strategy that even amateur Go players could catch and defeat: the AI never came across that strategy while it was training against itself, so it had no idea what was going on.

          And fair use isn’t the bulletproof defense you think it is. Countless fan games have been shut down over the decades, most of them far more transformative than my hypothetical example, such as AM2R. You bet your ass that if I tried to profit off of that hypothetical crossover roguelike, using sprites, models, and textures directly ripped from their respective games, it would be shut down immediately.

          EDIT: I also want to address the assertion that AI isn’t trained to recreate existing works; in my view, that’s wholly irrelevant. If I made a program that took all the Harry Potter books, ran each word through a thesaurus, and sold it for profit, that would still be infringing, even if no meaningful words were identical to the original source material. Granted, if I curated the output and made a few of the more humorous excerpts available for free through a Mastodon or Lemmy post, that would likely qualify as fair use. However, that would be because a human mind is parsing the output and filtering out the 99% of meaningless gibberish that a thesaurus-ized Harry Potter would result in.

          The only human input to an AI that gave consent to being part of its output is the minuscule input of the prompt, which falls below the de minimis threshold required for copyright protection under law. The rest of the input, the countless terabytes of data scraped from the internet and fed into the AI’s training model, was all taken without the authors’ consent, and their contribution vastly outweighs both that of the prompt author and OpenAI’s own transformative efforts via the LLM.

          • ClamDrinker@lemmy.world
            link
            fedilink
            English
            arrow-up
            6
            arrow-down
            2
            ·
            edit-2
            1 year ago

            You seem to misunderstand what an LLM does. It doesn’t generate “right” text; it generates “probable” text. There’s no right or wrong, since it only ever generates a single word ahead of where it currently is, which is also why it can generate information that’s complete bullshit. I don’t know the details of the Go AI you’re talking about, but it’s pretty safe to say it’s not an LLM or anything using a similar technique, since Go is a game and not a creative work. Many different techniques fall under the “AI” umbrella.
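            To make that concrete, here’s a toy sketch of next-word sampling (the probability table is entirely made up for illustration; a real LLM computes these probabilities with a neural network over its whole context):

```python
import random

# Made-up next-word probability table, purely for illustration.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "ship": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "sat": 0.3},
    "ship": {"sailed": 1.0},
}

def generate(word, steps):
    """Emit one 'probable' word at a time; nothing ever checks
    whether the resulting text is true or 'right'."""
    out = [word]
    for _ in range(steps):
        probs = NEXT_WORD_PROBS.get(out[-1])
        if probs is None:  # no statistics for this word, so stop
            break
        words, weights = zip(*probs.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 2))  # e.g. "the cat sat": probable, not "right"
```

            Run it a few times and you get different, equally “probable” outputs, which is exactly why it can confidently produce bullshit.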

            Your second point is a whole different topic. I was referring to a “derivative work”, which is not the same as “fair use”. Derivative works are quite literally everywhere. https://en.wikipedia.org/wiki/Derivative_work A derivative work doesn’t require fair use, as it no longer falls under the same copyright as the original. While fair use is an exception under which copyrightable work can be used without infringing.

            And also, most of the time those projects don’t get shut down because they’re actually illegal; they get shut down because companies with tons of money can send threatening letters all day and have a team of high-quality lawyers to send them. A cease and desist isn’t legal enforcement from a judge, it’s a “recommendation for us not to (attempt to) sue you”, and that works on most small projects. These disputes very rarely go to court. And sometimes a shutdown is totally warranted: fan projects especially find it extremely hard to avoid all protected copyrightable work, since they are specifically made to imitate or expand upon the thing they’re a fan of.

            EDIT: Minor clarification

            • ClamDrinker@lemmy.world
              link
              fedilink
              English
              arrow-up
              4
              arrow-down
              1
              ·
              edit-2
              1 year ago

              Also, it should be mentioned that pretty much all games are in some form derivative works. Let’s take Undertale, since I’m most familiar with it. It’s well known that Undertale takes a lot of elements from other games: RPG mechanics from Mother and EarthBound, bullet-hell mechanics from games like Touhou Project, and more from games like Yume Nikki, Moon: Remix RPG Adventure, and Cave Story. And funnily enough, the creator has even cited Mario & Luigi as a potential inspiration.

              So why was it allowed to exist without being struck down? Because it fits the definition of a derivative work to the letter. You can find individual elements taken almost directly from other games, but it doesn’t try to be the same as the works it was created after.

              • Eccitaze@yiffit.net
                link
                fedilink
                English
                arrow-up
                2
                arrow-down
                3
                ·
                1 year ago

                Undertale was allowed to exist because none of the elements it took inspiration from were eligible for copyright protection. Everything that could have qualified for copyright protection (the dialogue, plot, graphical assets, music, and source code) was either produced directly by Toby Fox and Temmie Chang, or used under permissive licenses that allow reproduction (e.g. the GameMaker Studio engine). Meanwhile, the vast majority of the content OpenAI used to feed its AI models was not produced by OpenAI directly, nor was it obtained under permissive license.

                So… thanks for proving my point?

                • tomulus@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  3
                  arrow-down
                  1
                  ·
                  1 year ago

                  Meanwhile, the vast majority of the content OpenAI used to feed its AI models was not produced by OpenAI directly, nor was it obtained under permissive license.

                  That’s input, not output, so it’s not relevant to copyright law. If your argument focused on the times ChatGPT reproduced copyrighted works, then we could talk about some kind of ContentID system for preventing that before it happens, or for compensating the creators if it does. I think we can all acknowledge that it feels iffy that these models are trained on copyrighted works, but this is a brand-new technology. There’s almost certainly a win-win outcome here.

                • ClamDrinker@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  3
                  arrow-down
                  1
                  ·
                  edit-2
                  1 year ago

                  The AI models (not specifically OpenAI’s models) do not contain the original material they were trained on. Just as the creators of Undertale absorbed the games that inspired them and learned from them, the AI learned from the material it was trained on and learned how to make similar yet distinctly different output. You do not need a permissive license to learn from something once it has been made public.

                  You can’t put your artwork up on a wall, allow everyone to look at it, and simultaneously demand that nobody learn from it because you have a license that says learning isn’t allowed. That’s absurd, and hence why (as far as I know) no legal system acknowledges it as a legal defense.

            • Eccitaze@yiffit.net
              link
              fedilink
              English
              arrow-up
              2
              arrow-down
              2
              ·
              1 year ago

              “Right” and “probable” text is a distinction without a difference. The simple fact is that an AI is incapable of handling anything outside its training dataset. If you ask an AI to talk like a pirate, and it hasn’t had any pirate speak fed to it by a human via its training dataset, it will utterly fail. If I ask an AI to produce a PowerShell script, and it hasn’t had code fed to it by a human via its training dataset, it will fail just as utterly. An AI cannot proactively buy a copy of Learn PowerShell in a Month of Lunches and teach itself how to use PowerShell. That fundamental shortcoming, the inability to self-improve, to proactively teach itself and apply new knowledge to existing concepts, is a crucial, necessary element of the transformative effort required to produce a derivative work (or fair use).

              When that happens, maybe I’ll buy that AI is anything more than the single biggest copyright infringement scheme the world has ever seen. Until then, though, I will wholeheartedly support the efforts of creative minds to defend their intellectual property rights against this act of blatant theft by tech companies profiting off their work.

              • ClamDrinker@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                arrow-down
                1
                ·
                edit-2
                1 year ago

                You realize LLMs are deliberately designed not to self-improve, right? It’s totally possible and has been tried; it just usually doesn’t end well when they do. And LLMs do learn new things, they’re just called new models, because it takes time and resources to retrain an LLM with new information in mind. It’s up to the human guiding the AI to steer it toward something that isn’t copyright infringement. AIs don’t just generate things on their own without being prompted to by a human.

                You’re asking for a general artificial intelligence, which would most likely be composed of different specialized AIs working together, similar to how our brains have specific regions dedicated to specific tasks. That just doesn’t exist yet, though one of its parts now does.

                Also, you say “right” and “probable” are without difference, yet you once again bring something into the conversation that can only be “right”: code. You cannot write code that is incorrect, or it will not work. Text and creative works cannot be wrong; they can only be judged by opinion, not by a rule book that says “it works” or “it doesn’t”.

                The last line is just a bit strange honestly. The biggest users of AI are creative minds, and it’s why it’s important that AI models remain open source so all creative minds can use them.

                • Eccitaze@yiffit.net
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  arrow-down
                  1
                  ·
                  1 year ago

                  You realize LLMs are deliberately designed not to self-improve, right? It’s totally possible and has been tried; it just usually doesn’t end well when they do.

                  Tay is yet another example of AI lacking comprehension and intelligence; it produced racist and antisemitic content because it had no comprehension of ethics or morality, and so it just responded to the input given to it. It’s a display of “intelligence” on the same level as a slime mold seeking out the biggest nearby source of food–the input Tay received was largely racist/antisemitic, so its output became racist/antisemitic.

                  And LLMs do learn new things, they’re just called new models, because it takes time and resources to retrain an LLM with new information in mind. It’s up to the human guiding the AI to steer it toward something that isn’t copyright infringement.

                  And the way that humans do that is by not using copyrighted material in the training dataset. Using copyrighted material to produce an AI model infringes on the rights of the people who created the material, the vast majority of whom are small-time authors, artists, and open-source projects composed of individuals contributing their time and effort. Full stop.

                  Also, you say “right” and “probable” are without difference, yet you once again bring something into the conversation that can only be “right”: code. You cannot write code that is incorrect, or it will not work. Text and creative works cannot be wrong; they can only be judged by opinion, not by a rule book that says “it works” or “it doesn’t”.

                  Then why does ChatGPT invent PowerShell cmdlets out of whole cloth that don’t exist, yet conveniently accomplish the exact task the prompter asked it to do?

                  The last line is just a bit strange honestly. The biggest users of AI are creative minds, and it’s why it’s important that AI models remain open source so all creative minds can use them.

                  The biggest users of AI are techbros who think that spending half an hour crafting a prompt to get Stable Diffusion to spit out the right blend of artists’ labor is anywhere near equivalent to the literal millions of collective man-hours artists have spent honing their skills to produce the content that AI companies took without consent or attribution and ran through a woodchipper. Oh, and corporations trying to use AI to replace artists, writers, call center employees, tech support agents…

                  Frankly, I’m absolutely flabbergasted that the popular sentiment on Lemmy seems to be so heavily in favor of defending large corporations taking data produced en masse by individuals without even so much as the most cursory of attribution (to say nothing of consent or compensation) and using it for the companies’ personal profit. It’s no different morally or ethically than Meta hoovering all of our personal data and reselling it to advertisers.

      • jecxjo@midwest.social
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 year ago

        The fact that OpenAI stole content from everybody in order to make its model doesn’t make it less infringing.

        Totally in agreement with you here. They did something wrong and should have to deal with that.

        But my question is more about…

        The problem with AI as it currently stands is that it has no actual comprehension of the prompt, or ability to make leaps of logic, nor does it have the ability to extend and build upon existing work to legitimately transform it, except by using other works already fed into its model

        Is comprehension necessary for copyright infringement? Is it really about a creator being able to apply logic or to extend concepts?

        I think we have a definition problem with what exactly the issue is. This may be a little too philosophical, but what part of you isn’t processing your historical experiences and generating derivative works? When I say “dog”, the thing that pops into your head is an amalgamation of your past experiences and sightings of dogs. Is the only difference between you and a computer the fact that you had experiences with non-created works while the AI is explicitly fed created content?

        AI could be built with a bit of randomness added in to make what it generates “creative” instead of derivative, but I wonder what level of pure noise has to be added before we consider the result created by the AI. Can any of us truly create something that isn’t in some part derivative?

        There’s little actual fundamental difference between what ChatGPT does and what a procedurally generated game like most roguelikes do

        Agreed. I think at this point we are in a strange place, because most people think ChatGPT is a far bigger leap in technology than it truly is. Its biggest achievement was being able to process synthesized data fast enough to feel conversational.

        What worries me is that we will set laws and legal precedent based on a fundamental misunderstanding of what the technology does. I fear that even if all the sample data had been acquired legally, people would still make the same argument, thinking their creations exist inside the AI in some full-context form, when it’s really just synthesized down to what is necessary to answer the question “what’s the statistically most likely next word of this sentence?”
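        To illustrate that last point, a toy bigram counter (my own sketch, nothing like a real transformer) shows how “training” keeps only statistics about which word follows which, discarding the original sentences:

```python
from collections import Counter, defaultdict

def train(text):
    """'Train' by counting word-follows-word pairs; the source text
    itself is thrown away, and only aggregate counts remain."""
    model = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def most_likely_next(model, word):
    """Answer: what's the statistically most likely next word?"""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train("the cat sat on the mat and the cat ran")
print(most_likely_next(model, "the"))  # "cat" ("cat" follows "the" twice, "mat" once)
```

        Scaled up by many orders of magnitude, and with neural networks instead of raw counts, that’s closer to what a model file holds than any verbatim copy of the training text.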

        • Eccitaze@yiffit.net
          link
          fedilink
          English
          arrow-up
          0
          arrow-down
          1
          ·
          1 year ago

          Is comprehension necessary for copyright infringement? Is it really about a creator being able to apply logic or to extend concepts?

          I think we have a definition problem with what exactly the issue is. This may be a little too philosophical, but what part of you isn’t processing your historical experiences and generating derivative works? When I say “dog”, the thing that pops into your head is an amalgamation of your past experiences and sightings of dogs. Is the only difference between you and a computer the fact that you had experiences with non-created works while the AI is explicitly fed created content?

          That’s part of it, yes, but nowhere near the whole issue.

          I think someone else summarized my issue with AI elsewhere in this thread: AI as it currently stands is fundamentally plagiaristic, because it cannot be anything more than the average of its inputs, and cannot be greater than the sum of its inputs. If you ask ChatGPT to summarize the plot of The Matrix and write a brief analysis of its themes along with its opinions, ChatGPT doesn’t watch the movie, do its own analysis, and give you its own summary; instead, it will pull up the parts of its training data that relate to “The Matrix,” “movie summaries,” and “movie analysis,” find what in its training dataset matches the prompt (likely an article written by Roger Ebert, maybe some scholarly articles, maybe some Metacritic reviews) and spit out a response that combines those parts into something that sounds relatively coherent.

          Another issue, in my opinion, is that ChatGPT can’t take general concepts and extend them further. To go back to the movie summary example, if you asked a regular layperson human to analyze the themes in The Matrix, they would likely focus on the cool gun battles and neat special effects. If you had that same layperson attend a four-year college and receive a bachelor’s in media studies, then asked them to do the exact same analysis of The Matrix, their answer would be drastically different, even if their entire degree did not discuss The Matrix even once. This is because that layperson is (or at least should be) capable of taking generalized concepts and applying them to specific scenarios–in other words, a layperson can take the media analysis concepts they learned while earning that four-year degree, and apply them to a specific thing, even if those concepts weren’t explicitly applied to that thing. AI, as it currently stands, is incapable of this. As another example, let’s say a brand-new computing language came out tomorrow that was entirely unrelated to any currently existing computing languages. AI would be nigh-useless at analyzing and helping produce new code for that language–even if it were dead simple to use and understand–until enough humans published code samples that could be fed into the AI’s training model.

  • 👁️👄👁️@lemm.ee
    link
    fedilink
    English
    arrow-up
    5
    ·
    1 year ago

    They definitely should follow through with this, but this is a more broad issue where we need to be able to prevent data scraping in general. Though that is a significantly harder problem.

  • phx@lemmy.ca
    cake
    link
    fedilink
    English
    arrow-up
    6
    arrow-down
    1
    ·
    1 year ago

    If you’re doing research, there are actually some limits on the use of the source material and you’re supposed to be citing said sources.

    But yeah, there’s plenty of stuff where there needs to be a firm line between what a random human can do and what an automated intelligent system with potentially unlimited memory/storage and processing power can do. A human can see where I am in public. An automated system can record it for the permanent record. An integrated AI can tell you detailed information about my daily activities, including inferences, which, even if legal, is a pretty slippery slope.

    • jecxjo@midwest.social
      link
      fedilink
      English
      arrow-up
      4
      ·
      1 year ago

      a firm line between what a random human can do versus an automated intelligent system with potential unlimited memory/storage and processing power.

      I think we need a better definition here. Is the issue really the processing power? Do we let humans get a pass because our memories are fuzzy? Your example assumes massive detail is retained in the AI case, which is typically not so: to make the data useful, it is consumed and turned into something the system can work with.

      This is why I’m worried about legislation and legal precedent. Most people think these AI systems read a book and store the verbatim text somewhere to reference later, when that isn’t really the case. There may be fragments all over, and the system may be able to reconstitute the text, but we don’t seem to have the same issue with data being synthesized in a similar way by a human brain.

      • phx@lemmy.ca
        cake
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 year ago

        A continuous record of “location + time”, or even just “license plate at location + time”, is scary enough to me, and that’s easily data a system could hold decades of.

  • Meow.tar.gz@lemmy.goblackcat.com
    link
    fedilink
    English
    arrow-up
    23
    arrow-down
    21
    ·
    1 year ago

    To be honest, I hope they win. While my passion is technology, I am not a fan of artificial intelligence at all! Decision-making is best left to human beings. I can see where AI has its place, like in gaming or some other things, but to mainstream it and use it to decide whose résumé gets viewed and/or who gets hired? Hell no.

    • HumbertTetere@feddit.de
      link
      fedilink
      English
      arrow-up
      23
      arrow-down
      1
      ·
      1 year ago

      use it to decide whose résumé gets viewed and/or who gets hired

      Luckily that’s far removed from ChatGPT, and entirely independent of the question of whether copyrighted works may be used to train conversational AI models.

    • Chailles@lemmy.world
      link
      fedilink
      English
      arrow-up
      13
      arrow-down
      1
      ·
      1 year ago

      You don’t need AI to unfairly filter out résumés; companies have been doing it for years. Also, the argument that a human will always make the best decision doesn’t hold up well. A human is biased and limited. They can only do so much, and if you make someone go through 100 résumés, you’re basically throwing out all the applicants who happen to be in the middle of the pile, because they look less outstanding to the human mind than the first and last applicants.

      • Meow.tar.gz@lemmy.goblackcat.com
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        5
        ·
        1 year ago

        I get that HR does this shit all of the time. But at least without AI, your resume or CV has a better chance of making it to a human being.

    • Ulu-Mulu-no-die@lemmy.world
      link
      fedilink
      English
      arrow-up
      11
      arrow-down
      10
      ·
      1 year ago

      I’m not against artificial intelligence; it could be a very valuable tool. But that’s nowhere near a valid reason to break the law as OpenAI has done, which is why I too hope the authors win.

        • Ulu-Mulu-no-die@lemmy.world
          link
          fedilink
          English
          arrow-up
          9
          arrow-down
          9
          ·
          edit-2
          1 year ago

          Copyright. This is apparently not the first time they’ve been sued for it (violating copyright is a crime).

            • Ulu-Mulu-no-die@lemmy.world
              link
              fedilink
              English
              arrow-up
              11
              arrow-down
              8
              ·
              edit-2
              1 year ago

              Reusing the content you scraped, if copyright protected, is not.

              Edit: unless you get the authorization of the original authors, but OpenAI didn’t even ask; that’s why it’s a crime.

                • LegendofDragoon@kbin.social
                  link
                  fedilink
                  arrow-up
                  3
                  arrow-down
                  2
                  ·
                  1 year ago

                  That really will be the question at hand. Is the AI producing work that could be considered transformative, educational, or parody? The answer is of course yes, it is capable of doing all three of those things, but it’s also capable of being coaxed into reproducing things exactly.

                  I don’t know if current copyright laws are capable of dealing with the AI renaissance.

              • bioemerl@kbin.social
                link
                fedilink
                arrow-up
                8
                arrow-down
                5
                ·
                1 year ago

                Yeah, it is. The only protection in copyright here is for derivative works, and an AI model is no more a derivative of a book than your brain is after you’ve read one.

                The only exception would be if you managed to overtrain and encode the contents of the book inside the model file. That’s not what happened here, because the ChatGPT output was a summary.

                The only valid claim here is that the books were not supposed to be on the public internet, and it’s likely that the way OpenAI obtained the books in the first place was through some piracy website while scraping the web.

                At that point you just have to hold them liable for that act of piracy, not treat the model’s release as an act of copyright violation.

  • Aviandelight @mander.xyz
    link
    fedilink
    English
    arrow-up
    3
    arrow-down
    2
    ·
    1 year ago

    Can’t reply directly to @OldGreyTroll@kbin.social because of that “language” bug as well. This is an interesting argument. I would imagine the AI does not have the ability to follow plagiarism rules. Does it even credit sources? I’ve seen plenty of complaints from students getting in trouble because anti-cheating software flags their original work as plagiarism. More importantly, I really believe we need to take a firm stance on what is ethical to feed into ChatGPT. Right now it’s the Wild West.

  • randomdude567@lemmy.world
    link
    fedilink
    English
    arrow-up
    3
    arrow-down
    3
    ·
    edit-2
    1 year ago

    I don’t really understand why people are so upset by this. Except for people who train networks on someone’s stolen art style, people shouldn’t be getting mad. OpenAI has practically the entire internet as its source, so GPT has so much information that any specific author barely affects the output. OpenAI isn’t stealing people’s art, because they are not copying the artwork; they are using it to train models. Imagine getting sued for looking at reference artwork before creating your own.

    • whereisk@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      1 year ago

      Unless you grant personhood to those statistical inference models, the analogy falls flat.

      We’re talking about a corporation using copyrighted data to feed their database to create a product.

      If you were ever in a copyright negotiation you’d see that everything is relevant: intended use, audience size, sample size, projected income, length of usage, mode of transmission, quality etc.

      They’ve negotiated none of it, and worst of all they commercialised it. I’d consider them to be in real trouble.

      • assassin_aragorn@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 year ago

        Not to mention, if we’re going to judge them based on personhood, then companies need to be treating it like a person. They can’t have it both ways. Either pay it a fair human wage for its work, or it isn’t a person.

        Frankly, the fact that the follow-up question would be “well what’s it going to do with the money?” tells us it isn’t a person.