In its submission to the Australian government’s review of the regulatory framework around AI, Google said that copyright law should be altered to allow for generative AI systems to scrape the internet.
Personally, I’d rather stop posting creative endeavours entirely than let them be stolen and regurgitated by every single company that’s built a thing on the internet.
I just take comfort in the fact that my art will never be good enough for a generative AI to steal.
If it’s on any major platform, these companies will probably still use it. If they were allowed to scrape the whole internet, I doubt they’d have any human looking over the art being used.
It’ll just be thrown in with everything else similar to how I always seem to find paper towels in the dryer after doing laundry.
Then I take comfort in the fact it might serve to sabotage whatever it generates.
“Bad” art is still useful in training these models because it can be illustrative of what not to do. When prompting image generators, it’s common to include “negative prompts” alongside your regular prompt, telling the AI what sorts of things it should avoid putting in the output image. If I stuck “by Roundcat” into the negative prompt, the model would try to steer its output away from the kinds of things you make.
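For context, negative prompts are commonly implemented through classifier-free guidance: the negative prompt’s text embedding takes the place of the usual empty-string “unconditional” branch, so each denoising step is pushed away from what the negative prompt describes. A toy NumPy sketch of that arithmetic (the vectors are made-up stand-ins for a real model’s noise predictions, not actual pipeline output):

```python
import numpy as np

# Toy stand-ins for the denoiser's noise predictions at one sampling step.
# In a real diffusion pipeline these come from the model, conditioned on
# text embeddings; here they are just illustrative vectors.
pred_positive = np.array([0.9, 0.1, 0.4])  # conditioned on the prompt
pred_negative = np.array([0.2, 0.8, 0.4])  # conditioned on the negative prompt
                                           # (replacing the empty-string
                                           # "unconditional" branch)

def cfg(pred_cond, pred_uncond, guidance_scale=7.5):
    """Classifier-free guidance: move the result toward the prompt's
    prediction and away from whatever the 'unconditional' branch encodes."""
    return pred_uncond + guidance_scale * (pred_cond - pred_uncond)

guided = cfg(pred_positive, pred_negative)
print(guided)
```

Components where the negative branch scores higher than the positive one end up pushed below both, which is the “avoid this” effect; where the two agree, the output is unchanged.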
I think the topic is more complex than that.
Otherwise you could say you’d rather stop posting creative endeavours entirely than let them be stolen and regurgitated by every single artist who uses the internet for references and inspiration.
It’s not only the argument that “companies do so for profit”, because many artists do the same: maybe they’re designers, illustrators, or others, and your work will give them ideas for their commissions.
deleted by creator
I think this is an interesting example, because it’s already like this. Songs reusing samples of other songs are released all the time, and it’s all perfectly legal; only making a copy is illegal. No one can sue you if you create a character that resembles Mickey Mouse, but you can’t use Mickey Mouse himself.
And pharmaceutical patents serve the same purpose: they encourage companies to publicly release papers, data, and synthesis methods so that other people can learn and research can move faster.
And the whole point of this is exactly regulating AI like people: no one will come after you because you’ve read something and now have an opinion about it, and nobody will get angry because you saw an Instagram post and now have some ideas for your art.
Of course the distinction between likeness and copy is not that well defined, but that’s part of the whole debacle.
deleted by creator
It’s just a single example; there are endless songs which are samples of samples of samples… Once in a while YouTube’s Content ID will have some problems, as it’s not perfect. That doesn’t mean the system is fundamentally flawed. It’s like saying every car on the planet is cursed because you once got a flat tyre.
Pay attention, because the alternative to patents is not a “free for all” approach; it’s industrial secrecy, as research is still very expensive for entities to carry out.
That aside: no, extra research benefits everyone in society, as new cures for diseases are discovered faster and medicine evolves organically. Patents were the compromise to ensure companies could monetize their research while sharing their knowledge. Are there other possible equilibria? Sure, but we still have to remember we live in the real world; you can’t have your cake and eat it too.
deleted by creator
Oh, the legal system is very much messed up; YouTube tried to put a bandage on it. You have to consider that normally you would need a fully personalized legal contract for each piece of copyrighted material you use. Content ID tries to automate that process, but it’s not perfect.
Which is what happens with patents today. The company holding the patent rarely also physically produces the drug; they usually have “manufacturing agreements”, especially in geographically distant markets, where they let a second company make the drug and sell it freely in exchange for a percentage of the label price.
That’s also what happened with vaccines and many other medications, it’s like the standard procedure lol
That’s more of a US problem than it is a pharmaceutical patents problem.
Only they are able to benefit from that research at first, which is how it’s always been: new things are rare and expensive at first and become cheaper and more common over time.
deleted by creator
And of course, the same principle must apply to the resulting AI models themselves.
Voluntary obscurity is always an option, I suppose.
We need to actively start sabotaging the data sources these LLMs are based on. Make AI worthless.
Your comment right here provides useful training data for LLMs that might use Fediverse data as part of their training set. How would you propose “sabotaging” it?