- cross-posted to:
- [email protected]
- [email protected]
- [email protected]
Increasingly, the authors of works being used to train large language models are complaining (and rightfully so) that they never gave permission for such a use-case. If I were an LLM company, I’d be seriously looking for a Plan B right now, whether that’s engaging publishing companies to come up with new licensing options, paying 1,000,000 grad students to write 1,000,000 lines of prose, or something else entirely.
Non-issue political BS. AI has no more capability than a human who half-assed reading the CliffsNotes of any book. It has a similar awareness to anyone who knows about the work and its basic writing style. Complaining about this is as stupid as thought-policing people for being aware of a book and its content without paying for it. The fucking joke media is terrible with this yellow journalism. I'm actually playing with open-source offline AI models. I've tried training one on a book. The results are useless.
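For anyone curious what "training one on a book" actually involves: the first step is usually slicing the raw text into fixed-length, overlapping training windows before feeding them to a causal-LM fine-tuning loop. A minimal sketch of that preprocessing step, assuming a generic Hugging Face-style workflow (the window size and stride here are illustrative, not from any specific toolchain):

```python
def chunk_text(text: str, window: int = 512, stride: int = 256) -> list[str]:
    """Slice a book's raw text into overlapping fixed-length windows,
    the usual shape of a causal-LM fine-tuning dataset.
    `window` and `stride` are illustrative defaults."""
    chunks = []
    # Step through the text, overlapping each window with the next
    # so no sentence boundary is lost entirely at a chunk edge.
    for start in range(0, max(len(text) - window, 0) + 1, stride):
        chunks.append(text[start:start + window])
    return chunks
```

One book yields only a few hundred such windows, which goes some way toward explaining why the results of fine-tuning on a single book are so underwhelming.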
The main motivation behind all of the garbage hype media is a propaganda campaign to limit AI to proprietary, privacy-invasive garbage. The open-source models are an existential threat. There is no going back now. This is like the early days of proprietary internet frameworks: everyone involved in those went out of business once open-source options became available. AI LLMs are as big a change as the entire internet. For example, you want a search engine that works? A Llama 2 70B is far better at responding with what you're actually looking for than any current search engine. That makes stalkerware big tech obsolete.
I keep saying the same about diffusion models as well. I guess we just want Adobe and other wealthy companies to be the only ones with access to proprietary datasets large enough to build futuristic art tools.
Pay subscriptions to your overlords or suffer.
Or move to a country with more permissive IP laws to do your AI work.
So many people are urging policymakers to kneecap AI development and cede all that progress to China.
It’s hard to trade with the rest of the world when you’re not a party to the Berne Convention
The Berne Convention contains an enumerated list of activities that it recognizes as restrictable by IP law. Training AIs is not among them.
Derivative works are, though - and the cases slowly plodding through the court system right now are going to demand a decision on whether an LLM or its outputs count as derivative works.
For it to be a derivative work, you're going to have to prove that the model contains a substantial portion of the material it's supposedly a derivative work of. Good luck with that; neural nets simply don't work that way.
That's not really true, though. The biggest reason these cases were able to get traction is that, when prompted in certain specific ways, researchers were able to reproduce substantial portions of copyrighted works - https://arstechnica.com/tech-policy/2023/08/openai-disputes-authors-claims-that-every-chatgpt-response-is-a-derivative-work/
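To make "reproduce substantial portions" concrete: one common way to quantify this kind of memorization is to measure the longest verbatim overlap between a model's output and the source text. A hedged, self-contained sketch of that check (the function name and approach are illustrative, not taken from the linked study):

```python
def longest_verbatim_overlap(output: str, source: str) -> int:
    """Length of the longest substring of `output` that also appears
    verbatim in `source` -- a crude signal of memorization."""
    best = 0
    n = len(output)
    for i in range(n):
        # Only test spans longer than the current best; substring
        # containment is monotone, so shorter spans need no re-check.
        j = i + best
        while j < n and output[i:j + 1] in source:
            j += 1
            best = max(best, j - i)
    return best
```

The claims in these suits hinge on prompts that push this overlap from a few words (ordinary awareness of a book) into long verbatim passages.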