OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling’s Harry Potter series: A new research paper laid out ways in which AI developers should try to avoid showing that LLMs have been trained on copyrighted material.
We have to distinguish between LLMs:
- trained on copyrighted material, and
- outputting copyrighted material.
They are not one and the same.
Should we distinguish them, though? Why shouldn’t (and didn’t) artists have a say in whether their art is used to train LLMs? Just like publicly displayed art doesn’t grant permission to copy it and use it for other, unspecified purposes, it would be reasonable for the same to apply to AI training.
Ah, but that’s the thing. Training isn’t copying. It’s pattern recognition. If you train a model on “The dog says woof” and then ask the model “What does the dog say?”, it’s not guaranteed to say “woof”.
Similarly, just because a model was trained on Harry Potter, all that means is it has a good statistical picture of how the sentences in that book go (see the toy sketch below).
Thus the distinction. Can I train on a comment section discussing the book?
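For what it’s worth, here’s a deliberately tiny sketch of what “learning the pattern, not the text” means. It’s a word-level bigram counter I made up purely for illustration; a real LLM is vastly more sophisticated, and nothing here reflects how OpenAI actually trains models:

```python
# Toy illustration only: "train" on a tiny corpus by counting which word tends
# to follow which, then generate by sampling from those counts. The text itself
# is not stored as a retrievable document; only the transition statistics are.
import random
from collections import Counter, defaultdict

corpus = ["the dog says woof", "the cat says meow", "the dog chases the cat"]

transitions = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        transitions[prev][nxt] += 1

def continue_from(word, length=3):
    """Sample a continuation word by word; the output is probabilistic, not a lookup."""
    out = [word]
    for _ in range(length):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(out)

print(continue_from("dog"))  # might print "dog says woof", might print "dog chases the cat"
```

Ask it “what does the dog say” enough times and you’ll sometimes get “woof” back, but only because “woof” is the statistically likely continuation, not because the sentence is sitting in a database somewhere.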
Yeah, this headline is trying to make it seem like training on copyrighted material is or should be wrong.
I think this brings up broader questions about the currently quite extreme interpretation of copyright. Personally I don’t think it’s wrong to sample from or create derivative works from something that is accessible. If it’s not behind lock and key, it’s free to use. If you have a problem with that, then put it behind lock and key. No one is forcing you to share your art with the world.
Vanilla Ice had it right all along. Nobody gives a shit about copyright until big money is involved.
Yep. Legally every word is copyrighted. Yes, law is THAT stupid.
People think it’s a broken system, but it actually works exactly how the rich want it to work.
The powers that be have done a great job convincing the layperson that copyright is about protecting artists and not publishers. It’s historically inaccurate, and you can discover that copyright law was pushed by publishers who did not want authors keeping second-hand manuscripts of works they had sold to publishing companies.
Additional reading: https://en.m.wikipedia.org/wiki/Statute_of_Anne
Why are people defending a massive corporation that admits it is attempting to create something that will give them unparalleled power if they are successful?
Because ultimately, it’s about the truth of things, and not what team is winning or losing.
Mostly because fuck corporations trying to milk their copyright. I have no particular love for OpenAI (though I do like their product), but I do have great disdain for already-successful corporations that would hold back the progress of humanity because they didn’t get paid (again).
The dream would be that they manage to make their own glorious free & open source version, so that after a brief spike in corporate profit as they fire all their writers and artists, suddenly nobody needs those corps anymore because EVERYONE gets access to the same tools. If everyone has the ability to churn out massive amounts of content without hiring anyone, that theoretically favors those who never had the capital to hire people to begin with, far more than those who did the hiring.
Of course, this stance doesn’t really have an answer for any of the other problems involved in the tech, not the least of which is that there are bigger issues at play than just “content”.
Because everyone learns from books, it’s stupid.
An LLM is not a person, it is a product. It doesn’t matter that it “learns” like a human - at the end of the day, it is a product created by a corporation that used other people’s work, with the capacity to disrupt the market that those folks’ work competes in.
Leftists hating on AI while dreaming of post-scarcity will never not be funny
People are acting like ChatGPT is storing the entire Harry Potter series in its neural net somewhere. It’s not storing or reproducing text in a 1:1 manner from the original material. Certain material, like very popular books, has likely been ingested tens of thousands of times due to how many times it was reposted online (and therefore how many times it appeared in the training data).
Just because it can recite certain passages almost perfectly doesn’t mean it’s redistributing copyrighted books. How many quotes do you know perfectly from books you’ve read before? I would guess quite a few. LLMs are doing the same thing, but on mega steroids with a nearly limitless capacity for information retention.
Nope, people are just acting like ChatGPT is making commercial use of the content. Knowing a quote from a book isn’t copyright infringement. Selling that quote is. Also, it doesn’t need to be content stored 1:1 somewhere to be infringement; that misses the point. If you’re making money off a synopsis you wrote based on imperfect memory and in your own words, it’s still copyright infringement until you sign a licensing agreement with JK. Even transforming what you read into a different medium, like a painting or poetry, can infringe the original author’s copyright.
Now mull that over and tell us what you think about modern copyright laws.
Just adding that, outside of Rowling, who I believe has a different contract than most authors due to the expanded Wizarding World and Pottermore, most authors themselves cannot quote their own novels online, because that would be publishing part of the novel digitally, and that’s a right they’ve sold to their publisher. The publisher usually ignores this as it creates hype for the work, but authors are careful not to abuse it.
> but on mega steroids with a nearly limitless capacity for information retention.
That sounds like redistributing copyrighted books
Lol, say that to the first (obscure) Harry Potter line I tried on ChatGPT.
Using copyrighted work, like art, as an example still influences the AI, which they make profit from.
If they use my works, then they need to pay. That’s it.
Still kinda blows my mind how like the most socialist people I know (fellow artists) turned super capitalist the second a tool showed like an inkling of potential to impact their bottom line.
Personally, I’m happy to have my work scraped and permutated by systems that are open to the public. My biggest enemy isn’t the existence of software scraping an open internet, it’s the huge companies who see it as a way to cut us out of the picture.
If we go all copyright crazy on the models for looking at stuff we’ve already posted openly on the internet, the only companies with access to the tools will be those who already control huge amounts of data.
I mean, for real, it’s just mind-blowing seeing the entire artistic community pretty much go full-blown “Metallica with the RIAA” after decades of making the “you wouldn’t download a car” joke.
Fuckin preach! I feel like I’m surrounded by children that didn’t live through the many other technologies that have come along and changed things. People lost their shit when Photoshop became mainstream, when music started using samples, etc. AI is here to stay. These same people are probably listening to autotuned music all day while they complain on the internet about AI looking at their art.
Nobody would defend copyright if it wasn’t already in place; it’s a sick idea. They ask us to carve up the field of human knowledge for private benefit. Now they want to destroy a new technology in its name. Greed knows no bounds.
I defend the idea of copyright. The first copyright law was in 1710, to protect authors from the printing press. Without copyright, whoever owned the printing press would sell copies of books with no obligation to pay the author. When copying art is trivial, the artist needs copyright protection in order to make a living creating art.
There are major problems with modern copyrights. Like all things in capitalism it has been subverted to benefit the rich, but the core idea behind copyright is sound.
These lawsuits are not to stop the development of generative AI. These lawsuits are to stop the unlicensed use of copyrighted works as AI training data.
There are AI models that are only trained with licensed data. This doesn’t stop the development of AI.
Artists should have the right to choose whether their work is used as training data. And they should be compensated fairly for it. That will be the case if these lawsuits succeed.
Ultimately it’s a propertarian scheme of ownership imposed onto the realm of concepts and ideas. The first person to successfully lay claim to an idea is given a monopoly on that idea for some number of years. A book, an invention, a melody. To secure profit for that individual, the entire rest of humanity is prevented access to the idea except under his terms, and the naturally free exchange of information is curtailed by statute to accomplish this, via the imposition of punishments for anyone who goes against this scheme. I do not think that’s defensible. That is to say, I don’t think humanity sees a net benefit from this way of doing things. Even some hypothetical 20-30% reduction in the generation of different kinds of creative works would be well offset by the benefit humanity sees from being able to access them, and the funds that would be going to the artist still could if people saw fit.
Is this being used to stop the development of generative AI? Yes: they want to curtail the very imprint left on an AI by having parsed the works and understood them in some symbolic capacity. And the existing models that have already done that would likely be rendered illegal, setting the entire technology back a year or two.
> Nobody would defend copyright if it wasn’t already in place
I don’t know about that. Say you take a few years to write a handful of poems, and it turns out people in your neighborhood really like them. You compile the poems into a book, and sell it for $5, and it sells well. Seeing this, your neighbor buys one, copies it, and starts selling it one neighborhood over for $2, and representing themself as the author. I would think most people in that situation would want to say, ‘hey, that’s not fair’. I don’t think that’s sick or rooted in greed, copyright can be a check on greed.
So thanks to copyright, we’re now living in a world where artists are fairly compensated and not exploited by large corporations acting as middlemen that have seized control of their creative works and used it for their own profit?
More so than we would be without copyright at all
Copyright needs to be extended for individuals and cut back for corporations. People should be allowed to own the rights to their IP, but corps should face much tighter restrictions, including requirements that some knowledge must be shared.
> More so than we would be without copyright at all
It’s hard to imagine how it could be worse than what we have now.
> Copyright needs to be extended for individuals and cut back for corporations. People should be allowed to own the rights to their IP, but corps should face much tighter restrictions, including requirements that some knowledge must be shared.
Well, in effect that would scale back the copyright nightmare we have now, but the basic problem is still there. The argument is still for near-indefinite monopoly privilege over information to be given to its creator, at the expense of humanity’s ability to share and reproduce the work. I don’t think that’s justifiable.
I defend copyright. The original intent was to protect creators in order to foster more creativity. Most artists will have no incentive to create if their work can be reappropriated by a larger group to leverage it for monetary gain, which is directly being taken from the original creator.
I’m a photographer. I’ve removed all my pictures from the internet and plan to never post more. I don’t want my work being used to train AI. Right now we have no choice in that matter, so the only option is to no longer share our work.
I’ve released tons of stuff and it’s under Creative Commons/public domain. I welcome people to share it or create derivative works.
Cool. That’s a fine stance to have and one that plenty of other people will have too. I’m fine with actual people doing it. I’m not fine with AI. The point is the artist should have a choice if they’d like to allow training.
The problem right now is we can’t control that. Everything is being used for AI training whether you want it to be or not. If I could explicitly forbid its use for AI training (in a way that would hold up in court), I’d be more willing to post them again.
Lemmy users are not an accurate representation of artists imo. This site skews extremely far left, to the point of such anti-corporate nonsense that I believe the majority of people just want to hurt anyone with more money than them as much as possible.
So the people who generate and curate that knowledge don’t deserve to be compensated? Are you going to be a full time wikipedia editor then? Or does your “greed know no bounds”?
So that explains the “problematic” responses.
This is just OpenAI covering their ass by attempting to block the most egregious and obvious outputs in legal gray areas, something they’ve been doing for a while, hence why their AI models are known to be massively censored. I wouldn’t call that ‘hiding’. It’s kind of hard to hide it was trained on copyrighted material, since that’s common knowledge, really.
What if they scraped a whole lot of the internet, and those excerpts were in random blogs and posts and quotes and memes etc. all over the place? They didn’t ingest the material directly, or knowingly.
Not knowing something is a crime doesn’t stop you from being prosecuted for committing it.
It doesn’t matter if someone else is sharing copyright works and you don’t know it and use it in ways that infringes on that copyright.
“I didn’t know that was copyrighted” is not a valid defence.
That’s why this whole argument is worthless, and why I think that, at its core, it is disingenuous. I would be willing to bet a steak dinner that a lot of these lawsuits are just fishing for money, and the rest are set up by competition trying to slow the market down because they are lagging behind. AI is an arms race, and it’s growing so fast that if you got in too late, you are just out of luck. So, companies that want in are trying to slow down the leaders, at best, and at worst they are trying to make them publish their training material so they can just copy it. AI training models should be considered IP, and should be protected as such. It’s like trying to get the Colonel’s secret recipe by saying that all the spices that were used have been used in other recipes before, so it should be fair game.
If training models are considered IP, then shouldn’t we allow other training models to view and learn from the competition? If learning from other IPs that are copyrighted is okay, why should the training models be treated differently?
I am sure they have patched it by now, but at one point I was able to get ChatGPT to give me copyrighted text from books by asking for ever larger quotations. It seemed more willing to do this with books out of print.
Yeah, it refuses to give you the first sentence from Harry Potter now.
Which is kinda lame, you can find that on thousands of webpages. Many of which the system indexed.
If someone was looking to pirate the book there are way easier ways than issuing thousands of queries to ChatGPT. Type “Harry Potter torrent” into Google and you will have them all in 30 seconds.
ChatGPT has a ton of extra query qualifiers added behind the scenes to ensure that specific outputs can’t happen
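Nobody outside OpenAI knows what those guardrails actually look like; the simplest imaginable version is just a post-generation check before the reply is returned. Purely hypothetical sketch (the blocklist, function names, and threshold are all made up for illustration):

```python
# Hypothetical post-generation guardrail; OpenAI's real system is not public,
# so this only illustrates the general idea of blocking specific outputs.
from difflib import SequenceMatcher

# Made-up blocklist of passages the operator does not want echoed verbatim.
PROTECTED_PASSAGES = [
    "mr. and mrs. dursley, of number four, privet drive",  # placeholder entry
]

def looks_like_verbatim_copy(reply: str, threshold: float = 0.8) -> bool:
    """Return True if the candidate reply overlaps heavily with a protected passage."""
    return any(
        SequenceMatcher(None, reply.lower(), passage).ratio() >= threshold
        for passage in PROTECTED_PASSAGES
    )

def guarded_response(candidate_reply: str) -> str:
    # Swap in a refusal when the model's draft looks like a near-verbatim quote.
    if looks_like_verbatim_copy(candidate_reply):
        return "Sorry, I can't reproduce that text."
    return candidate_reply
```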
They made it read Harry Potter? No wonder it’s gonna kill us all one day.
One of the first things I ever did with ChatGPT was ask it to write some Harry Potter fan fiction. It wrote a short story about Ron and Harry getting into trouble. I never said the word McGonagall, and yet she appeared in the story.
So yeah, case closed. They are full of shit.
There is enough non-copyrighted Harry Potter fan fiction out there that it would not need to be trained on the actual books to know all the characters. While I agree they are full of shit, your anecdote proves nothing.
> While I agree they are full of shit, your anecdote proves nothing.
Why? Because you say so?
He brings up a valid point; it seems transformative.
The anecdote proves nothing because the model could potentially have known of the McGonagall character without ever being trained on the books, since that character appears in a lot of fan fiction. So their point is invalid and their anecdote proves nothing.
Google’s AI search preview seems to brazenly steal text from search results. Frequently its answers are the same, word for word, as one of the snippets lower on the page.
What the article is explaining is cliff notes or snippets of a story. Isn’t that allowed in some respect? People post notes from school books all the time, and those notes show up in Google searches as well.
I totally don’t know if I’m right, but doesn’t copyright infringement involve plagiarism like copying the whole book or writing a similar story that has elements of someone else’s work?
I don’t know what’s considered fair use here. But the point is it’s taking words that aren’t theirs, which will deprive websites of traffic because then people won’t click through to the source article.
Ok, I get it now. I can definitely see both sides of the argument, and it’s not going to be easy to solve.
Copyright law needs to be updated to deal with all the new ways people and companies are using tech to access copyrighted material.
> The response from OpenAI, and the likes of Google, Meta, and Microsoft, has mostly been to stop disclosing what data their AI models are trained on.
That’s really the biggest problem, IMO. I don’t really care whether it’s trained on copyrighted material or not, but I do want it to “cite its sources”, so to speak.
Which would also be handy in understanding in what ways the model might be biased.
I thought everyone knew that OpenAI has the same access to books and knowledge that human beings have.
Yes, but it’s what it is doing with it that is the murky grey area. Anyone can read a book, but you can’t use those books for your own commercial stuff. Rowling and other writers are making the case that their works are being used in an inappropriate way commercially. Whether they have a case, I dunno (IANAL), but I could see the argument at least.
Harry Potter uses so many tropes and so much inspiration from other works that came before. How is that different? Wizards of the Coast should sue her into the ground.
Lol:
Content industry: It can reproduce our stuff
OpenAI:
Content industry: They are hiding that it can reproduce us