Yes, but there’s a threshold of how much you need to copy before it’s an IP violation.
Copying a single word is usually only enough if it’s a neologism.
Two matching words in a row usually isn’t enough either.
At some point it is enough, though, and it’s not clear where that point lies.
On the other hand, a work can still be considered an IP violation even with no exact word matches, if it seems sufficiently similar overall.
Until now, we’ve basically asked courts to step in and decide where the line should be on a case-by-case basis.
We never set the level of allowable copying to zero; we set it to “reasonable.” In theory it’s supposed to be at a level sufficient to “promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries” (US Constitution, Article I, Section 8, Clause 8).
So why, with AI, do we take the extreme position that a model making use of any information from humans should automatically be considered in violation of IP law?
Is that intended as a legal or moral position?
As far as I know, the law doesn’t care much whether you make money off of IP violations. There are many cases of individuals getting hefty fines for both personal use and free distribution of IP. I believe that if there is commercial use of IP, the profits are forfeit to the IP holder. I’m not a lawyer, though, so don’t bank on that.
There’s still the initial question, too. At present, we let the courts decide whether a given use, profitable or not, meets the standard of IP violation. Artists routinely take inspiration from one another, and sometimes they take it too far. Why should we assume that AI automatically takes it too far and always meets that standard?