James Grimmelmann on Nostr:
Our paper is a careful look at what it means for a generative-AI model to “memorize” its training data. The New York Times, for example, has accused OpenAI’s GPT models of memorizing Times articles and reproducing them nearly word-for-word. We explain in detail what this kind of memorization consists of, and the implications it might have for copyright law.