- OpenAI says copyrighted material is necessary for training AI models such as ChatGPT.
- The New York Times and other entities have filed lawsuits against OpenAI and Microsoft for alleged copyright infringement.
- OpenAI emphasizes its support for independent safety analysis and red-teaming of AI systems.
OpenAI, the developer behind the chatbot ChatGPT, has recently stated that access to copyrighted material is essential for creating its AI tools. The statement comes as artificial intelligence firms face increasing scrutiny over the content they use to train their products.
According to The Guardian, AI technologies, including chatbots like ChatGPT and image generators such as Stable Diffusion, rely on vast amounts of internet data, much of which is copyrighted. Last month, the New York Times filed a lawsuit against OpenAI and Microsoft, a major investor in OpenAI, alleging illegal use of their content.
In a submission to the House of Lords communications and digital select committee, OpenAI argued that it would be impossible to train large language models such as GPT-4, the technology behind ChatGPT, without copyrighted work. Because copyright covers nearly every form of human expression, the company said, avoiding its use would result in inadequate AI systems.
OpenAI has defended itself against the New York Times lawsuit on its website, asserting that it supports journalism and partners with news organizations. The firm has previously stated its respect for content creators’ rights, relying on the legal doctrine of “fair use” in its defense.
Aside from the New York Times, OpenAI faces several other legal complaints. Notable authors including John Grisham and George RR Martin have sued the company for alleged mass-scale copyright theft. Similarly, Getty Images and music publishers such as Universal Music have filed lawsuits against AI firms over alleged copyright breaches.
In its House of Lords submission, OpenAI also highlighted its support for independent safety analysis of AI systems, advocating “red-teaming,” in which external researchers test AI safety by simulating malicious actors. The company has also signed an agreement to work with governments on safety testing of its most powerful AI models.