OpenAI: “Impossible” To Create A.I. Tools Without Copyrighted Materials

By Samuel Brainard · 2 Min Read
In Short
  • OpenAI says copyrighted material is necessary for training AI models like ChatGPT.
  • The New York Times and other entities have filed lawsuits against OpenAI and Microsoft for alleged copyright infringement.
  • OpenAI emphasizes its support for independent safety analysis and red-teaming of AI systems.

OpenAI, the developer behind the innovative chatbot ChatGPT, has recently stated that access to copyrighted material is essential for creating their AI tools. This statement comes as artificial intelligence firms face increasing scrutiny over the content they use for training their products.

According to The Guardian, AI technologies, including chatbots like ChatGPT and image generators such as Stable Diffusion, rely on vast amounts of internet data, much of which is copyrighted. Last month, the New York Times filed a lawsuit against OpenAI and Microsoft, a major investor in OpenAI, alleging illegal use of their content.

In a submission to the House of Lords communications and digital select committee, OpenAI emphasized the impossibility of training large language models such as GPT-4, the technology behind ChatGPT, without copyrighted work. The company explained that since copyright covers a wide range of human expression, avoiding its use would result in inadequate AI systems.

OpenAI has defended itself against the New York Times lawsuit on its website, asserting that it supports journalism and partners with news organizations. The AI firm has previously stated its respect for content creators’ rights, leaning on the legal doctrine of “fair use” for its defense.

Aside from the New York Times, OpenAI faces several other legal complaints. Notable authors including John Grisham and George R.R. Martin have sued the company for alleged mass-scale copyright theft. Similarly, Getty Images and music publishers such as Universal Music have filed lawsuits against AI firms for copyright breaches.

In its House of Lords submission, OpenAI also highlighted its support for independent safety analysis of AI systems, advocating “red-teaming” where external researchers test AI safety by simulating malicious actors. The company is part of an agreement to work with governments on safety testing of powerful AI models, demonstrating its commitment to responsible AI development.
