OpenAI Implements Safety Measures, Board Can Reverse AI Decisions

By Mukund Kapoor
In Short
  • OpenAI introduces a safety framework allowing the board to override executive decisions on AI deployment.
  • Microsoft-backed OpenAI commits to releasing technology only if deemed safe in crucial areas like cybersecurity.
  • Growing public and expert concerns about AI risks underscore the importance of responsible AI development.

December 19, 2023: OpenAI has introduced a comprehensive safety framework for its cutting-edge AI models. The move is significant because it empowers the company’s board to override decisions made by executives on safety matters.

This development, announced on the OpenAI website, reflects the company’s commitment to deploying technology responsibly, especially in sensitive areas like cybersecurity and nuclear threat management.

Backed by tech giant Microsoft, OpenAI has stated that it will only release its latest innovations if they are assessed as safe in critical domains. The firm is also forming an advisory group tasked with evaluating safety reports, which will then be forwarded to OpenAI’s executives and board members for review.

While the executives are responsible for initial decisions, the board holds the authority to reverse these decisions if necessary.

This initiative comes at a time when the AI community and the public are increasingly aware of the potential risks associated with advanced AI technologies.

Since the launch of ChatGPT a year ago, there have been growing concerns about AI’s ability to disseminate false information and manipulate human behavior.

The technology’s capabilities, ranging from composing poetry to crafting essays, have been both admired and scrutinized.

Earlier this year, AI experts and industry leaders signed an open letter urging a six-month halt in the development of AI systems more advanced than OpenAI’s GPT-4.

The letter underscored widespread apprehension about AI’s impact on society.

Supporting this sentiment, a Reuters/Ipsos poll in May revealed that over two-thirds of Americans are worried about AI’s adverse effects, with 61% believing it could pose a threat to civilization.

Source: OpenAI

