- OpenAI unveils a comprehensive plan to prevent its AI tools from being used to spread false election information.
- The company will bar the use of its technology to create chatbots that impersonate real candidates or institutions and will add digital watermarks to AI-generated images.
- OpenAI is partnering with the National Association of Secretaries of State to direct users to accurate voting information.
January 18, 2024: To safeguard democracy, OpenAI, a leader in artificial intelligence, has unveiled a detailed strategy to combat the spread of false information during elections.
The San Francisco-based startup is known for advanced AI tools that can quickly produce text and images.
With elections taking place in more than 50 countries, the company has recognized the urgent need to prevent its technology from being misused.
A key part of OpenAI’s plan is to block the creation of chatbots that impersonate real political figures or government bodies. Such chatbots can be highly convincing and could spread misinformation about elections.
OpenAI is also barring the use of its technology for political campaigning and lobbying. This pause gives the company time to study how persuasive AI can be and to ensure it is not misused.
To make it easier to tell whether AI created an image, OpenAI will add a digital watermark to images produced by its DALL-E image generator.
The watermark records where the image came from, so that people, especially online, can verify whether a machine produced it.
Another important step is OpenAI’s partnership with the National Association of Secretaries of State.
Together, they will direct ChatGPT users with voting questions to CanIVote.org, a reliable, nonpartisan website, ensuring that people who ask how to vote receive accurate information.
While these measures are widely seen as a positive step against false election information, questions remain about how effective they will be.
Mekela Panditharatne of the Brennan Center for Justice stresses the importance of filters that reliably catch election-related queries and warns that some may still slip through.
OpenAI recognizes that it must monitor the issue closely and continue working on it.
CEO Sam Altman has said the company will remain vigilant to ensure the new rules are effective, acknowledging that AI-generated content is constantly evolving and will remain a challenge.