- Fake AI-generated voices are becoming a new tool for spreading misinformation on TikTok.
- Experts warn that this could impact public opinion and even elections.
- TikTok and other social media platforms are working on solutions, but challenges remain.
October 14, 2023: As technology evolves, so do the tools for spreading misinformation. On TikTok, fake voices generated by artificial intelligence (AI) are becoming a growing concern.
A recent example involved a voice sounding just like former President Barack Obama defending himself against a baseless conspiracy theory. Though it sounded real, it was an AI-generated fake.
Experts say the problem is growing. Stuart A. Thompson and Sapna Maheshwari, journalists at The New York Times, reported that companies like ElevenLabs have developed new tools for creating AI voices.
Since these tools were released last year, there has been a spike in fake audio clips.
The fake voices are not limited to politics. They have been used in TikTok videos spreading many kinds of false information. Some videos have even made untrue claims about celebrities such as Oprah Winfrey.
Jack Brewster, an editor at NewsGuard, a company that monitors false information online, said the fake voices are helping TikTok accounts gain followers. Once an account has a large following, it can spread even more false information. NewsGuard found 17 TikTok accounts that were using AI-generated voices to spread lies.
TikTok says it is trying to stop this. It has already taken down some accounts and videos that broke its rules. Jamie Favazza, a spokeswoman for TikTok, said the platform has new ways to label videos that use AI.
But experts say this is not enough. David G. Rand, a professor at MIT, said that bad actors will not label their fake videos.
The problem is not just on TikTok. Fake voices and videos are also being shared on YouTube, Instagram, and Facebook. These social media platforms are also trying to find ways to stop the spread of false information.
The technology for making fake voices keeps improving. Companies like ElevenLabs are leading the way, but they are also trying to prevent misuse.
They have created a tool that can detect whether a voice was generated by AI. But the tool is not perfect, and people who want to spread lies can find ways around it.
In short, AI-generated voices are a new challenge in the fight against misinformation. Companies and experts are working on solutions, but it is not easy. As the technology improves, so do the tools for spreading lies.