Artificial Intelligence (AI) is revolutionizing numerous sectors, but with the boon comes the bane. AI image generators are becoming more sophisticated, making the task of detecting deepfakes increasingly difficult.
This issue is causing alarm among global leaders and law enforcement agencies who are concerned about the impact of AI-generated deepfakes on social media and in conflict zones.
“We’re getting into an era where we can no longer believe what we see,” says Marko Jak, co-founder and CEO of Secta Labs. “Right now, it’s easier because the deepfakes are not that good yet, and sometimes you can see it’s obvious.”
Jak speculates that we are nearing a point—possibly within a year—where discerning a fake image at first glance will be impossible.
As the head of an AI image-generation company, Jak has a close view of how quickly the technology is improving.
The Rising Concerns about Deepfakes
A recent trend in AI-generated deepfakes has sparked outrage and concern. Deepfakes of murder victims have been appearing online, designed to evoke strong emotional reactions and gain clicks and likes.
This alarming trend emphasizes the urgency for more efficient ways to detect deepfakes.
Jak’s Austin-based startup, Secta Labs, which he co-founded in 2023, focuses on creating high-quality AI-generated images.
Secta Labs views its users as the owners of the AI models generated from their data, while the company serves as a custodian, creating images from those models.
The Call for AI Regulation
The potential misuse of advanced AI models has prompted world leaders to push for immediate action on AI regulation.
This has also led companies like Meta, creator of the generative speech AI Voicebox, to decide against releasing their most advanced tools to the public.
“It’s also necessary to strike the right balance between openness and responsibility,” a Meta spokesperson shared.
Deepfakes: A Tool for Misinformation
Earlier this month, the U.S. Federal Bureau of Investigation warned of AI deepfake extortion scams in which criminals use photos and videos taken from social media to create fake content.
In the face of the growing deepfake problem, Jak suggests that the solution may not lie solely in detecting deepfakes, but rather in exposing them.
“AI is the first way you could spot [a deepfake],” Jak said. “There are people building artificial intelligence that you can put an image into like a video, and the AI can tell you if it was generated by AI.”
Technology to Counter Deepfakes
Jak acknowledges that an “AI arms race” is emerging, with bad actors creating ever more sophisticated deepfakes to evade the technology designed to detect them.
Jak proposes that technology such as blockchain and cryptography might offer a solution to the deepfake problem by authenticating an image’s origin.
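The idea behind cryptographic provenance is that an authentic image can be fingerprinted at the moment of capture, and any later copy checked against that fingerprint. The sketch below is a minimal illustration of that principle using a plain SHA-256 content hash; the capture device, the tamper-evident log (which could be a blockchain), and the image bytes are all assumptions for illustration, not a description of any specific product Jak mentioned.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Return a SHA-256 digest that uniquely identifies this exact image."""
    return hashlib.sha256(image_bytes).hexdigest()

# At capture time, the camera or publishing platform would record the
# fingerprint in a tamper-evident log (e.g. a blockchain transaction).
original = b"...raw image bytes from the camera sensor..."  # placeholder data
recorded = fingerprint(original)

# Later, anyone can check whether a circulating copy matches the record.
untouched_copy = original
assert fingerprint(untouched_copy) == recorded  # authentic: hashes match

tampered_copy = original + b"\x00"  # even a one-byte change breaks the match
assert fingerprint(tampered_copy) != recorded
```

Note that a hash alone only proves an image matches a recorded original; it says nothing about whether the original itself was genuine, which is why provenance schemes also need trusted capture hardware or signatures.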
He also suggests a low-tech solution — harnessing the collective wisdom of internet users.
“A tweet can be misinformation just like a deepfake can be,” he said. Jak believes that social media platforms could benefit from leveraging their communities to verify whether the circulated content is genuine.
As AI advances, the battle against deepfakes continues, underlining the importance of both technological and social solutions to counter this growing issue.