- Google DeepMind launches SynthID, a tool that identifies AI-generated images, in partnership with Google Cloud.
- SynthID embeds a digital watermark in AI-generated images, revealing their origin and sparking debate over how much firms should disclose to consumers.
- Companies are divided on whether to tell customers when they encounter AI-created material, raising ethical and legal questions about transparency and copyright.
September 4th, 2023: Google DeepMind rolled out a new tool called SynthID this Tuesday, in partnership with Google Cloud. The tool can identify whether an image was generated by AI.
SynthID embeds a special mark in images produced by Imagen, Google Cloud's own text-to-image AI tool. The mark is invisible to the human eye but detectable by SynthID. June Yang of Google Cloud said it is a new capability the company is offering.
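Google has not published how SynthID's watermark actually works. As a rough analogy only, the idea of an invisible, machine-detectable mark can be sketched with classic least-significant-bit (LSB) steganography: flipping the lowest bit of a pixel value changes it by at most 1, which the eye cannot see, while a detector that knows the signature can read it back. All names and the signature below are hypothetical.

```python
# Illustrative toy only: SynthID's real method is not public.
# We hide a short bit pattern in the least-significant bits of
# the first few pixel values of a grayscale image.

WATERMARK = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical 8-bit signature

def embed(pixels, mark=WATERMARK):
    """Overwrite the lowest bit of the leading pixels with the mark."""
    out = list(pixels)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | bit  # change each value by at most 1
    return out

def detect(pixels, mark=WATERMARK):
    """Check whether the leading pixels' lowest bits match the mark."""
    return [p & 1 for p in pixels[: len(mark)]] == mark

# Example: 9 grayscale pixel intensities (0-255).
image = [200, 17, 94, 255, 0, 63, 128, 31, 77]
marked = embed(image)

print(detect(marked))  # the watermarked copy carries the signature
print(detect(image))   # the original does not
print(max(abs(a - b) for a, b in zip(image, marked)))  # imperceptible change
```

Real image watermarks are far more robust, surviving cropping, compression, and recoloring, which naive LSB marks do not; this sketch only shows why a mark can be invisible to humans yet trivially readable by the right detector.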
Google DeepMind and Google Cloud are both part of the same parent company, Alphabet. For now, SynthID works only on images, but the company says it may extend to audio, video, and text in the future.
Businesses are increasingly using AI to create ads, write press releases, and even generate new product names, but not all of them tell their customers when they do. Some argue disclosure is important; others say it does not matter as long as the information is accurate.
Choice Hotels CIO Brian Kirkland argues that what matters is the quality of the content, not who or what produced it. Hatim Rahman, a professor at Northwestern University, says a company training a model on its own data can choose whether to disclose its use of AI. But if a company uses a public model trained on public data, staying silent could create legal exposure, because the model may have been trained on copyrighted material.
Some companies take a stricter line. At Laserfiche, employees may not pass off AI-generated content as their own; they must label it as AI-made. Julia White of SAP and Lea Sonderegger of Swarovski believe companies should tell customers when they use AI, arguing that disclosure builds trust.
Scott duFour of Fleetcor expects disclosing AI use to become standard practice, much as companies already credit human authors.