Experts believe that a new approach using metadata, watermarks, and other technical systems may help users distinguish genuine content from fakes. Metadata, the information associated with a digital image file, can provide basic details about the content, such as when and where a photo was taken. Many technology companies now support some form of tagging for AI-generated content in their products and are working to make this information more visible so users can judge the authenticity of what they see.
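To make the metadata idea concrete, the sketch below uses Python's Pillow library to dump the EXIF tags embedded in an image file. The file name is a placeholder, and real provenance schemes build on richer standards (EXIF, XMP, IPTC) plus cryptographic signing rather than this simple dump.

```python
# A minimal sketch of reading image metadata with Pillow (pip install Pillow).
# "photo.jpg" is a placeholder file name.
from PIL import Image
from PIL.ExifTags import TAGS

def print_exif(path: str) -> None:
    """Print any basic EXIF tags embedded in an image file."""
    with Image.open(path) as img:
        exif = img.getexif()
        if not exif:
            print("No EXIF metadata found.")
            return
        for tag_id, value in exif.items():
            # Map numeric tag IDs to human-readable names where known.
            name = TAGS.get(tag_id, f"Unknown({tag_id})")
            print(f"{name}: {value}")

print_exif("photo.jpg")  # e.g. DateTime, Make, Model, Software
```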
For example, at its annual Build conference, Microsoft unveiled a new feature that adds watermarks and metadata to AI-generated images and videos, designed to indicate where content came from and how it was generated. The feature will arrive in Microsoft's Designer and Image Creator apps in the coming months.
In addition, design software maker Adobe's Content Authenticity Initiative group has developed a tool called Content Credentials that tracks when an image has been edited by AI. Adobe describes it as a tag that stays with a file no matter where it is published or stored. For example, Photoshop's newest feature, Generative Fill, uses AI to quickly create new content within an existing image, and Content Credentials can track those changes.
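As an illustration of how such a tag can travel with a file, the underlying C2PA specification embeds a signed "manifest" inside standard container structures, such as a JPEG's APP11 segments. The heuristic sketch below is not Adobe's implementation: it only checks whether a file appears to carry a c2pa-labeled segment, and validating a manifest requires a full C2PA SDK. The file name is a placeholder.

```python
# Heuristic check: does a JPEG appear to carry a C2PA (Content Credentials)
# manifest? Assumes the manifest is stored as a JUMBF box in an APP11
# (0xFFEB) segment, as the C2PA specification describes for JPEG.
import struct

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":            # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break                           # lost sync with segment headers
        marker = data[i + 1]
        if marker == 0xFF:                  # fill byte; skip and resync
            i += 1
            continue
        if marker == 0xDA:                  # start of scan: no more headers
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        if marker == 0xEB and b"c2pa" in data[i + 4:i + 2 + length]:
            return True                     # APP11 segment with a c2pa label
        i += 2 + length                     # jump past marker + payload
    return False

print(has_c2pa_manifest("edited.jpg"))
```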
Andy Parsons, senior director of Adobe's Content Authenticity Initiative group, said, "It starts with knowing what it is and, where it makes sense, who made it or where it came from."
Google also announced at its I/O 2023 developer conference that, in the coming months, a written disclosure similar to a copyright notice will appear beneath AI-generated results in Google Images. Google will also embed metadata tags in image files generated by its AI systems, so that when users encounter such an image in Google Search, a label like "generated by Google AI" will appear at the bottom. The company also said it will work with services such as Midjourney and the stock photography site Shutterstock so they can tag their own AI-generated images in Google Search.
In addition, Google has added an "About this image" feature next to image search results, whether or not they carry AI tags. Users can click it to see when Google first indexed the image, where it may have first appeared, and where else it has appeared online. Danny Sullivan, public liaison for Google Search, noted, "These tools will be integrated into Google search products in the future to help people better access and understand relevant information."
Facing daunting challenges
For technology platforms, automatically identifying AI-generated content is only in its beginning stages. Until a reliable solution is found, fact-checking will have to rely on manual review to fill the gaps. For example, on May 30, Twitter announced that it would use crowdsourced fact-checking to identify false information generated by AI.
Sam Gregory, executive director of the human rights and citizen journalism organization Witness, said that while identification techniques like watermarking are promising, many fact-checkers remain concerned about the onslaught of disinformation AI could bring in the meantime. Many professional fact-checkers already have to review far more content than a human can possibly handle.
"Will a person be blamed for his inability to recognize an AI-generated image? Or will a fact checker be overwhelmed by this enormous amount of content to review?" Gregory argues that the responsibility for addressing AI disinformation "needs to fall on those who design these tools, build these models and publish them."
Gregory also believes that social media platforms' current rules on AI-generated content are not detailed enough.
TikTok revised its rules on "synthetic content" in March 2023: the platform allows AI-generated content, but if it depicts a realistic scene, the content must be clearly disclosed with a caption, sticker, or other means. TikTok said it works with outside organizations, such as the industry nonprofit Partnership on AI, to get feedback on its adherence to a framework for responsible AI practices.
"While we're excited about the creative opportunities AI offers creators, we're also firmly committed to developing protections for its safe and transparent use." In a statement, a TikTok spokesperson said, "Like most in our industry, we will continue to work with experts to monitor advances in AI-aware technology and continually adapt our approach."
Beyond TikTok, many other platforms' policies may need updating. Currently, both Meta (which owns Facebook and Instagram) and YouTube have only general rules against AI-generated content that misleads users, with no clarification of which uses are acceptable and which are not.
"AI transcends any individual, company or country and requires the collaboration of all stakeholders." Meta said in a statement, "We are actively monitoring new trends and working to adapt existing regulatory approaches."
A way to reliably identify all AI-generated content may not arrive soon. For now, these new labeling tools are still in their early stages, though they could help mitigate the risk. Technically, watermarks and metadata can be tampered with, and not all AI generation systems disclose that their output is AI-generated. Moreover, people often disregard the truth in favor of falsehoods that fit their existing beliefs.
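That fragility is easy to demonstrate. In the hedged sketch below, simply re-encoding an image with Pillow discards its EXIF tags unless the caller explicitly copies them over, so a label that lives only in metadata does not survive an ordinary re-save. File names are placeholders.

```python
# A minimal sketch of metadata fragility: Pillow's save() drops EXIF tags
# unless they are passed back in via save(..., exif=...), so an
# "AI-generated" label stored only in metadata vanishes on re-encode.
from PIL import Image

with Image.open("labeled.jpg") as img:
    print("before:", len(img.getexif()), "EXIF tags")
    img.save("stripped.jpg")               # no exif= argument: tags discarded

with Image.open("stripped.jpg") as img:
    print("after:", len(img.getexif()), "EXIF tags")  # typically 0
```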
More research is therefore needed to know whether these AI labels can change people's minds. "Sometimes, labels don't work as expected," said Joshua A. Tucker of New York University. "We need to test these solutions to see whether they change people's minds when they encounter misleading AI content, and what kind of disclosure actually has an impact."