Identifying AI made easier: Meta rolls out new measures for AI-generated content
By Seia Ibanez
Artificial intelligence (AI) has become a popular tool for generating content.
In today’s story, Meta, the parent company of social media platforms Facebook and Instagram, is taking steps to ensure transparency and authenticity.
The tech giant is implementing new measures to make AI-produced images more identifiable across its platforms, including Facebook, Instagram, and Threads.
Nick Clegg, President of Global Affairs at Meta, expressed his enthusiasm for the creative potential of AI tools, such as Meta's AI image generator.
‘It’s been hugely encouraging to witness the explosion of creativity from people using our new generative AI tools, like our Meta AI image generator, which helps people create pictures with simple text prompts,’ he said.
‘People are often coming across AI-generated content for the first time, and our users have told us they appreciate transparency around this new technology.’
Meta already labels photorealistic content created with its own AI feature, ‘Imagine with Meta AI’: every image generated through it is tagged ‘Imagined with AI’.
‘We want to be able to do this with content created with other companies’ tools, too,’ Clegg added.
‘That’s why we’ve been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI.’
‘Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads.’
‘We’re building this capability now, and in the coming months, we’ll start applying labels in all languages supported by each app.’
When images are created using Meta's AI feature, visible and invisible watermarks are placed on those images.
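For readers curious what a visible marker involves in practice, here is a minimal Pillow sketch that stamps a label onto an image. This is an illustration only, not Meta's actual pipeline; the label text, placement, and styling are assumptions made for the example.

```python
# A minimal illustrative sketch (not Meta's pipeline) of stamping a
# visible provenance label onto an image with Pillow.
from PIL import Image, ImageDraw

def stamp_visible_label(src: str, dst: str, text: str = "Imagined with AI") -> None:
    """Draw a small labelled badge in the image's bottom-left corner."""
    im = Image.open(src).convert("RGB")
    draw = ImageDraw.Draw(im)
    # Measure the text so the badge hugs it with a small margin.
    left, top, right, bottom = draw.textbbox((0, 0), text)
    w, h = right - left, bottom - top
    x, y = 10, im.height - h - 10
    # Dark backing rectangle keeps the white text readable on any photo.
    draw.rectangle((x - 4, y - 4, x + w + 4, y + h + 4), fill=(0, 0, 0))
    draw.text((x, y), text, fill=(255, 255, 255))
    im.save(dst)
```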
‘Since AI-generated content appears across the internet, we’ve been working with other companies in our industry to develop common standards for identifying it through forums like the Partnership on AI (PAI),’ Clegg explained.
‘The invisible markers we use for Meta AI images (IPTC metadata and invisible watermarks) are in line with PAI’s best practices.’
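To make the idea of these invisible markers concrete, here is a minimal sketch of how a service could check a file for the IPTC ‘Digital Source Type’ value that signals AI-generated media. It assumes the image still carries its original metadata, and it is an illustration rather than Meta's detection system; the URI shown is the published IPTC NewsCodes marker for ‘trained algorithmic media’.

```python
# A minimal sketch of checking a file for the IPTC marker that signals
# AI-generated media. Illustration only, not Meta's detection system.
# IPTC NewsCodes URI for fully AI-generated ('trained algorithmic') media;
# it is embedded as plain text in the XMP packet of JPEG/PNG files.
MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def declares_ai_generated(path: str) -> bool:
    """Return True if the file's embedded metadata declares the IPTC
    AI-generated digital source type. A crude byte search suffices
    because XMP is stored as readable text inside the file."""
    with open(path, "rb") as f:
        return MARKER in f.read()
```

Note that a check like this fails as soon as the metadata is stripped from the file, which is exactly the weakness Clegg acknowledges below.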
However, these signals are not yet applied at the same scale by AI tools that generate audio and video content, making such content harder to detect.
‘While the industry works towards this capability, we’re adding a feature for people to disclose when they share AI-generated video or audio so we can add a label to it,’ Clegg said.
‘We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so.’
‘If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context,’ he added.
However, invisible markers can be removed, and Meta is working on ways to identify AI-generated content even if these markers have been removed.
‘This work is especially important as this is likely to become an increasingly adversarial space in the years ahead,’ Clegg noted.
‘People and organisations that actively want to deceive people with AI-generated content will look for ways around safeguards that are put in place to detect it.’
‘Across our industry and society more generally, we’ll need to keep looking for ways to stay one step ahead.’
‘As [AI-generated content] becomes more common in the years ahead, there will be debates across society about what should and shouldn’t be done to identify both synthetic and non-synthetic content,’ he added.
‘Industry and regulators may move towards ways of authenticating content that hasn’t been created using AI as well as content that has.’
‘What we’re setting out today are the steps we think are appropriate for content shared on our platforms right now. But we’ll continue to watch and learn, and we’ll keep our approach under review as we do.’
AI has opened up a world of possibilities, from generating images from simple text prompts to creating photorealistic videos and audio—and even finding love. But technology is not without its drawbacks.
In a previous story, we reported on cybercriminals who used AI-powered bots to create fake online personas and deceive potential victims on Australian dating apps. You can read more about it here.
What are your thoughts on AI-generated content? Have you encountered any on your social media feeds? Let us know in the comments below.
Key Takeaways
- Meta is taking steps to ensure AI-generated images are labelled for transparency, with a new feature that labels such content 'Imagined with AI'.
- The company is collaborating with industry partners to establish common standards to signal when content is created by AI, allowing for consistent labelling across platforms.
- Meta is implementing visible and invisible watermarks on images created using its AI tools and aligning with best practices outlined by the Partnership on AI.
- While there are current limitations in labelling AI-generated audio and video, Meta is developing features to allow users to disclose such content, with the intention of labelling it and possibly applying penalties for non-disclosure.