Meta has outlined its new approach to improving transparency around detecting and disclosing AI-generated images in-stream across its apps, including Facebook, Instagram, and Threads.
By working in partnership with other providers to build industry-wide tools and standards for AI-generated content detection, Meta hopes to better inform users on the source of the content they see on their feeds.
Meta’s newly announced approach will hopefully reduce the spread of misinformation linked to AI-generated content on its platforms, but with AI’s ongoing boom, combating the issue will require working with other tech giants and lawmakers.
On the detection measures it is working on for images, Meta explains:
“We’re building industry-leading tools that can identify invisible markers at scale – specifically, the “AI generated” information in the C2PA and IPTC technical standards – so we can label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools.”
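One of the “invisible markers” the standards define is the IPTC `DigitalSourceType` property, which compliant generators embed in an image’s XMP metadata with a value indicating the content came from a generative model. As a rough, hypothetical sketch (the function and variable names here are illustrative, not Meta’s actual implementation), a detector might check that property like so:

```python
# Hypothetical sketch of an IPTC-metadata check for AI-generated images.
# The IPTC Photo Metadata standard defines a DigitalSourceType property;
# the NewsCodes URI below denotes "trained algorithmic media", i.e. content
# created by a generative AI model.
AI_GENERATED_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def is_ai_generated(xmp_properties: dict) -> bool:
    """Return True if the image's XMP metadata declares an AI source type."""
    source_type = xmp_properties.get("Iptc4xmpExt:DigitalSourceType")
    return source_type == AI_GENERATED_SOURCE_TYPE

# Example metadata as a standards-compliant generator might embed it:
sample = {"Iptc4xmpExt:DigitalSourceType": AI_GENERATED_SOURCE_TYPE}
print(is_ai_generated(sample))  # True
print(is_ai_generated({}))      # False -- no marker present
```

In practice the metadata would be extracted from the image file itself (e.g. from its XMP packet), and C2PA adds cryptographically signed provenance data on top of plain metadata fields like this one.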
As noted in Meta’s statement, AI-generated audio and video content is not yet detectable this way, hence the focus on AI-generated images.
Meta will instead add an in-app tool for users to disclose when they share AI-generated video or audio, so it can be labelled as such in-stream.
“We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so,” Meta added in its statement.