Meta’s “Made with AI” label is being updated to read “AI info” after complaints that genuine photos (some even originally shot on actual film!) were being tagged.
Creators and photographers complained that even minor edits to their work, sometimes as minor as cropping an image, caused it to be labelled as Made with AI once metadata from apps like Photoshop was attached.
The problem surfaced as Meta expanded their AI content labelling policies and the Made with AI label started to appear on real-life images that hadn’t been created with AI. Even flattening an image (which happens when saving as a .jpeg) appears to be enough for Meta to treat the image as a suspected generative AI creation.
Flattening images is so routine that just about every image on the web has been flattened. It goes hand in hand with lossy compression, a trade-off between storage size and image quality that’s essential on the web to get images to load reasonably quickly. It’s rare that you’ll find a full raw image file online – one that contains unprocessed data straight from the camera.
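That size-versus-quality trade-off is easy to see in practice. A minimal sketch (assuming the Pillow imaging library, with a synthetic gradient standing in for a photo) saves the same image at several JPEG quality settings and compares the resulting file sizes:

```python
# Sketch assuming the Pillow library (PIL). The image here is a
# synthetic gradient, a stand-in for a real photo.
from io import BytesIO
from PIL import Image

# Build a 256x256 RGB gradient image in memory.
img = Image.new("RGB", (256, 256))
img.putdata([(x % 256, y % 256, (x + y) % 256)
             for y in range(256) for x in range(256)])

# Save the same image at three JPEG quality levels and record byte sizes.
sizes = {}
for quality in (95, 75, 30):
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    sizes[quality] = buf.tell()

# Lower quality settings discard more detail and produce smaller files.
print(sizes)
```

Every save at a given quality re-encodes the pixels lossily, which is why a “flattened” JPEG no longer carries the original unprocessed camera data.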
Meta’s response is to change their label from the blunt “Made with AI” to “AI info”, which is intended merely to inform that the image may have been created or manipulated with AI. This doesn’t appear to be a final solution, but it demonstrates the difficulty of distinguishing between genuine (perhaps slightly edited) images and those that lean heavily, or entirely, on generative AI.
“As we’ve said from the beginning, we’re consistently improving our AI products, and we are working closely with our industry partners on our approach to AI labeling. The new ‘AI info’ label is intended to more accurately represent content that may have been modified using AI, rather than suggesting it was entirely generated by artificial intelligence.”
– Kate McLaughlin, Spokesperson, Meta