I’m an AI reporter, and next year, I want to be really bored. I don’t want to hear about rising rates of AI-powered fraud, messy power struggles in boardrooms or people misusing AI software to intentionally create harmful, misleading or inflammatory images and videos.
It’s a tall order, and I know I probably won’t get my wish. There are simply too many companies developing AI, and too little oversight and regulation. But if I had to ask for one thing this holiday season, it’s this: 2025 has to be the year we get meaningful AI content labels, especially for photos and videos.
AI-generated photos and videos have come a long way, especially over the past year. But the development of AI image generators is a double-edged sword. Improvements to the models mean fewer images are marred by hallucinations or weird glitches. But those oddities, the people with 12 fingers and the objects that melt into the background, were among the few tells people could rely on to guess whether an image was created by a human or by AI. As AI generators get better and those signs disappear, it’s going to be a big problem for all of us.
Legal power struggles and ethical debates over AI images will undoubtedly continue in the coming year. But for now, AI image creation and editing services are legal and easy to use. That means AI content will keep flooding our online experiences, and identifying where an image came from will become both harder and more important than ever. There’s no one-size-fits-all magic solution. But I’m confident that widespread adoption of AI content labels would go a long way toward helping.
The complicated state of AI images
If there’s one button you can push to send any artist into a blind rage, it’s bringing up AI image generators. Powered by generative AI, these tools can create entire images from a few simple words in your prompt. I’ve used and reviewed many of them for CNET, and it still amazes me how detailed and clear the images can be. (They’re not all winners, but they can be very good.)
As my former CNET colleague Stephen Shankland succinctly put it: “AI can let you lie with images. But you don’t want an image untouched by digital processing.” Striking the balance between retouching and editing the truth is something photojournalists, editors and creatives have grappled with for years. Generative AI and AI-powered editing make it even more complicated.
Take Adobe, for example. This fall, Adobe introduced tons of new features, many of them powered by AI. Photoshop can now remove distracting wires and cables from images, and Premiere Pro users can lengthen existing video clips using generative AI. Generative Fill is one of the most popular Photoshop tools, on par with the crop tool, Adobe’s Deepa Subramaniam told me. Adobe has made it clear that generative editing is the new standard going forward. Because Adobe is the industry standard, that puts creators in a dilemma: embrace AI or be left behind.
Although Adobe promises never to train on its users’ work, one of the biggest concerns with generative AI, not every company makes that promise or even discloses how it builds its AI models. Creators who share their work online already have to deal with art theft and plagiarism, digital artist Rene Ramos told me earlier this year, noting how image-generating tools give anyone access to styles that artists have spent their lives refining.
What AI labels can do
AI labels are any kind of digital notice indicating that an image may have been created or significantly altered by AI. Some companies automatically add a digital watermark to their generations (e.g., Imagine with Meta AI), but many offer the option to remove it by upgrading to a paid tier (such as OpenAI’s DALL-E 3). Or users can simply crop the image to cut the watermark or metadata tag out.
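That cropping loophole is easy to demonstrate: metadata tags embedded in an image file don’t survive a crop-and-resave. Here’s a minimal sketch using the Pillow library; the `ai_label` tag name is my own invented stand-in for the much richer provenance formats (like C2PA manifests) that real labeling systems use.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Attach a text tag to a sample image as a stand-in for an
# AI-provenance label (real systems use richer standards,
# e.g. C2PA manifests or invisible watermarks).
meta = PngInfo()
meta.add_text("ai_label", "generated-by-ai")
img = Image.new("RGB", (64, 64), "white")
img.save("tagged.png", pnginfo=meta)

# The tag survives a plain reopen...
print(Image.open("tagged.png").text)   # contains 'ai_label'

# ...but cropping and resaving silently drops it, because the
# new file is written without the original metadata chunks.
cropped = Image.open("tagged.png").crop((0, 0, 32, 32))
cropped.save("cropped.png")
print(Image.open("cropped.png").text)  # empty dict: the tag is gone
```

This is why metadata-only labels are so fragile, and why efforts like invisible watermarking (which is baked into the pixels themselves) and visible on-image labels matter.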
A lot of good work has been done over the past year to help in this effort. Adobe’s Content Authenticity Initiative launched a new app this year called Content Credentials that lets anyone attach invisible digital signatures to their work. Creators can also use these credentials to detect and track the use of AI in their work. Adobe also has a Google Chrome extension that helps identify those credentials in content across the web.
Google has adopted a new content credential standard for images and ads in Google Search as part of the Coalition for Content Provenance and Authenticity, which Adobe co-founded. Google also added a new section for image information in Search that highlights any AI-based edits, for “greater transparency.” And Google’s program for watermarking and identifying AI content, called SynthID, took a step forward: it was released as open source for developers this year.
Social media companies are also working on AI content labels. People are more likely to encounter false or misleading images on social media than through any other channel, according to a report from Poynter’s MediaWise initiative. Meta, the parent company of Instagram and Facebook, automatically rolled out “Made with AI” labels on social posts, and the labels quickly drew criticism for mistakenly flagging human-captured photos as AI-generated. Meta later explained that the labels are applied when “industry standard AI image indicators are detected,” and the label was changed to read “AI info” to avoid implying that an image was created entirely by a computer program. Other social media platforms, like Pinterest and TikTok, have AI labels with varying degrees of success; in my experience, Pinterest is overwhelmingly inundated with AI, and TikTok’s AI labels are ubiquitous but easy to overlook.
Adam Mosseri, the head of Instagram, recently shared a series of posts on the subject, saying: “Our role as internet platforms is to label AI-generated content as best we can. But some content will inevitably slip through the cracks, and not all misrepresentations will be AI-generated, so we must also provide context about who is sharing so you can evaluate how much you want to trust their content.”
If Mosseri has any actionable advice beyond “consider the source,” which most of us are taught in high school English class, I’d love to hear it. More optimistically, his comments could point to future product developments that give people more context, like community notes on Twitter/X. Features like that, alongside AI labels, will be even more important if Meta decides to continue its experiment of adding AI-generated suggested posts to our feeds.
What we need in 2025
This is all great, but we need more. We need consistent, clear labels in every corner of the internet, not buried in a photo’s metadata but placed directly on (or above or below) the image. The clearer, the better.
There’s no easy fix for this. That kind of online infrastructure will require a lot of work and collaboration across technology companies, social platforms and perhaps government and civil society groups. But this kind of investment in distinguishing raw images from fully AI-generated ones, and everything in between, is essential. Teaching people how to spot AI content is great, but as AI improves, it will become harder even for experts like me to accurately evaluate images. So why not make it abundantly clear and give people the information they need about an image’s origins, or at least help them second-guess when they see something strange?
My concern is that this problem currently sits at the bottom of many AI companies’ to-do lists, especially as the tide seems to be turning toward developing AI video. But for my sanity and everyone else’s, 2025 has to be the year we come up with a better system for identifying and labeling AI images.