AI slop has infected every social media platform, from soulless images to bizarre videos and superficially literate text. The vast majority of US adults who use social media (94%) believe they encounter content that was created or altered by AI, but only 44% of US adults say they’re confident they can tell real photos and videos from AI-generated ones, according to an exclusive CNET survey. That’s a big problem.
People are fighting back against AI content in a number of ways. Some solutions focus on better labels for AI-created content, since it’s harder than ever to trust our eyes. Of the 2,443 respondents who use social media, half (51%) believe we need better AI labels online. Others (21%) believe there should be a total ban on AI-generated content on social media. Only a small group of respondents (11%) say they find AI content useful, informative or entertaining.
AI isn’t going anywhere, and it’s fundamentally reshaping the internet and our relationship with it. Our survey shows that we still have a long way to go to reckon with it.
Key findings
- Most US adults who use social media (94%) believe they encounter AI content on social media, yet far fewer (44%) can confidently distinguish between real and fake images and videos.
- Many US adults (72%) said they take action to determine whether an image or video is real, but some do nothing at all, a tendency most common among Boomers (36%) and Gen Xers (29%).
- Half of US adults (51%) believe AI-generated and edited content needs better labeling.
- One in five (21%) believe AI content should be prohibited on social media, with no exceptions.
US adults don’t feel they can spot AI media
Seeing is no longer believing in the age of AI. Tools like OpenAI’s Sora video generator and Google’s Nano Banana image model can create hyperrealistic media, with chatbots smoothly assembling swaths of text that sound like a real person wrote them.
So it’s understandable that a quarter (25%) of US adults say they aren’t confident in their ability to distinguish real images and videos from AI-generated ones. Older generations, including Boomers (40%) and Gen Xers (28%), are the least confident. People with little knowledge of or exposure to AI are understandably unsure of their ability to spot it accurately.
People take action to verify content in different ways
AI’s ability to mimic real life makes it even more important to verify what we’re seeing online. Nearly three in four US adults (72%) said they take some form of action to determine whether an image or video is real when it raises their suspicions, with Gen Z the most likely (84%) of the age groups to do so. The most obvious, and most popular, method is closely inspecting images and videos for visual cues or artifacts; over half of US adults (60%) do this.
But AI innovation is a double-edged sword: models have improved rapidly, eliminating the telltale errors we once relied on to spot AI-generated content. The em dash was never a reliable sign of AI, but extra fingers in images and continuity errors in videos were once prominent red flags. Newer AI models rarely make those pedestrian mistakes, so we all have to work a little harder to determine what’s real and what’s fake.
As visual indicators of AI disappear, other forms of verifying content are increasingly important. The next two most common methods are checking for labels or disclosures (30%) and searching for the content elsewhere online (25%), such as on news sites or through reverse image searches. Only 5% of respondents reported using a deepfake detection tool or website.
But 25% of US adults don’t do anything to determine whether the content they’re seeing online is real. That lack of action is highest among Boomers (36%) and Gen Xers (29%). It’s worrisome: we’ve already seen that AI is an effective tool for abuse and fraud, and understanding the origins of a post or piece of content is an important first step to navigating an internet where anything could be falsified.
Half of US adults want better AI labels
Many people are working on solutions to deal with the onslaught of AI slop, and labeling is a major area of opportunity. Labeling generally relies on social media users to disclose that a post was made with the help of AI; platforms can also label content behind the scenes, but detecting AI content automatically is difficult, which leads to haphazard results. That’s likely why 51% of US adults believe we need better labeling of AI content, including deepfakes. Support was strongest among Millennials and Gen Z, at 56% and 55%, respectively.
Other solutions aim to control the flood of AI content shared on social media. All of the major platforms allow AI-generated content as long as it doesn’t violate their general content guidelines (nothing illegal or abusive, for example). But some platforms have introduced tools to limit the amount of AI-generated content you see in your feeds; Pinterest rolled out its filters last year, while TikTok is still testing its own. The idea is to let each person choose whether AI-generated content appears in their feed.
But 21% of respondents believe AI content should be prohibited on social media altogether, no exceptions allowed. That number is highest among Gen Z, at 25%. Another 36% said AI content should be allowed but strictly regulated. Those responses track with how little value people see in AI content: only 11% say it provides meaningful value (that it’s entertaining, informative or useful), while 28% say it provides little to no value.
How to limit AI content and spot potential deepfakes
Your best defense against being fooled by AI is to be eagle-eyed and trust your gut. If something is too weird, too shiny or too good to be true, it probably is. But there are other steps you can take, like using a deepfake detection tool. There are many options; I recommend starting with the Content Authenticity Initiative’s tool, since it works with several different file types.
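If you’re comfortable with a little code, you can also do a quick provenance check on a downloaded file yourself. The minimal Python sketch below only scans a file’s raw bytes for the "c2pa" marker that Content Credentials (C2PA) manifests typically carry; it’s a rough presence check under that assumption, not a cryptographic verification, and photo.jpg is a hypothetical file name.

```python
# Rough heuristic: look for an embedded C2PA (Content Credentials)
# manifest. C2PA provenance data is typically stored in JUMBF boxes
# whose labels include the ASCII string "c2pa", so a plain byte search
# usually reveals whether a manifest is present at all. This does NOT
# validate signatures; use a full C2PA verifier for that.

from pathlib import Path

def has_c2pa_marker(path: str, chunk_size: int = 1 << 20) -> bool:
    """Return True if the file's raw bytes contain a 'c2pa' marker."""
    needle = b"c2pa"
    tail = b""  # carry bytes over so a marker split across chunks is caught
    with Path(path).open("rb") as f:
        while chunk := f.read(chunk_size):
            if needle in tail + chunk:
                return True
            tail = chunk[-(len(needle) - 1):]
    return False

if __name__ == "__main__":
    # "photo.jpg" is a hypothetical example file.
    if has_c2pa_marker("photo.jpg"):
        print("C2PA metadata found; inspect it with a proper verifier.")
    else:
        print("No C2PA marker; absence proves nothing, metadata is often stripped.")
```

Keep the limits in mind: most platforms strip metadata on upload, so a missing marker proves nothing, and a present marker only means provenance data exists, not that the content is or isn’t AI-generated.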
You can also check out the account that shared the post for red flags. AI slop often comes from accounts that churn it out in bulk, which is usually obvious from their feeds: they’ll be full of weird videos with no continuity or connection between them. It’s also worth checking whether anyone you know follows the account, and whether the account follows anyone at all; one that follows no one is a red flag. Spam posts or scammy links are also signs that the account isn’t legit.
If you want to limit the AI content you see in your social feeds, check out our guides for turning off or muting Meta AI in Instagram and Facebook and filtering out AI posts on Pinterest. If you do encounter slop, you can mark the post as something you’re not interested in, which should indicate to the algorithm that you don’t want to see more like it. Outside of social media, you can disable Apple Intelligence, the AI in Pixel and Galaxy phones and Gemini in Google Search, Gmail and Docs.
Even if you do all this and still occasionally get fooled by AI, don’t feel too bad about it. There’s only so much we can do as individuals to fight the gushing tide of AI slop, and we’re all likely to get it wrong sometimes. Until we have a universal system that effectively detects AI, we have to rely on the tools we have and on educating each other about what works now.
Methodology
CNET commissioned YouGov Plc to conduct the survey. All figures, unless otherwise stated, are from YouGov Plc. The total sample size was 2,530 adults, of whom 2,443 use social media. Fieldwork was undertaken Feb. 3-5, 2026. The survey was carried out online. The figures have been weighted and are representative of all US adults (aged 18 and over).

