Social media has never been lauded for realism, but online deceit is reaching a fever pitch.
Since the inception of platforms like Facebook, Instagram and TikTok, aspirational content has flooded our feeds with too-perfect bodies, homes and lifestyles. Despite the supposed aim of social media to keep us connected, going online increasingly feels isolating and detached from reality. AI slop and deepfakes are making it infinitely worse.
The emergence of generative AI tools like OpenAI’s Sora, Google’s Veo and Midjourney has facilitated the creation of videos that are all at once extraordinary, imaginative and deceptive. Suddenly, a simple text prompt can conjure anything you think up. It’s a technological marvel, and, oftentimes, an ethical nightmare.
AI slop refers to the endless barrage of low-quality digital content created with artificial intelligence. You’ve likely seen examples of this on your social media feeds, from videos of animals exhibiting oddly human characteristics to pranks and gags that seem to defy the laws of physics. In addition, deepfakes of public figures, both alive and dead, saying and doing things that never actually happened have become a hot — and contentious — commodity.
It’s not just shoddy AI that’s diminishing the online experience. Filtering through more convincing fabrications of people, places and events demands more vigilance than ever, and it’s utterly exhausting.
From human connection to platform addiction
Ever since the rollout of OpenAI’s Sora app in September, I’ve wondered why anyone would want to scroll through a feed made up solely of artificial moments. Even more "traditional" social platforms like Instagram and TikTok have become inundated with AI content that seemingly serves no real purpose, beyond demonstrating the astonishingly lifelike imagery that AI tools can now conjure in an instant.
Isn’t the point of social media to stay in touch with people you know and to follow public figures that interest you? Is that foundational objective officially dead?
"The cynical answer is that social media is now aimed at keeping you connected to the tool, rather than to each other," said Alexios Mantzarlis, director of Cornell Tech’s Security, Trust and Safety Initiative.
Tech giants are "making their stock price go up" by showing off their AI capabilities, he said, but "it’s coming at the expense of the experience on these platforms."
Indeed, that heightened artificiality clashes with the authenticity many of us seek and increasingly struggle to find online. One of the reasons I fell in love with TikTok years ago was that it was easier to find genuine, unfiltered content there — a welcome relief from the overly polished influencer posts flooding my Instagram feed, though I increasingly see my fair share of that on TikTok, too.
Even before the rise of generative AI, Instagram updates from friends and family had largely been supplanted by content from high-profile creators I don’t follow. That’s not always a bad thing, as it often exposes me to topics I want to see more of. That focus on user interests is what makes TikTok’s algorithm in particular so powerful (and addictive). But it makes me feel even more disconnected from the people I actually do know — especially since everyday folks seem to be posting less these days.
Factor AI into that equation, and any semblance of authenticity fades away. Now, along with battling feelings of insecurity from seeing touched-up images of real people, you may also stumble upon vacation photos that are entirely AI-generated, or come across an AI influencer that amplifies unattainable beauty standards.
"Before, we had the problem of unrealistic body expectations," Mantzarlis said. "And now we’re facing the world of unreal body expectations."
It’s becoming harder for people to separate fact from fiction, but the dissemination of slop and deepfakes isn’t slowing down. The clock is ticking for platforms to regulate this new wave of content before it drowns out our sense of reality.
Curbing AI’s harm
Social media companies like Meta and TikTok have pledged to label AI-generated content and ban harmful posts such as fake crisis events or the use of private individuals’ likenesses without their permission. On Wednesday, TikTok said it’ll start testing a new control for people to choose how much AI-generated content they want to see in their feeds.
But in the absence of government regulation — which is lagging due to factors like political gridlock over how to regulate AI, lobbying from tech firms (Meta recently launched a super PAC to push back on AI laws) and the rapid rate at which the technology is evolving — it’s up to companies to uphold their own policies. It can admittedly be tricky for sites to flag everything given the sheer volume of content, but so far, their perfunctory efforts don’t seem very promising.
That lack of regulation can magnify distrust and discord online. An August study by Raptive, a media company that works with digital creators, found that when people merely suspected content was AI-generated, they instinctively distanced themselves from it. Specifically, 48% of respondents found the content to be less trustworthy, while 60% said they felt a lower emotional connection to it.
But with influencer-created content dominating most social platforms these days, AI can be marketed as a way to simplify a traditionally time-consuming process.
"AI tools make it easier for more people to be creators," said Paul Bannister, chief strategy officer at Raptive. "It increases the footprint of who can be a creator much faster."
Chatting with Bannister helped offset some of my skepticism; he reminded me that "like any new technology, there are always good and bad uses." Along with AI slop, he noted, there’s still human creativity behind the (oftentimes ridiculous) AI content we see online.
"There will be lots of junk and bumps in the road and problems, but maybe this can create amazing new forms of information-sharing and entertainment," Bannister said. "There’s still so much junk flowing through the system that we don’t know what that outcome is going to be."
There’s another side to what AI is capable of: "It’s going to be used for further exacerbating tensions, for confirming people’s pre-existing biases," Mantzarlis said.
Social media is already an echo chamber for reaffirming people’s harmful, close-minded viewpoints and rapidly spreading misinformation. I’m not sure we’re prepared for just how much worse that can get when everyone has the power to easily manufacture their own reality and share it with the world. The cracks in an already fractured society are only going to widen.
If anything, I’d appreciate it if more platforms took a page out of Pinterest’s book and gave us the option to dial down how much AI we see on our feeds. Though given the choice, I’d dial it all the way down to zero.