    The Only Way to Stop AI ‘Art’ in 2026 Is to Make It Uncool

A hill I’m willing to die on: I don’t consider content created entirely by an AI image or video generator “art.”

    This rule — made by me, for me — has occupied a lot of my brain space in 2025. Over the course of the year, we’ve gone from clunky, hallucination-ridden AI videos to clips that can be virtually indistinguishable from real videos. This year might seem like it’s gone on forever, but the pace at which AI video has improved over the past seven months is truly frenetic. The same is true for image generation — Google’s Nano Banana and OpenAI’s first image model are also only a few months old, as hard as that is to believe.

It’s more than just the addition of audio to videos, although that was a big leap forward this year. Veo 3 proved cinematic AI video isn’t an oxymoron, and Sora, the app and the second-generation model powering it, showed us a terrifying glimpse into a future where your likeness is fair game for every internet weirdo’s imagination. But if you can get over that queasy feeling (I still can’t), it’s also the best AI video model I’ve tested, with an undeniably impressive technical prowess that avoids the most common AI errors.

Also this year, we heard more than ever from artists, creators and copyright owners that generative AI models are being created and deployed irresponsibly. Disney and Warner Bros. filed hotly worded copyright infringement lawsuits against Google and Midjourney, calling the latter “a bottomless pit of plagiarism.” Anthropic announced a $1.5 billion settlement with the authors who accused it of piracy. And AI energy demands, which are especially high for video, have AI companies racing to build massive data centers across the US, despite concerns from local communities and environmental experts.

I spend more time than most using these generative AI tools. The companies behind them brand themselves as “democratizing creation” or “making it easier than ever to create art.” That rhetoric was dialed up this year, as big tech companies, not exactly known for their creativity or compassion for creators, tried to convince potential customers they know ball. The technical improvements we’ve seen with the new 2025 models, along with their viral popularity, mean our online lives are filling with AI at an alarming rate. Undoubtedly, nothing AI makes is art. Period.

I expect we’ll see a lot more creative AI in 2026. It feels like a tide that won’t slow down. So it’s more important than ever to make a clear distinction between AI-generated content and truly human art. It will also be more important than ever to call out so-called AI “art” for what it is: pathetic, boring and unoriginal. While I’m still hopeful we’ll get better AI labels, we need to rethink how we approach creative AI and the content (and slop) it creates as it fills our online lives.

    AI versus art

    AI-generated content is a mimicry of human art. That’s by design. These creative AI models are designed and refined using large swaths of human-generated data. For image and video models, that data includes photographs, designs and social media posts. The wider a model’s training data, the more capable it is. For example, you can ask ChatGPT to create images in the style of Studio Ghibli (which many people did in March 2025). The model knew that the film studio created a specific cartoon/anime aesthetic and was able to apply that style to its own AI images.

Because of that process, AI rarely makes something new. In one of my favorite quotes about AI from this year, film writer (and former Meta AI data trainer) Nora Garrett told reporters while promoting her movie After the Hunt, “AI is sold to us like it’s the future, but it’s a regurgitation of our collective past, remarketed as the future.”

She goes on to say, “I think that ultimately there’s always going to be a human element that people want. I don’t know that making things happen quicker, cheaper and more optimally is really conducive to the human spirit and our human collectivism.”

(My runner-up quote of the year came from Guillermo del Toro. When asked his stance on using AI, he said, “I’d rather die.”)

    I’m not saying you can’t make art from collaging pieces from the past, but creative AI models are limited in a way that human creativity isn’t. AI can’t fundamentally connect with people the way that art can. It isn’t designed to make us reflect deeply; in fact, there’s growing evidence that we stop thinking critically when using AI. Great art pushes us to be uncomfortable, shows us things we don’t want to see and connects us to our collective humanity. AI is notoriously terrible at that.

To give a seasonal example, take The Nutcracker’s pas de deux by Pyotr Ilyich Tchaikovsky. If you’ve seen the show, you may recall that it ends with a duet between the Sugar Plum Fairy and her cavalier. It’s one of the ballet’s most well-known dances, due in part to the emotional, iconic musical composition. Tchaikovsky famously wrote the 1892 ballet in a state of grief over the passing of his sister Aleksandra, and you can hear that sorrowful, melancholic influence in the music, particularly in the pas de deux. The ballet’s emotional heart is so strong that it still moves people 133 years after it was first performed. So-called AI music generators could never manage that.

Even legitimate uses for AI that don’t claim to be art come with risk. We’ve seen a surge of AI slop: images and videos that are low-quality, trashy, plasticky and seemingly pointless. It’s inescapable on social media, and the increase in creative AI models this year has made it so much worse. While this slop isn’t pretending to be art, it’s so ubiquitous online that, as my colleague Abrar Al-Heeti wrote earlier this year, social media is an antisocial wasteland.

    We can’t trust tech companies to stop AI ‘art’ or slop

Tech companies have made it clear this year that generative image and video models are now must-have components for winning the AI race. And it’s a well-funded, ultra-competitive marathon where any innovation can give a company the edge it needs to stay in business and retain its users.

Because of this, we can’t depend on AI companies to stop AI “art” or slop. Many companies have invested in ways to prevent deepfakes and other potentially illegal content, but we’ve already seen examples of how easy it is to get around each system’s rules. AI detection technology is important but not advanced enough to capture every instance of AI-generated misinformation.

If we want to stop the spread of AI “art,” we have to make it uncool.

    The only way to slow down the supply is to decrease the demand. Generative AI is so ubiquitous — and in specific uses like brainstorming or personalization, helpful — that it’s hard to imagine a complete cessation of it. But we can be more thoughtful with how we use it. AI isn’t always the appropriate tool for every project. Great creative work is so often found in the process of doing that work. Creative work is knowledge work, and replacing that intellectual and emotional work with AI only leads to slop.

We have to demand better from ourselves and from creators. This movement against AI and AI slop is already in full swing. Backlash against McDonald’s and Coca-Cola’s AI holiday ads was swift. Artists who share their work online underscore that it was made without AI, while others proclaim themselves AI haters outright.

    We can’t elevate the AI enthusiast to the level of a professional creator. And we can’t let professional creators and brands get away with feeding us AI slop instead of human-centric work. Certainly, we can’t let tech companies get away with thinking their AI slop is an unfortunate yet unavoidable consequence of innovation. We can and must do better in 2026.