As layoffs and studio closures have riddled the games industry in recent years, another shadow emerged in 2025: generative AI, which has made its way into the game development pipeline.
Last March, I attended the Game Developers Conference in San Francisco, California, dashing between the wings of the Moscone Center to hear how the games industry was incorporating generative AI. The technology could be applied to generate code, text or images, yet there was seemingly no consensus on what it should be used for. From panels of cautiously optimistic executives to roundtables of freelance developers concerned with securing steady employment, the conference was flooded with a range of views on AI, despite the limited evidence of its use in game development.
By the end of 2025, the issue had come to a head, grabbing the attention of gamers everywhere as developers opened up about the ways they’ve used generative AI to make games, which, as far as we know, has still been minimal. On social media, numerous unfounded accusations have been made against games for using AI-generated art and text. The technology has become a bogeyman for gamers.
When actual proof of AI in a game is revealed, the consequences can be serious. After it came to light that AI-made placeholder assets were included in the launch of JRPG Clair Obscur: Expedition 33 (even though they were swiftly patched out), the Indie Game Awards rescinded two awards for the much-lauded game. And when Swen Vincke, founder and game director of Larian Studios (Baldur’s Gate 3), announced that generative AI was being used to create concept art and placeholder text for its next game, it sparked backlash, according to the video game news and reviews site IGN.
What’s changed? Awareness, certainly. Throughout the year, AI has been like background radiation, bumming out gamers in other aspects of their lives: spreading through software, exacerbating climate issues, fueling misinformation with falsified images and spiking PC RAM prices. It makes sense that gamers would be suspicious of the use of generative AI in the games they play, especially given that the technology is often trained on datasets and art collected without the consent of creators.
Lack of transparency is also sparking concern. Companies aren’t disclosing the amount, if any, of generative AI used. It’s common practice for studios to stay quiet during game development, sometimes releasing snippets of behind-the-scenes footage on social media or YouTube to build hype. But opacity only intensifies the furor among fans if news about the use of generative AI then becomes public. Besides, there isn’t an agreed-upon standard on where to use generative AI, how much is appropriate and whether game-makers are obliged to disclose when they’ve used it.
How gen AI’s promises pitted players against studios
GDC, an annual conference that has been running since 1988, has long been a hub for discussions and sessions on AI. In the past, you’d mostly hear about topics such as computer-controlled character behavior and the use of machine learning. Some of that remains, but much of AI’s presence at GDC has moved on to generative AI.
Despite the skepticism surrounding the technology, I’ve seen ideas for what it could offer players in the future. GDC 2024 was brimming with possibilities for generative AI in gaming, and GDC 2025 took it to the next level, demonstrating prototype technology to attendees. From the moment the doors opened at the Moscone Center, it was all about promoting the current and near-future applications of generative AI in both game production and tools for players.
Xbox executives Fatima Kardar and Sonali Yadav, corporate vice president of gaming AI at Microsoft and partner group product manager, respectively, gave an overview of their plans to use Microsoft’s Copilot, an AI-powered assistant, to support Xbox gamers during play. It felt much like a pitch for other smart assistants. They proposed ways it could guide new players or provide customized advice to more experienced players, offering the example of suggesting hero choices and post-death tips in Overwatch. (This Copilot on Xbox functionality launched in beta back in September.)
They also emphasized their responsibility to players when deploying the assistant. “We want to make sure that, as AI shows up in their experiences, those experiences add value and make the gaming more powerful an experience, yet keep games at the front and center of it,” Kardar said. “It needs to make sure gamers are having more fun.”
Accessory-maker Razer also showcased its own AI-powered in-game assistant at GDC. The abundance of gaming guides online, including those on YouTube, suggests that gamers would be receptive to such guidance, even if they might initially resist it. At this point, however, there haven’t been enough titles that incorporate in-game assistance to gauge player reaction.
Instead, the wider gaming community’s exposure to generative AI in games has been discovering, after release, that the technology was used but not divulged. For example, 11 Bit Studios, which developed the sci-fi base-builder The Alters, apologized in June for not disclosing its use of AI in development (players discovered AI-generated text prompts in the released version of the game).
Embark, the studio behind extraction shooter Arc Raiders, pushed back against accusations that it used generative AI, telling PCGamesN that machine learning handled movement for the game’s multilegged robots. On the game’s Steam page, the studio says AI was used in development, but doesn’t specify the nature of the AI used, unlike the disclosure for its previous game, The Finals, which used text-to-speech tools to generate audio.
In each instance, fans reacted sourly, with bitter condemnation that studios had deliberately misled them. Some developers owned up, like 11 Bit Studios apologizing for using generative AI to hastily translate text for international versions of the game in time for its launch (saying the plan was to swap in professional translations later). Other instances seem to have been oversights, as with Sandfall Interactive admitting that the AI-generated textures in Clair Obscur: Expedition 33 were accidentally left in but then removed days after its release.
While it’s unclear how broad this sentiment is among gamers, the loudest critics consider AI-generated game elements tantamount to poisoning their experience. Aftermath journalist Luke Plunkett appropriately titled his commentary: “I’m Getting Real Tired of Not Being Able to Trust That a Video Game Doesn’t Have AI Crap in It.”
Nowhere has that new norm of AI hostility been more evident than in the immediate aftermath of The Game Awards in December, when Larian, beloved creator of Baldur’s Gate 3, released a trailer for its next RPG, Divinity 3. The reveal was well received until studio head Vincke discussed his company’s use of AI in a follow-up interview with Bloomberg. Fan backlash prompted him to release a statement to IGN clarifying that no AI-generated content would be included in the final game, which is still years away from release. In a separate post on X, Vincke explained that Larian is using generative AI to explore visual ideas and compositions before the in-house artists create the actual concept art.
What generative AI promises game developers
Within the industry itself, developers see AI as a mixed bag.
Microsoft’s talk with Xbox executives Kardar and Yadav explored other ways AI could be built into Microsoft’s developer tools (like DirectX, Visual Studio, Azure AI Services and more) to help developers create games, whether by speeding up workflows or helping log bugs faster, as well as by offering AI chat-based support.
Razer also showcased another generative AI tool, designed for game development: a quality assurance assistant that automates aspects of bug tracking and filing. When a tester plays a build of a new game and stops the session because they noticed something awry, Razer’s tool can create an automatic report that logs when and where certain bugs were encountered. Razer says this automation can reduce QA time by 50%, though it stressed that the tech was intended to be an efficiency multiplier, not a job replacer.
The corporations also envision using generative AI to ease internal processes, automate mundane tasks, and parse player and industry data for actionable insights. It’s an idea that was echoed in several talks throughout GDC, including one featuring developers from studios such as Raven Software, Sledgehammer Games, Treyarch and Activision Shanghai. The developers listed technical ways in which large language models helped them use multimodal searches to identify the right item among hundreds of thousands of assets in digital libraries, or spot and eliminate redundant tickets in task-tracking software like Jira.
Another panel of executives from several companies, including Xbox, Roblox, 2K, enterprise AI platform maker Databricks and game engine creator Unity, explored the downsides of prompting generative AI to produce code. 2K chief technical officer Nibedita Baral recounted a developer who seemingly reduced a three-day task to minutes, only for it to take another three days to correct the issues in the AI-generated output. Optimizing models is challenging, especially when it comes to ensuring that the output is ethical.
“That’s on us to reduce the bias, to have diversity. A machine cannot do it, a tool cannot do it. Humans have to invest in that to figure out the balance,” Baral said.
AI’s threat to labor and art in the games industry
While GDC opened with optimistic corporate pitches and rather pedestrian uses for generative AI in game production, concerns about the human cost bubbled up through the rest of the week.
Anyone currently seeking employment knows the significant impact generative AI has had on the job market. These days, AI services filter out many applicants before they ever reach a human’s desk. And with applicants using AI to build resumes that can survive automated filtering, the entire process has become opaque. At a roundtable on how AI is affecting hiring, games industry recruiters described using LLMs to run an additional phone screening of applicants to save time. Yet that presents another AI barrier to prospective hires, one that can’t filter for culture fit the way humans can.
A few hundred feet away, contractors were hashing out survival strategies to weather one of the worst employment periods the industry has seen. Many developers employed by studios voiced concerns about how AI might replace their work, but it was low on the list of priorities for freelancers. They were more bedeviled by the ordinary evils that plague vulnerable workers, such as getting stiffed on client payments or being pressured into performing free labor through endless revisions.
In a conversation with Dr. Jakin Vela, executive director of the International Game Developers Association, we explored the challenges facing the games industry during what could be considered one of its cyclical troughs. Yet it appears that this post-expansion course correction has been particularly grueling. Even more than the rise of generative AI, what weighs on developers is profound economic uncertainty and geopolitical strain, alongside studios cutting jobs and the decline in efforts to hire inclusively.
IGDA’s membership has varying perspectives on the new technology. “Some people are excited for the possibility to incorporate generative AI in their workflows to support their processes, but we have others in our community, especially among artists, localization professionals, QA testers and writers who are rightfully terrified that generative AI will be used by studio leadership and executives to replace them to save costs,” Vela said.
One thing Vela conceded, and which was echoed during the conference, was that generative AI is here to stay. The question is how to ethically incorporate it and identify whether language models used by AI tools were trained on stolen data. Another question is how to use AI to augment developer workflows rather than replace them.
Former EA software engineer David “Rez” Graham hosted a panel on the ethics of using AI in game development. It came with a stern warning: the increased use of gen AI in production threatens the death of art. Since any output from the technology is derivative rather than creative, normalizing its use in an artistic and experiential medium risks “losing the soul of the industry in the worst, extreme case.”
Graham noted that many artists and designers feel like nobody is listening to their concerns or taking them seriously. Generative AI represents a split in priorities between creatives (artists, designers, developers) and managers. While one could argue that AI tools with ethically sourced data have a place in empowering workers, Graham’s concern is that AI adoption will soon be mandated by individuals with solely financial motives who lack an understanding of artistic workflows.
“I think we’re sitting right now at a crossroads where we get to decide: Are we going to have the bad, dystopian ending, or are we going to have an ending where we can use these tools to uplift?” Graham said.
During GDC, games industry veterans fed up with layoffs and turmoil launched their own union, United Videogame Workers. The union aimed to unify developers across companies, with the ultimate goal of achieving a large enough membership to drive industry-wide change. The workers’ demands have included broad employment protections to resist rampant layoffs — over 25,000 employees lost their jobs over the last two years. And now, there are also concerns about AI technologies threatening those who remain employed.
Into 2026, the beat continues: AI is here to stay
For a tech reporter like me, the rest of the year in gaming wasn’t that different. I got early looks at upcoming titles at Summer Game Fest and various previews. My colleagues and I tallied up the best games of the year and attended The Game Awards to cap off 2025.
But that background radiation was always there. Multiple news stories emerged alleging that games were being made with generative AI. Fans have become increasingly wary, and studios started to respond by posting public assurances that their games weren’t made with AI. After the Indie Game Awards revoked its award to Clair Obscur: Expedition 33 and granted it to the runner-up, Blue Prince, the gaming website The Escapist put out an alarmist article claiming the latter may have used AI.
The article, which has since been corrected, prompted Blue Prince publisher Raw Fury to post on Bluesky that AI was not used in the game’s creation. The kerfuffle represents the tenuous state of gaming and fans’ suspicion about how much digital automation went into making their favorite entertainment.
That isn’t to say that gamers should expect generative AI to play a role in every game going forward, especially since the technology is still in its early stages. I chatted with The Witness and Braid creator Jonathan Blow about his upcoming game, Order of the Sinking Star, which was revealed at The Game Awards. He recounted predictions that people wouldn’t even be programming anymore by the end of 2025 — which, he told me, is patently false.
“You could certainly get something on the screen a lot faster with AI than you could before, but you still have the task of evolving that into something that people actually want to play, and past a certain point, AI can’t take you there yet,” Blow said. “The thing it leaves you with is a total mess that programmers wouldn’t really want.”
Though he acknowledged others’ concerns about AI being used in gaming, Blow said he believed that if and when generative AI improves, it’ll help people expand their creativity. He also said he doesn’t expect it to threaten jobs.
As 2026 begins, gamers have a lot to look forward to, with blockbuster games like Grand Theft Auto 6, Resident Evil: Requiem, Tomb Raider: Legacy of Atlantis, 007: First Light, Control Resonant and more titles. But they’ll enter the year with a sense of uncertainty, no longer able to trust that their games are completely made by humans.

