AI image and video models aren’t human, but they do have distinct "personalities," according to the creators who use them. The trend is a response to the rapidly expanding generative AI industry, and it highlights how creators manage a dizzying number of choices.
Generative AI has experienced a massive surge in growth over the past few years, but it wasn’t until 2025 that AI image, video and other generative media models took center stage. Just as chatbots have redefined text generation, these creative AI models are transforming content creation and creative work, for better or worse.
Google and OpenAI have long been leaders in the AI race. Prior to this year, they were best known for their Gemini and ChatGPT chatbots. Now, Veo 3, nano banana and Sora 2 have put the tech titans firmly at the head of the pack among creative AI models. New releases from Adobe and from creative AI startups such as Runway, Pika and Luma have also bolstered the field this year.
As AI companies fight to stay competitive in a crowded market, generative media has evolved from a niche offering into a must-have. Companies are focusing on upgrading their AI models to maintain an edge and attract new users. Improvements typically mean more detailed, higher-resolution content and, for video, added sound and longer clips. Hallucinations, or errors, are fading with every model update, which is part of why it’s becoming increasingly difficult to spot AI-generated content.
Altogether, there have never been so many options for creating AI content. Choosing a model is no longer about which one will produce serviceable results; it’s about which one is the best fit for a specific project or task. As a result, each AI model now has its own personality.
Humanizing AI tools with personalities
Creators use the term "personalities" colloquially; AI models aren’t human and therefore don’t have personalities. The term really refers to a model’s ability to handle specific tasks and its reputation for excelling in particular areas, as well as to each model’s baseline style.
"Creators are humanizing these tools. They call them ‘the creative one’ or ‘the detailed one’ because they’re building actual relationships with their AI. It’s not just software anymore," said Tiffany Kyazze (@TechTiff), founder of the AI Flow Club, which teaches people how to use AI tools. "These personalities help creators build trust with their tools, work through creative blocks, and find workflow comfort."
For creators who use AI tools daily, selecting the right model has become an integral part of the creative process, much like choosing the right camera lens or paintbrush.
"Each model interprets the world differently; some lean cinematic, others more surreal or dreamlike," said Dave Clark, director and chief creative officer at Promise AI, an AI production studio. "The key for me is knowing how to take my creative vision and translate it into visual language prompts that allow me to achieve the artistry I want."
There’s a learning curve to discovering each model’s personality. Sometimes it isn’t even determined by the company that makes it; it can vary between a company’s image and video models, and between different versions of the same model. Part of that comes down to how the models are created.
"Part of what we’re learning when we train our own models is at the tail end of the training process, you can show the model a particular style, and the model will overfit to some extent, or adapt, to that style and basically gain the personality," said Alexandru Costin, vice president of generative AI at Adobe. "So we see very opinionated models that do that. Others try to be more neutral."
The training data that’s used to create and refine a model also plays a role in developing each model’s baseline style. For example, Adobe’s Firefly models were trained using licensed Adobe Stock imagery, which is why Firefly-generated content often has a stock-like appearance. (Costin said the company is working on fixing that to create more realistic outputs.)
What is each AI image and video model’s personality?
I’ve spent a lot of time with these AI models, and the creators I spoke with had ideas and experiences similar to my own when it came to each model’s personality. Here are the personalities of some of the most popular models.
- Google’s Veo 3 (video): Cinematic, natural motion, high quality
- Flux (image): Excels at realism, especially for human features
- Runway (video): Full creative studio, great for those who need hands-on control
- Sora (video): Good for ideation and exploration, and for memes on the Sora social media app
- Midjourney (image and video): The most creative of the models, best for artistic or stylized work
- Google’s nano banana (image): Best for character consistency, good for e-commerce and social media work
- Adobe Firefly Image Model 5 (image): Commercially safe results for professional work
You’ll notice distinct personalities among chatbots, too. ChatGPT is known for its affectionate, personable tone (sometimes annoyingly so), whereas Claude is a go-to research tool and Gemini is a convenient choice for Google users. However, the different personalities of AI image and video models, from styles and aesthetics to innate preferences, are much more immediately obvious.
While you can create nearly any scene with AI image and video generators, they aren’t the "everything machines" that chatbots can be. Creators who use AI creative tools for professional work often need them to deliver a specific piece of content, which makes understanding each model’s personality crucial.
Benefits of using multiple models
The idea of bouncing between AI models and programs might not seem appealing at first, but there are benefits to expanding your AI roster.
Clark and his team used a variety of AI models for a new short film he directed, My Friend, Zeph. This method, which Clark calls hybrid filmmaking, involved AI tools such as Adobe Firefly, Google’s Veo 3.1 and Luma’s Ray3, as well as Adobe’s traditional software, including Photoshop and Premiere Pro.
"By blending multiple models, you get creative range and precision, almost like having a team of specialists," said Clark. "We can visualize the world of a story much earlier, iterate faster, and make stronger creative choices before we ever step on set."
Some creators are loyal to specific AI tools and platforms and might be hesitant to branch out. This idea of AI loyalty is slightly misguided, Kyazze said. The creators getting the best results are "tool-agnostic and goal-focused."
"The real benefit of multimodel workflows is that you’re not forcing one tool to do everything. You’re leveraging each model’s actual strengths. That’s not just more efficient. It gets you better results because you’re using the right tool for each specific part of your project," said Kyazze.
Evolving personalities
The concept of AI models possessing personalities is relatively new, thanks to the recent surge in models available to creators. But these aren’t static labels; a model’s reputation and personality can change over time. As new updates roll out, a model once known for being terrible at a specific task can improve.
This trend is yet another sign that AI is playing a growing role in creative work. That’s not true for all creators, as there are many who are opposed to AI and don’t want to use it. But for those who are interested, there have never been more choices.
Assigning distinct personalities to AI image and video models is one way creators can pick the right tool and achieve better results, without wasting too much time and money on AI tools that aren’t the best fit.
While generative media models have improved a lot, they still aren’t perfect. Adapting to each model’s strengths and weaknesses is smart workflow design, Kyazze said. It’s also important to remember that AI models are just tools, Clark said.
"The human expression of the artist — our personality and creative point of view — is what truly drives the outcomes," said Clark. "It’s not about replacing the traditional process; it’s about expanding what’s possible and bringing imagination closer to the screen than ever before."
(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
