If you’ve spent any time on social media over the past few weeks, you’ve likely seen a wave of AI-generated videos racking up millions of views. Many of them are produced in Sora, ChatGPT’s sister AI tool.
Sora is a generative video model developed by OpenAI that transforms text descriptions, images or video inputs into short video clips. The tool lets you type something like "a plastic bag floating around the air, carried by the wind" and receive a matching video clip.
OpenAI first revealed Sora in early 2024 and made it available to ChatGPT Plus and Pro subscribers in December of last year. The model builds on OpenAI’s earlier text-to-image systems, such as DALL-E, but uses new architectures designed for more natural motion and visual consistency.
(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Don’t confuse OpenAI’s desktop Sora video-generation tool with the new social iOS and Android app of the same name, or with the unrelated Sora reading app. The social app runs on Sora 2, while the desktop version can use either the original model or Sora 2, depending on the region.
How Sora works
Sora is a diffusion model. It starts video creation with a screen of static noise and gradually removes that noise until shapes, textures and motion form a coherent scene that matches the text prompt. The Sora 2 model, released on Sept. 30, also supports synchronized dialogue and sound effects, while earlier versions produced only silent clips.
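To make the idea concrete, here's a toy sketch of what "start from noise and gradually remove it" means. This is an illustration only, not OpenAI's implementation: the `denoise` function, its step count and its blending rule are all invented for the example, and a real diffusion model uses a trained neural network rather than a known target.

```python
import random

# Toy illustration of the diffusion idea (NOT Sora's actual code):
# begin with pure static noise, then repeatedly nudge each value toward
# a target "scene", removing a fraction of the noise at every step.
def denoise(target, steps=10, seed=0):
    rng = random.Random(seed)
    frame = [rng.uniform(-1, 1) for _ in target]  # pure static noise
    for _ in range(steps):
        # Blend each noisy value halfway toward the target, so the
        # remaining noise shrinks at every step, as in a sampler's schedule.
        frame = [f + 0.5 * (t - f) for f, t in zip(frame, target)]
    return frame

target = [0.2, 0.8, -0.3]          # stand-in for pixel values of a scene
result = denoise(target)
print(all(abs(f - t) < 0.01 for f, t in zip(result, target)))  # prints True
```

In a real model, the "target" isn't known in advance; a neural network predicts, at each step, what noise to subtract, guided by the text prompt.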
Sora breaks images and frames into small chunks of data called patches, which help it understand motion, texture and detail across different formats and lengths. These patches function similarly to tokens in language models, which break down text into smaller units, such as words or punctuation, allowing the AI tool to process and generate output.
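The patch idea can also be sketched in a few lines. Again, this is a simplified illustration under assumed details (a tiny 4x4 grayscale frame and 2x2 patches; Sora's real patch sizes and encoding are not public):

```python
# Toy sketch of the "patches" idea (an illustration, not Sora's real code):
# split one 4x4 grayscale frame into flat 2x2 patches, the visual analogue
# of splitting a sentence into word tokens.
def to_patches(frame, size=2):
    rows, cols = len(frame), len(frame[0])
    patches = []
    for r in range(0, rows, size):
        for c in range(0, cols, size):
            patches.append([frame[r + dr][c + dc]
                            for dr in range(size) for dc in range(size)])
    return patches

frame = [[ 0,  1,  2,  3],
         [ 4,  5,  6,  7],
         [ 8,  9, 10, 11],
         [12, 13, 14, 15]]
print(to_patches(frame))
# Four flat patches; the top-left one is [0, 1, 4, 5]
```

Each flat patch plays the role a token plays in a language model: a small, uniform unit the model can process regardless of the original clip's resolution or length.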
You can upload text, still images and short video clips as starting points, and set the length between 5 and 20 seconds at resolutions from 480p to 1080p in the current public version.
Beyond understanding what the prompt describes, Sora also models how those elements behave and interact in the real world. Older models struggled to simulate such interactions. For example, a video of someone eating a cookie might omit the bite mark. Sora now simulates those cause-and-effect details more accurately. Even so, OpenAI acknowledges that Sora 2 "still makes certain mistakes," despite being "better about obeying the laws of physics compared to prior systems."
For detailed instructions on how to use Sora to create an AI video, read our guide next.
What you can do with Sora
In its effort to establish a closer relationship with professional creators, Sora has introduced features previously reserved for advanced video tools. The new storyboarding option, available to Plus and Pro users on the desktop, allows creators to outline scenes before generating videos, much like filmmakers plan shots.
Until now, most Sora clips have been short and casual. However, updates such as storyboarding, longer runtimes and higher resolutions suggest that OpenAI aims to make the platform suitable for more polished and professional work.
Some artists, like Arvida Byström, have successfully used AI imagery in imaginative ways, expanding the possibilities creatively. When the AI tool distorts a body — say, by adding an extra limb or reshaping it in strange ways — Byström treats it as part of the art rather than a mistake. She leaves room for the model’s interpretation, finding beauty in those accidents and in the unfamiliar forms that emerge from "AI misunderstanding the body."
But for most people, it’s about convenience, not artistry. Generative AI becomes a shortcut for churning out quick, trend-driven content that offers little value beyond momentary entertainment, a category commonly called AI slop.
«Best case scenario, people just ignore it,» says Nathaniel Fast, director of USC Marshall’s Neely Center for Ethical Leadership and Decision Making. «Second best case scenario, it ends up being a big distraction … at worst, it will really erode our sense of trust and our ability to understand what’s real.»
Byström echoes that concern about the challenges of differentiating what’s real and fake.
"Maybe one good thing is that we’ll finally start questioning what we see," Byström says. "The visual has always been powerful, but when it becomes so easy to fake, people might return to more trusted sources."
Availability, access and cost of Sora
OpenAI has split Sora’s accessibility into two components: a desktop web tool designed for professional use and a mobile app intended primarily for social video creation and sharing.
If you want high-quality, long-form content creation, the web interface is your best bet, as it offers advanced features like storyboarding and longer video durations.
The free Sora apps on iOS and Android started as invite-only. Since late October, people in the US, Canada, Japan and South Korea have been able to log in without a code. The company intends to expand access to additional countries.
The mobile app focuses heavily on creation, remixing and sharing short-form video clips, resembling TikTok, making it a social-first experience.
The cost to use Sora is folded into the existing ChatGPT subscription plans. If you have a free ChatGPT account, you receive a limited daily allowance of around 30 Sora generations as a teaser.
Core Sora functionality is available to ChatGPT Plus subscribers for $20 per month, granting a generous daily allowance of video generations. For professionals needing better output, the Pro subscription costs $200 per month and unlocks superior features, including higher-resolution videos, the longest durations and the ability to download creations without a watermark.
As demand for the platform skyrocketed, OpenAI introduced a pay-as-you-go option for anyone who hits their daily free limit. This lets you purchase small bundles of extra video generations for around $4 per pack of 10.
Controversies and other issues
With Sora, OpenAI transitioned from image generation to video, further extending the disruption that image models have brought to the graphics and illustration industries. Video creation, which once required large teams or specialized software, can now be done from a prompt on your phone. This could alter the economics of film, entertainment and media production, as well as the level of trust that people place in what they see.
When manipulated video spreads misinformation or impersonates public figures, it’s a problem we shouldn’t ignore. OpenAI’s Likeness Misuse filter is designed to stop you from generating videos that depict real people without consent. If someone tries to prompt Sora with a celebrity name or recognizable individual, the system either blocks the request or returns an error message.
Sora 2 also introduced a Cameo feature that lets you upload your own likeness to create an AI version of yourself and control how it’s used. You decide who can include your cameo in videos, remove access or delete clips that feature you at any time. Soon after launch, celebrity video platform Cameo filed a lawsuit against OpenAI, alleging the feature could create brand confusion and mislead the public by making it seem associated with or endorsed by the company.
Initially, Sora 2 used an opt-out policy for copyrighted characters, meaning rights holders had to request exclusion if they didn’t want their material used. However, in response to backlash, OpenAI announced it’s giving rights holders "more granular control," moving closer to an opt-in model where content creators must grant permission, rather than simply excluding content after the fact.
William Schultz, a partner at Merchant & Gould who focuses on internet law and emerging technology, tells CNET that while Sora’s safeguards are improving, they’re still imperfect. You can sometimes work around likeness filters, and the system occasionally flags harmless content. He says it ultimately "comes down to transparency and responsible use."
«Companies that are relying on AI systems to generate ads and content may not have the ability to obtain a copyright registration, which is required to enforce a copyright,» he says, adding that a potential solution could be to «add human-generated content to the output.»
Aside from legal concerns, there are also ethical ones.
«I would like to see OpenAI put out products that are aimed at serving, like either solving problems or helping us meet these aspirational goals that we have of making ourselves better. It’s hard for me to understand what Sora 2 is doing other than just trying to make money,» Fast tells CNET.
If video generation becomes widespread, the economics of creation, distribution and authenticity will change dramatically. It signals generative AI’s pivot from still images toward motion pictures. For some creators, that means new potential. For everyone else, it means new caution.
Fast says that new tools are always exciting and unlock new potential, but warns that «the overall mission is to shift the paradigm in the tech ecosystem away from a profit-first-purpose-later kind of mentality to a purpose-first AI mentality.»

