Runway, a pioneer in generative video, just unveiled Runway Aleph, its latest AI model, which aims to redefine how people create and edit video content.
Aleph builds on Runway’s research into General World Models and Simulation Models, giving users a conversational AI tool that can instantly make complex edits to video footage, whether generated or existing. For instance, want to remove a car from a shot? Swap out a background? Restyle an entire scene? According to Runway, Aleph lets you do all that with a simple prompt.
Read also: What Are AI Video Generators? What to Know About Google’s Veo 3, Sora and More
Unlike previous models that focused mostly on video generation from text, Aleph emphasizes "fluid editing." It can add or erase objects, tweak actions, change lighting and maintain continuity across frames, which are challenges that have historically tripped up AI video tools. The company says Aleph's local and global editing capabilities keep scenes, characters and environments consistent, so creators don't have to fix frame-by-frame glitches.
"Runway Aleph is more than a new model — it's a new way of thinking about video altogether," Runway wrote in its announcement.
The launch comes as AI video creation heats up. Big players like OpenAI, Google, Microsoft and Meta have all showcased AI video models this year. But Runway, which helped popularize AI video with its earlier Gen-1 and Gen-2 models, says Aleph pushes things further by combining high-fidelity generation with real-time, conversational editing — which could be significant for filmmakers, studios and advertisers who want faster workflows.
Runway says Aleph is already being used by major studios, ad agencies, architecture firms, gaming companies and e-commerce teams. The company is giving enterprise customers and creative partners early access now, with broader availability rolling out in the coming days.
Read also: How to Use Sora to Create an AI Video