Adobe has announced a multi-year strategic partnership with Runway aimed at integrating generative video capabilities into Adobe’s creative applications. According to the companies, the collaboration brings together Runway’s generative video models and Adobe’s creative software portfolio to support AI-assisted video production for creators, studios and brands.
As part of the partnership, Adobe has been named Runway’s preferred API creativity partner. Adobe said this will allow it to offer customers early access to Runway’s latest generative video models through its own platforms. The first outcome of the collaboration is the limited-period availability of Runway’s Gen-4.5 model within Adobe Firefly, Adobe’s creative AI platform, alongside its availability on Runway’s own platform.
Adobe and Runway also stated that they plan to co-develop new AI-driven video capabilities that will be made available within Adobe applications, beginning with Adobe Firefly.
“As AI continues to influence video production workflows, creators are increasingly relying on integrated creative ecosystems to scale content across formats and channels,” said Ely Greenfield, Chief Technology Officer and Senior Vice President, Digital Media at Adobe. He said the combination of Runway’s generative video technology and Adobe’s professional creative tools is intended to help creators and brands address the growing demands of modern content production.
Runway described the partnership as a way to extend the reach of its generative video technology. “This collaboration places our latest video generation models inside tools that many creators already use as part of their daily workflows,” said Cristóbal Valenzuela, Co-founder and CEO of Runway. He added that the aim is to make generative video capabilities more accessible to storytellers across different creative domains.
Early access to generative video models through Firefly
According to the companies, Adobe’s role as Runway’s preferred API partner means Firefly users will receive early access to new Runway models as they are released. Beginning with Gen-4.5, Firefly users can work with the model ahead of its broader public rollout, the companies said.
Runway stated that the Gen-4.5 model is designed to improve motion consistency, prompt adherence and visual fidelity across a range of video generation modes. According to the company, the model supports the creation of complex scenes with multiple elements and enables creators to generate video from text prompts, experiment with different visual directions and motion styles, and assemble outputs within Adobe Firefly’s video editing environment.
Adobe said generated content can then be further refined using Creative Cloud applications such as Adobe Premiere Pro and After Effects, allowing professional users to integrate AI-generated assets into existing post-production workflows.
Focus on professional and creator workflows
Adobe and Runway said the partnership is focused on making generative video a more dependable component of professional creative workflows. According to the companies, they plan to work with independent filmmakers, studios, agencies, streaming platforms and enterprise brands to co-develop video capabilities that integrate directly into Adobe’s creative tools.
Adobe also highlighted its broader approach to model choice within Firefly. The company said Firefly is designed to allow creators to work with multiple generative models depending on the requirements of a project. In addition to Adobe’s own Firefly models, the platform includes partner models from companies such as Runway, Black Forest Labs, ElevenLabs, Google, Luma AI, OpenAI and Topaz Labs, as well as custom models trained on users’ own content.
Adobe reiterated its position that AI is intended to augment, rather than replace, human creativity. The company stated that content generated through Firefly is not used to train generative AI models, regardless of which underlying model is selected by the creator.