You can now turn still images into AI videos with Runway Gen-3 Alpha


Despite facing controversy — as many AI model providers have lately — over its data scraping and training practices, the New York City-based startup Runway is plowing ahead with new marquee features for its realistic generative AI video platform.

Today, the company announced on its X account and Discord server that Gen-3 Alpha, the AI video model it unveiled in mid-June 2024 that generates strikingly realistic video in seconds from simple text prompts, now accepts still images as prompts as well.

To use it, the user simply navigates to Runway’s website, clicks the “Try Gen-3 Alpha” option, and is taken to a screen where they can upload an image and/or enter text prompts to guide new AI video generations that are 5 or 10 seconds in length (the user chooses). A 10-second generation costs 40 credits through Runway’s paid subscription tiers, while a 5-second generation costs 20 credits.
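That pricing works out to 4 credits per second of generated video. As a rough illustration only, here is a minimal Python sketch of that credit math; the helper function is hypothetical and not part of Runway’s actual API or documentation:

    # Hypothetical illustration of Gen-3 Alpha credit pricing, based on the
    # 20-credit/5-second and 40-credit/10-second figures cited above.
    CREDITS_PER_SECOND = 4
    SUPPORTED_DURATIONS = (5, 10)  # clip lengths currently offered, in seconds

    def generation_cost(duration_seconds: int) -> int:
        """Return the credit cost of one Gen-3 Alpha video generation."""
        if duration_seconds not in SUPPORTED_DURATIONS:
            raise ValueError(f"Choose a clip length from {SUPPORTED_DURATIONS} seconds")
        return duration_seconds * CREDITS_PER_SECOND

    print(generation_cost(5))   # 20 credits
    print(generation_cost(10))  # 40 credits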

Tests reveal Runway’s Gen-3 Alpha image-to-video update is surprisingly fast while retaining high quality


I tested the feature briefly on my personal Runway account and was blown away by both the speed (it took less than a minute to generate a video from the still image I provided) and the quality. See my still image (generated with the separate AI service Midjourney) and the video made with Runway Gen-3 Alpha embedded below.

[Still image generated with Midjourney: a man wearing an intricate brass exoskeleton, looking down]

Safety measures in place

The model automatically attempts to detect and block the creation of video from explicit still imagery or from images of well-known figures such as politicians.


The feature also briefly appeared online for some Runway subscribers over the weekend before being pulled and then restored today.

Initial examples abound

On its X account, Runway also highlighted 10 interesting videos generated from still images with the Gen-3 Alpha model.

As Runway posted in its tweet: “Image to Video is a major update that greatly improves the artistic control and consistency of your generations.”

Runway’s co-founder and CEO Cristóbal Valenzuela introduced the new feature on his personal X account with the simple phrase “it’s time,” followed by a post of his own image-to-video creation.

The video AI race is neck and neck — pending litigation

Runway is one of several companies — including OpenAI with its Sora model, Kuaishou Technology with its Kling AI model, Luma AI with its Dream Machine, and Pika with its self-titled model — vying to offer fast, high-quality generative video to users.

However, OpenAI’s Sora remains non-public for now, while the other models are accessible to the public.

But with Runway’s new still image-to-video update promising yet another way for users to make Hollywood-quality videos on the fly, it’s clear that the entire field of filmmaking and video creation is being upended like never before.

But will the courts stifle that innovation at the behest of aggrieved artists and rights holders of copyrighted material?

Runway is, after all, among several AI companies currently being sued in class action lawsuits by creators who allege that the general practice of AI companies scraping and training on publicly posted material — including copyrighted material — without express permission, authorization, compensation, or consent is a violation of copyright law. How the courts rule on this issue will no doubt greatly impact the present and future of AI video and creative tools.


