People are also having a field day with Runway’s video-to-video generation, which was released on September 13. Essentially, the feature allows you to radically transform the visual style of a given video clip using text prompts.

Check out the video below for a mind-altering example of what’s possible.

Runway Gen-3 Alpha just leveled up with Video-to-Video

Now you can transform any video's style using just text prompts at amazing quality.

10 wild examples of what's possible: pic.twitter.com/onh12zCzpI

— Min Choi (@minchoi) September 15, 2024

AI enthusiasts are also producing stunning visual effects that can be displayed on Apple’s Vision Pro headset, giving us a potential hint at what developers leveraging the recently announced API will be able to accomplish.

X (formerly Twitter) user Cristóbal Valenzuela posted a brief clip to the social media site on Monday showing off the combined capabilities of Gen-3 and Apple Vision Pro.

Early experiments rendering Gen-3 on top of the Apple Vision Pro, made by @Nymarius_ pic.twitter.com/SiUNR0vX0G

— Cristóbal Valenzuela (@c_valenzuelab) September 15, 2024

The video depicts an open-plan office space with a generated overlay that makes the room appear to be deep jungle ruins. Some users remained unconvinced of the video’s veracity, but according to the post, it was generated by someone who actually works at Runway.

X user and content creator Cosmo Scharf showed off similar effects in their own post, and provided additional visual evidence to back up the claims.

Runway announced Monday the release of a new API that will enable developers to add video generation capabilities to a variety of devices and apps, though there reportedly are a few restrictions on who can actually access the API. For one, it’s only in limited release for the moment, but you can sign up for a waitlist here. You’ll also need to be either a Build or Enterprise plan subscriber. Once you are granted access, you’ll only be able to leverage the Gen-3 Alpha Turbo model iteration, which is a bit less capable than the company’s flagship Gen-3 Alpha.

The company plans to charge a penny per generation credit to use the service. For context, a single second of video costs five credits, so developers will effectively pay 5 cents per second of generated video. Devs will also be required to “prominently display” a “Powered by Runway” banner that links back to the company’s website in any interface that calls the API.
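To make the pricing concrete, here is a minimal sketch of the cost arithmetic described above. The constants reflect the reported rates (one cent per credit, five credits per second of video); the function and variable names are our own for illustration, not part of Runway's SDK.

```python
# Reported Runway API pricing: 1 cent per credit, 5 credits per second of video.
CENTS_PER_CREDIT = 1
CREDITS_PER_SECOND = 5

def generation_cost_dollars(seconds: float) -> float:
    """Estimate the cost in dollars for `seconds` of generated video."""
    cents = seconds * CREDITS_PER_SECOND * CENTS_PER_CREDIT
    return cents / 100

# A 10-second clip would cost 50 credits, i.e. $0.50:
print(generation_cost_dollars(10))
```

At these rates, a minute of generated footage works out to $3.00, which gives a rough sense of how quickly costs scale for video-heavy apps.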

While the commercial video generation space grows increasingly crowded — with Adobe’s Firefly, OpenAI’s upcoming Sora, Canva’s AI video generator, Kuaishou Technology’s Kling, and Video-01 by Minimax, to name but a handful — Runway is setting itself apart by being one of the first to offer its models as an API. Whether that will be enough to recoup the company’s exorbitant training costs and lead it to profitability remains to be seen.





