We made a cat drink a beer with Runway’s AI video generator, and it sprouted hands

In June, Runway debuted a new text-to-video synthesis model called Gen-3 Alpha. It converts written descriptions called “prompts” into HD video clips without sound. We’ve since had a chance to use it and wanted to share our results. Our tests show that careful prompting isn’t as important as matching concepts likely found in the training data, and that achieving amusing results likely requires many generations and selective cherry-picking.
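To make that concrete, here is a minimal Python sketch of the generate-many-then-cherry-pick workflow. The endpoint URL, request parameters, and response fields below are hypothetical placeholders for illustration, not Runway’s actual API; the point is simply that one prompt gets submitted repeatedly and a human reviews the results.

```python
import requests

# Hypothetical text-to-video endpoint and fields, for illustration only.
API_URL = "https://api.example.com/v1/text-to-video"
API_KEY = "YOUR_API_KEY"
PROMPT = "A photorealistic cat drinking a can of beer in a beer commercial"

# Output varies wildly between runs, so request several generations of the
# same prompt and keep only the most convincing clip.
clip_urls = []
for attempt in range(10):
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": PROMPT, "duration_seconds": 10},
        timeout=300,
    )
    resp.raise_for_status()
    clip_urls.append(resp.json()["video_url"])

# Cherry-picking happens by hand: watch each clip, keep the least-mangled one.
for url in clip_urls:
    print(url)
```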

An enduring theme of all generative AI models we’ve seen since 2022 is that they can be excellent at mixing concepts found in training data but are typically very poor at generalizing (applying learned “knowledge” to new situations the model has not explicitly been trained on). That means they can excel at stylistic and thematic novelty but struggle at fundamental structural novelty that goes beyond the training data.

What does all that mean? In the case of Runway Gen-3, a lack of generalization means you might ask for a sailing ship in a swirling cup of coffee, and provided that Gen-3’s training data includes video examples of sailing ships and swirling coffee, that’s an “easy” novel combination for the model to make fairly convincingly. But if you ask for a cat drinking a can of beer (in a beer commercial), it will generally fail because the training data likely contains few videos of photorealistic cats drinking human beverages. Instead, the model pulls from what it has learned about videos of cats and videos of beer commercials and combines them. The result is a cat with human hands pounding back a brewsky.
