RunwayML Gen 1 vs Automatic1111 - Video creation with StableDiffusion
In this video we take a deeper dive into creating videos with Stable Diffusion, comparing RunwayML Gen 1 with Automatic1111.
First we create a short animation in Blender, then feed it into RunwayML Gen 1 and afterwards into Automatic1111 (batch img2img with ControlNet), explaining the full workflow for creating a video animation in each system. We also show how to considerably reduce flickering and temporal incoherence in Stable Diffusion videos.
Finally, we compare the results and discuss the pros and cons of each system, as well as the requirements and support, and give a final conclusion.
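The batch img2img step described above can be sketched against Automatic1111's local web API (available when the UI is started with the `--api` flag). The `/sdapi/v1/img2img` endpoint and its fields are the standard ones; the prompt, directory paths, and parameter values here are illustrative. Keeping the seed fixed and the denoising strength low across all frames is one common way to reduce flicker:

```python
import base64
import json
from pathlib import Path
from urllib import request

# Default local Automatic1111 API endpoint (requires launching with --api).
API_URL = "http://127.0.0.1:7860/sdapi/v1/img2img"

def build_payload(frame_path: str, prompt: str, seed: int = 12345) -> dict:
    """Build an img2img request for a single video frame.

    A fixed seed plus a low denoising strength keeps consecutive frames
    visually consistent, which reduces flicker in the assembled video.
    """
    image_b64 = base64.b64encode(Path(frame_path).read_bytes()).decode()
    return {
        "init_images": [image_b64],
        "prompt": prompt,
        "seed": seed,                # same seed for every frame
        "denoising_strength": 0.35,  # low strength preserves the input motion
        "steps": 20,
        "cfg_scale": 7,
    }

def stylize_frames(frames_dir: str, out_dir: str, prompt: str) -> None:
    """Send each extracted frame through img2img in filename order."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for frame in sorted(Path(frames_dir).glob("*.png")):
        payload = json.dumps(build_payload(str(frame), prompt)).encode()
        req = request.Request(API_URL, data=payload,
                              headers={"Content-Type": "application/json"})
        with request.urlopen(req) as resp:
            result = json.load(resp)
        # The API returns the generated image(s) as base64 strings.
        (Path(out_dir) / frame.name).write_bytes(
            base64.b64decode(result["images"][0]))
```

ControlNet guidance is added through the extension's own `alwayson_scripts` section of the same payload; its exact argument format depends on the installed ControlNet extension version, so it is omitted here.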
Watch on YouTube ↗
Chapters (6)
Introduction
1:16
Preparing some simple video footage in Blender
5:00
Creating a video animation with RunwayML Gen 1, using our video footage as an input
9:54
Creating a video animation with Automatic1111/batch img2img/ControlNet, using the same video footage
13:40
Improving the temporal coherence in Automatic1111 videos
14:42
Comparison, pros and cons of RunwayML Gen 1 vs Automatic1111, final conclusion