Best AI Video: Pika VS Stable Video Diffusion VS Runway VS AnimateDiff

It's been a minute. Welcome back to AiTuts, the only AI newsletter that's as fast and practical as a 3-in-1 pressure cooker.

In today's email:

  • The state of AI video: Movie trailers galore
  • Closed VS Open source: How do the top AI video options compare?
  • SDXL Turbo: Generate in real time


AI Videos are popping off right now

Just look at all these trailers for movies that don’t exist.

Videos today are where AI images were just a year ago.

Midjourney V4 was released around this time last year, right alongside the Cambrian explosion of custom Stable Diffusion models.

Overnight we went from: “that’s cool but you can’t really use it for anything” to “holy cow that’s not a photograph?”

So what exactly does the AI video landscape look like right now?

What is awesome and what is not too great?

To get an idea of where we're at, let's take a look at the top AI video tools today and how they compare:


Pika Labs

Pika Labs is the Midjourney of AI video:

  • You run generations by sending prompts to the Pika Discord bot
  • Settings go inside prompts, just like Midjourney. Examples: -fps and -camera pan up
  • Very cinematic results. Most “fake movie trailers” are made with Pika Labs

Of the video apps I’ve tested, Pika is the best at people. And at motion. Which is pretty much 80% of what I want from AI video.

I don’t just want the image to “move”, I want it to move in a specific way. And Pika is good at knowing what that specific way is.

The cool part is you include the motion you want in your prompt, such as “clouds drifting slowly in the background”.
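Putting it together, a full Pika generation might look something like this in Discord (the `-fps 24` value is illustrative, not from this issue; check Pika's docs for the exact flags):

```
/create prompt: clouds drifting slowly in the background, golden hour, cinematic -fps 24 -camera pan up
```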

Just a few days ago, Pika Labs released Pika 1.0. The trailer is amazing.

I haven’t been able to get such high-quality motion out of Pika myself yet, so the trailer examples look very cherrypicked, very marketed.

Fingers are crossed though.

Try Pika Labs:


Runway Gen-2

Runway is a company that offers a bunch of AI tools, but they are most famous for Gen-2, their video generation model.

Unlike Pika Labs, where everything goes into the prompt, Runway has a more traditional user interface:

While Runway has higher overall image/video quality than Pika, I often don’t get the specific motion I want.

Runway is good at linear movements like camera motion, but character and subject movements feel slow and clunky.

People have very limited motion.

However, Runway does have very impressive video editing tools.

Motion Brush is a feature that lets you paint over a part of a video to apply motion specifically to that part.

Here's a side-by-side comparison of Pika Labs VS Runway Gen-2.


AnimateDiff for Stable Diffusion

AnimateDiff is your open source option for AI video.

Think of it as an extension of Stable Diffusion.

You use your regular ol’ Stable Diffusion checkpoint models to generate images, but you pair them with special motion models.

Voilà, your Stable Diffusion images can move!

AnimateDiff’s biggest strength is video-to-video generation.

It’s responsible for pretty much all of the AI-anime TikTok dances.

You take an existing video, and apply any style to it with Stable Diffusion models and LoRAs.
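In broad strokes, that video-to-video workflow looks something like this (pseudocode only; the actual node setup varies by UI, and the denoise value is just an illustrative starting point):

```
# AnimateDiff vid2vid, sketched out
load checkpoint       # any regular SD checkpoint, e.g. an anime-style model
load motion_module    # the AnimateDiff motion model paired with it
load loras            # optional style/character LoRAs

frames  = split_video(input_video)          # source clip -> individual frames
latents = encode(frames)                    # VAE-encode each frame
latents = add_noise(latents, denoise=0.6)   # partial noise keeps the original motion
styled  = denoise(latents, prompt, checkpoint + motion_module + loras)
output  = assemble_video(decode(styled))    # frames back into a clip
```

The key idea: because the noise is only partial, the original video's motion survives while the checkpoint and LoRAs restyle every frame.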

If you want to learn AnimateDiff, join Banodoco. It's the strongest Discord community centered around AnimateDiff, filled with creators and developers.

Get started:


Stable Video Diffusion

Stable Video Diffusion (SVD) is the first Stable Diffusion model designed to generate videos. It was released at the end of November.

You will need around 40GB of VRAM... so running it locally is out of the question for most people.

How does it perform though?

The backgrounds it creates are incredible.

People don’t move very much.

AnimateDiff for Stable Diffusion blows SVD out of the water for characters.

But that’s OK.

SVD is brand new and has a lot of catching up to do. By comparison, AnimateDiff was released back in June of this year.

All I can say about this one is: stay tuned.

Get started:


The AI Tool Report

I like trying things out for myself.

That's the problem with most of these AI newsletters.

They don't feature things I can try.

It's just another day of "oh cool, the models are getting even bigger".

That's why I like the AI Tool Report. You get it 5x a week, it's simple and practical. Every issue comes with:

📈 Trending Tools: Today’s Most Popular Tools

💡 Training: Curated ChatGPT & Midjourney Prompts


SDXL Turbo has been released. This new model generates every image in just one sampling step. That's fast enough to enable real-time prompting... which is exactly what it sounds like: the image changes as you type your prompt!

Fooocus is an image generating software that takes the best from Stable Diffusion and Midjourney. It combines the open-source aspect of Stable Diffusion with the ease-of-use of Midjourney. Highly, highly recommended.

OneTrainer is a life-changer. Model training geeks normally use Kohya SS to train custom Stable Diffusion models. OneTrainer is faster and easier to use.

More and more video optimizations and papers are coming out every day. Check out StyleCrafter for keeping consistent styles throughout videos.


Google releases Gemini Pro, a multimodal model capable of reasoning across text, images, video, audio, and code. You can try it in Bard now. The demo is very impressive, but a day later Google admitted parts of it were staged.

Claude 2.1 is out. It takes in around 150k words and is much more accurate than earlier Claude models. You can use it on the website or access it via the API.

A fun, comprehensive Chatbot app tutorial I tried last week. Build a very complete, very pretty Chatbot app in 4 hours. Learn a lot, and build a side hustle?

That's all folks!

You can reply directly to this email with feedback or comments. We read every reply.

Till next time,

Yubin & Crew

1223 Cleveland Ave #200, San Diego, CA 92103
Unsubscribe · Preferences
