HowTo WAN & HunyuanVideo with local GPU and cloud GPU: Image-to-Video and Text-to-Video
Skills:
Multimodal LLMs (90%), Prompt Craft (80%), Prompt Systems Engineering (70%), Advanced Prompting (60%), CV Basics (50%)
Learn how to set up the WAN and Hunyuan video models in ComfyUI, whether you're using a local or a cloud GPU. We'll look at technical details such as generation times, file sizes, and workflow configurations, and we'll compare the results of both models.
Videos:
ComfyUI Introduction: https://youtu.be/52YAQZ-1nOA
ComfyUI Cloud GPU: https://youtu.be/i_9OO3EmBJo
Local vs. Cloud GPUs Performance & Costs: https://youtu.be/WVPJ8CuTB00
Fast GGUF for Flux: https://youtu.be/B-Sx_XCAqzk
Fast GGUF for Stable Diffusion 3.5: https://youtu.be/xcxj4HfuUU4
Workflows on Patreon:
https://www.patreon.com/posts/125252963/
Links:
LightningAI: https://lightning.ai/
Chapters:
0:00 About WAN and Hunyuan Video
1:05 ComfyUI Setup
1:43 Hunyuan Video
6:15 Wan2.1 Video
9:50 Generation Time and File Size
12:07 Video Comparison
13:21 Conclusion and Recommendation
#ComfyUI #hunyuan #wan2 #cloudgpu