Video To Anime - Generate An EPIC Animation From Your Phone Recording By Using Stable Diffusion AI
RunPod:
Discord:
Turn your real video into an animation with just one click using Stable Diffusion, for free. If this video has been helpful to you and you would like to support my work, please consider becoming a patron 🥰
Playlist of Stable Diffusion Tutorials, Automatic1111 and Google Colab Guides, DreamBooth, Textual Inversion / Embedding, LoRA, AI Upscaling, Pix2Pix, Img2Img:
Davinci Resolve Free:
FFmpeg:
Davinci Resolve Tutorial:
Gist file where commands, scripts, settings shared:
How to install Automatic1111:
How to use custom models on Automatic1111:
How to use RunPod tutorial:
How to install DreamBooth on RunPod IO:
How to train yourself and a style tutorial:
Pre-generated classification images:
Master tutorial for DreamBooth:
DreamBooth best settings experiment:
What is ControlNet, how to install and use it:
The script used in this video:
Upscale models database:
0:00 How to turn a video into an animation in a fully automated manner for free
0:54 Introduction to Davinci Resolve free edition
1:08 Introduction to FFmpeg
1:15 Short tutorial video for Davinci Resolve
1:19 How to prepare your real video footage
1:32 How to change timeline resolution in Davinci Resolve
2:00 Image scaling of imported videos in Davinci Resolve - mismatched resolution files
2:12 Edit tab and video importing in Davinci Resolve
2:21 Where to see properties of your video in Davinci Resolve
2:36 How to crop a video into a square or any other aspect ratio with Davinci Resolve
3:17 Downside of using a distant position
3:49 How to export / render your video in Davinci Resolve with the best settings
4:30 How to export all frames of a video using FFmpeg (see the command sketch below the chapters)
5:16 Parameters for extracting all frames of a video using FFmpeg
5:30 The importance of file names for batch processing scripts
6:31 How to install and use Automatic1111 Web UI
6:57 The training dataset I prepared to train myself from the exported video frames
7:26 Why and how I used RunPod IO for training
8:00 Why I trained myself and a style into SD 1.5 for video to animation
8:47 How to train yourself and a style tutorial
9:08 How to do two-concept training with Stable Diffusion 1.5 DreamBooth
9:38 The DreamBooth settings I used to train myself and a style into SD 1.5 in this tutorial
11:37 Master tutorial for DreamBooth
12:03 What is ControlNet, how to install and use it
12:24 Settings change of ControlNet to use in video to anime process
12:52 The Multi-frame Video rendering for StableDiffusion script for consistent video to animation
13:37 How to install external scripts into the Automatic1111 Web UI
14:44 How to change the commit version of git repos, e.g. the Automatic1111 Web UI
15:57 When you are ready to start processing real video frames into anime
16:16 You don’t have to do pre-training to follow this tutorial
16:46 First step of video to animation
17:31 Importance of first generated frame
17:39 How to generate your first converted driving frame
18:59 What kind of initial frame-to-image conversion you should aim for
20:10 What is the difference if we don’t train ourselves and just use a custom model
20:48 Next step after you get the first converted frame of your anime video
21:20 The settings used in the img2img tab for batch frame processing
21:57 Settings for the Multi-frame Video rendering for StableDiffusion script
23:25 How to upscale all video to animation generated frames
23:43 How to fix the naming of batch-generated images for upscaling (see the renaming sketch below the chapters)
25:19 How to do batch AI upscaling in the Automatic1111 Web UI
26:05 How to improve faces and eyes when upscaling with AI
26:25 How to animate generated frame images
26:40 How to import images as an image sequence into Davinci Resolve for animating
27:31 Fixing clip attributes
27:39 First time playing our animated video
27:50 How to reduce the flickering problem with a very simple trick
28:30 How to move clip frame by frame in Davinci Resolve
29:05 Which composite mode to use to reduce the flickering problem in Davinci Resolve
29:40 How to apply deflickering in Davinci Resolve
30:35 How the animation made in this video could have been improved significantly
31:54 Another technique I tested - the img2img alternative test
32:33 Video to animation results of the img2img alternative test
33:15 My search for freely available deflickering tools, models, and libraries
33:43 All-In-One-Deflicker
34:55 My videos have fully corrected subtitles
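For the FFmpeg frame extraction step at 4:30, here is a minimal Python sketch of the idea; it assumes ffmpeg is installed and on your PATH, and the clip name, output folder, and 5-digit padding are illustrative placeholders rather than the exact values used in the video.

```python
# Extract every frame of a clip as zero-padded PNGs using FFmpeg.
# Assumption: "my_clip.mp4" and the "frames" folder are placeholder names.
import subprocess
from pathlib import Path

def extract_frames(video_path: str, out_dir: str = "frames") -> None:
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    # %05d produces frame_00001.png, frame_00002.png, ... so the frames
    # stay in order for the batch img2img step later.
    subprocess.run(
        ["ffmpeg", "-i", video_path, f"{out_dir}/frame_%05d.png"],
        check=True,
    )

extract_frames("my_clip.mp4")
```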
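For the renaming step at 23:43, here is a minimal Python sketch of one way to give batch-generated img2img outputs a clean, zero-padded sequence before upscaling; the folder names and the frame_00001.png pattern are assumptions, not the exact script or names used in the video.

```python
# Rename a folder of generated PNGs into a clean, zero-padded image sequence.
# Assumption: "outputs/img2img-images" and "renamed" are placeholder folders.
from pathlib import Path

def rename_sequence(src: str, dst: str = "renamed") -> None:
    out = Path(dst)
    out.mkdir(parents=True, exist_ok=True)
    # Sorting keeps the original frame order before renumbering.
    for index, file in enumerate(sorted(Path(src).glob("*.png")), start=1):
        file.rename(out / f"frame_{index:05d}.png")

rename_sequence("outputs/img2img-images")
```

Moving the renamed files into a separate folder avoids overwriting anything if an original file already uses one of the new names.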