diff --git a/README.md b/README.md
index 43b5c0b..8bff14c 100644
--- a/README.md
+++ b/README.md
@@ -19,6 +19,8 @@ Bo Dai
[](https://openxlab.org.cn/apps/detail/Masbfca/AnimateDiff)
[](https://huggingface.co/spaces/guoyww/AnimateDiff)
+## Next
+An updated version with better controllability and quality is coming soon. Stay tuned.
## Features
- **[2023/09/25]** Release **MotionLoRA** and its model zoo, **enabling camera movement controls**! Please download the MotionLoRA models (**74 MB per model**, available at [Google Drive](https://drive.google.com/drive/folders/1EqLC65eR1-W-sGD0Im7fkED6c8GkiNFI?usp=sharing) / [HuggingFace](https://huggingface.co/guoyww/animatediff) / [CivitAI](https://civitai.com/models/108836/animatediff-motion-modules) ) and save them to the `models/MotionLoRA` folder. Example:
@@ -78,12 +80,23 @@ Bo Dai
- GPU Memory Optimization, ~12GB VRAM for inference
-- User Interface:
+
+
+## Quick Demo
+
+User interfaces developed by the community:
- A1111 Extension [sd-webui-animatediff](https://github.com/continue-revolution/sd-webui-animatediff) (by [@continue-revolution](https://github.com/continue-revolution))
- ComfyUI Extension [ComfyUI-AnimateDiff-Evolved](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved) (by [@Kosinkadink](https://github.com/Kosinkadink))
- - [Gradio](#gradio-demo)
- Google Colab: [Colab](https://colab.research.google.com/github/camenduru/AnimateDiff-colab/blob/main/AnimateDiff_colab.ipynb) (by [@camenduru](https://github.com/camenduru))
+We also provide a Gradio demo to make AnimateDiff easier to use. To launch the demo, run the following commands:
+```
+conda activate animatediff
+python app.py
+```
+By default, the demo will run at `localhost:7860`.
+
+
## Model Zoo
The `dev` branch is for community contributions; we would like to keep the `main` branch aligned with the original technical report :)
+## Training and Inference
+Please refer to [ANIMATEDIFF](./__assets__/docs/animatediff.md) for the detailed setup.
## Gallery
-Here we demonstrate several best results we found in our experiments.
-
-[sample animations for ToonYou, Counterfeit V3.0, Realistic Vision V2.0, majicMIX Realistic, RCNZ Cartoon, and FilmVelvia]
-
-#### Community Cases
-Here are some samples contributed by the community artists. Create a Pull Request if you would like to show your results here😚.
-
-[sample animations for character model Yoimiya (with an initial reference image, see WIP fork for the extended implementation) and character model Paimon with pose model Hold Sign]
+We collect several generated results in [GALLERY](./__assets__/docs/gallery.md).

## BibTeX
```
diff --git a/__assets__/docs/animatediff.md b/__assets__/docs/animatediff.md
new file mode 100644
index 0000000..6e1f26b
--- /dev/null
+++ b/__assets__/docs/animatediff.md
@@ -0,0 +1,112 @@
+# AnimateDiff: Training and Inference Setup
+
+## Setup for Inference
+
+### Prepare Environment
+
+***We updated our inference code with xformers and a sequential decoding trick. AnimateDiff now needs only ~12GB of VRAM for inference and runs on a single RTX 3090!***
+
+```
+git clone https://github.com/guoyww/AnimateDiff.git
+cd AnimateDiff
+
+conda env create -f environment.yaml
+conda activate animatediff
+```
+
+### Download Base T2I & Motion Module Checkpoints
+We provide two versions of our Motion Module, trained on stable-diffusion-v1-4 and finetuned on v1-5 respectively.
+We recommend trying both of them for the best results.
+```
+git lfs install
+git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 models/StableDiffusion/
+
+bash download_bashscripts/0-MotionModule.sh
+```
+You may also directly download the motion module checkpoints from [Google Drive](https://drive.google.com/drive/folders/1EqLC65eR1-W-sGD0Im7fkED6c8GkiNFI?usp=sharing) / [HuggingFace](https://huggingface.co/guoyww/animatediff) / [CivitAI](https://civitai.com/models/108836/animatediff-motion-modules), then put them in the `models/Motion_Module/` folder.
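+
+If you prefer not to use `git lfs` or the bash script, the checkpoints can also be fetched from the HuggingFace repo linked above with the `huggingface_hub` Python package. This is only a sketch: it assumes `huggingface_hub` is available in your environment, and it uses the `mm_sd_v14.ckpt` / `mm_sd_v15.ckpt` filenames that also appear in the config example later in this document.
+```
+# Sketch: download the motion module checkpoints into models/Motion_Module/.
+# Assumes the huggingface_hub package is installed (pip install huggingface_hub).
+from huggingface_hub import hf_hub_download
+
+for filename in ["mm_sd_v14.ckpt", "mm_sd_v15.ckpt"]:
+    hf_hub_download(
+        repo_id="guoyww/animatediff",      # HuggingFace repo linked above
+        filename=filename,                 # checkpoint names used in the configs below
+        local_dir="models/Motion_Module",  # target folder expected by the configs
+    )
+```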
+
+### Prepare Personalized T2I
+Here we provide inference configs for 8 demo T2I models from CivitAI.
+You may run the following bash scripts to download these checkpoints.
+```
+bash download_bashscripts/1-ToonYou.sh
+bash download_bashscripts/2-Lyriel.sh
+bash download_bashscripts/3-RcnzCartoon.sh
+bash download_bashscripts/4-MajicMix.sh
+bash download_bashscripts/5-RealisticVision.sh
+bash download_bashscripts/6-Tusun.sh
+bash download_bashscripts/7-FilmVelvia.sh
+bash download_bashscripts/8-GhibliBackground.sh
+```
+
+### Inference
+After downloading the above personalized T2I checkpoints, run the following commands to generate animations. The results will automatically be saved to the `samples/` folder.
+```
+python -m scripts.animate --config configs/prompts/1-ToonYou.yaml
+python -m scripts.animate --config configs/prompts/2-Lyriel.yaml
+python -m scripts.animate --config configs/prompts/3-RcnzCartoon.yaml
+python -m scripts.animate --config configs/prompts/4-MajicMix.yaml
+python -m scripts.animate --config configs/prompts/5-RealisticVision.yaml
+python -m scripts.animate --config configs/prompts/6-Tusun.yaml
+python -m scripts.animate --config configs/prompts/7-FilmVelvia.yaml
+python -m scripts.animate --config configs/prompts/8-GhibliBackground.yaml
+```
+
+To generate animations with a new DreamBooth/LoRA model, you may create a new config `.yaml` file in the following format:
+```
+NewModel:
+  inference_config: "[path to motion module config file]"
+
+  motion_module:
+    - "models/Motion_Module/mm_sd_v14.ckpt"
+    - "models/Motion_Module/mm_sd_v15.ckpt"
+
+  motion_module_lora_configs:
+    - path: "[path to MotionLoRA model]"
+      alpha: 1.0
+    - ...
+
+  dreambooth_path: "[path to your DreamBooth model .safetensors file]"
+  lora_model_path: "[path to your LoRA model .safetensors file, leave it as an empty string if not needed]"
+
+  steps: 25
+  guidance_scale: 7.5
+
+  prompt:
+    - "[positive prompt]"
+
+  n_prompt:
+    - "[negative prompt]"
+```
+Then run the following command:
+```
+python -m scripts.animate --config [path to the config file]
+```
+
+## Steps for Training
+
+### Dataset
+Before training, download the video files and the `.csv` annotations of [WebVid10M](https://maxbain.com/webvid-dataset/) to your local machine.
+Note that our example training script requires all the videos to be saved in a single folder. You may change this by modifying `animatediff/data/dataset.py`, as sketched below.
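+
+For instance, if your copy of WebVid10M keeps videos in per-page subfolders rather than one flat folder, a change along the following lines could work. This is a hypothetical sketch only: the `videoid` and `page_dir` column names and the helper function are illustrative assumptions, not the actual code in `animatediff/data/dataset.py`.
+```
+# Hypothetical sketch: resolve each video path from the annotation CSV row
+# instead of assuming every .mp4 file sits directly in video_folder.
+import os
+
+def resolve_video_path(video_folder, row):
+    # Flat layout (what the example training script assumes):
+    #   return os.path.join(video_folder, f"{row['videoid']}.mp4")
+    # Subfolder layout, using an assumed page_dir column from the CSV:
+    return os.path.join(video_folder, str(row["page_dir"]), f"{row['videoid']}.mp4")
+```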
+
+### Configuration
+After preparing the dataset, update the data paths in the config `.yaml` files in the `configs/training/` folder:
+```
+train_data:
+  csv_path:     [Replace with .csv Annotation File Path]
+  video_folder: [Replace with Video Folder Path]
+  sample_size:  256
+```
+Other training parameters (lr, epochs, validation settings, etc.) are also included in the config files.
+
+### Training
+To train motion modules:
+```
+torchrun --nnodes=1 --nproc_per_node=1 train.py --config configs/training/training.yaml
+```
+
+To finetune the UNet's image layers:
+```
+torchrun --nnodes=1 --nproc_per_node=1 train.py --config configs/training/image_finetune.yaml
+```
diff --git a/__assets__/docs/gallery.md b/__assets__/docs/gallery.md
new file mode 100644
index 0000000..8891dd2
--- /dev/null
+++ b/__assets__/docs/gallery.md
@@ -0,0 +1,93 @@
+# Gallery
+Here we demonstrate several of the best results we found in our experiments.
+
+[sample animations]
+Model: ToonYou
+
+[sample animations]
+Model: Counterfeit V3.0
+
+[sample animations]
+Model: Realistic Vision V2.0
+
+[sample animations]
+Model: majicMIX Realistic
+
+[sample animations]
+Model: RCNZ Cartoon
+
+[sample animations]
+Model: FilmVelvia
+
+#### Community Cases
+Here are some samples contributed by the community artists. Create a Pull Request if you would like to show your results here😚.
+
+[sample animations]
+Character Model: Yoimiya
+(with an initial reference image, see WIP fork for the extended implementation.)
+
+[sample animations]
+Character Model: Paimon; Pose Model: Hold Sign