# AnimateDiff
This repository is the official implementation of [AnimateDiff](https://arxiv.org/abs/2307.04725).
**[AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning](https://arxiv.org/abs/2307.04725)**
</br>
Yuwei Guo,
Ceyuan Yang*,
Anyi Rao,
Yaohui Wang,
Yu Qiao,
Dahua Lin,
Bo Dai
<p style="font-size: 0.8em; margin-top: -1em">*Corresponding Author</p>
[arXiv Report](https://arxiv.org/abs/2307.04725) | [Project Page](https://animatediff.github.io/)
## Todo
- [x] Code Release
- [x] arXiv Report
- [x] GPU Memory Optimization
- [ ] Gradio Interface
## Common Issues
<details>
<summary>Installation</summary>
Please make sure [xformers](https://github.com/facebookresearch/xformers) is installed; it is used to reduce inference memory.
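
If it is not already present in your environment, a typical way to add it is via pip (an assumption about your setup; pin a build that matches your PyTorch/CUDA version):

```
# install xformers into the animatediff conda environment (see Setup for Inference below)
pip install xformers
```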
</details>
<details>
<summary>Various resolution or number of frames</summary>
Currently, we recommend generating animations with 16 frames at a resolution of 512×512, which matches our training settings. Other resolutions or frame counts may affect the quality to some extent.
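
If you do want to experiment with other settings, the following is a sketch assuming the `--L` (number of frames), `--W`, and `--H` (resolution) arguments of `scripts/animate.py`; please check the script for the exact argument names:

```
# hypothetical invocation: 16 frames at 512x512, matching the training setting
python -m scripts.animate --config configs/prompts/1-ToonYou.yaml --L 16 --W 512 --H 512
```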
</details>
<details>
<summary>Animating a given image</summary>
We agree that animating a given image is an appealing feature, and we will try to support it officially in the future. For now, you may check out the community effort in the [talesofai fork](https://github.com/talesofai/AnimateDiff).
</details>
<details>
<summary>Contributions from community</summary>
Contributions are always welcome! We will create another branch to which the community can contribute. As for the main branch, we would like to keep it aligned with the original technical report :)
</details>
## Setup for Inference
### Prepare Environment
~~Our approach requires around 60 GB of GPU memory for inference. An NVIDIA A100 is recommended.~~
***We updated our inference code with xformers and a sequential decoding trick. AnimateDiff now takes only ~12 GB of VRAM for inference and runs on a single RTX 3090!***
```
git clone https://github.com/guoyww/AnimateDiff.git
cd AnimateDiff
conda env create -f environment.yaml
conda activate animatediff
```
### Download Base T2I & Motion Module Checkpoints
We provide two versions of our Motion Module, trained on Stable Diffusion v1.4 and finetuned on v1.5, respectively.
We recommend trying both of them for the best results.
```
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 models/StableDiffusion/
bash download_bashscripts/0-MotionModule.sh
```
You may also directly download the motion module checkpoints from [Google Drive](https://drive.google.com/drive/folders/1EqLC65eR1-W-sGD0Im7fkED6c8GkiNFI?usp=sharing) / [HuggingFace](https://huggingface.co/guoyww/animatediff) / [CivitAI](https://civitai.com/models/108836), then put them in the `models/Motion_Module/` folder.
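
For example, to fetch them from HuggingFace on the command line (a sketch assuming the checkpoint file names on the `guoyww/animatediff` repo match the `mm_sd_v14.ckpt` / `mm_sd_v15.ckpt` names used in the configs below):

```
mkdir -p models/Motion_Module
wget -P models/Motion_Module/ https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v14.ckpt
wget -P models/Motion_Module/ https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15.ckpt
```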
### Prepare Personalized T2I
Here we provide inference configs for the 8 demo T2I models/LoRAs from CivitAI listed below.
You may run the following bash scripts to download their checkpoints.
```
bash download_bashscripts/1-ToonYou.sh
bash download_bashscripts/2-Lyriel.sh
bash download_bashscripts/3-RcnzCartoon.sh
bash download_bashscripts/4-MajicMix.sh
bash download_bashscripts/5-RealisticVision.sh
bash download_bashscripts/6-Tusun.sh
bash download_bashscripts/7-FilmVelvia.sh
bash download_bashscripts/8-GhibliBackground.sh
```
### Inference
After downloading the personalized T2I checkpoints above, run the following commands to generate animations. The results will automatically be saved to the `samples/` folder.
```
python -m scripts.animate --config configs/prompts/1-ToonYou.yaml
python -m scripts.animate --config configs/prompts/2-Lyriel.yaml
python -m scripts.animate --config configs/prompts/3-RcnzCartoon.yaml
python -m scripts.animate --config configs/prompts/4-MajicMix.yaml
python -m scripts.animate --config configs/prompts/5-RealisticVision.yaml
python -m scripts.animate --config configs/prompts/6-Tusun.yaml
python -m scripts.animate --config configs/prompts/7-FilmVelvia.yaml
python -m scripts.animate --config configs/prompts/8-GhibliBackground.yaml
```
To generate animations with a new DreamBooth/LoRA model, you may create a new config `.yaml` file in the following format:
```
NewModel:
  path: "[path to your DreamBooth/LoRA model .safetensors file]"
  base: "[path to LoRA base model .safetensors file; leave it as an empty string if not needed]"

  motion_module:
    - "models/Motion_Module/mm_sd_v14.ckpt"
    - "models/Motion_Module/mm_sd_v15.ckpt"

  steps:          25
  guidance_scale: 7.5

  prompt:
    - "[positive prompt]"

  n_prompt:
    - "[negative prompt]"
```
Then run the following command:
```
python -m scripts.animate --config [path to the config file]
```
## Gradio Demo
We have created a Gradio demo to make AnimateDiff easier to use. To launch the demo, please run the following commands:
```
conda activate animatediff
python app.py
```
By default, the demo will run at `localhost:7860`.
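
If port 7860 is already in use, Gradio usually honors the `GRADIO_SERVER_PORT` environment variable (an assumption about the Gradio version in your environment), e.g.:

```
GRADIO_SERVER_PORT=7861 python app.py
```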
<br><img src="__assets__/figs/gradio.jpg" style="width: 50em; margin-top: 1em">
## Gallery
Here we demonstrate some of the best results we obtained in our experiments.
<table class="center">
<tr>
<td><img src="__assets__/animations/model_01/01.gif"></td>
<td><img src="__assets__/animations/model_01/02.gif"></td>
<td><img src="__assets__/animations/model_01/03.gif"></td>
<td><img src="__assets__/animations/model_01/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/30240/toonyou">ToonYou</a></p>
<table>
<tr>
<td><img src="__assets__/animations/model_02/01.gif"></td>
<td><img src="__assets__/animations/model_02/02.gif"></td>
<td><img src="__assets__/animations/model_02/03.gif"></td>
<td><img src="__assets__/animations/model_02/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/4468/counterfeit-v30">Counterfeit V3.0</a></p>
<table>
<tr>
<td><img src="__assets__/animations/model_03/01.gif"></td>
<td><img src="__assets__/animations/model_03/02.gif"></td>
<td><img src="__assets__/animations/model_03/03.gif"></td>
<td><img src="__assets__/animations/model_03/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/4201/realistic-vision-v20">Realistic Vision V2.0</a></p>
<table>
<tr>
<td><img src="__assets__/animations/model_04/01.gif"></td>
<td><img src="__assets__/animations/model_04/02.gif"></td>
<td><img src="__assets__/animations/model_04/03.gif"></td>
<td><img src="__assets__/animations/model_04/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/43331/majicmix-realistic">majicMIX Realistic</a></p>
<table>
<tr>
<td><img src="__assets__/animations/model_05/01.gif"></td>
<td><img src="__assets__/animations/model_05/02.gif"></td>
<td><img src="__assets__/animations/model_05/03.gif"></td>
<td><img src="__assets__/animations/model_05/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/66347/rcnz-cartoon-3d">RCNZ Cartoon</a></p>
<table>
<tr>
<td><img src="__assets__/animations/model_06/01.gif"></td>
<td><img src="__assets__/animations/model_06/02.gif"></td>
<td><img src="__assets__/animations/model_06/03.gif"></td>
<td><img src="__assets__/animations/model_06/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/33208/filmgirl-film-grain-lora-and-loha">FilmVelvia</a></p>
#### Community Cases
Here are some samples contributed by community artists. Create a pull request if you would like to show your results here 😚.
<table>
<tr>
<td><img src="__assets__/animations/model_07/init.jpg"></td>
<td><img src="__assets__/animations/model_07/01.gif"></td>
<td><img src="__assets__/animations/model_07/02.gif"></td>
<td><img src="__assets__/animations/model_07/03.gif"></td>
<td><img src="__assets__/animations/model_07/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">
Character Model: <a href="https://civitai.com/models/13237/genshen-impact-yoimiya">Yoimiya</a>
(with an initial reference image; see the <a href="https://github.com/talesofai/AnimateDiff">WIP fork</a> for the extended implementation.)</p>
<table>
<tr>
<td><img src="__assets__/animations/model_08/01.gif"></td>
<td><img src="__assets__/animations/model_08/02.gif"></td>
<td><img src="__assets__/animations/model_08/03.gif"></td>
<td><img src="__assets__/animations/model_08/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">
Character Model: <a href="https://civitai.com/models/9850/paimon-genshin-impact">Paimon</a>;
Pose Model: <a href="https://civitai.com/models/107295/or-holdingsign">Hold Sign</a></p>
## BibTeX
```
@article{guo2023animatediff,
title={AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning},
author={Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Wang, Yaohui and Qiao, Yu and Lin, Dahua and Dai, Bo},
journal={arXiv preprint arXiv:2307.04725},
year={2023}
}
```
## Contact Us
**Yuwei Guo**: [guoyuwei@pjlab.org.cn](mailto:guoyuwei@pjlab.org.cn)
**Ceyuan Yang**: [yangceyuan@pjlab.org.cn](mailto:yangceyuan@pjlab.org.cn)
**Bo Dai**: [daibo@pjlab.org.cn](mailto:daibo@pjlab.org.cn)
## Acknowledgements
Codebase built upon [Tune-a-Video](https://github.com/showlab/Tune-A-Video).