# AnimateDiff

This repository is the official implementation of [AnimateDiff](https://arxiv.org/abs/2307.04725) [ICLR2024 Spotlight].
It is a plug-and-play module turning most community text-to-image models into animation generators, without the need of additional training.

**[AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning](https://arxiv.org/abs/2307.04725)**
</br>
[Yuwei Guo](https://guoyww.github.io/),
[Ceyuan Yang✝](https://ceyuan.me/),
[Anyi Rao](https://anyirao.com/),
[Zhengyang Liang](https://maxleung99.github.io/),
[Yaohui Wang](https://wyhsirius.github.io/),
[Maneesh Agrawala](https://graphics.stanford.edu/~maneesh/),
[Dahua Lin](http://dahua.site),
[Bo Dai](https://daibo.info)
(✝Corresponding Author)

[arXiv](https://arxiv.org/abs/2307.04725)
[Project Page](https://animatediff.github.io/)
[OpenXLab Demo](https://openxlab.org.cn/apps/detail/Masbfca/AnimateDiff)
[Hugging Face Space](https://huggingface.co/spaces/guoyww/AnimateDiff)

***Note:*** The `main` branch is for [Stable Diffusion V1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5); for [Stable Diffusion XL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), please refer to the `sdxl-beta` branch.

## Quick Demos

More results can be found in the [Gallery](__assets__/docs/gallery.md).
Some of them are contributed by the community.

<table class="center">
<tr>
<td><img src="__assets__/animations/model_01/01.gif"></td>
<td><img src="__assets__/animations/model_01/02.gif"></td>
<td><img src="__assets__/animations/model_01/03.gif"></td>
<td><img src="__assets__/animations/model_01/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/30240/toonyou">ToonYou</a></p>

<table>
<tr>
<td><img src="__assets__/animations/model_03/01.gif"></td>
<td><img src="__assets__/animations/model_03/02.gif"></td>
<td><img src="__assets__/animations/model_03/03.gif"></td>
<td><img src="__assets__/animations/model_03/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/4201/realistic-vision-v20">Realistic Vision V2.0</a></p>

## Quick Start

***Note:*** AnimateDiff is also officially supported by Diffusers.
Visit the [AnimateDiff Diffusers Tutorial](https://huggingface.co/docs/diffusers/api/pipelines/animatediff) for more details.
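
If you work with Diffusers directly, a minimal text-to-animation sketch looks like the following. The motion-adapter and base-model repo IDs are assumptions for illustration; any SD1.5-based community checkpoint should work.

```python
# Minimal Diffusers sketch (not this repository's code path); repo IDs below
# are assumptions for illustration.
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",  # assumed community T2I checkpoint
    motion_adapter=adapter,
    torch_dtype=torch.float16,
)
# A linear-beta DDIM schedule is commonly paired with AnimateDiff modules.
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, beta_schedule="linear", clip_sample=False
)
pipe.enable_vae_slicing()        # trade a little speed for lower VRAM usage
pipe.enable_model_cpu_offload()

frames = pipe(
    prompt="a panda surfing a wave, highly detailed",
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=25,
).frames[0]
export_to_gif(frames, "animation.gif")
```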

*The following instructions are for working with this repository.*

***Note:*** For all scripts, checkpoint downloading is handled *automatically*, so a script may take longer to run when first executed.

### 1. Set up the repository and environment

```
git clone https://github.com/guoyww/AnimateDiff.git
cd AnimateDiff

pip install -r requirements.txt
```

### 2. Launch the sampling script!

The generated samples can be found in the `samples/` folder.

#### 2.1 Generate animations with community models
```
python -m scripts.animate --config configs/prompts/1_animate/1_1_animate_RealisticVision.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_2_animate_FilmVelvia.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_3_animate_ToonYou.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_4_animate_MajicMix.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_5_animate_RcnzCartoon.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_6_animate_Lyriel.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_7_animate_Tusun.yaml
```

#### 2.2 Generate animations with MotionLoRA control
```
python -m scripts.animate --config configs/prompts/2_motionlora/2_motionlora_RealisticVision.yaml
```
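
MotionLoRA weights can also be stacked onto the Diffusers pipeline sketched above; the repo ID below is an assumption based on the officially published checkpoints.

```python
# Continues the Diffusers sketch above; the MotionLoRA repo ID is an assumption.
pipe.load_lora_weights(
    "guoyww/animatediff-motion-lora-zoom-in", adapter_name="zoom-in"
)
pipe.set_adapters(["zoom-in"], adapter_weights=[1.0])

frames = pipe(
    prompt="a panda surfing a wave, camera slowly zooming in",
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=25,
).frames[0]
```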

#### 2.3 More control with SparseCtrl RGB and sketch
```
python -m scripts.animate --config configs/prompts/3_sparsectrl/3_1_sparsectrl_i2v.yaml
python -m scripts.animate --config configs/prompts/3_sparsectrl/3_2_sparsectrl_rgb_RealisticVision.yaml
python -m scripts.animate --config configs/prompts/3_sparsectrl/3_3_sparsectrl_sketch_RealisticVision.yaml
```
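
Diffusers also ships a SparseCtrl variant of the AnimateDiff pipeline. The sketch below reflects our reading of that API; the pipeline class, model class, and checkpoint IDs are assumptions, so double-check against the Diffusers docs.

```python
# Hedged sketch of SparseCtrl scribble conditioning via Diffusers; class names
# and repo IDs are assumptions, not this repository's scripts.
import torch
from diffusers import AnimateDiffSparseControlNetPipeline, MotionAdapter
from diffusers.models import SparseControlNetModel
from diffusers.utils import export_to_gif, load_image

adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16
)
controlnet = SparseControlNetModel.from_pretrained(
    "guoyww/animatediff-sparsectrl-scribble", torch_dtype=torch.float16
)
pipe = AnimateDiffSparseControlNetPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",  # assumed community checkpoint
    motion_adapter=adapter,
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

scribble = load_image("scribble_keyframe.png")  # placeholder path
frames = pipe(
    prompt="an aerial view of a futuristic city at night",
    conditioning_frames=[scribble],
    controlnet_frame_indices=[0],  # condition only the first frame
    controlnet_conditioning_scale=1.0,
    num_frames=16,
    num_inference_steps=25,
).frames[0]
export_to_gif(frames, "sparsectrl.gif")
```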

#### 2.4 Gradio app

We created a Gradio demo to make AnimateDiff easier to use.
By default, the demo will run at `localhost:7860`.
```
python -u app.py
```
<img src="__assets__/figs/gradio.jpg" style="width: 75%">

## Technical Explanation

<details close>
<summary>Technical Explanation</summary>

### AnimateDiff

**AnimateDiff aims to learn transferable motion priors that can be applied to other variants of the Stable Diffusion family.**
To this end, we design the following training pipeline consisting of three stages.

<img src="__assets__/figs/adapter_explain.png" style="width:100%">

- In the **1. Alleviate Negative Effects** stage, we train the **domain adapter**, e.g., `v3_sd15_adapter.ckpt`, to fit defective visual artifacts (e.g., watermarks) in the training dataset.
This also benefits the disentangled learning of motion and spatial appearance.
By default, the adapter can be removed at inference. It can also be integrated into the model, with its effect adjusted by a LoRA scale (see the sketch after this list).

- In the **2. Learn Motion Priors** stage, we train the **motion module**, e.g., `v3_sd15_mm.ckpt`, to learn real-world motion patterns from videos.

- In the **3. (optional) Adapt to New Patterns** stage, we train **MotionLoRA**, e.g., `v2_lora_ZoomIn.ckpt`, to efficiently adapt the motion module to specific motion patterns (camera zooming, rolling, etc.).
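
Since the domain adapter is a standard LoRA, the "LoRA scale" in stage 1 is just a scalar on the low-rank update; a toy illustration (shapes and names are illustrative only, not the repository's loading code):

```python
# Toy illustration of blending a domain adapter into a weight matrix via a
# LoRA scale; dimensions and names are illustrative only.
import numpy as np

d, r = 320, 4                    # feature dim, LoRA rank
W_base = np.random.randn(d, d)   # pretrained projection weight
down = np.random.randn(d, r)     # LoRA down-projection
up = np.random.randn(r, d)       # LoRA up-projection

def with_adapter(scale: float) -> np.ndarray:
    """scale=1.0 keeps the full adapter effect; 0.0 removes it entirely."""
    return W_base + scale * (down @ up)

W_inference = with_adapter(0.8)  # partially keep training-set appearance
```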

### SparseCtrl

**SparseCtrl aims to add more control to text-to-video models by adopting sparse inputs (e.g., a few RGB images or sketch inputs).**
Its technical details can be found in the following paper:

**[SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models](https://arxiv.org/abs/2311.16933)**
[Yuwei Guo](https://guoyww.github.io/),
[Ceyuan Yang✝](https://ceyuan.me/),
[Anyi Rao](https://anyirao.com/),
[Maneesh Agrawala](https://graphics.stanford.edu/~maneesh/),
[Dahua Lin](http://dahua.site),
[Bo Dai](https://daibo.info)
(✝Corresponding Author)

[arXiv](https://arxiv.org/abs/2311.16933)
[Project Page](https://guoyww.github.io/projects/SparseCtrl/)

</details>

## Model Versions

<details open>
<summary>Model Versions</summary>

### AnimateDiff v3 and SparseCtrl (2023.12)

In this version, we use **Domain Adapter LoRA** for image model finetuning, which provides more flexibility at inference.
We also implement two (RGB image/scribble) [SparseCtrl](https://arxiv.org/abs/2311.16933) encoders, which can take an arbitrary number of condition maps to control the animation contents.

<details close>
<summary>AnimateDiff v3 Model Zoo</summary>

| Name | HuggingFace | Type | Storage | Description |
| - | - | - | - | - |
| `v3_sd15_adapter.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v3_sd15_adapter.ckpt) | Domain Adapter | 97.4 MB | |
| `v3_sd15_mm.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v3_sd15_mm.ckpt) | Motion Module | 1.56 GB | |
| `v3_sd15_sparsectrl_scribble.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v3_sd15_sparsectrl_scribble.ckpt) | SparseCtrl Encoder | 1.86 GB | scribble condition |
| `v3_sd15_sparsectrl_rgb.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v3_sd15_sparsectrl_rgb.ckpt) | SparseCtrl Encoder | 1.85 GB | RGB image condition |
</details>

#### Limitations
1. Small flickering is noticeable;
2. To stay compatible with community models, there are no specific optimizations for general T2V, leading to limited visual quality under this setting;
3. **(Style Alignment) For usage such as image animation/interpolation, it is recommended to use images generated by the same community model.**

#### Demos
<table class="center">
<tr style="line-height: 0">
<td width=25% style="border: none; text-align: center">Input (by RealisticVision)</td>
</tr>
</table>

### AnimateDiff SDXL-Beta (2023.11)

We release the Motion Module (beta version) on SDXL, available at [Google Drive](https://drive.google.com/file/d/1EK_D9hDOPfJdK4z8YDB8JYvPracNx2SX/view?usp=share_link) / [HuggingFace](https://huggingface.co/guoyww/animatediff/blob/main/mm_sdxl_v10_beta.ckpt) / [CivitAI](https://civitai.com/models/108836/animatediff-motion-modules). High-resolution videos (i.e., 1024x1024x16 frames with various aspect ratios) can be produced **with/without** personalized models. Inference usually requires ~13GB VRAM and tuned hyperparameters (e.g., sampling steps), depending on the chosen personalized models.
Check out the [sdxl](https://github.com/guoyww/AnimateDiff/tree/sdxl) branch for more inference details.
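
For Diffusers users, the SDXL beta module has a corresponding pipeline as well; the motion-adapter repo ID below is an assumption.

```python
# Hedged sketch: AnimateDiff on SDXL via Diffusers; repo IDs are assumptions.
import torch
from diffusers import AnimateDiffSDXLPipeline, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-sdxl-beta", torch_dtype=torch.float16
)
pipe = AnimateDiffSDXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")
pipe.enable_vae_slicing()  # reduce peak VRAM during decoding

frames = pipe(
    prompt="a hot air balloon drifting over snowy mountains",
    num_frames=16,
    guidance_scale=8.0,
    num_inference_steps=25,
).frames[0]
export_to_gif(frames, "sdxl_beta.gif")
```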

<details close>
<summary>AnimateDiff SDXL-Beta Model Zoo</summary>

| Name | HuggingFace | Type | Storage Space |
| - | - | - | - |
| `mm_sdxl_v10_beta.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/mm_sdxl_v10_beta.ckpt) | Motion Module | 950 MB |
</details>

#### Demos
<table class="center">
<tr style="line-height: 0">
<td width=52% style="border: none; text-align: center">Original SDXL</td>
</tr>
</table>

### AnimateDiff v2 (2023.09)

In this version, the motion module `mm_sd_v15_v2.ckpt` ([Google Drive](https://drive.google.com/drive/folders/1EqLC65eR1-W-sGD0Im7fkED6c8GkiNFI?usp=sharing) / [HuggingFace](https://huggingface.co/guoyww/animatediff) / [CivitAI](https://civitai.com/models/108836/animatediff-motion-modules)) is trained at a larger resolution and batch size.
We found that this scaled-up training significantly improves motion quality and diversity.
We also support **MotionLoRA** for eight basic camera movements.
MotionLoRA checkpoints take up only **77 MB of storage per model**, and are available at [Google Drive](https://drive.google.com/drive/folders/1EqLC65eR1-W-sGD0Im7fkED6c8GkiNFI?usp=sharing) / [HuggingFace](https://huggingface.co/guoyww/animatediff) / [CivitAI](https://civitai.com/models/108836/animatediff-motion-modules).

<details close>
<summary>AnimateDiff v2 Model Zoo</summary>

| Name | HuggingFace | Type | Parameter | Storage |
| - | - | - | - | - |
| `mm_sd_v15_v2.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/mm_sd_v15_v2.ckpt) | Motion Module | 453 M | 1.7 GB |
| `v2_lora_ZoomIn.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v2_lora_ZoomIn.ckpt) | MotionLoRA | 19 M | 74 MB |
| `v2_lora_ZoomOut.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v2_lora_ZoomOut.ckpt) | MotionLoRA | 19 M | 74 MB |
| `v2_lora_PanLeft.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v2_lora_PanLeft.ckpt) | MotionLoRA | 19 M | 74 MB |
| `v2_lora_PanRight.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v2_lora_PanRight.ckpt) | MotionLoRA | 19 M | 74 MB |
| `v2_lora_TiltUp.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v2_lora_TiltUp.ckpt) | MotionLoRA | 19 M | 74 MB |
| `v2_lora_TiltDown.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v2_lora_TiltDown.ckpt) | MotionLoRA | 19 M | 74 MB |
| `v2_lora_RollingClockwise.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v2_lora_RollingClockwise.ckpt) | MotionLoRA | 19 M | 74 MB |
| `v2_lora_RollingAnticlockwise.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v2_lora_RollingAnticlockwise.ckpt) | MotionLoRA | 19 M | 74 MB |

</details>

#### Demos (MotionLoRA)
<table class="center">
<tr style="line-height: 0">
<td colspan="2" style="border: none; text-align: center">Zoom In</td>
</tr>
</table>

#### Demos (Improved Motions)
Here's a comparison between `mm_sd_v15.ckpt` (left) and the improved `mm_sd_v15_v2.ckpt` (right).
<table class="center">
<tr>
<td><img src="__assets__/animations/compare/old_0.gif"></td>
</tr>
</table>

### AnimateDiff v1 (2023.07)

The first version of AnimateDiff!

<details close>
<summary>AnimateDiff v1 Model Zoo</summary>

| Name | HuggingFace | Parameter | Storage Space |
| - | - | - | - |
| `mm_sd_v14.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/mm_sd_v14.ckpt) | 417 M | 1.6 GB |
| `mm_sd_v15.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/mm_sd_v15.ckpt) | 417 M | 1.6 GB |
</details>

</details>

## Training

Please check [Steps for Training](__assets__/docs/animatediff.md) for details.

## Related Resources

AnimateDiff for Stable Diffusion WebUI: [sd-webui-animatediff](https://github.com/continue-revolution/sd-webui-animatediff) (by [@continue-revolution](https://github.com/continue-revolution))
AnimateDiff for ComfyUI: [ComfyUI-AnimateDiff-Evolved](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved) (by [@Kosinkadink](https://github.com/Kosinkadink))
Google Colab: [Colab](https://colab.research.google.com/github/camenduru/AnimateDiff-colab/blob/main/AnimateDiff_colab.ipynb) (by [@camenduru](https://github.com/camenduru))

## Disclaimer

This project is released for academic use.
We disclaim responsibility for user-generated content.
Also, please be advised that our only official websites are https://github.com/guoyww/AnimateDiff and https://animatediff.github.io; all other websites are NOT associated with us at AnimateDiff.

## Contact Us

Yuwei Guo: [guoyw@ie.cuhk.edu.hk](mailto:guoyw@ie.cuhk.edu.hk)
Ceyuan Yang: [limbo0066@gmail.com](mailto:limbo0066@gmail.com)
Bo Dai: [doubledaibo@gmail.com](mailto:doubledaibo@gmail.com)

## BibTeX
```
@article{guo2023animatediff,
  title={AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning},
  author={Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Liang, Zhengyang and Wang, Yaohui and Agrawala, Maneesh and Lin, Dahua and Dai, Bo},
  journal={arXiv preprint arXiv:2307.04725},
  year={2023}
}

@article{guo2023sparsectrl,
  title={SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models},
  author={Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Agrawala, Maneesh and Lin, Dahua and Dai, Bo},
  journal={arXiv preprint arXiv:2311.16933},
  year={2023}
}
```

## Acknowledgements
Codebase built upon [Tune-a-Video](https://github.com/showlab/Tune-A-Video).

# AnimateDiff: training and inference setup

## Setups for Inference

### Prepare Environment

***We updated our inference code with xformers and a sequential decoding trick. AnimateDiff now takes only ~12GB of VRAM for inference, and runs on a single RTX 3090!***

```
git clone https://github.com/guoyww/AnimateDiff.git
cd AnimateDiff

conda env create -f environment.yaml
conda activate animatediff
```

### Download Base T2I & Motion Module Checkpoints
We provide two versions of our Motion Module, trained on stable-diffusion-v1-4 and finetuned on v1-5, respectively.
It's recommended to try both of them for the best results.
```
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 models/StableDiffusion/

bash download_bashscripts/0-MotionModule.sh
```
You may also directly download the motion module checkpoints from [Google Drive](https://drive.google.com/drive/folders/1EqLC65eR1-W-sGD0Im7fkED6c8GkiNFI?usp=sharing) / [HuggingFace](https://huggingface.co/guoyww/animatediff) / [CivitAI](https://civitai.com/models/108836/animatediff-motion-modules), then put them in the `models/Motion_Module/` folder.

### Prepare Personalized T2I
Here we provide inference configs for eight demo T2I models from CivitAI.
You may run the following bash scripts to download these checkpoints.
```
bash download_bashscripts/1-ToonYou.sh
bash download_bashscripts/2-Lyriel.sh
bash download_bashscripts/3-RcnzCartoon.sh
bash download_bashscripts/4-MajicMix.sh
bash download_bashscripts/5-RealisticVision.sh
bash download_bashscripts/6-Tusun.sh
bash download_bashscripts/7-FilmVelvia.sh
bash download_bashscripts/8-GhibliBackground.sh
```

### Inference
After downloading the above personalized T2I checkpoints, run the following commands to generate animations. The results will automatically be saved to the `samples/` folder.
```
python -m scripts.animate --config configs/prompts/1-ToonYou.yaml
python -m scripts.animate --config configs/prompts/2-Lyriel.yaml
python -m scripts.animate --config configs/prompts/3-RcnzCartoon.yaml
python -m scripts.animate --config configs/prompts/4-MajicMix.yaml
python -m scripts.animate --config configs/prompts/5-RealisticVision.yaml
python -m scripts.animate --config configs/prompts/6-Tusun.yaml
python -m scripts.animate --config configs/prompts/7-FilmVelvia.yaml
python -m scripts.animate --config configs/prompts/8-GhibliBackground.yaml
```

To generate animations with a new DreamBooth/LoRA model, you may create a new `.yaml` config file in the following format:
```
- inference_config: "[path to motion module config file]"

  motion_module:
    - "models/Motion_Module/mm_sd_v14.ckpt"
    - "models/Motion_Module/mm_sd_v15.ckpt"

  motion_module_lora_configs:
    - path:  "[path to MotionLoRA model]"
      alpha: 1.0
    - ...

  dreambooth_path: "[path to your DreamBooth model .safetensors file]"
  lora_model_path: "[path to your LoRA model .safetensors file, leave it empty string if not needed]"

  steps:          25
  guidance_scale: 7.5

  prompt:
    - "[positive prompt]"

  n_prompt:
    - "[negative prompt]"
```

Then run the following command:
```
python -m scripts.animate --config [path to the config file]
```
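
For reference, the sampling script reads such a file as a YAML list of model configs; a minimal sketch of consuming it (assuming OmegaConf, which the repository's scripts use) is shown below. The path and key access are illustrative.

```python
# Minimal sketch of reading the config format above; assumes OmegaConf is
# installed. The file path is a placeholder; key names mirror the template.
from omegaconf import OmegaConf

configs = OmegaConf.load("configs/prompts/my_new_model.yaml")

for model_config in configs:  # the file is a YAML list of model configs
    for prompt, n_prompt in zip(model_config.prompt, model_config.n_prompt):
        print(f"steps={model_config.steps} "
              f"cfg={model_config.guidance_scale} prompt={prompt!r}")
```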

## Steps for Training

### Dataset