# AnimateDiff
This repository is the official implementation of [AnimateDiff](https://arxiv.org/abs/2307.04725) [ICLR 2024 Spotlight].
It is a plug-and-play module that turns most community text-to-image models into animation generators, without the need for additional training.

**[AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning](https://arxiv.org/abs/2307.04725)**
</br>
[Yuwei Guo](https://guoyww.github.io/),
[Ceyuan Yang✝](https://ceyuan.me/),
[Anyi Rao](https://anyirao.com/),
[Zhengyang Liang](https://maxleung99.github.io/),
[Yaohui Wang](https://wyhsirius.github.io/),
[Yu Qiao](https://scholar.google.com.hk/citations?user=gFtI-8QAAAAJ),
[Maneesh Agrawala](https://graphics.stanford.edu/~maneesh/),
[Dahua Lin](http://dahua.site),
[Bo Dai](https://daibo.info)
(✝Corresponding Author)

[![arXiv](https://img.shields.io/badge/arXiv-2307.04725-b31b1b.svg)](https://arxiv.org/abs/2307.04725)
[![Project Page](https://img.shields.io/badge/Project-Website-green)](https://animatediff.github.io/)
[![Open in OpenXLab](https://cdn-static.openxlab.org.cn/app-center/openxlab_app.svg)](https://openxlab.org.cn/apps/detail/Masbfca/AnimateDiff)
[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-yellow)](https://huggingface.co/spaces/guoyww/AnimateDiff)

***Note:*** The `main` branch is for [Stable Diffusion V1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5); for [Stable Diffusion XL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), please refer to the `sdxl-beta` branch.

## Quick Demos
More results can be found in the [Gallery](__assets__/docs/gallery.md).
Some of them are contributed by the community.

<table class="center">
<tr>
<td><img src="__assets__/animations/model_01/01.gif"></td>
<td><img src="__assets__/animations/model_01/02.gif"></td>
<td><img src="__assets__/animations/model_01/03.gif"></td>
<td><img src="__assets__/animations/model_01/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/30240/toonyou">ToonYou</a></p>
<table>
<tr>
<td><img src="__assets__/animations/model_03/01.gif"></td>
<td><img src="__assets__/animations/model_03/02.gif"></td>
<td><img src="__assets__/animations/model_03/03.gif"></td>
<td><img src="__assets__/animations/model_03/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/4201/realistic-vision-v20">Realistic Vision V2.0</a></p>

## Quick Start

***Note:*** AnimateDiff is also officially supported by Diffusers.
Visit the [AnimateDiff Diffusers Tutorial](https://huggingface.co/docs/diffusers/api/pipelines/animatediff) for more details.
*The following instructions are for working with this repository.*
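For reference, a minimal Diffusers sketch patterned on the Diffusers AnimateDiff documentation (the community model ID and prompts are arbitrary examples, not this repository's code; downloading the weights happens on first run):

```python
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Load the v1.5 motion module as a Diffusers MotionAdapter.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
# Plug it into any SD1.5-based community model (example ID below).
pipe = AnimateDiffPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
)
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, beta_schedule="linear", clip_sample=False,
    timestep_spacing="linspace", steps_offset=1,
)
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

output = pipe(
    prompt="a sunset over the ocean, masterpiece, best quality",
    negative_prompt="low quality, worst quality",
    num_frames=16,
    num_inference_steps=25,
    guidance_scale=7.5,
    generator=torch.Generator("cpu").manual_seed(42),
)
export_to_gif(output.frames[0], "animation.gif")
```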

***Note:*** For all scripts, checkpoint downloading is handled *automatically*, so the first run may take longer.

### 1. Setup repository and environment

```
git clone https://github.com/guoyww/AnimateDiff.git
cd AnimateDiff
pip install -r requirements.txt
```

### 2. Launch the sampling script!

The generated samples can be found in the `samples/` folder.

#### 2.1 Generate animations with community models
```
python -m scripts.animate --config configs/prompts/1_animate/1_1_animate_RealisticVision.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_2_animate_FilmVelvia.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_3_animate_ToonYou.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_4_animate_MajicMix.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_5_animate_RcnzCartoon.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_6_animate_Lyriel.yaml
python -m scripts.animate --config configs/prompts/1_animate/1_7_animate_Tusun.yaml
```
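Each of these YAML files bundles the checkpoint paths, prompts, and sampling settings for one community model. The fragment below is illustrative only: the paths and key names are hypothetical, patterned on the shipped configs, so open the files under `configs/prompts/` for the exact schema.

```yaml
# Illustrative only -- not a verbatim copy of a shipped config.
- motion_module: "models/Motion_Module/v3_sd15_mm.ckpt"        # hypothetical path
  dreambooth_path: "models/DreamBooth_LoRA/realisticVision.safetensors"
  seed: [42]
  steps: 25
  guidance_scale: 7.5
  prompt:
    - "a panda eating bamboo on a rock, masterpiece, best quality"
  n_prompt:
    - "low quality, worst quality"
```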

#### 2.2 Generate animations with MotionLoRA control

```
python -m scripts.animate --config configs/prompts/2_motionlora/2_motionlora_RealisticVision.yaml
```
#### 2.3 More control with SparseCtrl RGB and sketch
```
python -m scripts.animate --config configs/prompts/3_sparsectrl/3_1_sparsectrl_i2v.yaml
python -m scripts.animate --config configs/prompts/3_sparsectrl/3_2_sparsectrl_rgb_RealisticVision.yaml
python -m scripts.animate --config configs/prompts/3_sparsectrl/3_3_sparsectrl_sketch_RealisticVision.yaml
```
#### 2.4 Gradio app
We created a Gradio demo to make AnimateDiff easier to use.
By default, the demo will run at `localhost:7860`.
```
python -u app.py
```
<img src="__assets__/figs/gradio.jpg" style="width: 75%">

## Technical Explanation

<details close>
<summary>Technical Explanation</summary>

### AnimateDiff

**AnimateDiff aims to learn transferable motion priors that can be applied to other variants of the Stable Diffusion family.**
To this end, we design the following training pipeline, consisting of three stages.

<img src="__assets__/figs/adapter_explain.png" style="width:100%">

- In the **1. Alleviate Negative Effects** stage, we train the **domain adapter**, e.g., `v3_sd15_adapter.ckpt`, to fit defective visual artifacts (e.g., watermarks) in the training dataset.
This also benefits the disentangled learning of motion and spatial appearance.
By default, the adapter can be removed at inference. It can also be integrated into the model, with its effects adjusted by a LoRA scale.

- In the **2. Learn Motion Priors** stage, we train the **motion module**, e.g., `v3_sd15_mm.ckpt`, to learn real-world motion patterns from videos.

- In the **3. (optional) Adapt to New Patterns** stage, we train **MotionLoRA**, e.g., `v2_lora_ZoomIn.ckpt`, to efficiently adapt the motion module to specific motion patterns (camera zooming, rolling, etc.).
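The adapter scaling mentioned in stage 1 behaves like standard LoRA weight merging: the adapted weight is the frozen base weight plus a scaled low-rank update. A minimal NumPy sketch (all names and shapes are hypothetical, not the repository's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                         # hypothetical feature dim and LoRA rank
W = rng.standard_normal((d, d))     # frozen base weight
A = rng.standard_normal((r, d))     # LoRA down-projection
B = rng.standard_normal((d, r))     # LoRA up-projection

def merged_weight(scale: float) -> np.ndarray:
    """Base weight with the low-rank adapter blended in at the given scale."""
    return W + scale * (B @ A)

# scale = 0.0 removes the adapter entirely; 1.0 applies it at full strength,
# and intermediate values interpolate the adapter's effect linearly.
assert np.allclose(merged_weight(0.0), W)
assert np.allclose(merged_weight(0.5) - W, 0.5 * (merged_weight(1.0) - W))
```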

### SparseCtrl

**SparseCtrl aims to add more control to text-to-video models by adopting sparse inputs (e.g., a few RGB images or sketch inputs).**
Its technical details can be found in the following paper:

**[SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models](https://arxiv.org/abs/2311.16933)**
[Yuwei Guo](https://guoyww.github.io/),
[Ceyuan Yang✝](https://ceyuan.me/),
[Anyi Rao](https://anyirao.com/),
[Maneesh Agrawala](https://graphics.stanford.edu/~maneesh/),
[Dahua Lin](http://dahua.site),
[Bo Dai](https://daibo.info)
(✝Corresponding Author)

[![arXiv](https://img.shields.io/badge/arXiv-2311.16933-b31b1b.svg)](https://arxiv.org/abs/2311.16933)
[![Project Page](https://img.shields.io/badge/Project-Website-green)](https://guoyww.github.io/projects/SparseCtrl/)

</details>

## Model Versions

<details close>
<summary>Model Versions</summary>

### AnimateDiff v3 and SparseCtrl (2023.12)

In this version, we use a **Domain Adapter LoRA** for image model finetuning, which provides more flexibility at inference.
We also implement two [SparseCtrl](https://arxiv.org/abs/2311.16933) encoders (RGB image/scribble), which can take an arbitrary number of condition maps to control the animation contents.

<details close>
<summary>AnimateDiff v3 Model Zoo</summary>

| Name | HuggingFace | Type | Storage | Description |
| - | - | - | - | - |
| `v3_sd15_adapter.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v3_sd15_adapter.ckpt) | Domain Adapter | 97.4 MB | |
| `v3_sd15_mm.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v3_sd15_mm.ckpt) | Motion Module | 1.56 GB | |
| `v3_sd15_sparsectrl_scribble.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v3_sd15_sparsectrl_scribble.ckpt) | SparseCtrl Encoder | 1.86 GB | scribble condition |
| `v3_sd15_sparsectrl_rgb.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v3_sd15_sparsectrl_rgb.ckpt) | SparseCtrl Encoder | 1.85 GB | RGB image condition |

</details>

#### Limitations
1. Slight flickering is noticeable;
2. To stay compatible with community models, there are no specific optimizations for general T2V, leading to limited visual quality in this setting;
3. **(Style Alignment) For usage such as image animation/interpolation, it is recommended to use images generated by the same community model.**

#### Demos

<table class="center">
<tr style="line-height: 0">
<td width=25% style="border: none; text-align: center">Input (by RealisticVision)</td>
<td width=25% style="border: none; text-align: center">Animation</td>
<td width=25% style="border: none; text-align: center">Input</td>
<td width=25% style="border: none; text-align: center">Animation</td>
</tr>
<tr>
<td width=25% style="border: none"><img src="__assets__/demos/image/RealisticVision_firework.png" style="width:100%"></td>
<td width=25% style="border: none"><img src="__assets__/animations/v3/animation_fireworks.gif" style="width:100%"></td>
<td width=25% style="border: none"><img src="__assets__/demos/image/RealisticVision_sunset.png" style="width:100%"></td>
<td width=25% style="border: none"><img src="__assets__/animations/v3/animation_sunset.gif" style="width:100%"></td>
</tr>
</table>
<table class="center">
<tr style="line-height: 0">
<td width=25% style="border: none; text-align: center">Input Scribble</td>
<td width=25% style="border: none; text-align: center">Output</td>
<td width=25% style="border: none; text-align: center">Input Scribbles</td>
<td width=25% style="border: none; text-align: center">Output</td>
</tr>
<tr>
<td width=25% style="border: none"><img src="__assets__/demos/scribble/scribble_1.png" style="width:100%"></td>
<td width=25% style="border: none"><img src="__assets__/animations/v3/sketch_boy.gif" style="width:100%"></td>
<td width=25% style="border: none"><img src="__assets__/demos/scribble/scribble_2_readme.png" style="width:100%"></td>
<td width=25% style="border: none"><img src="__assets__/animations/v3/sketch_city.gif" style="width:100%"></td>
</tr>
</table>

### AnimateDiff SDXL-Beta (2023.11)

Release of the Motion Module (beta version) on SDXL, available at [Google Drive](https://drive.google.com/file/d/1EK_D9hDOPfJdK4z8YDB8JYvPracNx2SX/view?usp=share_link) / [HuggingFace](https://huggingface.co/guoyww/animatediff/blob/main/mm_sdxl_v10_beta.ckpt) / [CivitAI](https://civitai.com/models/108836/animatediff-motion-modules). High-resolution videos (i.e., 1024x1024x16 frames with various aspect ratios) can be produced **with/without** personalized models. Inference usually requires ~13GB VRAM and tuned hyperparameters (e.g., sampling steps), depending on the chosen personalized models.
Check out the [sdxl](https://github.com/guoyww/AnimateDiff/tree/sdxl) branch for more inference details.

<details close>
<summary>AnimateDiff SDXL-Beta Model Zoo</summary>

| Name | HuggingFace | Type | Storage Space |
| - | - | - | - |
| `mm_sdxl_v10_beta.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/mm_sdxl_v10_beta.ckpt) | Motion Module | 950 MB |

</details>

#### Demos

<table class="center">
<tr style="line-height: 0">
<td width=52% style="border: none; text-align: center">Original SDXL</td>
<td width=30% style="border: none; text-align: center">Community SDXL</td>
<td width=18% style="border: none; text-align: center">Community SDXL</td>
</tr>
<tr>
<td width=52% style="border: none"><img src="__assets__/animations/motion_xl/01.gif" style="width:100%"></td>
<td width=30% style="border: none"><img src="__assets__/animations/motion_xl/02.gif" style="width:100%"></td>
<td width=18% style="border: none"><img src="__assets__/animations/motion_xl/03.gif" style="width:100%"></td>
</tr>
</table>

### AnimateDiff v2 (2023.09)

In this version, the motion module `mm_sd_v15_v2.ckpt` ([Google Drive](https://drive.google.com/drive/folders/1EqLC65eR1-W-sGD0Im7fkED6c8GkiNFI?usp=sharing) / [HuggingFace](https://huggingface.co/guoyww/animatediff) / [CivitAI](https://civitai.com/models/108836/animatediff-motion-modules)) is trained at a larger resolution and batch size.
We found that scaled-up training significantly improves motion quality and diversity.
We also support **MotionLoRA** for eight basic camera movements.
MotionLoRA checkpoints take up only **77 MB of storage per model**, and are available at [Google Drive](https://drive.google.com/drive/folders/1EqLC65eR1-W-sGD0Im7fkED6c8GkiNFI?usp=sharing) / [HuggingFace](https://huggingface.co/guoyww/animatediff) / [CivitAI](https://civitai.com/models/108836/animatediff-motion-modules).

<details close>
<summary>AnimateDiff v2 Model Zoo</summary>

| Name | HuggingFace | Type | Parameter | Storage |
| - | - | - | - | - |
| `mm_sd_v15_v2.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/mm_sd_v15_v2.ckpt) | Motion Module | 453 M | 1.7 GB |
| `v2_lora_ZoomIn.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v2_lora_ZoomIn.ckpt) | MotionLoRA | 19 M | 74 MB |
| `v2_lora_ZoomOut.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v2_lora_ZoomOut.ckpt) | MotionLoRA | 19 M | 74 MB |
| `v2_lora_PanLeft.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v2_lora_PanLeft.ckpt) | MotionLoRA | 19 M | 74 MB |
| `v2_lora_PanRight.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v2_lora_PanRight.ckpt) | MotionLoRA | 19 M | 74 MB |
| `v2_lora_TiltUp.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v2_lora_TiltUp.ckpt) | MotionLoRA | 19 M | 74 MB |
| `v2_lora_TiltDown.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v2_lora_TiltDown.ckpt) | MotionLoRA | 19 M | 74 MB |
| `v2_lora_RollingClockwise.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v2_lora_RollingClockwise.ckpt) | MotionLoRA | 19 M | 74 MB |
| `v2_lora_RollingAnticlockwise.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/v2_lora_RollingAnticlockwise.ckpt) | MotionLoRA | 19 M | 74 MB |

</details>

#### Demos (MotionLoRA)

<table class="center">
<tr style="line-height: 0">
<td colspan="2" style="border: none; text-align: center">Zoom In</td>
<td colspan="2" style="border: none; text-align: center">Zoom Out</td>
<td colspan="2" style="border: none; text-align: center">Zoom Pan Left</td>
<td colspan="2" style="border: none; text-align: center">Zoom Pan Right</td>
</tr>
<tr>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_01/01.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_02/02.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_01/02.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_02/01.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_01/03.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_02/04.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_01/04.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_02/03.gif"></td>
</tr>
<tr style="line-height: 0">
<td colspan="2" style="border: none; text-align: center">Tilt Up</td>
<td colspan="2" style="border: none; text-align: center">Tilt Down</td>
<td colspan="2" style="border: none; text-align: center">Rolling Anti-Clockwise</td>
<td colspan="2" style="border: none; text-align: center">Rolling Clockwise</td>
</tr>
<tr>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_01/05.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_02/05.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_01/06.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_02/06.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_01/07.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_02/07.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_01/08.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_02/08.gif"></td>
</tr>

</table>

#### Demos (Improved Motions)

Here's a comparison between `mm_sd_v15.ckpt` (left) and the improved `mm_sd_v15_v2.ckpt` (right).

<table class="center">
<tr>
<td><img src="__assets__/animations/compare/old_0.gif"></td>
<td><img src="__assets__/animations/compare/new_0.gif"></td>
<td><img src="__assets__/animations/compare/old_1.gif"></td>
<td><img src="__assets__/animations/compare/new_1.gif"></td>
<td><img src="__assets__/animations/compare/old_2.gif"></td>
<td><img src="__assets__/animations/compare/new_2.gif"></td>
<td><img src="__assets__/animations/compare/old_3.gif"></td>
<td><img src="__assets__/animations/compare/new_3.gif"></td>
</tr>
</table>

### AnimateDiff v1 (2023.07)

The first version of AnimateDiff!

<details close>
<summary>AnimateDiff v1 Model Zoo</summary>

| Name | HuggingFace | Parameter | Storage Space |
| - | - | - | - |
| `mm_sd_v14.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/mm_sd_v14.ckpt) | 417 M | 1.6 GB |
| `mm_sd_v15.ckpt` | [Link](https://huggingface.co/guoyww/animatediff/blob/main/mm_sd_v15.ckpt) | 417 M | 1.6 GB |

</details>
</details>

## Training

Please check [Steps for Training](__assets__/docs/animatediff.md) for details.

## Related Resources

- AnimateDiff for Stable Diffusion WebUI: [sd-webui-animatediff](https://github.com/continue-revolution/sd-webui-animatediff) (by [@continue-revolution](https://github.com/continue-revolution))
- AnimateDiff for ComfyUI: [ComfyUI-AnimateDiff-Evolved](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved) (by [@Kosinkadink](https://github.com/Kosinkadink))
- Google Colab: [Colab](https://colab.research.google.com/github/camenduru/AnimateDiff-colab/blob/main/AnimateDiff_colab.ipynb) (by [@camenduru](https://github.com/camenduru))

## Disclaimer

This project is released for academic use.
We disclaim responsibility for user-generated content.
Please also be advised that our only official websites are https://github.com/guoyww/AnimateDiff and https://animatediff.github.io; all other websites are NOT associated with us.

## Contact Us
Yuwei Guo: [guoyw@ie.cuhk.edu.hk](mailto:guoyw@ie.cuhk.edu.hk)
Ceyuan Yang: [limbo0066@gmail.com](mailto:limbo0066@gmail.com)
Bo Dai: [doubledaibo@gmail.com](mailto:doubledaibo@gmail.com)

## BibTeX

```
@article{guo2023animatediff,
  title={AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning},
  author={Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Liang, Zhengyang and Wang, Yaohui and Qiao, Yu and Agrawala, Maneesh and Lin, Dahua and Dai, Bo},
  journal={International Conference on Learning Representations},
  year={2024}
}

@article{guo2023sparsectrl,
  title={SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models},
  author={Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Agrawala, Maneesh and Lin, Dahua and Dai, Bo},
  journal={arXiv preprint arXiv:2311.16933},
  year={2023}
}
```

## Acknowledgements

Codebase built upon [Tune-a-Video](https://github.com/showlab/Tune-A-Video).