mirror of https://github.com/guoyww/AnimateDiff.git (synced 2025-12-16 16:38:01 +01:00)
update readme
README.md (231 lines changed)
@@ -19,6 +19,8 @@ Bo Dai
[OpenXLab](https://openxlab.org.cn/apps/detail/Masbfca/AnimateDiff)
[Hugging Face Spaces](https://huggingface.co/spaces/guoyww/AnimateDiff)
## Next
A version with better controllability and quality is coming soon. Stay tuned.
## Features
- **[2023/09/25]** Release **MotionLoRA** and its model zoo, **enabling camera movement controls**! Please download the MotionLoRA models (**74 MB per model**, available at [Google Drive](https://drive.google.com/drive/folders/1EqLC65eR1-W-sGD0Im7fkED6c8GkiNFI?usp=sharing) / [HuggingFace](https://huggingface.co/guoyww/animatediff) / [CivitAI](https://civitai.com/models/108836/animatediff-motion-modules)) and save them to the `models/MotionLoRA` folder. Example:
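A minimal sketch of how a downloaded MotionLoRA is wired into a prompt config (the `motion_module_lora_configs` format is documented in the setup guide below; the filename here is an assumed example):

```
motion_module_lora_configs:
  - path:  "models/MotionLoRA/v2_lora_ZoomIn.ckpt"  # assumed example filename
    alpha: 1.0                                      # MotionLoRA strength
```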
@@ -78,12 +80,23 @@ Bo Dai
- GPU Memory Optimization: ~12 GB VRAM for inference
## Quick Demo

User Interface developed by community:
- A1111 Extension [sd-webui-animatediff](https://github.com/continue-revolution/sd-webui-animatediff) (by [@continue-revolution](https://github.com/continue-revolution))
- ComfyUI Extension [ComfyUI-AnimateDiff-Evolved](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved) (by [@Kosinkadink](https://github.com/Kosinkadink))
- [Gradio](#gradio-demo)
- Google Colab: [Colab](https://colab.research.google.com/github/camenduru/AnimateDiff-colab/blob/main/AnimateDiff_colab.ipynb) (by [@camenduru](https://github.com/camenduru))
We also provide a Gradio demo to make AnimateDiff easier to use. To launch it, run the following commands:
```
conda activate animatediff
python app.py
```

By default, the demo runs at `localhost:7860`.
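If port 7860 is already taken, Gradio honors the `GRADIO_SERVER_PORT` environment variable (assuming `app.py` launches with Gradio's default settings), e.g.:

```
GRADIO_SERVER_PORT=7861 python app.py
```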
<br><img src="__assets__/figs/gradio.jpg" style="width: 50em; margin-top: 1em">
## Model Zoo
<details open>
@@ -151,219 +164,11 @@ We totally agree that animating a given image is an appealing feature, which we
Contributions are always welcome! The <code>dev</code> branch is for community contributions. As for the main branch, we would like to align it with the original technical report :)
</details>
## Training and inference
Please refer to [ANIMATEDIFF](./__assets__/docs/animatediff.md) for the detailed setup.
## Gallery
We collect several generated results in [GALLERY](./__assets__/docs/gallery.md).
## BibTeX
__assets__/docs/animatediff.md (new file, 112 lines)
@@ -0,0 +1,112 @@
# AnimateDiff: training and inference setup

## Setups for Inference

### Prepare Environment

***We updated our inference code with xformers and a sequential decoding trick. AnimateDiff now needs only ~12 GB of VRAM for inference and runs on a single RTX 3090!***
```
git clone https://github.com/guoyww/AnimateDiff.git
cd AnimateDiff

conda env create -f environment.yaml
conda activate animatediff
```
### Download Base T2I & Motion Module Checkpoints

We provide two versions of our Motion Module: one trained on stable-diffusion-v1-4 and one finetuned on stable-diffusion-v1-5. We recommend trying both for best results.

```
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 models/StableDiffusion/

bash download_bashscripts/0-MotionModule.sh
```

You may also download the motion module checkpoints directly from [Google Drive](https://drive.google.com/drive/folders/1EqLC65eR1-W-sGD0Im7fkED6c8GkiNFI?usp=sharing) / [HuggingFace](https://huggingface.co/guoyww/animatediff) / [CivitAI](https://civitai.com/models/108836/animatediff-motion-modules), then put them in the `models/Motion_Module/` folder.
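As an alternative sketch, recent versions of `huggingface_hub` ship a CLI that can fetch individual checkpoints; the filenames below are the ones referenced later in this guide:

```
huggingface-cli download guoyww/animatediff mm_sd_v14.ckpt mm_sd_v15.ckpt --local-dir models/Motion_Module
```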
### Prepare Personalized T2I

Here we provide inference configs for eight demo T2I models from CivitAI. You may run the following bash scripts to download these checkpoints:

```
bash download_bashscripts/1-ToonYou.sh
bash download_bashscripts/2-Lyriel.sh
bash download_bashscripts/3-RcnzCartoon.sh
bash download_bashscripts/4-MajicMix.sh
bash download_bashscripts/5-RealisticVision.sh
bash download_bashscripts/6-Tusun.sh
bash download_bashscripts/7-FilmVelvia.sh
bash download_bashscripts/8-GhibliBackground.sh
```
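To fetch all of them in one go, a plain shell loop works as well (note it also re-runs `0-MotionModule.sh`, which is harmless):

```
for script in download_bashscripts/*.sh; do bash "$script"; done
```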
### Inference

After downloading the above personalized T2I checkpoints, run the following commands to generate animations. The results will automatically be saved to the `samples/` folder.

```
python -m scripts.animate --config configs/prompts/1-ToonYou.yaml
python -m scripts.animate --config configs/prompts/2-Lyriel.yaml
python -m scripts.animate --config configs/prompts/3-RcnzCartoon.yaml
python -m scripts.animate --config configs/prompts/4-MajicMix.yaml
python -m scripts.animate --config configs/prompts/5-RealisticVision.yaml
python -m scripts.animate --config configs/prompts/6-Tusun.yaml
python -m scripts.animate --config configs/prompts/7-FilmVelvia.yaml
python -m scripts.animate --config configs/prompts/8-GhibliBackground.yaml
```
To generate animations with a new DreamBooth/LoRA model, you may create a new `.yaml` config file in the following format:

```
NewModel:
  inference_config: "[path to motion module config file]"

  motion_module:
    - "models/Motion_Module/mm_sd_v14.ckpt"
    - "models/Motion_Module/mm_sd_v15.ckpt"

  motion_module_lora_configs:
    - path:  "[path to MotionLoRA model]"
      alpha: 1.0
    - ...

  dreambooth_path: "[path to your DreamBooth model .safetensors file]"
  lora_model_path: "[path to your LoRA model .safetensors file, leave it empty string if not needed]"

  steps:          25
  guidance_scale: 7.5

  prompt:
    - "[positive prompt]"

  n_prompt:
    - "[negative prompt]"
```
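For concreteness, a hypothetical filled-in config might look like the sketch below; every path and name here is a placeholder, so point them at your own files:

```
MyToonModel:
  inference_config: "configs/inference/inference-v1.yaml"   # assumed path to the repo's motion module config

  motion_module:
    - "models/Motion_Module/mm_sd_v15.ckpt"

  dreambooth_path: "models/DreamBooth_LoRA/my_toon_model.safetensors"  # placeholder
  lora_model_path: ""

  steps:          25
  guidance_scale: 7.5

  prompt:
    - "masterpiece, best quality, a girl walking in a rainy street"

  n_prompt:
    - "worst quality, low quality, watermark"
```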
Then run the following command:

```
python -m scripts.animate --config [path to the config file]
```
## Steps for Training

### Dataset

Before training, download the video files and the `.csv` annotations of [WebVid10M](https://maxbain.com/webvid-dataset/) to your local machine. Note that our example training script requires all the videos to be saved in a single folder; you may change this by modifying `animatediff/data/dataset.py`.
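A hedged sketch for flattening a nested WebVid download into the single folder the script expects (both paths are placeholders for your own layout):

```
# gather every clip into one flat folder, as animatediff/data/dataset.py expects
mkdir -p data/webvid/videos
find data/webvid/raw -name "*.mp4" -exec mv {} data/webvid/videos/ \;
```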
### Configuration

After dataset preparation, update the data paths in the `.yaml` config files in the `configs/training/` folder:

```
train_data:
  csv_path:     [Replace with .csv Annotation File Path]
  video_folder: [Replace with Video Folder Path]
  sample_size:  256
```

Other training parameters (lr, epochs, validation settings, etc.) are also included in the config files.
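Before launching a run, a quick sanity check that the paths you just entered exist can save a failed job (both paths are placeholders):

```
test -f /path/to/annotations.csv && echo "csv ok"
ls /path/to/videos | head   # should list your video files
```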
### Training

To train motion modules:

```
torchrun --nnodes=1 --nproc_per_node=1 train.py --config configs/training/training.yaml
```
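Since `torchrun` launches one process per GPU, scaling to several GPUs on one node is mostly a matter of raising `--nproc_per_node` (you may also want to revisit the per-GPU batch size and learning rate in the config):

```
# e.g. a single node with 8 GPUs
torchrun --nnodes=1 --nproc_per_node=8 train.py --config configs/training/training.yaml
```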
To finetune the UNet's image layers:

```
torchrun --nnodes=1 --nproc_per_node=1 train.py --config configs/training/image_finetune.yaml
```
__assets__/docs/gallery.md (new file, 93 lines)
@@ -0,0 +1,93 @@
# Gallery

Here we demonstrate several of the best results we found in our experiments.

<table class="center">
<tr>
<td><img src="../animations/model_01/01.gif"></td>
<td><img src="../animations/model_01/02.gif"></td>
<td><img src="../animations/model_01/03.gif"></td>
<td><img src="../animations/model_01/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/30240/toonyou">ToonYou</a></p>

<table>
<tr>
<td><img src="../animations/model_02/01.gif"></td>
<td><img src="../animations/model_02/02.gif"></td>
<td><img src="../animations/model_02/03.gif"></td>
<td><img src="../animations/model_02/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/4468/counterfeit-v30">Counterfeit V3.0</a></p>

<table>
<tr>
<td><img src="../animations/model_03/01.gif"></td>
<td><img src="../animations/model_03/02.gif"></td>
<td><img src="../animations/model_03/03.gif"></td>
<td><img src="../animations/model_03/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/4201/realistic-vision-v20">Realistic Vision V2.0</a></p>

<table>
<tr>
<td><img src="../animations/model_04/01.gif"></td>
<td><img src="../animations/model_04/02.gif"></td>
<td><img src="../animations/model_04/03.gif"></td>
<td><img src="../animations/model_04/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/43331/majicmix-realistic">majicMIX Realistic</a></p>

<table>
<tr>
<td><img src="../animations/model_05/01.gif"></td>
<td><img src="../animations/model_05/02.gif"></td>
<td><img src="../animations/model_05/03.gif"></td>
<td><img src="../animations/model_05/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/66347/rcnz-cartoon-3d">RCNZ Cartoon</a></p>

<table>
<tr>
<td><img src="../animations/model_06/01.gif"></td>
<td><img src="../animations/model_06/02.gif"></td>
<td><img src="../animations/model_06/03.gif"></td>
<td><img src="../animations/model_06/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/33208/filmgirl-film-grain-lora-and-loha">FilmVelvia</a></p>

#### Community Cases

Here are some samples contributed by community artists. Create a Pull Request if you would like to show your results here 😚.

<table>
<tr>
<td><img src="../animations/model_07/init.jpg"></td>
<td><img src="../animations/model_07/01.gif"></td>
<td><img src="../animations/model_07/02.gif"></td>
<td><img src="../animations/model_07/03.gif"></td>
<td><img src="../animations/model_07/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">
Character Model: <a href="https://civitai.com/models/13237/genshen-impact-yoimiya">Yoimiya</a>
(with an initial reference image; see this <a href="https://github.com/talesofai/AnimateDiff">WIP fork</a> for the extended implementation.)
</p>

<table>
<tr>
<td><img src="../animations/model_08/01.gif"></td>
<td><img src="../animations/model_08/02.gif"></td>
<td><img src="../animations/model_08/03.gif"></td>
<td><img src="../animations/model_08/04.gif"></td>
</tr>
</table>
<p style="margin-left: 2em; margin-top: -1em">
Character Model: <a href="https://civitai.com/models/9850/paimon-genshin-impact">Paimon</a>;
Pose Model: <a href="https://civitai.com/models/107295/or-holdingsign">Hold Sign</a></p>