From 454f493dee346e8148455ca933f5c370ad576d50 Mon Sep 17 00:00:00 2001
From: ray
Date: Thu, 20 Jul 2023 12:58:29 -0700
Subject: [PATCH] update "usage without coding", fix hyperlink

---
 README.md | 27 ++++++++++++++++++++-------
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index 88f97fb..45d6761 100644
--- a/README.md
+++ b/README.md
@@ -16,27 +16,41 @@ Bo Dai

[Arxiv Report](https://arxiv.org/abs/2307.04725) | [Project Page](https://animatediff.github.io/)

-## Todo
-- [x] Code Release
-- [x] Arxiv Report
-- [x] GPU Memory Optimization
-- [x] Gradio Interface
-- [x] A1111 WebUI Extension (contributed by [@continue-revolution](https://github.com/continue-revolution), see [sd-webui-animatediff](https://github.com/continue-revolution/sd-webui-animatediff))
+## Features
+- GPU Memory Optimization: ~12GB VRAM for inference
+- User Interface: [Gradio](#gradio-demo), A1111 WebUI Extension [sd-webui-animatediff](https://github.com/continue-revolution/sd-webui-animatediff) (by [@continue-revolution](https://github.com/continue-revolution))

## Common Issues
Installation
+ Please ensure [xformers](https://github.com/facebookresearch/xformers) is installed; it is used to reduce inference memory.
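+A minimal sketch of how xformers attention is typically enabled on a diffusers-style pipeline (the `enable_xformers_memory_efficient_attention` method comes from the diffusers API; the helper names here are illustrative, not part of AnimateDiff):

```python
import importlib.util


def xformers_available() -> bool:
    """Return True if the xformers package can be imported."""
    return importlib.util.find_spec("xformers") is not None


def enable_memory_efficient_attention(pipe) -> bool:
    """Enable xformers attention on a pipeline if available.

    Returns True on success, False when xformers is not installed,
    so callers can fall back to standard attention gracefully.
    """
    if not xformers_available():
        return False
    pipe.enable_xformers_memory_efficient_attention()
    return True
```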
+
Various resolution or number of frames
+ Currently, we recommend generating animations with 16 frames at 512 resolution, which matches our training settings. Other resolutions or frame counts may degrade quality to varying degrees.
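+A small sketch of a settings check against the recommended training configuration above (16 frames, 512x512); the function and constant names are hypothetical, not part of the AnimateDiff API:

```python
TRAINED_FRAMES = 16
TRAINED_RESOLUTION = 512


def check_settings(frames, width, height):
    """Return warnings for settings that deviate from the training setup."""
    warnings = []
    if frames != TRAINED_FRAMES:
        warnings.append(
            f"frames={frames} differs from trained {TRAINED_FRAMES}; quality may drop"
        )
    if width != TRAINED_RESOLUTION or height != TRAINED_RESOLUTION:
        warnings.append(
            f"resolution {width}x{height} differs from trained "
            f"{TRAINED_RESOLUTION}x{TRAINED_RESOLUTION}; quality may drop"
        )
    return warnings
```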
+ +
How to use it without any coding

+1) Get LoRA models: train a LoRA model with [A1111](https://github.com/continue-revolution/sd-webui-animatediff) on a collection of your own favorite images (e.g., tutorials in [English](https://www.youtube.com/watch?v=mfaqqL5yOO4), [Japanese](https://www.youtube.com/watch?v=N1tXVR9lplM), [Chinese](https://www.bilibili.com/video/BV1fs4y1x7p2/)),
+or download LoRA models from [Civitai](https://civitai.com/).
+
+2) Animate LoRA models: use the Gradio interface or A1111
+(e.g., tutorials in [English](https://github.com/continue-revolution/sd-webui-animatediff), [Japanese](https://www.youtube.com/watch?v=zss3xbtvOWw), [Chinese](https://941ai.com/sd-animatediff-webui-1203.html)).
+
+3) Be creative together with other techniques, such as super resolution, frame interpolation, and music generation.
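+Step 3 mentions frame interpolation; as a toy illustration of the idea, here is a linear blend between consecutive frames (real interpolators such as RIFE or FILM are learning-based and far better; frames are represented as flat lists of floats purely for the sketch):

```python
def lerp(a, b, t):
    """Linearly blend two equally sized frames at ratio t in [0, 1]."""
    return [x + (y - x) * t for x, y in zip(a, b)]


def interpolate_frames(frames, factor=2):
    """Insert factor-1 blended frames between each consecutive pair.

    A clip of n frames becomes (n - 1) * factor + 1 frames.
    """
    out = []
    for f0, f1 in zip(frames, frames[1:]):
        out.append(f0)
        for k in range(1, factor):
            out.append(lerp(f0, f1, k / factor))
    out.append(frames[-1])
    return out
```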
+ +
Animating a given image
+ We agree that animating a given image is an appealing feature, and we will try to support it officially in the future. For now, you may enjoy community efforts such as [talesofai](https://github.com/talesofai/AnimateDiff).
@@ -49,7 +63,6 @@ Contributions are always welcome!! The dev branch is for community

## Setup for Inference

### Prepare Environment
-~~Our approach takes around 60 GB GPU memory to inference. NVIDIA A100 is recommanded.~~
***We updated our inference code with xformers and a sequential decoding trick. Now AnimateDiff requires only ~12GB VRAM for inference and runs on a single RTX 3090!***