diff --git a/README.md b/README.md
index 64d0586..5b0aac9 100644
--- a/README.md
+++ b/README.md
@@ -27,10 +27,12 @@ Bo Dai
 ## Setup for Inference
 
 ### Prepare Environment
-Our approach takes around 60 GB GPU memory to inference. NVIDIA A100 is recommanded.
+~~Our approach takes around 60 GB GPU memory to inference. NVIDIA A100 is recommanded.~~
+
+***We updated our inference code with xformers and a sequential decoding trick. Now AnimateDiff takes only ~12GB VRAM to inference, and can be run on a single RTX3090 !!***
 ```
-git clone https://github.com/guoyww/animatediff.git
+git clone https://github.com/guoyww/AnimateDiff.git
 cd AnimateDiff
 
 conda env create -f environment.yaml
@@ -107,7 +109,7 @@ Here we demonstrate several best results we found in our experiments or generate

-Model:holding_sign (samples are contributed by CivitAI artists)
+Model:holding_sign, etc. (samples are contributed by CivitAI artists)