## Steps for Training
### Dataset
Before training, download the video files and the `.csv` annotations of [WebVid10M](https://maxbain.com/webvid-dataset/) to the local machine.
Note that our example training script requires all the videos to be saved in a single folder. You may change this by modifying `animatediff/data/dataset.py`.
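Before launching a long run, it can help to confirm that every clip referenced by the annotations is actually present in the flat video folder. The sketch below is one way to do that; the `videoid` column name and the `<videoid>.mp4` naming scheme are assumptions about the WebVid CSV, so adjust them to match your downloaded files.
```
# Hypothetical sanity check: confirm every clip listed in the WebVid CSV
# exists as <videoid>.mp4 in the single video folder the training script expects.
# The "videoid" column name is an assumption -- adjust to your CSV header.
import os
import pandas as pd

csv_path = "/path/to/annotations.csv"       # replace with your .csv annotation file
video_folder = "/path/to/videos"            # replace with your video folder

df = pd.read_csv(csv_path)
missing = [
    vid for vid in df["videoid"].astype(str)
    if not os.path.isfile(os.path.join(video_folder, f"{vid}.mp4"))
]
print(f"{len(df)} annotations, {len(missing)} videos missing")
```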
### Configuration
After preparing the dataset, update the data paths below in the `.yaml` config files in the `configs/training/` folder:
```
train_data:
  csv_path:     [Replace with .csv Annotation File Path]
  video_folder: [Replace with Video Folder Path]
  sample_size:  256
```
Other training parameters (learning rate, epochs, validation settings, etc.) are also included in the config files.
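If you want to confirm that the paths resolve before launching a job, the config can be loaded and checked with a few lines of Python. This is only a sketch: it assumes the `.yaml` files follow the `train_data` layout shown above and uses OmegaConf for parsing, which may differ from the project's own loading code.
```
# Minimal sketch: load a training config and check the data paths before launching.
# Assumes the config follows the train_data layout shown above.
import os
from omegaconf import OmegaConf

cfg = OmegaConf.load("configs/training/v1/training.yaml")
assert os.path.isfile(cfg.train_data.csv_path), "csv_path does not exist"
assert os.path.isdir(cfg.train_data.video_folder), "video_folder does not exist"
print("sample_size:", cfg.train_data.sample_size)
```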
### Training
To fine-tune the UNet's image layers:
```
torchrun --nnodes=1 --nproc_per_node=1 train.py --config configs/training/v1/image_finetune.yaml
```
To train the motion modules:
```
torchrun --nnodes=1 --nproc_per_node=1 train.py --config configs/training/v1/training.yaml
```
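Both commands launch a single-process run on one machine. `torchrun` spawns one training process per `--nproc_per_node`, so on a multi-GPU machine you can raise that value to the number of available GPUs (and adjust `--nnodes` for multi-node setups); whether the effective batch size then needs retuning depends on the settings in your config.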