mirror of
https://github.com/jasonppy/VoiceCraft.git
synced 2026-04-03 09:46:45 +02:00
Merged changes from upstream
# VoiceCraft: Zero-Shot Speech Editing and Text-to-Speech in the Wild
[Paper](https://jasonppy.github.io/assets/pdfs/VoiceCraft.pdf) [Demo](https://jasonppy.github.io/VoiceCraft_web/) [HuggingFace Spaces](https://huggingface.co/spaces/pyp1/VoiceCraft_gradio) [Colab](https://colab.research.google.com/drive/1IOjpglQyMTO2C3Y94LD9FY0Ocn-RJRg6?usp=sharing) [Replicate](https://replicate.com/cjwbw/voicecraft)
### TL;DR
VoiceCraft is a token infilling neural codec language model that achieves state-of-the-art performance on both speech editing and zero-shot text-to-speech (TTS) on in-the-wild data.
To clone or edit an unseen voice, VoiceCraft needs only a few seconds of reference.
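As a purely schematic illustration of the token-infilling idea (a toy sketch only; the real model autoregressively predicts neural codec tokens conditioned on the surrounding audio context, which this does not attempt to reproduce):

```python
# Toy sketch of masked-span infilling over a discrete token sequence.
# The "predict" function is a stand-in; VoiceCraft conditions a neural
# codec language model on the unmasked tokens to fill the span.
def infill(tokens, start, end, predict):
    """Replace tokens[start:end] with predictions conditioned on the context."""
    context = tokens[:start] + tokens[end:]
    filled = [predict(context, i) for i in range(end - start)]
    return tokens[:start] + filled + tokens[end:]

# Dummy predictor: echoes the last context token plus an offset.
dummy_predict = lambda context, i: context[-1] + i + 1

codec_tokens = [10, 11, 12, 13, 14, 15]
print(infill(codec_tokens, 2, 4, dummy_predict))  # [10, 11, 16, 17, 14, 15]
```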
## How to run inference
There are three ways (besides running Gradio in Colab):

1. More flexible inference beyond the Gradio UI in Google Colab; see [quickstart colab](#quickstart-colab).
2. With Docker; see [quickstart docker](#quickstart-docker).
3. Without Docker; see [environment setup](#environment-setup). You can also run Gradio locally if you choose this option.
When you are inside the Docker image or have installed all dependencies, check out [`inference_tts.ipynb`](./inference_tts.ipynb).

If you want to do model development such as training/finetuning, I recommend following [environment setup](#environment-setup) and [training](#training).
## News
:star: 04/11/2024: VoiceCraft Gradio is now available on HuggingFace Spaces [here](https://huggingface.co/spaces/pyp1/VoiceCraft_gradio)! Major thanks to [@zuev-stepan](https://github.com/zuev-stepan), [@Sewlell](https://github.com/Sewlell), [@pgsoar](https://github.com/pgosar), and [@Ph0rk0z](https://github.com/Ph0rk0z).

:star: 04/05/2024: I finetuned giga330M with the TTS objective on gigaspeech and 1/5 of librilight. Weights are [here](https://huggingface.co/pyp1/VoiceCraft/tree/main). Make sure the maximal prompt + generation length <= 16 seconds (due to our limited compute, we had to drop utterances longer than 16s from the training data). Even stronger models forthcoming, stay tuned!

:star: 03/28/2024: Model weights for giga330M and giga830M are up on HuggingFace🤗 [here](https://huggingface.co/pyp1/VoiceCraft/tree/main)!
## TODO
- [x] Codebase upload
- [x] Inference demo for speech editing and TTS
- [x] Training guidance
- [x] RealEdit dataset and training manifest
- [x] Model weights (giga330M.pth, giga830M.pth, and gigaHalfLibri330M_TTSEnhanced_max16s.pth)
- [x] Better guidance on training/finetuning
- [x] Colab notebooks
- [x] HuggingFace Spaces demo
- [ ] Command line
- [ ] Improve efficiency
## QuickStart Colab
:star: To try out speech editing or TTS inference with VoiceCraft, the simplest way is using Google Colab. Instructions to run are on the Colab notebooks themselves:

1. To try [Speech Editing](https://colab.research.google.com/drive/1FV7EC36dl8UioePY1xXijXTMl7X47kR_?usp=sharing)
2. To try [TTS Inference](https://colab.research.google.com/drive/1lch_6it5-JpXgAQlUTRRI2z2_rk5K67Z?usp=sharing)
## QuickStart Docker
:star: To try out TTS inference with VoiceCraft, you can also use Docker. Thanks to [@ubergarm](https://github.com/ubergarm) and [@jayc88](https://github.com/jay-c88) for making this happen.

Tested on Linux and Windows; it should work with any host that has Docker installed.
## Environment setup

```bash
pip install datasets==2.16.0
pip install torchmetrics==0.11.1
# install MFA for getting forced-alignment, this could take a few minutes
conda install -c conda-forge montreal-forced-aligner=2.2.17 openfst=1.8.2 kaldi=5.5.1068
# install MFA english dictionary and model
mfa model download dictionary english_us_arpa
mfa model download acoustic english_us_arpa
# pip install huggingface_hub
# conda install pocl  # the above gives a warning about installing pocl; not sure if it is really needed

# to run ipynb
```
If you have encountered version issues when running things, check out [environment setup](#environment-setup).
## Inference Examples
Check out [`inference_speech_editing.ipynb`](./inference_speech_editing.ipynb) and [`inference_tts.ipynb`](./inference_tts.ipynb).
## Gradio
### Run in colab

[Open in Colab](https://colab.research.google.com/drive/1IOjpglQyMTO2C3Y94LD9FY0Ocn-RJRg6?usp=sharing)
### Run locally
After environment setup, install additional dependencies:

```bash
apt-get install -y espeak espeak-data libespeak1 libespeak-dev
apt-get install -y festival*
apt-get install -y build-essential
apt-get install -y flac libasound2-dev libsndfile1-dev vorbis-tools
apt-get install -y libxml2-dev libxslt-dev zlib1g-dev
pip install -r gradio_requirements.txt
```
Run the Gradio server from the terminal or from [`gradio_app.ipynb`](./gradio_app.ipynb):

```bash
python gradio_app.py
```

It is ready to use at the [default URL](http://127.0.0.1:7860).
### How to use it
1. (optionally) Select models
2. Load models
3. Transcribe
4. (optionally) Tweak some parameters
5. Run
6. (optionally) Rerun part-by-part in Long TTS mode
### Some features
Smart transcript: write only what you want to generate

TTS mode: Zero-shot TTS

Edit mode: Speech editing

Long TTS mode: Easy TTS on long texts
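Long TTS mode presumably splits the transcript into sentence-sized chunks that are synthesized part by part. A minimal sketch of such chunking (the function and its `max_chars` parameter are hypothetical, not the app's actual logic):

```python
import re

def chunk_text(text, max_chars=80):
    """Greedily pack whole sentences into chunks of at most max_chars."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)  # current chunk is full; start a new one
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

text = "First sentence. Second one is a bit longer! Third? Fourth sentence ends here."
for chunk in chunk_text(text, max_chars=40):
    print(chunk)
```

Each chunk could then be passed to the TTS model separately, which is what the "rerun part-by-part" step above refers to.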
## Training
To train a VoiceCraft model, you need to prepare the following parts:

1. utterances and their transcripts
```bash
cd ./z_scripts
bash e830M.sh
```
It's the same procedure to prepare your own custom dataset.
## Finetuning
You also need to do steps 1-4 as in Training, and I recommend using AdamW for optimization if you finetune a pretrained model, for better stability. Check out the script `./z_scripts/e830M_ft.sh`.

If your dataset introduces new phonemes (which is very likely) that don't exist in the giga checkpoint, make sure you combine the original phonemes with the phonemes from your data when constructing the vocab. You also need to adjust `--text_vocab_size` and `--text_pad_token` so that the former is greater than or equal to your vocab size, and the latter has the same value as `--text_vocab_size` (i.e. `--text_pad_token` is always the last token). Also, since the text embedding is now of a different size, make sure you modify the weight-loading part so that it won't crash (you could skip loading `text_embedding`, or only load the existing part and randomly initialize the new entries).
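A minimal numpy sketch of the partial-loading idea above (the shapes and variable names are hypothetical; the real checkpoint is a PyTorch state dict and the loading happens in the training script):

```python
import numpy as np

old_vocab, dim = 80, 64                       # hypothetical checkpoint sizes
old_emb = np.random.randn(old_vocab, dim)     # pretrained text_embedding weights

new_vocab = 96                                # original + new phonemes (pad is the last id)
rng = np.random.default_rng(0)
new_emb = rng.standard_normal((new_vocab, dim)) * 0.02  # fresh small random init

# Keep the pretrained rows; new phoneme rows stay randomly initialized.
new_emb[:old_vocab] = old_emb
print(new_emb.shape)
```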
## License
The codebase is under CC BY-NC-SA 4.0 ([LICENSE-CODE](./LICENSE-CODE)), and the model weights are under the Coqui Public Model License 1.0.0 ([LICENSE-MODEL](./LICENSE-MODEL)). Note that we use some code from other repositories under different licenses: `./models/codebooks_patterns.py` is under the MIT license; `./models/modules`, `./steps/optim.py`, and `data/tokenizer.py` are under the Apache License, Version 2.0; and the phonemizer we used is under the GNU GPL 3.0 license.
<!-- How to use g2p to convert English text into an IPA phoneme sequence:
first install it with `pip install g2p`

```python
from g2p import make_g2p
transducer = make_g2p('eng', 'eng-ipa')
transducer("hello").output_string
# it will output: 'hʌloʊ'
```
-->
## Acknowledgement
We thank Feiteng for his [VALL-E reproduction](https://github.com/lifeiteng/vall-e), and we thank the audiocraft team for open-sourcing [encodec](https://github.com/facebookresearch/audiocraft).