## 🐸Coqui.ai News
- 📣 You can use [~1100 Fairseq models](https://github.com/facebookresearch/fairseq/tree/main/examples/mms) with 🐸TTS.
- 📣 🐸TTS now supports 🐢Tortoise with faster inference.
- 📣 The **Coqui Studio API** has landed in 🐸TTS. - [Example](https://github.com/coqui-ai/TTS/blob/dev/README.md#-python-api)
- 📣 The [**Coqui Studio API**](https://docs.coqui.ai/docs) is live.
- 📣 Voice generation with prompts - **Prompt to Voice** - is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin)!! - [Blog Post](https://coqui.ai/blog/tts/prompt-to-voice)
- 📣 Voice generation with fusion - **Voice fusion** - is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin).
- 📣 Voice cloning is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin).
## <img src="https://raw.githubusercontent.com/coqui-ai/TTS/main/images/coqui-log-green-TTS.png" height="56"/>
🐸TTS is a library for advanced Text-to-Speech generation. It's built on the latest research and designed to achieve the best trade-off among ease of training, speed, and quality.

🐸TTS comes with pretrained models and tools for measuring dataset quality, and it is already used in **20+ languages** for products and research projects.
[](https://discord.gg/5eXr5seRrv)
[](https://opensource.org/licenses/MPL-2.0)
[](https://badge.fury.io/py/TTS)
[](https://github.com/coqui-ai/TTS/blob/master/CODE_OF_CONDUCT.md)
[](https://pepy.tech/project/tts)
[](https://zenodo.org/badge/latestdoi/265612440)
[](https://tts.readthedocs.io/en/latest/)

📰 [**Subscribe to 🐸Coqui.ai Newsletter**](https://coqui.ai/?subscription=true)

📢 [English Voice Samples](https://erogol.github.io/ddc-samples/) and [SoundCloud playlist](https://soundcloud.com/user-565970875/pocket-article-wavernn-and-tacotron2)

📄 [Text-to-Speech paper collection](https://github.com/erogol/TTS-papers)
<img src="https://static.scarf.sh/a.png?x-pxid=cf317fe7-2188-4721-bc01-124bb5d5dbb2" />
## 💬 Where to ask questions
Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it.

| Type                            | Platforms                                |
| ------------------------------- | ---------------------------------------- |
| 🚨 **Bug Reports**              | [GitHub Issue Tracker]                   |
| 🎁 **Feature Requests & Ideas** | [GitHub Issue Tracker]                   |
| 👩‍💻 **Usage Questions**          | [GitHub Discussions]                     |
| 🗯 **General Discussion**       | [GitHub Discussions] or [Discord]        |

[github issue tracker]: https://github.com/coqui-ai/tts/issues
[github discussions]: https://github.com/coqui-ai/TTS/discussions
[discord]: https://discord.gg/5eXr5seRrv
## 🔗 Links and Resources
| Type | Links |
| ------------------------------- | --------------------------------------- |
| 💼 **Documentation**            | [ReadTheDocs](https://tts.readthedocs.io/en/latest/)                     |
| 💾 **Installation**             | [TTS/README.md](https://github.com/coqui-ai/TTS/tree/dev#install-tts)    |
| 👩‍💻 **Contributing**             | [CONTRIBUTING.md](https://github.com/coqui-ai/TTS/blob/main/CONTRIBUTING.md) |
| 📌 **Road Map**                 | [Main Development Plans](https://github.com/coqui-ai/TTS/issues/378)     |
| 🚀 **Released Models**          | [TTS Releases](https://github.com/coqui-ai/TTS/releases) and [Experimental Models](https://github.com/coqui-ai/TTS/wiki/Experimental-Released-Models) |

## 🥇 TTS Performance
<p align="center"><img src="https://raw.githubusercontent.com/coqui-ai/TTS/main/images/TTS-performance.png" width="800" /></p>

Underlined "TTS*" and "Judy*" are 🐸TTS models.
<!-- [Details... ](https://github.com/coqui-ai/TTS/wiki/Mean-Opinion-Score-Results ) -->
## Features
- High-performance Deep Learning models for Text2Speech tasks.
- Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech).
- Speaker Encoder to compute speaker embeddings efficiently.
- Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN).
- Fast and efficient model training.
- Detailed training logs on the terminal and Tensorboard.
- Support for Multi-speaker TTS.
- Efficient, flexible, lightweight but feature-complete `Trainer API` (see the training sketch after this list).
- Released and ready-to-use models.
- Tools to curate Text2Speech datasets under `dataset_analysis`.
- Utilities to use and test your models.
- Modular (but not too much) code base enabling easy implementation of new ideas.
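
To give a feel for the `Trainer API`, here is a minimal training sketch adapted from the LJSpeech GlowTTS recipe shipped in this repository (`recipes/ljspeech/glow_tts`). Treat it as a sketch rather than a reference: the dataset path is a placeholder, and config field names can shift between 🐸TTS versions.

```python
# A minimal sketch of training GlowTTS with the Trainer API, adapted from
# recipes/ljspeech/glow_tts in this repo. Paths are placeholders for your data.
import os

from trainer import Trainer, TrainerArgs

from TTS.tts.configs.glow_tts_config import GlowTTSConfig
from TTS.tts.configs.shared_configs import BaseDatasetConfig
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.glow_tts import GlowTTS
from TTS.tts.utils.text.tokenizer import TTSTokenizer
from TTS.utils.audio import AudioProcessor

output_path = "tts_train_dir"  # where checkpoints and logs are written
dataset_config = BaseDatasetConfig(
    formatter="ljspeech", meta_file_train="metadata.csv", path=os.path.join(output_path, "LJSpeech-1.1/")
)
config = GlowTTSConfig(
    batch_size=32,
    eval_batch_size=16,
    run_eval=True,
    epochs=1000,
    print_step=25,
    output_path=output_path,
    datasets=[dataset_config],
)
ap = AudioProcessor.init_from_config(config)  # audio feature extraction settings
tokenizer, config = TTSTokenizer.init_from_config(config)  # text-to-ID conversion
train_samples, eval_samples = load_tts_samples(dataset_config, eval_split=True)
model = GlowTTS(config, ap, tokenizer, speaker_manager=None)
trainer = Trainer(
    TrainerArgs(), config, output_path,
    model=model, train_samples=train_samples, eval_samples=eval_samples
)
trainer.fit()
```
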
## Implemented Models
### Spectrogram models
- Tacotron: [paper](https://arxiv.org/abs/1703.10135)
- Tacotron2: [paper](https://arxiv.org/abs/1712.05884)
- Glow-TTS: [paper](https://arxiv.org/abs/2005.11129)
- Speedy-Speech: [paper](https://arxiv.org/abs/2008.03802)
- Align-TTS: [paper](https://arxiv.org/abs/2003.01950)
- FastPitch: [paper](https://arxiv.org/pdf/2006.06873.pdf)
- FastSpeech: [paper](https://arxiv.org/abs/1905.09263)
- FastSpeech2: [paper](https://arxiv.org/abs/2006.04558)
- SC-GlowTTS: [paper](https://arxiv.org/abs/2104.05557)
- Capacitron: [paper](https://arxiv.org/abs/1906.03402)
- OverFlow: [paper](https://arxiv.org/abs/2211.06892)
- Neural HMM TTS: [paper](https://arxiv.org/abs/2108.13320)

### End-to-End Models
- VITS: [paper](https://arxiv.org/pdf/2106.06103)
- YourTTS: [paper](https://arxiv.org/abs/2112.02418)
- Tortoise: [orig. repo](https://github.com/neonbjb/tortoise-tts)

### Attention Methods
- Guided Attention: [paper](https://arxiv.org/abs/1710.08969)
- Forward Backward Decoding: [paper](https://arxiv.org/abs/1907.09006)
- Graves Attention: [paper](https://arxiv.org/abs/1910.10288)
- Double Decoder Consistency: [blog](https://erogol.com/solving-attention-problems-of-tts-models-with-double-decoder-consistency/)
- Dynamic Convolutional Attention: [paper](https://arxiv.org/pdf/1910.10288.pdf)
- Alignment Network: [paper](https://arxiv.org/abs/2108.10447)

### Speaker Encoder
- GE2E: [paper](https://arxiv.org/abs/1710.10467)
- Angular Loss: [paper](https://arxiv.org/pdf/2003.11982.pdf)

### Vocoders
- MelGAN: [paper](https://arxiv.org/abs/1910.06711)
- MultiBandMelGAN: [paper](https://arxiv.org/abs/2005.05106)
- ParallelWaveGAN: [paper](https://arxiv.org/abs/1910.11480)
- GAN-TTS discriminators: [paper](https://arxiv.org/abs/1909.11646)
- WaveRNN: [origin](https://github.com/fatchord/WaveRNN/)
- WaveGrad: [paper](https://arxiv.org/abs/2009.00713)
- HiFiGAN: [paper](https://arxiv.org/abs/2010.05646)
- UnivNet: [paper](https://arxiv.org/abs/2106.07889)

### Voice Conversion
- FreeVC: [paper](https://arxiv.org/abs/2210.15418)

You can also help us implement more models.
## Install TTS
🐸TTS is tested on Ubuntu 18.04 with **python >= 3.7, < 3.11**.

If you are only interested in [synthesizing speech ](https://tts.readthedocs.io/en/latest/inference.html ) with the released 🐸TTS models, installing from PyPI is the easiest option.
```bash
pip install TTS
```
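
After installing, you can run a quick smoke test. A minimal sketch, assuming one of the released single-speaker English models (`tts_models/en/ljspeech/tacotron2-DDC`) is still available in the model list:

```python
# Quick install check: load a released model and synthesize one sentence.
from TTS.api import TTS

tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC", progress_bar=False, gpu=False)
tts.tts_to_file(text="Installation works!", file_path="check.wav")
```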
If you plan to code or train models, clone 🐸TTS and install it locally.
```bash
git clone https://github.com/coqui-ai/TTS
pip install -e .[all,dev,notebooks] # Select the relevant extras
```
If you are on Ubuntu (Debian), you can also run the following commands to install:
```bash
$ make system-deps # intended to be used on Ubuntu (Debian). Let us know if you have a different OS.
$ make install
```
If you are on Windows, 👑@GuyPaddock wrote installation instructions [here](https://stackoverflow.com/questions/66726331/how-can-i-run-mozilla-tts-coqui-tts-training-with-cuda-on-a-windows-system).
## Docker Image
You can also try TTS without installing it by using the Docker image. Simply run the following command to start a container, then run `tts` or the demo server inside it:
```bash
docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu
python3 TTS/server/server.py --list_models  # To get the list of available models
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits  # To start a server
```
You can then enjoy the TTS server [here](http://[::1]:5002/).

More details about the Docker images (like GPU support) can be found [here](https://tts.readthedocs.io/en/latest/docker_images.html).
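
Once the server is up, you can also request audio over plain HTTP. A small client sketch, assuming the demo server above is listening on `localhost:5002` and exposes its `/api/tts` endpoint:

```python
# Fetch synthesized speech from the demo server and save it to a wav file.
import requests

resp = requests.get(
    "http://localhost:5002/api/tts",
    params={"text": "Hello from the Docker image."},
)
resp.raise_for_status()
with open("docker_output.wav", "wb") as f:
    f.write(resp.content)
```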
## Synthesizing speech with 🐸TTS
### 🐍 Python API
```python
from TTS.api import TTS

OUTPUT_PATH = "output.wav"  # target file used by the examples below
# Running a multi-speaker and multi-lingual model
# List available 🐸TTS models and choose the first one
model_name = TTS.list_models()[0]
# Init TTS
tts = TTS(model_name)
# Run TTS
# ❗ Since this model is multi-speaker and multi-lingual, we must set the target speaker and the language
# Text to speech with a numpy output
wav = tts.tts("This is a test! This is also a test!!", speaker=tts.speakers[0], language=tts.languages[0])
# Text to speech to a file
tts.tts_to_file(text="Hello world!", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav")
# Running a single speaker model
# Init TTS with the target model name
tts = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=False, gpu=False)
# Run TTS
tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path=OUTPUT_PATH)
# Example voice cloning with YourTTS in English, French and Portuguese
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False, gpu=True)
tts.tts_to_file("This is voice cloning.", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav")
tts.tts_to_file("C'est le clonage de la voix.", speaker_wav="my/cloning/audio.wav", language="fr-fr", file_path="output.wav")
tts.tts_to_file("Isso é clonagem de voz.", speaker_wav="my/cloning/audio.wav", language="pt-br", file_path="output.wav")
# Example voice conversion: convert the speaker of `source_wav` to the speaker of `target_wav`
tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False, gpu=True)
tts.voice_conversion_to_file(source_wav="my/source.wav", target_wav="my/target.wav", file_path="output.wav")
# Example voice cloning with a single-speaker TTS model combined with the voice conversion model.
# This way, you can clone voices by using any model in 🐸TTS.
tts = TTS("tts_models/de/thorsten/tacotron2-DDC")
tts.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",
    file_path="output.wav"
)
# Example text to speech using [🐸Coqui Studio](https://coqui.ai) models.
# You can use all of your available speakers in the studio.
# [🐸Coqui Studio](https://coqui.ai) API token is required. You can get it from the [account page](https://coqui.ai/account).
# You should set the `COQUI_STUDIO_TOKEN` environment variable to use the API token.
# If you have a valid API token set you will see the studio speakers as separate models in the list.
# The name format is coqui_studio/en/<studio_speaker_name>/coqui_studio
models = TTS().list_models()
# Init TTS with the target studio speaker
tts = TTS(model_name="coqui_studio/en/Torcull Diarmuid/coqui_studio", progress_bar=False, gpu=False)
# Run TTS
tts.tts_to_file(text="This is a test.", file_path=OUTPUT_PATH)
# Run TTS with emotion and speed control
tts.tts_to_file(text="This is a test.", file_path=OUTPUT_PATH, emotion="Happy", speed=1.5)
# Example text to speech using **Fairseq models in ~1100 languages** 🤯.
# For these models use the following name format: `tts_models/<lang-iso_code>/fairseq/vits`.
# You can find the list of language ISO codes [here](https://dl.fbaipublicfiles.com/mms/tts/all-tts-languages.html) and learn about the Fairseq models [here](https://github.com/facebookresearch/fairseq/tree/main/examples/mms).

# TTS with on-the-fly voice conversion
api = TTS("tts_models/deu/fairseq/vits")
api.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",
    file_path="output.wav"
)
```
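
If you keep the raw waveform returned by `tts.tts(...)` (the `wav` array above), you can also write it to disk yourself. A minimal sketch, assuming `soundfile` is installed and that the wrapped synthesizer exposes `output_sample_rate` (true at the time of writing, but not a stable API guarantee):

```python
# Write the numpy waveform returned by tts.tts() to a wav file manually.
import soundfile as sf

sf.write("manual_output.wav", wav, tts.synthesizer.output_sample_rate)
```
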
### Command line `tts`
#### Single Speaker Models
- List provided models:
```
$ tts --list_models
```
- Get model info (for both tts_models and vocoder_models):
- Query by type/name:
The model_info_by_name uses the model name exactly as it appears in the output of --list_models.
```
$ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
```
For example:
```
$ tts --model_info_by_name tts_models/tr/common-voice/glow-tts
```
```
$ tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2
```
- Query by type/idx:
The <model_query_idx> is the corresponding index from the --list_models output.
```
$ tts --model_info_by_idx "<model_type>/<model_query_idx>"
```
For example:
```
$ tts --model_info_by_idx tts_models/3
```
- Run TTS with default models:
```
$ tts --text "Text for TTS" --out_path output/path/speech.wav
```
- Run a TTS model with its default vocoder model:
```
$ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
```
For example:
```
$ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --out_path output/path/speech.wav
```
- Run with specific TTS and vocoder models from the list:
```
$ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --vocoder_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
```
For example:
```
$ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --vocoder_name "vocoder_models/en/ljspeech/univnet" --out_path output/path/speech.wav
```
- Run your own TTS model (Using Griffin-Lim Vocoder):
```
$ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav
```
- Run your own TTS and Vocoder models:
```
$ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav
--vocoder_path path/to/vocoder.pth --vocoder_config_path path/to/vocoder_config.json
```
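
The same can be done from Python through the `Synthesizer` class. A hedged sketch: the keyword names follow `TTS.utils.synthesizer.Synthesizer` at the time of writing and may differ between versions; all paths are placeholders for your own checkpoints.

```python
# Load your own TTS and vocoder checkpoints from Python instead of the CLI.
from TTS.utils.synthesizer import Synthesizer

synthesizer = Synthesizer(
    tts_checkpoint="path/to/model.pth",
    tts_config_path="path/to/config.json",
    vocoder_checkpoint="path/to/vocoder.pth",
    vocoder_config="path/to/vocoder_config.json",
)
wav = synthesizer.tts("Text for TTS")
synthesizer.save_wav(wav, "output/path/speech.wav")
```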
#### Multi-speaker Models
- List the available speakers and choose a <speaker_id> among them:
```
$ tts --model_name "<model_type>/<language>/<dataset>/<model_name>" --list_speaker_idxs
```
- Run the multi-speaker TTS model with the target speaker ID:
```
$ tts --text "Text for TTS." --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" --speaker_idx <speaker_id>
```
- Run your own multi-speaker TTS model:
```
$ tts --text "Text for TTS" --out_path output/path/speech.wav --model_path path/to/model.pth --config_path path/to/config.json --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id>
```
## Directory Structure
```
|- notebooks/       (Jupyter Notebooks for model evaluation, parameter selection and data analysis.)
|- utils/           (common utilities.)
|- TTS
    |- bin/             (folder for all the executables.)
        |- train*.py        (train your target model.)
        |- ...
    |- tts/             (text to speech models)
        |- layers/          (model layer definitions)
        |- models/          (model definitions)
        |- utils/           (model specific utilities.)
    |- speaker_encoder/ (Speaker Encoder models.)
        |- (same)
    |- vocoder/         (Vocoder models.)
        |- (same)
```