# Mangio-RVC-Fork (Retrieval-based-Voice-Conversion) 💻
A fork of an easy-to-use SVC framework based on VITS with top1 retrieval 💯.💓 Please support the original RVC repository; without it, this fork obviously wouldn't have been possible.

The Mangio-RVC-Fork aims to enhance the features of the original RVC repo in my own way. Please note that this fork is NOT STABLE and was created for experimentation. Do not use this fork thinking it is a "better" version of the original repo; think of it as another "version" of the original repo.

This fork does not include a Google Colab notebook. If you want to use Google Colab, go to the original repository. For now, this fork is intended to be used on Paperspace and local machines.
Add me on Discord: Funky Town#2048. I am able to respond here and there.
Special thanks to Discord user @kalomaze#2983 for creating a temporary Colab notebook for this fork. Eventually, an official, more stable notebook will be included with this fork. Please use Paperspace instead if you can, as it is much more stable.
The original RVC Demo Video here!
Realtime Voice Conversion Software using RVC : w-okada/voice-changer
The dataset for the pre-training model uses nearly 50 hours of the high-quality, open-source VCTK dataset.
High-quality licensed song datasets will be added to the training set one after another for your use, without worrying about copyright infringement.
## Summary 📘
### Features that this fork (Mangio-RVC-Fork) has that the original repo doesn't ☑️
- Local inference with the conv2d 'Half' exception fix. Pass the --use_gfloat argument to infer-web.py to enable this fix.
- f0 Inference algorithm overhaul: 🌟
- Added the pyworld dio f0 method.
- Added the torchcrepe crepe f0 method. (Greatly increases pitch accuracy and stability.)
- Added the torchcrepe crepe-tiny model. (Faster at inference, but likely lower quality than the full crepe model.)
- Modifiable crepe_hop_length for the crepe algorithm via the web GUI.
- f0 crepe pitch extraction for training. 🌟 (EXPERIMENTAL) Works on Paperspace machines but not on local Mac/Windows machines. Potential memory leak; watch out.
- Paperspace integration 🌟
- Paperspace argument on infer-web.py (--paperspace) that shares a gradio link
- Make file for paperspace users
- Tensorboard access via Makefile (make tensorboard)
- Total epoch slider for training now goes up to 10,000 rather than just 1,000.
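The crepe_hop_length setting above trades time resolution for speed: each f0 frame covers hop_length / sample_rate seconds of audio. A minimal sketch of that arithmetic (the 16 kHz sample rate and the hop values are illustrative assumptions, not values taken from this fork's code):

```python
# Illustrative sketch: how crepe_hop_length relates to the time resolution
# of the f0 track. The 16 kHz sample rate and hop values below are
# assumptions for illustration only.

def frame_period_ms(hop_length: int, sample_rate: int = 16000) -> float:
    """Milliseconds of audio covered by each pitch frame."""
    return 1000.0 * hop_length / sample_rate

# A smaller hop gives finer-grained pitch tracking at a higher compute cost.
print(frame_period_ms(64))   # 4.0 ms per frame
print(frame_period_ms(160))  # 10.0 ms per frame
```

Halving the hop length doubles the number of pitch frames crepe must estimate, which is why a larger hop is faster but tracks fast pitch movement less precisely.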
This repository has the following features too:
- Reduce tone leakage by replacing the source feature with the training-set feature using top1 retrieval;
- Easy and fast training, even on relatively poor graphics cards;
- Training with a small amount of data can also yield relatively good results (>=10 min of low-noise speech recommended);
- Supporting model fusion to change timbres (using ckpt processing tab->ckpt merge);
- Easy-to-use Webui interface;
- Use the UVR5 model to quickly separate vocals and instruments.
### Features planned to be added during the fork's development ▶️
- Improved GUI (more convenient).
- Automatic removal of old generations to save space.
- Potentially a pyin f0 method or a hybrid f0 crepe method.
- More optimized training on Paperspace machines.
- A feature search ratio booster to emphasize the target timbre.
## About this fork's crepe training
Crepe training is still quite unstable, and there have been reports of a memory leak. This will be fixed in the future; however, it works quite well on Paperspace machines. Note that crepe training produces results that differ slightly from a harvest-trained model: crepe sounds clearer in some parts, but more robotic in others. I would say both are equally good to train with, but crepe on INFERENCE is not only quicker but also more pitch-stable (especially with vocal layers). Right now, it is quite reliable to train with a harvest model and infer with crepe.

If you do train with crepe (f0 feature extraction), please make sure your datasets are as dry as possible to reduce artifacts and unwanted harmonics, as I assume the crepe pitch estimation latches on to reverb.
### If you get CUDA issues with crepe, pm, or harvest training
This is usually caused by the number of processes (n_p) being too high. Lower the value of the "Number of CPU Threads to use" slider on the feature extraction GUI.
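Conceptually, the slider fix amounts to clamping the worker count before extraction processes are spawned. A minimal sketch, assuming a hypothetical safe_n_p helper and an illustrative cap of 8 (neither the name nor the cap comes from this fork's code):

```python
# Hypothetical sketch: clamp the feature-extraction worker count (n_p)
# so it never exceeds the machine's CPU count or an illustrative cap of 8.
import multiprocessing

def safe_n_p(requested: int, cap: int = 8) -> int:
    """Clamp the requested worker count to the range [1, min(cap, cpu_count)]."""
    upper = min(cap, multiprocessing.cpu_count())
    return max(1, min(requested, upper))

print(safe_n_p(32))  # never more than 8, regardless of the slider value
```

Each worker holds its own copy of the model and audio buffers in memory, so a lower cap trades extraction speed for stability.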
## Installing the Dependencies 🖥️
Using pip (Python 3.9.8 is stable with this fork).

Paperspace users:

```bash
cd Mangio-RVC-Fork
make install  # Do this every time you start your Paperspace machine
```
Windows/macOS:

Notice: faiss 1.7.2 will raise Segmentation Fault: 11 under macOS; please use pip install faiss-cpu==1.7.0 if you install it manually with pip. Swig can be installed via brew under macOS:

```bash
brew install swig
```

Install the requirements:

```bash
pip install -r requirements.txt
```
## Preparation of other Pre-models ⬇️
Paperspace users:

```bash
cd Mangio-RVC-Fork
make base  # Do this only once after cloning this fork (no need to repeat it unless the pre-models change on Hugging Face)
```
Local users:

RVC requires other pre-models to infer and train. You need to download them from our Huggingface space.

Here's a list of pre-models and other files that RVC needs:

```
hubert_base.pt
./pretrained
./uvr5_weights
# If you are using Windows, you may also need this file; skip it if FFmpeg is installed
ffmpeg.exe
```
## Running the Web GUI to Infer & Train 💪
For Paperspace users:

```bash
cd Mangio-RVC-Fork
make run
```

Then click the Gradio link it provides.
## Running the Tensorboard 📉
```bash
cd Mangio-RVC-Fork
make tensorboard
```

Then click the TensorBoard link it provides and refresh the data.
## Other
If you are using Windows, you can download and extract RVC-beta.7z to use RVC directly, and run go-web.bat to start the WebUI.
There's also a tutorial on RVC in Chinese; check it out if needed.
