Mangio-RVC-Fork/i18n/en_US.json
kalomaze 72e273173d Update en_US.json
mangio please make ur fork up to date with latest commit on rvc
2023-06-22 07:12:53 -05:00

{
"很遗憾您这没有能用的显卡来支持您训练": "Unfortunately, there is no compatible GPU available to support your training.",
"是": "Yes",
"step1:正在处理数据": "Step 1: Processing data",
"step2a:无需提取音高": "Step 2a: Skipping pitch extraction",
"step2b:正在提取特征": "Step 2b: Extracting features",
"step3a:正在训练模型": "Step 3a: Model training started",
"训练结束, 您可查看控制台训练日志或实验文件夹下的train.log": "Training complete. You can check the training logs in the console or the 'train.log' file under the experiment folder.",
"全流程结束!": "All processes have been completed!",
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>使用需遵守的协议-LICENSE.txt</b>.": "This software is open source under the MIT license. The author does not have any control over the software. Users who use the software and distribute the sounds exported by the software are solely responsible. <br>If you do not agree with this clause, you cannot use or reference any codes and files within the software package. See the root directory <b>Agreement-LICENSE.txt</b> for details.",
"": "Model Inference",
"": "Inferencing voice:",
"": "Refresh voice list and index path",
"": "Unload voice to save GPU memory:",
"id": "Select Speaker/Singer ID:",
"+12key, -12key, . ": "Recommended +12 key for male to female conversion, and -12 key for female to male conversion. If the sound range goes too far and the voice is distorted, you can also adjust it to the appropriate range by yourself.",
"(, , 12-12)": "Transpose (integer, number of semitones, raise by an octave: 12, lower by an octave: -12):",
"()": "Enter the path of the audio file to be processed (default is the correct format example):",
",pm,harvest,crepeGPU": "Select the pitch extraction algorithm ('pm': faster extraction but lower-quality speech; 'harvest': better bass but extremely slow; 'crepe': better quality but GPU intensive):",
"crepe_hop_length": "Mangio-Crepe Hop Length (Only applies to mangio-crepe): Hop length refers to the time it takes for the speaker to jump to a dramatic pitch. Lower hop lengths take more time to infer but are more pitch accurate.",
"": "Feature search database file path",
">=3使harvest使使": "If >=3: apply median filtering to the harvested pitch results. The value represents the filter radius and can reduce breathiness.",
",使": "Path to the feature index file. Leave blank to use the selected result from the dropdown:",
"index,(dropdown)": "Auto-detect index path and select from the dropdown:",
"": "Path to feature file:",
"": "Search feature ratio:",
"0": "Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling:",
"1使": "Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used:",
"artifact0.5": "Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. Decrease the value to increase protection, but it may reduce indexing accuracy:",
"F0线, , , F0": "F0 curve file (optional). One pitch per line. Replaces the default F0 and pitch modulation:",
"": "Convert",
"": "Output information",
"(,)": "Export audio (click on the three dots in the lower right corner to download)",
", , , (opt). ": "Batch conversion. Enter the folder containing the audio files to be converted or upload multiple audio files. The converted audio will be output in the specified folder (default: 'opt').",
"": "Specify output folder:",
"()": "Enter the path of the audio folder to be processed (copy it from the address bar of the file manager):",
", , ": "You can also input audio files in batches. Choose one of the two options. Priority is given to reading from the folder.",
"": "Export file format",
"&&": "Vocals/Accompaniment Separation & Reverberation Removal",
"": "Enter the path of the audio folder to be processed:",
"": "Model",
"": "Specify the output folder for vocals:",
"": "Specify the output folder for accompaniment:",
"": "Train",
"step1: . logs, , , , , . ": "Step 1: Fill in the experimental configuration. Experimental data is stored in the 'logs' folder, with each experiment having a separate folder. Manually enter the experiment name path, which contains the experimental configuration, logs, and trained model files.",
"": "Enter the experiment name:",
"": "Target sample rate:",
"(, )": "Whether the model has pitch guidance (required for singing, optional for speech):",
"": "Version",
"使CPU": "Number of CPU processes used for pitch extraction and data processing:",
"step2a: , 2wav; . ": "Step 2a: Automatically traverse all files in the training folder that can be decoded into audio and perform slice normalization. Generates 2 wav folders in the experiment directory. Currently, only single-singer/speaker training is supported.",
"": "Enter the path of the training folder:",
"id": "Please specify the speaker/singer ID:",
"": "Process data",
"step2b: 使CPU(), 使GPU()": "Step 2b: Use CPU to extract pitch (if the model has pitch), use GPU to extract features (select GPU index):",
"-使, 0-1-2 使012": "Enter the GPU index(es) separated by '-', e.g., 0-1-2 to use GPU 0, 1, and 2:",
"": "GPU Information",
":pm,CPUdio,harvest": "Select the pitch extraction algorithm ('pm': faster extraction but lower-quality speech; 'dio': improved speech but slower extraction; 'harvest': better quality but slower extraction):",
"": "Feature extraction",
"step3: , ": "Step 3: Fill in the training settings and start training the model and index",
"save_every_epoch": "Save frequency (save_every_epoch):",
"total_epoch": "Total training epochs (total_epoch):",
"batch_size": "Batch size per GPU:",
"ckpt": "Save only the latest '.ckpt' file to save disk space:",
"": "No",
". 10min, ": "Cache all training sets to GPU memory. Caching small datasets (less than 10 minutes) can speed up training, but caching large datasets will consume a lot of GPU memory and may not provide much speed improvement:",
"weights": "Save a small final model to the 'weights' folder at each save point:",
"G": "Load pre-trained base model G path:",
"D": "Load pre-trained base model D path:",
"": "Train model",
"": "Train feature index",
"": "One-click training",
"ckpt": "ckpt Processing",
", ": "Model fusion, can be used to test timbre fusion",
"A": "Path to Model A:",
"B": "Path to Model B:",
"A": "Weight (w) for Model A:",
"": "Whether the model has pitch guidance:",
"": "Model information to be placed:",
"": "Saved model name (without extension):",
"": "Model architecture version:",
"": "Fusion",
"(weights)": "Modify model information (only supported for small model files extracted from the 'weights' folder)",
"": "Path to Model:",
"": "Model information to be modified:",
", ": "Save file name (default: same as the source file):",
"": "Modify",
"(weights)": "View model information (only supported for small model files extracted from the 'weights' folder)",
"": "View",
"(logs),,": "Model extraction (enter the path of the large file model under the 'logs' folder). This is useful if you want to stop training halfway and manually extract and save a small model file, or if you want to test an intermediate model:",
"": "Save name:",
",10": "Whether the model has pitch guidance (1: yes, 0: no):",
"": "Extract",
"Onnx": "Export Onnx",
"RVC": "RVC Model Path:",
"Onnx": "Onnx Export Path:",
"MoeVS": "MoeVS Model",
"Onnx": "Export Onnx Model",
"": "FAQ (Frequently Asked Questions)",
"线": "Recruiting front-end editors for pitch curves",
"xxxxx": "Join the development group and contact me at xxxxx",
"": "Click to view the communication and problem feedback group number",
"xxxxx": "xxxxx",
"": "Load model",
"Hubert": "Hubert Model",
".pth": "Select the .pth file",
".index": "Select the .index file",
".npy": "Select the .npy file",
"": "Input device",
"": "Output device",
"(使)": "Audio device (please use the same type of driver)",
"": "Response threshold",
"": "Pitch settings",
"Index Rate": "Index Rate",
"": "General settings",
"": "Sample length",
"": "Fade length",
"": "Extra inference time",
"": "Input noise reduction",
"": "Output noise reduction",
"": "Performance settings",
"": "Start audio conversion",
"": "Stop audio conversion",
"(ms):": "Inference time (ms):",
" 使UVR5 <br> E:\\codes\\py39\\vits_vc_gpu\\() <br> <br>1HP5HP2HP3HP3HP2 <br>2HP5 <br> 3by FoxJoy<br>(1)MDX-Net(onnx_dereverb):<br>&emsp;(234)DeEcho:AggressiveNormalDeReverb<br>/<br>1DeEcho-DeReverb2DeEcho2<br>2MDX-Net-Dereverb<br>3MDX-NetDeEcho-Aggressive":"Batch processing for vocal accompaniment separation using the UVR5 model.<br>Example of a valid folder path format: D:\\path\\to\\input\\folder (copy it from the file manager address bar).<br>The model is divided into three categories:<br>1. Preserve vocals: Choose this option for audio without harmonies. It preserves vocals better than HP5. It includes two built-in models: HP2 and HP3. HP3 may slightly leak accompaniment but preserves vocals slightly better than HP2.<br>2. Preserve main vocals only: Choose this option for audio with harmonies. It may weaken the main vocals. It includes one built-in model: HP5.<br>3. De-reverb and de-delay models (by FoxJoy):<br>(1) MDX-Net: The best choice for stereo reverb removal but cannot remove mono reverb;<br>&emsp;(234) DeEcho: Removes delay effects. Aggressive mode removes more thoroughly than Normal mode. DeReverb additionally removes reverb and can remove mono reverb, but not very effectively for heavily reverberated high-frequency content.<br>De-reverb/de-delay notes:<br>1. The processing time for the DeEcho-DeReverb model is approximately twice as long as the other two DeEcho models.<br>2. The MDX-Net-Dereverb model is quite slow.<br>3. The recommended cleanest configuration is to apply MDX-Net first and then DeEcho-Aggressive."
}
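A table like the one above is consumed at runtime by a simple key-to-string lookup, where the source-language string is the key and the translation is the value. The sketch below is a minimal, hypothetical illustration of that pattern, not the fork's actual `i18n` module; the class name and the fallback behavior (returning the key itself when no translation exists) are assumptions.

```python
import json


class I18n:
    """Minimal translation-lookup sketch: look up a source string in a
    flat {source: translation} table; unknown keys fall back to the
    source string itself so untranslated UI text still renders."""

    def __init__(self, table):
        self.table = dict(table)

    def __call__(self, key):
        # Return the translation if present, otherwise the key unchanged.
        return self.table.get(key, key)


def load_locale(path):
    """Load a flat key->string table such as en_US.json (UTF-8)."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)


# Usage with an inline sample instead of reading the real file:
i18n = I18n({"是": "Yes", "全流程结束!": "All processes have been completed!"})
print(i18n("是"))            # prints: Yes
print(i18n("未翻译的键"))      # unknown key falls back to itself
```

Note that `json.load` silently keeps only the last value for duplicate keys, so a file whose keys have been stripped down to duplicates (such as the repeated `""` entries above) loses all but one of those entries when parsed.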