Mangio-RVC-Fork/i18n/en_US.json

{
"很遗憾您这没有能用的显卡来支持您训练": "Unfortunately, you don't have a usable GPU to support training",
"是": "yes",
"step1:正在处理数据": "step 1: processing data",
"step2a:无需提取音高": "step 2a: skipped extracting pitch",
"step2b:正在提取特征": "step 2b: extracting features",
"step3a:正在训练模型": "step 3a: training the model",
"训练结束, 您可查看控制台训练日志或实验文件夹下的train.log": "Training complete. You can view the training logs in the console or in train.log under the experiment folder",
"全流程结束!": "All processes have been completed!",
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>使用需遵守的协议-LICENSE.txt</b>.": "This software is open source under the MIT license. The author has no control over the software; those who use the software or distribute the audio it exports are solely responsible for doing so. <br>If you do not agree with these terms, you may not use or reference any code or files in this package. See <b>Agreement-LICENSE.txt</b> in the root directory for details.",
"": "Model inference",
"": "Inferencing timbre",
"": "Refresh timbre list",
"": "Unload timbre to save GPU memory",
"id": "Please select a speaker id",
"+12key, -12key, . ": "Recommended: +12 key for male-to-female conversion, -12 key for female-to-male conversion. If the pitch range overflows and the timbre is distorted, you can also adjust it to a suitable range yourself. ",
"(, , 12-12)": "Transpose (integer, in semitones; +12 raises one octave, -12 lowers one octave)",
"()": "Enter the path of the audio file to be processed (the default is a correctly formatted example)",
",pm,harvest": "Select the pitch-extraction algorithm: 'pm' is faster for singing voices; 'harvest' handles low pitches better but is extremely slow.",
"crepe_hop_length": "Crepe hop length (applies to crepe only): the step size, in samples, between consecutive pitch analyses. Lower hop lengths take longer to infer but are more pitch-accurate.",
"": "Feature search database file path",
"": "Feature file path",
"": "Search feature ratio",
"F0线, , , F0": "F0 curve file (optional): one pitch value per line; replaces the default F0 and intonation",
"": "Convert",
"": "Export message",
"(,)": "Export audio (click the three dots in the lower-right corner to download)",
", , , (opt). ": "For batch conversion, enter the folder of audio files to convert, or upload multiple audio files; the converted audio is written to the specified folder (default: opt). ",
"": "Specify output folder",
"()": "Enter the path of the audio folder to be processed (copy it from the file manager's address bar)",
", , ": "Audio files can also be uploaded in batches; choose one of the two options. The folder takes precedence",
"": "Accompaniment and vocal separation",
", 使UVR5. <br>HP2, HP5<br>: E:\\codes\\py39\\vits_vc_gpu\\()": "Batch vocal/accompaniment separation using the UVR5 model. <br>Use HP2 if the audio has no harmonies; use HP5 if it has harmonies that should not remain in the extracted vocals.<br>Example of a valid folder path: E:\\codes\\py39\\vits_vc_gpu\\Egret Shuanghua test sample (copy it from the file manager's address bar)",
"": "Input audio folder path",
"": "Model",
"": "Vocal extraction aggressiveness",
"": "Specify vocals output folder",
"": "Specify instrumentals output folder",
"": "Train",
"step1: . logs, , , , , . ": "step1: Fill in the experiment configuration. Experiment data is stored under logs, with one folder per experiment. Manually enter the experiment name path, which holds the experiment configuration, logs, and model files obtained from training. ",
"": "Input experiment name",
"": "Target sample rate",
"(, )": "Whether the model has pitch guidance (required for singing, optional for speech)",
"": "no",
"step2a: , 2wav; . ": "step2a: Automatically traverse all files in the training folder that can be decoded into audio, perform slice normalization, and generate 2 wav folders in the experiment directory. Only single-speaker training is supported for now. ",
"": "Input training folder path",
"id": "Please specify speaker ID",
"": "Process data",
"step2b: 使CPU(), 使GPU()": "step2b: Use the CPU to extract pitch (if the model has pitch guidance) and the GPU to extract features (select the card numbers)",
"-使, 0-1-2 使012": "Enter the GPU card numbers to use, separated by -, e.g. 0-1-2 to use cards 0, 1, and 2",
"": "GPU information",
"使CPU": "Number of CPU threads to use for pitch extraction",
":pm,CPUdio,harvest": "Select the pitch-extraction algorithm: 'pm' is faster for singing voices; 'dio' speeds up high-quality speech on weaker CPUs; 'harvest' gives better quality but is slower.",
"": "Feature extraction",
"step3: , ": "step3: Fill in the training settings and start training the model and the index",
"save_every_epoch": "Save frequency (save_every_epoch)",
"total_epoch": "Total training epochs (total_epoch)",
"batch_size": "Batch size per GPU",
"ckpt": "Whether to save only the latest ckpt file to save disk space",
". 10min, ": "Whether to cache the entire training set in GPU memory. Small datasets (under 10 minutes) can be cached to speed up training; caching large datasets will exhaust GPU memory without much speed gain",
"G": "Path to the pre-trained base model G.",
"D": "Path to the pre-trained base model D.",
"": "Train model.",
"": "Train feature index.",
"": "One-click training.",
"ckpt": "ckpt processing.",
", ": "Model fusion, which can be used to test timbre fusion",
"A": "Path to model A.",
"B": "Path to model B.",
"A": "Weight (alpha) for model A.",
"": "Whether the model has pitch guidance.",
"": "Model information to embed.",
"": "Saved model name without extension.",
"": "Fusion.",
"(weights)": "Modify model information (only small model files extracted from the weights folder are supported)",
"": "Model path",
"": "Model information to be modified",
", ": "Saved file name; if left empty (the default), uses the same name as the source file",
"": "Modify",
"(weights)": "View model information (only small model files extracted from the weights folder are supported)",
"": "View",
"(logs),,": "Model extraction (enter the path of a large-file model under the logs folder). Useful when training was stopped halfway and the small-file model was not extracted and saved automatically, or when you want to test an intermediate model",
"": "Save Name",
",10": "Whether the model has pitch guidance (1 = yes, 0 = no)",
"": "Extract",
"Onnx": "Export Onnx",
"RVC": "RVC Model Path",
"Onnx": "Onnx Export Path",
"MoeVS": "MoeVS Model",
"Onnx": "Export Onnx Model",
"线": "Recruiting front-end developers for a pitch-curve editor",
"xxxxx": "Join the development group to contact me: xxxxx",
"": "Click to view the group number for communication and problem feedback",
"xxxxx": "xxxxx",
"": "load model",
"Hubert": "Hubert File",
".pth": "Select the .pth file",
".index": "Select the .index file",
".npy": "Select the .npy file",
"": "input device",
"": "output device",
"(使)": "Audio device (please use the same type of driver)",
"": "response threshold",
"": "tone setting",
"Index Rate": "Index Rate",
"": "general settings",
"": "Sample length",
"": "fade length",
"": "extra inference time",
"": "Input Noise Reduction",
"": "Output Noise Reduction",
"": "performance settings",
"": "start audio conversion",
"": "stop audio conversion",
"(ms):": "Inference time (ms):"
}