🚀 MockingBird: Clone a voice in 5 seconds to generate arbitrary speech in real-time

English | 中文

Features

🌍 Chinese: supports Mandarin, tested with multiple datasets: aidatatang_200zh, magicdata, aishell3, data_aishell, etc.

🤩 PyTorch: tested with PyTorch 1.9.0 (latest as of August 2021) on Tesla T4 and GTX 2060 GPUs

🌍 Windows + Linux: runs on both Windows and Linux (even on M1 macOS)

🤩 Easy & Awesome: good results with only a newly trained synthesizer, reusing the pretrained encoder/vocoder

🌍 Webserver Ready: serve your results via remote calls

DEMO VIDEO

Ongoing Works (Help Needed)

  • Major upgrade on GUI/Client and unifying web and toolbox:
    [X] Init framework ./mkgui and tech design
    [X] Add demo part of Voice Cloning and Conversion
    [X] Add preprocessing and training for Voice Conversion
    [ ] Add preprocessing and training for Encoder/Synthesizer/Vocoder
  • Major upgrade on model backend based on ESPnet2 (not yet started)

Quick Start

1. Install Requirements

Follow the original repo to check that your environment is ready. **Python 3.7 or higher** is needed to run the toolbox.

If you get ERROR: Could not find a version that satisfies the requirement torch==1.9.0+cu102 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2), the error is probably due to an old Python version; try Python 3.9 and the installation should succeed.

  • Install ffmpeg.
  • Run pip install -r requirements.txt to install the remaining necessary packages.

The recommended environment is repo tag 0.0.1 with PyTorch 1.9.0, torchvision 0.10.0, cudatoolkit 10.2, the packages in requirements.txt, and webrtcvad-wheels, because requirements.txt was exported a few months ago and no longer works with newer package versions.

  • Install webrtcvad with pip install webrtcvad-wheels (if you need it).

Note that we are using the pretrained encoder/vocoder but not the pretrained synthesizer, since the original model is incompatible with Chinese symbols. This means demo_cli does not work at the moment.

2. Prepare your models

You can either train your models or use existing ones:

2.1 Train encoder with your dataset (Optional)

  • Preprocess the audios and the mel spectrograms: python encoder_preprocess.py <datasets_root>. The --dataset {dataset} parameter selects the datasets you want to preprocess; only the train set of these datasets will be used. Possible names: librispeech_other, voxceleb1, voxceleb2. Use commas to separate multiple datasets (see the example at the end of this section).

  • Train the encoder: python encoder_train.py my_run <datasets_root>/SV2TTS/encoder

For training, the encoder uses visdom. You can disable it with --no_visdom, but it's nice to have. Run "visdom" in a separate CLI/process to start your visdom server.
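For example, to preprocess two of the supported datasets in one run: python encoder_preprocess.py <datasets_root> --dataset librispeech_other,voxceleb1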

2.2 Train synthesizer with your dataset

  • Download a dataset and unzip it: make sure you can access all the .wav files in the folder.

  • Preprocess the audios and the mel spectrograms: python pre.py <datasets_root>. The --dataset {dataset} parameter supports aidatatang_200zh, magicdata, aishell3, data_aishell, etc. If this parameter is not passed, the default dataset is aidatatang_200zh (see the example at the end of this section).

  • Train the synthesizer: python synthesizer_train.py mandarin <datasets_root>/SV2TTS/synthesizer

  • Go to the next step when you see the attention line appear and the loss meets your needs, in the training folder synthesizer/saved_models/.
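For example, with the dataset unzipped under D:\data\aidatatang_200zh (an illustrative path): python pre.py D:\data --dataset aidatatang_200zh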

2.3 Use a pretrained synthesizer model

Thanks to the community, some pretrained models are shared:

| Author | Download link | Preview video | Info |
| --- | --- | --- | --- |
| @author | https://pan.baidu.com/s/1iONvRxmkI-t1nHqxKytY3g (Baidu, code: 4j5d) | | 75k steps, trained on multiple datasets |
| @author | https://pan.baidu.com/s/1fMh9IlgKJlL2PIiRTYDUvw (Baidu, code: om7f) | | 25k steps, trained on multiple datasets; only works under version 0.0.1 |
| @FawenYo | https://drive.google.com/file/d/1H-YGOUHpmqKxJ9FRc6vAjPuqQki24UbC/view?usp=sharing or https://u.teknik.io/AYxWf.pt | input / output samples | 200k steps, with local accent of Taiwan; only works under version 0.0.1 |
| @miven | https://pan.baidu.com/s/1PI-hM3sn5wbeChRryX-RCQ (code: 2021) or https://www.aliyundrive.com/s/AwPsbo8mcSP (code: z2m0) | https://www.bilibili.com/video/BV1uh411B7AD/ | only works under version 0.0.1 |

2.4 Train vocoder (Optional)

Note: the vocoder makes little difference to the result, so you may not need to train a new one.

  • Preprocess the data: python vocoder_preprocess.py <datasets_root> -m <synthesizer_model_path>

Replace <datasets_root> with your dataset root and <synthesizer_model_path> with the directory of your best trained synthesizer model, e.g. synthesizer\saved_models\xxx (see the example at the end of this section).

  • Train the wavernn vocoder: python vocoder_train.py mandarin <datasets_root>

  • Train the hifigan vocoder: python vocoder_train.py mandarin <datasets_root> hifigan
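For example, with datasets under D:\data (illustrative paths): run python vocoder_preprocess.py D:\data -m synthesizer\saved_models\xxx, then python vocoder_train.py mandarin D:\data.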

3. Launch

3.1 Using the web server

You can then run the web server: python web.py and open it in a browser, by default at http://localhost:8080.
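A quick sanity check that the server is up, as a minimal sketch (this assumes the default port 8080 mentioned above and requires pip install requests):

import requests

# Request the web UI's start page; status 200 means the server is serving.
resp = requests.get("http://localhost:8080")
print(resp.status_code)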

3.2 Using the Toolbox

You can then try the toolbox: python demo_toolbox.py -d <datasets_root>

3.3 Using the command line

You can then try the command line: python gen_voice.py <text_file.txt> your_wav_file.wav. You may need to install cn2an (pip install cn2an) for better handling of digits (see the sketch below).
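A minimal sketch of what cn2an provides, assuming you only need Arabic-to-Chinese numeral conversion:

import cn2an

# Convert Arabic numerals to Chinese numerals so the text fed to the
# synthesizer contains pronounceable symbols instead of raw digits.
print(cn2an.an2cn("123"))  # -> 一百二十三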

Reference

This repository is forked from Real-Time-Voice-Cloning, which only supports English.

| URL | Designation | Title | Implementation source |
| --- | --- | --- | --- |
| 1803.09017 | GlobalStyleToken (synthesizer) | Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis | This repo |
| 2010.05646 | HiFi-GAN (vocoder) | Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis | This repo |
| 2106.02297 | Fre-GAN (vocoder) | Fre-GAN: Adversarial Frequency-consistent Audio Synthesis | This repo |
| 1806.04558 | SV2TTS | Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis | This repo |
| 1802.08435 | WaveRNN (vocoder) | Efficient Neural Audio Synthesis | fatchord/WaveRNN |
| 1703.10135 | Tacotron (synthesizer) | Tacotron: Towards End-to-End Speech Synthesis | fatchord/WaveRNN |
| 1710.10467 | GE2E (encoder) | Generalized End-To-End Loss for Speaker Verification | This repo |

FAQ

1. Where can I download the dataset?

| Dataset | Original source | Alternative sources |
| --- | --- | --- |
| aidatatang_200zh | OpenSLR | Google Drive |
| magicdata | OpenSLR | Google Drive (dev set) |
| aishell3 | OpenSLR | Google Drive |
| data_aishell | OpenSLR | |

After unzipping aidatatang_200zh, you also need to unzip all the files under aidatatang_200zh\corpus\train.

2. What is <datasets_root>?

If the dataset path is D:\data\aidatatang_200zh, then <datasets_root> is D:\data.

3. Not enough VRAM

Train the synthesizer: adjust the batch_size in synthesizer/hparams.py

//Before
tts_schedule = [(2,  1e-3,  20_000,  12),   # Progressive training schedule
                (2,  5e-4,  40_000,  12),   # (r, lr, step, batch_size)
                (2,  2e-4,  80_000,  12),   #
                (2,  1e-4, 160_000,  12),   # r = reduction factor (# of mel frames
                (2,  3e-5, 320_000,  12),   #     synthesized for each decoder iteration)
                (2,  1e-5, 640_000,  12)],  # lr = learning rate
//After
tts_schedule = [(2,  1e-3,  20_000,  8),   # Progressive training schedule
                (2,  5e-4,  40_000,  8),   # (r, lr, step, batch_size)
                (2,  2e-4,  80_000,  8),   #
                (2,  1e-4, 160_000,  8),   # r = reduction factor (# of mel frames
                (2,  3e-5, 320_000,  8),   #     synthesized for each decoder iteration)
                (2,  1e-5, 640_000,  8)],  # lr = learning rate

Train the vocoder (preprocess the data): adjust the batch_size in synthesizer/hparams.py

//Before
### Data Preprocessing
        max_mel_frames = 900,
        rescale = True,
        rescaling_max = 0.9,
        synthesis_batch_size = 16,                  # For vocoder preprocessing and inference.
//After
### Data Preprocessing
        max_mel_frames = 900,
        rescale = True,
        rescaling_max = 0.9,
        synthesis_batch_size = 8,                  # For vocoder preprocessing and inference.

Train the vocoder (training step): adjust the batch_size in vocoder/wavernn/hparams.py

//Before
# Training
voc_batch_size = 100
voc_lr = 1e-4
voc_gen_at_checkpoint = 5
voc_pad = 2

//After
# Training
voc_batch_size = 6
voc_lr = 1e-4
voc_gen_at_checkpoint = 5
voc_pad = 2

4. What if RuntimeError: Error(s) in loading state_dict for Tacotron: size mismatch for encoder.embedding.weight: copying a param with shape torch.Size([70, 512]) from checkpoint, the shape in current model is torch.Size([75, 512]) occurs?

Please refer to issue #37. The embedding size is tied to the number of text symbols, so this usually means the checkpoint was trained with a different symbol set than the code you are running; use the repo version that matches the model.
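As a minimal sketch, you can check which symbol count a checkpoint was trained with; the path is illustrative, and this assumes the checkpoint stores its weights under a model_state key (falling back to a bare state_dict otherwise):

import torch

# Illustrative path; point this at the model that fails to load.
ckpt = torch.load("synthesizer/saved_models/xxx.pt", map_location="cpu")
state = ckpt.get("model_state", ckpt)

# The first dimension is the number of text symbols the model was built with;
# it must match the symbol set of the code version you are running.
print(state["encoder.embedding.weight"].shape)  # e.g. torch.Size([70, 512])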

5. How can I improve CPU and GPU utilization?

Adjust the batch_size as appropriate to improve utilization.

6. What if "the page file is too small to complete the operation" occurs?

Please refer to this video and increase the virtual memory to 100 GB (102400 MB). For example, if the training files are on drive D, change the virtual memory of drive D.

7. When should I stop during training?

FYI, my attention line appeared after 18k steps, and the loss dropped below 0.4 after 50k steps. (Sample images: attention at step 20,500; mel spectrogram at step 135,500.)