🚀 AI Voice Cloning: Clone a voice in 5 seconds to generate arbitrary speech in real-time


MIT License

This repository is forked from Real-Time-Voice-Cloning, which only supports English.

English | 中文

Features

🌍 **Chinese** Mandarin supported, tested with the aidatatang_200zh dataset

🤩 **PyTorch** tested with version 1.9.0 (the latest as of August 2021), on Tesla T4 and GTX 2060 GPUs

🌍 **Windows + Linux** tested on both Windows and Linux after fixing minor issues

🤩 **Easy & Awesome** good results with only a newly trained synthesizer, by reusing the pretrained encoder/vocoder

DEMO VIDEO

Quick Start

1. Install Requirements

Follow the original repo to check that your environment is ready. **Python 3.7 or higher** is needed to run the toolbox.

  • Install PyTorch.
  • Install ffmpeg.
  • Run `pip install -r requirements.txt` to install the remaining necessary packages.
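The checks above can be sketched as a small script. This is only an illustration, not part of the repo; it assumes PyTorch may not be installed yet and fails gracefully if so:

```python
import sys

def meets_min_python(version_info, minimum=(3, 7)):
    """Return True if the running interpreter satisfies the toolbox's minimum version."""
    return tuple(version_info[:2]) >= minimum

if __name__ == "__main__":
    assert meets_min_python(sys.version_info), "Python 3.7 or higher is needed"
    try:
        import torch  # installed separately, per the step above
        print(f"PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
    except ImportError:
        print("PyTorch not found - install it from pytorch.org first")
```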

2. Reuse the pretrained encoder/vocoder

Note that you need to specify a newly trained synthesizer model, since the original model is incompatible with Chinese symbols. This means `demo_cli` does not work at the moment.
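For orientation, a plausible layout of the model files, following the upstream repo's conventions (these paths are assumptions for illustration; adjust to wherever your files actually live):

```
encoder/saved_models/pretrained.pt              # reused pretrained encoder
vocoder/saved_models/pretrained/pretrained.pt   # reused pretrained vocoder
synthesizer/saved_models/mandarin/              # your newly trained synthesizer (see step 3)
```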

3. Train synthesizer with aidatatang_200zh

  • Download the aidatatang_200zh dataset and unzip it: make sure you can access all the `.wav` files in the `train` folder

  • Preprocess the audios and the mel spectrograms: `python synthesizer_preprocess_audio.py <datasets_root>`

  • Preprocess the embeddings: `python synthesizer_preprocess_embeds.py <datasets_root>/SV2TTS/synthesizer`

  • Train the synthesizer: `python synthesizer_train.py mandarin <datasets_root>/SV2TTS/synthesizer`

  • Go to the next step when the attention line appears and the loss meets your needs; check the training output in `synthesizer/saved_models/`.

FYI, my attention line appeared after 18k steps, and the loss dropped below 0.4 after 50k steps (sample images: attention_step_20500_sample_1, step-135500-mel-spectrogram_sample_1).
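As a rough illustration of "loss meets your need", a hypothetical stopping helper is sketched below. It is not part of the training scripts; the 0.4 default merely mirrors the value from my run, so pick a threshold that suits your data and listen to generated samples before stopping:

```python
def loss_good_enough(recent_losses, threshold=0.4):
    """Return True once the average of the recent training losses drops below threshold.

    Illustrative heuristic only: 0.4 is the value observed after ~50k steps in one run;
    it is not a universal target.
    """
    if not recent_losses:
        return False
    return sum(recent_losses) / len(recent_losses) < threshold
```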

4. Launch the Toolbox

You can then try the toolbox:

```
python demo_toolbox.py -d <datasets_root>
```

or

```
python demo_toolbox.py
```

TODO

  • Add demo video
  • Add support for more datasets
  • Upload pretrained model
  • 🙏 Welcome to add more