Nemo 2021-08-25 23:11:34 +08:00
commit a6f8c8a39a
2 changed files with 21 additions and 11 deletions

README-CN.md

@@ -6,6 +6,8 @@
### [English](README.md) | 中文
### [DEMO VIDEO](https://www.bilibili.com/video/BV1sA411P7wM/)
## Features
🌍 **Chinese** Supports Mandarin, tested with multiple Chinese datasets: aidatatang_200zh, magicdata
@@ -22,6 +24,7 @@
**Python 3.7 or higher** is required to run the toolbox.
* Install [PyTorch](https://pytorch.org/get-started/locally/).
> If pip fails with `ERROR: Could not find a version that satisfies the requirement torch==1.9.0+cu102 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)`, your Python version is probably too old; 3.9 installs successfully.
* Install [ffmpeg](https://ffmpeg.org/download.html#get-packages).
* Run `pip install -r requirements.txt` to install the remaining required packages.
* Install webrtcvad with `pip install webrtcvad-wheels`.
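Putting the steps above together, a minimal install sketch (assuming a conda environment; `mockingbird` is just an illustrative environment name, and the exact PyTorch command depends on your CUDA setup):

```bash
# Python 3.9 avoids the pip resolution error mentioned above
conda create -n mockingbird python=3.9
conda activate mockingbird

# Pick the matching CUDA/CPU build from pytorch.org
pip install torch

# Remaining dependencies, then webrtcvad via prebuilt wheels
pip install -r requirements.txt
pip install webrtcvad-wheels
```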
@@ -33,6 +36,8 @@
You can pass `--dataset {dataset}` to pick the dataset; aidatatang_200zh and magicdata are supported.
> For example, if you put the downloaded `aidatatang_200zh` files on drive D and the `train` folder path is `D:\data\aidatatang_200zh\corpus\train`, then your `datasets_root` is `D:\data\`.
> If you hit the error `the page file is too small to complete the operation`, refer to this [article](https://blog.csdn.net/qq_17755303/article/details/112564030) and increase the virtual memory to 100 GB (102400); for example, if the files are on drive D, increase drive D's virtual memory.
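With the example layout above, a concrete preprocessing call might look like this (a sketch; the drive path comes straight from the note above):

```bash
python synthesizer_preprocess_audio.py D:\data --dataset aidatatang_200zh
```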
* Preprocess the embeddings:
`python synthesizer_preprocess_embeds.py <datasets_root>/SV2TTS/synthesizer`
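After these two preprocessing steps, `<datasets_root>/SV2TTS/synthesizer` typically contains a layout like the following (a sketch, following the upstream Real-Time-Voice-Cloning convention):

```
<datasets_root>/SV2TTS/synthesizer/
├── audio/      # processed audio segments
├── mels/       # mel spectrograms
├── embeds/     # speaker embeddings (after this step)
└── train.txt   # training metadata
```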
@@ -47,9 +52,9 @@
### 2.2 Use a pretrained synthesizer
> If you have no hardware or don't want to tune things slowly, you can use models contributed by the community (ongoing sharing is welcome):
| Author | Download link | Preview |
| --- | ----------- | ----- |
| @miven | https://pan.baidu.com/s/1PI-hM3sn5wbeChRryX-RCQ (code: 2021) | https://www.bilibili.com/video/BV1uh411B7AD/ (temporarily unavailable) |
### 3. Launch the Toolbox
Then you can try the toolbox:
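The launch commands (the same as in the English README below):
`python demo_toolbox.py -d <datasets_root>`
or
`python demo_toolbox.py`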

README.md

@@ -3,14 +3,14 @@
[![MIT License](https://img.shields.io/badge/license-MIT-blue.svg?style=flat)](http://choosealicense.com/licenses/mit/)
> This repository is forked from [Real-Time-Voice-Cloning](https://github.com/CorentinJ/Real-Time-Voice-Cloning), which only supports English.
> English | [中文](README-CN.md)
## Features
🌍 **Chinese** Supports Mandarin, tested with multiple datasets: aidatatang_200zh, magicdata
🤩 **PyTorch** Works with PyTorch, tested with version 1.9.0 (the latest as of August 2021), on Tesla T4 and GTX 2060 GPUs
🌍 **Windows + Linux** Tested on both Windows and Linux after minor fixes
🤩 **Easy & Awesome** Good results with only a newly trained synthesizer, by reusing the pretrained encoder/vocoder
@@ -24,21 +24,26 @@
**Python 3.7 or higher** is needed to run the toolbox.
* Install [PyTorch](https://pytorch.org/get-started/locally/).
> If you get `ERROR: Could not find a version that satisfies the requirement torch==1.9.0+cu102 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)`, your Python version is probably too old; try 3.9 and it should install successfully.
* Install [ffmpeg](https://ffmpeg.org/download.html#get-packages).
* Run `pip install -r requirements.txt` to install the remaining necessary packages.
* Install webrtcvad with `pip install webrtcvad-wheels`, if needed (a quick environment check is sketched below).
> Note that we reuse the pretrained encoder/vocoder but not the synthesizer, since the original model is incompatible with Chinese symbols. This means demo_cli does not work at the moment.
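Once everything is installed, a quick sanity check can save time later; a sketch (assumes the packages above are installed):

```bash
# Confirm PyTorch's version and whether it sees a GPU
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

# Confirm webrtcvad imports (installed via webrtcvad-wheels)
python -c "import webrtcvad; print('webrtcvad OK')"

# Confirm ffmpeg is on PATH
ffmpeg -version
```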
### 2. Train synthesizer with your dataset
* Download the aidatatang_200zh or SLR68 dataset and unzip it: make sure you can access all .wav files in the *train* folder
* Preprocess the audio and mel spectrograms:
`python synthesizer_preprocess_audio.py <datasets_root>`
The `--dataset {dataset}` parameter selects the dataset; aidatatang_200zh and magicdata are supported
> If you hit `the page file is too small to complete the operation`, refer to this [video](https://www.youtube.com/watch?v=Oh6dga-Oy10&ab_channel=CodeProf) and increase the virtual memory to 100 GB (102400); for example, if the files are on drive D, change drive D's virtual memory.
* Preprocess the embeddings:
`python synthesizer_preprocess_embeds.py <datasets_root>/SV2TTS/synthesizer`
* Train the synthesizer:
`python synthesizer_train.py mandarin <datasets_root>/SV2TTS/synthesizer`
* Go to the next step when the attention line shows up and the loss meets your needs in the training folder *synthesizer/saved_models/* (the full command sequence is sketched after the images below).
> FYI, my attention line appeared after 18k steps and the loss dropped below 0.4 after 50k steps.
![attention_step_20500_sample_1](https://user-images.githubusercontent.com/7423248/128587252-f669f05a-f411-4811-8784-222156ea5e9d.png)
![step-135500-mel-spectrogram_sample_1](https://user-images.githubusercontent.com/7423248/128587255-4945faa0-5517-46ea-b173-928eff999330.png)
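Putting the training steps together, an end-to-end run might look like this (a sketch; `<datasets_root>` as defined above, with aidatatang_200zh as the example dataset):

```bash
# 1. Preprocess audio and mel spectrograms for the chosen dataset
python synthesizer_preprocess_audio.py <datasets_root> --dataset aidatatang_200zh

# 2. Compute speaker embeddings for the preprocessed utterances
python synthesizer_preprocess_embeds.py <datasets_root>/SV2TTS/synthesizer

# 3. Train the synthesizer; checkpoints go to synthesizer/saved_models/
python synthesizer_train.py mandarin <datasets_root>/SV2TTS/synthesizer
```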
@@ -46,8 +51,8 @@ Allow parameter `--dataset {dataset}` to support aidatatang_200zh, magicdata
### 2.2 Use a pretrained synthesizer model
> Thanks to the community, some models are shared:
| Author | Download link | Preview video |
| --- | ----------- | ----- |
| @miven | https://pan.baidu.com/s/1PI-hM3sn5wbeChRryX-RCQ (code: 2021) | https://www.bilibili.com/video/BV1uh411B7AD/ |
> A link to my early trained model: [Baidu Yun](https://pan.baidu.com/s/10t3XycWiNIg5dN5E_bMORQ)
@@ -55,9 +60,9 @@ Code: aid4
### 3. Launch the Toolbox
You can then try the toolbox:
`python demo_toolbox.py -d <datasets_root>`
or
`python demo_toolbox.py`
> Good news 🤩: Chinese characters are supported