add instructions for using pretrained models

Vega Chen 2021-08-09 19:46:18 +08:00
parent f651ebf0bf
commit 67d301fc5b
2 changed files with 10 additions and 5 deletions


@ -25,8 +25,11 @@
* Install [ffmpeg](https://ffmpeg.org/download.html#get-packages).
* Run `pip install -r requirements.txt` to install the remaining necessary packages.
### 2. Use the pretrained encoder/vocoder
Download the following models and extract them into the root directory of this repository, replacing any existing files (a download sketch follows below):
https://github.com/CorentinJ/Real-Time-Voice-Cloning/wiki/Pretrained-models
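For illustration only, here is a minimal Python sketch of the download-and-extract step; the archive URL and filename below are placeholders, since the real links live on the wiki page above:

```python
# Minimal sketch (assumption: ARCHIVE_URL and ARCHIVE_NAME are
# placeholders; take the real archive links from the wiki page above).
import urllib.request
import zipfile

ARCHIVE_URL = "https://example.com/pretrained.zip"  # placeholder URL
ARCHIVE_NAME = "pretrained.zip"

# Download the archive, then unpack it into the repository root.
urllib.request.urlretrieve(ARCHIVE_URL, ARCHIVE_NAME)
with zipfile.ZipFile(ARCHIVE_NAME) as archive:
    archive.extractall(".")
```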
### 2. Train the synthesizer with aidatatang_200zh
### 3. Train the synthesizer with aidatatang_200zh
* Download the aidatatang_200zh dataset and unzip it: make sure you can access all the .wav files in the *train* folder.
* Preprocess the audios and mel spectrograms:
`python synthesizer_preprocess_audio.py <datasets_root>`
@ -41,7 +44,7 @@
> For reference, my attention alignment appeared after 18k steps, and the loss dropped below 0.4 after 50k steps.
### 3. Launch the Toolbox
### 4. Launch the Toolbox
You can then try the toolbox:
`python demo_toolbox.py -d <datasets_root>`


@ -27,8 +27,10 @@
* Install [ffmpeg](https://ffmpeg.org/download.html#get-packages).
* Run `pip install -r requirements.txt` to install the remaining necessary packages.
### 2. Train the synthesizer with aidatatang_200zh
### 2. Reuse the pretrained encoder/vocoder
* Download the following models and extract them to the root directory of this project.
https://github.com/CorentinJ/Real-Time-Voice-Cloning/wiki/Pretrained-models
### 3. Train the synthesizer with aidatatang_200zh
* Download the aidatatang_200zh dataset and unzip it: make sure you can access all the .wav files in the *train* folder (a quick check is sketched below).
* Preprocess the audios and the mel spectrograms:
`python synthesizer_preprocess_audio.py <datasets_root>`
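Before preprocessing, it can help to verify the extracted layout. A minimal sketch, assuming only that the unzipped dataset contains a *train* directory somewhere under your datasets root (the exact layout is an assumption; adjust the path to match your extraction):

```python
# Minimal sketch: sanity-check the dataset before preprocessing.
# Assumption: the unzipped dataset has a "train" directory somewhere
# under the datasets root; adjust the path if your layout differs.
from pathlib import Path

datasets_root = Path("datasets_root")  # the same root you pass to the scripts

# Locate the train folder and count the .wav files inside it.
train_dirs = [p for p in datasets_root.rglob("train") if p.is_dir()]
assert train_dirs, "no train folder found - check where the dataset was unzipped"
wavs = list(train_dirs[0].rglob("*.wav"))
print(f"found {len(wavs)} .wav files under {train_dirs[0]}")
```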
@ -44,7 +46,7 @@
![attention_step_20500_sample_1](https://user-images.githubusercontent.com/7423248/128587252-f669f05a-f411-4811-8784-222156ea5e9d.png)
![step-135500-mel-spectrogram_sample_1](https://user-images.githubusercontent.com/7423248/128587255-4945faa0-5517-46ea-b173-928eff999330.png)
### 3. Launch the Toolbox
### 4. Launch the Toolbox
You can then try the toolbox:
`python demo_toolbox.py -d <datasets_root>`