diff --git a/README-CN.md b/README-CN.md
index 4c484b8..33bd6af 100644
--- a/README-CN.md
+++ b/README-CN.md
@@ -33,7 +33,7 @@
 * Download the dataset and unzip it: make sure you can access all audio files (e.g. .wav) in the *train* folder
 * Preprocess the audio and mel spectrograms: `python synthesizer_preprocess_audio.py <datasets_root>`
-You can pass the parameter `--dataset {dataset}`; supported values are aidatatang_200zh and magicdata
+You can pass the parameter `--dataset {dataset}`; supported values are aidatatang_200zh, magicdata, and aishell3
 > If the downloaded `aidatatang_200zh` files are on drive D and the `train` folder path is `D:\data\aidatatang_200zh\corpus\train`, then your `datasets_root` is `D:\data\`
 > If `the page file is too small to complete the operation` occurs, please refer to this [article](https://blog.csdn.net/qq_17755303/article/details/112564030) and increase the virtual memory to 100G (102400); for example, if the files are on drive D, change the virtual memory of drive D
diff --git a/README.md b/README.md
index a15bfbd..f61694b 100644
--- a/README.md
+++ b/README.md
@@ -30,10 +30,10 @@
 * Install webrtcvad `pip install webrtcvad-wheels` (if needed)
 > Note that we are using the pretrained encoder/vocoder but not the synthesizer, since the original model is incompatible with Chinese symbols. This means the demo_cli is not working at the moment.
 ### 2. Train synthesizer with your dataset
-* Download the aidatatang_200zh or SLR68 dataset and unzip it: make sure you can access all .wav files in the *train* folder
+* Download the aidatatang_200zh or another supported dataset and unzip it: make sure you can access all .wav files in the *train* folder
 * Preprocess the audio and mel spectrograms: `python synthesizer_preprocess_audio.py <datasets_root>`
-The parameter `--dataset {dataset}` is supported for aidatatang_200zh and magicdata
+The parameter `--dataset {dataset}` is supported for aidatatang_200zh, magicdata, and aishell3
 > If `the page file is too small to complete the operation` occurs, please refer to this [video](https://www.youtube.com/watch?v=Oh6dga-Oy10&ab_channel=CodeProf) and change the virtual memory to 100G (102400); for example, when the files are on the D drive, change the virtual memory of the D drive
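
As a usage note for the `--dataset` flag touched in both hunks, here is a hedged sketch of a preprocessing invocation. It assumes the flag takes the dataset name exactly as listed in the README (aidatatang_200zh, magicdata, aishell3); the `D:\data\` root is only the example path from the README note, not a required location.

```sh
# Sketch only: preprocess audio and mel spectrograms with an explicit dataset choice.
# D:\data\ is the example datasets_root from the README note; substitute your own path.
python synthesizer_preprocess_audio.py D:\data\ --dataset aishell3
```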