Merged

Commits (20)
78d48dd - Add end-to-end version of MFA FastSpeech2, test=tts (WongLaw, Nov 28, 2022)
1c19a4a - Add end-to-end version of MFA FastSpeech2, test=tts (WongLaw, Nov 28, 2022)
2be3448 - Add end-to-end version of MFA FastSpeech2, test=tts (WongLaw, Nov 28, 2022)
3d2a40b - Add end-to-end version of MFA FastSpeech2, test=tts (WongLaw, Nov 28, 2022)
3866bdb - Add end-to-end version of MFA FastSpeech2, test=tts (WongLaw, Nov 28, 2022)
35b0a1b - Add end-to-end version of MFA FastSpeech2, test=tts (WongLaw, Nov 28, 2022)
206d9e5 - Add end-to-end version of MFA FastSpeech2, test=tts (WongLaw, Nov 28, 2022)
0207ac0 - Add end-to-end version of MFA FastSpeech2, test=tts (WongLaw, Nov 28, 2022)
ba045c4 - Add end-to-end version of MFA FastSpeech2, test=tts (WongLaw, Nov 29, 2022)
ca7e121 - Add end-to-end version of MFA FastSpeech2, test=tts (WongLaw, Nov 29, 2022)
5251152 - Add end-to-end version of MFA FastSpeech2, test=tts (WongLaw, Nov 29, 2022)
69819eb - Add end-to-end version of MFA FastSpeech2, test=tts (WongLaw, Nov 30, 2022)
a02c0f4 - Add end-to-end version of MFA FastSpeech2, test=tts (WongLaw, Dec 1, 2022)
e846a0f - Add prosody prediction in synthesize_e2e, test=tts (WongLaw, Dec 1, 2022)
c8fca35 - Add prosody prediction in synthesize_e2e, test=tts (WongLaw, Dec 1, 2022)
a63b994 - Add prosody prediction in synthesize_e2e, test=tts (WongLaw, Dec 1, 2022)
e22646d - Add prosody prediction in synthesize_e2e, test=tts (WongLaw, Dec 1, 2022)
0d6d44a - Add prosody prediction in synthesize_e2e, test=tts (WongLaw, Dec 1, 2022)
02f1f05 - Add prosody prediction in synthesize_e2e, test=tts (WongLaw, Dec 1, 2022)
c62dcc0 - Add prosody prediction in synthesize_e2e, test=tts (WongLaw, Dec 1, 2022)
74 changes: 74 additions & 0 deletions examples/csmsc/tts3_rhy/README.md
@@ -0,0 +1,74 @@
# This example mainly follows the FastSpeech2 with CSMSC example
This example contains code used to train a rhythm version of the [FastSpeech2](https://arxiv.org/abs/2006.04558) model with the [Chinese Standard Mandarin Speech Corpus](https://www.data-baker.com/open_source.html).

## Dataset
### Download and Extract
Download CSMSC from its [Official Website](https://test.data-baker.com/data/index/TNtts/) and extract it to `~/datasets`. The dataset is then in the directory `~/datasets/BZNSYP`.

### Get MFA Result and Extract
We use [MFA](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get durations for FastSpeech2.
You can directly download the rhythm version of the MFA result from [baker_alignment_tone.zip](https://paddlespeech.bj.bcebos.com/Rhy_e2e/baker_alignment_tone.zip), or train your own MFA model by referring to the [mfa example](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/mfa) in our repo.
Remember that in our repo you should add the `--rhy-with-duration` flag to obtain the rhythm information.
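
If you use the pre-computed alignment, a minimal sketch of fetching and unpacking it (assuming the zip expands to a `baker_alignment_tone` folder, which is the path the Get Started section expects):
```bash
# download the pre-computed rhythm MFA result and unpack it next to this example
wget https://paddlespeech.bj.bcebos.com/Rhy_e2e/baker_alignment_tone.zip
unzip baker_alignment_tone.zip
```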

## Get Started
Assume the path to the dataset is `~/datasets/BZNSYP`.
Assume the path to the MFA result of CSMSC is `./baker_alignment_tone`.
Run the command below to
1. **source path**.
2. preprocess the dataset.
3. train the model.
4. synthesize wavs.
- synthesize waveform from `metadata.jsonl`.
- synthesize waveform from a text file.
5. inference using the static model.
```bash
./run.sh
```
You can choose a range of stages you want to run, or set `stage` equal to `stop-stage` to use only one stage, for example, running the following command will only preprocess the dataset.
```bash
./run.sh --stage 0 --stop-stage 0
```
### Data Preprocessing
```bash
./local/preprocess.sh ${conf_path}
```
When it is done, a `dump` folder is created in the current directory. The structure of the dump folder is listed below.

```text
dump
├── dev
│ ├── norm
│ └── raw
├── phone_id_map.txt
├── speaker_id_map.txt
├── test
│ ├── norm
│ └── raw
└── train
├── energy_stats.npy
├── norm
├── pitch_stats.npy
├── raw
└── speech_stats.npy
```
The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and `raw` subfolder. The `raw` folder contains the speech, pitch, and energy features of each utterance, while the `norm` folder contains the normalized ones. The statistics used to normalize features are computed from the training set and stored in `dump/train/*_stats.npy`.
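
For example, a quick sanity check of the normalization statistics (a sketch; the stored layout is assumed to be mean and standard deviation stacked along the first axis):
```python
import numpy as np

stats = np.load("dump/train/speech_stats.npy")
mu, sigma = stats[0], stats[1]  # assumed layout: row 0 = mean, row 1 = std
print(stats.shape, mu.shape, sigma.shape)
```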

Also, there is a `metadata.jsonl` in each subfolder. It is a table-like file that contains phones, text_lengths, speech_lengths, durations, the path of speech features, the path of pitch features, the path of energy features, speaker, and the id of each utterance.
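
A quick way to inspect one record (a sketch; the key names come from the description above and may differ slightly in the generated file):
```python
import json

# read the first record of the normalized training metadata
with open("dump/train/norm/metadata.jsonl", encoding="utf-8") as f:
    item = json.loads(f.readline())

print(sorted(item.keys()))  # e.g. durations, phones, speech_lengths, text_lengths, utt_id, ...
```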

More details can be found in the FastSpeech2 with CSMSC (tts3) example.

## Pretrained Model
Pretrained FastSpeech2 model for end-to-end rhythm version:
- [fastspeech2_rhy_csmsc_ckpt_1.3.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_rhy_csmsc_ckpt_1.3.0.zip)

This FastSpeech2 checkpoint contains files listed below.
```text
fastspeech2_rhy_csmsc_ckpt_1.3.0
├── default.yaml # default config used to train fastspeech2
├── phone_id_map.txt # phone vocabulary file when training fastspeech2
├── snapshot_iter_153000.pdz # model parameters and optimizer states
├── durations.txt # the intermediate output of preprocess.sh
├── energy_stats.npy
├── pitch_stats.npy
└── speech_stats.npy # statistics used to normalize spectrogram when training fastspeech2
```
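
A minimal sketch of synthesizing with this checkpoint and a pre-trained PWGAN vocoder, run from this example directory after `source path.sh` (the vocoder paths below are assumptions; point them at your own copies):
```bash
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize_e2e.py \
    --am=fastspeech2_csmsc \
    --am_config=fastspeech2_rhy_csmsc_ckpt_1.3.0/default.yaml \
    --am_ckpt=fastspeech2_rhy_csmsc_ckpt_1.3.0/snapshot_iter_153000.pdz \
    --am_stat=fastspeech2_rhy_csmsc_ckpt_1.3.0/speech_stats.npy \
    --voc=pwgan_csmsc \
    --voc_config=pwg_baker_ckpt_0.4/pwg_default.yaml \
    --voc_ckpt=pwg_baker_ckpt_0.4/pwg_snapshot_iter_400000.pdz \
    --voc_stat=pwg_baker_ckpt_0.4/pwg_stats.npy \
    --lang=zh \
    --text=${BIN_DIR}/../sentences.txt \
    --output_dir=exp/default/test_e2e \
    --phones_dict=fastspeech2_rhy_csmsc_ckpt_1.3.0/phone_id_map.txt \
    --use_rhy=True
```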
1 change: 1 addition & 0 deletions examples/csmsc/tts3_rhy/conf/default.yaml
1 change: 1 addition & 0 deletions examples/csmsc/tts3_rhy/local/preprocess.sh
1 change: 1 addition & 0 deletions examples/csmsc/tts3_rhy/local/synthesize.sh
119 changes: 119 additions & 0 deletions examples/csmsc/tts3_rhy/local/synthesize_e2e.sh
@@ -0,0 +1,119 @@
#!/bin/bash

config_path=$1
train_output_path=$2
ckpt_name=$3

stage=0
stop_stage=0

# pwgan
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize_e2e.py \
--am=fastspeech2_csmsc \
--am_config=${config_path} \
--am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--am_stat=dump/train/speech_stats.npy \
--voc=pwgan_csmsc \
--voc_config=pwg_baker_ckpt_0.4/pwg_default.yaml \
--voc_ckpt=pwg_baker_ckpt_0.4/pwg_snapshot_iter_400000.pdz \
--voc_stat=pwg_baker_ckpt_0.4/pwg_stats.npy \
--lang=zh \
--text=${BIN_DIR}/../sentences.txt \
--output_dir=${train_output_path}/test_e2e \
--phones_dict=dump/phone_id_map.txt \
--inference_dir=${train_output_path}/inference \
--use_rhy=True
fi

# for more GAN Vocoders
# multi band melgan
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize_e2e.py \
--am=fastspeech2_csmsc \
--am_config=${config_path} \
--am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--am_stat=dump/train/speech_stats.npy \
--voc=mb_melgan_csmsc \
--voc_config=mb_melgan_csmsc_ckpt_0.1.1/default.yaml \
--voc_ckpt=mb_melgan_csmsc_ckpt_0.1.1/snapshot_iter_1000000.pdz \
--voc_stat=mb_melgan_csmsc_ckpt_0.1.1/feats_stats.npy \
--lang=zh \
--text=${BIN_DIR}/../sentences.txt \
--output_dir=${train_output_path}/test_e2e \
--phones_dict=dump/phone_id_map.txt \
--inference_dir=${train_output_path}/inference \
--use_rhy=True
fi

# the pretrained models haven't been released yet
# style melgan
# style melgan's dygraph-to-static-graph export is not ready yet
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize_e2e.py \
--am=fastspeech2_csmsc \
--am_config=${config_path} \
--am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--am_stat=dump/train/speech_stats.npy \
--voc=style_melgan_csmsc \
--voc_config=style_melgan_csmsc_ckpt_0.1.1/default.yaml \
--voc_ckpt=style_melgan_csmsc_ckpt_0.1.1/snapshot_iter_1500000.pdz \
--voc_stat=style_melgan_csmsc_ckpt_0.1.1/feats_stats.npy \
--lang=zh \
--text=${BIN_DIR}/../sentences.txt \
--output_dir=${train_output_path}/test_e2e \
--phones_dict=dump/phone_id_map.txt \
--use_rhy=True
# --inference_dir=${train_output_path}/inference
fi

# hifigan
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
echo "in hifigan syn_e2e"
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize_e2e.py \
--am=fastspeech2_csmsc \
--am_config=${config_path} \
--am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--am_stat=dump/train/speech_stats.npy \
--voc=hifigan_csmsc \
--voc_config=hifigan_csmsc_ckpt_0.1.1/default.yaml \
--voc_ckpt=hifigan_csmsc_ckpt_0.1.1/snapshot_iter_2500000.pdz \
--voc_stat=hifigan_csmsc_ckpt_0.1.1/feats_stats.npy \
--lang=zh \
--text=${BIN_DIR}/../sentences.txt \
--output_dir=${train_output_path}/test_e2e \
--phones_dict=dump/phone_id_map.txt \
--inference_dir=${train_output_path}/inference \
--use_rhy=True
fi


# wavernn
if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then
echo "in wavernn syn_e2e"
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize_e2e.py \
--am=fastspeech2_csmsc \
--am_config=${config_path} \
--am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--am_stat=dump/train/speech_stats.npy \
--voc=wavernn_csmsc \
--voc_config=wavernn_csmsc_ckpt_0.2.0/default.yaml \
--voc_ckpt=wavernn_csmsc_ckpt_0.2.0/snapshot_iter_400000.pdz \
--voc_stat=wavernn_csmsc_ckpt_0.2.0/feats_stats.npy \
--lang=zh \
--text=${BIN_DIR}/../sentences.txt \
--output_dir=${train_output_path}/test_e2e \
--phones_dict=dump/phone_id_map.txt \
--inference_dir=${train_output_path}/inference \
--use_rhy=True
fi
1 change: 1 addition & 0 deletions examples/csmsc/tts3_rhy/local/train.sh
1 change: 1 addition & 0 deletions examples/csmsc/tts3_rhy/path.sh
38 changes: 38 additions & 0 deletions examples/csmsc/tts3_rhy/run.sh
@@ -0,0 +1,38 @@
#!/bin/bash

set -e
source path.sh

gpus=0,1
stage=0
stop_stage=100

conf_path=conf/default.yaml
train_output_path=exp/default
ckpt_name=snapshot_iter_153.pdz

# with the following command, you can choose the stage range you want to run
# such as `./run.sh --stage 0 --stop-stage 0`
# this can not be mixed with positional arguments such as `$1`, `$2` ...
source ${MAIN_ROOT}/utils/parse_options.sh || exit 1

if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
# prepare data
### please place the rhythm version of the MFA result here
./local/preprocess.sh ${conf_path} || exit -1
fi

if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
# train model, all `ckpt` files are saved under the `train_output_path/checkpoints/` dir
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path} || exit -1
fi

if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
# synthesize, vocoder is pwgan by default
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
fi

if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
# synthesize_e2e, vocoder is pwgan by default
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
fi
13 changes: 13 additions & 0 deletions paddlespeech/resource/pretrained_models.py
@@ -1658,3 +1658,16 @@
},
},
}

# ---------------------------------
# ---------- Rhy_frontend ---------
# ---------------------------------
rhy_frontend_models = {
'rhy_e2e': {
'1.0': {
'url':
'https://paddlespeech.bj.bcebos.com/Rhy_e2e/rhy_frontend.zip',
'md5': '6624a77393de5925d5a84400b363d8ef',
},
},
}
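
For context, a framework-agnostic sketch of consuming this entry, downloading the archive and verifying its md5 (the repo uses its own download helpers internally; the cache path below is only an illustration):
```python
import hashlib
import pathlib
import urllib.request
import zipfile

meta = rhy_frontend_models['rhy_e2e']['1.0']
dest = pathlib.Path.home() / '.paddlespeech' / 'rhy_frontend.zip'  # hypothetical cache location
dest.parent.mkdir(parents=True, exist_ok=True)
urllib.request.urlretrieve(meta['url'], dest)

assert hashlib.md5(dest.read_bytes()).hexdigest() == meta['md5'], "md5 mismatch"
zipfile.ZipFile(dest).extractall(dest.parent / 'rhy_frontend')
```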
7 changes: 5 additions & 2 deletions paddlespeech/t2s/exps/syn_utils.py
@@ -163,10 +163,13 @@ def get_test_dataset(test_metadata: List[Dict[str, Any]],
# frontend
def get_frontend(lang: str='zh',
phones_dict: Optional[os.PathLike]=None,
tones_dict: Optional[os.PathLike]=None):
tones_dict: Optional[os.PathLike]=None,
use_rhy: bool=False):
if lang == 'zh':
frontend = Frontend(
phone_vocab_path=phones_dict, tone_vocab_path=tones_dict)
phone_vocab_path=phones_dict,
tone_vocab_path=tones_dict,
use_rhy=use_rhy)
elif lang == 'en':
frontend = English(phone_vocab_path=phones_dict)
elif lang == 'mix':
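A usage sketch of the new `use_rhy` switch (not part of the diff; the `run_frontend` keyword names and return keys below are assumptions):
```python
from paddlespeech.t2s.exps.syn_utils import get_frontend
from paddlespeech.t2s.exps.syn_utils import run_frontend

# build a rhythm-aware Chinese frontend
frontend = get_frontend(
    lang='zh',
    phones_dict='dump/phone_id_map.txt',
    use_rhy=True)

# convert one sentence to phone ids, with rhythm tokens inserted by the frontend
outs = run_frontend(
    frontend=frontend,
    text='今天天气真不错。',
    merge_sentences=True,
    get_tone_ids=False,
    lang='zh')
phone_ids = outs['phone_ids'][0]
```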
9 changes: 8 additions & 1 deletion paddlespeech/t2s/exps/synthesize_e2e.py
@@ -27,6 +27,7 @@
from paddlespeech.t2s.exps.syn_utils import get_voc_inference
from paddlespeech.t2s.exps.syn_utils import run_frontend
from paddlespeech.t2s.exps.syn_utils import voc_to_static
from paddlespeech.t2s.utils import str2bool


def evaluate(args):
@@ -49,7 +50,8 @@ def evaluate(args):
frontend = get_frontend(
lang=args.lang,
phones_dict=args.phones_dict,
tones_dict=args.tones_dict)
tones_dict=args.tones_dict,
use_rhy=args.use_rhy)
print("frontend done!")

# acoustic model
@@ -240,6 +242,11 @@ def parse_args():
type=str,
help="text to synthesize, a 'utt_id sentence' pair per line.")
parser.add_argument("--output_dir", type=str, help="output dir.")
parser.add_argument(
"--use_rhy",
type=str2bool,
default=False,
help="run rhythm frontend or not")

args = parser.parse_args()
return args
14 changes: 14 additions & 0 deletions paddlespeech/t2s/frontend/rhy_prediction/__init__.py
@@ -0,0 +1,14 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .rhy_predictor import *