
Inference/recipe not working properly.

See original GitHub issue

Discussed in https://github.com/coqui-ai/TTS/discussions/639


Originally posted by BillyBobQuebec on July 10, 2021

I am training Tacotron2-DDC (LJ) from scratch using the provided recipe with no changes. Tensorboard looks good to my eyes, but the alignment and duration are way off when I actually run inference. I suspect the audio is being inferenced improperly, specifically with the wrong r-value. Here is the command I used to start training:

cd ~/repo/coqui-clean
bash recipes/ljspeech/tacotron2-DDC/run.sh
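
The recipe reads its settings from the tacotron2-DDC.json config that sits next to run.sh. A quick way to see what the recipe actually sets for the reduction factor and its schedule is to load the JSON and print those fields. A minimal sketch; "gradual_training" is the key name I believe Coqui's Tacotron configs use for the schedule, so verify it against your copy of the recipe:

import json

# Config path as used by the recipe in this issue.
cfg_path = "recipes/ljspeech/tacotron2-DDC/tacotron2-DDC.json"

with open(cfg_path) as f:
    cfg = json.load(f)

# "r" is the starting reduction factor; "gradual_training" (if present) is a
# schedule of [step, r, batch_size] entries that lower r as training advances.
print("r:", cfg.get("r"))
print("gradual_training:", cfg.get("gradual_training"))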

The recipe uses gradual training, which (if I understand it correctly) takes "r" as the starting reduction factor for the fine decoder and then lowers it over the course of training. I suspect inference is using that starting r value instead of the latest r the fine decoder had reached during training. When I try to force a different r value at inference time (by passing a recipe config with "r": 2 instead of "r": 6), I get this error:

RuntimeError: Error(s) in loading state_dict for Tacotron2:
size mismatch for decoder.linear_projection.linear_layer.weight: copying a param with shape torch.Size([480, 1536]) from checkpoint, the shape in current model is torch.Size([160, 1536]).
size mismatch for decoder.linear_projection.linear_layer.bias: copying a param with shape torch.Size([480]) from checkpoint, the shape in current model is torch.Size([160]).
size mismatch for decoder.stopnet.1.linear_layer.weight: copying a param with shape torch.Size([1, 1504]) from checkpoint, the shape in current model is torch.Size([1, 1184]).
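
Those shapes line up exactly with the reduction factor: the Tacotron2 decoder projection emits n_mel_channels * r mel values per decoder step, and the stopnet sees the decoder RNN output concatenated with that frame block. Below is a quick sanity check of the numbers in the error; a sketch assuming the standard 80 mel channels and a 1024-dim decoder RNN output:

# Sanity-check the size-mismatch numbers against the reduction factor r.
n_mel_channels = 80
decoder_rnn_dim = 1024

for r in (6, 2):
    proj_out = n_mel_channels * r            # decoder.linear_projection output dim
    stopnet_in = decoder_rnn_dim + proj_out  # decoder.stopnet input dim
    print(f"r={r}: linear_projection out = {proj_out}, stopnet in = {stopnet_in}")

# r=6: 480 and 1504 (the checkpoint)
# r=2: 160 and 1184 (the model built from "r": 2)

So the checkpoint holds weights sized for r = 6 (the starting value the model was built with), while a config with "r": 2 builds a smaller model, hence the load failure.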

Here’s the command used for inference, and here’s how it sounds at different points:

cd ~
cp ~/repo/coqui-clean/recipes/ljspeech/tacotron2-DDC/scale_stats.npy .
cp ~/repo/coqui-clean/recipes/ljspeech/tacotron2-DDC/tacotron2-DDC.json config.json
CUDA_VISIBLE_DEVICES="" tts \
  --text "Hello I bought this T.V. today, and it's cold outside. I should probably grab my sweater and go to your moms house." \
  --model_path ~/repo/coqui-clean/recipes/ljspeech/tacotron2-DDC/ljspeech-ddc-July-06-2021_09+10AM-8fbadad6/checkpoint_280000.pth.tar \
  --config_path config.json \
  --out_path output.wav
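
One way to check what the checkpoint itself carries, rather than what the config claims, is to open it with torch.load and look at its metadata and the projection-layer shape. A minimal sketch, assuming the checkpoint layout Coqui TTS used at the time (model weights under a "model" key, and possibly an "r" entry written during gradual training); adjust the path to your run directory:

import torch

ckpt = torch.load("checkpoint_280000.pth.tar", map_location="cpu")

# Some gradual-training checkpoints store the current reduction factor directly.
print("stored r:", ckpt.get("r", "not stored in this checkpoint"))

# The weights also encode the r the model was built with: the decoder
# projection outputs n_mel_channels * r values, so divide by 80 mel channels.
state = ckpt.get("model", ckpt)  # weights may be nested under a "model" key
proj_w = state["decoder.linear_projection.linear_layer.weight"]
print("projection out dim:", proj_w.shape[0], "=> built with r =", proj_w.shape[0] // 80)

If what the checkpoint reports disagrees with the "r" in the config passed to tts, that mismatch is exactly what produces the size errors above.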

https://user-images.githubusercontent.com/74849975/125181759-267e6280-e1d6-11eb-8e81-1de96b682245.mp4

https://user-images.githubusercontent.com/74849975/125181760-28e0bc80-e1d6-11eb-90fc-8392e5223fc7.mp4

https://user-images.githubusercontent.com/74849975/125181761-2aaa8000-e1d6-11eb-8477-9a6c1b59843f.mp4


Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 26 (11 by maintainers)

Top GitHub Comments

1 reaction
BillyBobQuebec commented, Aug 11, 2021

Ah, it seems like that may be the problem. It looks like I was using a slightly different config, and I also noticed some things that might’ve affected training. I’m going to train from scratch again and make sure everything is correct; I’ll post an update on how inference sounds with HiFi-GAN within the next few days.

1 reaction
BillyBobQuebec commented, Aug 7, 2021

@erogol Thank you for pushing the new update! I’m currently training with the same config used at the beginning of this issue, except now on your new v0.1.3, to see if the stopnet was the problem and whether I can finally replicate the inference quality of the current pretrained voice.

Read more comments on GitHub >

