
Multiple TracerWarning on every epoch

See original GitHub issue

I am running FastSpeech2 and every epoch gives me this:

/home/perry/PycharmProjects/espnet/espnet/nets/pytorch_backend/nets_utils.py:154: TracerWarning: Converting a tensor to a Python list might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  lengths = lengths.long().tolist()
/home/perry/PycharmProjects/espnet/espnet/nets/pytorch_backend/nets_utils.py:172: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert xs.size(0) == bs, (xs.size(0), bs)
/home/perry/PycharmProjects/espnet/espnet2/tts/feats_extract/log_mel_fbank.py:92: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert input_stft.shape[-1] == 2, input_stft.shape
/home/perry/PycharmProjects/espnet/espnet2/tts/fastspeech2/fastspeech2.py:533: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
  for i, l in enumerate(text_lengths):
/home/perry/PycharmProjects/espnet/espnet/nets/pytorch_backend/transformer/embedding.py:61: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if self.pe.size(1) >= x.size(1):
/home/perry/PycharmProjects/espnet/espnet/nets/pytorch_backend/transformer/attention.py:81: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  numpy.finfo(torch.tensor(0, dtype=scores.dtype).numpy().dtype).min
/home/perry/PycharmProjects/espnet/espnet/nets/pytorch_backend/transformer/attention.py:81: TracerWarning: Converting a tensor to a NumPy array might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  numpy.finfo(torch.tensor(0, dtype=scores.dtype).numpy().dtype).min
/home/perry/PycharmProjects/espnet/espnet/nets/pytorch_backend/fastspeech/length_regulator.py:56: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if ds.sum() == 0:
/home/perry/PycharmProjects/espnet/espnet/nets/pytorch_backend/fastspeech/length_regulator.py:66: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
  repeat = [torch.repeat_interleave(x, d, dim=0) for x, d in zip(xs, ds)]
/home/perry/PycharmProjects/espnet/espnet/nets/pytorch_backend/nets_utils.py:55: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  max_len = max(x.size(0) for x in xs)
/home/perry/PycharmProjects/espnet/espnet2/tts/fastspeech2/fastspeech2.py:582: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  l1_loss=l1_loss.item(),
/home/perry/PycharmProjects/espnet/espnet2/tts/fastspeech2/fastspeech2.py:583: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  duration_loss=duration_loss.item(),
/home/perry/PycharmProjects/espnet/espnet2/tts/fastspeech2/fastspeech2.py:584: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  pitch_loss=pitch_loss.item(),
/home/perry/PycharmProjects/espnet/espnet2/tts/fastspeech2/fastspeech2.py:585: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  energy_loss=energy_loss.item(),
/home/perry/PycharmProjects/espnet/espnet2/tts/fastspeech2/fastspeech2.py:591: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  encoder_alpha=self.encoder.embed[-1].alpha.data.item(),
/home/perry/PycharmProjects/espnet/espnet2/tts/fastspeech2/fastspeech2.py:595: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  decoder_alpha=self.decoder.embed[-1].alpha.data.item(),
/home/perry/PycharmProjects/espnet/espnet2/tts/fastspeech2/fastspeech2.py:599: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  stats.update(loss=loss.item())
/home/perry/PycharmProjects/espnet/espnet2/torch_utils/device_funcs.py:64: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  return torch.tensor([data], dtype=torch.float, device=device)

Is there some way to solve it?
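For context, every one of these messages comes from torch.jit.trace: whenever a tensor is converted to a plain Python value (via .item(), .tolist(), bool(), iteration, or numpy()) inside the traced function, the tracer cannot record the data flow and bakes the value in as a constant. A minimal sketch that reproduces one of the warnings — `scale_by_sum` is a hypothetical stand-in for the model code, not an ESPnet function:

```python
import warnings

import torch


def scale_by_sum(x):
    # .item() converts a tensor to a Python number, so the traced graph
    # records it as a constant -- this is exactly what the "Converting a
    # tensor to a Python number" warnings in the log above complain about.
    return x * x.sum().item()


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    traced = torch.jit.trace(scale_by_sum, torch.ones(3))

tracer_warnings = [w for w in caught
                   if issubclass(w.category, torch.jit.TracerWarning)]
print(len(tracer_warnings))  # at least one TracerWarning is recorded
```

The traced graph still runs, which is why the warnings are cosmetic for training — but it would silently use the constant if the input shape or values changed.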

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 9 (7 by maintainers)

Top GitHub Comments

1 reaction
kan-bayashi commented, Jul 29, 2022

It does not affect the training results, but it is a little annoying. Let us think about how we should deal with this function.

0 reactions
kan-bayashi commented, Aug 2, 2022

I changed graph creation to an option and set it to False by default in #4551. I will close this issue.
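For anyone who still sees these messages, TracerWarning is an ordinary Python warning category (torch.jit.TracerWarning), so it can also be silenced from user code with the standard warnings module. A minimal sketch — `to_scalar` is a hypothetical stand-in for the traced model code:

```python
import warnings

import torch


def to_scalar(x):
    # would normally emit "Converting a tensor to a Python number ..."
    return x + x.mean().item()


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")    # surface every warning ...
    warnings.filterwarnings(           # ... except TracerWarning
        "ignore", category=torch.jit.TracerWarning)
    torch.jit.trace(to_scalar, torch.ones(2))

tracer = [w for w in caught if issubclass(w.category, torch.jit.TracerWarning)]
print(len(tracer))  # 0 -- the warnings are filtered out
```

Calling warnings.filterwarnings("ignore", category=torch.jit.TracerWarning) once at startup has the same effect globally; the catch_warnings block here just keeps the filter scoped for demonstration.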


Top Results From Across the Web

  • why my model does not learn? Same on every epoch (Pytorch)
    First, as it was mentioned in the comments, you probably meant: train_data, train_target = data[:int(a*len(data))] ...

  • fvcore documentation — detectron2 0.6 documentation
    TracerWarning only. 'none': suppress all warnings raised while tracing. Parameters. mode (str) – warning mode in one of the above values.

  • Neural Network Training
    In other words, we may wish to train a neural network for more than one epoch. An epoch is a measure of the...

  • Difference Between a Batch and an Epoch in a Neural Network
    When all training samples are used to create one batch, the learning algorithm is called batch gradient descent. When the batch is the...

  • Model does not train: Same loss in every epoch
    If you switch to a shape of [batch_size, 2] and use nn.BCEWithLogitsLoss, it will be like doing multi-label classification. melste ...
