InputNormalization: std can return all nan values
See original GitHub issue
If you have a tensor of all negative values, e.g. x = torch.FloatTensor([[-100.0, -100.0, -100.0, -100.0]]), then torch.std(x, dim=0) will return all NaN values.
There is no check or warning when this occurs.
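A minimal reproduction (assuming a recent PyTorch build) shows where the NaN comes from: along dim=0 the reported tensor has only one sample per column, and torch.std's default unbiased (Bessel-corrected) estimator divides by n - 1 = 0:

```python
import torch

# Shape (1, 4): a single row, exactly as in the issue report.
x = torch.FloatTensor([[-100.0, -100.0, -100.0, -100.0]])

# torch.std defaults to the unbiased estimator, which divides by n - 1.
# Along dim=0 there is only n = 1 sample per column, so the denominator
# is zero and every entry comes out NaN -- silently, with no error.
std = torch.std(x, dim=0)
print(std)  # tensor([nan, nan, nan, nan])

# The sign of the values is not what matters: any single-row tensor
# hits the same degrees-of-freedom problem, e.g. all-positive values:
print(torch.std(torch.FloatTensor([[1.0, 2.0, 3.0]]), dim=0))
```

Reducing along dim=1 instead (four samples per row) gives a finite standard deviation, which is consistent with the degrees-of-freedom explanation.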
Issue Analytics
- State:
- Created 3 years ago
- Comments: 5
Top Results From Across the Web
Keras model params are all "NaN"s after reloading
One thing to notice is that I make all batch normalization layers "trainable" so that BN related params can be updated with my...
Read more >

Why my model returns nan? - PyTorch Forums
LayerNorm(output) might return a all nan vector. Similiar problem happens in my attention model, I'm pretty sure that it can't be exploding ...
Read more >

How to use Data Scaling Improve Deep Learning Model ...
Standardizing a dataset involves rescaling the distribution of values so that the mean of observed values is 0 and the standard deviation is...
Read more >

std::nan, std::nanf, std::nanl - cppreference.com
Converts the character string arg into the corresponding quiet NaN value, as if by calling std::strtof, std::strtod, or std::strtold, ...
Read more >

std function returns NaN for long floats · Issue #5725 - GitHub
Calling .groupby().std() on a dataframe with long floating point numbers returns NaNs and a warning. This occurs because internally the ...
Read more >
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I think the solution here is to create a numerically stable normalization, just like other normalization modules such as batch norm or layer norm.
On Fri, 12 Feb 2021 at 13:17, Parcollet Titouan (notifications@github.com) wrote:
I checked the code and, as you can see here, it is already numerically stable: https://github.com/speechbrain/speechbrain/blob/0b074e5a414be79e655ab47ad7809d4618ac2da0/speechbrain/processing/features.py#L1120
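To illustrate what a "numerically stable" standardization means in practice, here is a minimal sketch of such a guard. The eps value, the nan_to_num/clamp handling, and the function name are illustrative assumptions, not SpeechBrain's actual implementation (see the linked features.py for that):

```python
import torch

def safe_standardize(x: torch.Tensor, dim: int = 0, eps: float = 1e-10) -> torch.Tensor:
    """Standardize x along `dim`, guarding against zero or NaN std.

    Illustrative sketch only: floors the std at `eps` and replaces the
    NaN produced in the single-sample case, so the division below can
    never emit NaN or Inf.
    """
    mean = torch.mean(x, dim=dim, keepdim=True)
    std = torch.std(x, dim=dim, keepdim=True)
    std = torch.nan_to_num(std, nan=eps).clamp(min=eps)
    return (x - mean) / std

# The single-row case from the issue now yields finite zeros
# instead of NaN: (x - mean) is 0, divided by eps rather than NaN.
x = torch.FloatTensor([[-100.0, -100.0, -100.0, -100.0]])
print(safe_standardize(x))  # tensor([[0., 0., 0., 0.]])
```

Flooring the std at eps (rather than checking for NaN after the fact) also covers the constant-input case, where the true standard deviation is exactly zero and plain division would give Inf.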