
nan loss after a number of steps (epochs)

See original GitHub issue

I am able to train a regression network without error using the base Keras LSTM layer, but I always seem to run into a nan loss (MSE) after what starts out as very promising results using this PLSTM layer. I have tried most of the recommendations in Keras issues #2134 and #1244, but nothing seems to help. Do you have any troubleshooting recommendations for this issue with your PLSTM implementation?

Edit: I was able to get through 10x as many steps using the Adamax optimizer (as opposed to RMSprop or Adam), increased layer sizes all around, and an extra dense layer with a tanh activation. Unfortunately, the loss still eventually went to nan. Again, the training loss was significantly better than anything I was able to squeeze out of the plain LSTM approach. 😕
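
For context, here is a minimal sketch of the kind of setup described in the edit above, written against tf.keras. The layer sizes, input shape, and the use of the stock LSTM layer as a stand-in for the repo's PLSTM layer are assumptions; the clipnorm argument and the TerminateOnNaN callback are added as common safeguards against exploding gradients, not something the poster reported using.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.LSTM(128, input_shape=(100, 8)),  # stand-in for the PLSTM layer
    layers.Dense(64, activation="tanh"),     # extra dense layer with tanh
    layers.Dense(1),                         # regression output
])

# Adamax as reported above, plus gradient clipping as an extra safeguard.
opt = keras.optimizers.Adamax(learning_rate=1e-3, clipnorm=1.0)
model.compile(optimizer=opt, loss="mse")

# Dummy data; check for NaN/Inf before training, since bad inputs are a
# common source of nan losses.
x = np.random.randn(256, 100, 8).astype("float32")
y = np.random.randn(256, 1).astype("float32")
assert np.isfinite(x).all() and np.isfinite(y).all()

# Stop as soon as the loss becomes NaN instead of training on silently.
model.fit(x, y, batch_size=32, epochs=5,
          callbacks=[keras.callbacks.TerminateOnNaN()])
```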

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Comments: 6 (2 by maintainers)

Top GitHub Comments

1 reaction
Samyarrahimi commented, Aug 29, 2021

I had the nan loss problem with image data. I was using np.empty to generate the batches of images, and it looks like that was the cause of the nan loss; changing np.empty to np.zeros solved the problem.
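
A small illustration of why this matters (the batch shape here is made up): np.empty returns an uninitialized buffer, so any slot the generator does not explicitly overwrite, for example in a final, partially filled batch, can contain arbitrary values that poison the loss.

```python
import numpy as np

# np.empty allocates without initializing: leftover memory contents can be
# huge or non-finite, and any slot the generator forgets to fill leaks them
# into training.
batch = np.empty((32, 224, 224, 3), dtype=np.float32)

# np.zeros guarantees every slot starts at 0.0, so a partially filled batch
# stays finite even if some images are skipped.
batch = np.zeros((32, 224, 224, 3), dtype=np.float32)
```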

0 reactions
ruiatelsevier commented, Jul 22, 2019

I have got the same problem. It produces nan after a number of steps.

Read more comments on GitHub >

Top Results From Across the Web

  • NaN loss when training regression network - Stack Overflow
    In my case, I use the log value of density estimation as an input. The absolute value could be very huge, which may...
  • Keras Sequential model returns loss 'nan'
    This is what I got for the first 3 epochs after I replaced relu with tanh (high loss!): Epoch 1/10 1/1 - 9s -...
  • Getting NaN for loss - General Discussion - TensorFlow Forum
    Hi! The problem is not in the concatenation layer but in how you normalize the input data and how you pass it to... (see the input-scaling sketch after this list)
  • Common Causes of NANs During Training
    Gradient blow up · Bad learning rate policy and params · Faulty Loss function · Faulty...
  • A training loss turns into NaN after 300 epochs while ... - Quora
    A training loss turns into NaN after 300 epochs while training with a model Baidu deep speech framework! What could be the cause...
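
The Stack Overflow and TensorFlow Forum results above both point at the scale of the input data. A minimal, hypothetical sketch of standardizing inputs and clipping outliers before training follows; the array shapes, magnitudes, and clip thresholds are assumptions, not values from the thread.

```python
import numpy as np

def standardize(x, eps=1e-8):
    """Per-feature zero-mean, unit-variance scaling; eps avoids division by zero."""
    mean = x.mean(axis=0, keepdims=True)
    std = x.std(axis=0, keepdims=True)
    return (x - mean) / (std + eps)

# Hypothetical raw inputs with very large magnitudes (e.g. log-density values).
x_raw = (np.random.randn(1000, 8) * 1e4).astype("float32")

x = standardize(x_raw)
# Clip remaining outliers so a single extreme sample cannot blow up the loss.
x = np.clip(x, -5.0, 5.0)

assert np.isfinite(x).all()
```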
