nan loss after a number of steps (epochs)
I am able to train a regression network without error using the base Keras `LSTM` layer, but I always seem to run into a `nan` loss (MSE) after what starts out as very promising results using this PLSTM layer. I have tried most of the recommendations in the Keras issues #2134 and #1244, but nothing seems to help. Do you have any troubleshooting recommendations for this issue with your PLSTM implementation?
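The recommendations in those threads mostly amount to checking the data for non-finite values, clipping gradients, and lowering the learning rate. A minimal sketch of what that looks like, with `model`, `X_train`, and `y_train` standing in for the actual model and data:

```python
import numpy as np
from keras.optimizers import RMSprop

# A single non-finite value in the inputs or targets will
# eventually poison the loss, so rule that out first.
assert np.all(np.isfinite(X_train)) and np.all(np.isfinite(y_train))

# Gradient clipping plus a smaller learning rate are the other
# two fixes those threads keep coming back to.
model.compile(optimizer=RMSprop(lr=1e-4, clipnorm=1.0), loss='mse')
```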
Edit: I was able to get through 10x as many steps using the `adamax` optimizer (as opposed to `rmsprop` or `adam`), increased layer sizes all around, and an extra `Dense` layer with a `tanh` activation. Unfortunately, the loss still went to `nan`. Again, the training loss was significantly better than anything I was able to squeeze out of the plain LSTM approach. 😕
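For anyone trying to reproduce this, roughly the setup described above, as a sketch only: the stock `LSTM` stands in where the PLSTM layer from this repo would go, and `timesteps`, `features`, `X_train`, and `y_train` are placeholders.

```python
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.optimizers import Adamax
from keras.callbacks import TerminateOnNaN

model = Sequential()
# The PLSTM layer from this repo would replace the stock LSTM here.
model.add(LSTM(256, input_shape=(timesteps, features)))
model.add(Dense(128, activation='tanh'))  # the extra tanh Dense layer
model.add(Dense(1))                       # linear output for regression
model.compile(optimizer=Adamax(), loss='mse')

# TerminateOnNaN aborts the fit as soon as the loss becomes nan,
# which helps pin down exactly when the blow-up happens.
model.fit(X_train, y_train, epochs=100, callbacks=[TerminateOnNaN()])
```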
Top GitHub Comments
I had the `nan` loss problem with image data. I was using `np.empty` to generate the batches of images, and it looks like that was the cause of the `nan` loss; changing `np.empty` to `np.zeros` solved the problem.
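The failure mode here is that `np.empty` only allocates memory without initializing it, so any batch slot the generator fails to overwrite carries whatever bytes happened to be there, which can decode to huge values or `nan`. A quick illustration (the shape is just an example):

```python
import numpy as np

# Risky: uninitialized buffer, may contain arbitrary junk,
# including inf/nan, in any slot that is never overwritten.
batch = np.empty((32, 64, 64, 3), dtype=np.float32)

# Safe: a fully initialized, all-zero buffer.
batch = np.zeros((32, 64, 64, 3), dtype=np.float32)
```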
I have got the same problem: the loss goes to `nan` after a number of steps.