
/home/nitin/anaconda3/envs/automlgluon/lib/python3.6/site-packages/sklearn/metrics/classification.py:2174: RuntimeWarning: divide by zero encountered in log loss = -(transformed_labels * np.log(y_pred)).sum(axis=1)

See original GitHub issue

I am using autogluon to help me with a binary classification problem. It is an imbalanced dataset (90:10), and the neural network breaks down during training and never recovers. I guess a reinitialization once this has been encountered would help?

/home/nitin/anaconda3/envs/automlgluon/lib/python3.6/site-packages/sklearn/metrics/classification.py:2174: RuntimeWarning: divide by zero encountered in log
  loss = -(transformed_labels * np.log(y_pred)).sum(axis=1)
/home/nitin/anaconda3/envs/automlgluon/lib/python3.6/site-packages/sklearn/metrics/classification.py:2174: RuntimeWarning: divide by zero encountered in log
  loss = -(transformed_labels * np.log(y_pred)).sum(axis=1)
/home/nitin/anaconda3/envs/automlgluon/lib/python3.6/site-packages/sklearn/metrics/classification.py:2174: RuntimeWarning: divide by zero encountered in log
  loss = -(transformed_labels * np.log(y_pred)).sum(axis=1)
/home/nitin/anaconda3/envs/automlgluon/lib/python3.6/site-packages/sklearn/metrics/classification.py:2174: RuntimeWarning: divide by zero encountered in log
  loss = -(transformed_labels * np.log(y_pred)).sum(axis=1)
/home/nitin/anaconda3/envs/automlgluon/lib/python3.6/site-packages/sklearn/metrics/classification.py:2174: RuntimeWarning: divide by zero encountered in log

The task fits all the other models just fine and hits an AUC of 0.74+, but the neural network has issues (dead ReLUs, inf encountered, and now it won't recover? just guesses), and I think a reinitialization once the weights have been irrecoverably corrupted would help.
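For reference (this is not autogluon's internal fix), the warning itself just means np.log was handed a predicted probability of exactly 0, or a NaN once the network has diverged. A minimal NumPy sketch of the clipping that keeps the loss finite, with made-up values:

```python
import numpy as np

def clipped_log_loss(y_true, y_pred, eps=1e-15):
    """Binary log loss with predictions clipped away from 0 and 1,
    so np.log never sees an exact zero (the source of the RuntimeWarning)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))

# A predicted probability of exactly 0.0 for a positive example would otherwise
# trigger "divide by zero encountered in log"; clipping keeps the loss finite.
print(clipped_log_loss([1, 0, 1], [0.0, 0.1, 0.9]))
```

Clipping only silences the symptom, though; if the network's outputs have collapsed to 0 or NaN, the underlying training divergence still has to be fixed.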

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 7 (3 by maintainers)

Top GitHub Comments

1 reaction
nitinmnsn commented, May 19, 2020

I have seen this issue when writing my own neural-network-based pipeline (it was a couple of years back though). I tried a few things to solve it and they helped (all in TensorFlow):

  1. Gradient clipping (rough Keras sketch after this list)
  2. Batchnorm as the first layer
  3. Weight normalization (you have batchnorm with dropout; I have read that they don't work well together). If you keep batchnorm anyway, then freezing gamma at 1 and beta at 0 for the first few epochs might help
  4. Reinitializing the network altogether when a NaN is encountered in the loss
  5. Leaky ReLU for the first few epochs and then ReLU. Also, SELU was great at keeping the network from degenerating, even at 6-7 layers of depth. Deep feed-forward networks with dense connections (DenseNet style) and SELU (since SELU can handle depth) never hit the issue. But you are going for speed as well, and SELU needs at least 100 neurons in the hidden layers for the fixed-point theorem to kick in, plus a huge network … so … I don't know, just putting down what I had tried
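For what it's worth, here is a rough TensorFlow/Keras sketch of points 1, 2, 4 and 5 above; the layer sizes, learning rate, and clipping threshold are made up, and this is not autogluon's tabular network, just the shape of the workaround:

```python
import numpy as np
import tensorflow as tf

def build_model(n_features):
    # Batchnorm as the very first layer, LeakyReLU instead of plain ReLU.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(n_features,)),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(64),
        tf.keras.layers.LeakyReLU(alpha=0.1),
        tf.keras.layers.Dense(64),
        tf.keras.layers.LeakyReLU(alpha=0.1),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

def fit_with_restarts(X, y, max_restarts=3):
    for attempt in range(max_restarts):
        model = build_model(X.shape[1])  # fresh weights on every attempt
        model.compile(
            # clipnorm applies gradient clipping (point 1).
            optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0),
            loss="binary_crossentropy",
            metrics=[tf.keras.metrics.AUC(name="auc")],
        )
        history = model.fit(
            X, y,
            epochs=20,
            batch_size=256,
            verbose=0,
            # Stop the current run as soon as the loss goes NaN/inf.
            callbacks=[tf.keras.callbacks.TerminateOnNaN()],
        )
        if np.isfinite(history.history["loss"]).all():
            return model  # training stayed finite, keep this model
        print(f"loss diverged on attempt {attempt + 1}, reinitializing")  # point 4
    raise RuntimeError("loss diverged on every restart")
```

Swapping the LeakyReLU layers for `Dense(..., activation="selu", kernel_initializer="lecun_normal")` would be the SELU variant from point 5.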

Also, in my case the network did break down at times, but I didn't see it break down on every initialization, which is what is happening here. I dropped around 100 features using null importance (rough sketch below) and the neural network is still breaking on every initialization.
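Null importance isn't spelled out in this thread; as a hypothetical sketch of the idea, each feature's real importance is compared against the importances it gets when the target is shuffled, and only features that beat their null distribution are kept (the random-forest model and the 95% threshold are assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def null_importance_mask(X, y, n_null_runs=20, quantile=0.95, random_state=0):
    """Keep a feature only if its importance on the real target beats the
    `quantile` of the importances it earns on shuffled targets."""
    rng = np.random.default_rng(random_state)

    def importances(target):
        model = RandomForestClassifier(n_estimators=200, random_state=random_state, n_jobs=-1)
        model.fit(X, target)
        return model.feature_importances_

    real = importances(y)
    null = np.stack([importances(rng.permutation(y)) for _ in range(n_null_runs)])
    threshold = np.quantile(null, quantile, axis=0)
    return real > threshold  # boolean mask over the columns of X

# usage: X_reduced = X[:, null_importance_mask(X, y)]
```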

0 reactions
jwmueller commented, Jun 15, 2020
Read more comments on GitHub >

Top Results From Across the Web

RuntimeWarning: divide by zero encountered in log
numpy.log10(prob) calculates the base 10 logarithm for all elements of prob, even the ones that aren't selected by the where.
when prediction value is very small, metric logloss calculate ...
... /classification.py:2174: RuntimeWarning: divide by zero encountered in log loss = -(transformed_labels * np.log(y_pred)).sum(axis=1) To ...
numpy.sum — NumPy v1.24 Manual
The default, axis=None, will sum all of the elements of the input array. If axis is negative it counts from the last to...
log-sum-exp-trick - Jupyter Notebooks Gallery
A gallery of the most interesting jupyter notebooks online.
What is numpy.sum() in Python? - Educative.io
sum() returns an array with the same shape as the input array, with the specified axis removed. If a is a zero-dimensional...
