
Loss drops to 0 every 50 epochs on TOX21

See original GitHub issue

Hello,

I’m currently using DeepChem 2.3.0.

I’m having trouble training on the TOX21 dataset. I tried multiple models (currently GraphConv, Weave and MPNN), and for each one, the loss drops to 0 every 50 epochs.

Any idea what could cause this?

Here is a small script that reproduces the problem with GraphConvModel.

import numpy as np
import deepchem as dc
from deepchem.molnet import load_tox21
from deepchem.models.graph_models import GraphConvModel
import matplotlib.pyplot as plt

model_dir = "/tmp/graph_conv"

# Load TOX21 with graph-convolution featurization.
tox21_tasks, tox21_datasets, transformers = load_tox21(featurizer='GraphConv')
train_dataset, valid_dataset, test_dataset = tox21_datasets

# Mean ROC-AUC across the 12 TOX21 classification tasks.
metric = dc.metrics.Metric(
    dc.metrics.roc_auc_score, np.mean, mode="classification")

batch_size = 50

model = GraphConvModel(
    len(tox21_tasks), batch_size=batch_size, mode='classification',
    model_dir=model_dir)

# Train one epoch at a time, recording the loss returned by fit()
# so the periodic drop to 0 shows up in the plot.
losses = []
for i in range(105):
    loss = model.fit(train_dataset, nb_epoch=1)
    losses.append(loss)

print(losses)
plt.plot(losses)
plt.show()
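
The metric defined in the script is never actually used; for completeness, here is a minimal evaluation sketch (not part of the original report) using DeepChem's standard evaluate call:

# Mean ROC-AUC on the train and validation splits; transformers are
# passed so the featurization-time transforms are undone consistently.
train_scores = model.evaluate(train_dataset, [metric], transformers)
valid_scores = model.evaluate(valid_dataset, [metric], transformers)
print("Train scores:", train_scores)
print("Validation scores:", valid_scores)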

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 17 (15 by maintainers)

Top GitHub Comments

1 reaction
chstem commented, Aug 6, 2020

You were right; somehow I didn’t have the DeepChem version pip was claiming I had. Now I can confirm the issue is fixed.
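
For anyone hitting the same mismatch, a quick sanity check is to compare the version Python actually imports against what pip reports (a minimal sketch, not from the original thread):

import deepchem

# The version string of the package actually being imported...
print(deepchem.__version__)
# ...and its location on disk, to spot a stale or shadowed install.
print(deepchem.__file__)

If this disagrees with pip show deepchem, a leftover install or a second Python environment is usually to blame.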

0 reactions
rbharath commented, Aug 6, 2020

Awesome 😃. Going to go ahead and close this issue. Will reopen if there are any more reports of this still being broken.

Read more comments on GitHub >

Top Results From Across the Web

A deep Tox21 neural network with RDKit and Keras
During training the learning rate is reduced when no drop in loss function is observed for 50 epochs. This is conveniently done via...
Read more >
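
The truncated sentence above presumably refers to a reduce-on-plateau learning-rate callback; a minimal Keras sketch of that pattern (the monitor and factor values here are assumptions, not taken from the article):

from tensorflow.keras.callbacks import ReduceLROnPlateau

# Halve the learning rate whenever the training loss has not
# improved for 50 consecutive epochs.
reduce_lr = ReduceLROnPlateau(monitor='loss', factor=0.5, patience=50)
# model.fit(x_train, y_train, epochs=500, callbacks=[reduce_lr])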
Why there is sudden drop in loss after every epoch?
Let's continue with the example, so the loss is 0.25 at the beginning of epoch 2 and decreases linearly to 0. This means...
Read more >
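
The answer quoted above rests on the fact that training progress bars typically report a running average of the loss that resets at each epoch boundary; a small synthetic sketch of the effect (the numbers are illustrative, not from the linked post):

import numpy as np

# Per-batch loss improves smoothly over 200 batches (2 epochs of 100).
batch_loss = np.linspace(1.0, 0.0, 200)

for epoch in range(2):
    epoch_losses = batch_loss[epoch * 100:(epoch + 1) * 100]
    # Running mean over batches seen so far in the current epoch,
    # which is what a Keras-style progress bar displays.
    running_mean = np.cumsum(epoch_losses) / np.arange(1, 101)
    print(f"epoch {epoch + 1}: reported loss "
          f"{running_mean[0]:.2f} -> {running_mean[-1]:.2f}")

# The reported loss jumps from about 0.75 down to 0.50 at the epoch
# boundary even though the per-batch loss is perfectly smooth.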
Loss not changing when training · Issue #2711 · keras-team ...
I have a model that I am trying to train where the loss does not go ... I’ve waited for about...
Read more >
Chapter 4. Fully Connected Deep Networks - O'Reilly
For large enough networks, it is quite common for training loss to trend all the way to zero. This empirical observation is one...
Read more >
What is the cause of the sudden drop in error rate that one ...
It seems a bit crazy that the loss gets lowered by a factor of 3 in 1 epoch by lowering the learning rate. "we...
Read more >
