Problems with binary classifier & metrics

See original GitHub issue

Hi,

I am new to PyTorch and am trying to get started with Ignite. I want to train a simple binary classifier:

import torch.nn as nn

class NeuralNet(nn.Module):
  def __init__(self, input_size, hidden_size, num_classes):
    super(NeuralNet, self).__init__()
    self.fc1 = nn.Linear(input_size, hidden_size)
    self.relu = nn.ReLU()
    self.fc2 = nn.Linear(hidden_size, num_classes)
    self.out_act = nn.Sigmoid()  # sigmoid output, to be paired with BCELoss

  def forward(self, x):
    out = self.fc1(x)
    out = self.relu(out)
    out = self.fc2(out)
    out = self.out_act(out)
    return out
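
For context, a quick shape check (a minimal sketch; the sizes and batch are hypothetical, not from the issue) shows that with num_classes=1 the model emits one sigmoid probability per sample, i.e. the (N, 1) float output that nn.BCELoss expects:

import torch

model = NeuralNet(input_size=10, hidden_size=32, num_classes=1)  # hypothetical sizes
x = torch.randn(4, 10)   # a batch of 4 samples
y_pred = model(x)
print(y_pred.shape)      # torch.Size([4, 1]), values in (0, 1)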

First I had this (plus some event handlers; a sketch of what those might have looked like follows the code below):

from ignite.engine import create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import Precision, Recall, BinaryAccuracy

trainer = create_supervised_trainer(model, optimizer, nn.BCELoss(), device=device)

evaluator = create_supervised_evaluator(
    model, metrics={'precision': Precision(),
                    'recall': Recall(),
                    'accuracy': BinaryAccuracy()}, device=device)
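
The omitted event handlers were presumably along these lines (a hedged sketch using Ignite's Events API; val_loader and the logging are placeholders, not from the issue):

from ignite.engine import Events

@trainer.on(Events.EPOCH_COMPLETED)
def log_validation_results(engine):
    # run the evaluator on the validation set after every training epoch
    evaluator.run(val_loader)  # val_loader is hypothetical
    print('epoch', engine.state.epoch, evaluator.state.metrics)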

Running the evaluator failed with:

~/.local/share/virtualenvs/bot2-Z_RSwUyv/lib/python3.6/site-packages/ignite/metrics/precision.py in update(self, output)
     29         num_classes = y_pred.size(1)
     30         indices = torch.max(y_pred, 1)[1]
---> 31         correct = torch.eq(indices, y)
     32         pred_onehot = to_onehot(indices, num_classes)
     33         all_positives = pred_onehot.sum(dim=0)

RuntimeError: Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #2 'other'
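
The mismatch is visible in isolation (a minimal sketch with hypothetical tensors): Precision.update argmaxes y_pred into a LongTensor of class indices, while targets suitable for nn.BCELoss are floats, and torch.eq on the PyTorch 0.4-era builds required matching dtypes:

import torch

y_pred = torch.tensor([[0.8], [0.3], [0.6]])  # sigmoid probabilities, float
y = torch.tensor([1.0, 0.0, 1.0])             # BCELoss-style targets, float

indices = torch.max(y_pred, 1)[1]             # argmax over dim 1 -> LongTensor
# torch.eq(indices, y) raised the RuntimeError above on PyTorch 0.4
# because the dtypes differ; newer versions type-promote instead.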

Then I found the output_transform parameter and changed it to:

def output_transform(output):
  y_pred, y = output
  return y_pred.gt(0.5).long(), y.long()

evaluator = create_supervised_evaluator(
    model, metrics={'precision': Precision(output_transform=output_transform),
                    'recall': Recall(output_transform=output_transform),
                    'accuracy': BinaryAccuracy(output_transform=output_transform)}, device=device)

Even if this worked, I don't think it would be a good solution. In my opinion, it should be possible to apply the transformation in one place instead of passing it to every metric. Is that somehow possible? (One option is sketched below, after the traceback.) Anyway, this fails with:

~/.local/share/virtualenvs/bot2-Z_RSwUyv/lib/python3.6/site-packages/ignite/metrics/precision.py in update(self, output)
     35             true_positives = torch.zeros_like(all_positives)
     36         else:
---> 37             correct_onehot = to_onehot(indices[correct], num_classes)
     38             true_positives = correct_onehot.sum(dim=0)
     39         if self._all_positives is None:

IndexError: too many indices for tensor of dimension 1
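
A plausible cause (an assumption based on broadcasting behavior, since the issue does not show the target shape): if y has shape (N, 1) to match the model output, the 1-D indices tensor inside Precision.update broadcasts against it, and the resulting 2-D mask cannot index a 1-D tensor:

import torch

indices = torch.tensor([1, 0, 1])   # shape (3,), argmax of transformed y_pred
y = torch.tensor([[1], [0], [0]])   # shape (3, 1), hypothetical targets
correct = torch.eq(indices, y)      # silently broadcasts to shape (3, 3)
# indices[correct] then raises an IndexError, because a 2-D mask
# cannot be used to index the 1-D indices tensor.

# flattening the target avoids the stray broadcast:
correct = torch.eq(indices, y.view(-1))  # shape (3,), as intended

If that is the cause, returning y.view(-1).long() from the transform, while keeping y_pred two-dimensional (Precision reads num_classes from y_pred.size(1)), should avoid the IndexError, though with a single output column the argmax is always 0, so proper binary support in the metrics is the real fix.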

Am I on the right track?
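
On the "one place" question: create_supervised_evaluator itself accepts an output_transform that shapes what the engine stores as state.output, which all attached metrics then consume by default. A hedged sketch, mirroring the metrics from the issue (the parameter's signature has varied across Ignite releases; in current ones it is output_transform(x, y, y_pred)):

def eval_output_transform(x, y, y_pred):
    # applied once per batch; every attached metric sees this pair
    return y_pred.gt(0.5).long(), y.view(-1).long()

evaluator = create_supervised_evaluator(
    model,
    metrics={'precision': Precision(),
             'recall': Recall(),
             'accuracy': BinaryAccuracy()},
    device=device,
    output_transform=eval_output_transform)

This only helps when every attached metric accepts the same (y_pred, y) convention; per-metric output_transform remains the escape hatch when they disagree.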

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 10 (4 by maintainers)

Top GitHub Comments

1 reaction
vfdev-5 commented, Sep 20, 2018

@anmolsjoshi please go ahead. I think nobody has yet taken this one.

0 reactions
anmolsjoshi commented, Sep 20, 2018

Is someone working on this? Can I help out?
