
Bug in binary precision

See original GitHub issue

Basic tests of binary precision seem to fail:

import torch
from sklearn.metrics import precision_score
from ignite.metrics import Precision

precision = Precision(average=True)

y_pred = torch.rand(10, 1)
y = torch.randint(0, 2, size=(10,)).type(torch.LongTensor)

precision.update((y_pred, y))

np_y = y.numpy().ravel()
np_y_pred = (y_pred.numpy().ravel() > 0.5).astype('int')

# These two values should agree, but do not:
precision_score(np_y, np_y_pred), precision.compute()

@jasonkriss could you please confirm this?

EDIT: Another failing test: https://github.com/pytorch/ignite/pull/333#issuecomment-442643530

The error is probably in the binary-to-categorical mapping: class 0 is counted the same way as class 1, but in the binary case class 0 should be ignored.
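A minimal sketch of the suspected mismatch (this is an illustration, not ignite's actual implementation): if binary labels are mapped to two categorical classes and per-class precision is macro-averaged over both, the result differs from binary precision, which scores only the positive class.

```python
import numpy as np

y_true = np.array([0, 0, 1, 1, 1])
y_pred = np.array([0, 1, 1, 1, 0])

def per_class_precision(y_true, y_pred, cls):
    # precision for one class: correct predictions of `cls` / all predictions of `cls`
    predicted = y_pred == cls
    if predicted.sum() == 0:
        return 0.0
    return float(((y_true == cls) & predicted).sum() / predicted.sum())

# binary precision: positive class (1) only
binary = per_class_precision(y_true, y_pred, 1)

# macro average over both classes, including class 0
macro = np.mean([per_class_precision(y_true, y_pred, c) for c in (0, 1)])

print(binary)  # 0.666...
print(macro)   # 0.583... -- differs because class 0 is counted too
```

Here class 1 has 2 correct out of 3 predictions (2/3), while class 0 contributes 1/2, so the macro average drops to 7/12, exactly the kind of discrepancy the test above exposes against `precision_score`.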

There is also no input-type check ("binary" vs "categorical") if the user mixes both across several updates.
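A hypothetical sketch of such a check (the class name and the shape-based heuristic are assumptions for illustration): remember the input type seen on the first update and reject later updates of the other type.

```python
import torch

class TypeCheckedMetric:
    def __init__(self):
        self._type = None  # "binary" or "categorical", fixed by the first update

    def update(self, output):
        y_pred, y = output
        # heuristic: a 1-D prediction or a single output column is binary
        if y_pred.ndimension() == 1 or y_pred.shape[1] == 1:
            kind = "binary"
        else:
            kind = "categorical"
        if self._type is None:
            self._type = kind
        elif self._type != kind:
            raise RuntimeError(
                f"Input type changed from {self._type} to {kind} between updates"
            )

m = TypeCheckedMetric()
m.update((torch.rand(10, 1), torch.randint(0, 2, (10,))))    # binary: OK
# m.update((torch.rand(10, 4), torch.randint(0, 4, (10,))))  # would raise RuntimeError
```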

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 11 (6 by maintainers)

Top GitHub Comments

3 reactions
alykhantejani commented, Dec 6, 2018

Once this is merged we should probably cut a release. I think for bugfixes we should release often, just to avoid anyone running for too long with a buggy version

1 reaction
alykhantejani commented, Dec 7, 2018

So we could follow the PyTorch path here: 0.2 can contain the backwards-incompatible changes, and this can be 0.1.2. We can switch the warning message to say 0.2, move all the tickets currently assigned to 0.1.2 over to 0.2, and make that the next release (pending no further bug fixes).

wdyt?


Top Results From Across the Web

Why “Volatile” Fixes the 2.2250738585072011e-308 Bug
In this code, the fmulp and faddp instructions are separated by a load/store sequence, forcing adj to double-precision before adding it to value …
Read more >
Realized bug was caused by decimal number precision ...
it's constrained by the memory available for each number (32 bits in this case), and b. computers use binary, not decimal. So it's...
Read more >
Pentium FDIV bug
Because of the bug, the processor would return incorrect binary floating point results when dividing certain pairs of high-precision numbers.
Read more >
gcc precision bug?
What Ed calls precision error is a feature of floating-point arithmetic. There's no way around it, it's not a bug, not careless implementation …
Read more >
Dynamic Test Generation To Find Integer Bugs in x86 ...
For example, machine arithmetic has bounded precision; if an expression has a value greater than the maximum integer that can be represented, …
Read more >
