Bug in binary precision
Basic tests of binary precision seem to fail:
import torch
from ignite.metrics import Precision
from sklearn.metrics import precision_score

precision = Precision(average=True)
y_pred = torch.rand(10, 1)                                   # random probabilities in [0, 1)
y = torch.randint(0, 2, size=(10,)).type(torch.LongTensor)   # binary targets
precision.update((y_pred, y))

# Reference value from scikit-learn, thresholding at 0.5; should match precision.compute()
np_y = y.numpy().ravel()
np_y_pred = (y_pred.numpy().ravel() > 0.5).astype('int')
precision_score(np_y, np_y_pred), precision.compute()
@jasonkriss could you please confirm this?
EDIT: Another failing test: https://github.com/pytorch/ignite/pull/333#issuecomment-442643530
The error is probably in the binary-to-categorical mapping, which counts class 0 the same way as class 1; in the binary case class 0 should be ignored (see the sketch below).
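For reference, here is a minimal sketch of what the binary case should compute. This is not ignite's implementation; the helper name and the 0.5 threshold are assumptions. It counts true and false positives for class 1 only, which agrees with sklearn's precision_score on binary inputs.

import torch

def binary_precision(y_pred, y, threshold=0.5):
    # Hypothetical helper: precision for the positive class only (class 0 is ignored)
    pred = (y_pred.view(-1) > threshold).long()
    y = y.view(-1).long()
    tp = ((pred == 1) & (y == 1)).sum().item()   # true positives
    fp = ((pred == 1) & (y == 0)).sum().item()   # false positives
    return tp / (tp + fp) if (tp + fp) > 0 else 0.0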
There is also no check of the input type ("binary" or "categorical") if a user mixes both across several updates; a rough sketch of such a check follows.
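One possible way to add that check, as a sketch only (the helper name and its use inside update() are assumptions, not ignite's API):

import torch

def _check_input_type(y_pred, expected=None):
    # Hypothetical helper: infer "binary" vs "categorical" from the prediction shape
    # and fail loudly if the type changes between update() calls
    kind = "binary" if y_pred.ndimension() == 1 or y_pred.shape[1] == 1 else "categorical"
    if expected is not None and kind != expected:
        raise ValueError("Input type changed from {} to {} between updates".format(expected, kind))
    return kind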
Issue Analytics
- Created: 5 years ago
- Comments: 11 (6 by maintainers)
Top GitHub Comments
Once this is merged we should probably cut a release. I think for bugfixes we should release often, just to avoid anyone running for too long with a buggy version
So we could follow the pytorch path here. 0.2 can contain the backwards-incompatible changes and this can be 0.1.2; we can switch the warning message to say 0.2, and all the current tickets assigned to 0.1.2 we can just move to 0.2, and that can be the next release (pending no further bug fixes). wdyt?