np.isnan doesn't work on CPU in fast-math mode
As a result, np.nan_to_num, np.nanmean, etc. all don't work either.
import jax.numpy as np
a = np.zeros(1) / np.zeros(1)   # 0/0 produces NaN
print(a.__array__())
print(np.isnan(a).__array__())  # should print [True]

Output:
[nan]
[False]
This bug only happens with the CPU-only build of JAX, i.e. when I see the warning "No GPU found, falling back to CPU." (emitted via warnings.warn).
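Until the underlying fast-math behavior is addressed (see the maintainer comments below), one possible stopgap, not part of the original report, is to test the IEEE-754 bit pattern directly, since integer comparisons are unaffected by fast-math assumptions. This is only a sketch: it assumes float32 inputs, assumes jax.lax.bitcast_convert_type is available in your jaxlib build, and the helper name isnan_bitwise is made up for illustration.

import jax.numpy as np
from jax import lax

def isnan_bitwise(x):
    # Hypothetical helper (not an existing JAX API): a float32 NaN has all
    # exponent bits set (mask 0x7F800000) and a nonzero mantissa
    # (mask 0x007FFFFF), so check the raw bit pattern instead of relying
    # on floating-point comparisons that fast math may optimize away.
    bits = lax.bitcast_convert_type(x, np.uint32)  # reinterpret float32 bits
    has_max_exponent = (bits & 0x7F800000) == 0x7F800000
    has_nonzero_mantissa = (bits & 0x007FFFFF) != 0
    return has_max_exponent & has_nonzero_mantissa

a = np.zeros(1) / np.zeros(1)
print(isnan_bitwise(a).__array__())   # expected: [ True]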
Issue Analytics
- Created: 5 years ago
- Comments: 14 (7 by maintainers)
We just pushed out jaxlib 0.1.13, which should fix this problem. Parts of fast math are still enabled by default for performance, but the semantics of NaNs and Infs should now be honored. Please file a new issue if you see any further problems!
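For anyone hitting this, a quick sanity check after upgrading (a sketch, assuming a standard pip-managed install) is to re-run the original reproduction and confirm isnan now reports the NaN:

# pip install --upgrade jaxlib   (0.1.13 or newer)
import jax.numpy as np

a = np.zeros(1) / np.zeros(1)
print(np.isnan(a).__array__())   # with the fix, this should print [ True]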
A brief update on this bug: we tried disabling fast math in XLA/CPU by default, but found that it significantly regressed performance on some neural network benchmarks because it prevents vectorization in some important cases.
https://reviews.llvm.org/D57728 apparently fixes the performance problem, but it isn’t in yet. I’m hoping we can simply disable fast math by default when that change makes it into LLVM.
A warning makes sense until we do so, I guess.
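As a rough illustration of the kind of warning discussed here (purely hypothetical, not actual JAX code), something like the following could be emitted once when only the fast-math CPU backend is in use:

import warnings

def _warn_if_cpu_fastmath(platform):
    # Hypothetical helper for illustration only; the platform string is
    # assumed to come from whatever backend-selection logic JAX uses.
    if platform == "cpu":
        warnings.warn(
            "XLA CPU backend was built with fast math enabled; NaN/Inf "
            "semantics may not be fully honored (e.g. isnan can return "
            "False for NaN values)."
        )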