Floating-point bit twiddling in CUDA
Feature request
I need to be able to view individual floating-point numbers as integers and back again in CUDA device functions. For example:

np.array(np.array(23.65).view('i8') | 1).view('d').item()

should give 23.650000000000002.

I also need to be able to use frexp and ldexp in CUDA device functions.
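For reference, the requested semantics already work on the host with NumPy and the standard library's math module; the request is for the same operations to be usable inside @cuda.jit device functions:

```python
import math

import numpy as np

# Reinterpret the bits of a float64 as an int64, set the lowest mantissa bit,
# and reinterpret the result back as a float64 (one ulp above 23.65).
bits = np.array(23.65).view('i8')
print(np.array(bits | 1).view('d').item())   # 23.650000000000002

# Decompose a float into mantissa and exponent, then rebuild it.
m, e = math.frexp(23.65)                     # 23.65 == m * 2**e, with 0.5 <= m < 1
print(m, e)                                  # 0.7390625 5
print(math.ldexp(m, e))                      # 23.65
```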
Issue Analytics
- Created: 3 years ago
- Reactions: 1
- Comments: 11 (11 by maintainers)
@zhihaoy You can use the code in https://numba.discourse.group/t/cuda-building-extensions-to-call-libdevice-functions-cube-root-example/140 as a starting point for making a Numba extension that calls __nv_ldexp and __nv_frexp as a workaround for now. If you modify the names of the functions (and the Python implementation of cbrt so that it is ldexp / frexp), that should give you ldexp and frexp functions you can call from CUDA kernels.
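A minimal sketch of what such an extension could look like for ldexp, assuming numba.extending.intrinsic and numba.core.cgutils.get_or_insert_function behave as in recent Numba releases; this is not the code from the linked post, and the names cuda_ldexp and scale_by_power_of_two are made up for illustration:

```python
from llvmlite import ir

import numpy as np
from numba import cuda, int32, types
from numba.core import cgutils
from numba.extending import intrinsic


@intrinsic
def cuda_ldexp(typingctx, x, e):
    # Type the call as float64(float64, int32) and lower it to a call to the
    # libdevice function __nv_ldexp, resolved when libdevice is linked.
    sig = types.float64(types.float64, types.int32)

    def codegen(context, builder, signature, args):
        fnty = ir.FunctionType(ir.DoubleType(), [ir.DoubleType(), ir.IntType(32)])
        fn = cgutils.get_or_insert_function(builder.module, fnty, '__nv_ldexp')
        return builder.call(fn, args)

    return sig, codegen


@cuda.jit
def scale_by_power_of_two(out, x, e):
    # ldexp(x, e) == x * 2**e
    out[0] = cuda_ldexp(x, int32(e))


out = cuda.device_array(1, dtype=np.float64)
scale_by_power_of_two[1, 1](out, 0.7390625, 5)
print(out.copy_to_host()[0])   # expected: 23.65
```

__nv_frexp is a little more involved because libdevice returns the exponent through a pointer argument, so its codegen would also need an alloca for the out-parameter; note too that newer Numba releases ship numba.cuda.libdevice bindings, which may make a hand-written extension unnecessary.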
Now that @overload is supported on the CUDA target, I have made a start on reusing the implementation in the core of Numba for this: https://github.com/gmarkall/numba/tree/cuda-scalar-view. The change itself is only a small diff on that branch; however, there are some test failures for this functionality on the CUDA target, and I have not yet looked into whether there is a problem in the implementation related to CUDA, or whether the tests need some adjustment.
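For context, a hypothetical sketch of what kernel code could look like once scalar .view() works on the CUDA target; the spelling assumed below (NumPy scalar constructors plus .view() with a dtype argument) mirrors the host example in the request and may differ from what the branch ends up supporting:

```python
import numpy as np
from numba import cuda


@cuda.jit
def set_low_mantissa_bit(out, x):
    # Hypothetical once scalar .view() is available on the CUDA target:
    # reinterpret the float64 bits as an int64, set the lowest bit, and
    # reinterpret the result back as a float64.
    bits = np.float64(x).view(np.int64)
    out[0] = np.int64(bits | 1).view(np.float64)


out = cuda.device_array(1, dtype=np.float64)
set_low_mantissa_bit[1, 1](out, 23.65)
print(out.copy_to_host()[0])   # expected: 23.650000000000002
```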