Enable custom-ops for tensorflow-cpu
Currently tensorflow-cpu will fail when trying to load custom ops with an undefined symbol, __cudaPushCallConfiguration:
```
from tensorflow_addons.activations.gelu import gelu
  File "/usr/local/lib/python3.7/site-packages/tensorflow_addons/activations/gelu.py", line 24, in <module>
    get_path_to_datafile("custom_ops/activations/_activation_ops.so"))
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/framework/load_library.py", line 57, in load_op_library
    lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: /usr/local/lib/python3.7/site-packages/tensorflow_addons/custom_ops/activations/_activation_ops.so: undefined symbol: __cudaPushCallConfiguration
```
I’m not quite sure what is causing this without doing a deep dive, but I’m linking this possibly related PR, since it was a departure from standard TF linking: https://github.com/tensorflow/addons/pull/539
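For reference, the failure can be reproduced just by importing an activation that needs the compiled kernels. The snippet below is a minimal sketch, assuming tensorflow-cpu and tensorflow-addons are installed from pip on Linux; the error handling is illustrative, not part of the original report.

```python
# Minimal reproduction sketch (assumes tensorflow-cpu and tensorflow-addons
# installed via pip on Linux; exact paths and line numbers will differ).
import tensorflow as tf

try:
    # Importing the activation triggers load_op_library() on
    # custom_ops/activations/_activation_ops.so, as in the traceback above.
    from tensorflow_addons.activations import gelu
    print(gelu(tf.constant([-1.0, 0.0, 1.0])))
except tf.errors.NotFoundError as err:
    # On tensorflow-cpu the shared object cannot resolve the CUDA stub
    # symbols (e.g. __cudaPushCallConfiguration) and fails to load.
    print("Custom op library failed to load:", err)
```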
The problem is that libtensorflow_framework.so.2 exports CUDA stubs that TensorFlow uses to dynamically load the CUDA runtime; see https://github.com/tensorflow/tensorflow/blob/master/tensorflow/stream_executor/cuda/cudart_stub.cc. However, tensorflow-cpu doesn’t ship these stubs! A simple reordering of the TFA link order so that the CUDA libraries come first seems to solve the problem. Let me explain. Here is the import table of _activation_ops.so: [listing not captured]. And here are the exports of libtensorflow_framework.so.2: [listing not captured]. After a simple modification of https://github.com/tensorflow/addons/blob/master/tensorflow_addons/tensorflow_addons.bzl, that import listing now comes back empty and _activation_ops.so grows in size. Looks great, but I haven’t tested how it works yet. 😆
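For anyone who wants to check this on their own install, here is a hypothetical diagnostic sketch (not part of the original comment). It assumes binutils’ nm is on PATH and that the installed wheel layout matches the paths in the traceback above.

```python
# Hypothetical diagnostic sketch: list the unresolved CUDA stub symbols in
# the addons kernel library. Assumes binutils' `nm` is on PATH and the wheel
# layout matches the paths shown in the traceback above.
import importlib.util
import os
import subprocess

# Locate _activation_ops.so without importing tensorflow_addons itself,
# since importing it would trip the same NotFoundError.
pkg_dir = os.path.dirname(importlib.util.find_spec("tensorflow_addons").origin)
so_path = os.path.join(pkg_dir, "custom_ops", "activations", "_activation_ops.so")

# Dump undefined dynamic symbols; on tensorflow-cpu the __cuda* stubs stay
# unresolved because libtensorflow_framework.so.2 does not export them there.
nm_output = subprocess.run(
    ["nm", "-D", "--undefined-only", so_path],
    capture_output=True, text=True, check=True,
).stdout
print([sym for sym in nm_output.split() if sym.startswith("__cuda")])
```

If the link-order fix works as described, the __cuda* entries should disappear from that list, matching the “now comes back empty” observation above.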
Building the addons from source fixed it for me on TF 2.2 (installed with pip). I followed the CPU custom ops instructions at: https://github.com/tensorflow/addons/tree/master#cpu-custom-ops