Python custom backend failed to start with error "no version information available (required by /bin/bash)"
Description
The Python custom backend fails to launch if TensorFlow is imported, with the following exception:
I0729 17:54:20.447494 5870 python.cc:918] Starting Python backend stub: export LD_LIBRARY_PATH=/tmp/python_env_aZozeK/0/lib:$LD_LIBRARY_PATH; source /tmp/python_env_aZozeK/0/bin/activate && exec models/add_sub/triton_python_backend_stub models/add_sub/1/model.py /add_sub_0_CPU_0 67108864 67108864 5866 /opt/tritonserver/backends/python
I0729 17:54:21.499203 5866 python.cc:1549] TRITONBACKEND_ModelInstanceFinalize: delete instance state
/bin/bash: /tmp/python_env_aZozeK/0/lib/libtinfo.so.6: no version information available (required by /bin/bash)
If we look at /bin/bash under ldd:
(py36) root@c9dc0ce47c69:/opt/tritonserver/python_backend# ldd /bin/bash
linux-vdso.so.1 (0x00007ffd8b7e4000)
libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007f765e9bb000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f765e9b5000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f765e7c3000)
/lib64/ld-linux-x86-64.so.2 (0x00007f765eb21000)
it actually resolves libtinfo.so.6 from the native GNU lib directory. However, tritonserver prepends the conda-pack's lib directory to the LD_LIBRARY_PATH environment variable, pointing it at the temporary directory holding the decompressed conda env's libs, so that copy of libtinfo.so.6 shadows the system one.
See similar posts on SO: https://stackoverflow.com/questions/64879654/how-can-i-load-a-dso-with-no-versioning-information
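The shadowing follows from the fact that the dynamic loader searches LD_LIBRARY_PATH directories left to right and loads the first match. A minimal sketch of that lookup (the helper and the paths are illustrative, not part of Triton):

```python
import os

def find_shadowing_libs(lib_name, search_path):
    """Return each directory on a colon-separated search path that
    contains lib_name, in the order the dynamic loader would try them.
    The first hit is the copy /bin/bash would actually load."""
    hits = []
    for d in search_path.split(":"):
        if d and os.path.isfile(os.path.join(d, lib_name)):
            hits.append(d)
    return hits

# On an affected system the conda-pack dir comes first, so its
# libtinfo.so.6 (lacking the version symbols bash expects) shadows
# the system copy that /bin/bash was linked against.
conda_first = "/tmp/python_env_aZozeK/0/lib:/lib/x86_64-linux-gnu"
print(find_shadowing_libs("libtinfo.so.6", conda_first))
```

Removing (or not packing) the conda env's libtinfo.so.6, or keeping the system lib directory ahead of the conda dir, avoids the warning.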
Triton Information
What version of Triton are you using?
r21.06
Are you using the Triton container or did you build it yourself?
Custom image.
To Reproduce
Steps to reproduce the behavior.
Follow the instructions here, and in the model code's `initialize` add `import tensorflow`:
def initialize(self, args):
    """`initialize` is called only once when the model is being loaded.
    Implementing `initialize` function is optional. This function allows
    the model to initialize any state associated with this model.

    Parameters
    ----------
    args : dict
      Both keys and values are strings. The dictionary keys and values are:
      * model_config: A JSON string containing the model configuration
      * model_instance_kind: A string containing model instance kind
      * model_instance_device_id: A string containing model instance device ID
      * model_repository: Model repository path
      * model_version: Model version
      * model_name: Model name
    """
    import tensorflow
    print('tensorflow == {}'.format(tensorflow.__file__))
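The snippet above defers the heavy import until initialize() runs instead of importing at module load time. The pattern can be exercised outside Triton with a hypothetical stand-in class (importlib and a stdlib module replace the real `import tensorflow` so the sketch runs anywhere; this is not Triton's API):

```python
import importlib

class DeferredImportModel:
    """Hypothetical stand-in for a Triton Python model that defers a
    heavy import (e.g. tensorflow) until initialize() is called."""

    def __init__(self, module_name):
        self.module_name = module_name
        self.module = None  # nothing imported at construction time

    def initialize(self, args):
        # In the real model.py this line is simply `import tensorflow`.
        self.module = importlib.import_module(self.module_name)
        print('{} == {}'.format(self.module.__name__, self.module.__file__))

m = DeferredImportModel("json")
m.initialize({})
```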
Issue Analytics
- Created 2 years ago
- Comments: 9 (5 by maintainers)
Top GitHub Comments
Hmm, I was using release tag r21.06 with the latest python backend master branch and the build failed with an error. Switching to release tag r21.07 fixed the issue. Thanks again for the quick fix @Tabrizian!

@shaowei-su Thanks for providing the Dockerfile. I tried creating two models using the Dockerfile that you have provided, but I still couldn't reproduce the error. Can you provide a zip file of the model repository that you are using as well?
Update: No need to provide the model repository. I was able to reproduce the bug. Will get back to you soon.