
Python custom backend failed to start with error "no version information available (required by /bin/bash)"


Description

The Python custom backend fails to launch when TensorFlow is imported, with this exception:

I0729 17:54:20.447494 5870 python.cc:918] Starting Python backend stub: export LD_LIBRARY_PATH=/tmp/python_env_aZozeK/0/lib:$LD_LIBRARY_PATH; source /tmp/python_env_aZozeK/0/bin/activate && exec models/add_sub/triton_python_backend_stub models/add_sub/1/model.py /add_sub_0_CPU_0 67108864 67108864 5866 /opt/tritonserver/backends/python
I0729 17:54:21.499203 5866 python.cc:1549] TRITONBACKEND_ModelInstanceFinalize: delete instance state
/bin/bash: /tmp/python_env_aZozeK/0/lib/libtinfo.so.6: no version information available (required by /bin/bash)

If we inspect /bin/bash with ldd:

(py36) root@c9dc0ce47c69:/opt/tritonserver/python_backend# ldd /bin/bash
	linux-vdso.so.1 (0x00007ffd8b7e4000)
	libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007f765e9bb000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f765e9b5000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f765e7c3000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f765eb21000)

libtinfo.so.6 actually resolves to the native GNU library directory. However, tritonserver prepends the conda-pack env's lib directory, the temporary directory holding the decompressed conda environment, to the LD_LIBRARY_PATH environment variable, so the conda copy of libtinfo.so.6 shadows the system one when the stub's shell starts.
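To see the shadowing directly, prepend the unpacked env's lib directory to LD_LIBRARY_PATH and re-run ldd. A minimal sketch; the /tmp/python_env_aZozeK path is just the temporary directory from the log above and is regenerated on every launch:

# Hypothetical: reuse the env dir Triton unpacked (changes per launch).
CONDA_ENV_LIB=/tmp/python_env_aZozeK/0/lib

# With the conda lib dir prepended, bash's libtinfo.so.6 now resolves
# into the conda env instead of /lib/x86_64-linux-gnu:
LD_LIBRARY_PATH="$CONDA_ENV_LIB:$LD_LIBRARY_PATH" ldd /bin/bash | grep libtinfo

# The conda copy lacks the versioned symbols /bin/bash was linked against,
# which is exactly what the "no version information available" warning means.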

See a similar post on Stack Overflow: https://stackoverflow.com/questions/64879654/how-can-i-load-a-dso-with-no-versioning-information
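One workaround discussed in threads like the one above is to strip the conflicting libtinfo copies from the conda-pack archive so the dynamic loader falls back to the system library. A rough sketch, assuming a conda-pack tarball named py36.tar.gz (the name is illustrative):

# Unpack the conda-pack archive, drop its libtinfo copies, and repack.
mkdir env
tar -xzf py36.tar.gz -C env
rm env/lib/libtinfo.so*        # loader now falls back to /lib/x86_64-linux-gnu
tar -czf py36.tar.gz -C env .

Note this assumes nothing inside the env strictly needs the bundled libtinfo; curses-linked tools in the env will then pick up the system copy instead.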

Triton Information

What version of Triton are you using? r21.06

Are you using the Triton container or did you build it yourself? Custom image.

To Reproduce

Follow the instructions here and, in the model code's initialize, import tensorflow (a sketch of the matching model config follows the code below):

    def initialize(self, args):
        """`initialize` is called only once when the model is being loaded.
        Implementing `initialize` function is optional. This function allows
        the model to initialize any state associated with this model.

        Parameters
        ----------
        args : dict
          Both keys and values are strings. The dictionary keys and values are:
          * model_config: A JSON string containing the model configuration
          * model_instance_kind: A string containing model instance kind
          * model_instance_device_id: A string containing model instance device ID
          * model_repository: Model repository path
          * model_version: Model version
          * model_name: Model name
        """
        import tensorflow
        print('tensorflow == {}'.format(tensorflow.__file__))
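For context, the conda-pack environment that Triton unpacks at startup is wired into the model through its config.pbtxt. A minimal sketch, assuming the add_sub model from the log above and a hypothetical tarball path; a real config also declares the model's inputs and outputs:

# Hypothetical: point EXECUTION_ENV_PATH at your conda-pack archive.
cat > models/add_sub/config.pbtxt <<'EOF'
name: "add_sub"
backend: "python"

# Triton decompresses this archive into /tmp/python_env_* and activates it
# before launching the backend stub.
parameters: {
  key: "EXECUTION_ENV_PATH",
  value: {string_value: "/opt/envs/py36.tar.gz"}
}
EOF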


Expected behavior

The Python backend stub starts successfully when TensorFlow is imported, without the "no version information available" error from /bin/bash.

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 9 (5 by maintainers)

Top GitHub Comments

shaowei-su commented on Aug 3, 2021 (1 reaction)

Hmm, I was using release tag r21.06 with the latest python_backend master branch, and the build failed with this error:

...
[ 58%] Linking CXX executable triton_python_backend_stub
[ 58%] Built target triton-python-backend-stub
[ 60%] Building CXX object _deps/repo-core-build/CMakeFiles/triton-core-serverstub.dir/src/tritonserver_stub.cc.o
[ 63%] Linking CXX shared library libtritonserver_stub.so
[ 63%] Built target triton-core-serverstub
[ 65%] Building CXX object CMakeFiles/triton-python-backend.dir/src/python.cc.o
In file included from /opt/tritonserver/python_backend/src/pb_utils.h:37,
                 from /opt/tritonserver/python_backend/src/pb_tensor.h:44,
                 from /opt/tritonserver/python_backend/src/infer_request.h:30,
                 from /opt/tritonserver/python_backend/src/python.cc:52:
/opt/tritonserver/python_backend/src/python.cc: In member function 'TRITONSERVER_Error* triton::backend::python::ModelInstanceState::GetInputTensor(uint32_t, triton::backend::python::Tensor*, std::shared_ptr<triton::backend::python::PbTensor>&, TRITONBACKEND_Request*, std::vector<TRITONBACKEND_Response*>&)':
/opt/tritonserver/python_backend/src/python.cc:1251:11: error: 'HostPolicyName' was not declared in this scope
 1251 |       in, HostPolicyName().c_str(), &input_name, &input_dtype, &input_shape,
      |           ^~~~~~~~~~~~~~
/opt/tritonserver/python_backend/build/_deps/repo-backend-src/include/triton/backend/backend_common.h:97:38: note: in definition of macro 'RETURN_IF_ERROR'
   97 |     TRITONSERVER_Error* rie_err__ = (X); \
      |                                      ^
/opt/tritonserver/python_backend/src/python.cc:1267:7: error: 'HostPolicyName' was not declared in this scope
 1267 |       HostPolicyName().c_str());
      |       ^~~~~~~~~~~~~~
make[2]: *** [CMakeFiles/triton-python-backend.dir/build.make:76: CMakeFiles/triton-python-backend.dir/src/python.cc.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:211: CMakeFiles/triton-python-backend.dir/all] Error 2
make: *** [Makefile:136: all] Error 2
The command '/bin/sh -c git clone https://github.com/triton-inference-server/python_backend &&     cd python_backend &&     mkdir build && cd build &&     cmake -DTRITON_ENABLE_GPU=OFF -DTRITON_CORE_REPO_TAG=r21.06 -DTRITON_BACKEND_REPO_TAG=r21.06 -DTRITON_COMMON_REPO_TAG=r21.06 -DPYTHON_EXECUTABLE:FILEPATH=${MINICONDA_DIR}/envs/py36/bin/python -DCMAKE_INSTALL_PREFIX:PATH=/opt/tritonserver .. &&     make install' returned a non-zero code: 2

Switching to release tag r21.07 fixed the issue. Thanks again for the quick fix @Tabrizian!
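For reference, this is presumably the same CMake invocation from the Dockerfile above with all three repo tags moved to r21.07; a sketch under that assumption, not the exact command used:

git clone https://github.com/triton-inference-server/python_backend
cd python_backend && mkdir build && cd build
# Align the core/backend/common tags with the release containing the fix,
# so python_backend master and the backend API headers agree.
cmake -DTRITON_ENABLE_GPU=OFF \
      -DTRITON_CORE_REPO_TAG=r21.07 \
      -DTRITON_BACKEND_REPO_TAG=r21.07 \
      -DTRITON_COMMON_REPO_TAG=r21.07 \
      -DPYTHON_EXECUTABLE:FILEPATH=${MINICONDA_DIR}/envs/py36/bin/python \
      -DCMAKE_INSTALL_PREFIX:PATH=/opt/tritonserver ..
make install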

Tabrizian commented on Jul 30, 2021 (1 reaction)

@shaowei-su Thanks for providing the Dockerfile. I tried creating two models using the Dockerfile you provided, but I still couldn't reproduce the error. Can you provide a zip file of the model repository you are using as well?

Update: No need to provide the model repository. I was able to reproduce the bug. Will get back to you soon.
