
Can't install intel extension for pytorch


Hello, I am trying to install “intel extension for pytorch” by following the README of this repository. I get the following error when I run the “python setup.py install” command:

[ 83%] Built target dnnl_cpu_x64
[ 83%] Linking CXX static library ../../../../../packages/intel_extension_for_pytorch/lib/libdnnl.a
[ 83%] Built target dnnl
[ 83%] Linking CXX shared library ../../../packages/intel_extension_for_pytorch/lib/libdnnl_graph.so
[ 83%] Built target dnnl_graph
Consolidate compiler generated dependencies of target intel-ext-pt-cpu
[ 83%] Building CXX object CMakeFiles/intel-ext-pt-cpu.dir/torch_ipex/csrc/version.cpp.o
[ 83%] Building CXX object CMakeFiles/intel-ext-pt-cpu.dir/torch_ipex/csrc/utils.cpp.o
[ 83%] Building CXX object CMakeFiles/intel-ext-pt-cpu.dir/torch_ipex/csrc/verbose.cpp.o
[ 83%] Building CXX object CMakeFiles/intel-ext-pt-cpu.dir/torch_ipex/csrc/autocast_mode.cpp.o
[ 84%] Building CXX object CMakeFiles/intel-ext-pt-cpu.dir/torch_ipex/csrc/LlgaTensorImpl.cpp.o
[ 84%] Building CXX object CMakeFiles/intel-ext-pt-cpu.dir/torch_ipex/csrc/autocast_kernel.cpp.o
[ 84%] Building CXX object CMakeFiles/intel-ext-pt-cpu.dir/torch_ipex/csrc/autocast_verbose.cpp.o
[ 84%] Building CXX object CMakeFiles/intel-ext-pt-cpu.dir/torch_ipex/csrc/quantization/AutoCast.cpp.o
[ 84%] Building CXX object CMakeFiles/intel-ext-pt-cpu.dir/torch_ipex/csrc/quantization/AutoCast_utils.cpp.o
[ 85%] Building CXX object CMakeFiles/intel-ext-pt-cpu.dir/torch_ipex/csrc/quantization/Common.cpp.o
In file included from /home/eden/Downloads/intel-extension-for-pytorch/ideep/ideep/computations.hpp:20,
                 from /home/eden/Downloads/intel-extension-for-pytorch/ideep/ideep.hpp:41,
                 from /home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/cpu/mkldnn/MKLDNNCommon.h:6,
                 from /home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/verbose.cpp:3:
/home/eden/Downloads/intel-extension-for-pytorch/ideep/ideep/operators/matmul.hpp: In static member function ‘static ideep::tensor::desc ideep::matmul_forward::expected_weights_desc(const dims&, ideep::data_type, ideep::data_type, const ideep::engine&)’:
/home/eden/Downloads/intel-extension-for-pytorch/ideep/ideep/operators/matmul.hpp:85:54: warning: assignment from temporary initializer_list does not extend the lifetime of the underlying array [-Winit-list-lifetime]
   85 |       y_dims = {x_dims[0], x_dims[1], weights_dims[2]};
      |                                                      ^
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp: In function ‘void torch_ipex::autocast::TORCH_LIBRARY_IMPL_init_aten_AutocastCPU_82(torch::Library&)’:
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:172:72: error: could not convert template argument ‘& at::linalg_matrix_rank’ from ‘<unresolved overloaded function type>’ to ‘at::Tensor (*)(const at::Tensor&, double, bool)’
  172 |         &CPU_WrapFunction<DtypeCastPolicy::CAST_POLICY, SIG, SIG, &FUNC>:: \
      |                                                                        ^
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:443:1: note: in expansion of macro ‘MAKE_REGISTER_FUNC’
  443 | MAKE_REGISTER_FUNC(
      | ^~~~~~~~~~~~~~~~~~
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:173:13: error: ‘<expression error>::type’ has not been declared
  173 |             type::call);                                                   \
      |             ^~~~
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:173:13: note: in definition of macro ‘MAKE_REGISTER_FUNC’
  173 |             type::call);                                                   \
      |             ^~~~
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp: At global scope:
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:176:15: error: template-id ‘get_op_name<at::Tensor(const at::Tensor&, double, bool), at::linalg_matrix_rank>’ for ‘std::string torch_ipex::autocast::get_op_name()’ does not match any template declaration
  176 |   std::string get_op_name<SIG, FUNC>() {                                   \
      |               ^~~~~~~~~~~~~~~~~~~~~~
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:176:15: note: in definition of macro ‘MAKE_REGISTER_FUNC’
  176 |   std::string get_op_name<SIG, FUNC>() {                                   \
      |               ^~~~~~~~~~~
In file included from /home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:1:
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.h:150:13: note: candidate is: ‘template<class Redispatch, Redispatch* F> std::string torch_ipex::autocast::get_op_name()’
  150 | std::string get_op_name() {
      |             ^~~~~~~~~~~
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp: In function ‘void torch_ipex::autocast::TORCH_LIBRARY_IMPL_init_aten_AutocastCPU_83(torch::Library&)’:
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:172:72: error: could not convert template argument ‘& at::linalg_matrix_rank’ from ‘<unresolved overloaded function type>’ to ‘at::Tensor (*)(const at::Tensor&, const c10::optional<at::Tensor>&, const c10::optional<at::Tensor>&, bool)’
  172 |         &CPU_WrapFunction<DtypeCastPolicy::CAST_POLICY, SIG, SIG, &FUNC>:: \
      |                                                                        ^
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:448:1: note: in expansion of macro ‘MAKE_REGISTER_FUNC’
  448 | MAKE_REGISTER_FUNC(
      | ^~~~~~~~~~~~~~~~~~
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:173:13: error: ‘<expression error>::type’ has not been declared
  173 |             type::call);                                                   \
      |             ^~~~
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:173:13: note: in definition of macro ‘MAKE_REGISTER_FUNC’
  173 |             type::call);                                                   \
      |             ^~~~
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp: At global scope:
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:176:15: error: template-id ‘get_op_name<at::Tensor(const at::Tensor&, const c10::optional<at::Tensor>&, const c10::optional<at::Tensor>&, bool), at::linalg_matrix_rank>’ for ‘std::string torch_ipex::autocast::get_op_name()’ does not match any template declaration
  176 |   std::string get_op_name<SIG, FUNC>() {                                   \
      |               ^~~~~~~~~~~~~~~~~~~~~~
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:176:15: note: in definition of macro ‘MAKE_REGISTER_FUNC’
  176 |   std::string get_op_name<SIG, FUNC>() {                                   \
      |               ^~~~~~~~~~~
In file included from /home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:1:
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.h:150:13: note: candidate is: ‘template<class Redispatch, Redispatch* F> std::string torch_ipex::autocast::get_op_name()’
  150 | std::string get_op_name() {
      |             ^~~~~~~~~~~
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp: In function ‘void torch_ipex::autocast::TORCH_LIBRARY_IMPL_init_aten_AutocastCPU_84(torch::Library&)’:
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:172:72: error: could not convert template argument ‘& at::linalg_matrix_rank’ from ‘<unresolved overloaded function type>’ to ‘at::Tensor (*)(const at::Tensor&, c10::optional<double>, c10::optional<double>, bool)’
  172 |         &CPU_WrapFunction<DtypeCastPolicy::CAST_POLICY, SIG, SIG, &FUNC>:: \
      |                                                                        ^
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:457:1: note: in expansion of macro ‘MAKE_REGISTER_FUNC’
  457 | MAKE_REGISTER_FUNC(
      | ^~~~~~~~~~~~~~~~~~
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:173:13: error: ‘<expression error>::type’ has not been declared
  173 |             type::call);                                                   \
      |             ^~~~
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:173:13: note: in definition of macro ‘MAKE_REGISTER_FUNC’
  173 |             type::call);                                                   \
      |             ^~~~
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp: At global scope:
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:176:15: error: template-id ‘get_op_name<at::Tensor(const at::Tensor&, c10::optional<double>, c10::optional<double>, bool), at::linalg_matrix_rank>’ for ‘std::string torch_ipex::autocast::get_op_name()’ does not match any template declaration
  176 |   std::string get_op_name<SIG, FUNC>() {                                   \
      |               ^~~~~~~~~~~~~~~~~~~~~~
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:176:15: note: in definition of macro ‘MAKE_REGISTER_FUNC’
  176 |   std::string get_op_name<SIG, FUNC>() {                                   \
      |               ^~~~~~~~~~~
In file included from /home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:1:
/home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.h:150:13: note: candidate is: ‘template<class Redispatch, Redispatch* F> std::string torch_ipex::autocast::get_op_name()’
  150 | std::string get_op_name() {
      |             ^~~~~~~~~~~
In file included from /home/eden/Downloads/intel-extension-for-pytorch/ideep/ideep/computations.hpp:20,
                 from /home/eden/Downloads/intel-extension-for-pytorch/ideep/ideep.hpp:41,
                 from /home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/cpu/BatchNorm.h:7,
                 from /home/eden/Downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_kernel.cpp:4:
/home/eden/Downloads/intel-extension-for-pytorch/ideep/ideep/operators/matmul.hpp: In static member function ‘static ideep::tensor::desc ideep::matmul_forward::expected_weights_desc(const dims&, ideep::data_type, ideep::data_type, const ideep::engine&)’:
/home/eden/Downloads/intel-extension-for-pytorch/ideep/ideep/operators/matmul.hpp:85:54: warning: assignment from temporary initializer_list does not extend the lifetime of the underlying array [-Winit-list-lifetime]
   85 |       y_dims = {x_dims[0], x_dims[1], weights_dims[2]};
      |                                                      ^
[ 85%] Building CXX object CMakeFiles/intel-ext-pt-cpu.dir/torch_ipex/csrc/quantization/Config.cpp.o
[ 85%] Building CXX object CMakeFiles/intel-ext-pt-cpu.dir/torch_ipex/csrc/cpu/AdaptiveAveragePooling.cpp.o
[ 85%] Building CXX object CMakeFiles/intel-ext-pt-cpu.dir/torch_ipex/csrc/cpu/AdaptiveMaxPooling.cpp.o
make[2]: *** [CMakeFiles/intel-ext-pt-cpu.dir/build.make:118: CMakeFiles/intel-ext-pt-cpu.dir/torch_ipex/csrc/autocast_mode.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [CMakeFiles/Makefile2:468: CMakeFiles/intel-ext-pt-cpu.dir/all] Error 2
make: *** [Makefile:136: all] Error 2
Traceback (most recent call last):
  File "/home/eden/Downloads/intel-extension-for-pytorch/setup.py", line 585, in <module>
    setup(
  File "/home/eden/installs/anaconda3/lib/python3.9/site-packages/setuptools/__init__.py", line 153, in setup
    return distutils.core.setup(**attrs)
  File "/home/eden/installs/anaconda3/lib/python3.9/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/home/eden/installs/anaconda3/lib/python3.9/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/home/eden/installs/anaconda3/lib/python3.9/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/eden/installs/anaconda3/lib/python3.9/distutils/command/install.py", line 546, in run
    self.run_command('build')
  File "/home/eden/installs/anaconda3/lib/python3.9/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/eden/installs/anaconda3/lib/python3.9/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/eden/installs/anaconda3/lib/python3.9/distutils/command/build.py", line 135, in run
    self.run_command(cmd_name)
  File "/home/eden/installs/anaconda3/lib/python3.9/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/eden/installs/anaconda3/lib/python3.9/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/eden/Downloads/intel-extension-for-pytorch/setup.py", line 496, in run
    check_call(['make'] + build_args, cwd=build_type_dir, env=env)
  File "/home/eden/installs/anaconda3/lib/python3.9/subprocess.py", line 373, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['make', '-j', '8']' returned non-zero exit status 2.
(base) eden@eden-Inspiron-15-5510:~/Downloads/intel-extension-for-pytorch$ 
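For context on the tail of that traceback: setup.py drives the native build through subprocess.check_call, which raises CalledProcessError whenever the wrapped command (here, make -j 8) exits nonzero. So the Python traceback is only the C++ compile error surfacing; the real failure is the autocast_mode.cpp errors further up. A minimal sketch of that mechanism, with a stand-in for the failing make:

```python
import subprocess

# check_call runs a command and raises CalledProcessError on a nonzero
# exit status -- the same path setup.py takes when `make` fails.
try:
    subprocess.check_call(["false"])  # stand-in for a failing `make`
except subprocess.CalledProcessError as exc:
    print(exc.returncode)  # the command's nonzero exit status
```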

Some details that may help:

(base) eden@eden-Inspiron-15-5510:~/Downloads/intel-extension-for-pytorch$ gcc --version
gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

(base) eden@eden-Inspiron-15-5510:~/Downloads/intel-extension-for-pytorch$ python
Python 3.9.7 (default, Sep 16 2021, 13:09:58) 
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'1.10.0'
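The compile errors above complain that ‘&at::linalg_matrix_rank’ is an unresolved overloaded function: the installed PyTorch exposes a different set of linalg_matrix_rank overloads than the IPEX source being built expects, which usually points to a mismatch between the checked-out IPEX revision and the installed torch release. A quick sanity check before building is to compare major.minor versions; the helper below is illustrative, not part of IPEX:

```python
# Illustrative helper (not part of IPEX): normalize a torch-style
# version string such as "1.10.0" or "1.13.0+cpu" to (major, minor),
# so the installed torch can be compared against the torch release
# the IPEX branch targets before attempting a source build.
def version_tuple(version: str) -> tuple:
    base = version.split("+")[0]   # drop local suffixes like "+cpu"
    parts = base.split(".")[:2]    # keep major.minor only
    return tuple(int(p) for p in parts)

print(version_tuple("1.10.0"))      # (1, 10)
print(version_tuple("1.13.0+cpu"))  # (1, 13)
```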

Thank you for your help.

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 8 (4 by maintainers)

Top GitHub Comments

EikanWang commented on Dec 2, 2021 (3 reactions)

EikanWang commented on Dec 2, 2021 (1 reaction):

@EdenBelouadah, what do you mean by “this graph card”? Currently, IPEX does not yet support Intel GPUs.
