
What version of this project is compilable, and what are the third parties?


Hi,

Some of the problems discussed in this issue were already raised in https://github.com/google/hdrnet/issues/4 and https://github.com/google/hdrnet/issues/9. I decided to open a new issue because I want to be able to build and run this project in some version (not necessarily the latest). I have included some of the trials I made.

I’ve tried two versions of this project and failed with both. Following hints from other issues I made some progress but didn’t succeed. I would like to share my experiments and ask for suggestions.

The information missing from this project is which third parties and versions should be used for compilation, and how to arrange them.

The latest commit

The first step was to try the latest commit, #7f71f44 (2022-05-08).

The latest commit compilation

As reported in other issues, simply executing make fails with:

$ make
nvcc -std c++11 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I`python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'` -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true
2022-05-27 13:11:53.745577: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
ops/bilateral_slice.cu.cc:23:10: fatal error: third_party/array/array.h: No such file or directory
 #include "third_party/array/array.h"
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:31: recipe for target 'build/bilateral_slice.cu.o' failed
make: *** [build/bilateral_slice.cu.o] Error 1

Adding the array third party

Following the answer in issue https://github.com/google/hdrnet/issues/4, I cloned the array third party from https://github.com/dsharlet/array/ (commit ID #344d75d of 2022-04-11) and placed it under hdrnet/ops/third_party/array.
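The clone step, roughly, run from the hdrnet repository root (a sketch, not the verbatim commands):

$ git clone https://github.com/dsharlet/array.git ops/third_party/array
$ (cd ops/third_party/array && git checkout 344d75d)

With the array sources in place, make made some progress: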

$ make
nvcc -std c++11 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I`python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'` -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true
2022-05-27 13:13:12.143046: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
ops/bilateral_slice.cu.cc:24:10: fatal error: third_party/tensorflow/core/util/gpu_kernel_helper.h: No such file or directory
 #include "third_party/tensorflow/core/util/gpu_kernel_helper.h"
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:31: recipe for target 'build/bilateral_slice.cu.o' failed
make: *** [build/bilateral_slice.cu.o] Error 1

The tensorflow third party

Changing the include switches

As the error seems related to TensorFlow, I tested the command that is supposed to provide the location of the TensorFlow include files, python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())', which prints:

2022-05-27 13:14:54.228153: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
/miniconda/envs/HDRNET/lib/python3.6/site-packages/tensorflow/include

Replacing the python -c ... invocation with the resolved TensorFlow include path results in the same error.
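For reference, the substituted command was (include path taken from the output above):

nvcc -std c++11 -c ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I/miniconda/envs/HDRNET/lib/python3.6/site-packages/tensorflow/include -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true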

Copying the tensorflow core headers to the third_party folder

The next step was to copy the folder containing tensorflow/core/util/gpu_kernel_helper.h (from the tensorflow project, commit ID #0976345ba57) into the third-party folder, preserving the full folder structure.
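The copy, roughly (a sketch; the scratch checkout path /tmp/tensorflow is hypothetical, and the last two commands run from the hdrnet repository root):

$ git clone https://github.com/tensorflow/tensorflow.git /tmp/tensorflow
$ (cd /tmp/tensorflow && git checkout 0976345ba57)
$ mkdir -p ops/third_party/tensorflow/core/util
$ cp -r /tmp/tensorflow/tensorflow/core/util/. ops/third_party/tensorflow/core/util/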

Running make now fails with the following error:

nvcc -std c++11 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I`python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'` -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true
2022-05-28 06:23:47.224210: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
In file included from ops/bilateral_slice.cu.cc:24:0:
ops/third_party/tensorflow/core/util/gpu_kernel_helper.h:24:10: fatal error: third_party/gpus/cuda/include/cuda_fp16.h: No such file or directory
 #include "third_party/gpus/cuda/include/cuda_fp16.h"
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:31: recipe for target 'build/bilateral_slice.cu.o' failed
make: *** [build/bilateral_slice.cu.o] Error 1

The cuda third party

I copied cuda_fp16.h from its default location (/usr/local/cuda/include/cuda_fp16.h) into the third_party/gpus/cuda location.
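The copy step, roughly (destination inferred from the failing include path; a sketch, not the verbatim commands):

$ mkdir -p ops/third_party/gpus/cuda/include
$ cp /usr/local/cuda/include/cuda_fp16.h ops/third_party/gpus/cuda/include/

This by itself didn’t work, so I also added the ops folder to the include path by manually executing: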

nvcc -std c++11 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I`python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'` -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true -Iops
This results in another error message:
2022-05-28 16:34:08.991965: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
ops/third_party/array/array.h(123): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(123): error: expected a ";"

ops/third_party/array/array.h(588): error: namespace "std" has no member "index_sequence"

ops/third_party/array/array.h(589): error: namespace "std" has no member "make_index_sequence"

ops/third_party/array/array.h(594): error: index_sequence is not a template

ops/third_party/array/array.h(600): error: identifier "make_index_sequence" is undefined

ops/third_party/array/array.h(600): error: expected an expression

ops/third_party/array/array.h(640): error: index_sequence is not a template

ops/third_party/array/array.h(657): error: index_sequence is not a template

ops/third_party/array/array.h(663): error: index_sequence is not a template

ops/third_party/array/array.h(669): error: index_sequence is not a template

ops/third_party/array/array.h(677): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(681): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(687): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(692): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(698): error: index_sequence is not a template

ops/third_party/array/array.h(697): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(719): error: index_sequence is not a template

ops/third_party/array/array.h(718): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(726): error: index_sequence is not a template

ops/third_party/array/array.h(750): error: index_sequence is not a template

ops/third_party/array/array.h(749): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(755): error: index_sequence is not a template

ops/third_party/array/array.h(755): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(754): warning: constant "Is" cannot be used because it follows a parameter pack and cannot be deduced from the parameters of function template "nda::internal::mins"

ops/third_party/array/array.h(760): error: index_sequence is not a template

ops/third_party/array/array.h(760): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(759): warning: constant "Is" cannot be used because it follows a parameter pack and cannot be deduced from the parameters of function template "nda::internal::extents"

ops/third_party/array/array.h(765): error: index_sequence is not a template

ops/third_party/array/array.h(765): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(764): warning: constant "Is" cannot be used because it follows a parameter pack and cannot be deduced from the parameters of function template "nda::internal::strides"

ops/third_party/array/array.h(770): error: index_sequence is not a template

ops/third_party/array/array.h(770): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(769): warning: constant "Is" cannot be used because it follows a parameter pack and cannot be deduced from the parameters of function template "nda::internal::maxs"

ops/third_party/array/array.h(822): error: index_sequence is not a template

ops/third_party/array/array.h(840): error: index_sequence is not a template

ops/third_party/array/array.h(845): error: index_sequence is not a template

ops/third_party/array/array.h(852): error: index_sequence is not a template

ops/third_party/array/array.h(862): error: index_sequence is not a template

ops/third_party/array/array.h(862): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(866): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(890): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(890): error: expected a "," or ">"

ops/third_party/array/array.h(890): error: expected a declaration

ops/third_party/array/array.h(890): error: expected a ";"

ops/third_party/array/array.h(918): warning: parsing restarts here after previous syntax error

ops/third_party/array/array.h(946): error: index_sequence is not a template

ops/third_party/array/array.h(968): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1009): error: namespace "nda::internal" has no member "make_index_sequence"

ops/third_party/array/array.h(1009): error: expected an expression

ops/third_party/array/array.h(1017): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(1017): error: expected a ";"

ops/third_party/array/array.h(1020): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(1020): error: expected a ";"

ops/third_party/array/array.h(1023): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(1023): error: expected a ";"

ops/third_party/array/array.h(1027): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(1027): error: expected a ";"

ops/third_party/array/array.h(1031): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(1031): error: expected a ";"

ops/third_party/array/array.h(1037): error: mismatched delimiters in default argument expression

ops/third_party/array/array.h(1040): error: expected a "," or ">"

ops/third_party/array/array.h(1037): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(1037): error: expected a "," or ">"

ops/third_party/array/array.h(1040): error: expected a declaration

ops/third_party/array/array.h(1105): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1111): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1117): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1121): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1186): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1187): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1188): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1189): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1190): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1191): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1195): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1196): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1197): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1198): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1199): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1200): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1201): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1202): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1203): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1204): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1241): error: constant "DimIndices" is not a type name

ops/third_party/array/array.h(1241): error: expected a "," or ">"

ops/third_party/array/array.h(1241): error: namespace "nda::internal" has no member "enable_if_permutation"

ops/third_party/array/array.h(1241): error: expected a "," or ">"

ops/third_party/array/array.h(1242): error: expected a declaration

ops/third_party/array/array.h(1242): error: expected a ";"

ops/third_party/array/array.h(1274): warning: parsing restarts here after previous syntax error

ops/third_party/array/array.h(1275): error: expected a declaration

ops/third_party/array/array.h(1489): warning: parsing restarts here after previous syntax error

ops/third_party/array/array.h(1493): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(1493): error: expected a ";"

ops/third_party/array/array.h(1496): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(1496): error: expected a ";"

ops/third_party/array/array.h(1499): error: name followed by "::" must be a class or namespace name

ops/third_party/array/array.h(1499): error: expected an expression

ops/third_party/array/array.h(1501): error: expected a declaration

ops/third_party/array/array.h(1506): warning: parsing restarts here after previous syntax error

ops/third_party/array/array.h(1511): error: name followed by "::" must be a class or namespace name

ops/third_party/array/array.h(1511): error: expected an expression

ops/third_party/array/array.h(1525): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1535): error: expected a "," or ">"

ops/third_party/array/array.h(1535): error: identifier "internal" is undefined

ops/third_party/array/array.h(1535): error: enable_if_shapes_compatible is not a template

Error limit reached.
100 errors detected in the compilation of "ops/bilateral_slice.cu.cc".
Compilation terminated.

Trying to fix this by raising the language standard to C++14 (std::enable_if_t and std::index_sequence are C++14 features):

nvcc -std c++14 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I`python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'` -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true -Iops

This results in the following error message:

2022-05-28 16:35:50.746921: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
/miniconda/envs/HDRNET/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/platform/file_system.h(556): warning: overloaded virtual function "tensorflow::FileSystem::FilesExist" is only partially overridden in class "tensorflow::WrappedFileSystem"

/miniconda/envs/HDRNET/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/platform/file_system.h(556): warning: overloaded virtual function "tensorflow::FileSystem::CreateDir" is only partially overridden in class "tensorflow::WrappedFileSystem"

/miniconda/envs/HDRNET/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/platform/env.h(482): warning: overloaded virtual function "tensorflow::Env::RegisterFileSystem" is only partially overridden in class "tensorflow::EnvWrapper"

ops/third_party/array/array.h(2065): warning: "nda::array_ref<T, Shape>::operator nda::const_array_ref<const float, nda::shape_of_rank<5UL>>() const [with T=const float, Shape=nda::shape_of_rank<5UL>]" will not be called for implicit or explicit conversions
          detected during instantiation of class "nda::array_ref<T, Shape> [with T=const float, Shape=nda::shape_of_rank<5UL>]" 
ops/bilateral_slice.cu.cc(37): here

ops/third_party/array/array.h(2065): warning: "nda::array_ref<T, Shape>::operator nda::const_array_ref<const float, nda::shape_of_rank<3UL>>() const [with T=const float, Shape=nda::shape_of_rank<3UL>]" will not be called for implicit or explicit conversions
          detected during instantiation of class "nda::array_ref<T, Shape> [with T=const float, Shape=nda::shape_of_rank<3UL>]" 
ops/bilateral_slice.cu.cc(37): here

ops/bilateral_slice.cu.cc(74): error: namespace "std" has no member "clamp"

ops/bilateral_slice.cu.cc(77): error: namespace "std" has no member "clamp"

ops/bilateral_slice.cu.cc(80): error: namespace "std" has no member "clamp"

ops/third_party/array/array.h(2065): warning: "nda::array_ref<T, Shape>::operator nda::const_array_ref<const float, nda::shape_of_rank<4UL>>() const [with T=const float, Shape=nda::shape_of_rank<4UL>]" will not be called for implicit or explicit conversions
          detected during instantiation of class "nda::array_ref<T, Shape> [with T=const float, Shape=nda::shape_of_rank<4UL>]" 
ops/bilateral_slice.cu.cc(96): here

ops/bilateral_slice.cu.cc(203): error: namespace "std" has no member "clamp"

ops/bilateral_slice.cu.cc(206): error: namespace "std" has no member "clamp"

ops/bilateral_slice.cu.cc(209): error: namespace "std" has no member "clamp"

6 errors detected in the compilation of "ops/bilateral_slice.cu.cc".

Searching further about this issue, it seems that std::clamp was only introduced in C++17.
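As an aside, a minimal pre-C++17 stand-in for std::clamp would look something like the sketch below (the name clamp_shim is hypothetical; here I simply raised the language standard instead):

// Hypothetical sketch: a C++11-compatible replacement for std::clamp.
// __host__ __device__ makes it callable from CUDA kernels as well.
template <typename T>
__host__ __device__ const T& clamp_shim(const T& v, const T& lo, const T& hi) {
  return v < lo ? lo : (hi < v ? hi : v);
}

Retrying with -std c++17: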

nvcc -std c++17 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I`python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'` -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true -Iops
This results in the following error message:
2022-05-28 16:37:36.430060: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
/miniconda/envs/HDRNET/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/platform/file_system.h(556): warning: overloaded virtual function "tensorflow::FileSystem::FilesExist" is only partially overridden in class "tensorflow::WrappedFileSystem"

/miniconda/envs/HDRNET/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/platform/file_system.h(556): warning: overloaded virtual function "tensorflow::FileSystem::CreateDir" is only partially overridden in class "tensorflow::WrappedFileSystem"

/miniconda/envs/HDRNET/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/platform/env.h(482): warning: overloaded virtual function "tensorflow::Env::RegisterFileSystem" is only partially overridden in class "tensorflow::EnvWrapper"

ops/third_party/array/array.h(2065): warning: "nda::array_ref<T, Shape>::operator nda::const_array_ref<const float, nda::shape_of_rank<5UL>>() const [with T=const float, Shape=nda::shape_of_rank<5UL>]" will not be called for implicit or explicit conversions
          detected during instantiation of class "nda::array_ref<T, Shape> [with T=const float, Shape=nda::shape_of_rank<5UL>]" 
ops/bilateral_slice.cu.cc(37): here

ops/third_party/array/array.h(2065): warning: "nda::array_ref<T, Shape>::operator nda::const_array_ref<const float, nda::shape_of_rank<3UL>>() const [with T=const float, Shape=nda::shape_of_rank<3UL>]" will not be called for implicit or explicit conversions
          detected during instantiation of class "nda::array_ref<T, Shape> [with T=const float, Shape=nda::shape_of_rank<3UL>]" 
ops/bilateral_slice.cu.cc(37): here

ops/third_party/array/array.h(2065): warning: "nda::array_ref<T, Shape>::operator nda::const_array_ref<const float, nda::shape_of_rank<4UL>>() const [with T=const float, Shape=nda::shape_of_rank<4UL>]" will not be called for implicit or explicit conversions
          detected during instantiation of class "nda::array_ref<T, Shape> [with T=const float, Shape=nda::shape_of_rank<4UL>]" 
ops/bilateral_slice.cu.cc(96): here

ops/bilateral_slice.cu.cc(40): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(40): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(40): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)0ul, void> ") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(40): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)0ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(41): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(41): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(41): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)1ul, void> ") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(41): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)1ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(42): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(42): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(42): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)2ul, void> ") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(42): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)2ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(43): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(43): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(43): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)3ul, void> ") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(43): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)3ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(44): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::width const") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(44): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::width const" is undefined in device code

ops/bilateral_slice.cu.cc(45): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::height const") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(45): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::height const" is undefined in device code

ops/bilateral_slice.cu.cc(65): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::operator ()<int, int, int , void, void>  const") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(65): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::operator ()<int, int, int , void, void>  const" is undefined in device code

ops/bilateral_slice.cu.cc(75): error: calling a __host__ function("LerpWeight") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(75): error: identifier "LerpWeight" is undefined in device code

ops/bilateral_slice.cu.cc(78): error: calling a __host__ function("LerpWeight") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(78): error: identifier "LerpWeight" is undefined in device code

ops/bilateral_slice.cu.cc(81): error: calling a __host__ function("SmoothedLerpWeight") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(81): error: identifier "SmoothedLerpWeight" is undefined in device code

ops/bilateral_slice.cu.cc(83): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::operator ()<int, int, int, int, int , void, void>  const") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(83): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::operator ()<int, int, int, int, int , void, void>  const" is undefined in device code

ops/bilateral_slice.cu.cc(89): error: calling a __host__ function("nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::operator ()<int, int, int, int , void, void>  const") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(89): error: identifier "nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::operator ()<int, int, int, int , void, void>  const" is undefined in device code

ops/bilateral_slice.cu.cc(97): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(97): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(97): error: calling a __host__ function("nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)0ul, void> ") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(97): error: identifier "nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)0ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(98): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(98): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(98): error: calling a __host__ function("nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)1ul, void> ") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(98): error: identifier "nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)1ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(99): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(99): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(99): error: calling a __host__ function("nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)2ul, void> ") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(99): error: identifier "nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)2ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(100): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(100): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(100): error: calling a __host__ function("nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)3ul, void> ") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(100): error: identifier "nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)3ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(101): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::width const") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(101): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::width const" is undefined in device code

ops/bilateral_slice.cu.cc(102): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::height const") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(102): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::height const" is undefined in device code

ops/bilateral_slice.cu.cc(129): error: calling a __host__ function("MirrorBoundary") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(129): error: identifier "MirrorBoundary" is undefined in device code

ops/bilateral_slice.cu.cc(131): error: calling a __host__ function("LerpWeight") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(131): error: identifier "LerpWeight" is undefined in device code

ops/bilateral_slice.cu.cc(135): error: calling a __host__ function("MirrorBoundary") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(135): error: identifier "MirrorBoundary" is undefined in device code

ops/bilateral_slice.cu.cc(137): error: calling a __host__ function("LerpWeight") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(137): error: identifier "LerpWeight" is undefined in device code

ops/bilateral_slice.cu.cc(143): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(143): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const" is undefined in device code

ops/bilateral_slice.cu.cc(144): error: calling a __host__ function("SmoothedLerpWeight") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(144): error: identifier "SmoothedLerpWeight" is undefined in device code

ops/bilateral_slice.cu.cc(154): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(154): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const" is undefined in device code

ops/bilateral_slice.cu.cc(159): error: calling a __host__ function("nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::operator ()<int, int, int, int, int , void, void>  const") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(159): error: identifier "nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::operator ()<int, int, int, int, int , void, void>  const" is undefined in device code

ops/bilateral_slice.cu.cc(168): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(168): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(168): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)0ul, void> ") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(168): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)0ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(169): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(169): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(169): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)1ul, void> ") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(169): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)1ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(170): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(170): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(170): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)2ul, void> ") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(170): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)2ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(171): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(171): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(171): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)3ul, void> ") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(171): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)3ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(172): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::width const") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(172): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::width const" is undefined in device code

ops/bilateral_slice.cu.cc(173): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::height const") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(173): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::height const" is undefined in device code

ops/bilateral_slice.cu.cc(193): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(193): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const" is undefined in device code

ops/bilateral_slice.cu.cc(204): error: calling a __host__ function("LerpWeight") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(204): error: identifier "LerpWeight" is undefined in device code

ops/bilateral_slice.cu.cc(207): error: calling a __host__ function("LerpWeight") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(207): error: identifier "LerpWeight" is undefined in device code

ops/bilateral_slice.cu.cc(211): error: calling a __host__ function("SmoothedLerpWeightGrad") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(211): error: identifier "SmoothedLerpWeightGrad" is undefined in device code

ops/bilateral_slice.cu.cc(216): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(216): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const" is undefined in device code

ops/bilateral_slice.cu.cc(223): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(223): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const" is undefined in device code

Error limit reached.
100 errors detected in the compilation of "ops/bilateral_slice.cu.cc".
Compilation terminated.
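These last errors are instructive: nvcc is saying that the kernels call plain host functions. In CUDA, any function called from a __global__ kernel must be marked __device__ (or __host__ __device__), so helpers like LerpWeight, SmoothedLerpWeight and MirrorBoundary, as well as the nda:: accessors, would all need annotations along these lines (the function body below is hypothetical, for illustration only):

// Hypothetical illustration: a helper needs __host__ __device__
// (or at least __device__) to be callable from a __global__ kernel.
__host__ __device__ inline float LerpWeight(float x, float center) {
  return fmaxf(1.0f - fabsf(x - center), 0.0f);  // illustrative body
}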

At this point I checked whether I could get around this by using the “initial commit”.

The initial commit

Following the suggestion in https://github.com/google/hdrnet/issues/9, I tried the initial commit (#5ac95ef of 2017-08-21).

First compilation of the initial commit

  1. It appears that this commit requires tensorflow_gpu==1.1.0 and Python 2.7 in the environment (an environment-setup sketch follows the make log below).

Executing pip list shows:

DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
Package                       Version            
----------------------------- -------------------
backports.functools-lru-cache 1.6.4              
certifi                       2020.6.20          
cloudpickle                   1.3.0              
cycler                        0.10.0             
decorator                     4.4.2              
funcsigs                      1.0.2              
glog                          0.3.1              
kiwisolver                    1.1.0              
matplotlib                    2.2.5              
mock                          3.0.5              
networkx                      2.2                
numpy                         1.12.0             
Pillow                        6.2.2              
pip                           20.0.2             
protobuf                      3.17.3             
pyglib                        0.1                
pyparsing                     2.4.7              
python-dateutil               2.8.2              
python-gflags                 3.1.1              
python-magic                  0.4.13             
pytz                          2022.1             
PyWavelets                    1.0.3              
scikit-image                  0.14.5             
scipy                         1.2.3              
setproctitle                  1.1.10             
setuptools                    44.0.0.post20200106
six                           1.16.0             
subprocess32                  3.5.4              
tensorflow                    1.1.0              
tensorflow-gpu                1.1.0              
virtualenv                    16.7.9             
Werkzeug                      1.0.1              
wheel                         0.34.1       
  2. When I try to compile according to the readme:
    cd hdrnet
    make

I executed make from ~/GIT/hdrnet/hdrnet and received:

nvcc -std c++11 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I`python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'` -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true
In file included from /miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include/unsupported/Eigen/CXX11/Tensor:14:0,
                 from /miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include/third_party/eigen3/unsupported/Eigen/CXX11/Tensor:4,
                 from ops/bilateral_slice.cu.cc:19:
/miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include/unsupported/Eigen/CXX11/../../../Eigen/Core:42:14: fatal error: math_functions.hpp: No such file or directory
     #include <math_functions.hpp>
              ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:31: recipe for target 'build/bilateral_slice.cu.o' failed
make: *** [build/bilateral_slice.cu.o] Error 1
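For completeness, the Python 2.7 environment used above was created roughly like this (env name p27 as it appears in the logs; a sketch — package availability for Python 2.7 may have changed since):

$ conda create -n p27 python=2.7
$ conda activate p27
$ pip install tensorflow-gpu==1.1.0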

The initial commit - adding third party

Understanding that the third-party folder was missing, I cloned the eigen project from https://gitlab.com/libeigen/eigen.git into the folder hdrnet/third_party/eigen3.

As a commit ID for the eigen project I tried commit #5c68ba41a (2017-02-21).
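Roughly, from the hdrnet repository root (a sketch, not the verbatim commands):

$ git clone https://gitlab.com/libeigen/eigen.git third_party/eigen3
$ (cd third_party/eigen3 && git checkout 5c68ba41a)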

Executing make still results in the same error:

nvcc -std c++11 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I`python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'` -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true
In file included from /miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include/unsupported/Eigen/CXX11/Tensor:14:0,
                 from /miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include/third_party/eigen3/unsupported/Eigen/CXX11/Tensor:4,
                 from ops/bilateral_slice.cu.cc:19:
/miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include/unsupported/Eigen/CXX11/../../../Eigen/Core:42:14: fatal error: math_functions.hpp: No such file or directory
     #include <math_functions.hpp>
              ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:31: recipe for target 'build/bilateral_slice.cu.o' failed
make: *** [build/bilateral_slice.cu.o] Error 1

The initial commit - Debugging the compilation error

I tried to execute the compilation command manually:

nvcc -std c++11 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I`python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'` -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true

As the command python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())' returned /miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include,

I executed:

nvcc -std c++11 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I/miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true

As this returned the same error as before, I added the location of <math_functions.hpp> to the include paths (-I/usr/local/cuda/include/crt):

nvcc -std c++11 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I/miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true -I/usr/local/cuda/include/crt
I received the following error, indicating that this CUDA toolkit no longer supports the __CUDACC_VER__ macro:
In file included from /miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include/unsupported/Eigen/CXX11/../../../Eigen/Core:42:0,
                 from /miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include/unsupported/Eigen/CXX11/Tensor:14,
                 from /miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include/third_party/eigen3/unsupported/Eigen/CXX11/Tensor:4,
                 from ops/bilateral_slice.cu.cc:19:
/usr/local/cuda/include/crt/math_functions.hpp:54:2: warning: #warning "crt/math_functions.hpp is an internal header file and must not be used directly.  Please use cuda_runtime_api.h or cuda_runtime.h instead." [-Wcpp]
 #warning "crt/math_functions.hpp is an internal header file and must not be used directly.  Please use cuda_runtime_api.h or cuda_runtime.h instead."
  ^~~~~~~
In file included from /usr/local/cuda/bin/../targets/x86_64-linux/include/cuda_runtime.h:115:0,
                 from <command-line>:0:
/usr/local/cuda/bin/../targets/x86_64-linux/include/crt/common_functions.h:74:24: error: token ""__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."" is not valid in preprocessor expressions
 #define __CUDACC_VER__ "__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."
                        ^
/usr/local/cuda/bin/../targets/x86_64-linux/include/crt/common_functions.h:74:24: note: in definition of macro '__CUDACC_VER__'
 #define __CUDACC_VER__ "__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."
                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[the same __CUDACC_VER__ error and note repeat eight more times]

At this point I’m stuck.
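For what it’s worth, the mechanism behind this wall of errors seems visible in the log itself: since CUDA 9, crt/common_functions.h deliberately poisons the retired __CUDACC_VER__ macro with a string, so any header that still performs the pre-CUDA-9 version test trips over a string literal inside an #if. A paraphrased sketch of the clash (the legacy test below is illustrative, not a quote from any specific header):

// What CUDA >= 9 ships in crt/common_functions.h (string taken from the log above):
#define __CUDACC_VER__ "__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."

// A legacy header doing the old-style version test...
#if __CUDACC_VER__ >= 80000   // the macro expands to a string literal, hence the hard error
#endif

// ...has to be rewritten against the split macros instead:
#if (__CUDACC_VER_MAJOR__ * 10000 + __CUDACC_VER_MINOR__ * 100 + __CUDACC_VER_BUILD__) >= 80000
#endif

So the likely fix is not yet another include flag, but closing the gap between the CUDA 11.2 toolkit and whichever (much older) CUDA the installed TensorFlow headers were written for.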

My cuda version is:

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Feb_14_21:12:58_PST_2021
Cuda compilation tools, release 11.2, V11.2.152
Build cuda_11.2.r11.2/compiler.29618528_0
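The TensorFlow side of the toolchain can be recorded the same way; both calls below exist in TF 1.x (tf.test.is_gpu_available is deprecated, but still present, in TF 2.x):

$ python -c 'import tensorflow as tf; print(tf.__version__); print(tf.test.is_built_with_cuda())'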

Issue Analytics

  • State: open
  • Created a year ago
  • Comments: 5

Top GitHub Comments

3 reactions
ZoliN commented, Jul 13, 2022

I got it working in Colab: https://github.com/ZoliN/colab/blob/main/hdrnetOrigSlice.ipynb It uses the original bilateral slice function from 2017 (GPU only).

I built it with the latest slice function (GPU+CPU) too with some hacking, but the GPU kernel crashes, so it is currently usable only on the CPU device: https://github.com/ZoliN/colab/blob/main/hdrnetNewSlice.ipynb

These all use TF 1.x. The slice op can be built with TF2 too, but I think layers.py would have to be rewritten.
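(As an aside for anyone attempting the TF2 route: the compiled kernel itself is loaded through tf.load_op_library, which works the same way in TF 1.x and 2.x; it is layers.py, not the op loading, that is TF1-specific. A minimal sketch, with a hypothetical .so path; I believe hdrnet’s own ops.py does something equivalent:)

import tensorflow as tf

# Load the compiled custom-op library. The path below is hypothetical -
# use whatever location the Makefile actually writes the library to.
_hdrnet = tf.load_op_library('hdrnet/lib/hdrnet_ops.so')

# The generated module exposes snake_case wrappers for the registered ops;
# the names below assume the kernels register as BilateralSlice(Apply).
bilateral_slice = _hdrnet.bilateral_slice
bilateral_slice_apply = _hdrnet.bilateral_slice_apply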

0 reactions
fangli0906 commented, Sep 17, 2022

[quoting ZoliN’s comment above]

I’ve tried your Colab and it works great on the base version of Colab. However, when I switch to Colab Pro, it stops working. It builds fine and I get the same output as on the base version, but py.test throws assertion errors and the loss blows up when I train. I know this sounds like a question I should be asking Google Colab, but do you have any idea why this is happening?

_________________ BilateralSliceApplyTest.test_input_gradient __________________

self = <hdrnet.test.ops_test.BilateralSliceApplyTest testMethod=test_input_gradient>

def test_input_gradient(self):
  for dev in ['/gpu:0']:
    batch_size = 1
    h = 8
    w = 5
    gh = 6
    gw = 3
    d = 7
    i_chans = 3
    o_chans = 3
    grid_shape = [batch_size, gh, gw, d, (1+i_chans)*o_chans]
    guide_shape = [batch_size, h, w]
    input_shape = [batch_size, h, w, i_chans]
    output_shape = [batch_size, h, w, o_chans]

    grid_data = np.random.rand(*grid_shape).astype(np.float32)
    guide_data = np.random.rand(*guide_shape).astype(np.float32)
    input_data = np.random.rand(*input_shape).astype(np.float32)

    with tf.device(dev):
      grid_tensor = tf.convert_to_tensor(grid_data,
                                         name='data',
                                         dtype=tf.float32)
      guide_tensor = tf.convert_to_tensor(guide_data,
                                          name='guide',
                                          dtype=tf.float32)
      input_tensor = tf.convert_to_tensor(input_data,
                                          name='input',
                                          dtype=tf.float32)

      output_tensor = ops.bilateral_slice_apply(grid_tensor, guide_tensor, input_tensor, has_offset=True)

    with self.test_session():
      err = tf.test.compute_gradient_error(
          input_tensor,
          input_shape,
          output_tensor,
          output_shape)
    self.assertLess(err, 3e-4)

E AssertionError: 0.9942179322242737 not less than 0.0003

test/ops_test.py:506: AssertionError
----------------------------- Captured stderr call -----------------------------
2022-09-17 14:15:11.833032: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2022-09-17 14:15:11.833117: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-09-17 14:15:11.833127: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2022-09-17 14:15:11.833134: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2022-09-17 14:15:11.833216: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4884 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:04.0, compute capability: 6.0)
=========================== short test summary info ============================
FAILED test/ops_test.py::BilateralSliceTest::test_grid_optimize - AssertionEr…
FAILED test/ops_test.py::BilateralSliceTest::test_guide_gradient - AssertionE…
FAILED test/ops_test.py::BilateralSliceTest::test_guide_optimize - AssertionE…
FAILED test/ops_test.py::BilateralSliceTest::test_interpolate - AssertionErro…
FAILED test/ops_test.py::BilateralSliceTest::test_optimize_both - AssertionEr…
FAILED test/ops_test.py::BilateralSliceApplyTest::test_grid_gradient - Assert…
FAILED test/ops_test.py::BilateralSliceApplyTest::test_guide_gradient - Asser…
FAILED test/ops_test.py::BilateralSliceApplyTest::test_input_gradient - Asser…
=================== 8 failed, 4 passed, 2 skipped in 27.87s ====================

%cd /content/hdrnet
!wget https://data.csail.mit.edu/graphics/hdrnet/pretrained_models.zip
!unzip pretrained_models.zip
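For what it’s worth, a gradient-check error of ~0.99 against a 3e-4 tolerance usually means the GPU kernel is producing entirely wrong values, not slightly imprecise ones. One hedged guess: base Colab and Colab Pro hand out different GPUs (the log above shows a Tesla P100, compute capability 6.0), and an op built without a matching -gencode arch=compute_60,code=sm_60 can fail in surprising ways rather than erroring cleanly. A quick way to iterate on one failing test while experimenting with build flags:

$ python -m pytest test/ops_test.py::BilateralSliceApplyTest::test_input_gradient -v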
