[TOPI] Conv2d Schedule for Intel HD Graphics Target fails and produces wrong output
See original GitHub issue. nnvm.compiler.build() fails for the intel_graphics target. Sample model: https://s3.amazonaws.com/download.onnx/models/opset_3/resnet50.tar.gz. Error log:
Traceback (most recent call last):
File "C:\Users\rg\Documents\Visual Studio 2015\Projects\nnvm_tvm_resnet\src\nnvm_tvm_igpu_.py", line 144, in <module>
graph, lib, params = nnvm.compiler.build(sym, tvm.target.intel_graphics(), input_dict, params=params)
File "C:\tvm\nnvm\python\nnvm\compiler\build_module.py", line 294, in build
graph = graph.apply("GraphFusePartition").apply("GraphFuseCompile")
File "C:\tvm\nnvm\python\nnvm\graph.py", line 234, in apply
check_call(_LIB.NNGraphApplyPasses(self.handle, npass, cpass, ctypes.byref(ghandle)))
File "C:\tvm\nnvm\python\nnvm\_base.py", line 75, in check_call
raise NNVMError(py_str(_LIB.NNGetLastError()))
nnvm._base.NNVMError: TVMCall CFunc Error:
Traceback (most recent call last):
File "C:\tvm\python\tvm\_ffi\_ctypes\function.py", line 54, in cfun
rv = local_pyfunc(*pyargs)
File "C:\tvm\nnvm\python\nnvm\top\nn.py", line 164, in compute_contrib_conv2d_NCHWc
strides, padding, layout, out_layout)
File "<decorator-gen-40>", line 2, in conv2d_NCHWc
File "C:\tvm\python\tvm\target.py", line 345, in dispatch_func
return dispatch_dict[k](*args, **kwargs)
TypeError: _decl_conv2d() takes from 6 to 7 positional arguments but 9 were given
The call stack reaches here: https://github.com/dmlc/tvm/blob/fd1a572058aef5a07e1e1032e26e67fe1906f9b2/nnvm/python/nnvm/top/nn.py#L169-L170
which then dispatches to the intel_graphics schedules. However, the conv2d implementation for Intel graphics is missing the "layout" and "out_layout" function parameters, as shown here: https://github.com/dmlc/tvm/blob/fd1a572058aef5a07e1e1032e26e67fe1906f9b2/topi/python/topi/intel_graphics/conv2d.py#L60
I worked around this error by giving the missing parameters defaults, layout=None and out_layout=None, and it then compiled successfully. But now the prediction output [inference] is wrong with tvm.target.intel_graphics()! [I tested with the default target='opencl' and the prediction result is good.] Why does the inference output change when using tvm.target.intel_graphics()?
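The arity mismatch and the workaround can be sketched in plain Python. The functions below are hypothetical stand-ins for the real TVM dispatcher and the intel_graphics implementation, not the actual TOPI code; the "NCHW8c" layout string and the returned dict are illustrative only. The point is that the generic conv2d_NCHWc entry forwards nine arguments, so an implementation lacking layout/out_layout raises the TypeError from the log, and adding them as defaulted parameters makes the signatures compatible:

```python
# Hypothetical stand-in for the generic dispatcher: it always forwards
# nine arguments, including the two layout strings.
def conv2d_NCHWc(data, kernel, num_filter, kernel_size, strides,
                 padding, layout, out_layout, out_dtype="float32"):
    # In TVM this would look up the target-specific implementation via
    # the dispatch dict; here we call the "intel_graphics" version directly.
    return _decl_conv2d(data, kernel, num_filter, kernel_size, strides,
                        padding, layout, out_layout, out_dtype)

# Before the workaround the implementation lacked the two layout
# parameters, so the call above failed with:
#   TypeError: _decl_conv2d() takes from 6 to 7 positional arguments
#   but 9 were given
# Giving them defaults (as in the workaround) silences the error:
def _decl_conv2d(data, kernel, num_filter, kernel_size, strides,
                 padding, layout=None, out_layout=None,
                 out_dtype="float32"):
    # Placeholder body: just echo the arguments the schedule would use.
    return {"strides": strides, "padding": padding, "layout": layout}

out = conv2d_NCHWc("data", "kernel", 64, (3, 3), (1, 1),
                   (1, 1), "NCHW8c", "NCHW8c")
```

Note that defaulting the layouts only fixes the call signature; if the intel_graphics schedule then ignores the requested layout rather than honoring it, the computed tensors can differ from the other targets, which may be related to the wrong inference output described above.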
Issue Analytics
- Created 5 years ago
- Comments:9 (9 by maintainers)
Top GitHub Comments
@rajh619 Glad to hear that you found the problem. Yeah, I agree that the intel_graphics target's schedules are more suitable for Intel integrated graphics, while on discrete Intel graphics the original CUDA scheduler can do the job better. We'll do more coverage on both the operators and the networks.
@tqchen Maybe we are good on closing the issue.
@rajh619 The community will always try to solve problems together, be it on the forum or in issues 😉 The forum is preferred because issues are for actionable items; we aggressively close issues and expect them to be active, actionable items that can be closed in an expected time span (so we won't have a pile of issues that get missed).