Register Extension Support for PyCapsule
Currently, the method of enabling DLTensors for TVM relies on Python having access to the data attribute. However, PyTorch's _to_dlpack operator returns a PyCapsule that is opaque to Python, making it incompatible with the way register_extension is set up. If PyCapsules were allowed, integration with PyTorch would be much more efficient.

As a caveat, I may be totally missing something about how registering extensions works, and would be thrilled to be corrected. My goal is to use a PyTorch tensor buffer without having to copy it to an ndarray. I imagine the code would look similar to this:
import torch
import tvm

@tvm.register_extension
class FireTensor(object):
    _tvm_tcode = tvm.TypeCode.ARRAY_HANDLE

    def __init__(self, tensor):
        # Wrap the PyTorch tensor's buffer in a DLPack capsule.
        self.handle = torch._C._to_dlpack(tensor)

    @property
    def _tvm_handle(self):
        # TVM expects a raw pointer value here, but _to_dlpack returns an
        # opaque PyCapsule, which is exactly the incompatibility at issue.
        return self.handle
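If handles like this were accepted, the intended zero-copy call pattern would look roughly like the sketch below. This is hypothetical: the add kernel is built inline with the old-style TVM API purely for illustration, and the final call is the part that fails today because _tvm_handle returns a capsule rather than an address.

import torch
import tvm

# Build a trivial elementwise-add kernel (old-style TVM API of this era).
n = tvm.var("n")
A = tvm.placeholder((n,), name="A")
B = tvm.placeholder((n,), name="B")
C = tvm.compute((n,), lambda i: A[i] + B[i], name="C")
s = tvm.create_schedule(C.op)
fadd = tvm.build(s, [A, B, C], "llvm")

x, y, z = torch.rand(1024), torch.rand(1024), torch.zeros(1024)
# The hoped-for call: each FireTensor hands TVM a view of the torch
# storage with no copy. Today this fails, since the handle is a capsule.
fadd(FireTensor(x), FireTensor(y), FireTensor(z))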
Issue Analytics
- Created: 6 years ago
- Comments: 13 (10 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Your commit resolves the problem; everything is working great now, thanks! For reference, here's what the custom tensor for PyTorch ended up looking like:
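A sketch of such a wrapper (a plausible reconstruction, not necessarily the commenter's verbatim code), unwrapping the capsule's pointer with ctypes. PyTorch names its DLPack capsules "dltensor", and a DLManagedTensor begins with its DLTensor member, so its address can double as an array handle:

import ctypes
import torch
import tvm

ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p
ctypes.pythonapi.PyCapsule_GetPointer.argtypes = [ctypes.py_object,
                                                  ctypes.c_char_p]

@tvm.register_extension
class FireTensor(object):
    _tvm_tcode = tvm.TypeCode.ARRAY_HANDLE

    def __init__(self, tensor):
        # Keep the capsule alive for as long as this wrapper exists.
        self.capsule = torch._C._to_dlpack(tensor)
        # Extract the raw DLManagedTensor* hidden inside the capsule.
        self.pointer = ctypes.pythonapi.PyCapsule_GetPointer(
            self.capsule, b"dltensor")

    @property
    def _tvm_handle(self):
        # An integer address, which is what TVM's FFI expects.
        return self.pointer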
Try https://github.com/dmlc/tvm/pull/669, which checks whether the buffer is compact even when strides are available. This is the recommended approach when the data buffer is compact.
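For intuition, the compactness condition being checked amounts to something like the following (a sketch of the idea, not the PR's actual code):

def is_compact(shape, strides):
    # Row-major contiguity: each dimension's stride must equal the
    # product of the extents of all faster-varying dimensions.
    expected = 1
    for extent, stride in zip(reversed(shape), reversed(strides)):
        if stride != expected:
            return False
        expected *= extent
    return True

assert is_compact((2, 3, 4), (12, 4, 1))      # contiguous buffer
assert not is_compact((2, 3, 4), (24, 8, 2))  # strided view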
Alternatively, use tvm.decl_buffer to declare a buffer with a strides array, and pass it via the binds argument of tvm.build (http://docs.tvmlang.org/api/python/build.html#tvm.build). See also http://docs.tvmlang.org/api/python/tvm.html#tvm.decl_buffer and https://github.com/dmlc/tvm/issues/585.
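A minimal sketch of that alternative with the old-style API (the symbolic stride sa and the trivial add-one kernel are introduced here only for illustration):

import tvm

n = tvm.var("n")
A = tvm.placeholder((n,), name="A")
B = tvm.compute((n,), lambda i: A[i] + 1.0, name="B")
s = tvm.create_schedule(B.op)

# Declare a buffer whose stride is symbolic instead of assumed compact,
# then bind it to the placeholder at build time.
Ab = tvm.decl_buffer(A.shape, A.dtype, name="Ab",
                     strides=[tvm.var("sa")])
fadd_strided = tvm.build(s, [A, B], "llvm", binds={A: Ab})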