Access to raw C++ tensor
I’d like to share tensors between Python and .NET, e.g. have a batch generator written in .NET and pass those batches to a Python model loaded through Python.NET.
Currently there’s no way, even unsafely, to get the raw C++ Torch tensor pointer from TorchSharp and pass it to Python.
A workaround is to access the raw data (`tensor.bytes`) and use it to construct a tensor in Python, but that introduces an unnecessary copy and is inefficient, especially if the source and target tensors live on the GPU.
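For illustration, a minimal Python-side sketch of that copy-based workaround. The byte payload here is synthetic; in the real scenario it would arrive from TorchSharp’s `tensor.bytes`, and the shape/dtype would have to be passed alongside it:

```python
import numpy as np
import torch

# Synthetic stand-in for the bytes that TorchSharp's tensor.bytes
# would produce: row-major float32 data for a 2x3 tensor.
raw = np.arange(6, dtype=np.float32).tobytes()

# torch.frombuffer wraps the Python buffer without another copy,
# but the .NET -> Python byte transfer itself already copied the data,
# and this path only works for CPU memory -- GPU tensors would have to
# round-trip through host memory.
t = torch.frombuffer(bytearray(raw), dtype=torch.float32).reshape(2, 3)
print(t)
```

This is exactly the extra hop the issue is asking to avoid: with a raw tensor pointer, the Python side could alias the same storage instead of rebuilding it from bytes.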
So the ask in this ticket is access to the raw tensor pointer.
My actual target is a full-blown Python PyTorch tensor (the PyObject wrapper of the C++ tensor), but I assume that’s beyond the scope of TorchSharp.
Issue Analytics
- State:
- Created: 7 months ago
- Comments: 10 (9 by maintainers)
I have nothing against Python (although I’m personally a big fan of Julia for numeric workloads), and I think Python.NET is really great for the scenarios it’s intended to serve.
That said, I want TorchSharp to remain simple and straightforward. It’s a library that is aware of its role and isn’t trying to be something more than it is: .NET bindings to libtorch. Therefore, it must remain free of any runtime dependencies beyond .NET and whatever the supported accelerator hardware requires.
Yeah, my feed is full of links to LeCun’s recent comment about Python’s GIL slowing deep learning research down.
I also noticed that they tried and failed to make simple multithreading work efficiently with `nn.DataParallel` because of it, and now recommend `DistributedDataParallel`, which is an order of magnitude more complicated to set up, and likely uses some arcane magic internally, like passing Python objects across process boundaries, with big rapid-firing footguns.