
Access to raw C++ tensor

See original GitHub issue

I’d like to share tensors between Python and .NET,
e.g. have a batch generator written in .NET and pass those batches to a Python model loaded through Python.NET.

Currently there is no way, even unsafely, to get the raw C++ Torch tensor pointer out of TorchSharp and pass it to Python.

A workaround is to access the raw data (tensor.bytes) and use that to construct a tensor in Python, but it results in unnecessary copying and is inefficient, especially if the source and target tensors are on the GPU.
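
To make the copy overhead concrete, here is a minimal Python-side sketch of that workaround, assuming the .NET side hands the raw bytes, shape, and dtype across the Python.NET boundary; the helper name and the dummy call at the end are illustrative only, not part of TorchSharp or Python.NET.

```python
import numpy as np
import torch

def tensor_from_raw_bytes(raw: bytes, shape, dtype=np.float32):
    """Rebuild a CPU tensor from raw bytes received from .NET.

    np.frombuffer returns a read-only view over the immutable bytes
    object, so an extra .copy() is needed before torch.from_numpy,
    which is exactly the kind of copying the issue wants to avoid.
    """
    arr = np.frombuffer(raw, dtype=dtype).reshape(shape).copy()
    return torch.from_numpy(arr)

# Illustrative call with zero-filled bytes standing in for data that
# would really come from TorchSharp's tensor.bytes:
t = tensor_from_raw_bytes(b"\x00" * (2 * 3 * 4), (2, 3))
print(t.shape)  # torch.Size([2, 3])
```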

So the ask in this ticket is to get access to the raw tensor pointer.

My target is actually to get a full-blown Python PyTorch tensor (the PyObject wrapper of the C++ tensor), but I assume that’s beyond the scope of TorchSharp.
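
For reference, and purely on the plain PyTorch (Python) side rather than TorchSharp, this is what such handles look like: data_ptr() exposes the raw storage address, and DLPack is the usual zero-copy interchange format between frameworks. The sketch below just illustrates the concept being requested.

```python
import torch
from torch.utils.dlpack import to_dlpack, from_dlpack

# data_ptr() returns the address of the first element of the tensor's
# underlying storage; it is only valid while the tensor is alive.
t = torch.arange(6, dtype=torch.float32).reshape(2, 3)
addr = t.data_ptr()
print(hex(addr))

# A view shares the same storage, so the address does not change.
assert t.view(-1).data_ptr() == addr

# Round-tripping through a DLPack capsule keeps pointing at the same
# memory, i.e. no copy is made.
capsule = to_dlpack(t)
t2 = from_dlpack(capsule)
assert t2.data_ptr() == addr
```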

Issue Analytics

  • State: open
  • Created 7 months ago
  • Comments: 10 (9 by maintainers)

Top GitHub Comments

1 reaction
NiklasGustafsson commented on Feb 23, 2023

I have nothing against Python (although I’m personally a big fan of Julia for numeric workloads) and I think Python.NET is really great for the scenarios it’s intended to serve.

That said, I want TorchSharp to remain simple and straightforward. It’s a library that is aware of its role and is not trying to be something more than it is: .NET bindings to libtorch. Therefore, it must remain free of any runtime dependencies beyond .NET and whatever the supported accelerator hardware requires.

0 reactions
lostmsu commented on Feb 23, 2023

As long as we don’t introduce a Python VM dependency into TorchSharp…

Yeah, my feed is full of links to LeCun’s recent comment about Python slowing deep learning research down because of the GIL.

I also noticed that they tried and failed to make simple multithreading work efficiently with nn.DataParallel because of it, and now recommend DistributedDataParallel, which is an order of magnitude more complicated to set up and likely uses some arcane magic internally, such as passing Python objects across process boundaries, with big rapid-firing footguns.
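
To illustrate that setup gap, here is a minimal CPU-only sketch contrasting the two approaches, assuming the gloo backend, two processes, and a dummy model and data chosen purely for illustration; real training would typically also add a DistributedSampler and a launcher such as torchrun.

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

# The single-process alternative is literally one line, but its replicas
# are driven by Python threads, which is the GIL bottleneck discussed above:
#   model = torch.nn.DataParallel(model)

def ddp_worker(rank: int, world_size: int):
    # DistributedDataParallel runs one OS process per worker, which
    # sidesteps the GIL but requires explicit process-group setup.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = torch.nn.Linear(8, 2)
    ddp_model = DDP(model)  # gradients are all-reduced across processes

    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.1)
    x, y = torch.randn(4, 8), torch.randn(4, 2)
    loss = torch.nn.functional.mse_loss(ddp_model(x), y)
    loss.backward()
    opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2
    mp.spawn(ddp_worker, args=(world_size,), nprocs=world_size)
```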

Read more comments on GitHub >

Top Results From Across the Web

  • How to get raw pointer from tensors? · Issue #1649
    “Hi all, I want to know how to get raw pointer from PyTorch tensors? Background: I want to have multiple C threads writing...”
  • How to construct a tensorflow::Tensor from raw pointer data ...
    “Internally, this creates a reference-counted TensorBuffer object that takes ownership of the raw pointer. (Unfortunately, only the C API has ...”
  • [P] Access raw pointers of Tensorflow tensors.
    “Essentially, you need access to the pointer if you want to write your own C++...”
  • Introduction to Tensors | TensorFlow Core
    “The data maintains its layout in memory and a new tensor is created, with the requested shape, pointing to the same data. TensorFlow...”
  • How to access and modify the values of a Tensor in PyTorch?
    “We can access the value of a tensor by using indexing and slicing. Indexing is used to access a single value in the...”
