
Caching function that returns PyTorch tensor on GPU gives error message

See original GitHub issue

Summary

The function described below causes Streamlit to give an error message that says “In this specific case, it’s very likely you found a Streamlit bug so please file a bug report here.”

Steps to reproduce

On a machine with a CUDA GPU, run the following file test.py with streamlit run test.py:

import streamlit as st
import torch
import numpy as np

@st.cache
def f(w):
    return w

w = torch.from_numpy(np.array([1,2,3])).to('cuda')

st.write(f(w))

If you change cuda to cpu, the script runs without any error.

If you change the initialization to w = torch.eye(3).to('cuda'), the error message is slightly different.

Expected behavior:

It should write the array [1,2,3].

Actual behavior:

Streamlit gives an error message and says to report a bug.

Is this a regression?

No. This code caused version 0.57 to core dump, so the failure is not new; only the way it surfaces has changed.

Debug info

  • Streamlit version: 0.60
  • Python version: 3.6.9
  • Using Conda? PipEnv? PyEnv? Pex? No
  • OS version: Linux 5.0.0-37-generic #40~18.04.1-Ubuntu
  • Browser version: Chrome

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 8 (5 by maintainers)

Top GitHub Comments

1 reaction
hertzmann commented, Jun 16, 2020

(image attachment)

0 reactions
randyzwitch commented, Mar 26, 2021

Closing, as it appears there is a solution provided above. Happy to re-open if I’m misunderstanding
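
The solution mentioned here appears to have been posted as the image above, which this page does not reproduce. For anyone hitting the same error, a common workaround with the legacy st.cache API is to give Streamlit's hasher a custom hash function for torch.Tensor via the hash_funcs parameter, so it never tries to hash CUDA storage directly. This is only a sketch under that assumption (hashing by the tensor's CPU contents; hashing by id() is another option if identity-based caching is acceptable), and not necessarily the fix shown in the image:

import streamlit as st
import torch
import numpy as np

def tensor_bytes(t):
    # Move the tensor to CPU before hashing; .numpy() is unavailable on CUDA tensors.
    return t.detach().cpu().numpy().tobytes()

# Hash torch.Tensor arguments (and return values) by their CPU contents so the
# default hasher never touches the CUDA storage.
@st.cache(hash_funcs={torch.Tensor: tensor_bytes})
def f(w):
    return w

w = torch.from_numpy(np.array([1, 2, 3])).to('cuda')
st.write(f(w))  # should write [1, 2, 3] without triggering the internal hashing error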

Read more comments on GitHub >

Top Results From Across the Web

Frequently Asked Questions — PyTorch 1.13 documentation
PyTorch uses a caching memory allocator to speed up memory allocations. As a result, the values shown in nvidia-smi usually don't reflect the...

How can we release GPU memory cache?
I think it is due to CUDA memory caching of no-longer-used tensors. I know torch.cuda.empty_cache, but it requires del of the variable...

CUDA semantics — PyTorch 1.13 documentation
PyTorch uses a caching memory allocator to speed up memory allocations. This allows fast memory deallocation without device synchronizations. However, the...

How can we release GPU memory cache? - #14 by ptrblck
detach() returns a tensor that shares storage (and the same device) ... the first line in my code, I still get a 'CUDA...

Unable to allocate cuda memory, when there is enough of ...
Are there any tools to show which Python objects consume GPU RAM (besides the PyTorch preloaded structures which take some 0.5GB per process)...
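
These links concern PyTorch's own GPU memory caching rather than Streamlit's st.cache, but they surface in the same searches. As the snippets note, PyTorch's caching allocator keeps freed blocks reserved, so nvidia-smi overstates what live tensors actually use. A minimal illustration using standard torch.cuda calls (the exact numbers printed will vary by device):

import torch

t = torch.eye(1024, device='cuda')      # allocate a tensor on the GPU
print(torch.cuda.memory_allocated())    # bytes occupied by live tensors
del t                                   # drop the last reference to the tensor
print(torch.cuda.memory_allocated())    # falls back down...
print(torch.cuda.memory_reserved())     # ...but the allocator still holds the block
torch.cuda.empty_cache()                # hand cached blocks back to the CUDA driver
print(torch.cuda.memory_reserved())     # reserved memory shrinks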
