
How much GPU memory is needed when evaluating DAVIS?

See original GitHub issue

I tried ‘eval_video_segmentation.py’ with a single 16 GB V100, but a CUDA out-of-memory error always occurs when processing the 28th video, i.e. ‘shooting’.

RuntimeError: CUDA out of memory. Tried to allocate 8.90 GiB (GPU 0; 15.78 GiB total capacity; 3.46 GiB already allocated; 1.59 GiB free; 12.99 GiB reserved in total by PyTorch)

However, ‘shooting’ is only 39 frames long, which is shorter than the preceding videos, so I am confused about why the out-of-memory error happens here.

Did you use 32GB V100 for inference?
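As background for why a single 8.9 GiB allocation can show up on a short video: in label-propagation evaluation, the peak allocation is typically a dense token-to-token affinity matrix whose size grows quadratically with the per-frame token count and does not depend on video length. A back-of-the-envelope sketch (the patch-grid sizes and context length below are illustrative assumptions, not values from the repository):

```python
# Rough estimate of the affinity-matrix memory in label propagation.
# All sizes here are illustrative assumptions, not values from the repo.

def affinity_bytes(h_patches, w_patches, n_context_frames, bytes_per_elem=4):
    """Memory for a (tokens x context*tokens) fp32 affinity matrix."""
    tokens = h_patches * w_patches
    return tokens * n_context_frames * tokens * bytes_per_elem

# Hypothetical 480x880 frame: a 16x16 patch size gives a 30x55 token grid,
# while an 8x8 patch size gives 60x110 -- 4x the tokens, 16x the matrix.
vit16 = affinity_bytes(30, 55, n_context_frames=7)
vit8 = affinity_bytes(60, 110, n_context_frames=7)

print(f"16x16 patches: {vit16 / 2**30:.3f} GiB")
print(f" 8x8  patches: {vit8 / 2**30:.3f} GiB")
```

Halving the patch size quadruples the token count, so the affinity matrix grows 16-fold, which is why a smaller-patch model can blow past a 16 GB budget even on a short clip.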

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
mathildecaron31 commented, Aug 13, 2021

Closing the issue; feel free to re-open if you have further questions.

1 reaction
pansanity666 commented, Jul 30, 2021

Ha yes, you’re right: I removed this feature when refactoring the code for simplicity. I should remove the argument to avoid confusion… Well, in that case you can either

What model are you using? ViT-small 16x16 fits into memory with a 16 GB GPU in my experiments.

I am using ViT-small 8x8, and 16x16 works well in my experiments too. Maybe the torch.bmm operation leads to the OOM. I will try CPU inference for ‘shooting’. Thank you for your suggestion and reply.

Best,
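If the large batched matrix product is indeed the culprit, one generic workaround (a sketch under assumed shapes and names, not the repository’s actual code) is to propagate labels in chunks of query tokens, so the full affinity matrix is never materialized at once:

```python
import torch

def propagate_labels_chunked(feat_q, feat_k, labels, chunk=1024):
    """Label propagation without materializing the full (Nq, Nk) affinity.

    feat_q: (Nq, D) features of the frame being segmented
    feat_k: (Nk, D) features of the context frames
    labels: (Nk, C) soft labels attached to the context tokens
    Only a (chunk, Nk) affinity slice is alive at any one time.
    """
    out = []
    for s in range(0, feat_q.shape[0], chunk):
        aff = feat_q[s:s + chunk] @ feat_k.T   # (chunk, Nk) affinity slice
        aff = aff.softmax(dim=-1)              # normalize over context tokens
        out.append(aff @ labels)               # (chunk, C) propagated labels
    return torch.cat(out, dim=0)

# Sanity check against the dense computation on small random inputs.
q, k, l = torch.randn(3000, 64), torch.randn(2000, 64), torch.randn(2000, 5)
dense = (q @ k.T).softmax(dim=-1) @ l
assert torch.allclose(propagate_labels_chunked(q, k, l), dense, atol=1e-5)
```

Moving only this product to the CPU for the offending video, as suggested in the thread, trades speed for memory in the same way.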


