
GPU memory consumption

See original GitHub issue

Hi,

When running PatchmatchNet on the ETH3D dataset through eval.py, I end up using 15 GB of GPU memory, while the paper reports 5,529 MB. Could it be that all images for all scenes are loaded into memory at the same time through the DataLoader? Or is there something else in the code that might be causing such large memory consumption?

I appreciate your answer, thanks.

Best, Sinisa
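
One way to check whether the DataLoader (rather than the network itself) is driving the 15 GB figure is to log peak GPU memory per view during evaluation. The sketch below is not from the PatchmatchNet repository; the batch keys and the model call signature are hypothetical stand-ins for whatever eval.py actually builds.

```python
# Minimal sketch (not from the PatchmatchNet repo): log peak GPU memory per
# batch during evaluation. A DataLoader normally yields one batch at a time,
# so whole-dataset caching would have to come from elsewhere (e.g. the
# dataset's __init__).
import torch
from torch.utils.data import DataLoader

@torch.no_grad()
def profile_peak_memory(model: torch.nn.Module, loader: DataLoader, device: str = "cuda") -> None:
    model.eval().to(device)
    for step, batch in enumerate(loader):
        torch.cuda.reset_peak_memory_stats(device)
        imgs = batch["imgs"].to(device)            # hypothetical key
        proj = batch["proj_matrices"].to(device)   # hypothetical key
        _ = model(imgs, proj)                      # hypothetical call signature
        peak_mb = torch.cuda.max_memory_allocated(device) / 2**20
        print(f"batch {step}: peak allocated {peak_mb:.0f} MB")
```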

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Comments: 17 (2 by maintainers)

Top GitHub Comments

1 reaction
hx804722948 commented, Jan 18, 2022

@anmatako thank you very much! CUDA 11.1 + torch 1.9.1 works right.
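
For anyone hitting the same issue, a quick way to confirm which PyTorch/CUDA build is actually active (a generic sanity check, not taken from this thread):

```python
# Print the installed PyTorch version and the CUDA toolkit it was built against.
import torch

print("torch:", torch.__version__)              # e.g. 1.9.1
print("built with CUDA:", torch.version.cuda)   # e.g. 11.1
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```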

1 reaction
anmatako commented, Dec 21, 2021

@atztao I am sharing the new point cloud results along with a text file with the metrics. For comparison, here are the legacy point cloud results, with a metrics text file as well.

Read more comments on GitHub >

Top Results From Across the Web

How much GPU memory do I need? | Digital Trends
According to Nvidia's Professional Solution Guide, modern GPUs equipped with 8GB to 12GB of VRAM are necessary for meeting minimum requirements.
Read more >
What are GPU Memory Utilization and Max GPU Memory Used?
Memory Utilization = bandwidth from 0-100% (specifically how busy the copy engine of the GPU is); Max GPU Memory Used = allocated memory... (see the sketch after these results)
Read more >
Quadro GPU Memory Usage - NVIDIA
How do I track GPU memory usage? The Nvidia driver enables counters within the operating system. The Windows Performance Monitor application can ...
Read more >
Estimating GPU memory consumption of deep learning models
In this paper, we propose DNNMem, an accurate estimation tool for GPU memory consumption of DL models. DNNMem employs an analytic estimation ...
Read more >
How is GPU and memory utilization defined in nvidia-smi ...
To be more specific: GPU busy is the percentage of time over the last second that any of the SMs was busy, and...
Read more >
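
The counters described in the second and last results above (memory utilization as a busy percentage vs. memory actually allocated, plus overall GPU busy time) can be read programmatically. Below is a minimal sketch using NVIDIA's NVML Python bindings, assuming the nvidia-ml-py package is installed; the mapping between these counters and the articles' terms is my interpretation.

```python
# Sketch reading GPU utilization and memory figures via NVML (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

util = pynvml.nvmlDeviceGetUtilizationRates(handle)
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)

print(f"GPU busy: {util.gpu}%")          # % of the sampling period any SM was busy
print(f"memory busy: {util.memory}%")    # % of the period the memory controller was busy
print(f"memory used: {mem.used / 2**20:.0f} MiB of {mem.total / 2**20:.0f} MiB")

pynvml.nvmlShutdown()
```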
