Running out of memory on a 4GB card
See original GitHub issue

I'm trying to run Faster-RCNN on an NVIDIA GTX 1050 Ti, but I'm running out of memory. nvidia-smi says that about 170MB are already in use, but does Faster-RCNN really use 3.8GB of VRAM to process a single image?
I tried Mask-RCNN too (the model from the getting started tutorial) and got through about 4 images (5 if I closed my browser) before it crashed.
Is this a bug or does it really just need more than 4GB of memory?
INFO infer_simple.py: 111: Processing demo/18124840932_e42b3e377c_k.jpg -> /home/px046/prog/Detectron/output/18124840932_e42b3e377c_k.jpg.pdf
terminate called after throwing an instance of 'caffe2::EnforceNotMet'
what(): [enforce fail at blob.h:94] IsType<T>(). wrong type for the Blob instance. Blob contains nullptr (uninitialized) while caller expects caffe2::Tensor<caffe2::CUDAContext> .
Offending Blob name: gpu_0/conv_rpn_w.
Error from operator:
input: "gpu_0/res4_5_sum" input: "gpu_0/conv_rpn_w" input: "gpu_0/conv_rpn_b" output: "gpu_0/conv_rpn" name: "" type: "Conv" arg { name: "kernel" i: 3 } arg { name: "exhaustive_search" i: 0 } arg { name: "pad" i: 1 } arg { name: "order" s: "NCHW" } arg { name: "stride" i: 1 } device_option { device_type: 1 cuda_gpu_id: 0 } engine: "CUDNN"
*** Aborted at 1516787658 (unix time) try "date -d @1516787658" if you are using GNU date ***
PC: @ 0x7f08de455428 gsignal
*** SIGABRT (@0x3e800000932) received by PID 2354 (TID 0x7f087cda9700) from PID 2354; stack trace: ***
@ 0x7f08de4554b0 (unknown)
@ 0x7f08de455428 gsignal
@ 0x7f08de45702a abort
@ 0x7f08d187db39 __gnu_cxx::__verbose_terminate_handler()
@ 0x7f08d187c1fb __cxxabiv1::__terminate()
@ 0x7f08d187c234 std::terminate()
@ 0x7f08d1897c8a execute_native_thread_routine_compat
@ 0x7f08def016ba start_thread
@ 0x7f08de52741d clone
@ 0x0 (unknown)
Aborted (core dumped)
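For what it's worth, it is easy to confirm how much VRAM the process actually peaks at by polling the GPU while infer_simple.py runs. The snippet below is not part of Detectron; it assumes the pynvml NVML bindings are installed and that the model runs on GPU 0, as in the log above.

import time
import pynvml

# Poll GPU 0 once per second and track the peak memory footprint while
# infer_simple.py runs in a separate process. Stop with Ctrl-C.
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

peak = 0
try:
    while True:
        info = pynvml.nvmlDeviceGetMemoryInfo(handle)  # values are in bytes
        peak = max(peak, info.used)
        print("used %5d MiB / %5d MiB (peak %5d MiB)"
              % (info.used >> 20, info.total >> 20, peak >> 20))
        time.sleep(1.0)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()

Run it in a second terminal, start infer_simple.py as usual, and interrupt the monitor once the crash occurs to see the peak usage.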
Issue Analytics
- Created 6 years ago
- Comments: 24 (11 by maintainers)
Top GitHub Comments
One additional note: the current implementation uses memory optimizations during training, but not during inference. In the case of inference, it is possible to substantially reduce memory usage since intermediate activations are not needed once they are consumed. We will consider adding inference-only memory optimization in the future.
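To make the idea above concrete, here is a toy sketch (in no way Caffe2's actual memory planner): during a forward-only pass, each intermediate activation can be released as soon as the last operator that reads it has run, instead of being kept alive until the end of the network.

import numpy as np

# Toy linear "net": each op reads one blob and writes one blob.
ops = [
    ("conv1", "data", "act1"),
    ("conv2", "act1", "act2"),
    ("rpn",   "act2", "scores"),
]

# Index of the last op that reads each blob; past that point it can be freed.
last_use = {}
for i, (_, inp, _) in enumerate(ops):
    last_use[inp] = i

blobs = {"data": np.zeros((1, 3, 800, 1333), dtype=np.float32)}
for i, (name, inp, out) in enumerate(ops):
    blobs[out] = blobs[inp] + 1.0            # stand-in for the real layer
    if last_use[inp] == i and inp != "data":
        del blobs[inp]                        # activation already consumed
    print(name, "-> live blobs:", sorted(blobs))

During training the activations have to be kept around for the backward pass, so this particular trick only applies to inference.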
How can I run Mask-RCNN with a 2GB GPU? Can anyone help me?
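Not an official answer, but the usual lever on small cards is test-time resolution: the TEST.SCALE and TEST.MAX_SIZE keys in Detectron's YAML configs set how large the resized input (and therefore every activation map) becomes. With infer_simple.py you would edit those two keys directly in the config file; the sketch below assumes you drive inference from your own script instead, and uses the tutorial's Mask R-CNN config as an example (the import path is core.config in early Detectron checkouts and detectron.core.config in later ones).

from detectron.core.config import (
    cfg, merge_cfg_from_file, merge_cfg_from_list, assert_and_infer_cfg)

# Load the tutorial config, then shrink the test-time resolution.
merge_cfg_from_file('configs/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml')
# Halving the 800/1333 defaults roughly quarters each activation map.
merge_cfg_from_list(['TEST.SCALE', 400, 'TEST.MAX_SIZE', 667])
assert_and_infer_cfg()
print('TEST.SCALE =', cfg.TEST.SCALE, 'TEST.MAX_SIZE =', cfg.TEST.MAX_SIZE)

Lower resolution costs accuracy on small objects, so it is a trade-off rather than a fix, but it is the main knob for fitting inference into a small VRAM budget.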