RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 3.94 GiB total capacity; 2.65 GiB already allocated; 2.25 MiB free; 4.11 MiB cached)
Epoch: 0
Traceback (most recent call last):
  File "main.py", line 142, in <module>
    train(epoch)
  File "main.py", line 93, in train
    outputs = net(inputs)
  File "/home/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/.local/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/Documents/FL-code/dataset/pytorch-cifar/models/googlenet.py", line 87, in forward
    out = self.a4(out)
  File "/home/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/Documents/FL-code/dataset/pytorch-cifar/models/googlenet.py", line 51, in forward
    y3 = self.b3(x)
  File "/home/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/.local/lib/python3.5/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/home/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/.local/lib/python3.5/site-packages/torch/nn/modules/conv.py", line 338, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 3.94 GiB total capacity; 2.65 GiB already allocated; 2.25 MiB free; 4.11 MiB cached)
pytorch version: 1.1.0
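The figures in the message itself show why the allocation fails: the 20.00 MiB request exceeds what the allocator could scrape together even if it reclaimed its cache. A back-of-the-envelope check in plain Python (numbers copied from the error above):

```python
# Figures taken from the error message above, all in MiB
total_capacity = 3.94 * 1024     # 3.94 GiB card
already_allocated = 2.65 * 1024  # held by live PyTorch tensors
free = 2.25                      # free device memory
cached = 4.11                    # in the caching allocator, reusable
requested = 20.00

# Even reclaiming the cache cannot satisfy the 20 MiB request
available = free + cached
shortfall = requested - available
assert requested > available
print(f"available: {available:.2f} MiB, shortfall: {shortfall:.2f} MiB")
# -> available: 6.36 MiB, shortfall: 13.64 MiB
```

The rest of the card (roughly 1.3 GiB between "total" and "already allocated") is consumed outside the caching allocator, e.g. by the CUDA context itself or other processes, which is why "free" is so much smaller than the naive subtraction suggests.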
Issue Analytics
- Created: 4 years ago
- Comments: 5
Top Results From Across the Web
- How to avoid "CUDA out of memory" in PyTorch - Stack Overflow
  CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 10.76 GiB total capacity; 4.29 GiB already allocated; 10.12 MiB free; 4.46...
- CUDA out of memory. Tried to allocate 12.50 MiB (GPU 0 ...)
  Tried to allocate 12.50 MiB (GPU 0; 10.92 GiB total capacity; 8.57 MiB already allocated; 9.28 GiB free; 4.68 MiB cached). According...
- Solving "CUDA out of memory" Error - Kaggle
  RuntimeError: CUDA out of memory. Tried to allocate 978.00 MiB (GPU 0; 15.90 GiB total capacity; 14.22 GiB already allocated; 167.88 MiB...
- stabilityai/stable-diffusion · RuntimeError: CUDA out of memory.
  Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 3.46 GiB already allocated; 0 bytes free; 3.52 GiB reserved in total...
- CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0 ...)
  Tried to allocate 20.00 MiB (GPU 0; 3.94 GiB total capacity; 3.36 GiB already allocated; 13.06 MiB free; 78.58 MiB cached) · vision...
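A recurring suggestion in these threads is to shrink the per-step batch. Gradient accumulation preserves the effective batch size by summing gradients over several small forward/backward passes before each optimizer step; the bookkeeping is just a ceiling division (pure-Python sketch, names are illustrative):

```python
def accumulation_steps(effective_batch: int, micro_batch: int) -> int:
    """Number of micro-batches whose gradients must be summed
    to match one optimizer step at the target batch size."""
    return -(-effective_batch // micro_batch)  # ceiling division

# e.g. a batch of 128 that OOMs can run as 8 micro-batches of 16
print(accumulation_steps(128, 16))  # -> 8
```

In a training loop this means calling `loss.backward()` on each micro-batch (gradients accumulate in `.grad` by default) and invoking `optimizer.step()` / `optimizer.zero_grad()` only once per group.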
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
You just don’t have enough memory on your GPU. Try reducing the batch size.
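One pragmatic way to act on this advice is to retry with progressively smaller batches until a training step fits. The retry schedule itself is framework-independent (a sketch; catching the actual OOM exception and rebuilding the DataLoader is left out):

```python
def batch_size_schedule(start: int, floor: int = 1) -> list:
    """Batch sizes to try in order: start, start//2, ... down to floor."""
    sizes = []
    size = start
    while size >= floor:
        sizes.append(size)
        size //= 2
    return sizes

print(batch_size_schedule(128))  # -> [128, 64, 32, 16, 8, 4, 2, 1]
```

Halving is a common choice because it quickly brackets the largest workable size; a finer search between the last failing and first passing size can then recover some throughput.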
I reported this to PyTorch, and the advice I got was to set the number of data-loading worker threads to 1.