
Multi-GPU training raises an error

See original GitHub issue

Single-GPU training works fine, but multi-GPU training raises the error below. How can I solve it?

Traceback (most recent call last):
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "<string>", line 1, in <module>
  File "/home/dl/anaconda3/lib/python3.6/multiprocessing/spawn.py", line 105, in spawn_main
  File "/home/dl/anaconda3/lib/python3.6/multiprocessing/spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
    exitcode = _main(fd)
  File "/home/dl/anaconda3/lib/python3.6/multiprocessing/spawn.py", line 115, in _main
  File "/home/dl/anaconda3/lib/python3.6/multiprocessing/spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
    self = reduction.pickle.load(from_parent)
MemoryError
MemoryError
Traceback (most recent call last):
  File "/home/dl/anaconda3/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/dl/anaconda3/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/dl/anaconda3/lib/python3.6/site-packages/torch/distributed/launch.py", line 235, in <module>
    main()
  File "/home/dl/anaconda3/lib/python3.6/site-packages/torch/distributed/launch.py", line 231, in main
    cmd=process.args)
subprocess.CalledProcessError: Command '['/home/dl/anaconda3/bin/python', '-u', './tools/train.py', '--local_rank=0', 'configs/cascade_rcnn_x101_64x4d_fpn_1x.py', '--launcher', 'pytorch']' died with <Signals.SIGKILL: 9>
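
Reading the traceback: torch.distributed.launch starts one training process per GPU, and the MemoryError is raised in child processes while they unpickle the state handed over by their parent (reduction.pickle.load in multiprocessing/spawn.py). The SIGKILL reported by launch.py usually means the kernel's OOM killer stepped in, so the limit being hit is host RAM, not GPU memory. Since every training process keeps its own copy of the data pipeline plus its own DataLoader workers, host RAM grows roughly with the number of GPUs times the workers per GPU, which matches the maintainer's numbers below. A back-of-envelope sketch of that scaling, with made-up per-process figures rather than measurements from this issue:

# Rough sketch of how host RAM scales with the launch configuration.
# The GB figures are hypothetical placeholders, not values from this issue.
num_gpus = 4             # training processes started by torch.distributed.launch
workers_per_gpu = 2      # DataLoader workers spawned by each training process
ram_per_trainer_gb = 5   # assumed resident size of one training process
ram_per_worker_gb = 5    # assumed resident size of one DataLoader worker

total_gb = num_gpus * (ram_per_trainer_gb + workers_per_gpu * ram_per_worker_gb)
print(f"~{total_gb} GB of host RAM for {num_gpus} GPUs x {workers_per_gpu} workers")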

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Comments: 10 (5 by maintainers)

Top GitHub Comments

2 reactions
ZenFSheng commented, Aug 14, 2019

80 GB of memory is consumed with 8×2 (GPUs × workers), so 64 GB may be enough for 4 GPUs. However, if you plan to train on larger datasets, you may need even more memory. BTW, Fast R-CNN takes more memory than end-to-end Faster R-CNN; it can sometimes eat 200 GB or more.

😮 Oh my~ Thank you for your reply, that really helps a lot. 🙂
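
If adding RAM is not an option, the knob that maps onto those numbers is workers_per_gpu in the config's data dict (with imgs_per_gpu as a second, smaller lever). A minimal sketch of that section, assuming an mmdetection 1.x-style config such as the cascade_rcnn_x101_64x4d_fpn_1x.py used in the command above; the dataset definitions themselves are omitted:

# Dataset section of an mmdetection 1.x-style config (sketch only; the
# train/val/test dataset dicts and the rest of the config are left out).
data = dict(
    imgs_per_gpu=2,      # images per GPU, i.e. the per-process batch size
    workers_per_gpu=1,   # fewer DataLoader workers per GPU -> fewer extra copies of the pipeline in host RAM
)

Lowering workers_per_gpu trades dataloading throughput for host RAM, so it is worth re-checking GPU utilization after the change.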

2 reactions
hellock commented, Aug 13, 2019

How much memory does your server have?
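
One quick way to answer that, and to keep an eye on the headroom while the workers spin up, is to read the host memory counters. A minimal sketch, assuming the third-party psutil package is installed:

# Print total and currently available host RAM.
# psutil is a third-party package: pip install psutil
import psutil

mem = psutil.virtual_memory()
print(f"total RAM:     {mem.total / 2**30:.1f} GiB")
print(f"available RAM: {mem.available / 2**30:.1f} GiB")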

Read more comments on GitHub >

Top Results From Across the Web

Raise an error in validation with multi GPU training #1445
When the training epochs are over, the validation starts to run. And it seems like the error happened after the validation images were loaded. I'm...
Read more >
Training fails on multiple gpu throwing cuda runtime errors
I am fine-tuning a GPT2LMHeadModel. When I run the code on a single GPU, it works, but when I run it on multiple...
Read more >
Problems with multi-gpus - MATLAB Answers - MathWorks
I have no problem training with a single gpu, but when I try to train with multiple gpus, matlab generates the following error:...
Read more >
Multi-GPU training error(OOM) on keras (sufficient memory ...
deep learning - Multi-GPU training error (OOM) on Keras (sufficient memory, may be a configuration problem) - Stack Overflow ...
Read more >
Pytorch with DDP throws error with multi-GPU
I am able to train the model if I use a single GPU, however if I switch to multiple GPUs I get an...
Read more >
