Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Multi GPU training error

See original GitHub issue

Hi, while using multiple GPUs for training I get this:

File "/workspace/TResNet/src/models/tresnet/layers/anti_aliasing.py", line 40, in __call__    
    return F.conv2d(input_pad, self.filt, stride=2, padding=0, groups=input.shape[1])
RuntimeError: Assertion `THCTensor_(checkGPU)(state, 3, input, output, weight)' failed. Some of weight/gradient/input tensors are located on different GPUs. Please move them to a single one. at /tmp/pip-r
eq-build-cms73_uj/aten/src/THCUNN/generic/SpatialDepthwiseConvolution.cu:19

However, single-GPU training (setting CUDA_VISIBLE_DEVICES=0 before my training script) works fine; I can see the losses going down over iterations.

Can you help with this?
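For context, the assertion is a device-placement check: it fires when the convolution's weight lives on a different GPU than its input. A minimal sketch of how to reproduce it, assuming a machine with at least two GPUs (the tensor shapes here are illustrative, not TResNet's):

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 8, 16, 16, device="cuda:0")       # input on GPU 0
    w = torch.randn(8, 1, 3, 3, device="cuda:1")         # depthwise weight on GPU 1, wrong on purpose
    out = F.conv2d(x, w, stride=2, padding=0, groups=8)  # raises the device-mismatch RuntimeError

In TResNet's case, self.filt in anti_aliasing.py plays the role of the weight tensor that ends up pinned to one device while inputs arrive on others.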

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 7

Top GitHub Comments

2 reactions
mrT23 commented, Apr 5, 2020

I added an option, --remove_aa_jit. Run with it; it should work for you.

As I said before, TResNet fully supports multi-GPU training; I trained on ImageNet with 8x V100. Your script is not well designed in terms of distributed training: models should be defined after(!) you call 'torch.cuda.set_device(rank)', not before. If you insist on the opposite order, use the --remove_aa_jit flag.
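A minimal sketch of the ordering described here, using plain PyTorch DistributedDataParallel; the nn.Conv2d is a stand-in for the actual TResNet constructor, and the address/port values are illustrative:

    import os
    import torch
    import torch.distributed as dist
    import torch.multiprocessing as mp
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main_worker(rank: int, world_size: int):
        os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
        os.environ.setdefault("MASTER_PORT", "29500")
        dist.init_process_group("nccl", rank=rank, world_size=world_size)

        # Bind this process to its GPU *before* building the model, so any
        # CUDA tensors created during construction (such as the anti-aliasing
        # filter) land on this process's device rather than GPU 0.
        torch.cuda.set_device(rank)

        model = nn.Conv2d(3, 8, 3).cuda()  # stand-in for the TResNet constructor
        model = DDP(model, device_ids=[rank])
        # ... training loop ...
        dist.destroy_process_group()

    if __name__ == "__main__":
        world_size = torch.cuda.device_count()
        mp.spawn(main_worker, args=(world_size,), nprocs=world_size)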

I also added a general tips section for working with inplace-abn: https://github.com/mrT23/TResNet/blob/master/INPLACE_ABN_TIPS.md

All the best

0 reactions
yashnv commented, Apr 4, 2020

I got this:

RuntimeError:
attribute lookup is not defined on python value of type '_Environ':
  File "/workspace/TResNet/src/models/tresnet/layers/anti_aliasing.py", line 35
        filt = (a[:, None] * a[None, :]).clone().detach()
        filt = filt / torch.sum(filt)
        self.filt = filt[None, None, :, :].repeat((self.channels, 1, 1, 1)).cuda(device=int(os.environ.get('RANK', 0))).half()
                                                                                            ~~~~~~~~~~~~~~ <--- HERE

I also tried modifying the non-JIT Downsample to account for RANK, but that gave me the same original error: Some of weight/gradient/input tensors are located on different GPUs. Please move them to a single one. at /tmp/pip-req-build-cms73_uj/aten/src/THCUNN/generic/SpatialDepthwiseConvolution.cu:19

Do you have some suggestions to make a custom grad function to account for multi-GPUs?
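One possible workaround, sketched here under assumptions and not taken from the repo: the '_Environ' error means TorchScript cannot compile attribute lookups on os.environ, so any rank lookup has to happen in eager Python. Going further, registering the filter as a module buffer lets .cuda()/.half() move it together with the rest of the model, which avoids hard-coding a device at all. The class name and the [1, 2, 1] kernel below are modelled on the snippet in the traceback above:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AntiAliasDownsample(nn.Module):
        # Eager (non-JIT) variant: no os.environ lookup, no explicit .cuda(rank).
        def __init__(self, channels: int):
            super().__init__()
            a = torch.tensor([1.0, 2.0, 1.0])
            filt = a[:, None] * a[None, :]
            filt = filt / filt.sum()
            # A registered buffer travels with the module, so each DDP replica
            # gets its own copy on the right GPU and in the right dtype.
            self.register_buffer("filt", filt[None, None, :, :].repeat(channels, 1, 1, 1))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = F.pad(x, (1, 1, 1, 1), mode="reflect")
            return F.conv2d(x, self.filt, stride=2, padding=0, groups=x.shape[1])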

Read more comments on GitHub >

Top Results From Across the Web

Multi-GPU Training error #2461 - ultralytics/yolov5 - GitHub
Multi-GPU Training: python -m torch.distributed.launch --master_port 42342 ... I got the error: Tensors must be CUDA and dense When I set ...
Multi GPU training Error e - NVIDIA Developer Forums
I am using 8 gpus for training and randomly I get this error after some number of epochs: [2020-08-03 11:25:36.480844: W ...
Problems with multi-gpus - MATLAB Answers - MathWorks
Learn more about multi gpus. ... no problem training with a single gpu, but when I try to train with multiple gpus, matlab...
Multi-GPU training crashes after some time due to NVLink ...
I want to train my model on a dual GPU set-up using Trainer(gpus=2, strategy='ddp'). To my understanding, Lightning sets up Distributed ...
Multi-GPU Training - YOLOv5 Documentation
You can increase the device to use Multiple GPUs in DataParallel mode. ... This method is slow and barely speeds up training compared...
