
CUDA_VISIBLE_DEVICES=6,7 python train.py --PCB --batchsize 60 --name PCB-64 --train_all

I use multiple GPUs, so I added some code:

if torch.cuda.device_count() > 1 and use_gpu:
    model_wrapped = nn.DataParallel(model).cuda()
    model = model_wrapped

but I get an error in the forward pass: RuntimeError: cuda runtime error (77) : an illegal memory access was encountered at /opt/conda/conda-bld/pytorch_1513368888240/work/torch/lib/THC/THCTensorCopy.cu:204
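For context, here is a minimal, self-contained sketch of the usual wrapping pattern (the nn.Linear stand-in, the gpu_ids list, and the batch size are illustrative, not taken from the repo). The key detail is that CUDA_VISIBLE_DEVICES=6,7 remaps the two physical cards to cuda:0 and cuda:1 inside the process, so device_ids must use the remapped indices and both the model and the inputs must sit on the default device:

import torch
import torch.nn as nn

# With CUDA_VISIBLE_DEVICES=6,7 the two physical GPUs are exposed as
# cuda:0 and cuda:1, so device_ids refers to the remapped indices.
gpu_ids = [0, 1]

model = nn.Linear(2048, 751)   # stand-in for the real network
use_gpu = torch.cuda.is_available()

if torch.cuda.device_count() > 1 and use_gpu:
    # Replicate the model across the visible GPUs; .cuda() keeps the master
    # copy (and the gathered outputs) on cuda:0.
    model = nn.DataParallel(model, device_ids=gpu_ids).cuda()

# Inputs must also start on the default device (cuda:0) before the forward pass.
inputs = torch.randn(60, 2048).cuda()
outputs = model(inputs)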

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
layumi commented, Apr 28, 2018

This is my code. You may refer to it and modify your code accordingly.

import torch
from torch import optim

model = torch.nn.DataParallel(model, device_ids=gpu_ids).cuda()

# After wrapping, the original network lives under model.module, so its
# fc and classifier parameters are reached through that attribute.
ignored_params = list(map(id, model.module.model.fc.parameters())) + list(map(id, model.module.classifier.parameters()))
base_params = filter(lambda p: id(p) not in ignored_params, model.parameters())

# Observe that all parameters are being optimized
optimizer_ft = optim.SGD([
        {'params': base_params, 'lr': 0.01},
        {'params': model.module.model.fc.parameters(), 'lr': 0.1},
        {'params': model.module.classifier.parameters(), 'lr': 0.1}
    ], momentum=0.9, weight_decay=5e-4, nesterov=True)
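For completeness, a rough sketch of how such an optimizer is typically driven inside the training loop (a plain cross-entropy loss and a single-logit head are simplifications of my own; if the PCB model returns one prediction per part, the real training step would sum a loss per part instead):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

def train_one_batch(model, inputs, labels, optimizer):
    # DataParallel scatters the batch across the GPUs in device_ids and
    # gathers the outputs back on the default device (cuda:0).
    inputs, labels = inputs.cuda(), labels.cuda()
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    return loss.item()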
0 reactions
xujian0 commented, Nov 6, 2018

Hi @xujian0, when you test the model, you also need to add model.module in test.py. Especially, this line:

model = torch.nn.DataParallel(model, device_ids=gpu_ids).cuda()

Thanks for your reply, and it solved my problem!
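For anyone hitting the same test-time mismatch, here is a hedged sketch of the two common ways to make test.py line up with a DataParallel-trained checkpoint (the nn.Linear stand-in and the checkpoint path are placeholders, not the repo's actual names): either wrap the model again and go through model.module, or strip the 'module.' prefix that DataParallel adds to every key in the saved state dict.

import torch
import torch.nn as nn
from collections import OrderedDict

gpu_ids = [0, 1]                 # remapped indices under CUDA_VISIBLE_DEVICES=6,7
model = nn.Linear(2048, 751)     # stand-in for the real network

# Option 1: wrap the test-time model in DataParallel as well and load the
# checkpoint directly; original attributes are reached via model.module.
wrapped = nn.DataParallel(model, device_ids=gpu_ids).cuda()
wrapped.load_state_dict(torch.load('checkpoint.pth'))   # placeholder path
net = wrapped.module             # e.g. net.fc, net.classifier on the plain model

# Option 2: keep the plain model and strip the 'module.' prefix that
# DataParallel prepends to every parameter name in the saved state dict.
state_dict = torch.load('checkpoint.pth')
cleaned = OrderedDict(
    (k[len('module.'):] if k.startswith('module.') else k, v)
    for k, v in state_dict.items()
)
model.load_state_dict(cleaned)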


Top Results From Across the Web

Getting error in a multi gpu machine #2701
My model is working fine when I use gpu:0 but it is giving error when I use gpu:1. ... Getting error in a...

Error occurs when saving model in multi-gpu settings
I'm finetuning a language model on multiple gpus. However, I met some problems with saving the model. After saving the model using ...

Multi-GPU training crashes after some time due to NVLink ...
I want to train my model on a dual GPU set-up using Trainer(gpus=2, strategy='ddp'). To my understanding, Lightning sets up Distributed ...

Multi-gpu runtime error - fastai dev - fast.ai Course Forums
I have been experimenting with fastai multi-gpu training. Spun up a multi-gpu instance on Jarvislabs.ai with fastai 2.5.0 having 2 RTX5000 ...

Multiple streams on 1 GPU and out of memory error
I'm running multiple streams across multiple GPUs. I have more streams (dozens) than devices (4). I assign the streams to the devices round ...
