
Trainer: Cannot train with 3+ GPUs / Uneven Memory Consumption

See original GitHub issue

Environment info

  • transformers version: 4.9.1
  • Platform: Linux-4.15.0-156-generic-x86_64-with-glibc2.29
  • Python version: 3.8.5
  • PyTorch version (GPU?): 1.9.1+cu111 (True)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?: Yes
  • Using distributed or parallel set-up in script?: <fill in>

Who can help

@sgugger @patil-suraj

Information

Model I am using (Bert, XLNet …): GPT-Neo (EleutherAI/gpt-neo-1.3B)

The problem arises when using:

  • [ ] the official example scripts: (give details below)
  • [x] my own modified scripts: I’m just using the Trainer class to train a model

The task I am working on is:

  • an official GLUE/SQuAD task: (give the name)
  • my own task or dataset: Custom proprietary dataset

To reproduce

I’m running the Trainer class and essentially just fine-tuning a GPT-Neo variant. I don’t use any specific CLI options and just call python train.py.

What happens? With EleutherAI/gpt-neo-1.3B I run into CUDA out-of-memory (OOM) errors depending on how many GPUs I want to use for training. For example:

  • 1 GPU: Works
  • 2 GPUs: Works
  • 3 GPUs: OOM

So effectively I am unable to train with more than 2 GPUs.

import torch
from transformers import Trainer, TrainingArguments

# EPOCHS, BATCH_SIZE, model, train_dataset and eval_dataset are defined earlier in train.py
training_args = TrainingArguments(
    output_dir='results',
    num_train_epochs=EPOCHS,
    logging_steps=EPOCHS,
    load_best_model_at_end=True,
    save_strategy="epoch",
    evaluation_strategy="epoch",
    per_device_train_batch_size=BATCH_SIZE,
    per_device_eval_batch_size=BATCH_SIZE,
    warmup_steps=100,
    weight_decay=0.01,
    logging_dir='logs',
    report_to="none",
    save_total_limit=15,
    seed=42,
)

# start training; the collator stacks (input_ids, attention_mask) pairs and
# reuses input_ids as labels for causal language modeling
Trainer(model=model,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        data_collator=lambda data: {
            'input_ids': torch.stack([f[0] for f in data]),
            'attention_mask': torch.stack([f[1] for f in data]),
            'labels': torch.stack([f[0] for f in data]),
        }
).train()
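One way to reduce the OOM risk independently of the GPU count is to shrink the per-GPU micro-batch and compensate with gradient accumulation. Below is a minimal sketch of such a variant, assuming the same EPOCHS, BATCH_SIZE, model and datasets as above; the halved batch size and the accumulation factor are illustrative values, not a tested configuration:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='results',
    num_train_epochs=EPOCHS,
    per_device_train_batch_size=BATCH_SIZE // 2,  # smaller micro-batch per GPU
    per_device_eval_batch_size=BATCH_SIZE // 2,
    gradient_accumulation_steps=2,                # keeps the effective batch size unchanged
    fp16=True,                                    # mixed precision reduces activation memory
    save_strategy="epoch",
    evaluation_strategy="epoch",
    load_best_model_at_end=True,
    report_to="none",
    seed=42,
)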

The memory consumption on those two GPUs is also very imbalanced:

+-------------------------------+----------------------+----------------------+
|   5  Tesla V100-SXM2...  On   | 00000000:89:00.0 Off |                    0 |
| N/A   78C    P0   195W / 300W |  32212MiB / 32510MiB |    100%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   6  Tesla V100-SXM2...  On   | 00000000:B2:00.0 Off |                    0 |
| N/A   83C    P0   281W / 300W |  16096MiB / 32510MiB |     99%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
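For a plain python train.py launch with no distributed launcher, the Trainer falls back to torch.nn.DataParallel, which gathers results on the first visible GPU and typically produces exactly this kind of imbalance. A quick sanity-check sketch of what the Trainer will see (output names are the standard TrainingArguments attributes):

from transformers import TrainingArguments

args = TrainingArguments(output_dir='results')
print(args.n_gpu)          # how many GPUs the Trainer will use
print(args.local_rank)     # -1 means no distributed process group was set up
print(args.parallel_mode)  # ParallelMode.NOT_DISTRIBUTED -> DataParallel, not DDP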

I also tried running the training script with torch.distributed.launch, but that doesn’t work for me either. For example:

python -m torch.distributed.launch --nproc_per_node=2 train.py
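One possible reason the launcher run fails: torch.distributed.launch passes a --local_rank argument to each process, and if train.py never reads it, TrainingArguments keeps local_rank=-1, no process group is created, and every process again tries to grab all visible GPUs. A minimal sketch of wiring it through, assuming train.py otherwise takes no CLI arguments:

import argparse
from transformers import TrainingArguments

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=-1)
cli_args, _ = parser.parse_known_args()

training_args = TrainingArguments(
    output_dir='results',
    local_rank=cli_args.local_rank,  # lets the Trainer set up DDP for this process
)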

Am I missing something obvious?

Expected behavior

The Trainer should be able to handle more than 2 GPUs.

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 6 (2 by maintainers)

Top GitHub Comments

1 reaction
oborchers commented, Oct 15, 2021

I can safely confirm that it works nicely out of the box with the 125M variant of the model. Thus I will have to play around with ZeRO or FP16 to understand how to get it to work with the larger ones. Many thanks!
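For reference, a minimal sketch of the two knobs mentioned here, assuming DeepSpeed is installed and a ZeRO config file is available (the file name ds_config_zero2.json is a placeholder, not a file shipped with transformers):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='results',
    fp16=True,                         # mixed-precision training
    deepspeed="ds_config_zero2.json",  # DeepSpeed ZeRO, configured via a JSON file
)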

1 reaction
sgugger commented, Oct 13, 2021

I don’t see anything out of the ordinary:

  • raising the batch size will get you OOM on GPU-0
  • distributed data parallel might take a little bit more space than DataParallel, and you were already super tight on GPU-0
  • raising the number of GPUs will slow down the iterations a little bit because of communication, but you will also get fewer iterations since you are raising the actual batch size (actual batch size = batch size x number of GPUs); a short worked example follows below
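A short worked example of that last point, with illustrative numbers (the dataset size is hypothetical):

per_device_batch_size = 8
num_gpus = 3
gradient_accumulation_steps = 1   # not mentioned above, but it multiplies in the same way

effective_batch_size = per_device_batch_size * num_gpus * gradient_accumulation_steps
steps_per_epoch = 9600 // effective_batch_size   # hypothetical 9,600-sample dataset

print(effective_batch_size)  # 24
print(steps_per_epoch)       # 400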
