GPT2-large trains on 1 GPU but does not fit on two.
Hi all,
I am training GPT2 from scratch with the following command:
```
torchrun --nproc_per_node=2 --nnodes=1 ./5.run_clm-post.py \
  --model_name_or_path gpt2-large \
  --train_file datasets/sample.txt \
  --tokenizer_name myembeddings \
  --do_train --do_eval \
  --output_dir ./sample \
  --evaluation_strategy epoch \
  --num_train_epochs 100 \
  --per_device_train_batch_size 24 \
  --cache_dir .cache/
```
When I train on a single A100, the model trains perfectly. When running on 2 GPUs (both A100s), I get a CUDA out of memory error. I tried decreasing the batch size to 16, but it still happens. Does this mean I have to go down to batch size 8? Why does batch size 24 fit on a single GPU but not on two?
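One likely factor: torchrun launches one process per GPU, and under DistributedDataParallel each process holds a full model replica plus the gradient buckets used for the all-reduce, so the per-GPU footprint is larger than in a single-GPU run even at the same per-device batch size. A minimal sketch of a workaround, assuming the same script and data paths as above: lower --per_device_train_batch_size and compensate with --gradient_accumulation_steps (a standard Trainer flag) so the effective per-GPU batch stays at 24. The values 8 and 3 here are assumptions to tune, not tested settings.

```bash
# Sketch: same launch as above, with the per-device batch cut to 8 and
# gradients accumulated over 3 steps (8 * 3 = 24 effective per GPU).
torchrun --nproc_per_node=2 --nnodes=1 ./5.run_clm-post.py \
  --model_name_or_path gpt2-large \
  --train_file datasets/sample.txt \
  --tokenizer_name myembeddings \
  --do_train --do_eval \
  --output_dir ./sample \
  --evaluation_strategy epoch \
  --num_train_epochs 100 \
  --per_device_train_batch_size 8 \
  --gradient_accumulation_steps 3 \
  --cache_dir .cache/
```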
Below are the errors:
With batch size 16:
File "/path/to/miniconda3/lib/python3.6/site-packages/transformers/activations.py", line 42, in gelu_new
return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))
RuntimeError: CUDA out of memory. Tried to allocate 320.00 MiB (GPU 0; 39.59 GiB total capacity; 36.81 GiB already allocated; 205.69 MiB free; 37.07 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
With batch size 24:
File "/path/to/miniconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 1169, in dropout
return _VF.dropout_(input, p, training) if inplace else _VF.dropout(input, p, training)
RuntimeError: CUDA out of memory. Tried to allocate 1.88 GiB (GPU 1; 39.59 GiB total capacity; 36.11 GiB already allocated; 909.69 MiB free; 36.38 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
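Both tracebacks end with the allocator's own hint: when reserved memory is much larger than allocated memory, fragmentation may be the culprit, and capping max_split_size_mb can help. A small sketch, with 128 MiB as an assumed starting value rather than a tuned one:

```bash
# Allocator hint from the error message itself: limit how large cached blocks
# may be split, to reduce fragmentation. The 128 MiB value is an assumption.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
# Then relaunch the same torchrun command as before.
```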
Any help would be appreciated. I would also welcome any advice on making the model train faster. Thanks for this great repository.
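On the speed question, two standard Trainer flags may help, assuming your transformers version is recent enough to expose both: --fp16 enables mixed-precision training, which is typically faster on A100s and also shrinks activation memory, while --gradient_checkpointing recomputes activations in the backward pass, which slows each step somewhat but frees enough memory to allow a larger batch. A sketch, with batch size 16 as an untested assumption:

```bash
# Sketch: the same run with mixed precision and activation checkpointing,
# assuming a transformers version that supports both TrainingArguments flags.
torchrun --nproc_per_node=2 --nnodes=1 ./5.run_clm-post.py \
  --model_name_or_path gpt2-large \
  --train_file datasets/sample.txt \
  --tokenizer_name myembeddings \
  --do_train --do_eval \
  --output_dir ./sample \
  --evaluation_strategy epoch \
  --num_train_epochs 100 \
  --per_device_train_batch_size 16 \
  --fp16 \
  --gradient_checkpointing \
  --cache_dir .cache/
```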
This really great document, written by @stas00, may be of help 😃
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.