DeepSpeed gets stuck when training
Environment info
- transformers version: 4.8.1
- Platform: Linux-4.15.0-140-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: single gpu
Who can help
Information
Trying to replicate this, I am using a 125M GPT Neo model and fine-tuning it with the Trainer. The training arguments include a DeepSpeed option. The Trainer gets stuck with:
[2021-06-29 14:29:44,747] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed info: version=0.4.1, git-hash=unknown, git-branch=unknown
[2021-06-29 14:29:44,757] [INFO] [utils.py:13:_initialize_parameter_parallel_groups] data_parallel_size: 1, parameter_parallel_size: 1
ds_report gives:
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
cpu_adam ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
sparse_attn ............ [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
[WARNING] async_io requires the libraries: ['libaio-dev'] but are missing. Can be fixed by: `apt install libaio-dev`.
async_io ............... [NO] ....... [NO]
transformer_inference .. [NO] ....... [OKAY]
utils .................. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/home/jovyan/anaconda3/envs/esemala/lib/python3.7/site-packages/torch']
torch version .................... 1.9.0
torch cuda version ............... 11.1
nvcc version ..................... 10.1
deepspeed install path ........... ['/home/jovyan/anaconda3/envs/esemala/lib/python3.7/site-packages/deepspeed']
deepspeed info ................... 0.4.1, unknown, unknown
deepspeed wheel compiled w. ...... torch 1.9, cuda 11.1
Is there a way to debug this?
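One generic way to see where the run is stuck (an assumption that the training script can be edited; this is not from the thread) is Python's built-in faulthandler, which can dump every thread's stack on a signal:

import faulthandler
import signal

# After this call, `kill -USR1 <pid>` makes the process print every thread's
# current Python stack to stderr, which shows where the hang is.
faulthandler.register(signal.SIGUSR1)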
To Replicate
I modified the original code slightly to remove the errors:
training_args = tr.TrainingArguments(
    output_dir=save_dir, num_train_epochs=5, logging_steps=300, save_steps=300,
    per_device_train_batch_size=1, per_device_eval_batch_size=1, warmup_steps=50,
    learning_rate=0.001, adam_epsilon=1e-06, fp16=True, weight_decay=0.01,
    logging_dir=f'{save_dir}/logs', deepspeed='./ds_config.json')
and ds_config.json is now:
{
  "fp16": {
    "enabled": true,
    "min_loss_scale": 1,
    "opt_level": "O3"
  },
  "zero_optimization": {
    "stage": 3,
    "cpu_offload": true,
    "cpu_offload_params": true,
    "contiguous_gradients": true,
    "overlap_comm": true
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": 0.001,
      "betas": [0.9, 0.999],
      "eps": 1e-6
    }
  },
  "scheduler": {
    "type": "WarmupLR",
    "params": {
      "warmup_min_lr": 0,
      "warmup_max_lr": 0.001,
      "warmup_num_steps": 50
    }
  },
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "steps_per_print": 1
}
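For completeness, a minimal sketch of the surrounding training script. The original report only shows the TrainingArguments; the model checkpoint name, output directory, and toy dataset below are assumptions for illustration:

import torch
import transformers as tr
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer

model_name = "EleutherAI/gpt-neo-125M"   # assumed 125M GPT Neo checkpoint
save_dir = "./gptneo-125m-finetuned"     # assumed output location

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT Neo's tokenizer has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tiny stand-in corpus so the sketch runs end to end; the real run would use
# a proper tokenized dataset.
texts = ["hello world, this is a tiny dummy corpus."] * 8
enc = tokenizer(texts)
enc["labels"] = [list(ids) for ids in enc["input_ids"]]

class ToyDataset(torch.utils.data.Dataset):
    def __init__(self, enc):
        self.enc = enc
    def __len__(self):
        return len(self.enc["input_ids"])
    def __getitem__(self, i):
        return {k: torch.tensor(v[i]) for k, v in self.enc.items()}

training_args = tr.TrainingArguments(
    output_dir=save_dir, num_train_epochs=5, logging_steps=300, save_steps=300,
    per_device_train_batch_size=1, per_device_eval_batch_size=1, warmup_steps=50,
    learning_rate=0.001, adam_epsilon=1e-06, fp16=True, weight_decay=0.01,
    logging_dir=f"{save_dir}/logs", deepspeed="./ds_config.json")

trainer = Trainer(model=model, args=training_args, train_dataset=ToyDataset(enc))
# Launch with the DeepSpeed launcher, e.g.: deepspeed --num_gpus=1 this_script.py
trainer.train()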
Top GitHub Comments
@stas00
yes, turning on gradients doesn’t make any sense. I was attempting to work around the issue by using the ‘gloo’ backend that you referred to… not sure how to fix it https://github.com/microsoft/DeepSpeed/issues/1030
I’m not succeeding at building that Docker image. If I use build_image.sh it hangs; if I try docker build . it fails with some deps missing. Do you have a ready Docker image I could pull?
Since Kubeflow is run in a Docker image, most likely the issue has something to do with its setup/configuration.
It’s very possible. I haven’t run into this myself, so I trust your research.
gloo doesn’t provide the same functionality as nccl, but it looks like the DeepSpeed docs say it should work.
OK, what if you do:
deepspeed.init_distributed("gloo")
here, instead of deepspeed.init_distributed()?
https://github.com/huggingface/transformers/blob/d7e156bd1ae2467e9ea1dbc44f31da0ed2296aee/src/transformers/training_args.py#L812
I found this issue https://github.com/microsoft/DeepSpeed/issues/1030 where a user was able to use the gloo backend with DeepSpeed.
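To check whether the gloo backend itself can even initialize inside the Kubeflow container, a standalone sketch independent of transformers and DeepSpeed (the single-process rank/world size and rendezvous address are assumptions):

import os
import torch.distributed as dist

# Single-process rendezvous just to exercise the gloo backend.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

dist.init_process_group(backend="gloo", rank=0, world_size=1)
print("gloo initialized:", dist.is_initialized())
dist.destroy_process_group()

If this hangs as well, the problem is likely with the container's network/IPC setup rather than with DeepSpeed or transformers.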