
DeepSpeed gets stuck when training


Environment info

  • transformers version: 4.8.1
  • Platform: Linux-4.15.0-140-generic-x86_64-with-debian-buster-sid
  • Python version: 3.7.10
  • PyTorch version (GPU?): 1.9.0 (True)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?: yes
  • Using distributed or parallel set-up in script?: single gpu

Who can help

@stas00

Information

Trying to replicate this, I am using a 125M GPT Neo model and fine-tuning it with the Trainer. The training arguments include a deepspeed option. The Trainer gets stuck after printing:

[2021-06-29 14:29:44,747] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed info: version=0.4.1, git-hash=unknown, git-branch=unknown
[2021-06-29 14:29:44,757] [INFO] [utils.py:13:_initialize_parameter_parallel_groups] data_parallel_size: 1, parameter_parallel_size: 1

ds_report gives:

--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
      runtime if needed. Op compatibility means that your system
      meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
cpu_adam ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
sparse_attn ............ [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
 [WARNING]  async_io requires the libraries: ['libaio-dev'] but are missing. Can be fixed by: `apt install libaio-dev`.
async_io ............... [NO] ....... [NO]
transformer_inference .. [NO] ....... [OKAY]
utils .................. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/home/jovyan/anaconda3/envs/esemala/lib/python3.7/site-packages/torch']
torch version .................... 1.9.0
torch cuda version ............... 11.1
nvcc version ..................... 10.1
deepspeed install path ........... ['/home/jovyan/anaconda3/envs/esemala/lib/python3.7/site-packages/deepspeed']
deepspeed info ................... 0.4.1, unknown, unknown
deepspeed wheel compiled w. ...... torch 1.9, cuda 11.1

Is there a way to debug this?
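Not an answer from the thread, but one generic way to see where a hung training process is sitting is to dump the Python stack traces of all threads; a minimal sketch using the standard-library faulthandler module (the 300-second interval is an arbitrary choice):

import faulthandler
import sys

# Dump the stack traces of every thread to stderr every 300 seconds, so a hang
# reveals which call (e.g. a torch.distributed collective) never returns.
faulthandler.dump_traceback_later(300, repeat=True, file=sys.stderr)

# ... build the Trainer and call trainer.train() as usual ...

Alternatively, running py-spy dump --pid <PID> from a second shell prints the same stacks without modifying the training script.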

To Replicate

I modified the original code slightly to remove the errors:

training_args = tr.TrainingArguments(
    output_dir=save_dir, num_train_epochs=5, logging_steps=300, save_steps=300,
    per_device_train_batch_size=1, per_device_eval_batch_size=1, warmup_steps=50,
    learning_rate=0.001, adam_epsilon=1e-06, fp16=True, weight_decay=0.01,
    logging_dir=f'{save_dir}/logs', deepspeed='./ds_config.json')

and ds_config.json is now:

{
  "fp16": {
    "enabled": true,
    "min_loss_scale": 1,
    "opt_level": "O3"
  },
  "zero_optimization": {
    "stage": 3,
    "cpu_offload": true,
    "cpu_offload_params" : true,
    "contiguous_gradients": true,
    "overlap_comm": true
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": 0.001,
      "betas": [
        0.9,
        0.999
      ],
      "eps": 1e-6
    }
  },
  "scheduler": {
    "type": "WarmupLR",
    "params": {
      "warmup_min_lr": 0,
      "warmup_max_lr": 0.001,
      "warmup_num_steps": 50
    }
  },
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "steps_per_print":1
}
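For context, here is a minimal sketch of how the training_args above might be wired into a complete Trainer run; the model name, dataset, and the single-GPU environment variables are assumptions for illustration (the env vars follow the transformers DeepSpeed Integration docs for notebook use) and are not taken verbatim from the issue:

import os
import transformers as tr
from datasets import load_dataset

# When running DeepSpeed from a notebook on a single GPU, the transformers docs
# suggest emulating the launcher with these variables (values are examples).
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "9994"
os.environ["RANK"] = "0"
os.environ["LOCAL_RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"

model_name = "EleutherAI/gpt-neo-125M"          # assumed 125M GPT Neo checkpoint
tokenizer = tr.AutoTokenizer.from_pretrained(model_name)
model = tr.AutoModelForCausalLM.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token       # GPT Neo ships without a pad token

# Any small text dataset works for a repro; wikitext-2 is just a placeholder.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
raw = raw.filter(lambda x: len(x["text"].strip()) > 0)
train_ds = raw.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                   batched=True, remove_columns=["text"])

collator = tr.DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = tr.Trainer(model=model, args=training_args,   # training_args from above
                     train_dataset=train_ds, data_collator=collator)
trainer.train()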

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 22 (11 by maintainers)

Top GitHub Comments

1 reaction
SamsTheGreatest commented, Jul 9, 2021

@stas00

Not sure why you needed to turn gradients off - that surely won’t work as the optimizer now has no params to optimize, which is probably the reason why you had that most recent failure.

Yes, turning gradients off doesn't make any sense. I was attempting to work around the 'gloo' backend issue that you referred to… not sure how to fix it: https://github.com/microsoft/DeepSpeed/issues/1030

1 reaction
stas00 commented, Jul 7, 2021

I'm not succeeding at building that Docker image. If I use build_image.sh it hangs; if I try docker build . it fails with some dependencies missing. Do you have a ready-made Docker image I could pull?

Since Kubeflow runs in a Docker image, the issue most likely has something to do with its setup/configuration.

Reading through some issues, could it be that it's due to the nccl usage? Is there a trivial way to set the backend to gloo within the notebook I shared with you, @stas00?

It’s very possible. I haven’t run into this myself, so I trust your research.

gloo doesn't provide the same functionality as nccl, but it looks like the DeepSpeed docs say it should work.

OK, what if you do deepspeed.init_distributed("gloo") here instead of deepspeed.init_distributed()?

https://github.com/huggingface/transformers/blob/d7e156bd1ae2467e9ea1dbc44f31da0ed2296aee/src/transformers/training_args.py#L812

I found this issue https://github.com/microsoft/DeepSpeed/issues/1030 where a user was able to use the gloo backend with Deepspeed.
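For reference, a minimal sketch of the kind of experiment being discussed; whether the Trainer's DeepSpeed path fully works on gloo is exactly what is in question here, so this is a test, not a fix. It also assumes (not verified in the thread) that the later init_distributed() call inside training_args.py reuses an already-initialized process group instead of re-creating it with nccl:

import os
import deepspeed
import torch.distributed as dist

# Single-process "distributed" env, mirroring what the deepspeed launcher would set.
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
os.environ.setdefault("RANK", "0")
os.environ.setdefault("LOCAL_RANK", "0")
os.environ.setdefault("WORLD_SIZE", "1")

# Ask DeepSpeed to back torch.distributed with gloo instead of nccl.
deepspeed.init_distributed(dist_backend="gloo")
print(dist.is_initialized(), dist.get_backend())  # expect: True gloo

# ... build TrainingArguments / Trainer afterwards, in the same process ...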

