
[BUG] nvme optimizer offload decreases loss 25x less than cpu optimizer offload for a given number of epochs


When training GPT-type models with CPU optimizer offload, the loss drops rapidly. However, when using NVMe optimizer offload (all other parameters remaining the same), the loss drops by only a tiny amount each epoch, making it essentially unusable.

For instance, using GPT-J and a small custom dataset (2.8 MB of text) on DeepSpeed 0.7.2:

Non-finetuned GPT-J achieves:

  eval_accuracy           =     0.4915
  eval_loss               =     2.4492
  perplexity              =    11.5793

After finetuning for 2 epochs with stage 3 CPU optimizer offload, the model has learned the dataset satisfactorily:

  eval_accuracy           =     0.7233
  eval_loss               =     1.1758
  perplexity              =     3.2407

However, after finetuning for 2 epochs with stage 3 NVMe optimizer offload (all other parameters are the same), the model has barely changed:

  eval_accuracy           =     0.4981
  eval_loss               =     2.3945
  perplexity              =    10.9631
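
(For reference, computed from the numbers above: the CPU-offload run reduces eval_loss by 2.4492 - 1.1758 = 1.2734, while the NVMe-offload run reduces it by only 2.4492 - 2.3945 = 0.0547, roughly a 23x smaller reduction, consistent with the roughly 25x figure in the title.)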

Note that stage 1 CPU optimizer offload gives the same results as stage 3 CPU optimizer offload.

I first noticed the low loss reduction when finetuning Salesforce/codegen-16B-nl and EleutherAI/gpt-neox-20b (models for which stage 3 offload is required on my setup). The loss was dropping much less per epoch than gpt-neox-20b setups using Megatron rather than offloading seemed to experience.

To Reproduce

  1. Using either the nvme or cpu config from the attached ds_config.zip (a sketch of such a config is shown after this list), run CUDA_VISIBLE_DEVICES=1,2 python -u -m torch.distributed.launch --nproc_per_node=2 ./run_clm.py --do_train --model_name_or_path EleutherAI/gpt-j-6B --train_file data/train.txt --output_dir models/EleutherAI/gpt-j-6B --gradient_accumulation_steps 16 --per_device_train_batch_size 1 --num_train_epochs 2 --learning_rate 3e-05 --bf16 --overwrite_output_dir --deepspeed ds_config.json, where train.txt should be a small text file of your choosing (around 3 MB)
  2. After training, evaluate using CUDA_VISIBLE_DEVICES=1 python -u ./run_clm.py --model_name_or_path models/EleutherAI/gpt-j-6B --output_dir models/dummy --do_eval --validation_file data/train.txt --per_device_eval_batch_size 1 --fp16_full_eval
  3. Compare the evaluation scores between the NVMe and CPU runs
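
The attached ds_config.zip is not reproduced on this page, so as a rough reference the snippet below sketches what the NVMe variant of a ZeRO stage 3 config typically looks like; the nvme_path value is a placeholder, the "auto" values assume the HuggingFace Transformers DeepSpeed integration, and none of it is copied from the actual attachment. The CPU variant would differ only in the offload_optimizer block (device set to "cpu", no nvme_path).

  {
    "bf16": { "enabled": true },
    "zero_optimization": {
      "stage": 3,
      "offload_optimizer": {
        "device": "nvme",
        "nvme_path": "/local_nvme",
        "pin_memory": true
      },
      "overlap_comm": true,
      "contiguous_gradients": true
    },
    "gradient_accumulation_steps": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "train_batch_size": "auto"
  }

Since that device setting is the only intended difference between the two runs, both should be optimizing the same states, which is why the gap in loss reduction reported above reads as a correctness issue rather than a tuning issue.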

Expected behavior

I would expect optimization to give roughly the same scores for CPU and NVMe offloading.

System info (please complete the following information):

  • DeepSpeed 0.7.2
  • OS: Ubuntu 22.04
  • 1 machine with 2x RTX A6000 (without NVLink)
  • Python 3.8
  • HuggingFace Transformers 4.21.2
  • CUDA 11.6
  • PyTorch 1.12 (cu116)

Issue Analytics

  • State: closed
  • Created a year ago
  • Comments: 8 (4 by maintainers)

Top GitHub Comments

1 reaction
tjruwase commented, Sep 1, 2022

@timohear, can you please try PR #2282?

1 reaction
tjruwase commented, Aug 30, 2022

I have reproduced this issue. Now investigating. Thanks!


