[BUG] nvme optimizer offload decreases loss 25x less than cpu optimizer offload for a given number of epochs
When training GPT-type models with CPU optimizer offload, the loss drops rapidly. However, when using NVMe optimizer offload (with all other parameters unchanged), the loss drops by only tiny amounts each epoch, making it essentially unusable.
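The only setting intended to differ between the two runs is the `device` field of ZeRO's `offload_optimizer` section. A minimal sketch of the NVMe form is below (the `nvme_path` shown is a placeholder, not my actual mount point):

```json
{
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": {
      "device": "nvme",
      "nvme_path": "/local_nvme",
      "pin_memory": true
    }
  }
}
```

The CPU run uses the same file with `"device": "cpu"` and no `nvme_path`.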
For instance, using GPT-J and a small custom dataset (2.8 MB of text) on DeepSpeed 0.7.2, non-finetuned GPT-J achieves:
- eval_accuracy = 0.4915
- eval_loss = 2.4492
- perplexity = 11.5793
After finetuning for 2 epochs with stage 3 CPU optimizer offload, the model has learned the dataset satisfactorily:
- eval_accuracy = 0.7233
- eval_loss = 1.1758
- perplexity = 3.2407
However, after finetuning for 2 epochs with stage 3 NVMe optimizer offload (all other parameters identical), the model has barely changed:
- eval_accuracy = 0.4981
- eval_loss = 2.3945
- perplexity = 10.9631
Note that stage 1 CPU optimizer offload gives the same results as stage 3 CPU optimizer offload.
I first noticed the low loss reduction when finetuning Salesforce/codegen-16B-nl and EleutherAI/gpt-neox-20b (models for which stage 3 offloading is required on my setup). The loss was dropping far less per epoch than gpt-neox-20b setups using Megatron (rather than offloading) appeared to achieve.
To Reproduce
- Using either the nvme or cpu ds_config.zip attached (a sketch of the NVMe config is given after this list), run:
CUDA_VISIBLE_DEVICES=1,2 python -u -m torch.distributed.launch --nproc_per_node=2 ./run_clm.py --do_train --model_name_or_path EleutherAI/gpt-j-6B --train_file data/train.txt --output_dir models/EleutherAI/gpt-j-6B --gradient_accumulation_steps 16 --per_device_train_batch_size 1 --num_train_epochs 2 --learning_rate 3e-05 --bf16 --overwrite_output_dir --deepspeed ds_config.json
where train.txt should be a small text file of your choosing (around 3 MB).
- After training, evaluate using:
CUDA_VISIBLE_DEVICES=1 python -u ./run_clm.py --model_name_or_path models/EleutherAI/gpt-j-6B --output_dir models/dummy --do_eval --validation_file data/train.txt --per_device_eval_batch_size 1 --fp16_full_eval
- Compare the evaluation scores between NVMe and CPU.
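For reference, the NVMe ds_config.json is roughly along the following lines when used with the HF Trainer's "auto" values; this is a sketch rather than a verbatim copy of the attached file, and the `nvme_path` is a placeholder:

```json
{
  "bf16": { "enabled": "auto" },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": "auto",
      "betas": "auto",
      "eps": "auto",
      "weight_decay": "auto"
    }
  },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": {
      "device": "nvme",
      "nvme_path": "/local_nvme",
      "pin_memory": true
    },
    "overlap_comm": true,
    "contiguous_gradients": true,
    "stage3_gather_16bit_weights_on_model_save": true
  },
  "gradient_accumulation_steps": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "train_batch_size": "auto"
}
```

The CPU config differs only in the offload_optimizer block, where `device` is "cpu" and `nvme_path` is dropped.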
Expected behavior
I would expect optimization to give roughly the same scores for CPU and NVMe offloading.
System info (please complete the following information):
- DeepSpeed 0.7.2
- OS: Ubuntu 22.04
- 1 machine with 2x RTX A6000 (without NVLink)
- Python 3.8
- HuggingFace Transformers 4.21.2
- CUDA 11.6
- PyTorch 1.12 (cu116)
Top GitHub Comments
@timohear, can you please try PR #2282?
I have reproduced this issue. Now investigating. Thanks!