Multi-node training for causal language modeling example does not work

See original GitHub issue

Environment info

  • transformers version: 4.7.0.dev0
  • Platform: Linux-4.19.0-14-amd64-x86_64-with-debian-10.8
  • Python version: 3.7.10
  • PyTorch version (GPU?): 1.8.1 (True)
  • Tensorflow version (GPU?): not installed (NA)
  • Using GPU in script?: yes
  • Using distributed or parallel set-up in script?: yes

Who can help

@sgugger

@patrickvonplaten, @LysandreJik

Information

Model I am using (Bert, XLNet …): GPT-2

The problem arises when using:

  • my own modified scripts:
  nproc_per_node=4

  python -m torch.distributed.launch \
      --nproc_per_node=$nproc_per_node \
      --nnodes=2 \
      --node_rank=0 \
      --master_addr="192.168.1.1" \
      --master_port=1234 run_clm.py \
      --model_name_or_path gpt2 \
      --block_size 256 \
      --dataset_name wikitext \
      --dataset_config_name wikitext-2-raw-v1 \
      --do_train \
      --do_eval \
      --overwrite_output_dir \
      --num_train_epochs 1 \
      --output_dir /tmp/test-clm

The task I am working on is:

  • an official GLUE/SQuAD task: language modeling
  • my own task or dataset: wikitext

To reproduce

Steps to reproduce the behavior:

  1. Have two nodes with at least 4 GPUs each.
  2. In the first machine, run the above script.
  3. In the second machine, run the same script as above, but with --node_rank=1 instead of --node_rank=0 (the full command is sketched below).
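
A minimal sketch of what step 3 describes, i.e. the command run on the second machine (same script and arguments as above, with only --node_rank changed to 1):

  python -m torch.distributed.launch \
      --nproc_per_node=$nproc_per_node \
      --nnodes=2 \
      --node_rank=1 \
      --master_addr="192.168.1.1" \
      --master_port=1234 run_clm.py \
      --model_name_or_path gpt2 \
      --block_size 256 \
      --dataset_name wikitext \
      --dataset_config_name wikitext-2-raw-v1 \
      --do_train \
      --do_eval \
      --overwrite_output_dir \
      --num_train_epochs 1 \
      --output_dir /tmp/test-clm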

I waited for almost 15 minutes and nothing happened; training never started.

Expected behavior

Training starts on both nodes.

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
sgugger commented, May 25, 2021

Glad you solved your issue!

1 reaction
sgugger commented, May 25, 2021

No, they need to have the same port number; otherwise they can’t connect to each other.
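
As an aside (not from the original thread, just a generic sanity check): before launching, you can verify from the second machine that the master address and port are actually reachable, for example with netcat:

  # Run on the node_rank=1 machine; 192.168.1.1 and 1234 are the
  # --master_addr / --master_port values used in the launch command above.
  nc -zv 192.168.1.1 1234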
