
Model is saved every eval_steps steps if eval_steps < save_steps. Is this expected behavior?

Environment info

  • transformers version: 4.6.1
  • Platform: macOS-10.16-x86_64-i386-64bit
  • Python version: 3.8.5
  • PyTorch version (GPU?): 1.7.1 (False)
  • Tensorflow version (GPU?): not installed (NA)
  • Using GPU in script?: No
  • Using distributed or parallel set-up in script?: No

Who can help

@sgugger

Information

Model I am using (Bert, XLNet …): Bert, but I don’t think that is relevant

The problem arises when using:

  • the official example scripts: (give details below)
  • my own modified scripts: (give details below)

The task I am working on is:

  • an official GLUE/SQuAD task: (give the name)
  • my own task or dataset: (give details below)

To reproduce

Steps to reproduce the behavior:

  1. Make a TrainingArguments object with eval_steps < save_steps and evaluation_strategy and save_strategy both set to "steps"
  2. Pass it to a Trainer (a minimal wiring sketch follows the filled-in arguments below)
  3. The model checkpoints every eval_steps steps, not every save_steps steps

Here is my TrainingArguments code:

args = TrainingArguments(
    output_dir=outpath,
    save_total_limit=10,
    load_best_model_at_end=True,
    save_strategy="steps" if cli_args.save_steps is not None else "epoch",
    save_steps=cli_args.save_steps,
    evaluation_strategy="steps" if cli_args.eval_steps is not None else "epoch",
    eval_steps=cli_args.eval_steps,
    metric_for_best_model="loss",
    learning_rate=cli_args.learning_rate,
    per_device_train_batch_size=cli_args.batch_size,
    per_device_eval_batch_size=cli_args.batch_size,
    num_train_epochs=cli_args.num_train_epochs,
    weight_decay=cli_args.weight_decay,
    fp16=cli_args.fp16,
    deepspeed=deepspeed,
    local_rank=cli_args.local_rank,
)

With the values I am using filled in, this is:

args = TrainingArguments(
    output_dir="ten_m/model",
    save_total_limit=10,
    load_best_model_at_end=True,
    save_strategy="steps",
    save_steps=6,  # for testing
    evaluation_strategy="steps",
    eval_steps=2,  # for testing
    metric_for_best_model="loss",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    fp16=False,
    deepspeed=None,
    local_rank=-1,
)
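
For completeness, here is roughly how these arguments get wired into a Trainer — a minimal, self-contained sketch (the toy dataset and the bert-base-uncased model are stand-ins picked for illustration, not my actual data; any model and labeled dataset pair reproduces the behavior):

import torch
from transformers import BertForSequenceClassification, BertTokenizerFast, Trainer

class TinyDataset(torch.utils.data.Dataset):
    """Toy labeled dataset, just enough to drive the training loop."""
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
encodings = tokenizer(["a good example", "a bad example"] * 32, truncation=True, padding=True)
dataset = TinyDataset(encodings, [1, 0] * 32)

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
trainer = Trainer(model=model, args=args, train_dataset=dataset, eval_dataset=dataset)
trainer.train()
# With the settings above, ten_m/model fills with checkpoint-2, checkpoint-4, ...
# i.e. a checkpoint every eval_steps (2) steps, not every save_steps (6).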

Expected behavior

Well, maybe this is expected? But if so, I feel like it should be documented more obviously.

I wrote a callback to upload each saved checkpoint to GCS. The eval step is very quick, so I planned to evaluate much more frequently than I save; but if every evaluation also triggers a GCS upload, I will have to evaluate less often. I also verified that even without the GCS callback, a checkpoint is saved every 2 steps with the above settings, not every 6.
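
For reference, the upload callback is along these lines — a minimal sketch; GCSUploadCallback and the bucket name are illustrative, and it assumes the google-cloud-storage client with default credentials:

import os

from google.cloud import storage
from transformers import TrainerCallback

class GCSUploadCallback(TrainerCallback):
    """Upload each newly saved checkpoint directory to a GCS bucket."""

    def __init__(self, bucket_name):
        self.bucket = storage.Client().bucket(bucket_name)

    def on_save(self, args, state, control, **kwargs):
        # Trainer writes checkpoints to <output_dir>/checkpoint-<global_step>
        ckpt_dir = os.path.join(args.output_dir, f"checkpoint-{state.global_step}")
        for root, _, files in os.walk(ckpt_dir):
            for name in files:
                path = os.path.join(root, name)
                blob = self.bucket.blob(os.path.relpath(path, args.output_dir))
                blob.upload_from_filename(path)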

If this is expected behavior, is the correct way to change it to write a TrainerCallback whose on_evaluate method sets should_save to False on its transformers.TrainerControl argument?
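
Something like this is what I have in mind (a minimal sketch, untested):

from transformers import TrainerCallback

class NoSaveOnEvalCallback(TrainerCallback):
    """Cancel the save that otherwise piggybacks on every evaluation."""

    def on_evaluate(self, args, state, control, **kwargs):
        # Keep the regular save_steps checkpoints; only suppress the
        # extra saves triggered at eval-only steps.
        if state.global_step % args.save_steps != 0:
            control.should_save = False
        return control

I would pass it to the Trainer via the callbacks argument, e.g. callbacks=[NoSaveOnEvalCallback()]. Though if suppressing these saves interacts badly with load_best_model_at_end, that would be good to know too.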

Thank you


Top GitHub Comments

sam-writer commented on Jun 23, 2021:

sure!

sgugger commented on Jun 22, 2021:

Sure! Do you want to make a PR with that change?
