
Cannot use training_args.logging_steps of Hugging Face Trainer

See original GitHub issue
  • Weights and Biases version: 0.8.35
  • Python version: 3.7.6
  • Operating System: Linux

Description

I wanted to use the PyTorch Trainer. This works:

from transformers import TrainingArguments
training_args = TrainingArguments("/kaggle/working")
...
# training_args.logging_steps=2
...
from transformers import Trainer

trainer = Trainer(
        model=model,
        args=training_args,...)

This does not; I received the error "You can only call wandb.watch once per model":

from transformers import TrainingArguments
training_args = TrainingArguments("/kaggle/working")
...
training_args.logging_steps=2

from transformers import Trainer

trainer = Trainer(
        model=model,
        args=training_args,...)
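
For reference, a hedged workaround sketch: instead of mutating training_args after it is built, pass logging_steps to the TrainingArguments constructor. The output directory and step count below mirror the snippets above; whether this avoids the wandb.watch error depends on your transformers and wandb versions.

from transformers import TrainingArguments, Trainer

# Set logging_steps up front rather than assigning it afterwards.
training_args = TrainingArguments(
    "/kaggle/working",
    logging_steps=2,
)

trainer = Trainer(
    model=model,          # same model object as in the snippets above
    args=training_args,
)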

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 9 (7 by maintainers)

Top GitHub Comments

1 reaction
minhtriet commented, May 25, 2020

Since this is on Kaggle, I would share the notebook here so that you can fork it. In the Settings menu on the right panel, please (1) enable Internet for the kernel and (2) use the latest environment, as seen in the screenshot below. [screenshot]

1 reaction
borisdayma commented, May 21, 2020

Can you try restarting your kernel? Also, maybe remove the wandb folder that was created. I think it's due to importing wandb, which sometimes tries to look into this unrelated folder.

Let me know if that solves it and we can look into a permanent fix.
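
If you want to script the cleanup suggested above, here is a minimal sketch, assuming the run directory is the default wandb folder next to the notebook:

import os
import shutil

# Remove the local wandb run directory so a fresh import of wandb
# does not pick up state from a previous run; then restart the kernel.
if os.path.isdir("wandb"):
    shutil.rmtree("wandb")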

Read more comments on GitHub >

Top Results From Across the Web

Trainer - Hugging Face
The Trainer class provides an API for feature-complete training in PyTorch for most standard use cases. It's used in most of the example...
Read more >
Cannot use training_args.logging_steps of Hugging Face ...
Description. I wanted to use Pytorch Trainer, this works. from transformers import TrainingArguments training_args = ...
Read more >
Pretrain Transformers Models in PyTorch Using Hugging Face ...
This notebook is designed to use an already pretrained transformers model and fine-tune it on your custom dataset, and also train a ...
Read more >
Huggingface Trainer only doing 3 epochs no matter the ...
training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=10, # total number of training epochs ...
Read more >
Hugging Face Transformers - Documentation - Weights & Biases
A Weights & Biases integration for Hugging Face's Transformers library: solving NLP, one logged run ... from transformers import TrainingArguments, Trainer.
Read more >
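
The last result above covers the Weights & Biases integration itself. For completeness, a hedged sketch of steering that integration through environment variables before the Trainer is constructed; WANDB_WATCH and WANDB_DISABLED are variables the transformers wandb integration reads, but their exact behavior depends on the installed version.

import os

# Assumption: these variables are read when the Trainer sets up wandb,
# so they must be set before the Trainer is created.
os.environ["WANDB_WATCH"] = "false"      # skip the wandb.watch(model) call
# os.environ["WANDB_DISABLED"] = "true"  # or disable the W&B integration entirely

# Then build TrainingArguments and Trainer exactly as in the issue above.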
