
Can't see logger output

See original GitHub issue

🐛 Bug

Information

Model I am using (Bert, XLNet …): RoBERTa

Language I am using the model on (English, Chinese …): Sanskrit

The problem arises when using:

  • the official example scripts: (give details below)
  • my own modified scripts: (give details below)

The task I am working on is:

  • an official GLUE/SQuAD task: (give the name)
  • my own task or dataset: (give details below)

To reproduce

Steps to reproduce the behavior:

I can't see the logger output from Trainer showing the model config and other parameters, which the official training scripts print.

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./model_path",
    overwrite_output_dir=True,
    num_train_epochs=1,
    per_gpu_train_batch_size=128,  # deprecated; per_device_train_batch_size is preferred
    per_gpu_eval_batch_size=256,   # deprecated; per_device_eval_batch_size is preferred
    save_steps=1_000,
    save_total_limit=2,
    logging_first_step=True,
    do_train=True,
    do_eval=True,
    evaluate_during_training=True,
    logging_steps=1000,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
    eval_dataset=valid_dataset,
    prediction_loss_only=True,
)

%%time
trainer.train(model_path="./model_path")

Is it being overridden by tqdm? But I can still see the warning: Using deprecated --per_gpu_train_batch_size argument which will be removed in a future version. Using --per_device_train_batch_size is preferred.
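
One likely explanation: when no logging handler is configured (common in notebooks), Python's last-resort handler only emits records at WARNING level and above. That is why the deprecation warning still shows up while the INFO-level config dump does not. A minimal sketch of one way to surface the INFO output, using only the standard library (the format string is illustrative):

import logging

# With no handler configured, Python's last-resort handler only prints
# WARNING and above. basicConfig attaches a console handler to the root
# logger, so INFO records from the transformers loggers become visible.
logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
    level=logging.INFO,
)

Note that basicConfig is a no-op if the root logger already has handlers; in that case, calling logging.getLogger().setLevel(logging.INFO) on the existing setup is the alternative.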

Environment info

  • transformers version: 2.10.0
  • Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
  • Python version: 3.6.9
  • PyTorch version (GPU?): 1.6.0a0+916084d (False)
  • Tensorflow version (GPU?): not installed (NA)
  • Using GPU in script?: TPU
  • Using distributed or parallel set-up in script?: No

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (4 by maintainers)

Top GitHub Comments

3 reactions
jawadSajid commented, Feb 2, 2021

Hey, this doesn't log the training progress from trainer.train() into a log file. I want to keep appending the training progress to my log file, but all I get are the prints and the parameter info at the end of trainer.train(). What would be a way to achieve this? @parmarsuraj99 @LysandreJik
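
A minimal sketch of one way to do this with the standard library, assuming the Trainer logs through loggers under the transformers namespace (the file name and format are illustrative):

import logging

# Append everything the transformers loggers emit (training progress,
# metrics, parameter info) to a file, in addition to the console.
file_handler = logging.FileHandler("training.log", mode="a")
file_handler.setFormatter(
    logging.Formatter("%(asctime)s - %(levelname)s - %(name)s - %(message)s")
)

transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.INFO)
transformers_logger.addHandler(file_handler)

Note that the tqdm progress bar writes directly to stderr rather than through logging, so it will not land in the file; only records routed through the logging module will.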

2 reactions
iamlockelightning commented, Aug 26, 2021

+1

same request. @parmarsuraj99 @LysandreJik
