
How to handle a (possibly) misbehaving logger?

See original GitHub issue

Hi Delgan,

In one of my multithreaded service daemons, I’m seeing that the logger will occasionally stop altogether. By that I mean the application continues to run as if there are no errors, but nothing is written to the logs and the files aren’t rotated. The file descriptors for the log files are still open, too. I suspect that I’m either blocking on a single logging call or that there was an exception in the logger itself. Unfortunately I don’t have a record of stderr to check this.

In the interest of debugging my problem I have a couple of questions:

  1. Could you add an example of handling errors in the Loguru logger (setting catch to False when adding a sink) to the documentation? Would this basically mean that every logging call needs to be wrapped in a try/except block?
  2. If there is an exception in the logger, can it be restarted to make it sane again? If so, how can I accomplish that?
  3. If, on the other hand, I’m being blocked by a single logging call (not sure why this would happen), do you think that setting enqueue to True when adding a sink might solve my problem? Do logging calls eventually time out?

I use Loguru extensively in my work. Thanks for your effort!
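
For reference, a minimal sketch of the sink configuration questions 1 and 3 refer to: catch=False so errors inside the sink are no longer silently swallowed, and enqueue=True so messages are handed to a background worker instead of being written in the calling thread. The file name and rotation policy below are illustrative assumptions, not taken from the issue.

```python
from loguru import logger

# Drop the default stderr sink before adding a configured file sink.
logger.remove()

logger.add(
    "service.log",        # illustrative path, not from the issue
    rotation="100 MB",    # illustrative rotation policy
    catch=False,          # do not silently swallow errors raised inside the sink
    enqueue=True,         # route messages through a queue handled by a background worker
)

# With catch=False, errors from the logging machinery are no longer suppressed,
# so individual calls may need a try/except (this is what question 1 asks about).
try:
    logger.info("Service heartbeat")
except Exception as exc:
    print(f"Logging failed: {exc}")
```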

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 11 (6 by maintainers)

Top GitHub Comments

1 reaction
kace commented, Mar 16, 2021

I assumed this was the case, but failed to find anything in the Python stdlib documentation listing this requirement (I’m not sure whether re-entrant signal handlers are the norm across all programming languages).

Your explanation makes sense. Don’t worry about coming up with a fix. I’ve already removed the logging calls from my signal handlers, but a note in the documentation might save clueless people like me some time!

Thanks again Delgan and great work on Loguru!
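
A minimal sketch of the pattern described above, assuming a simple service loop: the signal handler only sets a flag, and the logging happens outside the handler. The handler and loop shown here are illustrative, not code from the thread.

```python
import signal
import time

from loguru import logger

shutdown_requested = False

def handle_sigterm(signum, frame):
    # A signal can interrupt a logging call that already holds the sink's lock;
    # calling the (non-reentrant) logger from here can deadlock. Only set a flag.
    global shutdown_requested
    shutdown_requested = True

signal.signal(signal.SIGTERM, handle_sigterm)

while not shutdown_requested:
    time.sleep(0.1)  # placeholder for the service's real work

# Safe to log here, outside the signal handler.
logger.info("Received SIGTERM, shutting down")
```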

0 reactions
jacksmith15 commented, Sep 26, 2022

Actually, I’ve managed to reproduce this, and I think what I’ve found constitutes a separate issue. I will raise it and link it here. EDIT: https://github.com/Delgan/loguru/issues/712


