
How to configure multiple loggers like standard logging.config.dictConfig

See original GitHub issue

Hi, I read through the README and API reference but did not find out how to get multiple logger instances. Here is my `loggers` configuration: [image of the logging configuration]

In my project, I rely on getLogger("xxx") to process logs instead of getLogger() to distribute them. I don't want some of the logs to be passed to multiple loggers, so I added propagate: no to each logger. In Loguru's API reference, I only found the filter parameter of logger.add to handle multiple loggers, but my multiple loggers are stored in the same log. If I use filter="xxx.log", a log record will be sent to multiple loggers, so how can I deal with my scenario?

Issue Analytics

  • State: closed
  • Created 5 years ago
  • Comments: 7 (5 by maintainers)

Top GitHub Comments

6 reactions
Delgan commented, Dec 27, 2018

Hey @Gideon-koutian!

I think you could use .bind() to replicate your current logging configuration.

For example, instead of logger = logging.getLogger("xxx"), you could do logger = logger.bind(name="xxx").

As a result, each message logged with this bound logger will contain the name value in the extra record dict that you can use to filter logs adequately.

from loguru import logger

# Convenient function to avoid repetition, but you could simply go with a lambda
def make_filter(name):
    def filter(record):
        return record["extra"].get("name") == name
    return filter

logger.add("params.log", level="INFO", filter=make_filter("params"))
logger.add("debug.log", level="DEBUG", filter=make_filter("debug"))
logger.add("api.log", level="INFO", filter=make_filter("api"))
logger.add("scheduler.log", level="INFO", filter=make_filter("scheduler"))

api_logger = logger.bind(name="api")
api_logger.info("This message is only propagated to the 'API' handler")

Depending on how you intend to handle incoming log messages, maybe you can use just one custom sink and filter them in your function directly:

def sink(message):
    record = message.record
    name = record["extra"].get("name")

    if name == "params":
        do_something(message)
    elif name == "debug":
        do_something_else(message)
    ...

logger.add(sink)

I admit that this looks less convenient than built-in logging, though. 😕

Do you think using Loguru like this would suit your needs?
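The filter functions above operate only on the record's "extra" dict, so the routing logic can be exercised on its own, without attaching any handlers. A minimal standalone sketch (plain dicts stand in for Loguru's record objects here, which is an assumption for illustration):

```python
# Standalone sketch of the filter logic: Loguru calls the filter with a
# dict-like record whose "extra" mapping holds the values set via .bind().
def make_filter(name):
    def filter(record):
        return record["extra"].get("name") == name
    return filter

api_only = make_filter("api")

api_only({"extra": {"name": "api"}})    # record bound with name="api" passes
api_only({"extra": {"name": "debug"}})  # records bound with other names are rejected
api_only({"extra": {}})                 # unbound records are rejected as well
```

Because each filter only accepts records explicitly bound with its own name, a message can never leak into another handler's file, which mirrors the effect of propagate: no in standard logging.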

1 reaction
Delgan commented, Mar 7, 2019

Hi @D3f0. 🙂

The problem you describe is one of the reasons I did not implement any method to load a configuration from a file.

Honestly, I don't know what would be the best solution. The standard logging library partly solves it by specifying a special syntax: Access to external objects. So, you could state that 'ext://sys.stderr' stands for sys.stderr, and then transform the TOML dict in your Python script. Depending on the degree of dynamism you are looking for, you could simply use pattern matching like {"ext://sys.stderr": sys.stderr} or implement a proper resolver as done by the standard library: cpython/logging/config.py.

But when it comes to parametrizing handlers with functions, it's even more complicated. If you wish to support this too, I guess you need to define a set of pre-written functions in your Python script that you can parametrize in the TOML file by using their identifiers.

Basically, the simplest solution would look like this, I think:

resolver = {
    "ext://stderr": sys.stderr,
    "ext://stdout": sys.stdout,
    "ext://database": lambda msg: db.update(msg.record),
}

for handler in toml_config["handlers"]:
    for key, value in handler.items():
        handler[key] = resolver.get(value, value)

logger.configure(**toml_config)

I suppose you have probably already thought of this solution. I don't have any better idea for now, but I'm interested to know which solution you choose. Alternatively, you could also define the logging configuration dict inside some kind of config.py file.
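The config.py route sidesteps the resolver entirely, since a Python module can hold real objects like sys.stderr directly. A minimal sketch, assuming Loguru's logger.configure() is fed the handlers list from the module (the specific handler entries and file names are illustrative):

```python
# config.py -- a plain Python module holding the Loguru configuration.
# Because this is Python, sinks can be real objects (streams, callables)
# with no need for an "ext://" indirection scheme.
import sys

handlers = [
    {"sink": sys.stderr, "level": "WARNING"},
    {"sink": "app.log", "level": "DEBUG", "rotation": "10 MB"},
]

# In the application entry point, you would then do something like:
#   from loguru import logger
#   import config
#   logger.configure(handlers=config.handlers)
```

The trade-off is that the configuration is no longer editable by non-developers, which is what a TOML file plus a resolver buys you.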


Top Results From Across the Web

  • python 3.x - How to set up multiple loggers with different ...
    You used default formatter to configure specific logger and the last one C used to log you records. ... dictConfig(cfg) return logging.
  • logging.config — Logging configuration — Python 3.11.1 ...
    This section describes the API for configuring the logging module. ... and then dictConfig() could be called exactly as in the default, uncustomized...
  • logging.config - Simple Guide to Configure Loggers from ...
    An in-depth guide to configure loggers from dictionary and config ... logging by giving this dictionary as input to dictConfig() method.
  • Python Logging Guide - Best Practices and Hands-on Examples
    The Python standard library provides a logging module as a solution to log events from applications and libraries. Once the logger is configured...
  • How to Collect, Customize, and Centralize Python Logs
    This means that if you have a default logging configuration that you want all of your loggers to pick up, you should add...
