Joblib hijacks the root logger
When I clear a cache with Memory.clear:
from joblib import Memory
memory = Memory(cachedir=".")
memory.clear()
I get output on the root logger, which has apparently been configured with what looks like logging.basicConfig:
WARNING:root:[Memory(cachedir='./joblib')]: Flushing completely the cache
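This is standard library behaviour rather than something joblib configures deliberately: the module-level logging.warning helper calls logging.basicConfig() whenever the root logger has no handlers, which attaches a StreamHandler using the WARNING:root:... format seen above. A minimal reproduction without joblib, assuming a fresh interpreter where nothing has configured logging yet:

import logging
# In a fresh interpreter the root logger has no handlers.
print(logging.getLogger().handlers)    # []
# The module-level helper implicitly calls logging.basicConfig() when the
# root logger has no handlers, attaching a StreamHandler on stderr.
logging.warning("something happened")  # stderr: WARNING:root:something happened
# The root logger is now configured for the rest of the process.
print(logging.getLogger().handlers)    # a StreamHandler is now attached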
This goes against Python logging best practices: library code is configuring and writing to the root logger, and because every other logger propagates its records up to the root logger's handlers, this affects all of them. An example of why this is bad practice can be seen below:
import logging
from joblib import Memory
logger = logging.getLogger("mylogger")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())
memory = Memory(cachedir=".")
memory.clear()
logger.info("oops, I'll show up twice")
which gives the output
WARNING:root:[Memory(cachedir='./joblib')]: Flushing completely the cache
oops, I'll show up twice
INFO:mylogger:oops, I'll show up twice
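Until this is fixed in joblib itself there are user-side mitigations. The following is a sketch, not an officially recommended workaround: giving the root logger any handler prevents the implicit basicConfig call, and Memory.clear accepts a warn flag (at least in the joblib versions I have checked) that skips the message entirely.

import logging
from joblib import Memory

# Mitigation 1: attach a do-nothing handler to the root logger *before*
# joblib logs anything; the implicit logging.basicConfig() only fires when
# the root logger has no handlers, so no StreamHandler gets added.
logging.getLogger().addHandler(logging.NullHandler())

memory = Memory(cachedir=".")  # older joblib uses cachedir=, newer uses location=

# Mitigation 2: skip the warning entirely (assumes your joblib version
# exposes the warn flag on Memory.clear).
memory.clear(warn=False)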
What should be happening instead:
- joblib should not call logging.warning, but instead create its own logger and call logger.warning on it (a sketch of this pattern follows the list)
- that logger should have a NullHandler added so that it does not interfere with existing logging
- clearing the cache should not emit a warning by default in the first place: the caller invoked clear(), so they already know the cache is about to be cleared
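A minimal sketch of the pattern the three points above describe; the module layout and names are illustrative, not joblib's actual code:

import logging

# Library module: a package-level logger with a NullHandler, so merely
# importing or using the library never touches the root logger and never
# prints anything unless the application opts in.
logger = logging.getLogger("joblib")
logger.addHandler(logging.NullHandler())

def clear_cache(warn=False):
    # Warning is opt-in: the caller asked for the cache to be cleared.
    if warn:
        logger.warning("Flushing completely the cache")

An application can then enable or silence these messages per package, for example with logging.getLogger("joblib").setLevel(logging.DEBUG) and its own handlers, without the root logger being reconfigured behind its back.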
Top GitHub Comments
Still not fixed. Does anyone have a workaround?
Hey, I just submitted a pull request for this issue. It's #1033. But for some reason a bunch of seemingly unrelated things are now failing (things like pickling io objects). The only file I changed was Logger.py, and only by about 5 lines. What gives?