caplog fixture: capture log records from another process
See original GitHub issue

caplog captures log records from spawned threads, but not from spawned processes. This is probably a feature request rather than a bug; it is also unclear whether a general, complete implementation of capturing logs from spawned processes is possible at all.
A simple test:
```python
import concurrent.futures as futures
import logging
import time

import pytest

logging.basicConfig(level=logging.INFO)

def worker(max_count):
    count = 0
    while count < max_count:
        logging.getLogger().info('sleeping ...')
        time.sleep(1)
        count += 1

@pytest.mark.parametrize('Executor_', (futures.ThreadPoolExecutor, futures.ProcessPoolExecutor, ))
def test_spam(caplog, Executor_):
    num_records = 5
    with Executor_() as pool:
        fut = pool.submit(worker, num_records)
        all_records = []
        new_records = []
        running = True
        while running:
            time.sleep(2)
            new_records = [rec for rec in caplog.records if rec not in all_records]
            all_records = caplog.records[:]
            if not new_records:
                running = False
        futures.wait((fut, ), timeout=0)
        assert len(all_records) == num_records
```
Running the test yields:
```
$ pytest -v test_spam.py
=================================== test session starts ====================================
platform darwin -- Python 3.6.3, pytest-3.3.1, py-1.5.2, pluggy-0.6.0 -- /Users/hoefling/.virtualenvs/pytest-caplog-test/bin/python3.6
cachedir: .cache
rootdir: /private/tmp, inifile:
collected 2 items

test_spam.py::test_spam[ThreadPoolExecutor] PASSED                                   [ 50%]
test_spam.py::test_spam[ProcessPoolExecutor] FAILED                                  [100%]

========================================= FAILURES =========================================
____________________________ test_spam[ProcessPoolExecutor] ________________________________

caplog = <_pytest.logging.LogCaptureFixture object at 0x1053639e8>, Executor_ = <class 'concurrent.futures.process.ProcessPoolExecutor'>

    @pytest.mark.parametrize('Executor_', (futures.ThreadPoolExecutor, futures.ProcessPoolExecutor, ))
    def test_spam(caplog, Executor_):
        num_records = 5
        with Executor_() as pool:
            fut = pool.submit(worker, num_records)
            all_records = []
            new_records = []
            running = True
            while running:
                time.sleep(2)
                new_records = [rec for rec in caplog.records if rec not in all_records]
                all_records = caplog.records[:]
                if not new_records:
                    running = False
            futures.wait((fut, ), timeout=0)
>           assert len(all_records) == num_records
E           assert 0 == 5
E            +  where 0 = len([])

test_spam.py:36: AssertionError
----------------------------------- Captured stderr call -----------------------------------
INFO:root:sleeping ...
INFO:root:sleeping ...
INFO:root:sleeping ...
INFO:root:sleeping ...
INFO:root:sleeping ...
=========================== 1 failed, 1 passed in 13.09 seconds ============================
```
Issue Analytics
- State:
- Created 6 years ago
- Comments: 9 (5 by maintainers)
I am able to capture all logs from a child process by using `logging.handlers.QueueHandler()`. The OP's code can be modified accordingly, and the result passes all tests.

Just to add my 2 cents, because I was bitten by the same problem.
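The modified code is not shown above. As an illustration only — not the commenter's actual code — here is a minimal sketch of the QueueHandler technique; the `worker`/`collect_records` names and the queue plumbing are assumptions:

```python
# Hypothetical sketch: the child process routes its log records through a
# multiprocessing queue via QueueHandler, and the parent drains the queue
# to inspect them.
import logging
import logging.handlers
import multiprocessing
import queue as queue_mod

def worker(log_queue, max_count):
    # Runs in the child: install a QueueHandler so every record is shipped
    # back to the parent instead of being lost with the child process.
    root = logging.getLogger()
    root.setLevel(logging.INFO)
    root.handlers = [logging.handlers.QueueHandler(log_queue)]
    for _ in range(max_count):
        root.info('sleeping ...')

def collect_records(num_records):
    # The fork start method keeps this sketch self-contained on POSIX;
    # with spawn, `worker` would have to live in an importable module.
    ctx = multiprocessing.get_context('fork')
    log_queue = ctx.Queue()
    proc = ctx.Process(target=worker, args=(log_queue, num_records))
    proc.start()
    proc.join()
    records = []
    while True:
        try:
            records.append(log_queue.get(timeout=1))
        except queue_mod.Empty:
            break
    return records

records = collect_records(5)
assert len(records) == 5
assert all(rec.getMessage() == 'sleeping ...' for rec in records)
```

In a pytest test, one could re-emit each drained record in the main process with `logging.getLogger(rec.name).handle(rec)`, so that caplog picks it up.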
My program is "very" multiprocessing: one process receives messages and puts them in a Queue, several processes consume messages from the Queue, another process ("log-process") gathers all the logs (passed through yet another Queue) and logs them to the console plus a rotating file, and the main process monitors & controls all of the above.
So I patched the logging configuration in log-process like so:
Then I create a fixture like so:
Finally I use the fixture like so:
Yeah, I cannot use `caplog`, but I can now assert for expected log entries.

The benefit of using UDP instead of a Queue is that one does not have to drain the Queue afterwards, nor inject the Queue into the being-monitored processes before they get yeeted into a separate process.
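The patched configuration and fixture themselves are not reproduced above, but the UDP idea can be sketched with the stock `DatagramHandler` from the standard library. Everything below (logger name, port handling, the `udp_records` helper) is an assumption for illustration, not the commenter's code. `DatagramHandler` sends each record as a pickled dict with a 4-byte length prefix, which the receiver can turn back into a `LogRecord`:

```python
# Hypothetical sketch of the UDP approach: log through a DatagramHandler,
# receive the datagrams on a socket in the test process, and rebuild
# LogRecords from them.
import logging
import logging.handlers
import pickle
import socket

def udp_records(sock, count):
    # Receive `count` datagrams and rebuild LogRecords. DatagramHandler
    # prefixes each pickled record dict with a 4-byte length, so the
    # payload starts at byte 4.
    records = []
    for _ in range(count):
        data, _addr = sock.recvfrom(65535)
        records.append(logging.makeLogRecord(pickle.loads(data[4:])))
    return records

# Listener socket in the test/main process, bound to an ephemeral port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('127.0.0.1', 0))
sock.settimeout(5)
port = sock.getsockname()[1]

# In the (would-be child) process: log through a DatagramHandler
# pointed at the listener.
logger = logging.getLogger('udp-demo')
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.DatagramHandler('127.0.0.1', port))
logger.info('hello from worker')

records = udp_records(sock, 1)
sock.close()
assert records[0].getMessage() == 'hello from worker'
```

In a real fixture, the receiving socket would live in the test process while the worker processes attach the `DatagramHandler`; the rebuilt records can then be asserted on much like `caplog.records`.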