
Slowest test times from pytest are incorrect

See original GitHub issue
  • Python 3.7
  • pytest==4.2.0
  • freezegun==0.3.11
import pytest
from freezegun import freeze_time
from datetime import timedelta


@pytest.fixture
def freezer():
    # Freeze time for the duration of the test; the commented-out ignore
    # list is the workaround discussed below.
    with freeze_time(
        # ignore=['_pytest.runner']
    ) as f:
        yield f


def test(freezer):
    freezer.tick(delta=timedelta(seconds=60))
pytest --durations 1
============= slowest 1 test durations ==============
60.00s call     test_blah.py::test
============= 1 passed in 0.06 seconds ==============
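
The inflated number follows from how the duration is measured: pytest's runner takes clock readings before and after the test call and reports the difference, and freezegun patches that clock, so the 60-second tick() inside the test shows up as 60 seconds of "call" time even though almost no real time passed. A standalone sketch of the effect (not pytest internals; the frozen date and variable names are only illustrative):

import time
from datetime import timedelta

from freezegun import freeze_time

with freeze_time("2020-01-01") as frozen:
    start = time.time()                     # reads the frozen clock
    frozen.tick(timedelta(seconds=60))      # what the test body does
    stop = time.time()

print(stop - start)  # ~60.0, despite almost no real time passing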

If I uncomment the ignore line, the durations match what I expect:

============= slowest 1 test durations ==============
0.03s setup    test_blah.py::test
============= 1 passed in 0.08 seconds ==============

I also see that _pytest.runner. was added in the master branch, but that doesn’t work for me. Perhaps we should also add _pytest.runner?
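
For anyone wanting the workaround as-is, the fixture behaves once the ignore list is actually passed, so freezegun leaves pytest's runner module on the real clock. A sketch based on the snippet above (the fixture name is just the one used there):

# conftest.py -- workaround from this issue: keep _pytest.runner on the
# real clock so --durations reports wall-clock time again.
import pytest
from freezegun import freeze_time


@pytest.fixture
def freezer():
    with freeze_time(ignore=['_pytest.runner']) as f:
        yield f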

Issue Analytics

  • State: open
  • Created 5 years ago
  • Reactions: 5
  • Comments: 7

Top GitHub Comments

2 reactions
jerry-git commented, Mar 7, 2020

Hello! Any update on #288? I ran into this with pytest-split which utilises duration related information for splitting the full test suite into “optimal sub suites”. I basically needed to add some ugly hacks to cope with incorrect durations due to freezegun (which is an awesome library btw 👍).
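
For suites that hit this through duration-based tooling like pytest-split, newer freezegun releases (not the 0.3.x pinned above) expose freezegun.configure(), which can extend the default ignore list once for the whole run instead of per fixture. A rough sketch, assuming a release that ships that API:

# conftest.py -- assumes a freezegun version with freezegun.configure();
# extends the default ignore list so every freeze_time() in the suite
# leaves _pytest.runner on the real clock.
import freezegun

freezegun.configure(extend_ignore_list=['_pytest.runner'])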

0 reactions
blueyed commented, Dec 1, 2022
Read more comments on GitHub >

Top Results From Across the Web

Printing test execution times and pinning down slow tests with ...
Is there a way to make py.test print out execution times of (slow) test, so pinning down problematic tests become easier? python ·...
Read more >
Flaky tests — pytest documentation
Flaky tests¶. A “flaky” test is one that exhibits intermittent or sporadic failure, that seems to have non-deterministic behaviour.
Read more >
Effective Python Testing With Pytest
Durations Reports: Fighting Slow Tests. Each time you switch contexts from implementation code to test code, you incur some overhead. If your ...
Read more >
Testing - Hugging Face
self-hosted runner: runs normal and slow tests on GPU in tests and examples : ... Here is the same example, this time using...
Read more >
Testing tools - Django documentation
The test client is a Python class that acts as a dummy web browser, allowing you ... class Client (enforce_csrf_checks=False, raise_request_exception=True, ...
Read more >
