
Exponential growth in time required to run large numbers of tests

I’m still investigating the root cause of this issue, but it is reproducible.

I discovered this problem while working on https://github.com/tomcatmanager/tomcatmanager, a project which utilizes cmd2. A complete run of the test suite for that project makes several hundred calls to cmd2.Cmd.onecmd_plus_hooks(). As the test suite has grown, I have noticed the performance of the test suite degrading in a non-linear way. I finally decided to try and track down why.
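For context, here is a minimal sketch of the kind of test involved (my own illustration with made-up names, not code from the tomcatmanager suite): a tiny cmd2 application and a pytest test that drives it through onecmd_plus_hooks().

import cmd2

class ExampleApp(cmd2.Cmd):
    """A trivial cmd2 application, used only for this illustration."""
    def do_greet(self, args):
        self.poutput('hello')

def test_greet():
    app = ExampleApp()
    # onecmd_plus_hooks() runs the full command lifecycle, including parsing
    # the command line, which is where the per-call cost appears to grow.
    stop = app.onecmd_plus_hooks('greet')
    assert not stop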

I discovered that as the number of calls to cmd2.Cmd.onecmd_plus_hooks() increases in a test run, the time required to execute each call grows very quickly. It’s easy to reproduce using the test suite in cmd2. First, install the pytest-repeat module, which allows you to repeat each test in a suite any number of times.
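If you don’t already have it, pytest-repeat installs from PyPI in the usual way:

$ pip install pytest-repeat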

Then run:

$ pytest tests/test_argparse.py -v --durations=0

On my computer those 18 tests run in less than 1 second. However, the duration report at the end, which shows the clock time required for each test, reveals that some tests completed in 0.01 seconds while others took 0.09 seconds. None of the tests in that file is inherently an order of magnitude more expensive to run than the others, but you could maybe chalk the spread up to variation in whatever else is running on your computer.

Now try:

$ pytest tests/test_argparse.py -v --durations=0 --count=10

This will run each test 10 times. I would expect the time to run the tests to grow linearly with the number of iterations, so the run should take perhaps 10 seconds to complete. Go ahead and make some tea, because on my computer it took 464 seconds (almost 8 minutes!) to complete.

Examine the duration report at the end of the test run. Here’s what I saw as the 10 slowest tests:

============================ slowest test durations ============================
9.28s call     tests/test_argparse.py::test_subcommand_invalid_help[8/10]
9.26s call     tests/test_argparse.py::test_subcommand_invalid_help[4/10]
9.20s call     tests/test_argparse.py::test_subcommand_invalid_help[10/10]
9.19s call     tests/test_argparse.py::test_subcommand_invalid_help[7/10]
9.17s call     tests/test_argparse.py::test_subcommand_invalid_help[9/10]
9.02s call     tests/test_argparse.py::test_subcommand_invalid_help[2/10]
8.78s call     tests/test_argparse.py::test_subcommand_invalid_help[1/10]
8.78s call     tests/test_argparse.py::test_subcommand_invalid_help[5/10]
8.76s call     tests/test_argparse.py::test_subcommand_invalid_help[3/10]
8.68s call     tests/test_argparse.py::test_subcommand_invalid_help[6/10]

Something has got to be broken, because there is no way it should take more than 9 seconds to run one of those tests.

When I first made this discovery in https://github.com/tomcatmanager/tomcatmanager, I assumed there was some problem with the networking test code I had created, so I profiled the test runs and examined the output. The growth in clock time for the tests occurs entirely in the pyparsing module.
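One way to reproduce that profiling step (a sketch of the general approach, not necessarily my exact workflow) is to run pytest under the standard library’s cProfile and then restrict the stats to pyparsing frames:

import cProfile
import pstats
import pytest

# Profile the repeated test run and write the raw stats to a file.
cProfile.runctx(
    "pytest.main(['tests/test_argparse.py', '--count=10'])",
    globals(), locals(), 'pytest.prof',
)

# Sort by cumulative time and show only frames inside pyparsing.
pstats.Stats('pytest.prof').sort_stats('cumulative').print_stats('pyparsing')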

Using pytest-xdist seems to mask the problem, especially if you distribute the tests to a large number of workers. For example, with 16 workers, each worker ends up running approximately one sixteenth of the tests, and for “medium”-sized test suites that is enough to reduce the number of calls each worker makes to pyparsing, so the clock time required for each test never begins to grow.
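For reference, distributing the same run across 16 workers looks something like this (-n is pytest-xdist’s option for the number of worker processes):

$ pytest tests/test_argparse.py -v --durations=0 --count=10 -n 16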

The pytest-forked module allows you to run each test in a new forked process. This prevents memory leaks or cache errors in one test from affecting other tests in the suite. If you install pytest-forked and run:

$ pytest tests/test_argparse.py -v --durations=0 --count=10 --forked

The test suite finishes 10 runs of each test (on my computer) in less than 9 seconds, which fits with the linear growth of test time I would expect.

At this point I think the likely culprit is the construction of the cmd2 grammar for pyparsing, but it could also be a bug in pyparsing. More investigation is definitely required. The good news is that this isn’t super urgent, because there is a workaround with --forked.

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

1 reaction
kotfu commented, Apr 17, 2018

Preliminary experimentation makes me think the change from pyparsing to shlex in #37 will resolve this issue. Work is being done on the ply branch.

0 reactions
kotfu commented, May 3, 2018

Closed by #370
