
Why is the "RPS" generated by Locust much lower than other performance testing tools?

See original GitHub issue

I did load testing on an HTTP interface with several performance testing tools, and I found the "RPS" generated by Locust is much lower than the others'.

ApacheBench

command: ab -n 1000 -c 80 http://testurl:8000/echo/hello

Benchmark:

Requests per second:    291.38 [#/sec] (mean)
...
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       82  125  87.3    116    2070
Processing:    83  149 250.4    116    2830
Waiting:       82  145 245.0    115    2830
Total:        170  274 278.0    232    4899

JMeter

Set Number of Threads to 80 and Loop Count to 100; the reported Throughput was 270/sec.

Locust

Set min_wait = 0 and max_wait = 0 in the script file, then ran the locustfile with the command: locust -f api.py --no-web -c 80 -r 80 -n 10000 --only-summary
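The api.py script itself is not shown in the issue; a minimal locustfile matching this setup might look like the sketch below. This assumes the Locust 0.x class-based API (HttpLocust/TaskSet, which that era used) and the /echo/hello endpoint from the ab command above:

```python
from locust import HttpLocust, TaskSet, task

class EchoTasks(TaskSet):
    @task
    def echo(self):
        self.client.get("/echo/hello")

class ApiUser(HttpLocust):
    task_set = EchoTasks
    host = "http://testurl:8000"
    min_wait = 0  # no think time between requests,
    max_wait = 0  # so each simulated user hits the server back-to-back
```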

Benchmark:

 Name                                                          # reqs      # fails     Avg     Min     Max  |  Median   req/s
--------------------------------------------------------------------------------------------------------------------------------------------
 GET /echo/hello                                                10000     0(0.00%)    1020     305    3285  |    1000   70.50
--------------------------------------------------------------------------------------------------------------------------------------------
 Total                                                          10000     0(0.00%)                                      70.50

Percentage of the requests completed within given times
 Name                                                           # reqs    50%    66%    75%    80%    90%    95%    98%    99%   100%
--------------------------------------------------------------------------------------------------------------------------------------------
 GET /echo/hello                                                 10000   1000   1100   1100   1100   1200   1400   1700   2100   3285
--------------------------------------------------------------------------------------------------------------------------------------------
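One way to sanity-check these numbers (an observation, not something stated in the issue): with zero wait time each simulated user issues requests back-to-back, so by Little's law steady-state throughput is roughly concurrent users divided by mean response time. That formula matches both tools' output, which suggests the gap comes from Locust's higher per-request latency (Python/gevent overhead per virtual user), not from users sitting idle:

```python
# Sanity-check reported throughput with Little's law: RPS ~= users / mean latency.
def expected_rps(concurrent_users: int, mean_response_s: float) -> float:
    """Steady-state throughput for users issuing requests back-to-back."""
    return concurrent_users / mean_response_s

ab_rps = expected_rps(80, 0.274)      # ab: mean total time 274 ms
locust_rps = expected_rps(80, 1.020)  # Locust: avg response 1020 ms

print(f"ab predicted:     {ab_rps:.1f} req/s (reported 291.38)")
print(f"locust predicted: {locust_rps:.1f} req/s (reported 70.50)")
```

The prediction lands close to ab's reported 291.38 req/s and within ~10% of Locust's reported 70.50 req/s, so the throughput difference is fully explained by the latency difference.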

Issue Analytics

  • State: closed
  • Created 8 years ago
  • Reactions: 11
  • Comments: 24 (5 by maintainers)

Top GitHub Comments

6 reactions
Jasnoor1 commented, Jun 15, 2020

Hi @heyman

While running load at 200 RPS, I am seeing drops in the RPS. Can you please let me know how we can resolve this through Locust? Please find the attached reference:

[attached screenshot: result]

Please let me know how we can maintain a constant RPS.

Locust version: 0.13.3
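Locust has no first-class "hold N RPS" mode in the 0.13.x line; the usual approach is to pace each user so that one request iteration always takes a fixed interval (newer Locust versions ship this idea as the constant_pacing wait-time helper). The mechanism, sketched in plain Python with a placeholder request function rather than Locust's own API:

```python
import time

def paced_loop(do_request, pacing_s: float, iterations: int) -> None:
    """Fire do_request once every pacing_s seconds: sleep only for whatever
    time is left over after the request itself, so each simulated user
    contributes a steady 1/pacing_s requests per second (as long as the
    request finishes within the pacing interval)."""
    for _ in range(iterations):
        start = time.monotonic()
        do_request()
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, pacing_s - elapsed))

# e.g. 200 users each paced at 1.0 s per iteration give roughly 200 RPS overall.
```

Note that if response times exceed the pacing interval, the achieved rate drops below the target, which matches the RPS dips described above.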

4 reactions
max-rocket-internet commented, May 18, 2020

There is an old article from k6 that showed Locust being very slow, but they had configured it with min_wait = 5000 and max_wait = 15000, I assume because this was used in an example in the Locust docs. They have an updated article now which is very comprehensive: https://k6.io/blog/comparing-best-open-source-load-testing-tools

For comparison, we have run tests up to 30000 RPS (not using FastHttpLocust):

  • locust master = 2 cores, 1 GB of memory
  • locust slaves = 50 slaves at 1 core and 2 GB each

Python/Locust will likely never match the raw speed of other tools like k6 or basic tools like wrk/ab, but the ease of the Python language and the MUCH easier horizontal scaling on k8s with the master/slave model more than make up for it, IMO.


