Why the "RPS" generated by locust is much fewer than other performance testing tools ?
See original GitHub issue
I did load testing on an HTTP interface with several performance testing tools, and I found that the RPS generated by Locust
is much lower than with the others.
ApacheBench
command: ab -n 1000 -c 80 http://testurl:8000/echo/hello
Benchmark:
Requests per second: 291.38 [#/sec] (mean)
...
Connection Times (ms)
              min  mean[+/-sd]  median    max
Connect:       82   125   87.3     116   2070
Processing:    83   149  250.4     116   2830
Waiting:       82   145  245.0     115   2830
Total:        170   274  278.0     232   4899
JMeter
Set Number of Threads to 80 and Loop Count to 100; the resulting Throughput was 270/sec.
Locust
Set min_wait = 0 and max_wait = 0 in the script file (a sketch is given below),
and ran the locustfile with the command: locust -f api.py --no-web -c 80 -r 80 -n 10000 --only-summary
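For reference, a minimal sketch of what such a locustfile might look like with the pre-1.0 Locust API; the endpoint is taken from the issue, while the class names are illustrative:

from locust import HttpLocust, TaskSet, task

class EchoTasks(TaskSet):
    @task
    def echo(self):
        # Same endpoint used in the ab and JMeter runs
        self.client.get("/echo/hello")

class EchoUser(HttpLocust):
    task_set = EchoTasks
    # No think time between requests, matching the ab/JMeter setup
    min_wait = 0
    max_wait = 0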
Benchmark:
Name # reqs # fails Avg Min Max | Median req/s
--------------------------------------------------------------------------------------------------------------------------------------------
GET /echo/hello 10000 0(0.00%) 1020 305 3285 | 1000 70.50
--------------------------------------------------------------------------------------------------------------------------------------------
Total 10000 0(0.00%) 70.50
Percentage of the requests completed within given times
Name # reqs 50% 66% 75% 80% 90% 95% 98% 99% 100%
--------------------------------------------------------------------------------------------------------------------------------------------
GET /echo/hello 10000 1000 1100 1100 1100 1200 1400 1700 2100 3285
--------------------------------------------------------------------------------------------------------------------------------------------
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Hi @heyman
While running load at 200 RPS, I am seeing drops in the RPS. Can you please let me know how we can resolve this through Locust? Please find the attached reference:
Please let me know how we can maintain a constant RPS.
Locust version: 0.13.3
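One way to approximate a steady rate in Locust 0.13.x is the constant_pacing wait_time helper, which keeps each simulated user on a fixed schedule rather than guaranteeing an exact aggregate RPS; the sketch below reuses the endpoint from the issue and uses illustrative class names:

from locust import HttpLocust, TaskSet, task, constant_pacing

class EchoTasks(TaskSet):
    @task
    def echo(self):
        self.client.get("/echo/hello")

class PacedUser(HttpLocust):
    task_set = EchoTasks
    # Each user runs its task at most once per second, so the aggregate
    # rate is roughly bounded by the number of simulated users.
    wait_time = constant_pacing(1)

With 200 users this would target roughly 200 RPS, provided the system under test responds within the one-second pacing window.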
There is an old article from k6 that showed Locust being very slow, but they had configured it with min_wait = 5000 and max_wait = 15000, I assume because that was used in an example in the Locust docs. They now have an updated article which is very comprehensive: https://k6.io/blog/comparing-best-open-source-load-testing-tools
For comparison, we have run tests up to 30000 RPS (not using FastHttpLocust).
Python/Locust will likely never match the raw speed of other tools like k6 or basic tools like wrk/ab, but the ease of the Python language and the MUCH easier horizontal scaling on k8s with the master/slave model more than make up for it, IMO.
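For readers who want to try the faster HTTP client mentioned above, a hedged sketch of the same locustfile switched to FastHttpLocust (pre-1.0 API; class names are illustrative):

from locust import TaskSet, task
from locust.contrib.fasthttp import FastHttpLocust

class EchoTasks(TaskSet):
    @task
    def echo(self):
        self.client.get("/echo/hello")

class EchoUser(FastHttpLocust):
    # FastHttpLocust swaps the default requests-based client for a
    # geventhttpclient-based one, which typically gives higher per-core throughput.
    task_set = EchoTasks
    min_wait = 0
    max_wait = 0

For horizontal scaling, the same file can be run distributed with the 0.x --master and --slave flags so that several worker processes or machines share the load.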