
Inaccurate response time?

See original GitHub issue
  • locust: 0.8a2
  • MacBook Pro 2016
  • python 3.6.2

I ran some benchmarks with Locust, wrk, and ab. Locust consistently reports an average response time of around 300 ms, while the other two report about 4 ms. Same server, same settings, same bandwidth.

I assume the blocking I/O nature of the requests library has no effect on the measured response time, right?

I know Locust isn't meant for benchmarking, but how can we trust its results if it can't even get (remotely) correct stats for a 100-byte static page? For most APIs at most .com companies, an extra 300 ms is unacceptable.
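
One way to check whether that overhead is client-side is to time the same endpoint with plain requests (the library Locust's HttpSession wraps) outside of Locust. A minimal sketch, assuming the same target URL; the sample count is arbitrary:

import time
import requests

URL = "http://192.168.3.221/nginx_status"  # same target as the benchmarks below

def measure(n=1000):
    # Reuse one session so keep-alive connections are used, as Locust's HttpSession does.
    session = requests.Session()
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        session.get(URL)
        samples.append((time.perf_counter() - start) * 1000)  # milliseconds
    samples.sort()
    print(f"min={samples[0]:.1f}ms  median={samples[n // 2]:.1f}ms  max={samples[-1]:.1f}ms")

if __name__ == '__main__':
    measure()

If the single-client median is close to wrk's 4 ms, the extra latency Locust reports comes from running many simulated users in one Python process rather than from the request itself.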

wrk --latency -t 8 -c 200 -d 120s --timeout 10s http://192.168.3.221/nginx_status
Running 2m test @ http://192.168.3.221/nginx_status
  8 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    53.94ms  178.14ms   3.28s    94.62%
    Req/Sec     4.37k   531.52     6.98k    69.51%
  Latency Distribution
     50%    4.05ms
     75%    4.86ms
     90%  161.98ms
     99%  741.21ms
  4175822 requests in 2.00m, 1.02GB read
Requests/sec:  34792.40
Transfer/sec:      8.74MB

ab -k -r -n 4000000 -c 200 -s 10 "http://192.168.3.221/nginx_status"
This is ApacheBench, Version 2.3 <$Revision: 1796539 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.3.221 (be patient)
Completed 400000 requests
Completed 800000 requests
Completed 1200000 requests
Completed 1600000 requests
Completed 2000000 requests
Completed 2400000 requests
Completed 2800000 requests
Completed 3200000 requests
Completed 3600000 requests
Completed 4000000 requests
Finished 4000000 requests


Server Software:        nginx/1.10.3
Server Hostname:        192.168.3.221
Server Port:            80

Document Path:          /nginx_status
Document Length:        115 bytes

Concurrency Level:      200
Time taken for tests:   117.530 seconds
Complete requests:      4000000
Failed requests:        9568
   (Connect: 0, Receive: 0, Length: 9568, Exceptions: 0)
Keep-Alive requests:    3960103
Total transferred:      1055810083 bytes
HTML transferred:       460009568 bytes
Requests per second:    34033.92 [#/sec] (mean)
Time per request:       5.876 [ms] (mean)
Time per request:       0.029 [ms] (mean, across all concurrent requests)
Transfer rate:          8772.79 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.9      0     216
Processing:     0    6  27.0      4    6547
Waiting:        0    6  27.0      4    6547
Total:          0    6  27.1      4    6547

Percentage of the requests served within a certain time (ms)
  50%      4
  66%      4
  75%      4
  80%      4
  90%      5
  95%      6
  98%      9
  99%     12
 100%   6547 (longest request)

PYTHONPATH=. locust -f tests/test_server.py --no-reset-stats
"Method","Name","# requests","# failures","Median response time","Average response time","Min response time","Max response time","Average Content Size","Requests/s"
"GET","/nginx_status",60595,0,290,288,2,1735,114,424.08
"None","Total",60595,0,290,288,2,1735,114,424.08

"Name","# requests","50%","66%","75%","80%","90%","95%","98%","99%","100%"
"GET /nginx_status",60595,290,320,360,390,470,540,650,750,1735
"None Total",60595,290,320,360,390,470,540,650,750,1735

locustfile:

from locust import TaskSet, HttpLocust, task

class HelloWorld(TaskSet):
    @task
    def foo(self):
        self.client.get('/nginx_status')


class Run(HttpLocust):
    host = 'http://192.168.3.221'
    min_wait = 0  # no wait between tasks (values are in milliseconds)
    max_wait = 0
    stop_timeout = 120

    task_set = HelloWorld
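
If most of the gap turns out to be client-side overhead in the requests-based HttpSession, one thing worth trying is the same scenario on the geventhttpclient-based client. A sketch, assuming a Locust build that ships the experimental locust.contrib.fasthttp module (with geventhttpclient installed):

from locust import TaskSet, task
from locust.contrib.fasthttp import FastHttpLocust  # experimental; requires geventhttpclient

class HelloWorld(TaskSet):
    @task
    def foo(self):
        self.client.get('/nginx_status')

class Run(FastHttpLocust):
    host = 'http://192.168.3.221'
    min_wait = 0
    max_wait = 0
    stop_timeout = 120

    task_set = HelloWorld

Another factor worth ruling out is CPU saturation of the single Locust process when many users run with zero wait time; spreading the load over several worker processes (--master / --slave in Locust versions of this era) keeps the client from inflating the measured latencies.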

Issue Analytics

  • State: closed
  • Created 6 years ago
  • Comments: 8 (3 by maintainers)

Top GitHub Comments

4 reactions
lgrabowski commented, May 24, 2018

Hello, I have the same issue. I additionally used vegeta as another HTTP benchmarking tool and got the same results as @keithmork. All of the tools (I used the same ones as @keithmork) report much lower times for a very simple app (Tornado + hello world) and for the same implementation in Go. Locust reports something like 1.5 seconds for a simple hello world (?!?). Can you take a look at this?
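
For reference, the app being benchmarked is just a bare Tornado handler; the original code wasn't posted in the issue, so this is a hypothetical reconstruction with a /ping endpoint matching the results below (the port is an assumption):

import tornado.ioloop
import tornado.web

class PingHandler(tornado.web.RequestHandler):
    def get(self):
        # Return a tiny static body, like the "hello world" app described above.
        self.write("hello world")

if __name__ == '__main__':
    app = tornado.web.Application([(r"/ping", PingHandler)])
    app.listen(8888)  # port not stated in the issue
    tornado.ioloop.IOLoop.current().start()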

1 reaction
lgrabowski commented, May 24, 2018

Yes, I know; I have set them to different values, mostly between 1 and 5.

vegeta (similar results to wrk and ab):

Bucket           #     %       Histogram
[0s,     20ms]   1091  24.24%  ##################
[20ms,   30ms]   2880  64.00%  ################################################
[30ms,   60ms]   489   10.87%  ########
[60ms,   90ms]   18    0.40%
[90ms,   120ms]  14    0.31%
[120ms,  150ms]  5     0.11%
[150ms,  180ms]  0     0.00%
[180ms,  250ms]  3     0.07%
[250ms,  300ms]  0     0.00%
[300ms,  500ms]  0     0.00%
[500ms,  600ms]  0     0.00%
[600ms,  700ms]  0     0.00%

Locust, for comparison, on the same endpoint:

Percentage of the requests completed within given times
 Name                          # reqs    50%    66%    75%    80%    90%    95%    98%    99%   100%
-----------------------------------------------------------------------------------
 GET /ping                 7    630    880   1400   1400   1500   1500   1500   1500   1538
-----------------------------------------------------------------------------------
Read more comments on GitHub >
