
Requests Per Second Plot Breaks When There Are Too Many Unique URLs

See original GitHub issue

Description of issue

Plotting requests per second breaks after Locust has visited around 500 unique URLs.

Expected behavior

The total requests per second plot keeps working for as long as the test runs.

Actual behavior

It seems to break once too many unique URLs have been hit. I believe this is because Locust fetches the stats for every URL each time it updates the web UI.

Environment settings

  • OS: Windows
  • Python version: 3.7.3
  • Locust version: 0.10.0

Steps to reproduce (for bug reports)

I am using Locust to test a retail point-of-sale REST API. The API uses document SIDs in the URL, so every time I make a new transaction I get 3-4 new URLs. Not only does this make analyzing the results harder, since I need to group the URLs together with regular expressions in pandas, but I believe it is also breaking the requests-per-second plot. I can hammer the API with requests that don't use the document SID in the URL indefinitely, but I can only run a few hundred transactions before the plotting breaks. To make sure, I made 15 new receipts per second for about 30 minutes and everything worked correctly. Afterward, I changed my locustfile to make a blank receipt and then request its contents (generating a request on a new URL), and the plotting broke within 2 minutes, at around 500 unique URLs.
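The post-hoc grouping described above can be sketched with a regular expression that collapses the SID segment into one placeholder; the pattern below is a hypothetical example, since the real URL scheme is not shown in the report:

```python
import re

def group_url(url):
    # Collapse numeric document SIDs into a single placeholder so that
    # per-URL stats aggregate under one key. The pattern "/\d+" is an
    # assumed example, not taken from the actual API.
    return re.sub(r"/\d+", "/{sid}", url)

# Two transactions with different SIDs map to the same grouped key:
print(group_url("/v1/rest/document/12345"))  # /v1/rest/document/{sid}
print(group_url("/v1/rest/document/67890"))  # /v1/rest/document/{sid}
```

In pandas, the same function could be applied to a URL column with `Series.map(group_url)` before aggregating.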

Possible fix

I would love to be able to set the field that requests are grouped by to a string, something like: response = self.client.post("/v1/rest/document", headers=self.header, json=payload, tag="creating a document"). I'm not incredibly experienced, though, so this may not be as easy to implement as I'm hoping, and it assumes I'm right about why the requests-per-second plot is breaking. It would also save me (and people doing similar testing) from having to group API calls after running a test.

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Reactions: 1
  • Comments: 8 (7 by maintainers)

Top GitHub Comments

1 reaction
RyanW89 commented, Aug 12, 2019

Have you tried adding name="string" to your request? i.e. response = self.client.post("/v1/rest/document", headers=self.header, json=payload, name="creating a document")

Unless I am mistaken, this is what you are after.
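The effect of `name=` can be illustrated with a small sketch of how keying stats by a name collapses many unique URLs into one entry — this mimics the grouping behaviour, not Locust's actual implementation:

```python
from collections import defaultdict

# Stats keyed by (method, name-or-URL): when a name is given, it replaces
# the raw URL as the key, so SID-bearing URLs aggregate into one entry.
stats = defaultdict(int)

def record(method, url, name=None):
    # Use the explicit name as the stats key when provided.
    stats[(method, name or url)] += 1

# Three transactions, each hitting a unique document-SID URL:
for sid in (101, 102, 103):
    record("POST", f"/v1/rest/document/{sid}", name="creating a document")
```

All three requests land in a single stats entry instead of three, which is why the chart stops accumulating one series per URL.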

1 reaction
cgoldberg commented, Aug 6, 2019

Please give a clear description of the issue, including details of what you are expecting and what is actually happening, and exclude all other information, conjectures, or ideas for possible fixes. This issue report is packed with information that is completely irrelevant, which makes it difficult to follow.

(For example… I can’t even tell if this is a bug report, or a feature request for tagging responses … it seems like both)


