Stats are reset when re-balancing users across slave nodes
When running Locust with --reset-stats in distributed mode, the stats are reset every time a new slave connects (i.e. whenever the simulated users are re-balanced across the slave nodes). The stats should only be reset once the initial hatching is complete, not every time users are re-balanced.
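For context, here is a minimal reproduction sketch, assuming the pre-1.0 Locust API and CLI (HttpLocust, --master/--slave); the locustfile contents, host, and flag values below are illustrative and not taken from the original report.

```python
# Hypothetical locustfile for reproducing the behaviour described above,
# written against the pre-1.0 Locust API.
from locust import HttpLocust, TaskSet, task


class UserBehavior(TaskSet):
    @task
    def index(self):
        # Any simple request is enough to populate the stats table.
        self.client.get("/")


class WebsiteUser(HttpLocust):
    host = "http://localhost:8080"  # placeholder target
    task_set = UserBehavior
    min_wait = 1000
    max_wait = 2000

# Master, asking for a stats reset once hatching completes:
#   locust -f locustfile.py --master --reset-stats --no-web -c 10 -r 10
# First slave (started before hatching), then a second slave attached later:
#   locust -f locustfile.py --slave --master-host=127.0.0.1
# With the bug present, the second slave joining triggers a re-balancing of
# the simulated users, and the stats collected so far are wiped again.
```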
Issue Analytics
- State:
- Created 4 years ago
- Comments: 10 (7 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I’ve tried to reproduce the issue with the latest master, but it seems to be fixed now 👍.
Awesome, thanks @heyman!