`test_start` event triggered multiple times on workers
Describe the bug
When running Locust in distributed mode, the test_start handler gets triggered every time new users are spawned, rather than once at the start of the test. This only seems to occur if there is a test_start handler that takes longer than the interval between spawn events (I think 1s?).
This is problematic if the test needs to load test data from an external source and share it between the users.
To give an example of a concrete use case:
- I want the test to access load logs from an external source and replay them.
- These logs are valid for the lifetime of a test (rather than process or user) - so they can be shared between users on a worker.
- When I trigger the test the log data-store gets hammered by duplicated requests (load testing the wrong thing! 😅).
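To make the intended pattern concrete, here is a minimal sketch (all names are hypothetical, not from Locust's API) of loading the data once and sharing it at module level between users on a worker:

```python
# Hypothetical sketch of a once-per-test data load shared between users.
REPLAY_LOGS = []


def fetch_logs_from_datastore():
    # Stand-in for a single request to the external log store.
    return ["GET /hello", "GET /world"]


def on_test_start(environment=None, **kwargs):
    # Intended to run once per test, so the datastore sees one request
    # no matter how many users are spawned on this worker.
    REPLAY_LOGS.extend(fetch_logs_from_datastore())


on_test_start()
```

If the handler is instead re-triggered on every spawn event, `fetch_logs_from_datastore` is called repeatedly, which is what hammers the data store.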
Expected behavior
The test_start event handlers should be triggered once when the test begins, allowing e.g. expensive test-data loading to occur only once.
Actual behavior
The test_start event handlers are triggered multiple times, with all but the final invocation being killed prematurely.
Steps to reproduce
Given the following locustfile:
```python
import time

from locust import HttpUser, events, runners, task

COUNT = 0


@events.test_start.add_listener
def test_start_worker(environment, **kwargs):
    if not isinstance(environment.runner, runners.WorkerRunner):
        return
    global COUNT
    COUNT += 1
    print(f"'test_start' triggered {COUNT} times.")
    time.sleep(1)  # Same result with `gevent.sleep`. Less than 1 second works fine; >=1 second causes the described behaviour.


class HelloWorldUser(HttpUser):
    @task
    def hello_world(self):
        self.client.get("/hello")
        self.client.get("/world")
```
And the following docker-compose.yml:
```yaml
version: '3'
services:
  master:
    image: locustio/locust:2.5.1
    ports:
      - "8089:8089"
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --master -H http://master:8089 --headless -u 10 -r 1 --run-time 1m --expect-workers 1
  worker:
    image: locustio/locust:2.5.1
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --worker --master-host master
```
Run:

```
docker-compose up -d
```
And the logs for the worker will display:
```
worker_1 | [2022-01-27 11:56:19,677] 26ff2afeca30/INFO/locust.main: Starting Locust 2.5.1
worker_1 | 'test_start' triggered 1 times.
worker_1 | 'test_start' triggered 2 times.
worker_1 | 'test_start' triggered 3 times.
worker_1 | 'test_start' triggered 4 times.
worker_1 | 'test_start' triggered 5 times.
worker_1 | 'test_start' triggered 6 times.
worker_1 | 'test_start' triggered 7 times.
worker_1 | 'test_start' triggered 8 times.
worker_1 | 'test_start' triggered 9 times.
worker_1 | 'test_start' triggered 10 times.
```
Environment
- OS: Ubuntu 21.10
- Python version: 3.9.9
- Locust version: 2.5.1
- Locust command line that you ran: As described by docker-compose.yml
- Locust file contents (anonymized if necessary): Shown above
- docker-compose version 1.27.4
- Docker version 20.10.7
Issue Analytics
- State:
- Created 2 years ago
- Comments: 8 (3 by maintainers)
Top GitHub Comments
Sorry for the delay, I’ll try to take a look, but this seems strange.
📝 My current workaround for this is to spawn a background greenlet in the event handler, so that the event handler returns quickly and is not retriggered.
In the above example this would be:
Naturally, this means any user code that depends on this data being loaded needs to be defensive, since users may start before the load completes.
Leaving this here as it may help others!