30% of runner instantiations fail due to timeout
See original GitHub issue
Run machulav/ec2-github-runner@v2
GitHub Registration Token is received
AWS EC2 instance i-0eeae9ef28dcd04e9 is started
AWS EC2 instance i-0eeae9ef28dcd04e9 is up and running
Waiting 30s for the AWS EC2 instance to be registered in GitHub as a new self-hosted runner
Checking every 10s if the GitHub self-hosted runner is registered
Checking...
.
.
.
Checking...
Error: GitHub self-hosted runner registration error
Checking...
Error: A timeout of 5 minutes is exceeded. Your AWS EC2 instance was not able to register itself in GitHub as a new self-hosted runner.
This is the error I receive for roughly 30% of my runners. What could cause this, and how can I increase the percentage of successful instantiations?
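For context, the registration check that produces the log above can be reproduced by hand when debugging. Below is a minimal sketch, not taken from the action itself: GH_TOKEN, OWNER, REPO and RUNNER_LABEL are placeholder assumptions, and the timings simply mirror the log (30s initial wait, 10s polling interval, 5-minute budget). It polls the documented GitHub REST endpoint GET /repos/{owner}/{repo}/actions/runners.

```bash
#!/usr/bin/env bash
# Sketch: poll the repository's self-hosted runner list until a runner with the
# expected label appears, using the same timings as the action's log above.
set -euo pipefail

OWNER="your-org"          # assumption: replace with your org/user
REPO="your-repo"          # assumption: replace with your repository
RUNNER_LABEL="ec2-runner" # assumption: the label passed to the action

sleep 30                        # initial wait before the first check
deadline=$((SECONDS + 300))     # 5-minute timeout

while (( SECONDS < deadline )); do
  # List self-hosted runners registered on the repository and look for the label.
  if curl -sf \
      -H "Authorization: Bearer ${GH_TOKEN}" \
      -H "Accept: application/vnd.github+json" \
      "https://api.github.com/repos/${OWNER}/${REPO}/actions/runners" \
      | grep -q "\"${RUNNER_LABEL}\""; then
    echo "Runner with label ${RUNNER_LABEL} is registered."
    exit 0
  fi
  echo "Checking..."
  sleep 10
done

echo "Timeout: runner never registered; inspect the instance's cloud-init and runner logs." >&2
exit 1
```

If this script times out too, the problem is almost always on the instance side (the runner's config step never completed), which is where the cloud-init log mentioned in the comments below comes in.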
Issue Analytics
- State:
- Created 2 years ago
- Reactions:9
- Comments:7 (1 by maintainers)
Top Results From Across the Web

Gitlab Runner Failing to Download files (#28225) · Issues
In the job log you can see that curl tries to download release artifact from a random project. It has failed after 30...

Error: Could not start container · Issue #955 - GitHub
Hello everyone, I'm using testcontainers for E2E testing our Spring application. Everything was working, but one day on my PC (my colleague ...

java.lang.RuntimeException: Failed to instantiate test runner ...
You have to annotate test() method with @Test annotation: @Test fun test() { ... }.

TestRunner playmode timeout (?) - Unity Answers
Using the TestRunner to run tests in PlayMode (coroutine-alike) my test so far always terminate (pass) prematurely after about 30 seconds.

Paralel test timeout error - SmartBear Community
ScriptTimeoutException : java.util.concurrent.TimeoutException It seems it is related to parallel run but I don't know how to fix it.
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I ran into this issue when previous runners didn't clean themselves up in the GitHub API. When I looked at the cloud-init log of the configure command, it was asking whether I wanted to replace the previous runner:
https://github.com/actions/runner/blob/main/src/Runner.Listener/CommandSettings.cs#L193
Edit: I followed up here, and it seems one can pass --replace to the config.sh script. I could fork and cut a PR for this, but was wondering if it should be flaggable, since it shouldn't normally happen on a clean stop of the instance (which sometimes isn't guaranteed).

We're also experiencing this, requiring periodic manual re-runs of our CI jobs.
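For anyone following along, here is a minimal sketch of what that suggestion amounts to in the instance's startup (user-data / cloud-init) script. The repository URL, token variable and label are placeholders rather than the action's actual pre-runner script; --replace and --unattended are existing config.sh flags of the GitHub Actions runner.

```bash
# Sketch: configure the self-hosted runner non-interactively and take over any
# stale runner with the same name instead of prompting for confirmation.
./config.sh \
  --url "https://github.com/OWNER/REPO" \
  --token "${RUNNER_REGISTRATION_TOKEN}" \
  --labels "ec2-runner" \
  --unattended \
  --replace
./run.sh
```

With --replace, config.sh re-registers over an existing runner of the same name, which matches the replace prompt the commenter saw hanging in the cloud-init log when a previous instance wasn't cleaned up.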