Action timing out when new node versions are released
I am using this in a job:
```yaml
- name: Use Node.js ${{ matrix.node-version }}
  uses: actions/setup-node@v1
  with:
    node-version: ${{ matrix.node-version }}
```
with `node-version` set to `13.x`, but my workflow takes 10+ minutes just to complete the "Use Node.js" step. Any idea why, or is anyone else having this problem?
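A common mitigation (not proposed in the issue itself; the 10-minute limit is an illustrative choice) is to bound the step with `timeout-minutes` so a stalled version download fails fast instead of hanging the job:

```yaml
# Sketch, assuming a standard GitHub Actions workflow.
# The 10-minute value is an assumption, not from this thread.
- name: Use Node.js ${{ matrix.node-version }}
  uses: actions/setup-node@v1
  timeout-minutes: 10   # fail fast instead of hanging until the default job timeout
  with:
    node-version: ${{ matrix.node-version }}
```

This does not fix the slow download, but it turns a multi-hour hang into a quick, retryable failure.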
Issue Analytics
- State:
- Created: 3 years ago
- Reactions: 148
- Comments: 41 (13 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Update: We are starting work to insulate the action from issues like this. We want both to cache the latest of every LTS line directly on the hosted image (which doesn't contribute to your job time) and, as another solution, to cache all versions on a CDN for the action to use. We may still read through to the origin, but that work will allow much better reliability and performance in most cases.
This is a serious issue if one's pipeline doesn't have a default timeout set up: it will cause a lot of people's CI to hang for up to 6 hours. That is not just wasting Microsoft's resources, it is wasting resources for other paying customers. Please fix this!
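The 6-hour figure above is GitHub Actions' default job timeout. A minimal sketch of capping it at the job level (the 15-minute value and the `build` job name are illustrative assumptions, not from this thread):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    timeout-minutes: 15   # cap the whole job rather than inheriting the 6-hour default
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
```

A job-level cap protects against any hung step, not just this one, which is why it is often set once per job rather than per step.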
No code was changed to match the timeline, so it seems some critical dependency has a problem serving resources properly. The issue can be fixed by re-running the build, which suggests either some bad hosts in the fleet or throttling.