Queries keep timing out
Hey there,
For some reason, and quite often, our calls to the API fail with an ETIMEDOUT error. The application runs on an Azure Web App using node-iis. Depending on the resource, a lot of concurrent queries can occur, since we fetch nested references (imagine a navigation) or a slice that references a different resource.
Here are some examples of the errors:
(node:5748) UnhandledPromiseRejectionWarning: FetchError: request to https://xxx.cdn.prismic.io/api/v2 failed, reason: connect ETIMEDOUT 54.192.27.125:443
at ClientRequest.<anonymous> (D:\home\site\wwwroot\node_modules\node-fetch\index.js:133:11)
at emitOne (events.js:116:13)
at ClientRequest.emit (events.js:211:7)
at TLSSocket.socketErrorListener (_http_client.js:387:9)
at emitOne (events.js:116:13)
at TLSSocket.emit (events.js:211:7)
at emitErrorNT (internal/streams/destroy.js:64:8)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
or
(node:5748) UnhandledPromiseRejectionWarning: FetchError: request to https://xxx.cdn.prismic.io/api/v2/documents/search?page=1&pageSize=100&lang=it-ch&ref=....&q=%5B%5Bany(document.type%2C%20%5B%22default_page%22%2C%22empty_page%22%2C%22communities_page%22%2C%22home_page%22%2C%22ranking_page%22%2C%22sponsors_page%22%5D)%5D%5D failed, reason: connect ETIMEDOUT 54.192.27.125:443
at ClientRequest.<anonymous> (D:\home\site\wwwroot\node_modules\node-fetch\index.js:133:11)
at emitOne (events.js:116:13)
at ClientRequest.emit (events.js:211:7)
at TLSSocket.socketErrorListener (_http_client.js:387:9)
at emitOne (events.js:116:13)
at TLSSocket.emit (events.js:211:7)
at emitErrorNT (internal/streams/destroy.js:64:8)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
These are just some examples; I haven't been able to reproduce them on demand. For every incoming request we initialize the API once and reuse that instance throughout the request.
Do you have any idea what could be causing this problem? Is there something I can do on my end? Is it possible to set a keep-alive agent for the requests to your API?
I'm interested to hear what you think, as I haven't found a way to resolve this issue yet.
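For reference, this is roughly how we initialize and reuse the client per request (simplified sketch; the endpoint is a placeholder and initApi is just an illustrative helper name):
```js
const Prismic = require('prismic-javascript');

// Called once per incoming request; the resolved API object is then
// reused for every query made while handling that request.
function initApi(req) {
  return Prismic.getApi('https://xxx.cdn.prismic.io/api/v2', { req });
}

// e.g. inside a request handler:
// const api = await initApi(req);
// const pages = await api.query(
//   Prismic.Predicates.at('document.type', 'default_page'),
//   { pageSize: 100 }
// );
```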
Issue Analytics
- Created: 5 years ago
- Reactions: 13
- Comments: 9 (3 by maintainers)
I'm seeing this too. Did anyone find out what's going on, or a potential fix? For me it started when the resource I was requesting (documents of a certain type) exceeded around 25 items.
This shouldn't happen by default, because we rely on the default HttpAgent implementation, which allows enough open connections. If you use a proxy and need to provide your own HttpAgent, you need to set up enough sockets and use the keepAlive option to keep connections open when you have a lot of concurrent calls.
It would look roughly like this (a sketch using Node's built-in https.Agent; the exact option for handing the agent to the Prismic client, proxyAgent below, is an assumption and may vary by client version):
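```js
const https = require('https');
const Prismic = require('prismic-javascript');

// Keep-alive agent with a socket pool sized for concurrent queries, so
// requests reuse warm TLS connections instead of opening new ones.
const keepAliveAgent = new https.Agent({
  keepAlive: true,
  keepAliveMsecs: 30000, // keep idle sockets around for 30s
  maxSockets: 100,       // size this to your expected concurrency
  maxFreeSockets: 10,
});

function initApi(req) {
  return Prismic.getApi('https://xxx.cdn.prismic.io/api/v2', {
    req,
    // Option name assumed here; check how your client version accepts a
    // custom agent before relying on it.
    proxyAgent: keepAliveAgent,
  });
}
```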