Batching of precache requests to prevent net::ERR_INSUFFICIENT_RESOURCES in Chrome
Library Affected: This likely is only related to workbox-precache
Browser & Platform: Google Chrome
Issue or Feature Request Description: I originally wrote this as a comment on #570, but since that issue is closed, I thought I would post it as a new issue to get more visibility.
It seems that workbox uses `Promise.all` when making precache requests instead of explicitly batching or rate-limiting them, and Chrome apparently can't handle this under certain circumstances.
Specifically, I'm getting sporadic `net::ERR_INSUFFICIENT_RESOURCES` failures.
After looking this error up, it seems to represent some sort of resource exhaustion within Chrome. A few other people have come across it here and here, and the consensus is simply to make fewer concurrent requests.
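For context, the unthrottled pattern being described looks roughly like the sketch below. This is only an illustration of kicking off every precache request at once, not Workbox's actual source; `precacheAllAtOnce` and its arguments are hypothetical names.

```js
// Illustration only (not Workbox source): every precache request starts at once.
// With a large manifest, nothing limits concurrency, and some fetches can fail
// with net::ERR_INSUFFICIENT_RESOURCES in Chrome.
async function precacheAllAtOnce(cacheName, urls) {
  const cache = await caches.open(cacheName);
  await Promise.all(urls.map((url) => cache.add(url)));
}
```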
I noticed that @nachoab came up against this here and solved it by batching requests in chunks of 20, effectively limiting concurrency to at most 20 in-flight requests… but that's against the old sw-precache repo and isn't directly applicable here.
The easiest and most flexible solution, as I see it, would be to make `precacheAndRoute` (or just `precache`) return a promise that resolves once all of the specified routes have been precached. That way I could do the rate-limiting in my service worker, instead of pushing the burden of rate-limiting onto workbox.
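As a rough sketch of what that caller-side batching could look like in a hand-rolled install handler (the cache name, URL list, and batch size of 20 are placeholders, and nothing here is an existing Workbox API):

```js
// Hypothetical sketch: precache a manifest in small batches rather than one
// big Promise.all, so only BATCH_SIZE requests are in flight at any time.
const CACHE_NAME = 'my-precache-v1';          // placeholder cache name
const PRECACHE_URLS = ['/index.html', '/app.js', '/styles.css' /* ... */];
const BATCH_SIZE = 20;                        // mirrors the chunks-of-20 approach above

self.addEventListener('install', (event) => {
  event.waitUntil(
    (async () => {
      const cache = await caches.open(CACHE_NAME);
      for (let i = 0; i < PRECACHE_URLS.length; i += BATCH_SIZE) {
        // Each batch finishes before the next one starts.
        await cache.addAll(PRECACHE_URLS.slice(i, i + BATCH_SIZE));
      }
    })()
  );
});
```

The trade-off of strict batches is that one slow response delays the start of the next batch, which is what the comment below pushes back on.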
Thoughts?
Top GitHub Comments
I appreciate the simplicity of downloading assets one by one and the goal of reducing impact on other network requests, but it would be nice if this were configurable.
For instance, imagine you’re pre-caching some data (e.g. API request) that isn’t available at the edge in your user’s region. Now retrieving that data from the other side of the world holds up all your other assets at the edge from pre-caching, when they could easily happen in parallel with a negligible performance impact if you allowed two requests at a time.
Latency aside, this also negates some HTTP/2 benefits, even in the simple case of all assets being at the edge already.
In my case, I have a lot of smallish assets and one large XML file, which holds everything else up. Not the end of the world, but it would be nice if I could expand the pipeline to 2-3 requests at a time.
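A small worker-pool sketch along those lines might look like the following; `precacheWithConcurrency`, `maxConcurrency`, and the URL list are all hypothetical and not part of Workbox:

```js
// Hypothetical sketch of a concurrency-limited fetch-and-cache pool.
// maxConcurrency and the url list are assumptions for illustration only.
async function precacheWithConcurrency(cacheName, urls, maxConcurrency = 3) {
  const cache = await caches.open(cacheName);
  const queue = [...urls];

  // Start maxConcurrency workers; each pulls the next URL off the shared queue,
  // so a single slow response only occupies one slot.
  const workers = Array.from({length: maxConcurrency}, async () => {
    while (queue.length > 0) {
      const url = queue.shift();
      const response = await fetch(url);
      if (!response.ok) {
        throw new Error(`Failed to precache ${url}: ${response.status}`);
      }
      await cache.put(url, response);
    }
  });

  await Promise.all(workers);
}
```

Calling something like `precacheWithConcurrency('my-precache-v1', urls, 3)` would let the large XML file download in one slot while the smaller assets keep flowing through the other two.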
Fantastic!