createWriteStream({resumable:false}) causes error to be swallowed and program to hang
Environment details
- OS: MacOS and Linux
- Node.js version: 10.6.0
- npm version: 6.1.0
- @google-cloud/storage version: 1.7.0
Steps to reproduce
- Run the code below (before running, change projectId and bucket in the code, and run npm install @google-cloud/storage @rauschma/stringio).
- It should fail with error code 429 (rateLimitExceeded), but instead the code never finishes. This is the problem. The program should fail, because we’re putting the same content in the same path too many times. (If you always put the text in random paths then everything works without a 429.)
- Comment out resumable: false and run it again.
- It will fail with error code 429, as expected.
Code:
'use strict'
const Storage = require('@google-cloud/storage')
const {StringStream} = require('@rauschma/stringio')

const projectId = 'rendering-grid'
const bucket = 'test-storage-problem-can-delete'

async function main() {
  const storage = new Storage({
    projectId,
  })

  // Uploads a short text body to the same object path every time,
  // which is what eventually triggers the 429 rateLimitExceeded response.
  const put = async () => {
    await new Promise((resolve, reject) => {
      const writeStream = storage
        .bucket(bucket)
        .file('foo/bar')
        .createWriteStream({
          resumable: false,
          metadata: {
            contentType: 'text/plain',
          },
        })
      writeStream.on('finish', resolve).on('error', reject)
      const readStream = new StringStream('some debugging text')
      readStream.on('error', reject)
      readStream.pipe(writeStream)
    })
  }

  // 10 rounds of 10 concurrent uploads to the same path.
  for (let i = 0; i < 10; ++i) {
    console.log('#### Run #', i + 1)
    await Promise.all([...Array(10)].map(() => put().then(() => process.stdout.write('.'))))
    console.log('')
  }
}

main().catch(console.error)
So {resumable: false} is causing the program to hang, I’m guessing because it’s not reporting the error on the stream.
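As a diagnostic aid (not part of the original report), each upload can be wrapped in a timeout so the silent hang surfaces as a rejection. The helper below is a minimal sketch: put refers to the function in the reproduction above, and the 30-second limit is an arbitrary choice.

// Diagnostic sketch only: turns the silent hang into an observable failure.
// `put` is the helper from the reproduction above; 30 seconds is an arbitrary limit.
const withTimeout = (promise, ms) =>
  Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error(`upload did not settle within ${ms}ms`)), ms).unref()
    ),
  ])

// In the loop, replace put() with withTimeout(put(), 30000); when the bug triggers,
// the run rejects with the timeout error instead of hanging forever.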
The PR didn’t change the default for all methods, only util.makeWritableStream, where we set the number of retries to 0, since retrying a POST is never possible.

Think I found it: when this library tries to retry the request because of a 429 or any other error, the stream is already consumed and there’s no data available to write to the socket. Then the socket times out expecting data.
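To illustrate that mechanism (a standalone sketch, not code from this library): a Node Readable can only be consumed once, so after the first request drains it, a retried request built from the same stream gets no body at all.

// Illustrative sketch (not @google-cloud/storage code): the first consumer drains
// the source, so a second pipe on "retry" receives nothing - the retried request
// has an empty body and the socket sits waiting for data.
const {PassThrough} = require('stream')

const source = new PassThrough()
source.end('some debugging text')

const attempt = (label) => {
  const sink = new PassThrough() // stands in for one outgoing HTTP request body
  let bytes = 0
  sink.on('data', chunk => (bytes += chunk.length))
  source.pipe(sink)
  setTimeout(() => console.log(`${label}: received ${bytes} bytes`), 50)
}

attempt('first attempt')                // "first attempt: received 19 bytes"
setTimeout(() => attempt('retry'), 100) // "retry: received 0 bytes"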
Good job finding a way to consistently reproduce the error. I think this is the same issue plaguing everyone in #27 (which we hit on a daily basis).
For the record, the new error since the “request” module was replaced by “teeny-request”:
The issue can be reduced to the following, bypassing a lot of the storage module’s stream complexity:
Sequence of events (see the STREAM BODY comment): these retries appear to happen twice after 60-second timeouts, before finally timing out entirely.
This seems like a fundamental flaw with using streams as a datasource when the sink might require a retry, and I have no great ideas for how to fix it. Brainstorming:
- Accept that streams can’t be retried, and remove the retry attempts for non-resumable uploads.
- Use the resumable upload mechanism and send chunks that reasonably fit in memory (ideally user-configurable) so that each chunk can be retried if needed. (For reference, Node.js’s default highWaterMark is 16,384 bytes.) Even with keepalive, the overhead of this might be insurmountable: a 1 GB file in 16 KB chunks is roughly 61,000 requests. A rough sketch of this idea follows below.
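As a rough illustration of the second idea (a sketch under assumptions, not an API offered by @google-cloud/storage: uploadChunk is a hypothetical stand-in for a single HTTP request, and the source is assumed to emit Buffers), each chunk lives in memory, so re-sending it on failure is safe:

// Sketch of the chunked-retry idea. `uploadChunk` is a hypothetical function
// representing one HTTP request; it is not a real storage API.
async function uploadInChunks(source, uploadChunk, chunkSize = 16384) {
  let pending = Buffer.alloc(0)
  for await (const data of source) { // assumes the source emits Buffer chunks
    pending = Buffer.concat([pending, data])
    while (pending.length >= chunkSize) {
      await withRetry(() => uploadChunk(pending.slice(0, chunkSize)))
      pending = pending.slice(chunkSize)
    }
  }
  if (pending.length > 0) await withRetry(() => uploadChunk(pending))
}

// Retrying is safe here because each chunk is a plain Buffer, not a half-consumed stream.
async function withRetry(fn, attempts = 3) {
  for (let i = 0; ; i++) {
    try {
      return await fn()
    } catch (err) {
      if (i + 1 >= attempts) throw err
    }
  }
}

For a 1 GB source with the 16 KB default, this would issue roughly 61,000 chunk uploads, which is the overhead concern raised above.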