Investigate using native streams
Inspired by this tweet from @surma:
> @MattiasBuelens Is it possible to offer up the polyfills for Readable, Writable and Transform individually? Most browsers have Readable, so ideally I’d only load Writable and Transform.
I’ve thought about this previously. Back then, I decided that it was not feasible because readable byte streams are not supported by any browser. A full polyfill would always need to provide its own ReadableStream implementation that supports byte streams. By extension, it would also need to provide its own implementations for WritableStream (that works with its ReadableStream.pipeTo()) and TransformStream (that uses its readable and writable streams).
Looking at this again, I think we can do better. If you don’t need readable byte streams, then the native ReadableStream should be good enough as a starting point for the polyfill. From there, the polyfill could add any missing methods (pipeTo, pipeThrough, getIterator,…) and implement them using the native reader from getReader().
This approach can never be fully spec-compliant though, since the spec explicitly forbids these methods to use the public API. For example, pipeTo() must use AcquireReadableStreamDefaultReader() instead of ReadableStream.getReader(), so it cannot be affected by user-land JavaScript code making modifications to ReadableStream.prototype. I don’t think that has to be a problem though: we are already a user-land polyfill written in JavaScript that modifies those prototypes, it would be silly for the polyfill to try and guard itself against other JavaScript code making similar modifications.
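To make the idea concrete, here is a minimal sketch (assuming only the public reader/writer API; this is not the polyfill's actual code) of how a missing `pipeTo()` could be patched onto a native `ReadableStream`. It ignores the `preventClose`/`preventAbort`/`preventCancel` options and `AbortSignal` support that the real algorithm has to handle:

```ts
if (typeof ReadableStream.prototype.pipeTo !== 'function') {
  ReadableStream.prototype.pipeTo = async function (
    this: ReadableStream,
    destination: WritableStream
  ): Promise<void> {
    // Public API only: getReader()/getWriter() instead of the spec's internal
    // AcquireReadableStreamDefaultReader()/AcquireWritableStreamDefaultWriter().
    const reader = this.getReader();
    const writer = destination.getWriter();
    try {
      while (true) {
        const { value, done } = await reader.read();
        if (done) break;
        await writer.write(value);
      }
      await writer.close();
    } catch (reason) {
      // Simplified error propagation: cancel/abort both ends and rethrow.
      await Promise.allSettled([reader.cancel(reason), writer.abort(reason)]);
      throw reason;
    } finally {
      reader.releaseLock();
      writer.releaseLock();
    }
  };
}
```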
Steps in the spec that require inspecting the internal state of the stream or call into internal methods will need to be replaced by something that emulates the behavior using solely the public API.
- Often, this will be easy: e.g. `ReadableStreamDefaultControllerEnqueue()` becomes `controller.enqueue()`.
- Sometimes, we have to be a bit more lenient. `ReadableStreamPipeTo()`'s error propagation says: *if source.[[state]] is or becomes "errored"*. We can check if it *becomes* errored by waiting for the `source.closed` promise to become rejected. However, we can’t synchronously check if it *is* already errored (see the sketch after this list).
- In rare cases, this may turn out to be impossible. `TransformStreamDefaultSinkWriteAlgorithm` specifies: *If state is "erroring", throw writable.[[storedError]]*. Usually, the writable stream starts erroring because the writable controller has errored, which the transform stream’s implementation controls. However, it could also be triggered by `WritableStream.abort()`, which is out of the control of the transform stream implementation. In this case, the controller is only made aware of it after the writable stream finishes erroring (its state becomes `"errored"`) through its `abort()` algorithm, which is already too late.
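As a rough illustration of the second bullet (the helper below is invented, not part of the polyfill): the only public signal for the "is or becomes errored" check is the reader's `closed` promise, and it only reports errors asynchronously.

```ts
// Hypothetical helper: react to the source erroring using only the public API.
function watchForError(
  reader: ReadableStreamDefaultReader,
  onError: (reason: unknown) => void
): void {
  // `closed` rejects as soon as the stream errors (or the reader's lock is released),
  // so we learn that the source *becomes* errored, but only in a later microtask —
  // there is no way to synchronously ask whether it *is* already errored.
  reader.closed.catch(onError);
}
```

A `pipeTo()` built this way therefore reacts to a pre-existing error one microtask later than the spec's algorithm would, which is exactly the kind of leniency described above.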
Of course, we can’t just flat-out remove byte stream support from the polyfill, just for the sake of using native streams more. The default should still be a full polyfill, but we might want to give users the option to select which features they want polyfilled (as @surma suggested in another tweet).
Anyway, I still want to give this a try. It might fail catastrophically, but then at least I’ll have a better answer on why we use so little from the native streams implementation. 😅
I’m currently working on splitting the `ReadableStream` class into multiple modules (one for each feature), to get a better understanding of the dependencies between each feature.

After that, I have to figure out how these dependencies should be implemented so they can work with either native or polyfilled streams, without breaking any (or too many) tests. For example: `ReadableStreamPipeTo` uses `AcquireReadableStreamDefaultReader`. For polyfilled streams, this should call our implementation of this abstract operation so we can set `forAuthorCode = false`. However, for native streams, we only have `stream.getReader()`, so we will always have `forAuthorCode = true` (see the sketch below). This means that some tests will fail when implementing `pipeTo()` on top of native readers. I think it’s fine in this case, but this is just one of many cases that will need to be considered.

I’m also worried that some of these dependencies on abstract operations cannot be implemented using only the public API of native streams. This would mean I’d have to approximate them, or leave them out entirely. That means more trade-offs about which test failures are acceptable and which aren’t. For example: `TransformStreamDefaultSinkWriteAlgorithm` checks whether `writable.[[state]] === "erroring"`, but a regular writable sink would only know about this after all pending `write()`s are completed and the sink’s `abort()` method gets called. That means the write algorithm cannot know whether it should skip the `PerformTransform()` call and let the writable stream become errored, which is definitely going to break at least one test.

There are still a lot of questions, and I’m figuring them out as I go. I’m doing this in my spare time, so it’s going to take a bit of time to get there! 😅
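A hedged illustration of the `forAuthorCode` point (the patching trick below is the kind of thing spec tests do; it is not the polyfill's code): read results handed out by the public `getReader()` inherit from `Object.prototype`, so they can be intercepted as thenables.

```ts
// Assumes an ES module with top-level await. Don't do this outside a test:
// a patched Object.prototype.then turns every plain { value, done } result
// into a thenable and can hijack its resolution.
(Object.prototype as any).then = function (onFulfilled: (value: unknown) => void) {
  delete (Object.prototype as any).then; // only intercept once
  onFulfilled({ value: undefined, done: true }); // pretend the stream is already finished
};

const reader = new ReadableStream({
  start(controller) {
    controller.enqueue('a');
    controller.close();
  },
}).getReader();

// With forAuthorCode = true (the only option via the public API), the interception works:
console.log(await reader.read()); // { value: undefined, done: true } instead of { value: 'a', done: false }
```

The spec's internal reads (`forAuthorCode = false`) resolve with null-prototype objects and are immune to this, so a `pipeTo()` layered on top of a native reader cannot pass tests that rely on that immunity.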
The original intention was to extend/patch native streams in the browser. But yes, if at all feasible, we may want to do the same for Node. 🙂
Now that I think about it, it might be good to use subpath imports to import the native implementation (if any). That way, we can avoid importing from `stream/web` entirely when bundling for browsers, without weird `new Function()` hacks around `require()`. 😛

We could then build the polyfill like this:
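A rough, hypothetical sketch of what such a subpath-import setup could look like (the `#native-streams` specifier and file names below are invented, not the polyfill's actual layout):

```ts
// package.json (hypothetical):
//   "imports": {
//     "#native-streams": {
//       "node": "./dist/native-streams-node.js",
//       "default": "./dist/native-streams-dom.js"
//     }
//   }

// native-streams-node.ts — Node's resolver picks this file for "#native-streams",
// so only Node ever loads the built-in module:
export { ReadableStream, WritableStream, TransformStream } from 'node:stream/web';
```

The browser/bundler counterpart (`native-streams-dom.ts` in this sketch) would simply re-export `globalThis.ReadableStream` and friends, so `stream/web` never ends up in a browser bundle.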