How to get the last good value on the stream when an error occurs?
See original GitHub issue

I’m trying to do some error reporting, and I’d like to have both the error and the input value that caused it, so that I can publish both together.
Something like:
```javascript
_([1, 2, 3, 4])
  .map((x) => {
    if (x > 2) throw new Error('Too big!');
    return x + 10;
  })
  .consume((err, x, push, next) => {
    if (err) {
      console.log(err.message); // Should be 'Too big!'
      console.log(x); // Should be 3 (the failing input), not 2, 12, or 13
      publishErrorEvent({ err, x });
      next();
    } else {
      push(null, x); // pass valid values downstream
      next();
    }
  });
```
The problem is that .map always pushes x === undefined into the stream whenever an error is thrown.

Is there a way to get the last valid value in the stream? I’m really hoping I don’t have to cache it in local state or something like that. I’ve tried using latest and last, but those don’t seem to do what I’m looking for.

Note: I’m trying to do this for arbitrary pipelines, so I can’t just publish the error inside the .map.
Issue Analytics
- State:
- Created 3 years ago
- Comments: 6 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I see. That is a bit trickier, but I have three possible solutions. I’ll explain the solutions first, then show how they fit into the test code.
I’m not sure this is really the best approach to the problem. This seems to be the exact use case for flatMap, where for every input (your x) you return a stream of zero, one, or many values; map().sequence(), parallel, merge, or mergeWithLimit would also work. These would let you easily tie inputs to outputs while also controlling how many items are processed at once.
Another, even simpler option is to not make the stream responsible for relating inputs to outputs, and instead rely on logging to make that connection. If it’s just for reporting purposes, you could try something like:
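A minimal sketch of that logging idea in plain JS (`report` here is a hypothetical callback standing in for whatever publishing mechanism you use, e.g. publishErrorEvent):

```javascript
// Sketch: wrap the mapping function so the input that caused the error is
// reported at the point of failure, then re-thrown for the stream to handle.
// `report` is a placeholder for the caller's publishing function.
function reportingMap(fn, report) {
  return (x) => {
    try {
      return fn(x);
    } catch (err) {
      report({ err, x }); // the error and the input that caused it, together
      throw err;          // re-throw so the stream still sees the failure
    }
  };
}
```

It would be used as `.map(reportingMap((x) => ..., publishErrorEvent))`, keeping the reporting next to the point where the input is still in scope.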
This way the stream doesn’t need to track the required state to do what you’re looking for. However, this can certainly be done if the recommended paths above don’t apply.
There is a bit of state, but if you look at the source of latest or last, this is more or less what they do.
This should cover most cases, but there is potential for issues if, say, the pipeline uses more complex async steps where outputs don’t come back in the same order as the inputs.
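The stateful idea can be sketched as a pass-through "tap" (a hypothetical helper; as noted above, it only holds while inputs and outputs stay in order):

```javascript
// Sketch: cache the most recent value entering the pipeline, much like
// latest()/last() track state internally. A downstream error handler can
// then read the cached value when an error arrives.
function makeLastValueTap() {
  let last;
  return {
    tap: (x) => { last = x; return x; }, // e.g. stream.map(tap) before the risky step
    lastValue: () => last,               // read inside the error handler
  };
}
```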
This one is more stream-focused, using zip to relate the inputs to the outputs of the target pipeline. It should hold up better to asynchronous pipelines, but as I said, there are still some unknowns.
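As a rough illustration of the zip approach (plain JS over arrays rather than Highland streams, so the pairing logic is visible; the helper name is hypothetical):

```javascript
// Sketch: run each input through the pipeline step, capturing either the
// result or the error, then zip inputs with their outcomes so every error
// is paired with the value that produced it.
function zipWithOutcomes(inputs, fn) {
  const outcomes = inputs.map((x) => {
    try { return { value: fn(x) }; } catch (err) { return { err }; }
  });
  // pair each input with its outcome by index, as zip would
  return inputs.map((input, i) => ({ input, ...outcomes[i] }));
}
```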
The big difference here is that we’re not throwing errors in a map function, but using flatMap to properly map incoming values to either an error stream or a single-value stream. To me this is the most robust solution, though given its nature there may still be some edge cases with async pipelines.
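A sketch of the flatMap idea, again in plain JS (with Highland you would return a one-element stream or an error stream instead of an array; the tagged-record shape is an assumption for illustration):

```javascript
// Sketch: each input flat-maps to exactly one tagged result, either a value
// record or an error record, so inputs and outputs can never drift apart
// and the error record flows downstream like any other value.
function flatMapOutcomes(inputs, fn) {
  return inputs.flatMap((x) => {
    try {
      return [{ ok: true, x, value: fn(x) }]; // single-value "stream"
    } catch (err) {
      return [{ ok: false, x, err }];         // error "stream"
    }
  });
}
```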
As for using it: pretty much like catchErrors, but it works with the through function.
Let me know if there’s more nuances that this still doesn’t cover.
I do my best to read these issues carefully so I understand what you’re trying to do, but from time to time I may misunderstand, so it may take a couple of tries.
So the problem with the approach above, as you noticed, is that the x value that breaks is never pushed downstream. I would reach for a general higher-order function to wrap your mapping function, like:
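A sketch of such a higher-order function (hypothetical name; it attaches the offending input as a property of the error before re-throwing, so a downstream handler can read it even though x itself arrives as undefined):

```javascript
// Sketch: decorate any unary function so that thrown errors carry the input
// that caused them. Since it is just a plain function wrapper, it works
// with Highland's map/filter/each as well as Array.prototype.map/filter/forEach.
function captureInput(fn) {
  return (x) => {
    try {
      return fn(x);
    } catch (err) {
      err.input = x; // downstream error handlers can read err.input
      throw err;
    }
  };
}
```

A downstream consume or errors handler can then inspect `err.input` instead of the undefined x it would otherwise receive.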
The added bonus is that it can be used with Highland’s map, filter, each, etc., as well as JS’s array map, filter, and forEach functions.