Proposed depth cache change:
Currently, depth cache error handling (in depthHandler) works like this:
```js
} else if ( context.lastEventUpdateId && depth.U !== context.lastEventUpdateId + 1 ) {
  const msg = 'depthHandler: [' + symbol + '] Incorrect update ID. The depth cache is out of sync.';
  if ( options.verbose ) options.log( msg );
  throw new Error( msg );
}
```
In other words, if the first update ID of the new depth event (depth.U) is not equal to the context's lastEventUpdateId + 1, it logs an error message and throws an error.
Let’s go through an example:
First depth update:
- context lastEventUpdateId: 0
- first new depth event ID (depth.U): 1
- last new depth event ID (depth.u): 2
- depth events:
  - set bids { 1: 0000 }
  - set bids { 1.25: 0002 }

Second depth update:
- context lastEventUpdateId: 2
- first new depth event ID (depth.U): 1
- last new depth event ID (depth.u): 3
- depth events:
  - set bids { 1: 0000 }
  - set bids { 1.25: 0002 }
  - set asks { 2: 1 }
Currently the second update would throw an error, because the first new depth event ID != context lastEventUpdateId + 1; all that condition actually indicates here is that you're receiving duplicate updates.
I think this is incorrect behavior, because the second depth update contains all of the old updates AND the new updates. Unless I'm misunderstanding how setting the local cache works, applying old updates twice (i.e. setting bids { 1: 0000 } two times) does not reduce accuracy, but skipping new depth updates does.
Using our example above, on the second depth update there is no harm in putting events 1 and 2 into the local cache again, but there is harm in skipping event 3.
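To make the argument concrete, here is a minimal standalone sketch (not the library's actual code) assuming the depth cache is a plain price-to-quantity map, with hypothetical non-zero quantities. Re-applying an overlapping batch is idempotent, but skipping it loses the new price level:

```javascript
// Hypothetical helper: apply [price, qty] updates to a plain object cache.
// Assumption: a zero quantity removes the level, otherwise it is overwritten.
function applyUpdates(cache, updates) {
  for (const [price, qty] of updates) {
    if (parseFloat(qty) === 0) delete cache[price];
    else cache[price] = qty;
  }
  return cache;
}

// Events 1-2, then the overlapping batch containing events 1-3:
const first = [['1', '0.10'], ['1.25', '0.20']];
const second = [['1', '0.10'], ['1.25', '0.20'], ['2', '1']];

// Re-applying events 1-2 inside the second batch changes nothing,
// and event 3 (price level 2) is added:
const withOverlap = applyUpdates(applyUpdates({}, first), second);
// Skipping the whole second batch loses event 3:
const skipped = applyUpdates({}, first);

console.log('2' in withOverlap); // true: event 3's level is present
console.log('2' in skipped);     // false: event 3 was lost
```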
I would propose changing it to:
- Throw an error ONLY if context.lastEventUpdateId > depth.u (i.e. the last update recorded is newer than the last update being sent, so all the information sent is old)
- Continuing to set all updates otherwise
Or, in code:

```js
} else if ( context.lastEventUpdateId && context.lastEventUpdateId > depth.u ) {
  const msg = '**** depthHandler: [' + symbol + '] Incorrect update ID. The depth cache is out of sync.';
  if ( options.verbose ) options.log( msg );
  throw new Error( msg );
} else {
  for ( const obj of depth.b ) { // bids
    // ...apply each bid update to the local cache as before
  }
  // ...and likewise for depth.a (asks)
}
```
Top GitHub Comments
Hi Keith,
I wish I had taken more screenshots yesterday, since today the socket appears to be working much better.
You can find an example of a *duplicate* update in the screenshots below:

First update: (screenshot)

Second update: (screenshot)
In that example the update is a duplicate with no new information. It wouldn’t hurt to update the cache again, but it also wouldn’t change anything.
Yesterday I was also running into many situations with an overlap, where for example: (screenshot)

Then the next depth update might start at U: 6, so the information for updates 4 and 5 would have been lost. I'll log everything over the next week and try to get some screenshots of this occurring.
The error should not be due to reconnecting, since I have reconnect set to false.
On Sun, May 20, 2018 at 3:43 AM, Keith Kirton notifications@github.com wrote:
Hi @learnathoner. Thanks again for this. After investigating things a bit further this morning, I’d just like to confirm a few things. Have you tested this change locally at all, and confirmed the data is packaged as you expect in your examples? I’m just asking because this doesn’t follow the documentation on how to manage a depth cache locally as appears here.
In your example, the second event, with its overlapping update IDs, would violate the protocol definition as per instruction 6:
In my tests, I've never seen an update fail to conform to that documented pattern, and have only seen cases where the current update's U is greater than the previous update's u + 1, meaning the local cache has missed updates and now has stale data sitting in it. I am currently looking to sort the depthCache logic out with ideas I've had recently about the cause of the issues we've been seeing, and will definitely keep the logic you've posted here in mind. If you are able to post concrete examples of overlapping update data (as received from Binance directly), that would be a huge help.
NOTE: the crucial thing is that this overlap occurs before the first out-of-sync error. If it only appears after a "reconnect" due to being out of sync, the cause is something different and will be removed by the fix I'm currently implementing.