
Proposed depth cache change:


Currently depth cache error handling (in depthHandler) works like this:

} else if ( context.lastEventUpdateId && depth.U !== context.lastEventUpdateId + 1 ) {
    const msg = 'depthHandler: ['+symbol+'] Incorrect update ID. The depth cache is out of sync.';
    if ( options.verbose ) options.log(msg);
    throw new Error(msg);
}

In other words, if the first update ID of the new depth event (depth.U) is not equal to the context’s lastEventUpdateId + 1, it logs an error message and throws.

Let’s go through an example:

First depth update:

  • context.lastEventUpdateId: 0
  • first new depth event ID (depth.U): 1
  • last new depth event ID (depth.u): 2
  • depth events:
    1. set bids { 1: 0000 }
    2. set bids { 1.25: 0002 }

Second depth update:

  • context.lastEventUpdateId: 2
  • first new depth event ID (depth.U): 1
  • last new depth event ID (depth.u): 3
  • depth events:
    1. set bids { 1: 0000 }
    2. set bids { 1.25: 0002 }
    3. set asks { 2: 1 }

Currently the second update would throw an error, because the first new depth event ID (1) != context.lastEventUpdateId + 1 (3). In this case, the mismatch means you’re receiving duplicate updates.

I think this is the wrong action, because the second depth update contains all the old updates AND new updates. Unless I’m misunderstanding how setting the local cache works, applying old updates twice (i.e. setting bids { 1: 00 } two times) does not reduce accuracy, but skipping new depth updates does.

Using our example above, on the second depth update, there is no harm in putting events 1 and 2 into the local cache again, but there is a harm in skipping event 3.
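To make the idempotency argument concrete, here is a minimal sketch (not the library’s actual internals, and `applyEvent` is a hypothetical name): the depth cache maps price to quantity, and each event is an absolute snapshot of one level, so the last write wins and replaying an event changes nothing.

```javascript
// Sketch: a depth cache as a plain price -> quantity map.
// Each event overwrites one level, so identical events are idempotent.
function applyEvent(cache, price, qty) {
  cache[price] = qty; // absolute snapshot of the level; last write wins
  return cache;
}

// Apply events 1 and 2 from the example, then replay event 1 as a duplicate:
const once = applyEvent(applyEvent({}, '1', '0000'), '1.25', '0002');
const twice = applyEvent(once, '1', '0000');

// The cache is unchanged by the duplicate; only skipping event 3 would lose data.
console.assert(JSON.stringify(twice) === JSON.stringify({ '1': '0000', '1.25': '0002' }));
```

The same reasoning would not hold if events were deltas (e.g. “add 5 to quantity”) instead of absolute levels, which is why the snapshot semantics matter here.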

I would propose changing it to:

  • Throw an error ONLY if context.lastEventUpdateId > depth.u (i.e. the last update recorded is newer than the last update being sent, so all the information sent is old)
  • Continue to set all updates otherwise

Or, this:

} else if ( context.lastEventUpdateId && context.lastEventUpdateId > depth.u ) {
    const msg = 'depthHandler: ['+symbol+'] Incorrect update ID. The depth cache is out of sync.';
    if ( options.verbose ) options.log(msg);
    throw new Error(msg);
} else {
    for ( const obj of depth.b ) { // bids
        // ... existing update logic continues unchanged

Issue Analytics

  • State: closed
  • Created 5 years ago
  • Comments: 8 (2 by maintainers)

Top GitHub Comments

1 reaction
learnathoner commented, May 20, 2018

Hi Keith,

I wish I had taken more screenshots yesterday, since today the socket appears to be working much better.

You can find an example of a *duplicate* update in the screenshot below:

First update:

  • context.lastEventUpdateId: 56066208
  • depth.U: 56066209
  • depth.u: 56066211
  • Information for updates 56066209-56066211 is stored in the depth cache, and context.lastEventUpdateId is set to 56066211.

Second update:

  • context.lastEventUpdateId: 56066211
  • depth.U: 56066209
  • depth.u: 56066211
  • Error thrown, no information stored. Processing continues from the next depth update.

In that example the update is a duplicate with no new information. It wouldn’t hurt to update the cache again, but it also wouldn’t change anything.
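The pure-duplicate case can be recognized directly from the IDs. The sketch below uses the issue’s variable names but is illustrative, not library code (`isStaleOrDuplicate` is a hypothetical helper): when depth.u is no newer than the stored lastEventUpdateId, every ID in the range [depth.U, depth.u] has already been applied.

```javascript
// Sketch: an update carries nothing new when its last id (depth.u)
// is at or before the last id already recorded in the cache.
function isStaleOrDuplicate(lastEventUpdateId, depth) {
  return depth.u <= lastEventUpdateId;
}

// The duplicate update from the screenshot example above:
console.assert(isStaleOrDuplicate(56066211, { U: 56066209, u: 56066211 }) === true);
// A genuinely new update is not flagged:
console.assert(isStaleOrDuplicate(56066211, { U: 56066212, u: 56066215 }) === false);
```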

Yesterday I was also running into many situations with an overlap, where for example:

  • context.lastEventUpdateId: 3
  • depth.U: 2
  • depth.u: 5
  • Error thrown, no information stored

Then the next depth update might start at U: 6, so the information for updates 4 and 5 would have been lost. I’ll log everything over the next week and try to get some screenshots of this occurring.
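The difference between the two checks on this overlap scenario can be sketched as follows. Both predicates are hypothetical reductions of the handler logic, not the library’s actual code:

```javascript
// Current behavior: anything other than an exact continuation is rejected.
function currentCheckAccepts(lastId, depth) {
  return depth.U === lastId + 1;
}

// Proposed behavior: only reject updates whose entire range is old.
function proposedCheckAccepts(lastId, depth) {
  return depth.u > lastId;
}

// Overlap scenario from above: lastEventUpdateId = 3, incoming range U=2..u=5.
const lastId = 3;
const overlapping = { U: 2, u: 5 };
console.assert(currentCheckAccepts(lastId, overlapping) === false);  // ids 4-5 dropped
console.assert(proposedCheckAccepts(lastId, overlapping) === true);  // ids 4-5 kept
```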

The error should not be due to reconnecting, since I have reconnect set to false.

On Sun, May 20, 2018 at 3:43 AM, Keith Kirton wrote: [quoted text omitted; it duplicates bkrypt’s comment below]

1 reaction
bkrypt commented, May 20, 2018

Hi @learnathoner. Thanks again for this. After investigating things a bit further this morning, I’d just like to confirm a few things. Have you tested this change locally at all, and confirmed the data is packaged as you expect in your examples? I’m asking because this doesn’t follow the documentation on how to manage a depth cache locally, as it appears here.

In your example, the second event, with its overlapping update IDs, would violate the protocol definition as per instruction 6:

  1. While listening to the stream, each new event’s U should be equal to the previous event’s u+1

In my tests, I’ve never seen an update not conform to that documented pattern; I have only seen cases where the current update’s U is greater than the previous update’s u + 1, meaning the local cache has missed updates and now has stale data sitting in it.
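The documented contract and the two failure modes discussed in this thread can be sketched in a few lines. The function and label names are illustrative, not part of the library:

```javascript
// Sketch of the documented rule: while listening to the stream, each new
// event's U should equal the previous event's u + 1.
function classifyUpdate(prevU, event) {
  if (event.U === prevU + 1) return 'in-sequence';  // the documented pattern
  if (event.U > prevU + 1) return 'gap';            // missed updates: cache is stale
  return 'overlap-or-duplicate';                    // replays already-seen ids
}

console.assert(classifyUpdate(2, { U: 3, u: 5 }) === 'in-sequence');
console.assert(classifyUpdate(2, { U: 5, u: 6 }) === 'gap');
console.assert(classifyUpdate(3, { U: 2, u: 5 }) === 'overlap-or-duplicate');
```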

I am currently looking to sort out the depthCache logic, based on ideas I’ve had recently about what is causing the issues we’ve been seeing, and will definitely keep the logic you’ve posted here in mind. If you are able to post concrete examples of overlapping update data (as received from Binance directly), that would be a huge help.

NOTE: the crucial thing, is that this overlap occurs before the first out of sync error. If it only appears after a “reconnect” due to out of sync, the cause is something different and will be removed by the fix I’m currently implementing.
