Firestore Realtime listener drops a connection and does not reconnect
- Operating System version: Node.js (Alpine-based) Docker container, node:8.7.0-alpine
- Firebase SDK version: 5.4.2
- Library version: @google-cloud/firestore 0.8.2
- Firebase Product: Firestore
We have a simple Firestore realtime listener running in a Docker container on Google Container Engine. A few times the listener has, for some reason, lost its connection and not reconnected. Writes (sets, updates and deletes) still work, but we don't get notifications of updates until we restart the container.
The error message we have gotten a couple of times during the last week:
Error: Error: Endpoint read failed
at sendError (/usr/src/app/node_modules/@google-cloud/firestore/src/watch.js:254:15)
at maybeReopenStream (/usr/src/app/node_modules/@google-cloud/firestore/src/watch.js:268:9)
at BunWrapper.currentStream.on.err (/usr/src/app/node_modules/@google-cloud/firestore/src/watch.js:301:13)
at emitOne (events.js:120:20)
at BunWrapper.emit (events.js:210:7)
at StreamProxy.<anonymous> (/usr/src/app/node_modules/bun/lib/bun.js:31:21)
at emitOne (events.js:120:20)
at StreamProxy.emit (events.js:210:7)
at ClientDuplexStream.<anonymous> (/usr/src/app/node_modules/google-gax/lib/streaming.js:130:17)
at emitOne (events.js:115:13)
Once (the first time) we got this:
Error: Error: Transport closed
at sendError (/usr/src/app/node_modules/@google-cloud/firestore/src/watch.js:254:15)
at maybeReopenStream (/usr/src/app/node_modules/@google-cloud/firestore/src/watch.js:268:9)
at BunWrapper.currentStream.on.err (/usr/src/app/node_modules/@google-cloud/firestore/src/watch.js:301:13)
at emitOne (events.js:120:20)
at BunWrapper.emit (events.js:210:7)
at StreamProxy.<anonymous> (/usr/src/app/node_modules/bun/lib/bun.js:31:21)
at emitOne (events.js:120:20)
at StreamProxy.emit (events.js:210:7)
at ClientDuplexStream.<anonymous> (/usr/src/app/node_modules/google-gax/lib/streaming.js:130:17)
at emitOne (events.js:115:13)
Our code is basically like this:
var admin = require("firebase-admin");
var serviceAccount = require("./credentials.json");

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: "https://xxxxxxxx.firebaseio.com"
});

var db = admin.firestore();

db.collection("demodata").onSnapshot(querySnapshot => {
  querySnapshot.forEach(doc => {
    // .. doing stuff with data
    console.log(doc.data());
  });
});
Is the listener supposed to survive these situations, or should we build some kind of retry system ourselves?
Thanks!
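One way to work around this at the application level is to pass an error callback as the second argument to onSnapshot and re-attach the listener when the stream dies. The sketch below is only an illustration, not behaviour provided by the SDK; the listenWithRetry name and the backoff values are made up:

var admin = require("firebase-admin");
var serviceAccount = require("./credentials.json");

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount)
});

var db = admin.firestore();

function listenWithRetry(attempt) {
  attempt = attempt || 0;
  var unsubscribe = db.collection("demodata").onSnapshot(
    querySnapshot => {
      attempt = 0; // data arrived, so reset the backoff counter
      querySnapshot.forEach(doc => {
        console.log(doc.data());
      });
    },
    err => {
      // The watch stream errored; detach and re-attach with exponential backoff.
      console.error("Listener error, re-attaching:", err);
      unsubscribe();
      var delayMs = Math.min(60000, 1000 * Math.pow(2, attempt));
      setTimeout(() => listenWithRetry(attempt + 1), delayMs);
    }
  );
}

listenWithRetry();

onSnapshot returns an unsubscribe function, so the old stream can be torn down explicitly before a new one is opened.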
These log statements are very helpful. Our retry logic considers these streams healthy once it is able to send out a network packet ("Marking stream as healthy"). Unfortunately, just because the TCP layer accepts our packets doesn't necessarily mean that the outbound network link is active. We may have to retry more aggressively. I will kick off an internal discussion.
We are targeting a release next week.
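For anyone who needs a stopgap in the meantime: since TCP-level acceptance of outgoing packets says nothing about whether snapshots are actually being delivered, an end-to-end liveness check is one way to detect a silently dead listener. This is only a rough sketch; the heartbeat document, the interval, and the re-attach step are all hypothetical and not part of the SDK:

// Assumes `db` and the listener are set up as in the code above.
var HEARTBEAT_INTERVAL_MS = 30000;
var lastSnapshotAt = Date.now();

db.collection("demodata").onSnapshot(querySnapshot => {
  lastSnapshotAt = Date.now(); // any delivered snapshot counts as proof of life
  // ... existing handling ...
});

setInterval(() => {
  // If the snapshot triggered by the previous heartbeat never arrived,
  // assume the stream is dead even though writes still succeed.
  if (Date.now() - lastSnapshotAt > 2 * HEARTBEAT_INTERVAL_MS) {
    console.warn("No snapshots received recently; re-attaching listener");
    // Re-attaching would mean unsubscribing and calling onSnapshot again,
    // e.g. via the listenWithRetry sketch above.
  }
  // Write a change that the listener should observe before the next check.
  db.collection("demodata").doc("_heartbeat").set({ ts: Date.now() });
}, HEARTBEAT_INTERVAL_MS);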