API google.pubsub.v1.Publisher exceeded 5000 milliseconds when running on Cloud Run
See original GitHub issue.
When the image is run locally with Docker, the application successfully publishes messages to Cloud Pub/Sub.
When the same image is deployed to Cloud Run, it won’t publish a single message. All attempts fail with the error:
GoogleError: Total timeout of API google.pubsub.v1.Publisher exceeded 5000 milliseconds before any response was received.
at repeat (/usr/root/node_modules/google-gax/build/src/normalCalls/retries.js:66:31)
at Timeout._onTimeout (/usr/root/node_modules/google-gax/build/src/normalCalls/retries.js:101:25)
at listOnTimeout (internal/timers.js:557:17)
at processTimers (internal/timers.js:500:7)
code 4
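The trailing "code 4" is the gRPC status code DEADLINE_EXCEEDED: google-gax exhausted its total timeout (5000 ms here) without receiving any response from the Pub/Sub API. A minimal sketch of classifying the error, where isPublishTimeout is a hypothetical helper (not part of the library):

```javascript
// gRPC status code 4 is DEADLINE_EXCEEDED: google-gax gave up after its
// total timeout without receiving any response from the Pub/Sub API.
// isPublishTimeout is a hypothetical helper for classifying the error above.
const GRPC_DEADLINE_EXCEEDED = 4

const isPublishTimeout = (err) => Boolean(err) && err.code === GRPC_DEADLINE_EXCEEDED

// The GoogleError surfaced by the stack trace above carries code 4:
const sample = {
  name: 'GoogleError',
  code: 4,
  message: 'Total timeout of API google.pubsub.v1.Publisher exceeded 5000 milliseconds before any response was received.',
}
console.log(isPublishTimeout(sample)) // true
```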
Sample Code:
import { mongoose } from '@shared/connections'
import MongoDbModel from '@shared/models'
import { PubSub } from '@google-cloud/pubsub'
// Note: default values are not valid in import specifiers; fallbacks such as
// REMINDER_COPY_SMS = '' must be applied inside constants.js itself.
import {
  REMINDER_COPY_SMS,
  REMINDER_MAX_DAYS_SINCE_LAST_MSG,
  REMINDER_MAX_DAYS_SINCE_LAST_READING,
  REMINDER_MESSAGE_TEMPLATE,
  TOPICS_MESSAGE_DISPATCH,
  KEYFILE_PATH,
} from '../constants.js'

const pubsubClient = new PubSub({
  projectId: process.env.GOOGLE_CLOUD_PROJECT,
  keyFilename: KEYFILE_PATH,
})
const msMaxSinceLastReading = REMINDER_MAX_DAYS_SINCE_LAST_READING * 24 * 60 * 60 * 1000
const msMaxSinceLastMsg = REMINDER_MAX_DAYS_SINCE_LAST_MSG * 24 * 60 * 60 * 1000

const { model: Patient } = new MongoDbModel(mongoose, 'Patient')

const findInactivePatients = () => Patient.find({
  status: 'ACTIVE',
  $and: [{
    $or: [{
      last_reading_at: { $exists: false },
    }, {
      last_reading_at: { $lt: new Date(Date.now() - msMaxSinceLastReading) },
    }],
  }, {
    $or: [{
      last_reading_reminder_at: { $exists: false },
    }, {
      last_reading_reminder_at: { $lt: new Date(Date.now() - msMaxSinceLastMsg) },
    }],
  }, {
    // Test with known patients only
    $or: [{
      first_name: { $regex: /Bruno/i }, last_name: { $regex: /Soares/i },
    }],
  }],
}, {
  first_name: 1,
  last_name: 1,
  gender: 1,
  last_reading_at: 1,
  phones: 1,
}, {
  lean: true,
})
const updateLastReadingReminderAt = (patient) => Patient.findOneAndUpdate({
  _id: patient._id,
}, {
  last_reading_reminder_at: Date.now(),
})

const createMessagePayload = (patient) => ({
  body: REMINDER_COPY_SMS,
  channels: [{
    name: 'sms',
    contacts: patient.phones.map((phone) => phone.E164),
    specifications: {
      template: REMINDER_MESSAGE_TEMPLATE,
    },
  }],
})
const dispatchMessage = async (patient) => {
  if (!patient.phones || !patient.phones.length) {
    return
  }
  const payload = createMessagePayload(patient)
  console.info('Worker reminder-report-vitals', 'event payload', JSON.stringify(payload))
  try {
    await pubsubClient.topic(TOPICS_MESSAGE_DISPATCH).publishMessage({ json: payload })
    console.info('Worker reminder-report-vitals', 'event published to cloudPubSub')
    await updateLastReadingReminderAt(patient)
    console.info('Worker reminder-report-vitals', 'last_reading_reminder_at updated for patient', patient._id)
  } catch (err) {
    console.error(err)
  }
}

const handler = async () => {
  console.info('Worker reminder-report-vitals', 'execution started')
  try {
    const inactiveList = await findInactivePatients()
    // Await every publish before returning; forEach(dispatchMessage) would
    // fire the async calls without awaiting them, leaving the publishes in
    // flight when the handler returns.
    await Promise.all(inactiveList.map(dispatchMessage))
  } catch (err) {
    console.error(err)
  }
  // Close the shared client only after all publishes have settled; closing it
  // inside dispatchMessage (as originally written) breaks subsequent publishes.
  await pubsubClient.close()
}

export default handler
The code hangs after logging this line:
console.info('Worker reminder-report-vitals', 'event payload', JSON.stringify(payload))
Sample Event Payload:
{"body":"This is {{ORG}}. We haven't been receiving your vitals. Reply \"start\" to report your vitals now. Reply \"stop\" at any time to opt-out from automated reminders.","channels":[{"name":"sms","contacts":["+15555555555"],"specifications":{"template":"reading-reminder"}}]}
Environment details
- OS: Alpine Linux
- Node.js version: 14.16.1
- @google-cloud/pubsub version: 2.18.4
Steps to reproduce
- Create a Pub/Sub topic to receive messages and set its name in constants.js as TOPICS_MESSAGE_DISPATCH.
- Build a Node.js application image from the sample code. To replicate only the issue, the database operations may be removed and the sample event payload used instead.
- Run the container locally with Docker.
- Push the image to GCP Container Registry.
- Deploy a Cloud Run service using the image.
- See the timeout error in Cloud Logs.
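The steps above can be condensed into a minimal repro sketch. The topic name and environment variables are placeholders, and the actual publish only runs when GOOGLE_CLOUD_PROJECT is set, so the payload can be inspected without credentials:

```javascript
// Minimal repro sketch for the steps above: database access removed, the
// sample event payload hard-coded. Topic name and env vars are placeholders.
const payload = {
  body: "This is {{ORG}}. We haven't been receiving your vitals. Reply \"start\" to report your vitals now.",
  channels: [{
    name: 'sms',
    contacts: ['+15555555555'],
    specifications: { template: 'reading-reminder' },
  }],
}

const publishSample = async () => {
  // Lazy import so the payload can be inspected without the package installed.
  const { PubSub } = await import('@google-cloud/pubsub')
  const pubsub = new PubSub()
  const topicName = process.env.TOPICS_MESSAGE_DISPATCH || 'message-dispatch'
  const messageId = await pubsub.topic(topicName).publishMessage({ json: payload })
  console.info('published', messageId)
  await pubsub.close()
}

// Only attempt the network call when GCP credentials/config are present.
if (process.env.GOOGLE_CLOUD_PROJECT) {
  publishSample().catch((err) => {
    console.error('publish failed:', err.code, err.message)
    process.exitCode = 1
  })
}
```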
Issue Analytics
- State:
- Created 2 years ago
- Reactions: 14
- Comments: 48 (1 by maintainers)
Top Results From Across the Web
- Troubleshooting | Cloud Pub/Sub Documentation: Learn about troubleshooting steps that you might find helpful if you run into problems using Pub/Sub. Cannot create a subscription.
- Cloud Run PubSub high latency - node.js - Stack Overflow: "pubsub.v1.Publisher exceeded 60000 milliseconds before any response was received." In this case a message is not sent at all or is highly delayed...
- Google Cloud Pub/Sub Reliability User Guide: Part 1 Publishing: Any unavailability of the publish API can create risk of data loss. So, as an application designer, you must strike a balance between...
- GCP pipeline: pub/sub-lookup-storage (part 2/2) | Syntio: Google Cloud, in addition to Cloud Functions, offers Cloud Run as a serverless option. And since GCP charges by execution time in milliseconds, ...
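The Stack Overflow result above hit the same error with a 60000 ms limit, which suggests checking the publisher's timeout setting. A sketch of raising it, assuming the `gaxOpts`/`timeout` fields of @google-cloud/pubsub's PublishOptions (verify against the installed version); widening the window does not by itself explain why Cloud Run never gets a response:

```javascript
// Sketch: publisher options raising the RPC timeout above the 5000 ms seen
// in the error. gaxOpts/timeout are assumed from @google-cloud/pubsub's
// PublishOptions; pass the object as the second argument to pubsub.topic().
const publishOptions = {
  gaxOpts: {
    timeout: 60000, // total milliseconds before google-gax gives up
  },
}

// usage: pubsubClient.topic(TOPICS_MESSAGE_DISPATCH, publishOptions)
console.log(publishOptions.gaxOpts.timeout) // 60000
```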
Read more >Part XIX. Appendix: Compendium of Configuration Properties
Name Default Description
aws.paramstore.default‑context application
aws.paramstore.enabled true Is AWS Parameter Store support enabled.
aws.paramstore.profile‑separator _
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@apettiigrew Remove the ^ and pin the exact version: "@google-cloud/pubsub": "2.17.0"
Are you calling it concurrently? I had a similar issue when running on GCP: I was sending lots of requests concurrently without awaiting the promises, causing my service to scale down. Instead, you can do multiple publishes in a single request and await the responses (if doing scale-to-zero).
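The workaround in the last comment, publishing in parallel but awaiting every response before the handler returns, can be sketched as follows. `publishAll` is a hypothetical helper; `topic.publishMessage` is the real Pub/Sub API:

```javascript
// Start all publishes, then await every response before returning, so a
// scale-to-zero Cloud Run instance is not throttled while publish requests
// are still in flight. publishAll is a hypothetical helper.
const publishAll = async (topic, payloads) => {
  const results = await Promise.allSettled(
    payloads.map((json) => topic.publishMessage({ json })),
  )
  const failed = results.filter((r) => r.status === 'rejected')
  console.info(`published ${results.length - failed.length}/${results.length} messages`)
  return failed // rejected results, for retry or logging
}
```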