Order not being respected
For some reason, I am getting firehosed with events when dealing with only 2 message keys, even though I am not using auto commit. Any help here would be awesome.
const Kafka = require("node-rdkafka")
const { v4 } = require("uuid")

let outstanding = 0
const wait = time => new Promise(resolve => setTimeout(resolve, time))

const consumer = new Kafka.KafkaConsumer({
  "group.id": "workbench",
  "metadata.broker.list": "localhost:9092",
  "enable.auto.commit": false,
})

const producer = new Kafka.Producer({
  "metadata.broker.list": "localhost:9092",
})

producer.connect()
consumer.connect()

producer
  .on("ready", () => {
    // console.log("PRODUCER READY!")
    // producer.produce("workbench-bar", null, Buffer.from("bar!"), v4())
    setInterval(() => {
      if (outstanding > 0) {
        return
      }
      try {
        for (let i = 0; i < 10; i++) {
          console.log("producing event foo!")
          producer.produce("workbench-foo", null, Buffer.from(v4()), "foo")
          outstanding++
        }
        for (let i = 0; i < 10; i++) {
          console.log("producing event bar!")
          producer.produce("workbench-bar", null, Buffer.from(v4()), "bar")
          outstanding++
        }
      } catch (err) {
        console.error("error producing message")
        console.trace(err)
      }
    }, 10)
  })
  .on("event.error", e => console.trace(e))

consumer
  .on("ready", () => {
    // consumer.unsubscribe(["workbench-foo"])
    consumer.subscribe(["workbench-bar", "workbench-foo"])
    console.log("CONSUMER CONSUMING")
    consumer.consume()
  })
  .on("data", async message => {
    // consumer.seek(0)
    console.log("MESSAGE!")
    console.log(
      `${message.topic}:${message.partition}`,
      message.key.toString("utf8"),
      message.value.toString("utf8"),
      outstanding - 1
    )
    outstanding--
    await wait(1000) // THIS ISN'T WAITING! I AM RECEIVING EVERY SINGLE MESSAGE WITH KEY
    // "bar" ALL AT ONCE! HOW DO I GUARANTEE PROCESSING ORDER?
    consumer.commitMessage(message)
  })
  .on("event", e => console.log(e))
.on("event", e => console.log(e))
Issue Analytics
- Created: 5 years ago
- Comments: 14 (2 by maintainers)
Top GitHub Comments
Data callbacks aren't going to wait; they get emitted as fast as they can on available threads. If you want the behavior you are looking for, you need to do the consume calls manually or use the stream.
You can fix this by integrating async-function-queue: create one queue per topic partition (without concurrency, i.e. concurrency = 1) and enqueue your event-handling jobs (async functions) on the correct queue from the node-rdkafka Consumer `data` event handler. This effectively orders consumption of messages per topic partition, while still allowing concurrent processing of messages on different topics and topic partitions.
Happy consuming!
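The comment above names the async-function-queue package; the same idea can be sketched dependency-free by chaining promises per `topic:partition` key, so handlers for one partition run strictly in order while different partitions proceed concurrently. In this sketch, `enqueue` and `handle` are hypothetical names, and the message objects are simulated rather than coming from a real consumer:

```javascript
const tails = new Map() // partition key -> tail of that partition's promise chain

function enqueue(message, handleMessage) {
  const key = `${message.topic}:${message.partition}`
  const tail = tails.get(key) || Promise.resolve()
  const next = tail.then(() => handleMessage(message))
  // Keep the chain alive even if a handler rejects.
  tails.set(key, next.catch(err => console.error("handler failed", err)))
  return next
}

// Simulated usage (in real code, call enqueue from consumer.on("data", ...)):
const log = []
const handle = async m => {
  await new Promise(resolve => setTimeout(resolve, m.delay))
  log.push(`${m.topic}:${m.partition} ${m.value}`)
}

enqueue({ topic: "workbench-bar", partition: 0, value: "a", delay: 30 }, handle)
enqueue({ topic: "workbench-bar", partition: 0, value: "b", delay: 5 }, handle)
enqueue({ topic: "workbench-foo", partition: 0, value: "c", delay: 10 }, handle)

setTimeout(() => console.log(log.join(", ")), 200)
```

Note that "b" finishes after "a" despite its shorter delay, because both are queued on the same partition chain, while "c" on the other topic runs concurrently. Committing inside the queued handler (rather than in the raw `data` callback) is what makes `commitMessage` line up with actual processing order.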