Feature Request: Consumer<TKey,TValue>.ConsumeAsync
Description
Create the method

async Task<Message<TKey, TValue>> Consumer<TKey, TValue>.ConsumeAsync(TimeSpan timeout, CancellationToken ct)
with the following behavior:
- If the local queue has a message, return that message immediately.
- If the local queue is empty, wait until a new message arrives, and deliver it.
- Preserve await order: pending awaits should be serviced in the order in which they were issued.
This would allow both pull-based (IEnumerable<Message<TKey, TValue>>) and push-based (IObservable<Message<TKey, TValue>>) accessors to be built without individual developers having to reimplement a correct polling loop for every project.
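As a rough illustration of the requested semantics (not part of the Confluent.Kafka API), here is a minimal sketch of a message buffer that returns buffered messages immediately and completes empty-queue awaits in FIFO order; `AsyncMessageQueue` and all member names are hypothetical, and timeout handling is omitted for brevity:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical illustration of the requested ConsumeAsync semantics:
// buffered messages are returned immediately, and awaits issued while
// the queue is empty are serviced in the order they were created.
public class AsyncMessageQueue<T>
{
    private readonly object _gate = new object();
    private readonly Queue<T> _messages = new Queue<T>();
    private readonly Queue<TaskCompletionSource<T>> _waiters =
        new Queue<TaskCompletionSource<T>>();

    // Called by the poll loop when a message arrives from the broker.
    public void Enqueue(T message)
    {
        while (true)
        {
            TaskCompletionSource<T> waiter;
            lock (_gate)
            {
                if (_waiters.Count == 0)
                {
                    _messages.Enqueue(message);   // no one waiting: buffer it
                    return;
                }
                waiter = _waiters.Dequeue();      // oldest await first
            }
            if (waiter.TrySetResult(message))     // skip already-cancelled waiters
                return;
        }
    }

    // If the local queue has a message, return it immediately;
    // otherwise wait until a new message arrives or the token is cancelled.
    public Task<T> ConsumeAsync(CancellationToken ct)
    {
        lock (_gate)
        {
            if (_messages.Count > 0)
                return Task.FromResult(_messages.Dequeue());

            var tcs = new TaskCompletionSource<T>(
                TaskCreationOptions.RunContinuationsAsynchronously);
            ct.Register(() => tcs.TrySetCanceled(ct));
            _waiters.Enqueue(tcs);
            return tcs.Task;
        }
    }
}
```

A real implementation would also need to integrate the timeout and the consumer's internal poll loop, but the waiter queue above is what gives the "awaits serviced in order" guarantee.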
As someone who has recently implemented both of these patterns, this method would have made my job significantly easier, and would also give peace of mind that the error handling is done correctly.
This method should replace Consumer<TKey,TValue>.Consume, which duplicates this functionality but is less flexible.
Issue Analytics
- State:
- Created 6 years ago
- Reactions: 9
- Comments: 28 (16 by maintainers)
Your problem is that you're constructing a consumer and subscribing to a topic before every consume. Constructing a consumer is very expensive, and there is quite a bit of back-and-forth with the cluster before it can start consuming messages (so there is a significant delay before the first message). You should only ever create one consumer instance and keep it alive for the lifetime of your application.
other notes:
Because we don't have ConsumeAsync yet, rather than use a HostedService, I would just set up a dedicated background thread (tied to the app lifetime) and run a standard sync consume loop in that. This is completely fine, just not idiomatic C# (everything is async these days). It's actually more than completely fine: it will be measurably more performant than an async approach, because async comes with a fair bit of overhead (compared to the number of messages per second you can get out of the Kafka consumer!).

Alternatively, you could fake an async consume method using a task:

await Task.Run(() => consumer.Consume(timeout))

That has a lot more overhead than approach #1, but it will allow you to use the standard hosted-service pattern (you'll still get hundreds of thousands of messages a second out of it). Don't use the timer approach; use the async loop approach.

I've implemented ConsumeAsync on a local branch and it's coming in version 1.0; it will be included in the 1.0-experimental-6 NuGet package. Thanks for your feedback. Poll/OnMessage/OnConsumeError are being removed - you're right, they cause too much confusion.
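A sketch of the recommended approach #1: one consumer constructed once and polled on a dedicated background thread for the application's lifetime. This assumes the Confluent.Kafka 1.x API (ConsumerBuilder, ConsumeResult, Consume(CancellationToken)); the broker address, group id, and topic name are placeholders, and it cannot run without a live Kafka cluster:

```csharp
using System;
using System.Threading;
using Confluent.Kafka;

class ConsumeLoop
{
    static void Main()
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = "localhost:9092",   // placeholder
            GroupId = "example-group",             // placeholder
            AutoOffsetReset = AutoOffsetReset.Earliest
        };

        var cts = new CancellationTokenSource();
        Console.CancelKeyPress += (_, e) => { e.Cancel = true; cts.Cancel(); };

        // Construct the consumer exactly once; keep it for the app lifetime.
        var thread = new Thread(() =>
        {
            using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
            consumer.Subscribe("example-topic");   // placeholder
            try
            {
                while (!cts.IsCancellationRequested)
                {
                    // Blocking sync consume; throws OperationCanceledException
                    // when the token is cancelled.
                    var result = consumer.Consume(cts.Token);
                    Console.WriteLine(
                        $"{result.TopicPartitionOffset}: {result.Message.Value}");
                }
            }
            catch (OperationCanceledException) { /* shutting down */ }
            finally { consumer.Close(); }          // commit offsets, leave the group
        });

        thread.Start();
        thread.Join();
    }
}
```

If you need the hosted-service pattern instead, the same loop body can be wrapped in `Task.Run` as described above, at the cost of some extra overhead.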