Change feed processor cannot read from the beginning
Describe the bug
A ChangeFeedProcessor configured to read from the beginning, per this page, is never fed any records.
To Reproduce
Clone https://github.com/CMihalcik/cosmosdb-change-feed-from-beginning, add Cosmos credentials, and run.
Expected behavior
The processor’s ChangesHandler delegate is called with the earliest items in the collection.
Actual behavior
The processor’s ChangesHandler delegate is never called.
Environment summary
SDK Version: 3.3.2
OS Version: macOS 10.13.6
Additional context
When the change feed processor is configured this way, it doesn’t seem to be called with any items, even those created after the processor is started.
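For reference, a minimal sketch of the configuration the issue describes, assuming the .NET SDK V3 API; `container` and `leaseContainer` are placeholder `Microsoft.Azure.Cosmos.Container` instances, and the processor/instance names are hypothetical. Per the SDK documentation, passing `DateTime.MinValue.ToUniversalTime()` to `WithStartTime` requests reading from the beginning of the container’s change feed:

```csharp
// Sketch only: "container" and "leaseContainer" are assumed to be
// existing Microsoft.Azure.Cosmos.Container instances.
ChangeFeedProcessor processor = container
    .GetChangeFeedProcessorBuilder<dynamic>(
        processorName: "fromBeginningProcessor",   // hypothetical name
        onChangesDelegate: async (changes, cancellationToken) =>
        {
            foreach (var item in changes)
            {
                Console.WriteLine($"Received: {item}");
            }
        })
    .WithInstanceName("instance-1")                // hypothetical name
    .WithLeaseContainer(leaseContainer)
    // DateTime.MinValue.ToUniversalTime() is the documented way to
    // request reading from the beginning of the change feed.
    .WithStartTime(DateTime.MinValue.ToUniversalTime())
    .Build();

await processor.StartAsync();
```

With this setup the expectation is that the delegate fires for the earliest items in the collection; per the report, it never fires at all.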
Issue Analytics
- Created 4 years ago
- Comments: 22 (11 by maintainers)
Top Results From Across the Web

Change feed processor in Azure Cosmos DB
The change feed processor is initialized for that specific date and time, and it starts reading the changes that happened afterward.

Reading Azure Cosmos DB change feed
There are two ways you can read from the change feed with a push model: Azure Functions Azure Cosmos DB triggers and the...

How to read documents from Change Feed in Azure ...
1. Lease collection is empty. Insert documents in the monitored collection. Start Change Feed app. All changes received from the first document...

Working with the Azure Cosmos DB Change Feed ...
The Change Feed Processor simplifies the reading of the Change Feed and it distributes the processing of events across multiple consumers. It's...

At-Least-Once delivery using the Azure Cosmos DB Change ...
The change feed processor is part of the Azure Cosmos DB SDK V3. It simplifies the process of reading the change feed...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I haven’t looked at your example; I was only trying to rule out the obvious stuff 😉 (I’m still using V2, as some critical features I rely on are missing from the V3 rendition so far). Perhaps the team will respond in due course.
Let me just clarify my previous comment to make sure we are talking about the same thing (I think we are but just want to make sure 😃 )
For the 20 different change feed processors (with different processorName and lease container configuration), there’s no dependency on the number of physical partitions. You can create as many of these as you’d like.
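A sketch of what the comment above describes, under the same assumptions as before (`container` and `leaseContainer` are existing `Container` instances; the processor names and handler are hypothetical): several independent processors over one monitored container, each with its own `processorName` so lease state is tracked separately.

```csharp
// Hypothetical handler shared by all processors in this sketch.
static async Task HandleChangesAsync(
    IReadOnlyCollection<dynamic> changes, CancellationToken cancellationToken)
{
    foreach (var item in changes)
    {
        Console.WriteLine($"Change: {item}");
    }
}

// Each processor gets a distinct processorName (and instance name), so
// their positions in the change feed are independent of one another
// and of the number of physical partitions.
var processors = new List<ChangeFeedProcessor>();
foreach (var name in new[] { "auditProcessor", "indexProcessor", "cacheProcessor" })
{
    var p = container
        .GetChangeFeedProcessorBuilder<dynamic>(name, HandleChangesAsync)
        .WithInstanceName($"{name}-instance-1")
        .WithLeaseContainer(leaseContainer)
        .Build();
    processors.Add(p);
    await p.StartAsync();
}
```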
For a particular deployment unit, the number of instances is bound by the number of physical partitions. If this upper bound is an issue you could either split up the work between different deployment units (basically make the logic in each delegate quicker to execute) or temporarily raise (then lower) throughput to increase the number of physical partitions. For example, you could try raising throughput to 30,000 RUs then lowering back down to your desired amount a few hours later.
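The throughput workaround above can be sketched as follows, assuming the V3 `Container.ReplaceThroughputAsync` API and the 30,000 RU figure from the comment; note that, at the time of this issue, lowering throughput did not merge partitions back, so the extra parallelism persists:

```csharp
// Temporarily raise provisioned throughput to trigger partition splits,
// increasing the number of physical partitions (and thus the upper bound
// on processor instances per deployment unit).
await container.ReplaceThroughputAsync(30000);

// ... wait a few hours for the splits to complete ...

// Lower back to the desired amount; the split partitions remain.
await container.ReplaceThroughputAsync(4000); // hypothetical original RU value
```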
You can track progress on User Voice: https://feedback.azure.com/forums/263030-azure-cosmos-db?filter=top&page=2. I don’t believe this entry has been submitted yet but please feel free to submit and we will post periodic updates.