
Provide partitionid to ChangeFeedObserverFactory.Create

See original GitHub issue

Is your feature request related to a problem? Please describe.

No - there’s no problem; I have a workaround, but it involves boilerplate…

Describe the solution you’d like

I’d like an overload (or variant) of ChangeFeedObserverFactory.Create that gets to know the partition id being assigned at the time the observer is created.

I note that you guys are doing some work making fluent builders around this, so I wanted to make you aware of things I’ve yearned for in this space, in case there’s potential overlap with your plans.

Describe alternatives you’ve considered

Right now I do lots of convoluted stuff in the OpenAsync handler, which I would prefer to do in the constructor of my ChangeFeedObserver, so I’m not reliant on the assumption that OpenAsync will be called before ProcessChangesAsync - this would allow me to make a much more succinct wrapper.
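For concreteness, here is a minimal sketch of that workaround shape (not the actual code), assuming interfaces along the lines of the V2 change feed processor library (IChangeFeedObserver, IChangeFeedObserverContext.PartitionKeyRangeId); exact namespaces and signatures may differ in the V3 SDK, and Producer is a hypothetical stand-in for whatever per-range resource needs the range id up front:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.ChangeFeedProcessor.FeedProcessing;

// Hypothetical per-range resource; stands in for whatever needs the range id at startup.
sealed class Producer : IDisposable
{
    public static Producer Start(string partitionKeyRangeId) => new Producer();
    public Task HandleAsync(IReadOnlyList<Document> docs, CancellationToken ct) => Task.CompletedTask;
    public void Dispose() { }
}

class LateBoundObserver : IChangeFeedObserver
{
    Producer producer;

    public Task OpenAsync(IChangeFeedObserverContext context)
    {
        // The setup one would rather do in the constructor has to happen here,
        // because this is the first place the range id becomes visible.
        producer = Producer.Start(context.PartitionKeyRangeId);
        return Task.CompletedTask;
    }

    public Task ProcessChangesAsync(IChangeFeedObserverContext context, IReadOnlyList<Document> docs, CancellationToken cancellationToken)
    {
        // Relies on the assumption that OpenAsync has already run for this instance.
        return producer.HandleAsync(docs, cancellationToken);
    }

    public Task CloseAsync(IChangeFeedObserverContext context, ChangeFeedObserverCloseReason reason)
    {
        producer?.Dispose();
        return Task.CompletedTask;
    }
}

The fragile part is exactly the assumption noted above: nothing in the factory contract ties the per-range setup to construction, so it only works because OpenAsync happens to run first. Handing the context (and hence the range id) to the factory would remove that assumption.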

Additional context

Sketch syntax would be:

IChangeFeedObserver CreateObserver(IChangeFeedObserverContext ctx) => new MyObserver(ctx);

var processor = await builder.WithObserverFactory(CreateObserver).BuildAsync();

This will let me write the following F#, which can also be done in C# pretty cleanly:

type Observer(log, partitionId) =
    inherit DefaultObserver()
    let producer = Producer.Start(partitionId)
    do log.Information("Started {range}", partitionId)
    new (log, ctx : IChangeFeedObserverContext) = new Observer(log, ctx.PartitionKeyRangeId)
    interface IDisposable with
        member _.Dispose() =
            producer.Dispose()
            log.Information("Disposed {range}", partitionId)
    override _.ProcessChangesAsync(ctx, docs, ct) = ...
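
And a rough C# equivalent of the same idea, under the same assumed factory shape. Producer is the hypothetical stub from the earlier workaround sketch, a stub DefaultObserver base is included here to keep it self-contained, and ILogger is Serilog-style, matching the message templates in the F#:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.ChangeFeedProcessor.FeedProcessing;
using Serilog;

// Hypothetical convenience base, as in the F# sketch: no-op OpenAsync/CloseAsync.
abstract class DefaultObserver : IChangeFeedObserver
{
    public virtual Task OpenAsync(IChangeFeedObserverContext context) => Task.CompletedTask;
    public virtual Task CloseAsync(IChangeFeedObserverContext context, ChangeFeedObserverCloseReason reason) => Task.CompletedTask;
    public abstract Task ProcessChangesAsync(IChangeFeedObserverContext context, IReadOnlyList<Document> docs, CancellationToken cancellationToken);
}

sealed class Observer : DefaultObserver, IDisposable
{
    readonly ILogger log;
    readonly string partitionId;
    readonly Producer producer; // same hypothetical Producer stub as the earlier sketch

    public Observer(ILogger log, string partitionId)
    {
        this.log = log;
        this.partitionId = partitionId;
        producer = Producer.Start(partitionId);
        log.Information("Started {range}", partitionId);
    }

    // The overload the feature request is about: construct directly from the context.
    public Observer(ILogger log, IChangeFeedObserverContext ctx) : this(log, ctx.PartitionKeyRangeId) { }

    public override Task ProcessChangesAsync(IChangeFeedObserverContext ctx, IReadOnlyList<Document> docs, CancellationToken ct) =>
        producer.HandleAsync(docs, ct);

    public void Dispose()
    {
        producer.Dispose();
        log.Information("Disposed {range}", partitionId);
    }
}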

I’d be delighted to road test any proposed syntax on any branch any time.

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

1 reaction
bartelink commented, Sep 10, 2019

(Summarizing a tangent I injected in https://github.com/Azure/azure-cosmos-dotnet-v3/pull/782#issuecomment-529714013.) @ealsur, perhaps you can shed some light on how you guys envisage the API as a whole addressing the following concerns:

Diagnostics:

  • if 3 ranges have been assigned to a processor host but one is failing to read (imagine a hotspot is consuming all the RUs), one needs to be able to see which range is stalled
  • in general, one wants to be able to distinguish which ranges are progressing when analysing throughput

Manual checkpointing and/or being able to read-ahead:

I cover it further in #616, but the deeper need is to be able to break the temporal coupling between 1) reading a batch from the change feed, 2) checkpointing, 3) requesting the next batch, and 4) getting to process the next one. There are lots of scenarios where you want to read continuously, aggregating the processing being performed, and apply backpressure only once N batches are in progress.

Decoupling/overlapping the reading of data, processing and writing of checkpoints:

There are also performance benefits to allowing the consumer to control the read-ahead by being able to return immediately. This is especially important when batch size limits can only be expressed via a max item count: with varying document sizes, 10 items can be anywhere from 10 KB to 1 MB, and paying for a round trip to the aux collection between each pull is pretty harmful to throughput.
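
As a sketch of the decoupling being described (independent of any SDK surface; every name here is illustrative), a bounded channel is enough to express “return immediately from the batch callback, process and checkpoint elsewhere, and push back once N batches are in flight”:

using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

sealed class ReadAheadPipeline<TBatch>
{
    readonly Channel<TBatch> buffer;

    // maxBatchesInFlight is the N at which backpressure kicks in.
    public ReadAheadPipeline(int maxBatchesInFlight) =>
        buffer = Channel.CreateBounded<TBatch>(maxBatchesInFlight);

    // Called from the change feed callback: completes as soon as there is room,
    // which is what lets the reader keep pulling ahead of the processing.
    public ValueTask EnqueueAsync(TBatch batch, CancellationToken ct) =>
        buffer.Writer.WriteAsync(batch, ct);

    // Runs independently: process each batch and checkpoint it, without
    // holding up the reader in the meantime.
    public async Task ConsumeAsync(Func<TBatch, Task> handleAndCheckpointAsync, CancellationToken ct)
    {
        await foreach (var batch in buffer.Reader.ReadAllAsync(ct))
            await handleAndCheckpointAsync(batch);
    }
}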

0 reactions
bartelink commented, Jul 10, 2019

I understand that the range assignments are fluid. The comparison with Kafka was not based on me using the rangeId e.g. as a sharding key - I appreciate it’s not sufficiently stable.

My point here was that various partitions can deliver payloads at varying levels of throughput. If I’m building a consumer, I’d like to be able to control throughput/backpressure by saying “I’ve got 2MB in flight from partition 2, let’s not declare completion for this range until we get to work some of that off”. The old API used to let me do this - if the new API only says “you got a batch” without saying where it came from, I can’t do that.

The bottom line is that it was useful for troubleshooting purposes to be able to see the varying throughput levels across the ranges. I’d be very concerned to lose this key information in favor of a scheme where batches just get fired at me without any ability to e.g. identify that 3 partitions are stuck for some reason.

(Ironically Kafka doesn’t let you control at this level, but it does offer richer diagnostics than even the V2 CFP did - losing this information would be very significant for operability)
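
To make the per-range backpressure point concrete, here is a purely illustrative sketch (not an SDK API): accounting of in-flight bytes keyed by range id, which is only possible if each batch says which range it came from. The same map is also what gives the per-range progress visibility mentioned under Diagnostics above.

using System.Collections.Concurrent;

sealed class RangeBackpressure
{
    readonly ConcurrentDictionary<string, long> inFlightBytes = new ConcurrentDictionary<string, long>();
    readonly long maxInFlightBytesPerRange; // e.g. the 2MB-per-range figure mentioned above

    public RangeBackpressure(long maxInFlightBytesPerRange) =>
        this.maxInFlightBytesPerRange = maxInFlightBytesPerRange;

    // Record a batch arriving from a given range; false means that range already has
    // too much unprocessed work, so hold off on declaring completion / pulling more
    // for it until some of the backlog is worked off.
    public bool TryAdmit(string rangeId, long batchBytes)
    {
        long total = inFlightBytes.AddOrUpdate(rangeId, batchBytes, (_, current) => current + batchBytes);
        return total <= maxInFlightBytesPerRange;
    }

    // Call when a batch from that range has been fully processed.
    public void MarkProcessed(string rangeId, long batchBytes) =>
        inFlightBytes.AddOrUpdate(rangeId, 0, (_, current) => current - batchBytes);
}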

Read more comments on GitHub >

