ExecuteStoredProcedureStreamAsync is only half-Stream
tl;dr: I would like an overload of `ExecuteStoredProcedureStreamAsync` which takes a `Stream` as a parameter.
`Scripts.ExecuteStoredProcedureStreamAsync` returns a `ResponseMessage` (i.e., a `Stream`), so we can handle the response however we wish. Streams are handy because we control serialization (among other reasons).
However, it takes `dynamic[]` as the parameter representing the stored procedure's arguments. In `ScriptsCore`, this is converted to a stream using `this.clientContext.CosmosSerializer.ToStream<dynamic[]>(parameters);`. In other words, it uses `CosmosSerializer`.
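To make the asymmetry concrete, here is a rough sketch of a call today (container, sproc id, and partition key value are placeholders; the signature is from the v3 .NET SDK):

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class SprocDemo
{
    public static async Task CallAsync(Container container)
    {
        ResponseMessage response = await container.Scripts.ExecuteStoredProcedureStreamAsync(
            "sprocId",
            new PartitionKey("pk"),
            new dynamic[] { "arg1", 42 });     // input: still serialized by CosmosSerializer

        using Stream body = response.Content;  // output: raw stream, no SDK deserialization
        // ...consume body with a serializer of your choosing...
    }
}
```

The response side is fully stream-based; only the input side forces a trip through the SDK's serializer.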
While I suppose this is fine, I would like more control over when things get serialized.
From a higher-level point of view, I would prefer that my entire experience with Cosmos be Stream-based, and that when using Stream methods there is "no such thing" as serialization. In other words, Stream-based methods should never serialize nor deserialize "user" objects (though they can still serialize internal structures, e.g. the change feed's [documents:[…]] envelope).
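For comparison, the item APIs already behave this way: the stream variants never touch the user's objects in either direction. A sketch using the real `ReadItemStreamAsync` (identifiers are placeholders):

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class StreamReadDemo
{
    public static async Task<Stream> ReadRawAsync(Container container)
    {
        // No CosmosSerializer is involved for the document body.
        ResponseMessage response = await container.ReadItemStreamAsync(
            id: "item-id",
            partitionKey: new PartitionKey("pk"));
        response.EnsureSuccessStatusCode();
        return response.Content; // raw JSON stream; deserialize (or not) as you wish
    }
}
```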
Challenges with adding an overload to `ExecuteStoredProcedureStreamAsync`:
- The server expects an array, but the user may pass something else; since the payload is an opaque stream, we can't tell the difference until the server (probably) gives us a BadRequest response.
- `dynamic[] parameters` and `Stream streamPayload` cannot occupy the same parameter position, because `null` can be passed in, and changing which overload `null` resolves to would be a breaking change.
Mitigations:
If the user serializes something that's not an array, just allow it and let the server blow up with a BadRequest. No big deal.
As to the parameter positions:
- Option 1: Change the positions. It seems "awkward", but `string storedProcedureId, Stream streamPayload, PartitionKey partitionKey` is safely differentiated from `string storedProcedureId, PartitionKey partitionKey, dynamic[] parameters`.
- Option 2: Change the name. But I can't think of a non-awkward alternative name for `ExecuteStoredProcedureStreamAsync`.
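A minimal sketch of how Option 1 disambiguates (hypothetical overload, not the SDK's final API; the SDK types are stubbed here just to show overload resolution):

```csharp
using System.IO;
using System.Threading.Tasks;

// Stand-ins for the SDK types, purely for illustration.
public readonly struct PartitionKey { public PartitionKey(string value) { } }
public class ResponseMessage { }

public abstract class ScriptsSketch
{
    // Existing method: the SDK serializes `parameters` for you.
    public abstract Task<ResponseMessage> ExecuteStoredProcedureStreamAsync(
        string storedProcedureId,
        PartitionKey partitionKey,
        dynamic[] parameters);

    // Proposed overload: the caller supplies the already-serialized payload.
    // Swapping the positions keeps calls unambiguous: the second argument's
    // type (Stream, a reference type, vs PartitionKey, a struct) selects the
    // overload, so a `null` third argument can no longer cause ambiguity.
    public abstract Task<ResponseMessage> ExecuteStoredProcedureStreamAsync(
        string storedProcedureId,
        Stream streamPayload,
        PartitionKey partitionKey);
}
```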
I’ll post a PR for this. But I realize I might be stepping on some design-toes so feel free to consider the PR as just a further explanation to this issue.
Issue Analytics
- State:
- Created 4 years ago
- Comments: 8 (8 by maintainers)
Top GitHub Comments
Stream APIs make full sense if the required payload arrives at the application layer (Web API, etc.) already as a stream. If the application is preparing the stream itself, then serialization happens at least once on the client (in the SDK or in the application).
We are not against having a stream input API for SPROC. But it would be best to address #1057.
/cc: @ealsur, @abhijitpai
@abhijitpai Transactions.
The bounds are the 2MB payload limit.
What I'd like is predictability in transactions. It's one of the reasons I was so excited to see `TransactionalBatch`, and why I was so disappointed to run into #1057, the 100-document limit. I suppose that 100 documents is… well, predictable.
The way I see it is that stored procedures have an unpredictable ability to do work before saying “Welp, that’s all for you this time!”. But whatever that limit is, it seemed to handle whatever we threw at it within the 2MB payload limit without any problems.
My perception when I saw `TransactionalBatch` was that there was now predictable functionality: as long as I stayed within the 2MB payload limit, the server would accept it. In many ways, 100 documents is more predictable than 2MB. Maybe it's the right way to go (if you need limits). Selfishly, I'd prefer that only the 2MB limit existed. Even more selfishly, I'd prefer that there were no limits at all. But knowing the feasibility of this in your back-end is beyond me.
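For context, a minimal `TransactionalBatch` sketch (the fluent API is real v3 SDK; container and item shapes are placeholders). Every operation shares one partition key, the batch succeeds or fails atomically, and the limits discussed above apply to the batch as a whole:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class BatchDemo
{
    public static async Task RunAsync(Container container)
    {
        TransactionalBatch batch = container
            .CreateTransactionalBatch(new PartitionKey("pk"))
            .CreateItem(new { id = "1", pk = "pk" })
            .CreateItem(new { id = "2", pk = "pk" });
        // Server-side limits: at most 100 operations per batch,
        // and a 2MB request payload.
        using TransactionalBatchResponse response = await batch.ExecuteAsync();
        if (!response.IsSuccessStatusCode)
        {
            // All operations in the batch were rolled back together.
        }
    }
}
```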