Allow ISKFunction to return IAsyncEnumerable
Is there any way currently to allow ISKFunction to return an IAsyncEnumerable? If not, is there any plan to?
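For context, this is the general shape of an async-stream-returning method in C#. This is a minimal sketch: the `GenerateAsync` name and the token-splitting body are illustrative stand-ins, not part of Semantic Kernel.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class StreamingDemo
{
    // Yields tokens as they become available, instead of buffering
    // the whole completion into a single string.
    public static async IAsyncEnumerable<string> GenerateAsync(string prompt)
    {
        foreach (var token in prompt.Split(' '))
        {
            await Task.Delay(10); // stand-in for a network round-trip
            yield return token;
        }
    }

    public static async Task Main()
    {
        // The consumer starts reading before generation has finished.
        await foreach (var token in GenerateAsync("hello streaming world"))
        {
            Console.WriteLine(token);
        }
    }
}
```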
Issue Analytics
- State:
- Created 4 months ago
- Reactions: 1
- Comments: 6 (2 by maintainers)
Top Results From Across the Web
- Return IAsyncEnumerable from an async method — What I'm trying to say is that async only supports certain return types. Having IAsyncEnumerable<T> as the return type should be a compiler …
- Passing through IAsyncEnumerable<T> within an async … — Using an async on a method signature with a return of IAsyncEnumerable forces you to yield each item back instead of being able …
- IAsyncEnumerable with yield in C# - Code Maze — Async Streams or IAsyncEnumerable<T> provides a way to iterate over an IEnumerable collection asynchronously while using the yield operator to …
- Async Enumerable in C# (Part 1) — Let's start by focusing at the "top" of a LINQ pipeline. What if the method that produces my sequence needs to make some …
- C# Async Yield Return: IAsyncEnumerable will change … — Usually an async method will return a task. Your first thought when using yield return in your async method may be to have …
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Since you don’t have this yet in Semantic Kernel, let me give my view on this:
I've been working on my own implementation of generative LLM integration here: https://github.com/MithrilMan/AIdentities for a couple of months (I didn't know about Semantic Kernel, and by the time I found it, it was too late and it didn't fully match my vision anyway), and I implemented what @wayne-o is asking for in my design: I have a concept of Skills, and when a skill is executed, this is the signature of the call:
IAsyncEnumerable<Thought> ExecuteAsync(Prompt prompt, SkillExecutionContext context, CancellationToken cancellationToken);
The core of my skill/brain concept is here (the readme isn't updated): https://github.com/MithrilMan/AIdentities/tree/25e60f875979a0d595561fb7d0f1f5e82e5ab7a1/src/AIdentities.Shared/Features/CognitiveEngine
It works, but I keep running into a problem around IAsyncEnumerable: when the API fails in the middle of generating streamed content, it fails badly, because the failure only surfaces once you start consuming the iterator, and .NET doesn't have a nice way to handle that. So if you want to implement a retry pattern, for example with Polly (and we've seen how important that is with OpenAI, which has lately had big problems serving its API properly), you can't just wrap the call to the streaming method, because the exception is of course only raised when you materialize the enumerator. Unfortunately, .NET doesn't allow a yield return inside a try block that has a catch clause, so you have to build all sorts of workarounds to get a proper retry method.
There's also the question of what to do if something goes wrong in the API while you're streaming chunks of text. In a simple chat application you can just ask the user to send the message again, but if the stream is part of a skill (or SKFunction, as you call it), you'll very likely need to inform the consumer that you have to start over and that it should discard the partially generated text (= higher complexity).
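One common workaround for the yield-in-try restriction is to drive the enumerator by hand, so that the try/catch wraps only `MoveNextAsync` and the `yield return` sits outside it. The sketch below is one such workaround under assumptions: `WithRetry`, `streamFactory`, and `maxAttempts` are hypothetical names, not Polly or Semantic Kernel APIs. Note that a restart replays items that were already yielded, so the consumer must be prepared to discard partial output, which is exactly the complexity described above.

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;

public static class ResilientStream
{
    // Retries a failing async stream by re-creating it from scratch.
    // C# forbids `yield return` inside a try block that has a catch
    // clause, so the advance (MoveNextAsync) is wrapped in try/catch
    // while the yield happens outside of it.
    public static async IAsyncEnumerable<T> WithRetry<T>(
        Func<IAsyncEnumerable<T>> streamFactory,
        int maxAttempts = 3,
        [EnumeratorCancellation] CancellationToken ct = default)
    {
        for (var attempt = 1; ; attempt++)
        {
            // Each attempt re-creates the stream from the beginning,
            // so already-yielded items are replayed on a retry.
            await using var e = streamFactory().GetAsyncEnumerator(ct);
            while (true)
            {
                bool hasNext;
                try
                {
                    // Only the advance is inside the try/catch...
                    hasNext = await e.MoveNextAsync();
                }
                catch when (attempt < maxAttempts)
                {
                    break; // swallow and restart on the next attempt
                }
                if (!hasNext) yield break;
                // ...so this yield is legal: it is outside the try block.
                yield return e.Current;
            }
        }
    }
}
```

On the last attempt the `when` filter is false, so the exception propagates to the consumer instead of being swallowed.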
I haven't yet figured out how to handle such scenarios properly and effectively; in my case, when it fails, it fails badly. This makes me wonder whether it's worth using streaming calls in skills at all.
Of course it's cool to see my Skill producing text as a stream, so the user can start reading while it's still generating. What I call an AIdentity can also generate different kinds of "thoughts" that can be streamed too (think of it as a kind of log in some scenarios). But at the same time, if it breaks in the middle, you'll have big trouble rolling back whatever you already did with the partially generated text, not to mention the minor problems it can cause if you try to render that text in a markdown viewer, for example (that's why, in my chat section, I recently started building the message in a plain div and switching to markdown once it's complete). I'm starting to think that maybe all this complexity isn't worth the effort.
I'm a bit torn as to whether implementing streaming at the function level is something I'd want to deal with. Maybe just being able to signal consumer events like "StartingTextGeneration" / "EndingTextGeneration" would be enough.
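That event-based alternative could be sketched like this. `StartingTextGeneration` and `EndingTextGeneration` are the hypothetical names from the comment above, and the generator body is a stand-in for a real model call; the point is that the consumer only ever sees complete text, never a partial stream.

```csharp
using System;
using System.Threading.Tasks;

public class TextGenerator
{
    // Lifecycle signals instead of a token stream: the consumer can
    // show a "generating..." indicator, but never holds partial text.
    public event Action? StartingTextGeneration;
    public event Action<string>? EndingTextGeneration;

    public async Task<string> GenerateAsync(string prompt)
    {
        StartingTextGeneration?.Invoke();
        await Task.Delay(10);            // stand-in for the model call
        var result = $"echo: {prompt}";  // stand-in for generated text
        EndingTextGeneration?.Invoke(result);
        return result;
    }
}
```

Because the text is handed over only once it is complete, there is nothing to roll back on failure, at the cost of losing the "read while it generates" experience.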
@RogerBarreto has some ideas in the works to make the requested behavior possible. TBD - when we get those changes in, let’s make sure to tag this issue.