Kubernetes deploy with Durable Functions
We recently ran into an issue when moving our Durable Functions to a Kubernetes cluster. We are using `func kubernetes deploy` to generate the YAML file.
By default the queue-based functions are split from the HTTP-triggered functions, but they share the same Secrets. Because the orchestrator function is disabled in the HTTP deployment, the HTTP function cannot start a new instance of the orchestrator and fails with this error:
```
Error: "The function 'RoadsideBatteryJobWorkflow' doesn't exist, is disabled, or is not an orchestrator function. Additional info: No orchestrator functions are currently registered!" "Asgard.Odin.Workflows.RoadsideBatteryJob.Triggers.StartJobWorkflow"
System.ArgumentException: The function 'RoadsideBatteryJobWorkflow' doesn't exist, is disabled, or is not an orchestrator function. Additional info: No orchestrator functions are currently registered!
   at Microsoft.Azure.WebJobs.Extensions.DurableTask.DurableTaskExtension.ThrowIfFunctionDoesNotExist(String name, FunctionType functionType) in D:\a\r1\a\azure-functions-durable-extension\src\WebJobs.Extensions.DurableTask\DurableTaskExtension.cs:line 1062
   at Microsoft.Azure.WebJobs.Extensions.DurableTask.DurableClient.Microsoft.Azure.WebJobs.Extensions.DurableTask.IDurableOrchestrationClient.StartNewAsync[T](String orchestratorFunctionName, String instanceId, T input) in D:\a\r1\a\azure-functions-durable-extension\src\WebJobs.Extensions.DurableTask\ContextImplementations\DurableClient.cs:line 140
   at Asgard.Odin.Workflows.RoadsideBatteryJob.Triggers.StartJobWorkflow.Run(HttpRequestMessage request, String instanceId, IDurableClient orchestrationClient) in /src/dotnet-function-app/Asgard.Odin.Workflows.RoadsideBatteryJob/Triggers/StartJobWorkflow.cs:line 59
```
I have spoken to the Durable Functions extension team, and the workaround for this is to have a different configuration for each of these two deployments in Kubernetes, since the `AzureWebJobsStorage` setting has to be different for each.
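As a sketch of that workaround (the Deployment name, Secret name, and key here are hypothetical, not taken from the generated file), each Deployment can point its `AzureWebJobsStorage` environment variable at its own value instead of relying on the shared Secret wholesale:

```yaml
# Hypothetical fragment of one of the two generated Deployments.
# The other Deployment would reference a different key (or Secret),
# so each deployment gets its own AzureWebJobsStorage value.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-http            # hypothetical name for the HTTP-triggered deployment
spec:
  template:
    spec:
      containers:
        - name: myapp-http
          env:
            - name: AzureWebJobsStorage
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets            # the shared Secret
                  key: AzureWebJobsStorage-http  # hypothetical per-deployment key
```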
My question is: is it possible to prevent this split in the generated YAML file? Or is there another way to handle this, for example via environment variables in the YAML file?
Thanks in advance
Issue Analytics
- Created: 3 years ago
- Comments: 10 (5 by maintainers)
Top GitHub Comments
For those working on non-C# Durable Function apps, such as Python or JavaScript function apps, here is an example of using the `externalClient` setting in your HTTP starter's function.json to enable your HTTP and non-HTTP deployments to reach each other: https://github.com/microsoft/durabletask-mssql/issues/41#issuecomment-885299181

Setting `externalClient` to `true` in your HTTP starter's function.json will disable the local check for the orchestrator you are trying to trigger, and will allow the requested orchestrator to be scheduled.

Right, I'm aware of the two-separate-pod problem. That's why I mentioned the version of the extension. Strangely, I didn't run into this problem when using `func kubernetes deploy` when I was testing last week.

BTW, I looked through the code just now, and I think there is a workaround where you can add `"externalClient": true` to your binding description in function.json to disable this validation.

The fact that there is no ScaledObject for Durable Functions triggers is a known issue. We recently added this for Durable Function apps that use our (yet to be announced) SQL backend (https://github.com/Azure/azure-functions-core-tools/pull/2503), but haven't done any work for this in the existing production scenarios. I think the idea is that you'd need to configure an external scaler. @TsuyoshiUshio I believe you worked on an external scaler for Durable Functions? Can you point to any documentation for this?
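Putting the comments above together, the `externalClient` workaround looks roughly like this in the HTTP starter's function.json (the function and binding names here are illustrative, and depending on your extension version the client binding `type` may be `orchestrationClient` rather than `durableClient`):

```json
{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "request",
      "methods": [ "post" ]
    },
    {
      "type": "durableClient",
      "direction": "in",
      "name": "orchestrationClient",
      "externalClient": true
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}
```

With `"externalClient": true`, the client binding skips the local registration check, so the HTTP deployment can schedule an orchestrator that only exists in the other (non-HTTP) deployment.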