Concurrent Connections
This might be expected behaviour. I’ve created a logic app which runs the following every ten minutes:
```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "actions": {
      "HTTP": {
        "inputs": {
          "body": { "warmup": true },
          "method": "POST",
          "uri": "https://pachal-survey.azurewebsites.net/api/SubmitGeo"
        },
        "runAfter": {},
        "type": "Http"
      },
      "HTTP_2": {
        "inputs": {
          "body": { "warmup": true },
          "method": "POST",
          "uri": "https://pachal-survey.azurewebsites.net/api/SubmitGeo"
        },
        "runAfter": {},
        "type": "Http"
      },
      "HTTP_3": {
        "inputs": {
          "body": { "warmup": true },
          "method": "POST",
          "uri": "https://pachal-survey.azurewebsites.net/api/SubmitGeo"
        },
        "runAfter": {},
        "type": "Http"
      }
    },
    "contentVersion": "1.0.0.0",
    "outputs": {},
    "parameters": {},
    "triggers": {
      "Recurrence": {
        "recurrence": { "frequency": "Minute", "interval": 10 },
        "type": "Recurrence"
      }
    }
  }
}
```
What I notice is that one of the three invocations has a 3 second response time, and the remaining two have 30 second response times from the PowerShell function. I’ve been running this test for the past two days. If a PowerShell function on a consumption plan has more than one concurrent connection, is it supposed to cause a cold startup for each concurrent request?
Issue Analytics
- State:
- Created 4 years ago
- Comments: 11 (4 by maintainers)
Top GitHub Comments
@ghclo Please note, however, that the CPU context switching overhead may or may not be noticeable depending on what your function code does. Within the 10-lane highway analogy, a single toll booth will not create any traffic jam if the distance between cars on each lane is large enough. This is exactly what happens with I/O-bound workloads, which are quite typical for PowerShell.
For example, if your function restarts an Azure VM and you need to execute it for 100 VMs as soon as possible, each execution will take a few seconds while consuming very little CPU, and CPU context switching will not be your most important concern. On the contrary, if you try to match the number of threads to the number of cores, you will end up either spending time executing everything sequentially, or spending money on many underutilized worker instances. In this situation, increasing `PSWorkerInProcConcurrencyUpperBound` or `FUNCTIONS_WORKER_PROCESS_COUNT` will increase the throughput on this specific workload almost proportionally without a costly scale-out. By deoptimizing locally (allowing some CPU context switching), you actually optimize globally.

CPU-bound workloads are different. If your functions are busy performing heavy computations, CPU context switching will matter. But both “CPU-bound” and “I/O-bound” are just simplified abstractions. In reality, you have to deal with a mix unique to your specific workload, so you want to experiment with the settings and find what works for you.
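For reference, both of those knobs are ordinary app settings on the function app. A minimal sketch of what you might set (the setting names are the real ones mentioned above; the values are illustrative starting points, not recommendations — experiment with your own workload):

```json
{
  "PSWorkerInProcConcurrencyUpperBound": "10",
  "FUNCTIONS_WORKER_PROCESS_COUNT": "4"
}
```

The first raises the number of concurrent runspaces inside one PowerShell worker process; the second raises the number of worker processes per host instance.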
@ghclo He is referring to low-level CPU context switching, which is the CPU switching from working on one thing to working on another.
CPUs (for the most part) can’t do things in parallel within a single core, and each PowerShell worker is a single-core instance. If concurrency is 10, picture a 10-lane highway that plenty of cars can drive on, but all the lanes merge into a single lane for a toll booth. You have a traffic jam at that point because only one car can go through at a time, then it opens back up to 10 lanes.
The 10-lane highway is the concurrent PowerShell processes; the 1-lane toll booth is the CPU 😃
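A rough way to see why I/O-bound work tolerates the single toll booth is to simulate it. The sketch below uses Python as a stand-in for the PowerShell worker, with `time.sleep` standing in for an I/O wait such as an outbound HTTP call; while a thread sleeps, the CPU is free to serve the other lanes:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def io_bound_call(_):
    # Stand-in for an I/O wait (e.g. an HTTP request). The thread blocks,
    # releasing the CPU "toll booth" for the other lanes.
    time.sleep(0.1)

# One lane: cars go through the toll booth one at a time.
start = time.monotonic()
for i in range(10):
    io_bound_call(i)
sequential = time.monotonic() - start  # roughly 10 x 0.1 s

# Ten lanes: the waits overlap, so total time is close to one wait.
start = time.monotonic()
with ThreadPoolExecutor(max_workers=10) as pool:
    list(pool.map(io_bound_call, range(10)))
concurrent = time.monotonic() - start

print(f"sequential={sequential:.2f}s concurrent={concurrent:.2f}s")
```

Ten concurrent I/O waits finish in roughly the time of one, even on a single core — which is why raising concurrency above the core count pays off for I/O-heavy PowerShell functions.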
The solution is to design your solution so that enough workers are provisioned to handle the load. For instance, you can use the Durable Functions pattern, or simply connect your functions together with queue input and output bindings. When the queue backs up, Azure Functions will automatically add more workers (toll booths) to handle the traffic.
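A minimal sketch of the queue-bindings approach for a PowerShell function's `function.json` (the binding `type` and `direction` values are the standard Azure Storage queue binding fields; the queue names and the `AzureWebJobsStorage` connection setting are illustrative):

```json
{
  "bindings": [
    {
      "name": "QueueItem",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "work-items",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "OutputQueue",
      "type": "queue",
      "direction": "out",
      "queueName": "processed-items",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```

Each message on `work-items` triggers one execution, and as queue depth grows the Consumption-plan scale controller adds instances, which is the automatic "more toll booths" behaviour described above.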