
ZERO SUPPORT FOR LOGIC APP (V2 STANDARD) DEVOPS DEPLOYMENT.

See original GitHub issue

Hi, my name is klawrawkz. I am fine. How are you?

What’s the story with Logic Apps? How do we create an enterprise-grade devops process for “Standard” v2 Logic Apps, where we cannot use the function app “id” property to create connections and the JSON is (finally) compartmentalized to reflect the different aspects of the “app”? What’s the devops deployment scenario when logic apps are compartmentalized into various separate .json files that need to be deployed, along with the various “elastic” app service components we need to “host” these pathetic web apps?

These new “Standard” Logic Apps are bifurcated such that the function connection requires a different construct than the v1 “logic app” does. Not that the v1 logic app connections are “functional” in any devops sense, but the JSON construct is different in terms of deployment. That difference made a devops deployment almost “possible”, except that connections do not work without MANUAL intervention. In V2, the function connection is contained in the Logic App’s “connections.json” file. These function connections are seemingly (or ACTUALLY) “volatile” (because they don’t work reliably if deployed via “automation” <heh>, aka via VS CODE), as they frequently lead to strange parsing errors that are only reported in the streaming logs. Thus it becomes much more important to provide best practices for DevOps deployment of these “new” Logic Apps.

Below is a sample function connection in a “Standard” Logic App. Note that “connectionName” is required rather than “id”, e.g.:

            "My_Super_Duper_Function": {
                "inputs": {
                    "body": "@outputs('Compose')",
                    "function": {
                        "connectionName": "mySuperDuperFunctionConnection"
                    },
                    "method": "POST"
                },
                "runAfter": {
                    "Compose": [
                        "Succeeded"
                    ]
                },
                "type": "Function"
            }

The “mySuperDuperFunctionConnection” is defined in the “connections.json” file, e.g.:

{
    "functionConnections": {
        "azureFunctionOperation": {
            "authentication": {
                "name": "Code",
                "type": "QueryString",
                "value": "@appsetting('azureFunctionOperation_functionAppKey')"
            },
            "displayName": "mySuperDuperFunctionConnection",
            "function": {
                "id": "/subscriptions/some-random-subscription-guidThingy-c844cc805171/resourceGroups/MY_SUPER_DUPER_RG/providers/Microsoft.Web/sites/mySuperDuperfunction/functions/superDuperFunctionEndPoint"
            },
            "triggerUrl": "https://mySuperDuperfunction.azurewebsites.net/api/superDuperFunctionEndPoint"
        }
    },
    "managedApiConnections": { ... so on and so forth ...
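One hedged illustration of taming that volatility in a pipeline: since the function “id” and “triggerUrl” in connections.json are environment-specific while the key stays behind an @appsetting() reference, a pre-deployment step can rewrite just those two fields per target environment. A minimal Python sketch, using the hypothetical names from the samples above:

```python
def retarget_function_connection(connections: dict, conn_name: str,
                                 function_id: str, trigger_url: str) -> dict:
    """Point one functionConnections entry at the target environment,
    leaving the @appsetting() key reference untouched."""
    conn = connections["functionConnections"][conn_name]
    conn["function"]["id"] = function_id
    conn["triggerUrl"] = trigger_url
    return connections


# The dev-time connections.json content (shape copied from the sample above).
connections = {
    "functionConnections": {
        "azureFunctionOperation": {
            "authentication": {
                "name": "Code",
                "type": "QueryString",
                "value": "@appsetting('azureFunctionOperation_functionAppKey')",
            },
            "displayName": "mySuperDuperFunctionConnection",
            "function": {
                "id": "/subscriptions/dev-sub-guid/resourceGroups/DEV_RG"
                      "/providers/Microsoft.Web/sites/devFunction"
                      "/functions/superDuperFunctionEndPoint"
            },
            "triggerUrl": "https://devFunction.azurewebsites.net"
                          "/api/superDuperFunctionEndPoint",
        }
    }
}

# Rewrite it for the "prod" target before the deployment step runs.
retarget_function_connection(
    connections,
    "azureFunctionOperation",
    function_id="/subscriptions/prod-sub-guid/resourceGroups/PROD_RG"
                "/providers/Microsoft.Web/sites/prodFunction"
                "/functions/superDuperFunctionEndPoint",
    trigger_url="https://prodFunction.azurewebsites.net"
                "/api/superDuperFunctionEndPoint",
)
```

In a real pipeline the rewritten dict would be serialized back over connections.json before the deployment task zips the project; everything else in the file (auth shape, display name) is left exactly as the designer wrote it.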

Are you actively supporting these logic app thingies? I mean, are you promoting them as “enterprise grade” services and supporting them? (…If the enterprise I work for had it to do over again, I can assure you that logic “apps” would not be in the equation.) That is how miserable the experience is. Not to mention what we are finding: non-existent support and non-existent guidance, on top of a pretty uniformly unsatisfactory experience generally.

There is seemingly nothing “Enterprise” grade about this stuff. Seriously, a bunch of JSON mashed up, converted behind the scenes into C# “function” code, and unceremoniously dumped into a web app?

I have searched high and low for guidance all around the interweeeeebz. Disappointing. No helpful guidance. No samples. Nothing relevant to a modern enterprise based on automated devops principles. …Honestly, my enterprise org feels terribly neglected. I don’t know what to tell them except be “patient”. Someday AWS will completely take over… DUH. Seriously. Where’s the feedback? Where’s the guidance? Where’s the sense of “community spirit”? What is the story? The common-sense approach to deploying these things through devops is difficult or (in my case) impossible to find. My scenario must be pretty commonplace. 1) Take a logic app. 2) Connect the logic app to a function app. 3) Do some development. 4) Be marginally satisfied with the “results” of the development effort. 5) Set up a devops process and use this modern approach to deploy code to the enterprise. Easy-peasy, right? Where is the guidance???

What say you???

I hope you are doing well.

Sincerely,

klawrawkz

AB#15950956

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 1
  • Comments: 22

Top GitHub Comments

2 reactions
klawrawkz commented, May 7, 2022

@mirzaciri, @rohithah, and @WenovateAA,

I have concluded my research and have sleuthed out, found, and deciphered what I believe is the “best case enterprise scenario”, which really is a minimally acceptable approach limited by what I suppose we could call the “Enterprise Logic App Lifecycle Conundrum”. In so doing, I also discovered a migration path away from Logic Apps (thingies) by translating JSON “instructions” to a C# code base. I’ll get to the end-game solution in a moment. First, though, let’s address the assertion that Microsoft has provided “enterprise grade” samples demonstrating best practices for Logic Apps.

I argue that this is not the case. In fact, my assertion is that Logic Apps do not have the capabilities required to be appropriate for release as “enterprise components”. On the contrary, Logic Apps should not be used in a production environment. Certainly they should not be used for mission-critical applications where scaling, reliability, and disaster recovery are requirements.

Now on to the Microsoft logic app “samples”. My complaint about the sample materials that @WenovateAA mentions is that the information is so basic as to be completely useless in illustrating principles and practices for any organization that is conducting serious work. The sample(s) are valuable in that they point to concepts which need to be taken into account when an organization asks “should we be using Logic Apps in the enterprise?”. Sadly, though, these samples contain too little content to guide an organization toward “enterprise best practices”. There is not enough meat on the bones of the samples to be useful to me and my organization. That’s my opinion, and my line in the sand over which no one shall cross and… You get the idea.

IMHO, the samples do not provide the serious guidance that is required by enterprises that are intent upon developing, testing, deploying, and maintaining scalable, performant, software-based solutions. That’s my claim and I’m sticking with it. A second opinionated claim I’ll make is that Microsoft does not provide enterprise-appropriate guidance because there is none to produce. Ergo, one rational conclusion enterprises can arrive at is that Logic Apps are not suitable for enterprise use. To those enterprises who disagree: deploy such thingies at your own peril. The future development and maintenance nightmares, not to mention the subsequent attrition as front-line IT workers flee the sinking ship that was once your thriving business concern, could hint at the colossal error that was made when you opted for logic app thingies in the enterprise. I can say this in good faith because THE TERROR IS MINE.

When to use logic app thingies? These “thingies” (my term of art for logic apps) are most appropriately Azure Toys For Tots in the enterprise. What I mean is that probably the best use for logic apps (lower case intentional, yup) is to enable brainstorming by non-technical users who wish to demo some future vision for a software product. Perhaps the marketing staff creates a slew of logic app thingies, all lashed together and covered with mud, to use in demonstrating new use cases for some novel service the enterprise could sell. The IT group could create a playground, a secure and isolated RG, where marketing or product development employees can freely and safely create their logic-app-thingy mudballs demonstrating new and modernistic ideas for the sales guys to peddle, based on whatever it is the company does. Then once the thingy demos are all over, the ideas have been promulgated, and the sales and marketing folks are safely deployed to the golf course, IT can SALT the earth surrounding and including the thingy sandpit and blast all the mudballs down the drain so they can do no harm.

One Solution To “Enterprise Management And Deployment” Of Thingies - Create A Devops Pipeline Comprised Of

  1. Terraform
  2. Azure REST API/ Azure WebJobs SDK
  3. Bash or other shell scripts
  4. Function deployment tasks

This could have been a preamble, but as it stands this strategy is a “postamble”. JSON is a DATA PROTOCOL. JSON IS NOT A FIRST CLASS DEVELOPMENT LANGUAGE. Nobody has to learn JSON as if it’s a language, because there is nothing to learn other than the syntax. Not a development language, I must repeat. I must repeat. I must repeat… This has been a public service announcement. Thank you.

  1. The DevOps thingy Infrastructure Approach is Terraform. We use Terraform to create all required infrastructure resources, including the hosting environment, hosting application (web app), application insights, storage accounts, etc. As I say, ARM templates are convoluted and difficult to work with, so why not use Terraform to simplify this work? Using Terraform state management, we are guaranteed that our Azure landing zone is always current, up to date, and configured as expected. … Oh boy, JSON is not a development language, lol…

  2. The Azure REST API contains operations that allow you to create a new thingy (workflow) in the terraformed thingy hosting environment (web app). If you choose to use the REST API, then you will not need to implement step 4. Be advised that the REST API is better suited for devops where thingies already exist in Azure. Below are some sample REST API calls that can be used in the deployment. I leave out the headers and auth tokens to keep the sample shorter.

  3. A Bash or other shell script is used to write any custom application config settings you may require. This can also be done via Terraform, so you may be able to eliminate this devops step.

  4. Function App deployment step. This approach is best for a new thingy deployment. If you have installed the VS Code “Function” extension, you will find that it also allows you to create a logic workflow thingy app. Once you have “developed” your thingy workflow app, the project files can be used to create a devops deployment. Use Azure git as the source in devops, and create a function deployment step using the logic app thingy details in the function deployment task. The function deployment task will handle your connections.json file for you, so in your dev environment be sure to include your Azure connections. As long as the connections are verified as functional, they can be deployed via devops.
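The sample REST calls promised in step 2 did not survive into this post, so here is a minimal, hypothetical sketch (in Python) of the kind of calls that steps 3 and 4 boil down to: replacing the site's application settings through the ARM management endpoint, and pushing the project zip through the Kudu zipdeploy endpoint. The resource names are made up, the api-version is an assumption worth checking against the current Azure REST reference, and (as in the original) headers and auth tokens are omitted.

```python
BASE = "https://management.azure.com"


def appsettings_url(sub: str, rg: str, site: str,
                    api_version: str = "2022-03-01") -> str:
    """ARM endpoint for replacing a site's application settings (step 3).
    The api_version default is an assumption; verify before use."""
    return (f"{BASE}/subscriptions/{sub}/resourceGroups/{rg}"
            f"/providers/Microsoft.Web/sites/{site}"
            f"/config/appsettings?api-version={api_version}")


def zipdeploy_url(site: str) -> str:
    """Kudu zip-deploy endpoint used for pushing project files (step 4)."""
    return f"https://{site}.scm.azurewebsites.net/api/zipdeploy"


def appsettings_body(settings: dict) -> dict:
    """PUT body shape for .../config/appsettings."""
    return {"properties": settings}


# Usage sketch with hypothetical names (auth token handling omitted):
url = appsettings_url("some-sub-guid", "MY_SUPER_DUPER_RG", "my-thingy-site")
body = appsettings_body(
    {"azureFunctionOperation_functionAppKey": "<from-key-vault>"})
# requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
# requests.post(zipdeploy_url("my-thingy-site"), data=zip_bytes,
#               headers={"Authorization": f"Bearer {token}"})
```

Keeping the function key in app settings (as the connections.json sample above already does via @appsetting) means step 3 is the only place the secret is handled, and it can come straight from a vault in the pipeline.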

Migration Path Out Of The Morass Of Thingydome - Translate Workflow JSON To IL And IL To C# WebJobs

SDKs And Approach

Azure WebJobs SDK / Azure WebJobs SDK Core Extensions / Azure Functions Host. We use the Azure Functions Host to translate thingy “workflow” JSON instructions to C# code via IL. Then we use the WebJobs SDK to create triggers. The SDK/Core Extensions provide triggers and bindings that we register by calling config.UseCore(). The Core extensions provide general-purpose bindings that account for most of the common scenarios we find in thingies. There is a binding for ExecutionContext that provides invocation-specific system information in your thingy. When using the SDK, connections are first-class objects and don’t require the special handling demanded by JSON-based thingies. For example, the following sample code demonstrates a means of 1) creating a connection, and 2) accessing the invocation ID for a specific function (previously a JSON-based thingy).


// Connection and configuration code using the Azure WebJobs SDK
// (requires the Microsoft.Azure.WebJobs and Microsoft.Azure.WebJobs.Extensions
// packages, plus System.Configuration for ConfigurationManager)...
var myQueueConn = ConfigurationManager
    .ConnectionStrings["MyQueueConnection"].ConnectionString;
var config = new JobHostConfiguration();
config.StorageConnectionString = myQueueConn;
config.UseCore(); // registers the Core extensions (ExecutionContext binding, etc.)
var workflowHost = new JobHost(config);
workflowHost.RunAndBlock();

// Queue trigger and logging code in the Azure WebJobs SDK...
public static void ProcessQueue(
    [QueueTrigger("items")] Widget widget,
    TextWriter logEntry,
    ExecutionContext myExecutionContext)
{
    logEntry.WriteLine("InvocationId: {0}", myExecutionContext.InvocationId);
}

Implementation is easy. All we need to do is register the Core extensions by calling config.UseCore() in our startup code. The SDK provides a rich dashboard experience for monitoring services. We can track the invocation ID above using the dashboard logs that are provided out of the box by the SDK. Programmatic access to this data allows us to connect the dots between an invocation and the generated logs. Or we can use Application Insights for analytics.

By means of this “translation” process we are easily able to convert JSON-based thingies to C# code and develop enterprise worthy “workflows” without the encumbrance of JSON-based thingies and the accompanying headaches brought on by JSON-based thingies in the enterprise.

// Yup.

    // There
   // You
  // Have
 // It
// .
1 reaction
mirzaciri commented, Jun 2, 2022

In fact it is the only resource I encountered where the official docs are telling me to run a network debugger to check how a managed connector request is formatted. How are you expecting someone who wants to take the easy route (by choosing Logic Apps over C#) to handle stuff like that?

Could not agree more. Recently we needed to use the Salesforce connector, and after 3-4 pages of Google search results we found out that the best way was to look at the network traffic to fetch the “correct” payload for building the connector. If you are an enterprise with multiple connectors, this is just stupid (and the authorization is a whole new ballpark of stupidity, unfortunately).


