
How to integrate dotnet lambda layers in a CI/CD pipeline


My current CI/CD looks like this:

  1. code: source code on GitHub
  2. build: automatically executes after code is pushed to GitHub. The code is transformed into deployment artifacts in TeamCity. For example, my build script runs the dotnet lambda package command (see the sketch after this list). This generates a zip file in the TeamCity artifacts.
  3. deploy: when a developer is ready to run a deployment, he manually triggers a deployment in Octopus. This uses the deployment artifacts created in the previous step. My Octopus project uses Terraform to deploy my AWS infrastructure.
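
For step 2, the build script boils down to something like this (a minimal sketch; the configuration, framework, and output path are placeholders for my actual setup):

# runs inside the TeamCity build step, from the project directory
dotnet lambda package \
    --configuration Release \
    --framework netcoreapp2.1 \
    --output-package ./artifacts/function.zip

The resulting function.zip is what TeamCity publishes as a build artifact for step 3.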

I have the feeling that this is a pretty standard CI/CD pipeline. I could easily map it to AWS CodeBuild and AWS CodePipeline.

I then have some problems figuring out how to integrate the dotnet lambda publish-layer and dotnet lambda deploy-function commands into this pipeline. Since both commands operate on my source code, the only option is to integrate them in step 2 (the build step). The problem is that this step is only there to build; it is not there to deploy. In fact, it does not have any access to AWS. Furthermore, all my AWS infrastructure is deployed in step 3. It therefore wouldn’t make sense to deploy my lambda layers and lambdas in step 2.

To resolve this, I have the feeling that a new command like dotnet lambda package-layer would be more helpful. It would work in a way similar to dotnet lambda package, i.e.:

  • it would run in the build step of my CI/CD pipeline
  • it would generate a zip file (including the manifest) that corresponds to the lambda layer
  • this zip file would become a build artifact that could be used by my deployment step

Then, the --function-layers parameter of dotnet lambda package could accept this local zip file as an input (i.e. the command would accept not only layer ARNs).
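
To make the proposal concrete, the build step would then look roughly like this (hypothetical: neither the package-layer command nor local-zip support in --function-layers exists today, and the paths are made up):

# proposed: produce the layer zip as a plain build artifact
dotnet lambda package-layer --output-package ./artifacts/layer.zip

# proposed: reference the local layer zip instead of a layer ARN
dotnet lambda package \
    --function-layers ./artifacts/layer.zip \
    --output-package ./artifacts/function.zip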

I feel that this would be a more natural way to integrate lambda layers in a CI/CD pipeline.

If you look at the lambda layers integration in Terraform, you will see that only Node.js layers are supported. Also, you will see that the aws_lambda_layer_version resource takes a zip file as input (filename). So, if a command like dotnet lambda package-layer could create a zip file, it could be integrated naturally in a CI/CD pipeline that uses Terraform.

So, any clarification on how to use lambda layers in a CI/CD pipeline would be appreciated.

Issue Analytics

  • State: open
  • Created: 4 years ago
  • Reactions: 10
  • Comments: 13 (3 by maintainers)

Top GitHub Comments

12 reactions
mabead commented, May 1, 2019

You say that for CI/CD you were imagining that layers would be created in a separate pipeline. I will try to read between the lines and see what that would look like. For a lot of folks, I think that a pipeline is tied to a repository and a repository has only one pipeline. So whenever a commit is done in that repository, the pipeline starts. In the case of layers, it would mean that there would be one repository that defines a layer (let’s call it the “layer repository”) and one (or more) repositories that reference this layer (let’s call them the “consumer repositories”). So the pipeline of the layer repository would call dotnet lambda publish-layer and the consumer repositories would call dotnet lambda deploy-function with a --function-layers that contains the ARN of the desired layer. This would probably mean that the consumer repositories would need the layer ARN in their source code / scripts. So when a new layer version is published, they would need to update that ARN. (Note that this would become complex really fast if the team deploys to multiple regions/accounts.)
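
Concretely, the two pipelines would run something like this (a sketch assuming the tool’s documented flags; the layer/function names, bucket, and ARN are placeholders):

# layer repository pipeline: publish the layer
dotnet lambda publish-layer SharedDependencies \
    --layer-type runtime-package-store \
    --s3-bucket my-layer-bucket

# consumer repository pipeline: hard-code the published layer version
dotnet lambda deploy-function MyService \
    --function-layers arn:aws:lambda:us-east-1:123456789012:layer:SharedDependencies:1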

For teams where dependencies are standardized (ex: everyone must use Newtonsoft version X, everyone must use FluentValidation version Y, everyone must use AutoMapper version Z, etc.), this could be an efficient process. But for teams where such standardization does not exist, it would be a painful process. In my team for example, we have about 25 .NET Core micro-services that each have their own set of dependencies. It would be a colossal effort to try to agree on a common set of dependencies.

Let’s just imagine that we don’t have to agree on a common set of dependencies. Instead, we have one layer repository (and pipeline) per consumer repository. So if a developer wants to change the version of Newtonsoft, he would need to go to the layer repository, do a commit, start the pipeline, deploy the resulting layer, change the layer ARN in the consumer repository, commit, and test it. A little painful 😦 And now imagine that he then finds out that the new version of Newtonsoft does not work in his consumer repository… I’m pretty sure that he would complain that he is wasting his time… things could be much simpler for him.

Another problem that I see with this approach is that the deployment of layers and lambdas now depends on dotnet lambda. I feel that dotnet lambda deploy-function is useful for local development and quick experimentation. But when it’s time to integrate lambdas into the more global context of a micro-service that needs other AWS resources (e.g. a DynamoDB table, an S3 bucket, etc.), I think that people don’t use the dotnet lambda tool. Instead, they use tools like Terraform or CloudFormation that can deploy all the AWS resources at the same time without needing external tools like dotnet lambda.

Let’s now see an alternative approach that I feel is more CI/CD friendly.

In a given repository (i.e. for a given micro-service), I have 2 csproj files:

  • layer.csproj
  • service.csproj

The layer.csproj contains all the PackageReference entries for my dependencies. For example:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
    <PreserveCompilationContext>true</PreserveCompilationContext>
    <GenerateRuntimeConfigurationFiles>true</GenerateRuntimeConfigurationFiles>
    <AWSProjectType>Lambda</AWSProjectType>
    <OutputType>Library</OutputType>
    <StartupObject />
  </PropertyGroup>

  <ItemGroup>    
    <PackageReference Include="AWSSDK.Lambda" Version="3.3.17.12" />    
    <PackageReference Include="FluentValidation.AspNetCore" Version="8.0.100" />    
    <PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.2" />
    <PackageReference Include="Amazon.Lambda.AspNetCoreServer" Version="2.1.0" />
    <PackageReference Include="Polly" Version="6.1.1" />
  </ItemGroup>

</Project>

Then, service.csproj only has a ProjectReference to layer.csproj. It does not have any PackageReference entries. For example:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
    <PreserveCompilationContext>true</PreserveCompilationContext>
    <GenerateRuntimeConfigurationFiles>true</GenerateRuntimeConfigurationFiles>
    <AWSProjectType>Lambda</AWSProjectType>
  </PropertyGroup>

  <ItemGroup>    
    <ProjectReference Include="../LANDR.DownloadBin.Layer/LANDR.DownloadBin.Layer.csproj" />
  </ItemGroup>

</Project>

Let’s now see what the build script of this repository could look like. First, it would need to create a runtime package store for the layer that can later be uploaded to S3. If I understand correctly, the dotnet lambda publish-layer command does the following (a rough sketch of the invocation follows the list):

  1. call dotnet store to generate a runtime package store. This includes an artifact.xml and a directory structure containing all the assemblies of the layer.
  2. upload artifact.xml to S3
  3. zip the runtime package store
  4. upload the zip to S3 just beside the artifact.xml
  5. create the layer in AWS
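
For reference, a single invocation like the following performs all five steps (a sketch; --layer-type and --package-manifest are my reading of the tool’s options, and the bucket and region are placeholders; --s3-bucket and --region are the parameters discussed below):

dotnet lambda publish-layer MyLayer \
    --layer-type runtime-package-store \
    --package-manifest layer.csproj \
    --s3-bucket my-layer-bucket \
    --region us-east-1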

As already said, I think that a build script rarely has access to AWS and should not be responsible for deploying. It should only generate artifacts. With this in mind, step 1 fits in a build script while steps 2 to 5 don’t.

Here’s an alternative: instead of the --s3-bucket and --region parameters, dotnet lambda publish-layer could accept a --zip-file parameter. The command would then simply:

  1. call dotnet store to generate a runtime package store. This includes an artifact.xml and a directory structure containing all the assemblies of the layer.
  2. create a zip from the resulting directory

So it does not use AWS at all. Since the command does not publish anything, a more appropriate name would probably be dotnet lambda package-layer.
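
In a build script, the proposed command would be roughly equivalent to these two plain steps (a sketch; the framework, runtime identifier, and paths are assumptions for my setup, and the exact dotnet store flags may vary):

# step 1: generate the runtime package store (artifact.xml + assemblies)
dotnet store \
    --manifest layer.csproj \
    --framework netcoreapp2.1 \
    --runtime linux-x64 \
    --output ./store

# step 2: zip the store so it becomes a plain build artifact
zip -r ./artifacts/layer.zip ./store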

Then, to package my lambda, instead of calling dotnet lambda package with a --function-layers value that is a layer ARN, I would just feed the path of the generated zip to --function-layers. This would be the same path as the one I used for the --zip-file parameter of dotnet lambda publish-layer. dotnet lambda package would extract the artifact.xml from the zip and feed it to the --manifest option of the dotnet publish command that it calls under the hood.
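
The “under the hood” part exists today: dotnet publish supports a --manifest option that excludes the store’s assemblies from the publish output (a sketch; the artifact.xml path assumes the store layout from the previous step):

# what dotnet lambda package would run after extracting artifact.xml from the layer zip
dotnet publish --configuration Release --framework netcoreapp2.1 --manifest ./store/artifact.xml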

So here are the resulting artifacts:

  • a zip file that contains my layer
  • a zip file that contains my lambda (it does not include the dependencies that are in my layer)

All of this is done without touching AWS resources (no storing to or reading from S3).

These two zips would become build artifacts that can later be used by my deployment pipeline. In my case, I deploy with Terraform. My Terraform definition would look like this:

resource "aws_lambda_layer_version" "lambda_layer" {
  filename = "lambda_layer_payload.zip"
  layer_name = "lambda_layer_name"
  source_code_hash = "${base64sha256(file("lambda_layer_payload.zip"))}"

  compatible_runtimes = ["dotnetcore2.1"]
}

resource "aws_lambda_function" "test_lambda" {
  filename         = "lambda_function_payload.zip"
  function_name    = "lambda_function_name"
  handler          = "some_handler"
  source_code_hash = "${base64sha256(file("lambda_function_payload.zip"))}"
  runtime          = "dotnetcore2.1"
  layers = ["${aws_lambda_layer_version.lambda_layer.arn}"]
}

As usual, Terraform would use the source_code_hash to determine whether a new layer needs to be created. When the source code hash does not change, Terraform does not create a new layer version. So developers would not have to think about updating the layer on AWS when they change a dependency; it just happens automatically.

I think that this workflow is more natural because:

  • build scripts do not depend on AWS resources (S3)
  • dotnet lambda is not used for deployment
  • it integrates naturally with deployment tools like terraform
  • layers are only created when they change
  • it is easy to integrate into a multi-region / multi-account deployment strategy
5 reactions
mabead commented, May 1, 2019

Wow, I enabled --verbose in my previous comment 😉 To summarize, I think that other CI/CD workflows could be enabled by:

  • adding a new dotnet lambda package-layer command
  • changing dotnet lambda package to support local file paths in the --function-layers parameter
