How to integrate dotnet lambda layers in a CI/CD pipeline
My current CI/CD looks like this:
- code: source code on github
- build: automatically executes after code is pushed to GitHub. The code is transformed into deployment artifacts in TeamCity. For example, my build script runs the `dotnet lambda package` command, which generates a zip file in the TeamCity artifacts.
- deploy: when a developer is ready to run a deployment, he manually triggers a deployment in Octopus. This uses the deployment artifacts created in the previous step. My Octopus project uses Terraform to deploy my AWS infrastructure.
I have the feeling that this is a pretty standard CI/CD pipeline. I could easily map this to AWS CodeBuild and AWS CodePipeline.
I then have some problems figuring out how to integrate the `dotnet lambda publish-layer` and `dotnet lambda deploy-function` commands in this pipeline. Since both commands operate on my source code, the only option is to integrate them in step 2 (the build step). The problem is that this step is only there to build. It is not there to deploy. In fact, it does not have any access to AWS. Furthermore, all my AWS infrastructure is deployed in step 3. It therefore wouldn’t make sense to deploy my lambda layers and lambdas in step 2.
To resolve this, I have the feeling that a new command like `dotnet lambda package-layer` would be more helpful. It would work in a way similar to `dotnet lambda package`, i.e.:
- it would run in the build step of my CI/CD pipeline
- it would generate a zip file (including the manifest) that corresponds to the lambda layer
- this zip file would become a build artifact that could be used by my deployment step
Then, the `--function-layers` parameter of `dotnet lambda package` could accept this local zip file as an input (i.e. the command would not only accept layer ARNs).
I feel that this would be a more natural way to integrate lambda layers in a CI/CD pipeline.
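To make the idea concrete, the build step could then look roughly like this. Note that `package-layer` and its `--zip-file` flag do not exist today; they are the hypothetical pieces proposed above, and the local path passed to `--function-layers` is also part of the proposal:

```sh
# Build step (no AWS access). Hypothetical commands/flags from this proposal:
dotnet lambda package-layer --zip-file ./artifacts/dependencies-layer.zip
dotnet lambda package --function-layers ./artifacts/dependencies-layer.zip \
    --output-package ./artifacts/function.zip
# Both zips become build artifacts; the deploy step (Octopus/Terraform) consumes them.
```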
If you look at the lambda layers integration in Terraform, you will see that only nodejs layers are supported. Also, you will see that the `aws_lambda_layer_version` resource takes a zip file as input (`filename`). So, if a command like `dotnet lambda package-layer` could create a zip file, it could be integrated naturally in a CI/CD pipeline that uses Terraform.
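For reference, a minimal sketch of that resource, taking the local zip via `filename` (the names and runtime are placeholders):

```hcl
resource "aws_lambda_layer_version" "dependencies" {
  layer_name          = "my-service-dependencies"
  filename            = "dependencies-layer.zip"   # zip produced by the build step
  compatible_runtimes = ["dotnetcore2.1"]
}
```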
So, any clarification on how to use lambda layers in a CI/CD pipeline would be appreciated.
Top GitHub Comments
You say that for CI/CD you were imagining that layers would be created in a separate pipeline. I will try to read between the lines and see what it would look like. For a lot of folks, I think that a pipeline is tied to a repository and a repository has only one pipeline. So whenever a commit is done in that repository, the pipeline starts. In the case of layers, it would mean that there would be one repository that defines a layer (let’s call it the “layer repository”) and one (or more) repositories that reference this layer (let’s call them the “consumer repositories”). So the pipeline of the layer repository would call `dotnet lambda publish-layer` and the consumer repositories would call `dotnet lambda deploy-function` with `--function-layers` containing the ARN of the desired layer (see the sketch below). This would probably mean that the consumer repositories would need the layer ARN in their source code / scripts. So when a new layer version is published, they would need to update their ARN. (Note that this would become complex really fast if the team deploys to multiple regions/accounts.)

For teams where dependencies are standardized (e.g. everyone must use Newtonsoft version X, everyone must use FluentValidation version Y, everyone must use AutoMapper version Z, etc.), this could be an efficient process. But for other teams where such standardization does not exist, it would be a painful process. In my team, for example, we have about 25 .NET Core micro-services that each have their own set of dependencies. It would be a colossal effort to try to agree on a common set of dependencies.
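Concretely, the layer-repository and consumer-repository pipelines described above would run something like the following (layer name, function name, bucket and ARN are placeholders, and the exact options may differ):

```sh
# Layer repository pipeline: publish a new layer version to AWS.
dotnet lambda publish-layer shared-dependencies \
    --layer-type runtime-package-store --s3-bucket my-build-bucket

# Consumer repository pipeline: deploy the function against a pinned layer version ARN.
dotnet lambda deploy-function my-service \
    --function-layers arn:aws:lambda:us-east-1:123456789012:layer:shared-dependencies:3
```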
Let’s just imagine that we don’t have to agree on a common set of dependencies. Instead, we have one layer repository (and pipeline) per consumer repository. So if a developer wants to change the version of Newtonsoft, he would need to go into the layer repository, make a commit, start the pipeline, deploy the resulting layer, change the layer ARN in the consumer repository, commit, and test it. A little painful 😦 And now imagine that he then finds out that the new version of Newtonsoft does not work in his consumer repository… I’m pretty sure that he would complain that he is wasting his time… things could be much simpler for him.
Another problem that I see with this approach is that the deployment of layers and lambdas is now dependent on `dotnet lambda`. I feel that `dotnet lambda deploy-function` is useful for local development and quick experimentation. But when it’s time to integrate lambdas in the more global context of a micro-service that needs other AWS resources (e.g. a DynamoDB table, an S3 bucket, etc.), I think that people don’t use the `dotnet lambda` tool. Instead, they use tools like Terraform or CloudFormation that can deploy all the AWS resources at the same time without needing external tools like `dotnet lambda`.

Let’s now see an alternative approach that I feel is more CI/CD friendly.
In a given repository (i.e. for a given micro-service), I have 2 csproj files:
The layer.csproj contains all the `PackageReference` entries for my dependencies. Then, service.csproj only has a `ProjectReference` on layer.csproj; it does not have any `PackageReference`.
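For example, the two project files could look roughly like this (the packages, versions and paths are just illustrative placeholders):

```xml
<!-- layer.csproj: holds every PackageReference of the micro-service -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Newtonsoft.Json" Version="12.0.3" />
    <PackageReference Include="AutoMapper" Version="9.0.0" />
  </ItemGroup>
</Project>
```

```xml
<!-- service.csproj: no PackageReference, only a ProjectReference to the layer project -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <ProjectReference Include="..\Layer\layer.csproj" />
  </ItemGroup>
</Project>
```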
Let’s now see what the build script of this repository could look like. First, it would need to create a runtime package store for the layer that can later be uploaded to S3. If I understand correctly, `dotnet lambda publish-layer` first runs `dotnet store` to generate a runtime package store; this includes an artifact.xml and a directory structure that contains all the assemblies of the layer. The remaining steps then interact with AWS (upload to S3 and publish the layer version).

As already said, I think that a build script rarely has access to AWS and should not be responsible for deploying. It should only generate artifacts. With this in mind, the first step (the `dotnet store` call) fits in a build script while the remaining steps don’t.
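For reference, that first step is essentially a `dotnet store` call along these lines (the paths, framework and runtime values are assumptions):

```sh
# Generate the runtime package store for the layer.
# The output directory contains artifact.xml plus the layer's assemblies.
dotnet store --manifest ./src/Layer/layer.csproj \
    --framework netcoreapp2.1 --framework-version 2.1.0 \
    --runtime linux-x64 --output ./store
```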
Here’s an alternative: instead of the `--s3-bucket` and `--region` parameters, `dotnet lambda publish-layer` could accept a `--zip-file` parameter. The command would then simply run `dotnet store` to generate a runtime package store (the artifact.xml plus a directory structure that includes all the assemblies of the layer) and zip it into the given file. So it does not use AWS and works perfectly so far. Since the command does not publish, maybe a more proper name would be `dotnet lambda package-layer`.
Then, to package my lambda, instead of using `dotnet lambda package` with a `--function-layers` that takes a layer ARN, I would just feed the path of the generated zip to `--function-layers`. This would be the same path as I used for the `--zip-file` parameter of `dotnet lambda publish-layer`. `dotnet lambda package` would extract the artifact.xml from the zip and feed it to the `--manifest` option of `dotnet publish` that it calls under the hood.
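In other words, the packaging step would boil down to a `dotnet publish` call that excludes the layer’s assemblies (a sketch, with assumed paths and framework):

```sh
# Assemblies listed in artifact.xml are omitted from the publish output,
# since they are expected to be provided by the layer at runtime.
dotnet publish -c Release -f netcoreapp2.1 -r linux-x64 \
    --manifest ./store/artifact.xml -o ./publish
```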
So here are the resulting artifacts: the layer zip and the function package zip. All this was done without using AWS resources (no store & read from S3). These two zips would become build artifacts that can later be used by my deployment pipeline. In my case, I deploy with Terraform. My Terraform definition would look like this:
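A rough sketch, using Terraform 0.12 syntax (the resource names, paths, handler and IAM role are assumptions):

```hcl
resource "aws_lambda_layer_version" "dependencies" {
  layer_name          = "my-service-dependencies"
  filename            = "artifacts/dependencies-layer.zip"
  source_code_hash    = filebase64sha256("artifacts/dependencies-layer.zip")
  compatible_runtimes = ["dotnetcore2.1"]
}

resource "aws_lambda_function" "service" {
  function_name    = "my-service"
  filename         = "artifacts/function.zip"
  source_code_hash = filebase64sha256("artifacts/function.zip")
  handler          = "MyService::MyService.Function::FunctionHandler"
  runtime          = "dotnetcore2.1"
  role             = aws_iam_role.lambda.arn          # defined elsewhere
  layers           = [aws_lambda_layer_version.dependencies.arn]
}
```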
As usual, Terraform would use the `source_code_hash` to determine whether a new layer version needs to be created or not. When the source code hash does not change, Terraform does not create a new layer. So developers would not have to think about updating the layer on AWS when they change a dependency. It just happens automatically, without them having to think or do anything.

I think that this workflow is more natural because:
- `dotnet lambda` is not used for deployment

Wow, I enabled the `--verbose` flag in my previous comment 😉 To summarize, I think that other CI/CD workflows could be enabled by:
- a new `dotnet lambda package-layer` command
- updating `dotnet lambda package` to support local file paths in the `--function-layers` parameter