Feature Proposal: Improvements to `serverless deploy`/`serverless deploy function`
This is a Feature Proposal
Description
This issue provides an implementation proposal to think through several related issues. Multiple recent issues (#3932, #4061, #4042, #3871 (ish), probably others) have indicated a need for improvements to the `deploy` and/or `deploy function` commands, and hopefully by bringing all these ideas together, we can think through a solution that addresses all of them at once.
Serverless 1.17 (via #3838) implemented the ability for a stack to only deploy if things have changed. However, it is not necessarily the case that Serverless needs to deploy all resources simply because a single function has changed.
An example use case where this is an issue: a REST API composed of many independent functions/packages, each its own independently-versioned microservice, orchestrated together by a base `serverless.yml`. Something like Lerna gives us the ability to maintain several independent “packages” in a monorepo whereby each package is an independent endpoint (with its own `package.json`, etc.), and it can handle diffing functions via a `lerna update` command, so currently in CI we can pass that output to `serverless deploy function -f endpointA -f endpointB` and only deploy the functions that have changed.
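Roughly, that CI step could look something like the following sketch (the exact Lerna command, the stage handling, and the one-to-one package-to-function-name mapping are illustrative assumptions):

```bash
#!/usr/bin/env bash
# Rough sketch of a CI step that deploys only the endpoints Lerna reports as changed.
# Assumes each changed package name maps directly to a function name in serverless.yml.
set -euo pipefail

# `lerna changed` (formerly `lerna updated`) prints one changed package per line;
# it exits non-zero when nothing has changed, hence the `|| true`.
CHANGED_PACKAGES=$(npx lerna changed 2>/dev/null || true)

FLAGS=""
for pkg in $CHANGED_PACKAGES; do
  FLAGS="$FLAGS -f $pkg"
done

if [ -n "$FLAGS" ]; then
  # Word-splitting of $FLAGS into separate -f arguments is intentional here.
  serverless deploy function $FLAGS --stage "${STAGE:-dev}"
else
  echo "No changed endpoints; nothing to deploy."
fi
```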
However, this approach has two obvious disadvantages:
- (On AWS) it does not upload code to S3, meaning it is subject to the 50 MB per-function direct-upload limit
- It won’t deploy any changes to other resources or configuration for each function. For instance, a change in the amount of RAM configured for each function (#4042)
Currently, our team simply runs `serverless deploy` to deploy the whole stack, including other configuration changes; however, this seems less than ideal, since part of the appeal here is the guarantee that only resources that have changed get deployed.
(The other issue with this approach in general is that stack sizes quickly run up against CloudFormation’s 200-resource limit, though this is largely solved via @dougmoscrop’s excellent `serverless-plugin-split-stacks` (see: #2387, #3411, #3976, #3504, #3441).)
Proposed fixes
I’d like to dive into this problem a bit, but I’m not sure of the best way forward, and I’ll be honest: I’m not terribly familiar with the technical constraints underpinning these issues. As discussion progresses I’ll flesh this out a bit more (especially w/r/t the technical overviews). Most likely the ideal solution lies somewhere in between these proposals.
However, so far it seems to me the following approaches could solve these problems:
1a. Treat `sls deploy function` more like `sls deploy`
Deploy both functions and their configuration, and deploy via S3, when running the `sls deploy function -f` command.
Upsides
- Fits well into existing workflow
- Isn’t too magical as you must specify the specific functions to update
- Is easy to mix and match with custom tooling (i.e., we have a script that simply generates the list of changed endpoints)
- Is much more intuitive than the way `sls deploy function` currently behaves; it was certainly a shock to find out it didn’t operate this way already
Downsides
- How/when do we deploy service-wide changes?
- Is this even possible with the way the CloudFormation magic happens? (Update: not nicely, see: https://github.com/serverless/serverless/issues/4071#issuecomment-321721333)
- Is more work for the developer, as identified by https://github.com/serverless/serverless/issues/3932#issue-242222323. For larger projects, we can assume CI will handle the messy bits, but that may be the exception rather than the rule
- The nomenclature for this may be a bit confusing, since the command is `deploy function`, not something like `deploy (micro)service`
Technical Overview
WIP
1b. Just make `sls deploy function -f` use S3
Upsides
- Seemingly easy to implement, small fix (?)
Downsides
- Doesn’t really solve all the problems
- From a CI level, the developer would still need to figure out how/when to update service-wide resources
Technical Overview
WIP
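Roughly, the underlying AWS calls for an S3-based code update look like the following sketch (bucket, key, and function names are placeholders; this shows the idea, not the framework’s implementation):

```bash
#!/usr/bin/env bash
# Sketch: upload the packaged function to S3, then point Lambda at the new object.
# Assumes individual packaging, so the function has its own zip in .serverless/.
set -euo pipefail

BUCKET="my-service-deployment-bucket"
KEY="serverless/my-service/dev/endpointA.zip"

# Upload the artifact to S3 (avoids the 50 MB direct-upload limit).
aws s3 cp .serverless/endpointA.zip "s3://${BUCKET}/${KEY}"

# Update the function code from the S3 object.
aws lambda update-function-code \
  --function-name my-service-dev-endpointA \
  --s3-bucket "$BUCKET" \
  --s3-key "$KEY"
```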
1c. Add a `-f` option to `serverless deploy`
Upsides
- Brings `serverless deploy` into parity with `serverless deploy function`
- Isn’t prescriptive; does exactly what it says on the tin, i.e., deploys the specified function(s) and their resources
Downsides
- Potentially confusing? Why is there both a `serverless deploy function -f` and a `serverless deploy -f`?
- Still, what about provider-level resources? Should we just deploy those automatically every time?
- CloudFormation may not like this solution either
Technical Overview
WIP
2. Only deploy what needs to be deployed when running `serverless deploy`
There are probably a few ways to do this, but basically, Serverless could get smarter than the 1.17 behavior and deploy only the functions that have changed, rather than making a binary decision (deploy everything, or deploy nothing).
Upsides
- Easy for the developer! It just works.
- No additional CI maneuvering
- May be more obvious than the existing behavior
- Less work for Serverless to do? But maybe not, see https://github.com/serverless/serverless/issues/3932#issuecomment-315017682
Downsides
- May be too magical. What if it doesn’t Just Work?
- How do we know when a function has changed? Delta the package checksums? (See the sketch under the Technical Overview below.)
- May not fit well into existing CI workflows, would likely need to be opt-in
- How do we determine whether a resource has changed?
- A pretty major change in the way the command has worked before
- https://github.com/serverless/serverless/issues/3932#issuecomment-315017682
- Likely more technically challenging than the previous solutions
Technical Overview
WIP
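As a rough illustration of the checksum-delta idea mentioned in the downsides above (not a concrete design; the bucket, prefix, and per-function zips assume individual packaging):

```bash
#!/usr/bin/env bash
# Sketch: decide which functions to deploy by comparing artifact checksums
# against those recorded for the last deployment. All names are placeholders.
set -euo pipefail

BUCKET="my-service-deployment-bucket"
PREFIX="serverless/my-service/dev"

# Assumes `package: individually: true`, so each function has its own zip.
for zip in .serverless/*.zip; do
  fn=$(basename "$zip" .zip)
  new_hash=$(sha256sum "$zip" | cut -d' ' -f1)

  # Previously recorded hash, if any (stored alongside the deployed artifacts).
  old_hash=$(aws s3 cp "s3://${BUCKET}/${PREFIX}/${fn}.sha256" - 2>/dev/null || true)

  if [ "$new_hash" != "$old_hash" ]; then
    echo "Function ${fn} changed; would be included in this deploy."
    echo "$new_hash" | aws s3 cp - "s3://${BUCKET}/${PREFIX}/${fn}.sha256"
  else
    echo "Function ${fn} unchanged; would be skipped."
  fi
done
```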
3a. Provide orchestration over multiple `serverless.yml` files
I don’t like this solution very much, but this problem (and the stack-splitting issues) could be solved by simply splitting up the `serverless.yml` into multiple pieces, one for each microservice, and running `serverless deploy` on each one.
Upsides
- This is possible right now by simply running a bash `for` loop and doing some file concatenation (see the sketch under the Technical Overview below)
- Keeps the `serverless.yml` much shorter
- Encourages separation of responsibility/thinking about services individually: in the scenario outlined above, each package is effectively its own Serverless service
Downsides
- Likely quite obnoxious for the developer in general
- Lots of extra “stuff” blowing up AWS. Suddenly for a 20-endpoint service you have 20 serverless.yml’s, 20 deployment buckets/prefixes, etc (times, of course, the number of stages.)
- Scope creep: Serverless now needs/acts like a `serverless-compose` 😉
- No single point of truth across a single service composed of many microservices. It’s nice to see all the resources in one place and edit the file globally.
- Not easy to share resources and config between functions (#3442)
- Could break the variable system/make it more obnoxious
- Docker does it, but concatenating multiple files (say a base.serverless.yml and endpoint.serverless.yml) is error-prone and introduces lots of issues with precedence and merging that are just annoying in general
Technical Overview
WIP
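For what it’s worth, a minimal sketch of the bash for loop mentioned above (the directory layout and stage handling are assumptions):

```bash
#!/usr/bin/env bash
# Sketch: deploy each package's own serverless.yml independently.
# Assumes a layout like packages/<endpoint>/serverless.yml; stage comes from CI.
set -euo pipefail

STAGE="${STAGE:-dev}"

for dir in packages/*/; do
  if [ -f "${dir}serverless.yml" ]; then
    echo "Deploying ${dir} (stage: ${STAGE})"
    (cd "$dir" && serverless deploy --stage "$STAGE")
  fi
done
```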
3b. Use multiple `serverless.yml`s, and #3442’s import/export syntax
Similar to the above, just using import/export instead of concatenation.
Upsides
- Works right now (?)
- Keeps the `serverless.yml` shorter
- Encourages separation of responsibility/thinking about services individually: in the scenario outlined above, each package is its own Serverless service
Downsides
- Most of the same ones from 3a
- More stuff in more places, similar to 3a.
- Unclear what can and cannot be shared using the import/export syntax
- Requires more external tooling to make it as easy as `serverless deploy`
- Not well documented
- No single point of truth across a single service composed of many microservices
Technical Overview
WIP
Comments
I would really like to see some of these improvements land. Given the nature of the Serverless Architecture, I’d like to be able to deploy just one small piece of the entire service if I have changed just a single function or added a new one, without affecting what’s already working, so there’s no risk of taking down what’s in place. This would be extremely useful, especially when we’re packaging individually.
@brettneese The version resources are created at build time when the CF template is generated.
In general, version resources point to and represent the actually deployed Lambda code after deployment. Their `Ref` semantics return the full ARN, including the version number, so that you can explicitly call it or tag a specific version with an alias (that’s what my alias plugin does).
After some additional thought, I’m quite confident that a consistent approach should start with `deploy function`, i.e., change it so that it uses CF to deploy the requested function. Afterwards the standard `deploy` optimization can be adapted. The other way around would still leave two interfering approaches in the system in parallel.
This is what `deploy function` should do: package the requested function and push it through a CF stack update. That should work, and CF should only update the one changed function (which is fast). The major improvement here is that changes in the function’s configuration (memory size, environment vars, etc.) will automatically be deployed together with the function’s code. Currently that’s a really big issue with `deploy function`.
That everything in the CF template is created at build time is currently a limitation of the framework, but that’s how it is now, and that’s something that cannot be changed easily. Full support of CI/CD without the need of packaging for the different environments/stages/etc. would need a separate task/issue/feature request for now.
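Roughly, that flow would look like the following with the raw AWS CLI (all names and paths are placeholders; this is a sketch of the idea, not the framework’s actual implementation):

```bash
#!/usr/bin/env bash
# Sketch: deploy a single function via CloudFormation by changing only its
# code location in the compiled template. All names below are placeholders.
set -euo pipefail

BUCKET="my-service-deployment-bucket"
KEY="serverless/my-service/dev/endpointA-$(date +%s).zip"
STACK="my-service-dev"

# 1. Upload the newly packaged function code to S3.
aws s3 cp .serverless/endpointA.zip "s3://${BUCKET}/${KEY}"

# 2. Patch only endpointA's Code.S3Key in the compiled template
#    (here with jq; Serverless would do this while generating the template).
jq --arg key "$KEY" \
  '.Resources.EndpointALambdaFunction.Properties.Code.S3Key = $key' \
  .serverless/cloudformation-template-update-stack.json > /tmp/template.json

# 3. Update the stack; CloudFormation only touches resources whose properties
#    changed, i.e., just this function (and its version resource, if one is defined).
aws cloudformation deploy \
  --stack-name "$STACK" \
  --template-file /tmp/template.json \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM \
  --no-fail-on-empty-changeset
```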