Keep warm with CloudWatch
Hi guys, I have one page in my app that requires SSR. Nearly 1M profiles could be rendered through this page. Right now the first visit takes ~3 s to respond, and each visit after that is < 1 s. I created a CloudWatch rule in AWS to invoke the defaultLambda once every 5 minutes, but it doesn't seem to keep the container alive. Do you have a preferred method to accomplish this? I am wondering if I need to include some parameters to actually trigger the function correctly, like a path of some kind.
Thanks!
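For reference, the kind of scheduled rule the question describes looks roughly like the CDK sketch below. It is a minimal sketch under assumptions: the function ARN is a placeholder, and the `{ warmer: true }` payload is an arbitrary shape, not something serverless-next.js defines. The defaultLambda is a Lambda@Edge handler that expects a CloudFront-shaped event, so a direct scheduled invoke like this may simply error inside the handler (the container still gets initialized), and, as the comments below point out, it only warms the source copy in us-east-1, not the edge replicas.

```typescript
// Hedged sketch: EventBridge (CloudWatch Events) rule that invokes the
// function every 5 minutes with a small custom payload.
import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as lambda from 'aws-cdk-lib/aws-lambda';

export class KeepWarmStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Placeholder ARN for the already-deployed SSR function (hypothetical).
    const ssrFn = lambda.Function.fromFunctionArn(
      this,
      'DefaultLambda',
      'arn:aws:lambda:us-east-1:123456789012:function:my-app-defaultLambda',
    );

    new events.Rule(this, 'KeepWarmRule', {
      schedule: events.Schedule.rate(Duration.minutes(5)),
      targets: [
        new targets.LambdaFunction(ssrFn, {
          // Custom payload; the handler only benefits if it recognizes it
          // (or at least finishes initialization before throwing).
          event: events.RuleTargetInput.fromObject({ warmer: true }),
        }),
      ],
    });
  }
}
```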
Top Results From Across the Web

How to Keep Your AWS Lambda Functions Warm
Keep Lambda warm by using a CloudWatch timer. You need to ping your Lambda function every 5–15 minutes to keep it warm.

Is it possible to keep an AWS Lambda function warm?
Ideally, there is a setting that I missed called "keep warm" that increases the cost of the Lambda functions, but always keeps a...

Keep Your AWS Lambda Functions Warm and Avoid Cold Starts
In CloudWatch logs, you can see that once a warmup event is received, it contains an action: warmer. This is detected by the...

Optimize AWS Lambda Function Cold Starts - Jeremy Daly
Non-VPC functions are kept warm for approximately 5 minutes, whereas VPC-based functions are kept warm for 15 minutes. Set your schedule for...

Recommended CloudWatch alarms for Amazon OpenSearch Service
Nodes minimum is < x for 1 day, 1 consecutive time; x is the number...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Yes, you can just use a proxy from a single Lambda, though I guess it may cost more since you have to pay for proxy bandwidth (it should be minimal, though, since you are making HEAD requests). Lambda has a generous free tier, so I was just trying to take advantage of the different Lambda regions.
Yes, the above is correct; unfortunately you can't keep all the edge Lambdas warm that way, only the source one in us-east-1.
I was also trying to reduce cold starts; here are a couple of "workarounds":
Feel free to use my code: basically, in us-west-1 I use the proxy to trigger the CloudFront request for each distribution, and I also call the CloudFront distribution from each Lambda (for example, I deployed to us-west and us-east).
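The commenter's actual code isn't reproduced here, but a minimal sketch of that kind of warmer, with placeholder distribution domains and a placeholder warm-up path, could look like this: a small Lambda on a 5-minute schedule that fires HEAD requests at each CloudFront distribution. Going through CloudFront, rather than invoking the function directly, is what reaches the edge copies that serve real traffic.

```typescript
// Hedged sketch of a "HEAD request through CloudFront" warmer.
// Distribution domains and the warm-up path are placeholders.
import * as https from 'https';
import type { ScheduledHandler } from 'aws-lambda'; // from @types/aws-lambda

const DISTRIBUTIONS = [
  'https://d1111111111111.cloudfront.net/profiles/warmup', // placeholder
  'https://d2222222222222.cloudfront.net/profiles/warmup', // placeholder
];

// Send a single HEAD request and resolve with the HTTP status code.
function headRequest(url: string): Promise<number> {
  return new Promise((resolve, reject) => {
    const req = https.request(url, { method: 'HEAD' }, (res) => {
      res.resume(); // drain the (empty) body so the socket is released
      resolve(res.statusCode ?? 0);
    });
    req.on('error', reject);
    req.end();
  });
}

// Triggered by an EventBridge/CloudWatch schedule (e.g. every 5 minutes).
export const handler: ScheduledHandler = async () => {
  const statuses = await Promise.all(DISTRIBUTIONS.map(headRequest));
  console.log('warmup statuses', statuses);
};
```

Deploying a copy of this warmer in more than one region, as described above, matters because each copy's requests enter CloudFront near that region and therefore warm the replicas serving that part of the world.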
Unfortunately, cold starts are an inherent problem with Lambda spinning up a new container on demand. There are efforts right now to separate the Lambda@Edge code from the core routing code. Once that is done, it will be easier to make it work with regular Lambda (where you can use provisioned concurrency / CloudWatch Events to keep your Lambda warm) and with other platforms like Cloudflare Workers (which promise 0 ms cold starts, though the platform is more limited right now; it looks like they are improving support for Node.js and increasing the code size limits soon: https://blog.cloudflare.com/node-js-support-cloudflare-workers/). There is also CloudFront Functions, which is very new and even more limited: I guess it can only handle routing the static pages in S3, since it can't make network/file-system requests and has small memory/code size limits (2 MB memory, 10 KB code): https://aws.amazon.com/blogs/aws/introducing-cloudfront-functions-run-your-code-at-the-edge-with-low-latency-at-any-scale/
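As a rough illustration of the "regular Lambda" option mentioned above, here is a hedged CDK sketch of provisioned concurrency on an ordinary (non-edge) function. The function name, runtime, and asset path are assumptions, and Lambda@Edge itself does not support provisioned concurrency, so this only applies once the rendering code can run outside the edge.

```typescript
// Hedged sketch: one execution environment kept initialized at all times
// for a hypothetical regular (non-edge) SSR function.
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';

export class SsrStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const ssrFn = new lambda.Function(this, 'SsrFunction', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('dist/ssr'), // hypothetical bundle path
    });

    // Provisioned concurrency is attached to an alias or version.
    new lambda.Alias(this, 'LiveAlias', {
      aliasName: 'live',
      version: ssrFn.currentVersion,
      provisionedConcurrentExecutions: 1,
    });
  }
}
```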