
Keep warm within CloudWatch

See original GitHub issue

Hi guys, I have one page in my app that requires SSR. We have nearly 1M profiles that could possibly be served from this page. Right now, if I visit the page once, it takes ~3s to respond, and each visit after that is <1s. I created a CloudWatch rule in AWS to invoke the defaultLambda once every 5 minutes, but it doesn’t seem to keep the container alive. Do you have a preferred method to accomplish this? I’m wondering if I need to include some parameters to actually trigger the function correctly, like a path of some kind.

Thanks!
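(For context: a scheduled invoke with an empty event generally won’t exercise the SSR render path, so it warms little. One common alternative is to have the scheduled Lambda make a real HEAD request to the page itself. A minimal sketch, where the CloudFront URL and the WARM_TARGET_URL env var are hypothetical:)

```typescript
import https from "https";
import { URL } from "url";

// Hypothetical: the SSR page to keep warm, configurable via an assumed env var.
const TARGET_URL =
  process.env.WARM_TARGET_URL ?? "https://example.cloudfront.net/profiles";

// Turn a page URL into HEAD-request options (HEAD avoids body bandwidth fees).
export const headOptions = (url: string): https.RequestOptions => {
  const { hostname, pathname, search } = new URL(url);
  return { hostname, path: pathname + search, method: "HEAD" };
};

// Handler for the CloudWatch/EventBridge schedule target: ignore the event
// payload and issue a real request so the SSR code path actually runs.
export const handler = async (): Promise<number> =>
  new Promise((resolve, reject) => {
    const req = https.request(headOptions(TARGET_URL), (res) => {
      res.resume(); // drain the response so the socket is freed
      resolve(res.statusCode ?? 0);
    });
    req.on("error", reject);
    req.end();
  });
```

(Even then, as discussed in the comments, this warms at most the containers that serve the probe’s edge location, not every Lambda@Edge replica.)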

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 6

Top GitHub Comments

1 reaction
dphang commented, May 10, 2021

@dphang thanks for this – for business reasons, implementing your proxy-based solution for invoking the distros actually makes a lot of sense. Is there any reason to deploy the Lambda per region, though? Unless I’m misunderstanding, the call via the proxy would already be able to trigger the distribution in that region/city.

Yes, you can just use a proxy from a single Lambda, though I guess it may cost more since you have to pay for proxy bandwidth (though it should be minimal since you are making HEAD requests). Lambda has a generous free tier, so I was just trying to take advantage of using the different Lambda regions.

1 reaction
dphang commented, May 7, 2021

Yes, the above is correct; unfortunately you can’t keep all the edge Lambdas warm that way, only the source one in us-east-1.

I was also trying to reduce cold starts; here are a couple of “workarounds”:

  1. One hack is to create a dummy page that will invoke most of your dependencies. Then you can create a Lambda in each region you care about (or use a proxy for cities you care about) and make a HEAD request to CF (why HEAD? so you don’t incur network bandwidth fees). This can end up being somewhat expensive depending on frequency, regions, whether you use proxies, etc., and it is not guaranteed to keep everything warm, as there may be multiple CF edge locations in a city.

Feel free to use my code (basically, in us-west-1 I use the proxy to trigger the CF request for each distribution and also call the CF distribution from each Lambda - for example, I deployed to us-west and us-east).

import fetch from "node-fetch";
// Named export in recent https-proxy-agent versions; older versions used a deep import.
import { HttpsProxyAgent } from "https-proxy-agent";

// Proxies for specific cities
const proxies: { [key: string]: string } = {
  seattle:
    "YOUR_PROXY",
};

export const handler = async (event: any = {}): Promise<any> => {
  const warmCount = Number(process.env.WARM_COUNT ?? 3); // env values are strings

  const warmer = async (proxy?: string, city?: string): Promise<void> => {
    console.log(`Starting fetch for city ${city}`);
    let response;
    if (proxy) {
      const agent = new HttpsProxyAgent(proxy);
      response = await fetch(
        "https://xxx.cloudfront.net/dummy-endpoint",
        {
          method: "HEAD",
          agent: agent,
        }
      );
    } else {
      response = await fetch(
        "https://xxx/dummy-endpoint",
        {
          method: "HEAD",
        }
      );
    }
    console.log(
      `Fetch completed for city ${city} with status: ${response.status}`
    );
  };

  const warmers = [];

  // Only warm cities if in us-west-1 region lambda
  if (process.env.REGION === "us-west-1") {
    for (const city in proxies) {
      const proxy = proxies[city];
      for (let i = 0; i < warmCount; i++) {
        warmers.push(warmer(proxy, city));
      }
    }
  }

  // Push Lambda's warmer as well
  for (let i = 0; i < warmCount; i++) {
    warmers.push(warmer());
  }

  // Run all warmers in parallel to force Lambda to spin up WARM_COUNT containers for each city.
  // You can also add an artificial delay (e.g. 100 ms) in your dummy endpoint to guarantee one
  // invocation doesn't finish before others have started, so it doesn't reuse the same container.
  // Wait until all have settled, no matter whether an individual one failed.
  await Promise.allSettled(warmers);
};
  2. You can also have your users “warm” your Lambda as soon as they enter the app, or right before the page that you want to keep warm. For example, your main page may be statically cached, but a specific link on that page may go to an SSR page (e.g. browsing to a dynamic page), and you don’t want those subsequent requests to be slow for a user in a particular region. You can create a dummy endpoint for your API/page Lambdas and have the user’s browser make a HEAD request to these dummy endpoints asynchronously as soon as the page is loaded/mounted.
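(A minimal sketch of that browser-side variant; the warm-endpoint paths are hypothetical, and the fetch function is passed in so the snippet stays framework-free:)

```typescript
// Hypothetical dummy endpoints that route to the Lambdas worth pre-warming.
const WARM_ENDPOINTS = ["/api/_warm", "/profiles/_warm"];

type FetchLike = (
  url: string,
  init?: { method: string; keepalive: boolean }
) => Promise<unknown>;

// Build (url, options) pairs; keepalive lets a request outlive a quick navigation.
export const warmRequests = (
  origin: string
): Array<[string, { method: string; keepalive: boolean }]> =>
  WARM_ENDPOINTS.map(
    (path): [string, { method: string; keepalive: boolean }] => [
      origin + path,
      { method: "HEAD", keepalive: true },
    ]
  );

// Call once on page load/mount (e.g. pass the browser's fetch); failures are
// ignored on purpose, since warming is best-effort.
export const prewarm = (origin: string, fetchFn: FetchLike): void => {
  for (const [url, init] of warmRequests(origin)) {
    fetchFn(url, init).catch(() => undefined);
  }
};
```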

Unfortunately, cold starts are an inherent problem with Lambda spinning up a new container on demand. There are efforts right now to separate the Lambda@Edge code from the core routing code. Once that is done, it will be easier to make it work with regular Lambda (where you can use provisioned concurrency / CloudWatch events to keep your Lambda warm) and with other platforms like Cloudflare Workers (which promises 0 ms cold starts, but the platform is more limited right now, though it looks like they are improving support for Node.js and also increasing the code size limits soon: https://blog.cloudflare.com/node-js-support-cloudflare-workers/). There’s also CloudFront Functions, which is very new and even more limited; I guess it can only handle routing the static pages in S3, since it can’t make network/file system requests and has small memory/code size limits (2 MB memory, 10 KB code): https://aws.amazon.com/blogs/aws/introducing-cloudfront-functions-run-your-code-at-the-edge-with-low-latency-at-any-scale/

