
apollo-server-cache-redis ERR max number of clients reached in AWS Lambda


I’m using apollo-server-lambda to set up a GraphQL Apollo server on AWS Lambda, with a free Redis instance from redislabs.com for caching responses and persisted queries.

There are only two or three developers using the endpoint, yet it is already hitting the connection limit (>30 connections) for some reason. It seems that the library is not closing connections, or perhaps the Redis connection is not being reused, causing the Lambda to open a new connection on every request.

Redislabs Connections Metrics: (screenshot omitted)

Apollo package versions

    "apollo-server-cache-redis": "^1.1.4",
    "apollo-server-lambda": "^2.9.7",
    "apollo-server-plugin-response-cache": "^0.3.5",

handler.ts

import 'source-map-support/register'
import { setupXRay } from '../helpers/setup-xray'
import { createApolloServer } from '../graphql/apollo-server'
import { initConnection } from '../database/connection'
import { isWarmupRequest } from '../helpers/handle-warmup-plugin'
import * as mongoose from 'mongoose'
import { APIGatewayEvent, Callback } from 'aws-lambda'

setupXRay()

export const graphqlHandler = (event: APIGatewayEvent, context: any, callback: Callback): void => {
  context.callbackWaitsForEmptyEventLoop = false
  if (isWarmupRequest(event, context)) {
    callback(null, {
      statusCode: 200,
      body: 'warmed'
    })
  } else {
    const server = createApolloServer()
    initConnection().then((connection: typeof mongoose) => {
      console.log('creating handler')
      server.createHandler({
        cors: {
          origin: '*',
          credentials: true
        }
      })(event, { ...context, mongooseConnection: connection }, callback)
    })
  }
}

apollo-server.ts

import { ApolloServer } from 'apollo-server-lambda'
import responseCachePlugin from 'apollo-server-plugin-response-cache'
import { RedisCache } from 'apollo-server-cache-redis'
import { resolvers } from './resolvers'
import { validateAuthHeader } from '../helpers/authentication'
import { typeDefs } from './schema'
import { OvsContext } from '../interfaces/OvsContext'

const IS_OFFLINE = process.env.IS_OFFLINE

export const redisCache: RedisCache = !IS_OFFLINE && new RedisCache({
  host: process.env.REDIS_HOST,
  password: process.env.REDIS_PASSWORD,
  port: process.env.REDIS_PORT
})

export function createApolloServer (): ApolloServer {
  const server: ApolloServer = new ApolloServer({
    typeDefs,
    resolvers,
    mocks: false, /* {    Date: () => {      return new Date()}} */
    playground: {
      endpoint: IS_OFFLINE ? 'http://localhost:3000/graphql' : `${process.env.BASE_URL}/graphql`
    },
    introspection: true,
    tracing: false,
    cacheControl: { defaultMaxAge: 60 * 60 }, // 1h
    ...(!IS_OFFLINE && {
      engine: {
        apiKey: process.env.ENGINE_API_KEY,
        // debugPrintReports: true,
        schemaTag: IS_OFFLINE ? 'offline' : process.env.AWS_STAGE
      }
    }),
    ...(!IS_OFFLINE && { cache: redisCache }),
    plugins: [IS_OFFLINE ? responseCachePlugin() : responseCachePlugin({ cache: redisCache })],
    context: async ({ event, context }): Promise<OvsContext> => {
      const mongooseConnection = context.mongooseConnection
      try {
        // get the user token from the headers
        const decodedToken = await validateAuthHeader(event.headers.Authorization)
        // try to retrieve a user with the token
        const User = mongooseConnection.model('User')
        const user = await User.findById(decodedToken._id).lean()
        // add the user to the context
        return { user, mongooseConnection }
      } catch (err) {
        // console.log(err)
        return { mongooseConnection, err }
      }
    },
    ...(!IS_OFFLINE && {
      persistedQueries: {
        cache: redisCache
      }
    })
  })
  return server
}

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 5 (1 by maintainers)

Top GitHub Comments

1 reaction
Vednus commented, Nov 25, 2019

Try moving your Redis cache variable outside of your handler function, then check whether it already exists before creating it again with new RedisCache, similar to how Mongoose suggests handling a DB connection in this example: https://mongoosejs.com/docs/lambda.html

I was having the same problem and this seems to fix it.
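
As a minimal sketch of that pattern (the getRedisCache helper and file layout are illustrative, not part of the original project; it reuses the same apollo-server-cache-redis options shown in apollo-server.ts above):

import { RedisCache } from 'apollo-server-cache-redis'

// Module scope survives across warm invocations of the same Lambda
// container, so the Redis connection is created once and then reused
// instead of being opened on every request.
let cachedRedisCache: RedisCache | undefined

export function getRedisCache (): RedisCache {
  if (!cachedRedisCache) {
    cachedRedisCache = new RedisCache({
      host: process.env.REDIS_HOST,
      password: process.env.REDIS_PASSWORD,
      port: process.env.REDIS_PORT
    })
  }
  return cachedRedisCache
}

The same reuse idea applies to the ApolloServer instance: handler.ts above calls createApolloServer() on every invocation, so building the server once at module scope (or lazily behind a similar existence check) follows the same pattern.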

0 reactions
glasser commented, Oct 11, 2022

Apollo Server 4 replaces a hard-coded set of web framework integrations with a simple stable API for building integrations. As part of this, the core project no longer directly supports a Lambda integration. Check out the @as-integrations/aws-lambda package; this new package is maintained by Lambda users and implements support for the API Gateway v1 and v2 protocols. If you need to work with different AWS projects, you may be able to add that support to that package, or take another approach such as combining the @vendia/serverless-express package with Apollo Server 4’s built-in expressMiddleware (this is the same approach that apollo-server-lambda takes in AS3). Sorry for the delay!
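
For reference, a rough sketch of that Apollo Server 4 setup with the @as-integrations/aws-lambda package (the handler-factory names follow that package's README at the time of writing and may differ between versions; typeDefs and resolvers are assumed to come from the project's existing modules):

import { ApolloServer } from '@apollo/server'
import { startServerAndCreateLambdaHandler, handlers } from '@as-integrations/aws-lambda'
import { typeDefs } from './schema'
import { resolvers } from './resolvers'

// Created once per Lambda container, outside the handler, so the server
// (and any cache backend wired into it) is reused across warm invocations.
const server = new ApolloServer({ typeDefs, resolvers })

// Handles API Gateway v1 (REST/proxy) events; use
// handlers.createAPIGatewayProxyEventV2RequestHandler() for HTTP APIs (v2).
export const graphqlHandler = startServerAndCreateLambdaHandler(
  server,
  handlers.createAPIGatewayProxyEventRequestHandler()
)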


