
[aws-eks] AWS load balancer controller support


Deploying the ALB ingress controller on EKS is a common use case; it would be helpful to support it at the L2 construct level.

Use Case

Deploy the ALB ingress controller so that Kubernetes Ingress resources on the cluster are served through an ALB.
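
For reference, once the controller is running, an ALB-backed Ingress is requested with the kubernetes.io/ingress.class: alb annotation. A minimal sketch of what that could look like from CDK follows; the service name my-service, its port, and the addManifest call (named addResource in older CDK releases) are assumptions, not part of the original request.

// Sketch only: assumes an existing eks.Cluster `cluster` and a Service named
// `my-service` listening on port 80; both are placeholders.
cluster.addManifest('sample-alb-ingress', {
    apiVersion: 'networking.k8s.io/v1beta1',
    kind: 'Ingress',
    metadata: {
        name: 'sample-ingress',
        annotations: {
            'kubernetes.io/ingress.class': 'alb',                  // picked up by the ALB ingress controller
            'alb.ingress.kubernetes.io/scheme': 'internet-facing',
            'alb.ingress.kubernetes.io/target-type': 'ip',
        },
    },
    spec: {
        rules: [{
            http: {
                paths: [{
                    path: '/*',
                    backend: { serviceName: 'my-service', servicePort: 80 },
                }],
            },
        }],
    },
});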

Proposed Solution

A new L2 construct, ALBIngressController, could be implemented along these lines:

import * as yaml from 'js-yaml';
import request from 'sync-request';
import * as cdk from '@aws-cdk/core';
import * as eks from '@aws-cdk/aws-eks';
import * as iam from '@aws-cdk/aws-iam';

export interface ALBIngressControllerProps {
    readonly cluster: eks.Cluster;
    readonly version: string;
    readonly vpcId: string;
}

class ALBIngressController extends cdk.Construct {
    constructor(scope: cdk.Construct, id: string, props: ALBIngressControllerProps) {
        super(scope, id);

        const albBaseResourceBaseUrl = `https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/${props.version}/docs/examples/`;
        const albIngressControllerPolicyUrl = `${albBaseResourceBaseUrl}iam-policy.json`;
        const albNamespace = 'kube-system';

        // Service account for the controller, backed by an IAM role (IRSA)
        const albServiceAccount = props.cluster.addServiceAccount('alb-ingress-controller', {
            name: 'alb-ingress-controller',
            namespace: albNamespace,
        });

        // Attach the upstream IAM policy statements to the service account's role
        const policyJson = request('GET', albIngressControllerPolicyUrl).getBody('utf8');
        (JSON.parse(policyJson).Statement as any[]).forEach((statement) => {
            albServiceAccount.addToPolicy(iam.PolicyStatement.fromJson(statement));
        });

        // Load the upstream RBAC roles (dropping the ServiceAccount, which is
        // managed above) and the controller deployment manifest
        const rbacRoles = yaml.safeLoadAll(request('GET', `${albBaseResourceBaseUrl}rbac-role.yaml`).getBody('utf8'))
            .filter((rbac: any) => rbac.kind !== 'ServiceAccount');
        const albDeployment = yaml.safeLoad(request('GET', `${albBaseResourceBaseUrl}alb-ingress-controller.yaml`).getBody('utf8'));

        const albResources = props.cluster.addResource('aws-alb-ingress-controller', ...rbacRoles, albDeployment);

        // Patch the deployment with cluster-specific arguments
        const albResourcePatch = new eks.KubernetesPatch(this, `alb-ingress-controller-patch-${props.version}`, {
            cluster: props.cluster,
            resourceName: 'deployment/alb-ingress-controller',
            resourceNamespace: albNamespace,
            applyPatch: {
                spec: {
                    template: {
                        spec: {
                            containers: [
                                {
                                    name: 'alb-ingress-controller',
                                    args: [
                                        '--ingress-class=alb',
                                        '--feature-gates=wafv2=false',
                                        `--cluster-name=${props.cluster.clusterName}`,
                                        `--aws-vpc-id=${props.vpcId}`,
                                        `--aws-region=${cdk.Stack.of(this).region}`,
                                    ],
                                },
                            ],
                        },
                    },
                },
            },
            restorePatch: {
                spec: {
                    template: {
                        spec: {
                            containers: [
                                {
                                    name: 'alb-ingress-controller',
                                    args: [
                                        '--ingress-class=alb',
                                        '--feature-gates=wafv2=false',
                                        `--cluster-name=${props.cluster.clusterName}`,
                                    ],
                                },
                            ],
                        },
                    },
                },
            },
        });
        albResourcePatch.node.addDependency(albResources);
    }
}
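
Usage could then look like the sketch below; the surrounding stack, the existing cluster object, and the 'v1.1.8' release tag are assumptions for illustration only.

// Hypothetical usage of the proposed construct; `cluster` is an existing
// eks.Cluster defined elsewhere in the stack, and 'v1.1.8' is only an
// example controller release tag.
new ALBIngressController(this, 'ALBIngressController', {
    cluster,
    version: 'v1.1.8',
    vpcId: cluster.vpc.vpcId,
});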

Other

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 35
  • Comments: 22 (6 by maintainers)

Top GitHub Comments

3 reactions
BriceGestas commented, Jan 27, 2021

We had the same issue, and unfortunately the HelmChart approach for Cert Manager did not work for us.

Here is a version we made which works fine using the manifests from the official documentation, though it can of course be improved. We ran into some CloudFormation limitations (for example, the event payload size cannot exceed 262144 bytes) and added workarounds (see splitManifestsInGroups below) so that it deploys correctly.

Versions:

  • CDK: 1.86.0
  • AWS Load Balancer Ingress Controller: 2.1.0
  • Cert Manager: 1.1.0

ALB Ingress Controller deployment

private deployAlbIngressController() {
    const certManagerResult = this.awsCertManagerService.deployCertManager(this.cluster);
    const albIngressControllerProps = {
        cluster: this.cluster,
        region: this.conf.provisionConf.awsRegion,
        vpcId: this.conf.provisionConf.awsVpcIntegration?.vpcId,
        platformName: this.conf.platformName,
        deploymentDir: this.deploymentDir,
        waitCondition: certManagerResult.waitCondition
    };
    new AwsAlbIngressController(this.cluster, CDKNamingUtil.k8sALBIngressController(this.conf.platformName), albIngressControllerProps);
}

AwsCertManagerService.ts

import {log} from "../log";
import * as styles from "../styles";
import * as fs from "fs";
import * as path from "path";
import * as jsYaml from "js-yaml";
import * as eks from "@aws-cdk/aws-eks";
import CDKNamingUtil from "../util/CDKNamingUtil";
import {Configuration} from "../configuration";
import {AwsPlatform} from "../model/Configuration";
import * as cdk from "@aws-cdk/core";

export interface DeployCertManagerResult {
    cdkManifests: eks.KubernetesManifest[];
    waitCondition: cdk.CfnWaitCondition;
}

interface K8sManifestJson {
    kind: string;
    metadata: {
        name: string;
    };
}

interface ManifestGroup {
    manifests: K8sManifestJson[];
    size: number;
}

export default class AwsCertManagerService {

    deploymentDir: string;
    conf: Configuration<AwsPlatform>;

    constructor(conf: Configuration<AwsPlatform>, deploymentDir: string) {
        this.conf = conf;
        this.deploymentDir = deploymentDir;
    }
    /*
     * Returns the created manifests and the wait condition so the ALB Ingress
     * Controller deployment can depend on cert-manager being fully ready.
     */
    deployCertManager(cluster: eks.Cluster): DeployCertManagerResult {
        log(styles.title(`*** Deploying Kubernetes cert-manager ***`))
        // https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/deploy/installation/
        const certManagerManifest = fs.readFileSync(path.join(this.deploymentDir, 'resources', 'alb-ingress-controller', 'cert-manager-v1.1.0.yaml'), {encoding: 'utf8'});
        const manifests: K8sManifestJson[] = jsYaml.loadAll(certManagerManifest);

        const groups: ManifestGroup[] = this.splitManifestsInGroups(manifests);

        groups.forEach((group, groupIndex) => {
            console.log(`cert-manager manifests group ${groupIndex}: ${group.manifests.length} manifests, size: ${group.size} bytes `)
        });

        const cdkManifests = groups.map((group, groupIndex) => {
            return new eks.KubernetesManifest(cluster, `${CDKNamingUtil.k8sCertManager(this.conf.platformName)}-part-${groupIndex}`, {
                cluster: cluster,
                manifest: group.manifests,
                overwrite: true
            });
        });

        // Define a wait condition and handle for cert manager to be fully deployed
        const waitConditionHandle = new cdk.CfnWaitConditionHandle(cluster, CDKNamingUtil.k8sCertManagerWaitConditionHandle(this.conf.platformName));
        const waitCondition = new cdk.CfnWaitCondition(cluster, CDKNamingUtil.k8sCertManagerWaitCondition(this.conf.platformName), {
            count: 1,
            handle: waitConditionHandle.ref,
            timeout: '600',
        });
        for (let certManagerManifest of cdkManifests) {
            waitConditionHandle.node.addDependency(certManagerManifest);
        }

        const certManagerWaitConditionSignal = cluster.addManifest(CDKNamingUtil.k8sCertManagerWaitConditionSignal(this.conf.platformName), {
            kind: "Pod",
            apiVersion: "v1",
            metadata: {
                name: CDKNamingUtil.k8sCertManagerWaitConditionSignal(this.conf.platformName),
                namespace: "default"
            },
            spec: {
                initContainers:
                    [{
                        name: "wait-cert-manager-service",
                        image: "busybox:1.28",
                        command: ['sh', '-c', 'echo begin sleep && sleep 60 && echo end sleep']
                    }],
                containers:
                    [{
                        name: "cert-manager-waitcondition-signal",
                        image: "curlimages/curl:7.74.0",
                        args: [
                            '-vvv',
                            '-X',
                            'PUT',
                            '-H', 'Content-Type:',
                            '--data-binary', '{"Status" : "SUCCESS","Reason" : "Configuration Complete", "UniqueId" : "ID1234", "Data" : "Cert manager should be ready by now."}',
                            waitConditionHandle.ref
                        ]
                    }],
                restartPolicy: "Never"
            }
        })
        certManagerWaitConditionSignal.node.addDependency(waitConditionHandle)

        return {
            cdkManifests,
            waitCondition
        };
    }

    private splitManifestsInGroups(manifests: K8sManifestJson[]): ManifestGroup[] {
        // Max payload size for CloudFormation event is 262144 bytes
        // (we got that information from an error message, not from the doc)
        const maxGroupSize = Math.floor(262144 * .8)
        const groups: ManifestGroup[] = []

        // Splitting all manifest in groups so total size of group is less than 262144 bytes
        manifests.forEach(manifest => {
            const manifestSize = JSON.stringify(manifest).length;
            console.log(`cert-manager manifest '${manifest.kind}/${manifest?.metadata?.name}' size is ${manifestSize} characters`);
            const lastGroup = (groups.length && groups[groups.length - 1]) || null;
            if (lastGroup === null || (lastGroup.size + manifestSize) > maxGroupSize) {
                groups.push({
                    manifests: [manifest],
                    size: manifestSize
                });
            } else {
                lastGroup.manifests.push(manifest);
                lastGroup.size += manifestSize;
            }
        });

        return groups;
    }
}

AwsAlbIngressController.ts

import * as cdk from "@aws-cdk/core";
import * as eks from "@aws-cdk/aws-eks";
import * as iam from "@aws-cdk/aws-iam";
import * as jsYaml from "js-yaml";
import * as fs from "fs";
import * as path from "path";
import CDKNamingUtil from "../util/CDKNamingUtil";

export interface IAlbIngressControllerProps {
    readonly cluster: eks.Cluster;
    readonly vpcId?: string;
    readonly region: string;
    readonly deploymentDir: string;
    readonly platformName: string;
    readonly waitCondition: cdk.CfnWaitCondition;
}

const AWS_LOAD_BALANCER_CONTROLLER = 'aws-load-balancer-controller';

export class AwsAlbIngressController extends cdk.Construct {

    constructor(scope: cdk.Construct, id: string, props: IAlbIngressControllerProps) {

        super(scope, id);

        // If stack is deployed again, make sure this service is well deleted (see inside Lens tool)
        const albNamespace = 'kube-system';
        const albServiceAccount = props.cluster.addServiceAccount(AWS_LOAD_BALANCER_CONTROLLER, {
            name: AWS_LOAD_BALANCER_CONTROLLER,
            namespace: albNamespace
        });

        const policy: { Statement: any[] } = JSON.parse(fs.readFileSync(path.join(props.deploymentDir, 'resources', 'alb-ingress-controller', 'iam-policy.json'), {encoding: 'utf8'}));
        policy.Statement.forEach(statement => albServiceAccount.addToPrincipalPolicy(iam.PolicyStatement.fromJson(statement)))

        let albManifest = fs.readFileSync(path.join(props.deploymentDir, 'resources', 'alb-ingress-controller', 'alb-ingress-controller-v2.1.0.yaml'), {encoding: 'utf8'});
        albManifest = albManifest.replace(/your-cluster-name/g, CDKNamingUtil.kubernetesClusterName(props.platformName));
        const ingressControllerManifest = new eks.KubernetesManifest(this, CDKNamingUtil.k8sALBIngressController(props.platformName), {
            cluster: props.cluster,
            manifest: jsYaml.loadAll(albManifest),
            overwrite: true
        });

        ingressControllerManifest.node.addDependency(props.waitCondition);
    }
}

We had to update the manifests because the formatting of the original descriptions could not be read as-is by CloudFormation.
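
CDKNamingUtil is not included in the comment; judging from the calls above, it is a small helper that derives resource names from the platform name. A purely hypothetical sketch of the methods used here, with an assumed naming scheme:

// Hypothetical reconstruction of the CDKNamingUtil helper referenced above;
// only the methods used in the snippets are sketched, and the naming scheme
// itself is an assumption.
export default class CDKNamingUtil {
    static kubernetesClusterName(platformName: string): string {
        return `${platformName}-eks-cluster`;
    }

    static k8sALBIngressController(platformName: string): string {
        return `${platformName}-alb-ingress-controller`;
    }

    static k8sCertManager(platformName: string): string {
        return `${platformName}-cert-manager`;
    }

    static k8sCertManagerWaitConditionHandle(platformName: string): string {
        return `${platformName}-cert-manager-wait-handle`;
    }

    static k8sCertManagerWaitCondition(platformName: string): string {
        return `${platformName}-cert-manager-wait-condition`;
    }

    static k8sCertManagerWaitConditionSignal(platformName: string): string {
        return `${platformName}-cert-manager-wait-signal`;
    }
}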

2 reactions
zxkane commented, Jun 1, 2021

For anyone interested in deploying the ALB controller into EKS via CDK, you can refer to the implementation in the solution below:

https://github.com/aws-samples/nexus-oss-on-aws/blob/d3a092d72041b65ca1c09d174818b513594d3e11/src/lib/sonatype-nexus3-stack.ts#L207-L242
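
One common way to do this from CDK, which may or may not match the linked code exactly, is to install the controller from the Helm chart published in aws/eks-charts. A condensed sketch of that approach, assuming an existing eks.Cluster named cluster; the IAM policy handling and chart values are simplified:

// Sketch of a Helm-chart-based installation of the AWS Load Balancer
// Controller; `cluster` is assumed to exist elsewhere in the stack.
const albServiceAccount = cluster.addServiceAccount('aws-load-balancer-controller', {
    name: 'aws-load-balancer-controller',
    namespace: 'kube-system',
});

// The IAM policy statements published with the controller release would be
// attached to albServiceAccount here (omitted for brevity).

cluster.addHelmChart('AWSLoadBalancerController', {
    chart: 'aws-load-balancer-controller',
    repository: 'https://aws.github.io/eks-charts',
    namespace: 'kube-system',
    values: {
        clusterName: cluster.clusterName,
        serviceAccount: {
            create: false,
            name: albServiceAccount.serviceAccountName,
        },
    },
});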
