(aws-eks): kubectl layer is not compatible with k8s v1.22.0
Describe the bug
Running an empty update on an empty EKS cluster fails while updating the resource EksClusterAwsAuthmanifest12345678 (Custom::AWSCDK-EKS-KubernetesResource).
Expected Behavior
The update should succeed.
Current Behavior
It fails with the following error:
Received response status [FAILED] from custom resource. Message returned: Error: b'configmap/aws-auth configured\nerror: error retrieving RESTMappings to prune: invalid resource extensions/v1beta1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "extensions/v1beta1"\n' Logs: /aws/lambda/InfraMainCluster-awscdkawseksKubec-Handler886CB40B-rDGV9O3CyH7n at invokeUserFunction (/var/task/framework.js:2:6) at processTicksAndRejections (internal/process/task_queues.js:97:5) at async onEvent (/var/task/framework.js:1:302) at async Runtime.handler (/var/task/cfn-response.js:1:1474) (RequestId: acd049fc-771c-4410-8e09-8ec4bec67813)
Reproduction Steps
This is what I did:
- Deploy an empty cluster:

```ts
import * as cdk from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as eks from "aws-cdk-lib/aws-eks";
import * as iam from "aws-cdk-lib/aws-iam";
import { Construct } from "constructs";

export class EksClusterStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: cdk.StackProps) {
    super(scope, id, props);
    const clusterAdminRole = new iam.Role(this, "ClusterAdminRole", {
      assumedBy: new iam.AccountRootPrincipal(),
    });
    const vpc = ec2.Vpc.fromLookup(this, "MainVpc", {
      vpcId: "vpc-1234567890123456789",
    });
    const cluster = new eks.Cluster(this, "EksCluster", {
      vpc: vpc,
      vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_WITH_NAT }],
      clusterName: `${id}`,
      mastersRole: clusterAdminRole,
      defaultCapacity: 0,
      version: eks.KubernetesVersion.V1_22,
    });
    cluster.addFargateProfile("DefaultProfile", {
      selectors: [{ namespace: "default" }],
    });
  }
}
```
- Add a new Fargate profile:

```ts
cluster.addFargateProfile("IstioProfile", {
  selectors: [{ namespace: "istio-system" }],
});
```
- Deploy the stack and wait for the failure.
Possible Solution
No response
Additional Information/Context
I checked the version of kubectl in the lambda handler and it’s 1.20.0, which AFAIK is not compatible with cluster version 1.22.0. I’m not entirely sure how the lambda is created; I thought it matched the kubectl version to whatever version the cluster has. ~~But it seems it’s not~~ It is indeed not the case (#15736).
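For context, kubectl follows a documented version-skew policy: the client supports API servers within one minor version of itself, so a 1.20 binary against a 1.22 control plane is outside the supported window (and v1.22 also removed the `extensions/v1beta1` Ingress API that the old client still tries to resolve when pruning). A minimal sketch of the skew rule, with illustrative version strings:

```ts
// kubectl's documented skew policy: the client supports API servers
// within +/- 1 minor version of itself.
function withinSkew(clientVersion: string, serverVersion: string): boolean {
  const minor = (v: string) => parseInt(v.split(".")[1], 10);
  return Math.abs(minor(clientVersion) - minor(serverVersion)) <= 1;
}

console.log(withinSkew("1.20.0", "1.22.0")); // false: the skew in this issue
console.log(withinSkew("1.21.0", "1.22.0")); // true
```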
CDK CLI Version
2.20.0 (build 738ef49)
Framework Version
No response
Node.js Version
v16.13.0
OS
Darwin 21.3.0
Language
Typescript
Language Version
3.9.10
Other information
Similar to #15072?
Issue Analytics
- State:
- Created a year ago
- Reactions:34
- Comments:26 (6 by maintainers)
@akefirad Yesterday I had the same issue. As a temporary solution, you can create your own lambda layer version and pass it as a parameter to the Cluster construct. Here is my solution in Python; it’s just a combination of AwsCliLayer and KubectlLayer.
My code builds layer.zip on every synth, but you can build it once, when you need it, and save layer.zip in your repository.
assets/kubectl-layer/build.sh
assets/kubectl-layer/Dockerfile
assets/kubectl-layer/requirements.txt
kubectl_layer.py
There is a separate module you need to install, `aws-cdk.lambda-layer-kubectl-v23`; then you can import it with `from aws_cdk import lambda_layer_kubectl_v23`.