‼️ NOTICE: aws-eks "error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1"
See the original GitHub issue. Please add your +1 👍 to let us know you have encountered this.
Status: IN-PROGRESS
Overview:
Versions 1.106.0 and later of the aws-eks construct library throw an error when updating a KubernetesManifest object. This includes manifests created through the cluster.addManifest method.
Complete Error Message:
11:22:46 AM | UPDATE_FAILED | Custom::AWSCDK-EKS-KubernetesResource | pdb/Resource/Default
Received response status [FAILED] from custom resource. Message returned: Error: b'poddisruptionbudget.policy/test-pdb configured\nerror: error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "networking.k8s.io/v1"\n'
Workaround:
Downgrade to version 1.105.0 or earlier.
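One way to apply the downgrade, as a sketch assuming an npm-based project (CDK v1 requires every @aws-cdk/* package to be pinned to the exact same version; the package list below is illustrative and should match whatever your project actually depends on):

```json
{
  "dependencies": {
    "@aws-cdk/core": "1.105.0",
    "@aws-cdk/aws-eks": "1.105.0"
  }
}
```

After pinning, reinstall dependencies and redeploy.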
Original opening post
When updating a KubernetesManifest, the deploy fails with an error like:
11:22:46 AM | UPDATE_FAILED | Custom::AWSCDK-EKS-KubernetesResource | pdb/Resource/Default
Received response status [FAILED] from custom resource. Message returned: Error: b'poddisruptionbudget.policy/test-pdb configured\nerror: error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "networking.k8s.io/v1"\n'
This issue occurs with Kubernetes versions 1.16, 1.17, and 1.20.
Reproduction Steps
- Deploy a simple EKS stack with a manifest
import { Stack, App } from "@aws-cdk/core";
import {
  Cluster,
  KubernetesManifest,
  KubernetesVersion,
} from "@aws-cdk/aws-eks";

const app = new App();

const stack = new Stack(app, "repro-prune-invalid-resource", {
  env: {
    region: process.env.CDK_DEFAULT_REGION,
    account: process.env.CDK_DEFAULT_ACCOUNT,
  },
});

const cluster = new Cluster(stack, "cluster", {
  clusterName: "repro-prune-invalid-resource-test",
  version: KubernetesVersion.V1_16,
  prune: true,
});

const manifest = new KubernetesManifest(stack, "pdb", {
  cluster,
  manifest: [
    {
      apiVersion: "policy/v1beta1",
      kind: "PodDisruptionBudget",
      metadata: {
        name: "test-pdb",
        namespace: "default",
      },
      spec: {
        maxUnavailable: 1,
        selector: {
          matchLabels: { app: "thing" },
        },
      },
    },
  ],
});

app.synth();
This deploys successfully.
- Make a small change to the manifest, such as changing maxUnavailable: 1 to maxUnavailable: 2, and deploy again.
This results in the error above.
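For context on where the prune error surfaces: the KubernetesManifest prune support labels every applied resource and passes that label to kubectl's --prune flag, so kubectl can delete previously-applied resources that are no longer in the manifest. The handler's invocation is roughly equivalent to the sketch below (the label value is illustrative, not the exact one CDK generates):

```
# Run by the custom-resource handler on update. With --prune, kubectl
# resolves RESTMappings for a default set of prunable resource types,
# which is where "invalid resource networking.k8s.io/v1, Kind=Ingress"
# is raised against clusters that do not serve that API version.
kubectl apply --prune -l "aws.cdk.eks/prune-c8xxxxxx" -f manifest.yaml
```

The failure is therefore a mismatch between the kubectl version bundled with the handler and the API versions the cluster actually serves, not a problem with the manifest itself.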
What did you expect to happen?
I would have expected the deploy to succeed and to update the maxUnavailable field in the deployed manifest from 1 to 2.
What actually happened?
11:22:46 AM | UPDATE_FAILED | Custom::AWSCDK-EKS-KubernetesResource | pdb/Resource/Default
Received response status [FAILED] from custom resource. Message returned: Error: b'poddisruptionbudget.policy/test-pdb configured\nerror: error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "networking.k8s.io/v1"\n'
Logs: /aws/lambda/repro-prune-invalid-resource-awscd-Handler886CB40B-hFxU42VXJuOz
at invokeUserFunction (/var/task/framework.js:95:19)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async onEvent (/var/task/framework.js:19:27)
at async Runtime.handler (/var/task/cfn-response.js:48:13) (RequestId: 1be7dfcb-288d-4309-8b8c-cadafb97fd09)
Environment
- CDK CLI Version : 1.108.0
- Framework Version: 1.108.0
- Node.js Version: v12.18.4
- OS : Linux
- Language (Version): Typescript 4.3.2
Other
This is a 🐛 Bug Report.
Issue Analytics
- Created: 2 years ago
- Reactions: 7
- Comments: 10 (4 by maintainers)
Top GitHub Comments
@robertd I just ran into this problem as well.
Fargate EKS 1.23, built using CDK 2.53.0.
Cluster looks as follows:
As you can see, I supplied the matching kubectl layer for k8s 1.23. Nevertheless, I keep seeing the error:
I have since upgraded CDK to 2.55.0 and EKS to 1.24, and the error appeared again.
@asgerjensen Did you make any progress?
Version 1.110.1 was released with the patch.