
‼️ NOTICE: aws-eks "error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1"

Status: IN-PROGRESS

Overview:

Version 1.106.0 and later of the aws-eks construct library throw an error when trying to update a KubernetesManifest object. This includes manifests created through the cluster.addManifest method.
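
For context, cluster.addManifest is a convenience wrapper that creates a KubernetesManifest scoped to the cluster, so it goes through the same update path. A minimal sketch of that usage (the ConfigMap shown here is purely illustrative and not taken from the issue):

cluster.addManifest("example-config", {
  apiVersion: "v1",
  kind: "ConfigMap",
  metadata: { name: "example-config", namespace: "default" },
  data: { key: "value" },
});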

Complete Error Message:

11:22:46 AM | UPDATE_FAILED        | Custom::AWSCDK-EKS-KubernetesResource | pdb/Resource/Default
Received response status [FAILED] from custom resource. Message returned: Error: b'poddisruptionbudget.policy/test-pdb configured\nerror: error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "networking.k8s.io/v1"\n'

Workaround:

Downgrade to version 1.105.0 or below
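
One way to apply this workaround in a CDK v1 TypeScript project is to pin every @aws-cdk/* dependency to exactly 1.105.0 (CDK v1 modules must all share the same version, and a caret range would pull in 1.106.0+ again). A rough sketch of the relevant package.json entries, assuming npm; the exact module list depends on the project:

"dependencies": {
  "@aws-cdk/core": "1.105.0",
  "@aws-cdk/aws-eks": "1.105.0"
}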


Original opening post

When updating a KubernetesManifest, the deployment fails with an error like:

11:22:46 AM | UPDATE_FAILED        | Custom::AWSCDK-EKS-KubernetesResource | pdb/Resource/Default
Received response status [FAILED] from custom resource. Message returned: Error: b'poddisruptionbudget.policy/test-pdb configured\nerror: error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "networking.k8s.io/v1"\n'

This issue occurs with Kubernetes versions 1.16, 1.17, and 1.20.

Reproduction Steps

  1. Deploy a simple EKS stack with a manifest:
import { Stack, App } from "@aws-cdk/core";
import {
  Cluster,
  KubernetesManifest,
  KubernetesVersion,
} from "@aws-cdk/aws-eks";

const app = new App();
const stack = new Stack(app, "repro-prune-invalid-resource", {
  env: {
    region: process.env.CDK_DEFAULT_REGION,
    account: process.env.CDK_DEFAULT_ACCOUNT,
  },
});

const cluster = new Cluster(stack, "cluster", {
  clusterName: "repro-prune-invalid-resource-test",
  version: KubernetesVersion.V1_16,
  prune: true, // pruning enabled; the "error retrieving RESTMappings to prune" failure occurs during this prune step
});

// The manifest whose update later triggers the error
const manifest = new KubernetesManifest(stack, "pdb", {
  cluster,
  manifest: [
    {
      apiVersion: "policy/v1beta1",
      kind: "PodDisruptionBudget",
      metadata: {
        name: "test-pdb",
        namespace: "default",
      },
      spec: {
        maxUnavailable: 1,
        selector: {
          matchLabels: { app: "thing" },
        },
      },
    },
  ],
});

app.synth();

This deploys successfully.

  2. Make a small change to the manifest, such as changing maxUnavailable: 1 to maxUnavailable: 2, and deploy again
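
For reference, the manifest entry after this change; only the maxUnavailable value differs from the first deployment:

{
  apiVersion: "policy/v1beta1",
  kind: "PodDisruptionBudget",
  metadata: { name: "test-pdb", namespace: "default" },
  spec: {
    maxUnavailable: 2, // was 1 in the first deployment
    selector: { matchLabels: { app: "thing" } },
  },
},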

This results in the error above.

What did you expect to happen?

I would have expected the deployment to succeed and to update the maxUnavailable field of the deployed manifest from 1 to 2.

What actually happened?

11:22:46 AM | UPDATE_FAILED        | Custom::AWSCDK-EKS-KubernetesResource | pdb/Resource/Default
Received response status [FAILED] from custom resource. Message returned: Error: b'poddisruptionbudget.policy/test-pdb configured\nerror: error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "networking.k8s.io/v1"\n'

Logs: /aws/lambda/repro-prune-invalid-resource-awscd-Handler886CB40B-hFxU42VXJuOz

at invokeUserFunction (/var/task/framework.js:95:19)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async onEvent (/var/task/framework.js:19:27)
at async Runtime.handler (/var/task/cfn-response.js:48:13) (RequestId: 1be7dfcb-288d-4309-8b8c-cadafb97fd09)

Environment

  • CDK CLI Version: 1.108.0
  • Framework Version: 1.108.0
  • Node.js Version: v12.18.4
  • OS: Linux
  • Language (Version): TypeScript 4.3.2

Other


This is a 🐛 Bug Report

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 7
  • Comments: 10 (4 by maintainers)

Top GitHub Comments

2 reactions
charlesakalugwu commented, Dec 20, 2022

@robertd I just ran into this problem as well.

Fargate EKS 1.23 built using CDK 2.53.0

Cluster looks as follows:

      kubernetes_cluster = eks.Cluster(
          self,
          id=f"{prefix}-cluster",
          version=version,
          vpc=vpc,
          vpc_subnets=[
              ec2.SubnetSelection(
                  subnet_group_name="private-subnet",
              ),
          ],
          cluster_logging=[
              eks.ClusterLoggingTypes.AUDIT,
          ],
          default_capacity=0,
          endpoint_access=eks.EndpointAccess.PUBLIC_AND_PRIVATE,
          kubectl_layer=kubectl_v23.KubectlV23Layer(self, id=f"{prefix}-kubectl"),
          masters_role=masters_role,
          output_masters_role_arn=False,
          place_cluster_handler_in_vpc=True,
          secrets_encryption_key=kms_key_data,
          output_cluster_name=False,
          output_config_command=False,
          tags=tags,
      )

As you can see, I supplied the matching kubectl layer for k8s 1.23. Nevertheless, I keep seeing the error:

Received response status [FAILED] from custom resource. Message returned: Error: b'configmap/foo configured\nerror: error retrieving RESTMappings to prune: invalid resource extensions/v1beta1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "extensions/v1beta1"\n'

I have upgraded CDK to 2.55.0 and EKS to 1.24, and I saw the error again.

@asgerjensen Did you make any progress?

1 reaction
NetaNir commented, Jun 28, 2021

Version 1.110.1 was released with the patch.
