RFC: Kubernetes Patch Support

This document proposes a solution for managing shared Kubernetes (k8s) resources, also known as “patching.” Users are welcome to respond with any comments or questions directly on this issue.

Summary

Kubernetes resources commonly have more than one controller making changes to them. These controllers can include kubectl, the k8s control plane, custom operators, or infrastructure as code (IaC) tools like Pulumi. This presents particular challenges for tools that manage state independently of k8s and need to compute diffs based on that state.

Our k8s provider currently manages resources using Client-Side Apply (CSA), which is supported by all versions of k8s. CSA works by including an annotation on k8s resources that encodes the last applied configuration of the resource. This approach has some critical limitations:

  1. CSA does not account for multiple controllers; the last-applied-configuration annotation is set by whichever controller made the latest update
  2. Some controllers do not set the last-applied-configuration annotation, which can lead to other controllers inadvertently reverting changes

A newer management method called Server-Side Apply (SSA) is available starting in k8s v1.18 (March 2020). SSA adds a new section called managedFields to all k8s resources with information about which controller has set each resource field. This allows multiple controllers to independently manage the same resource without accidentally overwriting each other’s fields. This functionality can be used to patch and manage shared resources safely. However, SSA introduces additional complexity to the resource lifecycle, which needs to be understandable and configurable by the user to avoid unexpected changes to shared resources.
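
For illustration, here is an abridged sketch of a managedFields entry as the API server records it for an SSA manager; the field layout follows the upstream SSA documentation, and the manager name is illustrative:

metadata:
  name: nginx
  managedFields:
    - manager: pulumi-kubernetes   # the controller that owns the fields below
      operation: Apply
      apiVersion: apps/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:replicas: {}           # this manager owns .spec.replicas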

Motivation

Some cloud providers provision k8s resources as part of their managed k8s offerings. Platform teams often want to update these default configurations, but attempting to update an existing resource will return an error in the current Pulumi model. Resolving such a conflict currently requires a separate import step prior to making changes, and cannot be accomplished in a single pulumi up operation. The following are some examples where patch behavior would be preferable:

  1. Change metadata of an existing Namespace (labels or annotations)
  2. Update data in a shared ConfigMap
  3. Change configuration of a CustomResource provisioned by an external controller
  4. Change the number of Pod replicas in a shared Deployment
  5. Ensure that a ConfigMap exists with a specified configuration
  6. Ensure that a StorageClass exists with a specified configuration

Proposal

The following changes will be made to configure the lifecycle of shared resources using Patch support.

  1. SSA can be enabled by setting a Provider option. This option will be required to use Patch support. SSA support will be disabled by default until the next major release of pulumi-kubernetes.
  2. Each SSA-enabled Provider will set a unique manager ID per resource to enable unambiguous management of the same resource from multiple controllers. The manager ID can be set explicitly as a patchOption.
  3. Each patch will be represented in Pulumi SDKs as new resource classes corresponding to each k8s resource kind. Patch resources will be named as <Resource>Patch, and will live in the same namespaces as the corresponding resource. For example, apps.v1.Deployment will correspond to apps.v1.DeploymentPatch.
  4. ~The resource name will be the name of the k8s resource to patch, and will be in the form [namespace/]name. For example, a ConfigMap named app-config in the app-ns namespace, will be referenced as app-ns/app-config.~
  5. Unlike normal resource classes, every argument in a Patch resource will be optional except for .metadata.name. This allows users to specify only the parts of the configuration that they want to patch.
  6. Patch classes will include an additional configuration argument to specify patch-specific behavior. Edit: these options will be specified using annotations for the initial release.
    1. force - boolean option to indicate that the Pulumi configuration will override any conflicting configuration for shared resources; defaults to false
    2. manager - string option to set the name of the manager for the SSA operation; will be automatically set to a unique value per resource if not provided
  7. ~The retainOnDelete resource option will be true by default, but can be overridden by explicitly setting it to false. If retainOnDelete is false, then the shared resource will be deleted from the cluster when the stack is destroyed.~ When a Patch resource is destroyed, it will relinquish ownership of any fields that it manages. Any field that becomes unmanaged will reset to its default value.
  8. Auto-naming isn’t supported for Patch resources, so the .metadata.name field is required.
  9. Users can explicitly transfer ownership of managed fields by setting the manager patchOption and running an update; see the sketch following the pseudocode example below. These changes can be persisted across a pulumi destroy operation by setting the retainOnDelete option to true.
  10. A new annotation, pulumi.com/patchForce, will be supported on existing resource classes. This annotation indicates that the provided resource definition will override existing resource configuration in case of conflict.

This pseudocode example shows how a Patch resource will be structured in each SDK.

new k8s.core.v1.NamespacePatch(
  name="patch-example", // [required] resource name
  args={ // [required] typed resource args to patch existing resource
    metadata: {
        annotations: {
            "pulumi.com/patchForce": "true", // patch-specific arg for conflict resolution
            "pulumi.com/patchManager": "example", // patch-specific arg for field manager name
        },
        name: "kube-public", // .metadata.name is required -- all other fields are optional
    },
  }, 
  resourceOptions={ ... }, // [optional] standard resource options
);
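
Building on that structure, here is a minimal sketch of the ownership-transfer flow described in item 9, assuming the annotation-based options from item 6; the manager ID value is illustrative:

import * as k8s from "@pulumi/kubernetes";

// Stack A: manage .spec.replicas under a well-known manager ID. With
// retainOnDelete set, destroying this stack leaves the field (and its
// manager entry) in place so another stack can adopt it.
new k8s.apps.v1.DeploymentPatch("nginx-replicas", {
    metadata: {
        annotations: {
            "pulumi.com/patchManager": "shared-replicas", // illustrative manager ID
        },
        name: "nginx",
    },
    spec: {
        replicas: 3,
    },
}, { retainOnDelete: true });

// Stack B can later declare the same patch with the same pulumi.com/patchManager
// value to take over ownership of .spec.replicas without a field conflict.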

Flowcharts

The following flowcharts show the expected behavior with the SSA Provider option enabled. The normal resource classes can be used for “upsert” workflows, which will create the resource if it does not exist, or update it if it does. The Patch resource classes can be used to manage individual fields of an existing resource.

Upsert behavior

The pulumi.com/patchForce annotation can be used to automatically resolve conflicts if there is an existing resource with the same name.

[Flowchart: upsert behavior]

Patch behavior

[Flowchart: patch behavior]

SDK Examples

Each TypeScript example below assumes import * as k8s from "@pulumi/kubernetes";.

Change metadata of an existing Namespace

new k8s.core.v1.NamespacePatch("kube-public-patch", {
    metadata: {
        annotations: {
            team: "acmecorp",
        },
        labels: {
            foo: "bar",
        },
        name: "kube-public",
    },
});

Equivalent Pulumi YAML:

name: namespace-metadata
runtime: yaml
resources:
   kube-public-patch:
      type: kubernetes:core/v1:NamespacePatch
      properties:
         metadata:
            name: kube-public
            annotations:
               team: "acmecorp"
            labels:
               foo: bar

Update data in a shared ConfigMap

new k8s.core.v1.ConfigMapPatch("cm-patch", {
    metadata: {
        name: "app-config",
        namespace: "app-ns",
    },
    data: {
        foo: "bar",
    },
});

Equivalent Pulumi YAML:

name: configmap-data
runtime: yaml
resources:
   cm-patch:
      type: kubernetes:core/v1:ConfigMapPatch
      properties:
         metadata:
            name: app-config
            namespace: app-ns
         data:
            foo: bar

Change configuration of a CustomResource provisioned by an external controller

new k8s.apiextensions.CustomResourcePatch("oidc", {
    apiVersion: "authentication.gke.io/v2alpha1",
    kind: "ClientConfig",
    metadata: {
        name: "clientconfig",
        namespace: "kube-public",
    },
    spec: {
        authentication: {
            oidc: {
                clientID: "example",
            },
        },
    },
});

Change the number of Pod replicas in a shared Deployment

new k8s.apps.v1.DeploymentPatch("nginx-replicas", {
    metadata: {
        annotations: {
            "pulumi.com/patchForce": "true",
        },
        name: "nginx",
    },
    spec: {
        replicas: 3,
    },
});

Equivalent Pulumi YAML:

name: replicas
runtime: yaml
resources:
   nginx-replicas:
      type: kubernetes:apps/v1:DeploymentPatch
      properties:
         metadata:
            annotations:
               pulumi.com/patchForce: "true"
            name: nginx
         spec:
            replicas: 3

Ensure that a ConfigMap exists with a specified configuration

new k8s.core.v1.ConfigMap("upsert-app-config", {
   metadata: {
      annotations: {
          "pulumi.com/patchForce": "true",
      },
      name: "app-config",
      namespace: "app-ns",
   },
   data: {
      foo: "bar"
   },
});

Equivalent Pulumi YAML:

name: configmaps
runtime: yaml
resources:
   upsert-app-config:
      type: kubernetes:core/v1:ConfigMap
      properties:
         metadata:
            name: app-config
            namespace: app-ns
            annotations:
               pulumi.com/patchForce: "true"
         data:
            foo: bar

Ensure that a StorageClass exists with a specified configuration

new k8s.storage.v1.StorageClass("gp2-storage-class", {
   metadata: {
      annotations: {
         "pulumi.com/patchForce": "true",
         "storageclass.kubernetes.io/is-default-class": "true",
      },
      name: "gp2",
   },
   provisioner: "kubernetes.io/aws-ebs",
   parameters: {
      type: "gp2",
      fsType: "ext4",
   },
});

Equivalent Pulumi YAML:

name: default-storage-classes
runtime: yaml
resources:
  gp2-storage-class:
    type: kubernetes:storage.k8s.io/v1:StorageClass
    properties:
      metadata:
        name: gp2
        annotations:
          pulumi.com/patchForce: "true"
          storageclass.kubernetes.io/is-default-class: "true"
      provisioner: kubernetes.io/aws-ebs
      parameters:
        type: gp2
        fsType: ext4 

Prior art

Terraform’s k8s provider has limited support for patching k8s resources, exposed through purpose-built resources in its SDK. It currently supports patching labels or annotations on any resource, or patching ConfigMap resources. These operations all require that the resource already exists and was created by another controller. Terraform supports a “force” flag that works similarly to the proposed force patchOption. On delete, it relinquishes management of the specified fields but does not delete the resource.

By comparison, this proposal supports all resource kinds and fields. It also supports an “upsert” workflow that does not require the resource to exist prior to running pulumi up. The combination of the upsert and patch operations gives the user granular control over the intended update semantics.

Alternatives considered

We have worked on this problem off and on since 2018, but had not reached a satisfactory answer. Previous attempts were based around CSA, which presents additional challenges for getting the current state of a resource, making atomic updates to the resource, and handling conflicts with other controllers.

The leading candidate solution used a combination of resource methods, get and patch, to specify the desired state. This solution had several problems that stalled progress, which were documented in this update from January 2022. Additionally, this approach relies on resource methods, which are more complicated to implement cross-language, and are not currently supported in our YAML SDK.

Another alternative that was considered was doing the equivalent of kubectl apply, without attempting to integrate it tightly with the Pulumi model. This approach would have made it difficult to preview changes and understand the state of the resources after the patch was applied. Like the previous solution, it was underpinned by CSA, which significantly complicates the implementation. It is now possible to use kubectl apply in SSA mode, which would make this approach more viable. We previously suggested this as a workaround using the pulumi-command provider to execute the apply commands, at the cost of poor previews and unclear delete semantics.
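
For reference, that workaround looked roughly like this sketch, assuming the pulumi-command provider and a local patch.yaml manifest (both the manifest and the field manager name are illustrative):

import * as command from "@pulumi/command";

// Shell out to kubectl for a server-side apply of a partial manifest.
// Pulumi only sees an opaque command here, hence the poor previews and
// unclear delete semantics noted above.
const ssaPatch = new command.local.Command("ssa-patch", {
    create: "kubectl apply --server-side --field-manager=example -f patch.yaml",
});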

We initially wanted to expose Patch behavior through the existing resource classes rather than creating new resource classes specific to Patch. However, we discovered that this approach would not be possible to implement without a breaking change to our existing SDKs.

Compatibility

For the initial rollout of patch support, we will allow users to opt in with a Provider feature flag. The existing enableDryRun option will be deprecated in favor of a combined option that enables both Server-Side Diff and Server-Side Apply. Client-Side Apply will continue to be supported until the next major release of the k8s provider. That release will drop support for CSA and k8s versions older than v1.18.
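
As a sketch, opting in might look like the following; the RFC does not fix the option name, so enableServerSideApply here is illustrative:

import * as k8s from "@pulumi/kubernetes";

// Opt in to SSA for every resource managed through this provider instance.
// The flag name is illustrative; the RFC only specifies "a Provider option".
const provider = new k8s.Provider("ssa-provider", {
    enableServerSideApply: true,
});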


Top GitHub Comments

AaronFriel commented, Jun 10, 2022

> > @lblackstone @viveklak what do you think about changing the last item about delete behavior then: …
>
> The problem with this is that any fields which no longer have a manager revert to default values. This subtlety seems like it has the potential to cause unpleasant surprises.
>
> As a concrete example, consider a k8s Deployment where we patch the number of replicas from a previous value of 3 to a new value of 5. Relinquishing control of the replicas field on deletion would reset the value to the default of 1 rather than putting it back to 3. We would need to accurately track the previous state and then do some update gymnastics on deletion to undo our changes. This would be different for each resource kind, and I don’t think the previous state would always be desirable, since it could have changed since we checkpointed it. Ultimately, I think the proposed delete/retain choice is a better option since it’s clear to explain and to implement.

I think that’s OK or expected behavior for server-side apply. Leaving managed fields around that aren’t truly managed seems to go against the Kubernetes model. If a user cares to ensure a field maintains a value after a stack is destroyed, we should recommend transferring ownership to another stack by setting retainOnDelete true while setting the manager ID to a well-known value so that another stack can take it over: https://kubernetes.io/docs/reference/using-api/server-side-apply/#transferring-ownership

Our experienced Kubernetes practitioners who use Pulumi will be thankful that they can transfer knowledge of how server-side apply works. For folks who are surprised, we can say that our Patch resource follows the semantics of Kubernetes so closely that we can refer them to those docs as well as our own.

The parallel I’d draw here, in terms of “least surprise”, is how ConfigMap replacement and immutability work. If we deviate from the Kubernetes API server’s model, we surprise Kubernetes practitioners. (In our defense though, it’s really quite annoying to make sure ConfigMaps propagate updates correctly to deployments in other tools.)

lblackstone commented, Jun 13, 2022

> @lblackstone: Would this proposal also resolve the issue in #1118 which exposes secrets in the last applied state annotation?

It resolves it for Providers that opt into SSA management since the last-applied-configuration annotation would no longer be used. I expect that SSA behavior will be the only supported method in the next major release of the provider, at which point the issue would be entirely resolved.
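
For context, CSA stores the full applied object, secret values included, in the last-applied-configuration annotation, roughly as in this sketch (the Secret name and value are illustrative):

metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{"apiVersion":"v1","kind":"Secret","metadata":{"name":"db-creds"},"stringData":{"password":"hunter2"}}'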
