Updating a ConfigMap's data with enableDryRun fails with "configmaps already exists"
Changing the data on a ConfigMap with enableDryRun: true
on EKS 1.19 / K3S v1.20.4 causes the following error:
kubernetes:core/v1:ConfigMap (test2):
error: resource default/pierlucg-test was not successfully created by the Kubernetes API server : configmaps "pierlucg-test" already exists
Steps to reproduce
Run pulumi up twice with the following code:
import * as k8s from '@pulumi/kubernetes';

const eksProviderDry = new k8s.Provider('eksdry', {
    enableDryRun: true,
    kubeconfig: 'mykubeconfig',
});

const configMapTest = new k8s.core.v1.ConfigMap(
    'this-will-fail',
    {
        data: {foo: `${Date.now()}`},
        metadata: {
            // ConfigMap names must be lowercase RFC 1123 subdomains
            name: 'failing-configmap',
        },
    },
    {provider: eksProviderDry},
);

const eksProvider = new k8s.Provider('eks', {
    kubeconfig: 'mykubeconfig',
});

const configMapWorking = new k8s.core.v1.ConfigMap(
    'this-will-succeed',
    {
        data: {foo: `${Date.now()}`},
        metadata: {
            name: 'functioning-configmap',
        },
    },
    {provider: eksProvider},
);
This will be the result the second time:
Previewing update (mystack.dev):
Type Name Plan Info
pulumi:pulumi:Stack myproject-mystack.dev 1 error
+- ├─ kubernetes:core/v1:ConfigMap this-will-succeed replace [diff: ~data]
+- └─ kubernetes:core/v1:ConfigMap this-will-fail replace [diff: ~data]; 1 error
Diagnostics:
pulumi:pulumi:Stack (myproject-mystack.dev):
error: preview failed
kubernetes:core/v1:ConfigMap (this-will-fail):
error: Preview failed: resource default/failing was not successfully created by the Kubernetes API server : configmaps "failing" already exists
Context (Environment)
- Language: TypeScript
- EKS Server Version: v1.19.8-eks-96780e
- Pulumi Version: v3.1.0
- "@pulumi/kubernetes" Version: "^3.1.0"
I also tested with a local K3S cluster and the issue still occurs.
Might affect https://github.com/pulumi/pulumi-kubernetes/issues/1556
I’ll leave the decision whether to close this issue up to you. With my newly found understanding (thank you), I now know how to avoid this problem. That being said, I agree that there might be a better way to at least provide a more meaningful error.
Thanks again!
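For reference, a minimal sketch of one way to avoid the collision, assuming (as the comment above suggests) that the problem is that replacing a ConfigMap with a fixed metadata.name tries to create the new object while the old one still exists. Either let Pulumi auto-name the ConfigMap, or opt into deleteBeforeReplace on the resource. The resource names below are illustrative, not from the original report:

import * as k8s from '@pulumi/kubernetes';

const eksProviderDry = new k8s.Provider('eksdry', {
    enableDryRun: true,
    kubeconfig: 'mykubeconfig',
});

// Option 1: omit metadata.name so Pulumi auto-names the ConfigMap; a data change
// then creates a freshly named replacement before deleting the old object, so
// there is no "already exists" collision.
const autoNamed = new k8s.core.v1.ConfigMap(
    'auto-named',
    {
        data: {foo: `${Date.now()}`},
    },
    {provider: eksProviderDry},
);

// Option 2: keep the fixed name, but delete the old object before the replacement
// is created (accepting a brief window where the ConfigMap does not exist).
const fixedName = new k8s.core.v1.ConfigMap(
    'fixed-name',
    {
        data: {foo: `${Date.now()}`},
        metadata: {name: 'fixed-name-configmap'},
    },
    {provider: eksProviderDry, deleteBeforeReplace: true},
);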
I tried with enableDryRun: false and the updated ConfigMap was replaced just fine.
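A minimal sketch of the provider change that comment describes (the kubeconfig value is a placeholder):

import * as k8s from '@pulumi/kubernetes';

// With server-side dry run disabled, updating the ConfigMap's data goes through
// the provider's default replacement path and does not hit the "already exists" error.
const eksProvider = new k8s.Provider('eks', {
    enableDryRun: false,
    kubeconfig: 'mykubeconfig',
});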