Changing a ConfigMap's data causes a replacement
Changing a ConfigMap's data causes a replacement of that resource, which in turn replaces other resources that depend on it (e.g. a Deployment using envFrom, as sketched below), causing downtime.
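For context, here is a minimal sketch of that dependency chain; the resource names and image are illustrative, not taken from the original issue:

```typescript
import * as k8s from "@pulumi/kubernetes";

const appConfig = new k8s.core.v1.ConfigMap("app-config", {
    data: { LOG_LEVEL: "debug" },
});

const app = new k8s.apps.v1.Deployment("app", {
    spec: {
        selector: { matchLabels: { app: "app" } },
        template: {
            metadata: { labels: { app: "app" } },
            spec: {
                containers: [{
                    name: "app",
                    image: "nginx:1.25",
                    // envFrom references the ConfigMap by name; when Pulumi
                    // replaces the ConfigMap, the name changes and the
                    // Deployment is updated or replaced along with it.
                    envFrom: [{ configMapRef: { name: appConfig.metadata.name } }],
                }],
            },
        },
    },
});
```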
Expected behavior
The ConfigMap should be updated in place.
Steps to reproduce
Running `pulumi up` twice with the following code:
```typescript
import * as k8s from "@pulumi/kubernetes";

// `data` changes on every run, so each `pulumi up` sees a diff.
const testConfigMap = new k8s.core.v1.ConfigMap("test", {
    data: { foo: `${Date.now()}` },
});
```
will result in a replacement of that ConfigMap:

```
+- └─ kubernetes:core/v1:ConfigMap test replace [diff: ~data]
```
I know this is a closed issue, so I'm happy to start a new ticket. Unrelated to Deployments: in EKS there is an aws-auth ConfigMap, and I noticed that when adding new users and roles, Pulumi replaces that ConfigMap. This causes all of the NodeGroups to permanently enter a failing state and never recover (it seems random and doesn't happen every time, but it does most of the time).
I understand that a replace is beneficial for Deployments, but it would be nice if the user could control the behavior when needed (see the sketch below) instead of the provider making assumptions about what is using the ConfigMap.
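Newer releases of @pulumi/kubernetes do expose a provider-level switch for this. A minimal sketch, assuming a provider version that supports the `enableConfigMapMutable` option (added after this issue was filed):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Assumes a @pulumi/kubernetes release that supports `enableConfigMapMutable`.
const provider = new k8s.Provider("mutable-cm-provider", {
    enableConfigMapMutable: true, // update ConfigMaps in place instead of replacing
});

// ConfigMaps created through this provider are patched on change, which
// avoids churning resources such as the EKS aws-auth ConfigMap.
const awsAuth = new k8s.core.v1.ConfigMap("aws-auth", {
    metadata: { name: "aws-auth", namespace: "kube-system" },
    data: { mapRoles: "..." }, // illustrative placeholder
}, { provider });
```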
Ah, ok. The reason you're seeing the replacement is that you are manually specifying the ConfigMap name rather than using auto-naming. If you remove that and let Pulumi auto-name the ConfigMap, the Deployment will update rather than replace, as the sketch below illustrates.
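A minimal sketch of the difference, with hypothetical resource names:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Explicit name: Pulumi must delete and recreate the ConfigMap under the
// same name when `data` changes, so dependents see a destructive replace.
const explicitlyNamed = new k8s.core.v1.ConfigMap("app-config", {
    metadata: { name: "app-config" },
    data: { foo: "bar" },
});

// Auto-named: Pulumi appends a random suffix (e.g. "app-config-a1b2c3"),
// so a data change creates a new ConfigMap first and dependents roll over
// to it, enabling a zero-downtime update.
const autoNamed = new k8s.core.v1.ConfigMap("app-config", {
    data: { foo: "bar" },
});
```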
Our k8s provider intentionally treats ConfigMap and Secret resources as immutable rather than using the PATCH API for a couple of reasons: mutating the data in place is not picked up by running pods (e.g. environment variables sourced from a ConfigMap only refresh on pod restart), and replacing an auto-named ConfigMap triggers dependent resources such as Deployments to roll out the new configuration with zero downtime.
I’d be interested to hear more about the use case if you are intentionally trying to reuse the same ConfigMap.