Reflection is too slow. Pod goes into ImagePullBackOff
See original GitHub issue
I am using GitOps, and for every pull request on my app I create a new environment to test the PR: a new namespace plus the required pods. But by the time kubernetes-reflector has replicated the imagePullSecret into that namespace, the pods have already gone into the ImagePullBackOff state.
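(For context, a minimal sketch of how the source secret is typically marked for auto-reflection, assuming the standard emberstack reflector annotations; "my-new-ns.*" is just a placeholder pattern for the PR namespaces:)

# Annotate the source secret so the reflector may mirror it into matching namespaces.
# Annotation keys per the emberstack kubernetes-reflector convention; names are placeholders.
kubectl -n my-ns annotate secret my-secret \
  reflector.v1.k8s.emberstack.com/reflection-allowed='true' \
  reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces='my-new-ns.*' \
  reflector.v1.k8s.emberstack.com/reflection-auto-enabled='true' \
  reflector.v1.k8s.emberstack.com/reflection-auto-namespaces='my-new-ns.*'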
kubernetes-reflector logs show:
2022-05-04 12:38:13.495 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Session closed. Duration: 00:35:19.1644564. Faulted: False.
2022-05-04 12:38:13.504 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Requesting V1Secret resources
2022-05-04 12:38:13.768 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretMirror) Auto-reflected my-ns/my-secret where permitted. Created 0 - Updated 0 - Deleted 0 - Validated 9.
2022-05-04 13:12:02.234 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Session closed. Duration: 00:57:19.8038623. Faulted: False.
2022-05-04 13:12:02.235 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Requesting V1ConfigMap resources
2022-05-04 13:15:50.371 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Session closed. Duration: 00:38:42.5296269. Faulted: False.
2022-05-04 13:15:50.372 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Requesting V1Namespace resources
2022-05-04 13:37:29.214 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Session closed. Duration: 00:59:15.7097240. Faulted: False.
2022-05-04 13:37:29.214 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Requesting V1Secret resources
2022-05-04 13:37:29.554 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretMirror) Auto-reflected my-ns/my-secret where permitted. Created 0 - Updated 0 - Deleted 0 - Validated 9.
2022-05-04 13:44:03.810 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Session closed. Duration: 00:32:01.5747240. Faulted: False.
2022-05-04 13:44:03.810 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Requesting V1ConfigMap resources
2022-05-04 14:03:22.530 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Session closed. Duration: 00:47:31.9659988. Faulted: False.
2022-05-04 14:03:22.530 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Requesting V1Namespace resources
2022-05-04 14:14:12.683 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Session closed. Duration: 00:30:08.6807986. Faulted: False.
2022-05-04 14:14:12.683 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Requesting V1ConfigMap resources
2022-05-04 14:30:02.454 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Session closed. Duration: 00:52:33.0470167. Faulted: False.
2022-05-04 14:30:02.454 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Requesting V1Secret resources
2022-05-04 14:30:02.562 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretMirror) Auto-reflected my-ns/my-secret where permitted. Created 1 - Updated 0 - Deleted 0 - Validated 9.
2022-05-04 14:30:02.600 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretMirror) Created my-new-ns/my-secret as a reflection of my-ns/my-secret
2022-05-04 14:39:40.142 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Session closed. Duration: 00:36:17.6118289. Faulted: False.
2022-05-04 14:39:40.142 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Requesting V1Namespace resources
As I see it, the reflector tried to reflect secrets at 12:38, 13:37 and 14:30. So in the worst case, I'd have to wait for almost an hour until the secret gets reflected.
Is this how the reflector works, or have I done something wrong? Can the reflector maybe get notified when a new namespace has been created and then “instantly” reflect secrets? Or alternatively can it check for new namespaces every 30 seconds or so?
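(One possible stopgap, not something the reflector prescribes: copy the pull secret into the PR namespace in the same pipeline step that creates it, so pods never have to wait for reflection. A rough sketch, assuming jq is available and using the placeholder names from this issue:)

# Export the source secret, strip namespace and server-managed metadata, re-apply into the new namespace.
kubectl -n my-ns get secret my-secret -o json \
  | jq 'del(.metadata.namespace, .metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp)' \
  | kubectl -n my-new-ns apply -f -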
Issue Analytics
- Created: a year ago
- Reactions: 3
- Comments: 22 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
The new version (work in progress) should fix this issue.
I can observe the same behavior in an AWS EKS cluster with k8s 1.21 and Argo CD. But I have no idea how to debug it.
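(A rough way to check whether it is the same race, using the placeholder names from this issue; <pod> is the failing pod:)

kubectl -n my-new-ns get secret my-secret                  # has the reflected copy arrived yet?
kubectl -n my-new-ns describe pod <pod> | tail -n 20       # Events section shows the pull failure and back-off
kubectl -n my-new-ns get events --sort-by=.lastTimestamp   # timeline: pod created vs. secret created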