
Problem with privileges during installation

See original GitHub issue

This is a…


[ ] Feature request
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report  
[ ] Documentation issue or request

Description

I have a clean instance of minishift (full-reset)

syndesis minishift --full-reset --install --project syndesis --openshift-version 3.11.0

Installing Syndesis takes a long time (about 20 minutes for me). During the installation, the pods are created, terminated, and created again, and this cycle repeats several times. This error appears repeatedly in the operator log:

{"level":"error","ts":1551456915.553912,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"syndesis-controller","request":"syndesis/app","error":"roles.rbac.authorization.k8s.io \"camel-k\" is forbidden: attempt to grant extra privileges: [{[*] [camel.apache.org] [*] [] []}] user=&{system:serviceaccount:syndesis:syndesis-operator 8f4e4614-3c3c-11e9-8e73-5254003010a8 [system:serviceaccounts system:serviceaccounts:syndesis system:authenticated] map[]} ownerrules=[{[get] [ user.openshift.io] [users] [~] []} {[list] [ project.openshift.io] [projectrequests] [] []} {[get list] [ authorization.openshift.io] [clusterroles] [] []} {[get list watch] [rbac.authorization.k8s.io] [clusterroles] [] []} {[get list] [storage.k8s.io] [storageclasses] [] []} {[list watch] [ project.openshift.io] [projects] [] []} {[create] [ authorization.openshift.io] [selfsubjectrulesreviews] [] []} {[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] [] [] [] [/healthz /healthz/*]} {[get] [] [] [] [/version /version/* /api /api/* /apis /apis/* /oapi /oapi/* /openapi/v2 /swaggerapi /swaggerapi/* /swagger.json /swagger-2.0.0.pb-v1 /osapi /osapi/ /.well-known /.well-known/* /]} {[create] [ authorization.openshift.io] [selfsubjectrulesreviews] [] []} {[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[create] [authorization.k8s.io] [selfsubjectaccessreviews selfsubjectrulesreviews] [] []} {[create] [ build.openshift.io] [builds/docker builds/optimizeddocker] [] []} {[create] [ build.openshift.io] [builds/jenkinspipeline] [] []} {[create] [ build.openshift.io] [builds/source] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /openapi /openapi/* /swagger-2.0.0.pb-v1 /swagger.json /swaggerapi /swaggerapi/* /version /version/]} {[delete] [ oauth.openshift.io] [oauthaccesstokens oauthauthorizetokens] [] []} {[get] [] [] [] [/version /version/* /api /api/* /apis /apis/* /oapi /oapi/* /openapi/v2 /swaggerapi /swaggerapi/* /swagger.json 
/swagger-2.0.0.pb-v1 /osapi /osapi/ /.well-known /.well-known/* /]} {[impersonate] [authentication.k8s.io] [userextras/scopes.authorization.openshift.io] [] []} {[create get] [ build.openshift.io] [buildconfigs/webhooks] [] []} {[create delete deletecollection get list patch update watch] [] [pods pods/attach pods/exec pods/portforward pods/proxy] [] []} {[create delete deletecollection get list patch update watch] [] [configmaps endpoints persistentvolumeclaims replicationcontrollers replicationcontrollers/scale secrets serviceaccounts services services/proxy] [] []} {[get list watch] [] [bindings events limitranges namespaces/status pods/log pods/status replicationcontrollers/status resourcequotas resourcequotas/status] [] []} {[get list watch] [] [namespaces] [] []} {[impersonate] [] [serviceaccounts] [] []} {[create delete deletecollection get list patch update watch] [apps] [daemonsets deployments deployments/rollback deployments/scale replicasets replicasets/scale statefulsets statefulsets/scale] [] []} {[create delete deletecollection get list patch update watch] [autoscaling] [horizontalpodautoscalers] [] []} {[create delete deletecollection get list patch update watch] [batch] [cronjobs jobs] [] []} {[create delete deletecollection get list patch update watch] [extensions] [daemonsets deployments deployments/rollback deployments/scale ingresses networkpolicies replicasets replicasets/scale replicationcontrollers/scale] [] []} {[create delete deletecollection get list patch update watch] [policy] [poddisruptionbudgets] [] []} {[create delete deletecollection get list patch update watch] [networking.k8s.io] [networkpolicies] [] []} {[create delete deletecollection get list patch update watch] [ build.openshift.io] [buildconfigs buildconfigs/webhooks builds] [] []} {[get list watch] [ build.openshift.io] [builds/log] [] []} {[create] [ build.openshift.io] [buildconfigs/instantiate buildconfigs/instantiatebinary builds/clone] [] []} {[update] [ 
build.openshift.io] [builds/details] [] []} {[edit view] [build.openshift.io] [jenkins] [] []} {[create delete deletecollection get list patch update watch] [ apps.openshift.io] [deploymentconfigs deploymentconfigs/scale] [] []} {[create] [ apps.openshift.io] [deploymentconfigrollbacks deploymentconfigs/instantiate deploymentconfigs/rollback] [] []} {[get list watch] [ apps.openshift.io] [deploymentconfigs/log deploymentconfigs/status] [] []} {[create delete deletecollection get list patch update watch] [ image.openshift.io] [imagestreamimages imagestreammappings imagestreams imagestreams/secrets imagestreamtags] [] []} {[get list watch] [ image.openshift.io] [imagestreams/status] [] []} {[get update] [ image.openshift.io] [imagestreams/layers] [] []} {[create] [ image.openshift.io] [imagestreamimports] [] []} {[get] [ project.openshift.io] [projects] [] []} {[get list watch] [ quota.openshift.io] [appliedclusterresourcequotas] [] []} {[create delete deletecollection get list patch update watch] [ route.openshift.io] [routes] [] []} {[create] [ route.openshift.io] [routes/custom-host] [] []} {[get list watch] [ route.openshift.io] [routes/status] [] []} {[create delete deletecollection get list patch update watch] [ template.openshift.io] [processedtemplates templateconfigs templateinstances templates] [] []} {[create delete deletecollection get list patch update watch] [extensions networking.k8s.io] [networkpolicies] [] []} {[create delete deletecollection get list patch update watch] [ build.openshift.io] [buildlogs] [] []} {[get list watch] [] [resourcequotausages] [] []} {[get list create update delete deletecollection watch] [syndesis.io] [* */finalizers] [] []} {[get list create update delete deletecollection watch] [] [pods services endpoints persistentvolumeclaims configmaps secrets serviceaccounts] [] []} {[get list] [] [events] [] []} {[get list create update delete deletecollection watch] [rbac.authorization.k8s.io] [roles rolebindings] [] []} {[get list 
create update delete deletecollection watch] [template.openshift.io] [processedtemplates] [] []} {[get list create update delete deletecollection watch] [image.openshift.io] [imagestreams] [] []} {[get list create update delete deletecollection watch] [apps.openshift.io] [deploymentconfigs] [] []} {[get list create update delete deletecollection watch] [build.openshift.io] [buildconfigs] [] []} {[get list create update delete deletecollection watch] [authorization.openshift.io] [rolebindings] [] []} {[get list create update delete deletecollection watch] [route.openshift.io] [routes routes/custom-host] [] []} {[get list create update delete deletecollection watch] [camel.apache.org] [*] [] []} {[get list create update delete deletecollection watch] [monitoring.coreos.com] [alertmanagers prometheuses servicemonitors prometheusrules] [] []} {[get list create update delete deletecollection watch] [integreatly.org] [grafanadashboards] [] []} {[get list watch] [] [configmaps endpoints persistentvolumeclaims pods replicationcontrollers replicationcontrollers/scale serviceaccounts services] [] []} {[get list watch] [] [bindings events limitranges namespaces/status pods/log pods/status replicationcontrollers/status resourcequotas resourcequotas/status] [] []} {[get list watch] [] [namespaces] [] []} {[get list watch] [apps] [daemonsets deployments deployments/scale replicasets replicasets/scale statefulsets statefulsets/scale] [] []} {[get list watch] [autoscaling] [horizontalpodautoscalers] [] []} {[get list watch] [batch] [cronjobs jobs] [] []} {[get list watch] [extensions] [daemonsets deployments deployments/scale ingresses networkpolicies replicasets replicasets/scale replicationcontrollers/scale] [] []} {[get list watch] [policy] [poddisruptionbudgets] [] []} {[get list watch] [networking.k8s.io] [networkpolicies] [] []} {[get list watch] [ build.openshift.io] [buildconfigs buildconfigs/webhooks builds] [] []} {[get list watch] [ build.openshift.io] [builds/log] [] 
[]} {[view] [build.openshift.io] [jenkins] [] []} {[get list watch] [ apps.openshift.io] [deploymentconfigs deploymentconfigs/scale] [] []} {[get list watch] [ apps.openshift.io] [deploymentconfigs/log deploymentconfigs/status] [] []} {[get list watch] [ image.openshift.io] [imagestreamimages imagestreammappings imagestreams imagestreamtags] [] []} {[get list watch] [ image.openshift.io] [imagestreams/status] [] []} {[get] [ project.openshift.io] [projects] [] []} {[get list watch] [ quota.openshift.io] [appliedclusterresourcequotas] [] []} {[get list watch] [ route.openshift.io] [routes] [] []} {[get list watch] [ route.openshift.io] [routes/status] [] []} {[get list watch] [ template.openshift.io] [processedtemplates templateconfigs templateinstances templates] [] []} {[get list watch] [ build.openshift.io] [buildlogs] [] []} {[get list watch] [] [resourcequotausages] [] []} {[get] [ image.openshift.io] [imagestreams/layers] [] []}] ruleResolutionErrors=[]","stacktrace":"github.com/syndesisio/syndesis/install/operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/syndesisio/syndesis/install/operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/syndesisio/syndesis/install/operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/syndesisio/syndesis/install/operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215\ngithub.com/syndesisio/syndesis/install/operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/syndesisio/syndesis/install/operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/syndesisio/syndesis/install/operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/syndesisio/syndesis/install/operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/syndesisio/syndesis/
install/operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/syndesisio/syndesis/install/operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/syndesisio/syndesis/install/operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/syndesisio/syndesis/install/operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
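The forbidden error above is Kubernetes RBAC escalation prevention at work: a subject may only grant permissions it already holds, and the syndesis-operator service account's own rules (the long `ownerrules` list) include nothing for the `camel.apache.org` API group. Reading the rejected rule tuple `{[*] [camel.apache.org] [*] [] []}` in field order (verbs, apiGroups, resources, resourceNames, nonResourceURLs), the Role the operator is trying to create is equivalent to the following. This is a reconstruction from the log, not the actual manifest shipped with Syndesis:

```yaml
# Reconstructed from the error message: the Role "camel-k" in namespace
# "syndesis" that the operator attempts to create. RBAC escalation
# prevention rejects it because the syndesis-operator service account
# does not itself hold these permissions.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: camel-k
  namespace: syndesis
rules:
- apiGroups: ["camel.apache.org"]
  resources: ["*"]
  verbs: ["*"]
```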

After a long time (about 20 minutes for me), Syndesis is installed. The server log contains this exception:

2019-03-01 16:36:32.731 ERROR [-,,,] 1 --- [ning]: pollPods] i.s.s.l.j.c.ActivityTrackingController   : Unexpected Error occurred.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://openshift.default.svc/api/v1/namespaces/syndesis/pods?labelSelector=syndesis.io/component%3Dintegration . Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods is forbidden: User "system:serviceaccount:syndesis:syndesis-server" cannot list pods in the namespace "syndesis": no RBAC policy matched.
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:470) ~[kubernetes-client-3.1.4.fuse-710001.jar!/:na]
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:407) ~[kubernetes-client-3.1.4.fuse-710001.jar!/:na]
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:379) ~[kubernetes-client-3.1.4.fuse-710001.jar!/:na]
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:343) ~[kubernetes-client-3.1.4.fuse-710001.jar!/:na]
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:327) ~[kubernetes-client-3.1.4.fuse-710001.jar!/:na]
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:605) ~[kubernetes-client-3.1.4.fuse-710001.jar!/:na]
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:70) ~[kubernetes-client-3.1.4.fuse-710001.jar!/:na]
	at io.syndesis.server.logging.jsondb.controller.ActivityTrackingController.listPods(ActivityTrackingController.java:327) ~[server-logging-jsondb-1.6-SNAPSHOT.jar!/:1.6-SNAPSHOT]
	at io.syndesis.server.logging.jsondb.controller.ActivityTrackingController.pollPods(ActivityTrackingController.java:271) ~[server-logging-jsondb-1.6-SNAPSHOT.jar!/:1.6-SNAPSHOT]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_151]
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[na:1.8.0_151]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) ~[na:1.8.0_151]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) ~[na:1.8.0_151]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_151]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_151]
	at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_151]
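The exception says the `syndesis-server` service account lacks permission to list pods in the `syndesis` namespace. In principle, a grant of that missing permission would look like the following Role/RoleBinding pair. This is a hypothetical illustration of the missing grant, not the RBAC objects the Syndesis install templates actually define:

```yaml
# Hypothetical sketch: grant the syndesis-server service account the
# "list pods" permission the exception complains about. Names are
# illustrative; the real Syndesis templates ship their own RBAC objects.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: syndesis-server-pods
  namespace: syndesis
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: syndesis-server-pods
  namespace: syndesis
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: syndesis-server-pods
subjects:
- kind: ServiceAccount
  name: syndesis-server
  namespace: syndesis
```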

However, after some more time (roughly 16 minutes), Syndesis is reinstalled again: all pods except syndesis-operator are deleted and recreated.

Issue Analytics

  • State:closed
  • Created 5 years ago
  • Comments:45 (25 by maintainers)

Top GitHub Comments

1 reaction
mkralik3 commented, Mar 5, 2019

Yes, the installation on minishift looks ok, without pods restarting.

0 reactions
heiko-braun commented, Mar 5, 2019

