
openshift_control_plane : Wait for control plane pods to appear

See original GitHub issue

Description

I want to install OpenShift Origin 3.11, but when I run `deploy_cluster.yaml` I get:

```
TASK [openshift_control_plane : Wait for control plane pods to appear] **********
Monday 12 November 2018  14:47:58 +0100 (0:00:00.097)       0:03:50.274 *******
FAILED - RETRYING: Wait for control plane pods to appear (60 retries left).
FAILED - RETRYING: Wait for control plane pods to appear (60 retries left).
FAILED - RETRYING: Wait for control plane pods to appear (60 retries left).
FAILED - RETRYING: Wait for control plane pods to appear (59 retries left).
FAILED - RETRYING: Wait for control plane pods to appear (59 retries left).
FAILED - RETRYING: Wait for control plane pods to appear (59 retries left).
...
FAILED - RETRYING: Wait for control plane pods to appear (54 retries left).
FAILED - RETRYING: Wait for control plane pods to appear (54 retries left).
FAILED - RETRYING: Wait for control plane pods to appear (54 retries left).
```

Version

Please put the following version information in the code block indicated below.

  • Your ansible version per `ansible --version`:

```
[root@kak-tst-openshift-admin openshift-ansible]# ansible --version
ansible 2.6.5
  config file = /root/openshift-ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Jul 13 2018, 13:06:57) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
```

If you’re operating from a git clone:

  • The output of `git describe`:

```
[root@kak-tst-openshift-admin openshift-ansible]# git describe
openshift-ansible-3.11.43-1-8-gf82a3a6
```

Operating system: CentOS Linux release 7.5.1804 (Core)

Inventory file:

```
[OSEv3:children]
masters
nodes
etcd
lb
nfs

[masters]
kak-tst-openshift-master1.kak-tst.internal
kak-tst-openshift-master2.kak-tst.internal
kak-tst-openshift-master3.kak-tst.internal

[etcd]
kak-tst-openshift-master1.kak-tst.internal
kak-tst-openshift-master2.kak-tst.internal
kak-tst-openshift-master3.kak-tst.internal

[lb]
kak-tst-openshift-lb.kak-tst.internal

[nodes]
kak-tst-openshift-master[1:3].kak-tst.internal openshift_node_group_name='node-config-master'
kak-tst-openshift-infra[1:3].kak-tst.internal openshift_schedulable=true openshift_node_group_name='node-config-infra'
kak-tst-openshift-node[1:4].kak-tst.internal openshift_schedulable=true openshift_node_group_name='node-config-compute'

[nfs]
kak-tst-nfs.kak-tst.internal

[OSEv3:vars]
openshift_additional_repos=[{'id': 'centos-paas', 'name': 'centos-paas', 'baseurl': 'https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311', 'gpgcheck': '0', 'enabled': '1'}]
ansible_ssh_user=root
ansible_become=True
ansible_service_broker_install=False
openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability
openshift_node_groups=[{'name': 'node-config-master', 'labels': ['node-role.kubernetes.io/master=true']}, {'name': 'node-config-infra', 'labels': ['node-role.kubernetes.io/infra=true']}, {'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true']}]
openshift_deployment_type=origin
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
osm_default_node_selector='node-role.kubernetes.io/compute=true'
openshift_hosted_router_selector='node-role.kubernetes.io/infra=true'
openshift_hosted_registry_selector='node-role.kubernetes.io/infra=true'
openshift_hosted_router_selector='node-role.kubernetes.io/infra=true'
openshift_hosted_router_replicas=1
openshift_hosted_registry_replicas=1
openshift_master_cluster_method=native
openshift_master_cluster_hostname=osconsole.kak-tst.internal
openshift_master_cluster_public_hostname=osconsole.kak-tst.internal
openshift_master_console_port=443
openshift_master_api_port=443
openshift_metrics_install_metrics=True
openshift_logging_install_logging=True
osm_use_cockpit=true
osm_cockpit_plugins=['cockpit-kubernetes']
openshift_master_metrics_public_url=https://hawkular-metrics.apps.kak-tst.internal
oreg_url=kak-tst-katello.kak-tst.internal:5000/kak-origin_docker_container-openshift_origin-${component}:${version}
openshift_docker_blocked_registries=registry.access.redhat.com,registry.hub.docker.com,github.com,docker.io
openshift_docker_insecure_registries=kak-tst-katello.kak-tst.internal:5000,172.30.0.0/16
openshift_docker_additional_registries=kak-tst-katello.kak-tst.internal:5000
openshift_examples_modify_imagestreams=true
openshift_master_identity_providers=[{'name': 'freeipa', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=admin,cn=users,cn=accounts,dc=kak-tst,dc=internal', 'bindPassword': '*******', 'ca': 'ipa-ca.crt', 'insecure': 'false', 'url': 'ldap://kak-tst-ipa.kak-tst.internal/cn=users,cn=accounts,dc=kak-tst,dc=internal?uid?sub?(memberOf=cn=fejleszto_1,cn=groups,cn=accounts,dc=kak-tst,dc=internal)'}]
openshift_master_default_subdomain=apps.kak-tst.internal
```

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 7

Top GitHub Comments

1 reaction

swepps1 commented, Nov 14, 2018

Hello,

Thank you for your help! I found the solution. My problem was the LDAP identity provider's "ca" option: `'ca': 'ipa-ca.crt'`

Thank you!

0 reactions

breeze1974 commented, Apr 2, 2019

The steps below fixed my issue. I use a proxy in my environment, so I had to add the cluster hostnames to `no_proxy`:

```
$ cat <<EOF > /etc/environment
http_proxy=http://10.xx.xx.xx:8080
https_proxy=http://10.xx.xx.xx:8080
ftp_proxy=http://10.xx.xx.xx:8080
no_proxy=127.0.0.1,localhost,172.17.240.84,172.17.240.85,172.17.240.86,172.17.240.87,10.96.0.0/12,10.244.0.0/16,v-openshift1-lnx1,v-node01-lnx1,v-node02-lnx1,console,console.inet.co.za
EOF
```

```
$ cat <<EOF > /etc/systemd/system/docker.service.d/no-proxy.conf
[Service]
Environment="NO_PROXY=artifactory-za.devel.iress.com.au, 172.30.9.71, 172.17.240.84, 172.17.240.85, 172.17.240.86, 172.17.240.87"
Environment="HTTP_PROXY=http://10.xx.xx.xx:8080/"
Environment="HTTPS_PROXY=http://10.xx.xx.xx:8080/"
EOF
```


Top Results From Across the Web

Playbook fails on "Waiting for control plane pods to start"
FAILED - RETRYING: Wait for all control plane pods to become ready (58 retries left).

RETRYING: Wait for control plane pods to appear · Issue #9575
Folks, 'Wait for control plane pods to appear' failing means API server failed to start. There might be a billion reasons for that...

Fix Openshift error: Wait for all control plane pods to become ...
Solution. This happens when the control plane container cannot be started for some reason. Step 1: Check containers status on the Openshift master...

ansible - Openshift_control_plane : Report control plane errors
Solved! Move my environment to higher specifications. I saw some logs show that the resources I use before 1vcpu and RAM 2GB...

openshift/openshift-ansible - Gitter
I'm trying install openshift 3.11 using ansible ... FAILED - RETRYING: Wait for all control plane pods to come up and become ready...
