OKD 3.11 - deploy_cluster.yml fails ("Unable to connect to the server: unexpected EOF")
Description
The latest commit (8ce8a45542ed29f0b325417a9aab1b673f33c2e1) doesn’t allow the deploy_cluster.yml playbook to finish successfully.
I can see the recent commit didn’t pass tests - is that OK?
Version
- Your ansible version (ansible --version): 2.6.5
- If you’re running from playbooks installed via RPM: playbooks/deploy_cluster.yml
- The output of ansible-playbook -i infrastructure/openshift/ansible/hosts.cfg openshift-ansible/playbooks/deploy_cluster.yml:
TASK [openshift_logging_kibana : Set logging-kibana service] ***************************************************************************************************************************************
fatal: [openshift-master]: FAILED! => {"changed": false, "msg": {"cmd": "/bin/oc get service logging-kibana -o json -n openshift-logging", "results": [{}], "returncode": 1, "stderr": "Unable to connect to the server: unexpected EOF\n", "stdout": ""}}
Steps To Reproduce
$ ansible-playbook -i infrastructure/openshift/ansible/hosts.cfg openshift-ansible/playbooks/prerequisites.yml
$ ansible-playbook -i infrastructure/openshift/ansible/hosts.cfg openshift-ansible/playbooks/deploy_cluster.yml
Expected Results
deploy_cluster.yml finishes successfully
Observed Results
TASK [openshift_logging_kibana : Set logging-kibana service] ***************************************************************************************************************************************
fatal: [openshift-master]: FAILED! => {"changed": false, "msg": {"cmd": "/bin/oc get service logging-kibana -o json -n openshift-logging", "results": [{}], "returncode": 1, "stderr": "**Unable to connect to the server**: unexpected EOF\n", "stdout": ""}}
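One way to check whether the API server itself is flapping while the playbook runs is to probe it directly from the master. This is a rough sketch only - the hostname and port come from the inventory below and the kubeconfig path is the 3.11 default, so adjust for your cluster:
$ curl -k https://small.irisium.poc:443/healthz
$ oc --config=/etc/origin/master/admin.kubeconfig get nodes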
Additional Information
- Your operating system and version: Ansible host: Ubuntu 18.04.1 LTS; OKD master & nodes: CentOS Linux release 7.6.1810 (Core)
- Your inventory file (especially any non-standard configuration parameters):
[masters]
small-openshift-master
[etcd]
small-openshift-master
[nodes]
small-openshift-master openshift_node_group_name=node-config-master
small-node-infra openshift_node_group_name=node-config-infra
small-node-compute openshift_node_group_name=node-config-compute
[OSEv3:children]
masters
nodes
etcd
[OSEv3:vars]
ansible_user=openshift
ansible_become=yes
#Shouldn't be needed when we are using the official RHEL repositories
openshift_additional_repos=[{'id': 'centos-okd', 'name': 'centos-okd', 'baseurl' :'http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/', 'gpgcheck' :'0', 'enabled' :'1'}]
#htpasswd as identity provider configured for tests only, intended to be reconfigured to use keycloak
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_htpasswd_file=/home/openshift/.htpasswd
openshift_deployment_type=origin
openshift_master_api_port=443
openshift_master_console_port=443
openshift_master_default_subdomain=small.irisium.poc
openshift_master_cluster_hostname=small.irisium.poc
openshift_master_cluster_public_hostname=small.irisium.poc
osm_custom_cors_origins=['small.irisium.poc']
openshift_logging_install_logging=true
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra":"true"}
debug_level=2
#Local storage configuration
#Bugs:
# - config map for local volume provisioner not structured properly
# - storage classes not being created
# - installation task does not pick up the image variable (it uses the default value)
# Those things have to be worked around by some inventory customization or by shifting this part
# of the configuration to Kubernetes-native YAML configs
#
# Run infrastructure/deploy/openshift/local-volume-provisioner/fix_local_volume_provisioner.sh to fix this issue.
openshift_persistentlocalstorage_enabled=True
openshift_persistentlocalstorage_classes=['local-hdd']
openshift_persistentlocalstorage_path=/mnt/local-storage
openshift_persistentlocalstorage_provisionner_image=quay.io/external_storage/local-volume-provisioner:v2.2.0
@vrutkovs - thanks a lot! You made our day! It looks like this is the cause of the issue we had.
I put:
docker_version="1.13.1-75.git8633870.el7.centos.x86_64"
to the openshift-ansible inventory file and it made it work perfectly 😃
PS. We should buy you these 😉 https://allegro.pl/zestaw-zoltych-kaczuszek-gumowych-do-kapieli-24-sz-i7066893024.html
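For anyone hitting the same error: the pin goes into the [OSEv3:vars] section of the inventory. A minimal sketch based on the line above (the comment is only for illustration):
[OSEv3:vars]
#Pin docker to a build that is not affected by the bug linked below
docker_version="1.13.1-75.git8633870.el7.centos.x86_64"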
Thanks!
It could be a docker issue - https://bugzilla.redhat.com/show_bug.cgi?id=1655214 - make sure you avoid using the -84 build for now.
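To check whether the broken build is already installed and roll it back if needed, something along these lines should work (a sketch only - the inventory path and exact package list are assumptions based on the setup above):
# from the Ansible host: see which docker build each node has
$ ansible nodes -i infrastructure/openshift/ansible/hosts.cfg -b -m command -a "rpm -q docker"
# on an affected node: roll back to the known-good -75 build
$ sudo yum downgrade docker-1.13.1-75.git8633870.el7.centos docker-client-1.13.1-75.git8633870.el7.centos docker-common-1.13.1-75.git8633870.el7.centos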