Deploying containerized ceph-ansible stable-4 fails on CentOS 7
Bug Report
What happened: I tried to deploy containerized ceph-ansible stable-4 and it fails with this error:
The conditional check 'ceph_mon_container_stat.get('rc') == 0' failed. The error was: error while evaluating conditional (ceph_mon_container_stat.get('rc') == 0): 'ceph_mon_container_stat' is undefined
Full logs: http://paste.openstack.org/show/788157/
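For context, an Ansible conditional like this one hard-fails whenever the registered variable was never set, e.g. because the task that registers it was skipped. A hypothetical illustration (not the actual ceph-ansible task) of how such a check is usually guarded:

```yaml
# Hypothetical example task, for illustration only. If the task that
# registers ceph_mon_container_stat never ran (e.g. it was skipped),
# evaluating .get('rc') raises "'ceph_mon_container_stat' is undefined".
# Adding an `is defined` guard avoids the hard failure.
- name: report a running monitor container
  debug:
    msg: "monitor container is running"
  when:
    - ceph_mon_container_stat is defined
    - ceph_mon_container_stat.get('rc') == 0
```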
How to reproduce it (minimal and precise):
Fresh install of CentOS 7, then:
ansible-playbook site-container.yml -i inventory
Share your group_vars files, inventory
---
dummy:
ceph_origin: repository
ceph_repository: community
ceph_stable_release: nautilus
public_network: "192.168.56.0/24"
cluster_network: "192.168.57.0/24"
monitor_interface: enp0s8
osd_pool_default_pg_num: 8
osd_scenario: non-collocated
osd_objectstore: bluestore
devices:
  - /dev/sdb
  - /dev/sdc
radosgw_interface: enp0s8
openstack_config: true
openstack_glance_pool:
  name: "images"
  pg_num: "{{ osd_pool_default_pg_num }}"
  pgp_num: "{{ osd_pool_default_pg_num }}"
  rule_name: "replicated_rule"
  type: 1
  erasure_profile: ""
  expected_num_objects: ""
  application: "rbd"
  size: "{{ osd_pool_default_size }}"
  min_size: "{{ osd_pool_default_min_size }}"
openstack_cinder_pool:
  name: "volumes"
  pg_num: "{{ osd_pool_default_pg_num }}"
  pgp_num: "{{ osd_pool_default_pg_num }}"
  rule_name: "replicated_rule"
  type: 1
  erasure_profile: ""
  expected_num_objects: ""
  application: "rbd"
  size: "{{ osd_pool_default_size }}"
  min_size: "{{ osd_pool_default_min_size }}"
openstack_nova_pool:
  name: "vms"
  pg_num: "{{ osd_pool_default_pg_num }}"
  pgp_num: "{{ osd_pool_default_pg_num }}"
  rule_name: "replicated_rule"
  type: 1
  erasure_profile: ""
  expected_num_objects: ""
  application: "rbd"
  size: "{{ osd_pool_default_size }}"
  min_size: "{{ osd_pool_default_min_size }}"
openstack_cinder_backup_pool:
  name: "backups"
  pg_num: "{{ osd_pool_default_pg_num }}"
  pgp_num: "{{ osd_pool_default_pg_num }}"
  rule_name: "replicated_rule"
  type: 1
  erasure_profile: ""
  expected_num_objects: ""
  application: "rbd"
  size: "{{ osd_pool_default_size }}"
  min_size: "{{ osd_pool_default_min_size }}"
openstack_pools:
  - "{{ openstack_glance_pool }}"
  - "{{ openstack_cinder_pool }}"
  - "{{ openstack_nova_pool }}"
  - "{{ openstack_cinder_backup_pool }}"
openstack_keys:
  - { name: client.glance, caps: { mon: "profile rbd", osd: "profile rbd pool=volumes, profile rbd pool={{ openstack_glance_pool.name }}"}, mode: "0600" }
  - { name: client.cinder, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ openstack_nova_pool.name }}, profile rbd pool={{ openstack_glance_pool.name }}"}, mode: "0600" }
  - { name: client.cinder-backup, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_cinder_backup_pool.name }}"}, mode: "0600" }
  - { name: client.openstack, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_glance_pool.name }}, profile rbd pool={{ openstack_nova_pool.name }}, profile rbd pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ openstack_cinder_backup_pool.name }}"}, mode: "0600" }
inventory
[mons]
192.168.56.130 ansible_user=root ansible_ssh_pass=x become=true
192.168.56.131 ansible_user=root ansible_ssh_pass=x become=true
192.168.56.132 ansible_user=root ansible_ssh_pass=x become=true
[mgrs]
192.168.56.130 ansible_user=root ansible_ssh_pass=x become=true
192.168.56.131 ansible_user=root ansible_ssh_pass=x become=true
192.168.56.132 ansible_user=root ansible_ssh_pass=x become=true
[osds]
192.168.56.130 ansible_user=root ansible_ssh_pass=x become=true
192.168.56.131 ansible_user=root ansible_ssh_pass=x become=true
192.168.56.132 ansible_user=root ansible_ssh_pass=x become=true
[rgws]
192.168.56.130 ansible_user=root ansible_ssh_pass=x become=true
[grafana-server]
192.168.56.130 ansible_user=root ansible_ssh_pass=msreddy become=true
Environment:
- OS (e.g. from /etc/os-release): NAME="CentOS Linux" VERSION="7 (Core)" ID="centos" ID_LIKE="rhel fedora" VERSION_ID="7" PRETTY_NAME="CentOS Linux 7 (Core)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:centos:centos:7" HOME_URL="https://www.centos.org/" BUG_REPORT_URL="https://bugs.centos.org/" CENTOS_MANTISBT_PROJECT="CentOS-7" CENTOS_MANTISBT_PROJECT_VERSION="7" REDHAT_SUPPORT_PRODUCT="centos" REDHAT_SUPPORT_PRODUCT_VERSION="7"
- Kernel (e.g. uname -a): 3.10.0-1062.4.1.el7.x86_64
- Docker version if applicable (e.g. docker version): Docker version 1.13.1, build 7f2769b/1.13.1
- Ansible version (e.g. ansible-playbook --version): ansible 2.8.7
- ceph-ansible version (e.g. git head or tag or stable branch): stable-4.0
- Ceph version (e.g. ceph -v):
Issue Analytics
- State:
- Created 4 years ago
- Comments: 6 (2 by maintainers)
Top GitHub Comments
It looks like you're using the site-container.yml playbook but aren't using the right variables.
Those variables are only for non-containerized deployments installing from the rpm repository. Instead you should use something like:
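The snippet the maintainer posted was not preserved in this archive. As a hedged sketch of what a containerized stable-4.0 deployment typically sets in group_vars (the image/tag values below are assumptions; adjust to your registry and release):

```yaml
# Sketch only: variables commonly set for a containerized ceph-ansible
# deployment, in place of ceph_origin/ceph_repository. Image and tag
# are example values, not taken from the original comment.
containerized_deployment: true
ceph_docker_registry: docker.io
ceph_docker_image: ceph/daemon
ceph_docker_image_tag: latest-nautilus
```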
Could you give it a try ?
Also, I noticed:
There's no osd_scenario anymore in stable-4.0, so this variable has no effect.
https://docs.ceph.com/ceph-ansible/master/osds/scenarios.html#osd-scenario
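In other words, in stable-4.0 the OSD role derives everything from the devices list (OSDs are created via ceph-volume), so the group_vars above can simply drop the osd_scenario line:

```yaml
# stable-4.0: osd_scenario is gone; OSDs are created by ceph-volume
# from the devices list alone.
osd_objectstore: bluestore
devices:
  - /dev/sdb
  - /dev/sdc
```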
thanks @dsavineau