
deploying containerized ceph-ansible stable-4 failed in centos 7

See original GitHub issue

Bug Report

What happened: I've tried to deploy containerized ceph-ansible stable-4 and it fails with this error:

The conditional check 'ceph_mon_container_stat.get('rc') == 0' failed. The error was: error while evaluating conditional (ceph_mon_container_stat.get('rc') == 0): 'ceph_mon_container_stat' is undefined

Full logs: http://paste.openstack.org/show/788157/
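For context on what this error means: the conditional reads a registered variable that was never set, which typically happens when the task that registers it was skipped. A minimal sketch of the pattern (task names and modules are illustrative, not the actual ceph-ansible source):

```yaml
# Hypothetical reconstruction of the failing pattern: a first task
# registers ceph_mon_container_stat, and a later task's conditional
# reads it. If the first task is skipped (e.g. because the playbook
# took a non-containerized code path), the variable is never defined
# and evaluating the second conditional fails exactly as reported.
- name: check if the mon container is running
  command: docker ps -q --filter name=ceph-mon
  register: ceph_mon_container_stat
  when: containerized_deployment | bool

- name: act only when the mon container check succeeded
  debug:
    msg: "mon container is up"
  when: ceph_mon_container_stat.get('rc') == 0   # undefined if skipped above
```

A defensive form would be `when: ceph_mon_container_stat is defined and ceph_mon_container_stat.get('rc') == 0`, but as the maintainer reply below suggests, the real fix here is using the right variables for the containerized playbook.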

How to reproduce it (minimal and precise):

Fresh install of CentOS 7, then:

ansible-playbook site-container.yml -i inventory

Share your group_vars files, inventory

---
dummy:
ceph_origin: repository
ceph_repository: community
ceph_stable_release: nautilus
public_network: "192.168.56.0/24"
cluster_network: "192.168.57.0/24"
monitor_interface: enp0s8
osd_pool_default_pg_num: 8
osd_scenario: non-collocated
osd_objectstore: bluestore
devices:
  - /dev/sdb
  - /dev/sdc
radosgw_interface: enp0s8
openstack_config: true
openstack_glance_pool:
 name: "images"
 pg_num: "{{ osd_pool_default_pg_num }}"
 pgp_num: "{{ osd_pool_default_pg_num }}"
 rule_name: "replicated_rule"
 type: 1
 erasure_profile: ""
 expected_num_objects: ""
 application: "rbd"
 size: "{{ osd_pool_default_size }}"
 min_size: "{{ osd_pool_default_min_size }}"
openstack_cinder_pool:
 name: "volumes"
 pg_num: "{{ osd_pool_default_pg_num }}"
 pgp_num: "{{ osd_pool_default_pg_num }}"
 rule_name: "replicated_rule"
 type: 1
 erasure_profile: ""
 expected_num_objects: ""
 application: "rbd"
 size: "{{ osd_pool_default_size }}"
 min_size: "{{ osd_pool_default_min_size }}"
openstack_nova_pool:
 name: "vms"
 pg_num: "{{ osd_pool_default_pg_num }}"
 pgp_num: "{{ osd_pool_default_pg_num }}"
 rule_name: "replicated_rule"
 type: 1
 erasure_profile: ""
 expected_num_objects: ""
 application: "rbd"
 size: "{{ osd_pool_default_size }}"
 min_size: "{{ osd_pool_default_min_size }}"
openstack_cinder_backup_pool:
 name: "backups"
 pg_num: "{{ osd_pool_default_pg_num }}"
 pgp_num: "{{ osd_pool_default_pg_num }}"
 rule_name: "replicated_rule"
 type: 1
 erasure_profile: ""
 expected_num_objects: ""
 application: "rbd"
 size: "{{ osd_pool_default_size }}"
 min_size: "{{ osd_pool_default_min_size }}"
openstack_pools:
 - "{{ openstack_glance_pool }}"
 - "{{ openstack_cinder_pool }}"
 - "{{ openstack_nova_pool }}"
 - "{{ openstack_cinder_backup_pool }}"
openstack_keys:
 - { name: client.glance, caps: { mon: "profile rbd", osd: "profile rbd pool=volumes, profile rbd pool={{ openstack_glance_pool.name }}"}, mode: "0600" }
 - { name: client.cinder, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ openstack_nova_pool.name }}, profile rbd pool={{ openstack_glance_pool.name }}"}, mode: "0600" }
 - { name: client.cinder-backup, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_cinder_backup_pool.name }}"}, mode: "0600" }
 - { name: client.openstack, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_glance_pool.name }}, profile rbd pool={{ openstack_nova_pool.name }}, profile rbd pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ openstack_cinder_backup_pool.name }}"}, mode: "0600" }
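An aside on the group_vars above: the four openstack_*_pool entries differ only in name, so YAML merge keys could express the same content more compactly. This is a sketch equivalent to the values above, assuming all the definitions live in the same file (anchors do not carry across files):

```yaml
# Shared pool settings defined once via a YAML anchor...
openstack_pool_defaults: &pool_defaults
  pg_num: "{{ osd_pool_default_pg_num }}"
  pgp_num: "{{ osd_pool_default_pg_num }}"
  rule_name: "replicated_rule"
  type: 1
  erasure_profile: ""
  expected_num_objects: ""
  application: "rbd"
  size: "{{ osd_pool_default_size }}"
  min_size: "{{ osd_pool_default_min_size }}"

# ...then merged into each pool with only the name varying.
openstack_glance_pool:
  <<: *pool_defaults
  name: "images"
openstack_cinder_pool:
  <<: *pool_defaults
  name: "volumes"
```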

inventory

[mons]
192.168.56.130 ansible_user=root ansible_ssh_pass=x become=true
192.168.56.131 ansible_user=root ansible_ssh_pass=x become=true
192.168.56.132 ansible_user=root ansible_ssh_pass=x become=true
[mgrs]
192.168.56.130 ansible_user=root ansible_ssh_pass=x become=true
192.168.56.131 ansible_user=root ansible_ssh_pass=x become=true
192.168.56.132 ansible_user=root ansible_ssh_pass=x become=true
[osds]
192.168.56.130 ansible_user=root ansible_ssh_pass=x become=true
192.168.56.131 ansible_user=root ansible_ssh_pass=x become=true
192.168.56.132 ansible_user=root ansible_ssh_pass=x become=true
[rgws]
192.168.56.130 ansible_user=root ansible_ssh_pass=x become=true
[grafana-server]
192.168.56.130 ansible_user=root ansible_ssh_pass=msreddy become=true

Environment:

  • OS (e.g. from /etc/os-release): NAME="CentOS Linux" VERSION="7 (Core)" ID="centos" ID_LIKE="rhel fedora" VERSION_ID="7" PRETTY_NAME="CentOS Linux 7 (Core)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:centos:centos:7" HOME_URL="https://www.centos.org/" BUG_REPORT_URL="https://bugs.centos.org/" CENTOS_MANTISBT_PROJECT="CentOS-7" CENTOS_MANTISBT_PROJECT_VERSION="7" REDHAT_SUPPORT_PRODUCT="centos" REDHAT_SUPPORT_PRODUCT_VERSION="7"

  • Kernel (e.g. uname -a): 3.10.0-1062.4.1.el7.x86_64

  • Docker version if applicable (e.g. docker version): Docker version 1.13.1, build 7f2769b/1.13.1

  • Ansible version (e.g. ansible-playbook --version): ansible 2.8.7

  • ceph-ansible version (e.g. git head or tag or stable branch): stable-4.0

  • Ceph version (e.g. ceph -v):

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Comments: 6 (2 by maintainers)

Top GitHub Comments

4 reactions
dsavineau commented, Jan 10, 2020

Fresh install centos7 ansible-playbook site-container.yml -i inventory

It looks like you're using the site-container.yml playbook but aren't using the right variables.

---
ceph_origin: repository
ceph_repository: community
ceph_stable_release: nautilus

Those variables are only for a non-containerized deployment installed from RPM repositories. Instead you should use something like:

---
containerized_deployment: true
ceph_docker_image: ceph/daemon
ceph_docker_image_tag: latest-nautilus
ceph_docker_registry: docker.io

Could you give it a try?

Also I noticed:

osd_scenario: non-collocated

There's no osd_scenario anymore in stable-4.0, so this variable has no effect.

https://docs.ceph.com/ceph-ansible/master/osds/scenarios.html#osd-scenario
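To illustrate the point above: in stable-4.0 the playbooks always drive OSDs through ceph-volume lvm, so there is no scenario to select. OSDs are declared either with a plain devices list or, for explicit layouts, lvm_volumes. A sketch under that assumption (device and volume names are illustrative):

```yaml
---
# Simple form: let ceph-volume "lvm batch" consume whole devices.
# No osd_scenario needed in stable-4.0.
osd_objectstore: bluestore
devices:
  - /dev/sdb
  - /dev/sdc

# Explicit alternative: point each OSD at a pre-created logical
# volume instead of a raw device (names here are hypothetical).
# lvm_volumes:
#   - data: data-lv1
#     data_vg: vg-osd0
```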

0 reactions
hmeNLE commented, Mar 4, 2021


thanks @dsavineau
