`ceph-volume lvm batch --report` fails
Bug Report
What happened: `ceph-volume lvm batch --report` fails with the following error when we run the playbook against a running cluster.
How to reproduce it (minimal and precise):
fatal: [dal13cephosd01]: FAILED! => changed=false
cmd:
- podman
- run
- --rm
- --privileged
- --net=host
- --ipc=host
- --ulimit
- nofile=1024:4096
- -v
- /run/lock/lvm:/run/lock/lvm:z
- -v
- /var/run/udev/:/var/run/udev/:z
- -v
- /dev:/dev
- -v
- /etc/ceph:/etc/ceph:z
- -v
- /run/lvm/:/run/lvm/
- -v
- /var/lib/ceph/:/var/lib/ceph/:z
- -v
- /var/log/ceph/:/var/log/ceph/:z
- --entrypoint=ceph-volume
- dal13cephdash01:443/rhceph/rhceph-4-rhel8:latest
- --cluster
- ceph
- lvm
- batch
- --bluestore
- --yes
- --prepare
- /dev/sdb
- /dev/sdc
- /dev/sdd
- /dev/sde
- /dev/sdf
- /dev/sdg
- /dev/sdh
- /dev/sdi
- /dev/sdj
- /dev/sdk
- /dev/nvme0n1
- /dev/nvme1n1
- --report
- --format=json
msg: non-zero return code
rc: 1
stderr: |-
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
--> Aborting because strategy changed from bluestore.MixedType to bluestore.SingleType after filtering
What you expected to happen: We expect the playbook to continue and complete without errors, or to skip this step, as the OSDs are already configured and running.
Share your group_vars files, inventory
Environment:
- OS (e.g. from /etc/os-release): Ceph Ansible running on RHEL 7 and Ceph cluster running on RHEL 8.0
- Kernel (e.g. uname -a):
  Ceph-Ansible machine: Linux dal13cephdash01.sdslab.net 3.10.0-1062.1.1.el7.x86_64 #1 SMP Tue Aug 13 18:39:59 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  Ceph cluster machines: Linux dal13cephosd01 4.18.0-80.el8.x86_64 #1 SMP Wed Mar 13 12:02:46 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
- Docker version if applicable (e.g. docker version):
  [root@dal13cephosd01 ~]# docker version
  Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
  Version: 1.4.2
  RemoteAPI Version: 1
  Go Version: go1.12.6
  OS/Arch: linux/amd64
- Ansible version (e.g. ansible-playbook --version):
  [root@dal13cephdash01 ceph-ansible]# ansible-playbook --version
  ansible-playbook 2.8.6
  config file = /usr/share/ceph-ansible/ansible.cfg
  configured module search path = [u'/usr/share/ceph-ansible/library']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
- ceph-ansible version (e.g. git head or tag or stable branch): https://github.com/ceph/ceph-ansible
- Ceph version (e.g. ceph -v): Nautilus 4.0
Issue Analytics
- Created 4 years ago
- Comments: 7 (4 by maintainers)
Top GitHub Comments
@ashoksangee it isn't that this is how ceph-ansible is designed. A storage administrator should try to manage a cluster in the most homogeneous way possible. It is infeasible to expect the tooling to support every possible combination, and that is a design choice of ceph-volume: it allows some automatic detection to ease the LVM handling, at the cost of not supporting every non-compliant variation.
My recommendation would be to try and keep the same strategy. If you can't, then don't rely on the automatic behavior and pre-create the LVs. The ceph-volume tool will happily consume those, and ceph-ansible has full support for this.
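As an illustration of the pre-created-LV approach, the manual steps might look like the following. This is only a sketch: the VG/LV names are placeholders of my choosing, and the device path assumes one of the data disks from the report above.

```
# Turn the raw disk into an LVM physical volume and give it its own VG
pvcreate /dev/sdb
vgcreate ceph-block-sdb /dev/sdb
# Create a single LV spanning the whole VG to hold the OSD data
lvcreate -l 100%FREE -n osd-block-sdb ceph-block-sdb
```

ceph-ansible can then be pointed at these LVs through the `lvm_volumes` variable (each entry with `data` and `data_vg` keys) instead of the `devices` list, which bypasses the `batch` strategy detection that aborted here.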
@alfredodeza I think your suggestion makes a lot of sense. Thank you so much for your time.