run 'ceph-volume lvm batch --report' fails

Bug Report

What happened: running 'ceph-volume lvm batch --report' fails with the following error when we run the playbook against a running cluster.

How to reproduce it (minimal and precise):

fatal: [dal13cephosd01]: FAILED! => changed=false
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - dal13cephdash01:443/rhceph/rhceph-4-rhel8:latest
  - --cluster
  - ceph
  - lvm
  - batch
  - --bluestore
  - --yes
  - --prepare
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
  - /dev/sde
  - /dev/sdf
  - /dev/sdg
  - /dev/sdh
  - /dev/sdi
  - /dev/sdj
  - /dev/sdk
  - /dev/nvme0n1
  - /dev/nvme1n1
  - --report
  - --format=json
  msg: non-zero return code
  rc: 1
  stderr: |-
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
 --> Aborting because strategy changed from bluestore.MixedType to bluestore.SingleType after filtering    

What you expected to happen: We expect the playbook to continue and complete without any errors, or to skip this step, since the OSDs are already configured and running.
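
For reference, the failing step can be re-run by hand on an OSD node with the same container image, mounts and device list shown in the task output above. The sketch below is only illustrative; it additionally sets CEPH_VOLUME_DEBUG=true, the environment variable ceph-volume reads to emit verbose debug output, and the second command lists the OSDs ceph-volume has already prepared on the node.

# Sketch only: manual re-run of the report step, reusing the image, mounts and
# devices from the failing task above; CEPH_VOLUME_DEBUG enables debug output.
podman run --rm --privileged --net=host --ipc=host \
  -e CEPH_VOLUME_DEBUG=true \
  -v /run/lock/lvm:/run/lock/lvm:z \
  -v /var/run/udev/:/var/run/udev/:z \
  -v /dev:/dev \
  -v /etc/ceph:/etc/ceph:z \
  -v /run/lvm/:/run/lvm/ \
  -v /var/lib/ceph/:/var/lib/ceph/:z \
  -v /var/log/ceph/:/var/log/ceph/:z \
  --entrypoint=ceph-volume \
  dal13cephdash01:443/rhceph/rhceph-4-rhel8:latest \
  --cluster ceph lvm batch --bluestore --prepare \
  /dev/sd{b..k} /dev/nvme0n1 /dev/nvme1n1 \
  --report --format=json

# List the OSDs ceph-volume already knows about on this node.
podman run --rm --privileged --net=host --ipc=host \
  -v /dev:/dev \
  -v /etc/ceph:/etc/ceph:z \
  -v /run/lvm/:/run/lvm/ \
  -v /var/lib/ceph/:/var/lib/ceph/:z \
  --entrypoint=ceph-volume \
  dal13cephdash01:443/rhceph/rhceph-4-rhel8:latest \
  --cluster ceph lvm list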

Share your group_vars files, inventory

Environment:

  • OS (e.g. from /etc/os-release): ceph-ansible running on RHEL 7; Ceph cluster running on RHEL 8.0

  • Kernel (e.g. uname -a): ceph-ansible machine: Linux dal13cephdash01.sdslab.net 3.10.0-1062.1.1.el7.x86_64 #1 SMP Tue Aug 13 18:39:59 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux; Ceph cluster machines: Linux dal13cephosd01 4.18.0-80.el8.x86_64 #1 SMP Wed Mar 13 12:02:46 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

  • Docker version if applicable (e.g. docker version): Docker CLI emulated by podman ("Create /etc/containers/nodocker to quiet msg."); Version: 1.4.2, RemoteAPI Version: 1, Go Version: go1.12.6, OS/Arch: linux/amd64

  • Ansible version (e.g. ansible-playbook --version): ansible-playbook 2.8.6; config file = /usr/share/ceph-ansible/ansible.cfg; configured module search path = [u'/usr/share/ceph-ansible/library']; ansible python module location = /usr/lib/python2.7/site-packages/ansible; executable location = /usr/bin/ansible-playbook; python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

  • ceph-ansible version (e.g. git head or tag or stable branch): https://github.com/ceph/ceph-ansible

  • Ceph version (e.g. ceph -v): Nautilus 4.0

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Comments: 7 (4 by maintainers)

Top GitHub Comments

2 reactions
alfredodeza commented, Nov 27, 2019

@ashoksangee it isn’t that this is how ceph-ansible is designed. A storage administrator should try to manage a cluster in the most homogeneous way possible. It is unfeasible to expect tooling to support every combination possible, and that is a design choice of ceph-volume, which allows some automatic detection to ease the LVM handling at the cost of reducing support for every other non-compliant variation.

My recommendation would be to try to keep the same strategy. If you can't, then don't rely on the automatic behavior and pre-create the LVs. The ceph-volume tool will happily consume those, and ceph-ansible has full support for this.
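
As a rough illustration of that approach (not taken from this cluster's actual layout), the sketch below carves a data LV out of one HDD and a DB LV out of one NVMe device with plain LVM commands; every device, VG/LV name and the DB size are invented for the example.

# Illustrative only: device names, VG/LV names and sizes are examples, not this
# cluster's real layout; repeat per OSD data disk as needed.
pvcreate /dev/sdb
vgcreate ceph-data-sdb /dev/sdb
lvcreate -l 100%FREE -n osd-data-sdb ceph-data-sdb

pvcreate /dev/nvme0n1
vgcreate ceph-db-nvme0 /dev/nvme0n1
lvcreate -L 60G -n osd-db-sdb ceph-db-nvme0

ceph-ansible can then be pointed at these pre-created LVs explicitly (for example via its lvm_volumes setting, which takes data/data_vg and db/db_vg entries) instead of letting lvm batch choose a strategy on its own.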

0 reactions
ashoksangee commented, Nov 27, 2019

@alfredodeza I think your suggestion makes a lot of sense. Thank you so much for your time.
