
Fails on task 'ceph-volume lvm batch --report' to see how many osds are to be created


Bug Report

What happened: ceph-ansible fails when running: TASK [ceph-config : run ‘ceph-volume lvm batch --report’ to see how many osds are to be created]

Error Message: The full traceback is:

WARNING: The below traceback may *not* be related to the actual failure.
  File "/tmp/ansible_ceph_volume_payload_imAspY/ansible_ceph_volume_payload.zip/ansible/module_utils/basic.py", line 2564, in run_command
    cmd = subprocess.Popen(args, **kwargs)
  File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
    raise child_exception

[WARNING]: The value 5120 (type int) in a string field was converted to u'5120' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.

[WARNING]: The value -1 (type int) in a string field was converted to u'-1' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.

[WARNING]: The value False (type bool) in a string field was converted to u'False' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.
fatal: [ceph01]: FAILED! => changed=false 
  cmd: ceph-volume --cluster ceph lvm batch --bluestore --yes /dev/sda --report --format=json
  invocation:
    module_args:
      action: batch
      batch_devices:
      - /dev/sda
      block_db_devices: []
      block_db_size: '-1'
      cluster: ceph
      containerized: 'False'
      crush_device_class: null
      data: null
      data_vg: null
      db: null
      db_vg: null
      destroy: true
      dmcrypt: false
      journal: null
      journal_size: '5120'
      journal_vg: null
      objectstore: bluestore
      osd_fsid: null
      osds_per_device: 1
      report: true
      wal: null
      wal_devices: []
      wal_vg: null
  msg: '[Errno 2] No such file or directory'
  rc: 2

What you expected to happen: This is the output when running the command directly on the host, without Ansible, so I presume something similar:

{
    "changed": true, 
    "osds": [
        {
            "block.db": {}, 
            "data": {
                "human_readable_size": "222.00 GB", 
                "parts": 1, 
                "path": "/dev/sda", 
                "percentage": 100, 
                "size": 238370684928
            }
        }
    ], 
    "vgs": []
}
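For reference, the command that produced this report on the host (copied verbatim from the cmd field in the failure output above) is:

ceph-volume --cluster ceph lvm batch --bluestore --yes /dev/sda --report --format=json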

How to reproduce it (minimal and precise): ansible-playbook site.yml -i hosts -vvv

Inventory (hosts):

[mons]
ceph03
ceph02
[mgrs]
ceph03
[osds]
ceph01

group_vars/osds.yml:

osd_auto_discovery: true
osd_scenario: lvm

group_vars/all.yml:

configure_firewall: true
monitor_interface: p1p1
ceph_mon_firewall_zone: public
ceph_mgr_firewall_zone: public
ceph_osd_firewall_zone: public
ceph_origin: distro
monitor_interface: p1p1
public_network: bbb..aa.xxx.c/dd
cluster_network: bbb.aa..xxx.c/dd
ip_version: ipv4
containerized_deployment: False
dashboard_enabled: False

Environment:

  • OS: Red Hat Enterprise Linux Server release 7.7 (Maipo)
  • Kernel: 3.10.0-1062.9.1
  • Docker version if applicable: N/A
  • Ansible version: ansible 2.8.7
  • ceph-ansible version: stable-4.0
  • Ceph version: 14.2.5 nautilus (stable)


Top GitHub Comments

GMW99 commented, Jun 30, 2020 (2 reactions)

@GMW99 Hi. I am facing a similar issue with ceph-volume --cluster ceph inventory --format json. May I know how exactly this issue was solved? How can I append the appropriate paths?

Step 0) Edit the shebang of library/ceph_volume.py from #!/usr/bin/python to #!/usr/bin/env python. This makes sure the module runs under the Python interpreter in your environment.
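In other words, only the first line of library/ceph_volume.py changes:

Before: #!/usr/bin/python
After:  #!/usr/bin/env python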

Step 1) Find the path ceph-volume is installed at: which ceph-volume. In return I see /usr/sbin/ceph-volume; you may see something different.

Step 2) Within the ceph-ansible directory I ran grep -r ceph_volume: and got the list of files where ceph_volume: occurs:

  • infrastructure-playbooks/filestore-to-bluestore.yml
  • roles/ceph-config/tasks/main.yml
  • roles/ceph-osd/tasks/scenarios/lvm.yml
  • library/ceph_volume.py (ignore this file; we already changed it in step 0)

Step 3) Using roles/ceph-osd/tasks/scenarios/lvm.yml as an example, if we open it and go to where ceph_volume: is used, we see:

---
- name: "use ceph-volume to create {{ osd_objectstore }} osds"
  ceph_volume:
    cluster: "{{ cluster }}"
    objectstore: "{{ osd_objectstore }}"
    data: "{{ item.data }}"
    data_vg: "{{ item.data_vg|default(omit) }}"
    journal: "{{ item.journal|default(omit) }}"
    journal_vg: "{{ item.journal_vg|default(omit) }}"
    db: "{{ item.db|default(omit) }}"
    db_vg: "{{ item.db_vg|default(omit) }}"
    wal: "{{ item.wal|default(omit) }}"
    wal_vg: "{{ item.wal_vg|default(omit) }}"
    crush_device_class: "{{ item.crush_device_class|default(omit) }}"
    dmcrypt: "{{ dmcrypt|default(omit) }}"
    action: "{{ 'prepare' if containerized_deployment else 'create' }}"
  environment:
    CEPH_VOLUME_DEBUG: 1
    CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment else None }}"
    CEPH_CONTAINER_BINARY: "{{ container_binary }}"
    PYTHONIOENCODING: utf-8
  with_items: "{{ lvm_volumes }}"
  tags: prepare_osd

Now add the path to the task's environment block; it will look something like this (note that ansible_env.PATH is only defined when fact gathering is enabled):

  environment:
    CEPH_VOLUME_DEBUG: 1
    CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment else None }}"
    CEPH_CONTAINER_BINARY: "{{ container_binary }}"
    PYTHONIOENCODING: utf-8
    PATH: "{{ ansible_env.PATH }}:/your/ceph-volume/path"

The docs are here if you are not sure what I mean: https://docs.ansible.com/ansible/latest/reference_appendices/faq.html

Step 4) Change all the relevant files (from step 2) to include the correct path, as in the sketch below.
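For instance, after the edit, the failing report task in roles/ceph-config/tasks/main.yml might look roughly like this. It is a minimal sketch, not the verbatim ceph-ansible source: the module parameters mirror the module_args shown in the failure output above, devices and osds_per_device are assumed to be the usual ceph-ansible group_vars, the register name lvm_batch_report is hypothetical, and /usr/sbin stands in for whatever which ceph-volume returned in step 1.

- name: "run 'ceph-volume lvm batch --report' to see how many osds are to be created"
  ceph_volume:
    cluster: "{{ cluster }}"
    objectstore: "{{ osd_objectstore }}"
    batch_devices: "{{ devices }}"
    osds_per_device: "{{ osds_per_device }}"
    report: true
    action: batch
  environment:
    CEPH_VOLUME_DEBUG: 1
    PYTHONIOENCODING: utf-8
    # Append the directory found via 'which ceph-volume' in step 1.
    PATH: "{{ ansible_env.PATH }}:/usr/sbin"
  register: lvm_batch_report  # hypothetical name; the real task registers its own variable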

SachinMaharana commented, Jul 3, 2020 (1 reaction)

@GMW99 Hi Gabryel, thanks a lot for taking the time to detail the solution for this issue and for acknowledging it on the ceph-users list. It helped us fix the issue quickly.
