Fails on task 'ceph-volume lvm batch --report' to see how many osds are to be created
See original GitHub issue.
Bug Report
What happened: ceph-ansible fails when running: TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created]
Error Message: The full traceback is:
WARNING: The below traceback may *not* be related to the actual failure.
File "/tmp/ansible_ceph_volume_payload_imAspY/ansible_ceph_volume_payload.zip/ansible/module_utils/basic.py", line 2564, in run_command
cmd = subprocess.Popen(args, **kwargs)
File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
raise child_exception
[WARNING]: The value 5120 (type int) in a string field was converted to u'5120' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.
[WARNING]: The value -1 (type int) in a string field was converted to u'-1' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.
[WARNING]: The value False (type bool) in a string field was converted to u'False' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.
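These warnings are harmless here, but they can be silenced by quoting the values as the messages suggest. A sketch for group_vars, using the variable names from the module_args below (check which of them you actually override):

```yaml
# quote the values so Ansible does not coerce int/bool to string with a warning
journal_size: '5120'
block_db_size: '-1'
```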
fatal: [ceph01]: FAILED! => changed=false
cmd: ceph-volume --cluster ceph lvm batch --bluestore --yes /dev/sda --report --format=json
invocation:
module_args:
action: batch
batch_devices:
- /dev/sda
block_db_devices: []
block_db_size: '-1'
cluster: ceph
containerized: 'False'
crush_device_class: null
data: null
data_vg: null
db: null
db_vg: null
destroy: true
dmcrypt: false
journal: null
journal_size: '5120'
journal_vg: null
objectstore: bluestore
osd_fsid: null
osds_per_device: 1
report: true
wal: null
wal_devices: []
wal_vg: null
msg: '[Errno 2] No such file or directory'
rc: 2
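The '[Errno 2] No such file or directory' message comes from the subprocess.Popen call in run_command, shown in the traceback: the ceph-volume executable is not found on the PATH that the Ansible module inherits. A minimal sketch of the same failure mode (the binary name here is made up):

```python
import errno
import subprocess

# run_command in module_utils/basic.py ultimately calls subprocess.Popen;
# Popen raises "[Errno 2] No such file or directory" when the executable
# is not on the PATH the module process inherits.
caught = None
try:
    subprocess.Popen(["ceph-volume-that-does-not-exist"])
except OSError as exc:  # FileNotFoundError on Python 3
    caught = exc

print(caught.errno == errno.ENOENT)  # same errno 2 as in the task output
```
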
What you expected to happen: This is the output when running the command on the host without Ansible, so I would expect something similar:
{
"changed": true,
"osds": [
{
"block.db": {},
"data": {
"human_readable_size": "222.00 GB",
"parts": 1,
"path": "/dev/sda",
"percentage": 100,
"size": 238370684928
}
}
],
"vgs": []
}
How to reproduce it (minimal and precise): ansible-playbook site.yml -i hosts -vvv
Inventory (hosts):
[mons]
ceph03
ceph02
[mgrs]
ceph03
[osds]
ceph01
group_vars/osds.yml:
osd_auto_discovery: true
osd_scenario: lvm
group_vars/all.yml:
configure_firewall: true
monitor_interface: p1p1
ceph_mon_firewall_zone: public
ceph_mgr_firewall_zone: public
ceph_osd_firewall_zone: public
ceph_origin: distro
public_network: bbb.aa.xxx.c/dd
cluster_network: bbb.aa.xxx.c/dd
ip_version: ipv4
containerized_deployment: False
dashboard_enabled: False
Environment:
- OS: Red Hat Enterprise Linux Server release 7.7 (Maipo)
- Kernel: 3.10.0-1062.9.1
- Docker version if applicable: N/A
- Ansible version: ansible 2.8.7
- ceph-ansible version: stable-4.0
- Ceph version: 14.2.5 nautilus (stable)
Issue Analytics
- Created 4 years ago
- Comments: 5 (1 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Step 0) Edit the library/ceph_volume.py shebang from
#!/usr/bin/python
to
#!/usr/bin/env python
This makes sure the module uses the Python interpreter from your environment.
Step 1) Find the path ceph-volume is installed at:
which ceph-volume
In return I see /usr/sbin/ceph-volume; you may see a different path.
Step 2) Within the ansible directory, run:
grep -r ceph_volume:
This returns the list of files where ceph_volume: occurs.
Step 3) Using roles/ceph-osd/tasks/scenarios/lvm.yml as an example, edit each task where ceph_volume: is used and add the path to the task's environment.
The docs are here if you are not sure what I mean: https://docs.ansible.com/ansible/latest/reference_appendices/faq.html
Step 4) Change all relevant files to use the correct path.
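The environment addition from Step 3 can be sketched as follows; the task name and the /usr/sbin path are assumptions (taken from the `which ceph-volume` output above), so substitute your own:

```yaml
# roles/ceph-osd/tasks/scenarios/lvm.yml (sketch; surrounding task fields
# are illustrative, only the environment addition matters)
- name: use ceph-volume lvm batch to create the osds
  ceph_volume:
    action: batch
  environment:
    # prepend the directory found via `which ceph-volume` so the module's
    # subprocess call can locate the binary
    PATH: "/usr/sbin:{{ ansible_env.PATH }}"
```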
@GMW99 Hi Gabryel, thanks a lot for taking the time to detail the solution for this issue and for acknowledging it on the ceph-users list. It helped us fix the issue quickly.