Ceph OSD provisioning failure
@lae, the problem I am seeing when installing on a fresh bare-metal host is:
```
TASK [ansible-role-proxmox : Create Ceph OSDs] *********************************
failed: [proxmox-test.corp.####.com] (item={u'device': u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["pveceph", "osd", "create", "/dev/sdb"], "delta": "0:00:01.461373", "end": "2019-11-18 20:04:43.436298", "item": {"device": "/dev/sdb"}, "msg": "non-zero return code", "rc": 25, "start": "2019-11-18 20:04:41.974925", "stderr": "device '/dev/sdb' is already in use", "stderr_lines": ["device '/dev/sdb' is already in use"], "stdout": "", "stdout_lines": []}
failed: [proxmox-test.corp.####.com] (item={u'device': u'/dev/sdc'}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["pveceph", "osd", "create", "/dev/sdc"], "delta": "0:00:00.968795", "end": "2019-11-18 20:04:44.735755", "item": {"device": "/dev/sdc"}, "msg": "non-zero return code", "rc": 25, "start": "2019-11-18 20:04:43.766960", "stderr": "device '/dev/sdc' is already in use", "stderr_lines": ["device '/dev/sdc' is already in use"], "stdout": "", "stdout_lines": []}
```
I have the following Ansible parameters set:
```yaml
pve_ceph_crush_rules:
  - name: hdd
pve_ceph_enabled: true
pve_ceph_mds_group: all
pve_ceph_pools:
  - name: vm-storage
    pgs: 128
    application: rbd
    storage: true
  - name: k8-storage
    pgs: 64
    application: rbd
pve_storages:
  - name: vm-storage
    type: rbd
    content:
      - images
      - rootdir
    pool: vm-storage
    username: admin
    monhost:
      - proxmox-test.corp.####.com
pve_ceph_osds:
  - device: "/dev/sdb"
  - device: "/dev/sdc"
```
Any ideas what I am missing?
_Originally posted by @zenntrix in https://github.com/lae/ansible-role-proxmox/issues/73#issuecomment-555426425_
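For context on the error itself: `pveceph osd create` refuses any disk that already carries LVM volumes, partitions, or filesystem signatures, which is common on reinstalled bare-metal hosts where a previous Ceph deployment left its metadata behind. The following is a minimal diagnostic-and-wipe sketch using standard `ceph-volume` and util-linux tools (not part of this role); the device names are taken from the report above, and note that zapping irreversibly destroys all data on the disk:

```sh
# Inspect what is occupying the disk (e.g. LVM volumes from an old Ceph OSD).
lsblk /dev/sdb
ceph-volume lvm list

# If the disk only holds stale metadata from a previous install, zap it.
# WARNING: this irreversibly destroys everything on the device.
ceph-volume lvm zap /dev/sdb --destroy
wipefs --all /dev/sdb

# Then re-run the role, or create the OSD manually:
pveceph osd create /dev/sdb
```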
Top GitHub Comments
Thanks for confirming that the role does not support that. Your help has been much appreciated! It's nice to see a repo with an active owner.
I think that question may be better suited for the pve-user mailing list. What you're basically asking is: how do you reattach two OSDs after reinstalling Proxmox on a cluster node, right? That's not a scenario this role supports, and it may not be something the Proxmox team supports, either.
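If the OSD data must survive the reinstall rather than be recreated, the usual approach outside this role is to let `ceph-volume` rediscover and reactivate the existing LVM-backed OSDs once the node has rejoined the cluster. A hedged sketch, assuming the cluster fsid and OSD keyrings on the LVM volumes are still intact:

```sh
# Scan all disks for existing OSD metadata and start the matching OSD services.
ceph-volume lvm activate --all
```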