Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking at while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Ceph OSD provisioning failure

See original GitHub issue

@lae, the problem I am seeing when installing on fresh bare metal is:

TASK [ansible-role-proxmox : Create Ceph OSDs] *********************************
failed: [] (item={u'device': u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["pveceph", "osd", "create", "/dev/sdb"], "delta": "0:00:01.461373", "end": "2019-11-18 20:04:43.436298", "item": {"device": "/dev/sdb"}, "msg": "non-zero return code", "rc": 25, "start": "2019-11-18 20:04:41.974925", "stderr": "device '/dev/sdb' is already in use", "stderr_lines": ["device '/dev/sdb' is already in use"], "stdout": "", "stdout_lines": []}
failed: [] (item={u'device': u'/dev/sdc'}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["pveceph", "osd", "create", "/dev/sdc"], "delta": "0:00:00.968795", "end": "2019-11-18 20:04:44.735755", "item": {"device": "/dev/sdc"}, "msg": "non-zero return code", "rc": 25, "start": "2019-11-18 20:04:43.766960", "stderr": "device '/dev/sdc' is already in use", "stderr_lines": ["device '/dev/sdc' is already in use"], "stdout": "", "stdout_lines": []}
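For context on the rc 25 failure: `pveceph osd create` refuses to provision a disk that still carries partitions, filesystem signatures, or LVM state, and reports it as "already in use". A read-only inspection sketch for checking what is actually on the disk (the device path is taken from the error above; the destructive `zap` step is deliberately left commented out):

```shell
# Inspect why pveceph reports "device '/dev/sdb' is already in use".
# Common culprits: old partitions, filesystem signatures, or LVM /
# ceph-volume state left over from a previous deployment.
DEV=/dev/sdb

lsblk -o NAME,FSTYPE,MOUNTPOINT "$DEV" 2>/dev/null || true  # partitions / fs signatures?
pvs 2>/dev/null | grep "$DEV" || true                       # stale LVM physical volume?
ceph-volume lvm list 2>/dev/null || true                    # OSD metadata from an old deploy?

# If only stale state remains, wipe the disk before re-running the role.
# DESTRUCTIVE -- erases everything on $DEV, so it stays commented out:
#   ceph-volume lvm zap "$DEV" --destroy
echo "inspected $DEV"
```

The checks themselves are read-only, so they are safe to run on a production node before deciding whether a wipe is appropriate.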

I have the following Ansible parameters set:

pve_ceph_enabled: true
pve_ceph_mds_group: all
pve_ceph_crush_rules:
  - name: hdd
pve_ceph_pools:
  - name: vm-storage
    pgs: 128
    application: rbd
    storage: true
  - name: k8-storage
    pgs: 64
    application: rbd
pve_ceph_osds:
  - device: "/dev/sdb"
  - device: "/dev/sdc"
pve_storages:
  - name: vm-storage
    type: rbd
    content:
      - images
      - rootdir
    pool: vm-storage
    username: admin
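For context, the failing step boils down to the role looping `pveceph osd create` over the OSD device list above (the `- device:` entries). A simplified sketch of that task — not the role's exact code, and the `pve_ceph_osds` variable name here is an assumption:

```yaml
# Simplified sketch of the failing task: one pveceph call per listed device.
# pveceph exits with rc 25 ("device ... is already in use") whenever the
# disk still carries partitions, filesystem signatures, or LVM state.
- name: Create Ceph OSDs
  command: "pveceph osd create {{ item.device }}"
  loop: "{{ pve_ceph_osds }}"  # assumed name of the device list variable
```

This is why the play fails on a reinstalled node even though the inventory is unchanged: the disks themselves still look "in use" to pveceph.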

Any ideas what I am missing?

Originally posted by @zenntrix in

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Comments:13 (7 by maintainers)

Top GitHub Comments

zenntrix commented, Nov 22, 2019

Thanks for confirming that the role does not support that. Your help has been much appreciated! It's nice to see a repo with an active owner.

lae commented, Nov 22, 2019

I think that question may be better suited for the pve-user mailing list. What you're basically asking is: how do you reattach two OSDs after reinstalling Proxmox on a cluster node, right? That's not a scenario this role supports, and it may not be something the Proxmox team supports either.
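Outside the role, OSDs whose data disks survived the reinstall can often be reactivated with `ceph-volume`, provided the cluster's `ceph.conf` and the bootstrap-osd keyring are restored to the fresh install first. A sketch under those assumptions — not a procedure supported by the role:

```shell
# Sketch: bringing pre-existing OSDs back up on a reinstalled node.
# Assumes /etc/ceph/ceph.conf and the bootstrap-osd keyring were
# copied back onto the fresh installation beforehand.
if command -v ceph-volume >/dev/null 2>&1; then
    # Scans the LVM metadata left on the data disks and recreates the
    # systemd units and tmpfs mounts for every OSD it finds:
    ceph-volume lvm activate --all
    result="activation attempted"
else
    result="ceph-volume not installed on this host"
fi
echo "$result"
```

Whether the reactivated OSDs rejoin cleanly still depends on the cluster accepting them (auth keys, OSD IDs), which is why the mailing list is the better venue for the full procedure.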

Read more comments on GitHub >

Top Results From Across the Web

Steps to remove unwanted or failed Ceph OSD in Red Hat ...
How to remove a failed or bad Ceph OSD deployed using dynamic provisioning in an OCS/ODF 4.x cluster? Environment: Red Hat OpenShift Container …
Read more >
OSD Service - Cephadm - Ceph Documentation
Ceph will not provision an OSD on a device that is not available. Creating New OSDs . There are a few ways...
Read more >
Ceph Common Issues
One common case for failure is that you have re-deployed a test cluster and some state may remain from a previous deployment. If...
Read more >
Rook Ceph Provisioning issue - Stack Overflow
You should set accessModes to ReadWriteOnce when using rbd. ReadWriteMany is supported by cephfs. Also because your replica is 3 and the ...
Read more >
Deploy Hyper-Converged Ceph Cluster - Proxmox VE
More capacity allows you to increase storage density, but it also means that a single OSD failure forces Ceph to recover more data...
Read more >
