
Deployment is successful but ceph osd tree shows no OSDs!


Hi, the deployment completes successfully, but ceph osd tree shows no OSDs.

[root@kolla ceph-ansible]# ssh ceph-4-2
Last login: Tue Aug 22 18:16:15 2017 from 10.1.0.10
[root@ceph-4-2 ~]# ceph osd tree
ID WEIGHT TYPE NAME    UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1      0 root default
[root@ceph-4-2 ~]# ceph -s
    cluster 1439d592-555b-4ebc-89f7-4d93df2ae4b0
     health HEALTH_ERR
            72 pgs are stuck inactive for more than 300 seconds
            72 pgs stuck inactive
            72 pgs stuck unclean
            no osds
     monmap e2: 1 mons at {ceph-4-2=10.1.0.42:6789/0}
            election epoch 5, quorum 0 ceph-4-2
        mgr no daemons active
     osdmap e2: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
      pgmap v3: 72 pgs, 2 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  72 creating
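
A reasonable first check when ceph -s reports “no osds” after an apparently successful playbook run is whether the ceph-osd daemons ever started on the storage node and what they logged. The commands below are a sketch only, not taken from the report; they assume the standard systemd unit names, log paths, and the ceph-disk tool shipped with Jewel/Kraken-era packages, and use osd.0 purely as an example:

# List any ceph-osd units systemd knows about and whether they are running
systemctl list-units 'ceph-osd@*'
systemctl status ceph-osd@0

# Look for activation or authentication errors in the OSD logs
journalctl -u ceph-osd@0 --no-pager | tail -n 50
tail -n 50 /var/log/ceph/ceph-osd.0.log

# Show how ceph-disk classifies each device (prepared, active, journal, ...)
ceph-disk list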

It is weird; it looks like the disks weren’t formatted properly:

[root@ceph-4-2 ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    0 59.6G  0 disk
├─sda1        8:1    0    1G  0 part /boot
├─sda2        8:2    0    4G  0 part [SWAP]
└─sda3        8:3    0 54.6G  0 part /
nvme0n1     259:21   0  1.8T  0 disk
├─nvme0n1p1 259:22   0  1.8T  0 part /var/lib/ceph/osd/ceph-0
└─nvme0n1p2 259:23   0    5G  0 part
nvme1n1     259:6    0  1.8T  0 disk
├─nvme1n1p1 259:7    0  1.8T  0 part /var/lib/ceph/osd/ceph-1
└─nvme1n1p2 259:8    0    5G  0 part
nvme2n1     259:0    0  1.8T  0 disk
├─nvme2n1p1 259:1    0  1.8T  0 part /var/lib/ceph/osd/ceph-2
└─nvme2n1p2 259:2    0    5G  0 part
nvme3n1     259:24   0  1.8T  0 disk
├─nvme3n1p1 259:25   0  1.8T  0 part
└─nvme3n1p2 259:26   0    5G  0 part
nvme4n1     259:9    0  1.8T  0 disk
├─nvme4n1p1 259:10   0  1.8T  0 part /var/lib/ceph/osd/ceph-4
└─nvme4n1p2 259:11   0    5G  0 part
nvme5n1     259:3    0  1.8T  0 disk
├─nvme5n1p1 259:4    0  1.8T  0 part /var/lib/ceph/osd/ceph-5
└─nvme5n1p2 259:5    0    5G  0 part
nvme6n1     259:15   0  1.8T  0 disk
├─nvme6n1p1 259:16   0  1.8T  0 part /var/lib/ceph/osd/ceph-6
└─nvme6n1p2 259:17   0    5G  0 part
nvme7n1     259:27   0  1.8T  0 disk
├─nvme7n1p1 259:28   0  1.8T  0 part /var/lib/ceph/osd/ceph-7
└─nvme7n1p2 259:29   0    5G  0 part
nvme8n1     259:12   0  1.8T  0 disk
├─nvme8n1p1 259:13   0  1.8T  0 part /var/lib/ceph/osd/ceph-8
└─nvme8n1p2 259:14   0    5G  0 part
nvme9n1     259:18   0  1.8T  0 disk
├─nvme9n1p1 259:19   0  1.8T  0 part /var/lib/ceph/osd/ceph-9
└─nvme9n1p2 259:20   0    5G  0 part
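
The lsblk output above actually suggests the data partitions were created and are mounted under /var/lib/ceph/osd/ceph-N, so the symptom looks less like a formatting failure and more like the OSDs never being registered with the monitor. A rough way to confirm that (commands assumed for illustration, using ceph-0 as an example) is to compare what is inside one mounted OSD directory with what the monitor knows:

# An activated filestore OSD directory contains whoami, fsid, keyring, current/, etc.
ls /var/lib/ceph/osd/ceph-0
cat /var/lib/ceph/osd/ceph-0/whoami

# Check whether the monitor has any osd.* keys registered at all
ceph auth list | grep 'osd\.'

# The OSD map should list the OSDs once they register
ceph osd dump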

playbook.zip

group_vars and hosts.zip

Any advice would be much appreciated.

Thank you very much.

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Comments: 15 (9 by maintainers)

Top GitHub Comments

1 reaction
leseb commented, Aug 24, 2017

Got it, working on a patch.

0 reactions
leseb commented, Aug 25, 2017

@Masber glad to hear it’s solved, I’ll merge #1803 soon, still trying to figure out why the CI is failing.
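
For anyone hitting the same symptom: once the fix is in place, re-running the playbook and re-checking the OSD map should show the OSDs up and in. The invocation below is only a sketch; it assumes the usual ceph-ansible entry point (site.yml) and an inventory file named hosts, matching the files the reporter attached:

# Re-run ceph-ansible against the same inventory
ansible-playbook -i hosts site.yml

# On the monitor node, all ten OSDs should now report up and in
ceph osd stat
ceph osd tree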
