Vagrant: OSD failed with "no such file or directory"

Bug Report
What happened: The task “ceph-osd : manually prepare ceph “bluestore” non-containerized osd disk(s) with collocated osd data and journal” runs in the OSD LVM scenario and fails with “no such file or directory”. See the appended log file.
What you expected to happen: Expected the OSDs to come up without issues.
How to reproduce it (minimal and precise):
- Check out master.
- Use the group_vars config provided below:
  ```shell
  mv vagrant_variables.yml.sample vagrant_variables.yml
  ```
- Install requirements:
  ```shell
  pip install -r requirements.txt
  ```
- Run:
  ```shell
  vagrant up
  ```
Share your group_vars files, inventory:
```yaml
ceph_origin: repository
ceph_repository: dev
ceph_stable_release: mimic
public_network: "192.168.42.0/24"
cluster_network: "192.168.43.0/24"
monitor_interface: eth1
devices:
  - '/dev/sdb'
  - '/dev/sdc'
osd_scenario: lvm
```
The inventory was generated by Vagrant.
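For comparison, the lvm scenario can also be driven by an explicit `lvm_volumes` list instead of handing raw `devices` to ceph-volume's batch mode. A sketch of such a group_vars fragment follows; the volume group and logical volume names are hypothetical and would need to exist on the OSD nodes beforehand:

```yaml
# Alternative group_vars sketch for osd_scenario: lvm.
# Instead of raw devices, point ceph-volume at pre-created logical volumes.
# The LV/VG names below are hypothetical examples, not from this report.
osd_scenario: lvm
lvm_volumes:
  - data: data-lv1      # hypothetical logical volume
    data_vg: data-vg1   # hypothetical volume group
  - data: data-lv2
    data_vg: data-vg1
```

This form trades convenience for control: ceph-ansible consumes the LVs as given rather than partitioning whole disks itself.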
Environment:
- OS (e.g. from /etc/os-release): Ubuntu 16.04 from Vagrant box ceph/ubuntu-xenial
- Kernel (e.g. `uname -a`): Linux osd0 4.4.0-22-generic #40-Ubuntu SMP Thu May 12 22:03:46 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
- Docker version if applicable (e.g. `docker version`):
- Ansible version (e.g. `ansible-playbook --version`): ansible 2.7.5
- ceph-ansible version (e.g. git head or tag or stable branch): git master on commit 160090b441882daa7378b43b42fb110f1f6b5d64
- Ceph version (e.g. `ceph -v`): ceph version 14.0.1-2013-g0ccdc79 (0ccdc799384801faa728d5429e82b522ee3b618b) nautilus (dev)

Attached: ansible.log
Issue Analytics
- State:
- Created 5 years ago
- Comments: 12 (5 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@rishabh-d-dave I think we should simply remove the default value for `osd_scenario` from the Vagrantfile. The few default values in the Vagrantfile were there only so you could deploy a minimal cluster without having to set anything in group_vars. I’m unsure this is really relevant; I would recommend using ceph-nano for such use cases.
@guits What about moving `osd_scenario` (and other default values) from the Vagrantfile to ceph-default? This way these values would be override-able.
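The suggestion above amounts to declaring the value as an Ansible role default. A sketch of what that could look like, assuming the conventional role layout (the exact file path is an assumption, not taken from this thread):

```yaml
# roles/ceph-defaults/defaults/main.yml (sketch)
# Role defaults sit at the very bottom of Ansible's variable-precedence
# order, so any value a user sets in group_vars/ or host_vars/ wins.
osd_scenario: lvm
```

This is precisely why the move makes the value override-able: a default hard-wired into the Vagrantfile bypasses Ansible's variable resolution entirely, while a role default is overridden by inventory and group_vars without any extra plumbing.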