
rolling_update error expected an absolute path in /dev/ or /sys or a unit name: Invalid argument


Bug Report

What happened:

TASK [scan ceph-disk osds with ceph-volume if deploying nautilus] *************************************************************************************************************************************************
task path: /home/ansible/ceph-ansible/infrastructure-playbooks/rolling_update.yml:396
Tuesday 22 September 2020  11:13:00 +0200 (0:00:00.027)       0:04:23.012 ***** 
fatal: [ceph1]: FAILED! => changed=true 
  cmd:
  - ceph-volume
  - --cluster=ceph
  - simple
  - scan
  - --force
  delta: '0:00:00.381884'
  end: '2020-09-22 11:13:01.367947'
  msg: non-zero return code
  rc: 1
  start: '2020-09-22 11:13:00.986063'
  stderr: |2-
     stderr: lsblk: /var/lib/ceph/osd/ceph-0: not a block device
     stderr: Bad argument "/var/lib/ceph/osd/ceph-0", expected an absolute path in /dev/ or /sys or a unit name: Invalid argument
    Running command: /sbin/cryptsetup status tmpfs
     stderr: blkid: error: tmpfs: No such file or directory
     stderr: lsblk: tmpfs: not a block device
    Traceback (most recent call last):
      File "/usr/sbin/ceph-volume", line 11, in <module>
        load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
      File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 40, in __init__
        self.main(self.argv)
      File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
        return f(*a, **kw)
      File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 151, in main
        terminal.dispatch(self.mapper, subcommand_args)
      File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
        instance.main()
      File "/usr/lib/python3/dist-packages/ceph_volume/devices/simple/main.py", line 33, in main
        terminal.dispatch(self.mapper, self.argv)
      File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
        instance.main()
      File "/usr/lib/python3/dist-packages/ceph_volume/devices/simple/scan.py", line 378, in main
        device = Device(self.encryption_metadata['device'])
      File "/usr/lib/python3/dist-packages/ceph_volume/util/device.py", line 92, in __init__
        self._parse()
      File "/usr/lib/python3/dist-packages/ceph_volume/util/device.py", line 138, in _parse
        vgname, lvname = self.path.split('/')
    ValueError: not enough values to unpack (expected 2, got 1)
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
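The root cause is visible at the bottom of the traceback: `Device._parse` assumes every device path is an LVM `vg/lv` pair, but `simple scan` hands it the encryption device `tmpfs` here. A minimal sketch (not the actual ceph-volume source) of why that unpack fails:

```python
# Sketch of the assumption made in ceph_volume/util/device.py line 138:
# an LVM path like "vg/lv" splits into exactly two parts.
def parse_lv_path(path):
    vgname, lvname = path.split('/')  # expects exactly one '/'
    return vgname, lvname

# Works for a real logical-volume reference:
print(parse_lv_path('ceph-block/osd-0'))

# ...but "tmpfs" (the device found by cryptsetup/blkid above) contains
# no '/', so split() yields one element and unpacking raises:
try:
    parse_lv_path('tmpfs')
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 1)
```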

Already checked #5362; the playbook already includes the --force option.

What you expected to happen:

All OSDs working fine, and cluster updated to latest release.

How to reproduce it (minimal and precise):

Launched command : ansible-playbook -vv -i inventory infrastructure-playbooks/rolling_update.yml

Share your group_vars files, inventory and full ceph-ansible log

group_vars/all.yml :

ntp_service_enabled: true
ntp_daemon_type: chronyd
ceph_origin: repository
ceph_repository: community
ceph_mirror: http://download.ceph.com
ceph_stable_key: https://download.ceph.com/keys/release.asc
ceph_stable_release: octopus
ceph_stable_repo: "{{ ceph_mirror }}/debian-{{ ceph_stable_release }}"
node_exporter_container_image: "docker.io/prom/node-exporter:v0.17.0"
node_exporter_port: 9100
grafana_container_image: "docker.io/grafana/grafana:7.1.5"
prometheus_container_image: "docker.io/prom/prometheus:latest"
prometheus_container_cpu_period: 100000
prometheus_container_cpu_cores: 2
prometheus_container_memory: 4
prometheus_data_dir: /var/lib/prometheus
prometheus_conf_dir: /etc/prometheus
prometheus_port: 9092

group_vars/osds.yaml

---
devices:
  - /dev/sdb
  - /dev/sdc

inventory:

[mons]
ceph1
ceph2
ceph3
[osds]
ceph1
ceph2
ceph3
[mgrs]
ceph0
ceph1
[mdss]
ceph2
ceph3
[grafana-server]
ceph1

Environment:

  • OS : Debian GNU/Linux 10 Buster
  • Kernel : 4.19.0-10-amd64
  • Docker version if applicable : 19.03.12
  • Ansible version : 2.9.6
  • ceph-ansible version : stable-5.0
  • Ceph version (e.g. ceph -v): 15.2.4 octopus

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 6 (2 by maintainers)

Top GitHub Comments

2 reactions
dsavineau commented, Sep 23, 2020

This is a ceph-volume issue: a regression introduced in the simple scan subcommand by [1] in v15.2.5.

This has been fixed by [2] in the Octopus branch but isn't part of any Octopus release yet. Note that this affects Nautilus too [3].

Any ideas how to solve/workaround this?

Stay on 15.2.4 or use the Octopus devel packages.

If you want to use the Octopus devel packages with ceph-ansible, you can use something like:

ceph_origin: repository
ceph_repository: dev
ceph_dev_branch: octopus
ceph_dev_sha1: latest

This will use the build from the shaman repository, like https://shaman.ceph.com/api/repos/ceph/octopus/latest/ubuntu/bionic/repo or https://shaman.ceph.com/api/repos/ceph/octopus/latest/debian/buster/repo (I’ve not tested this, but I’m sure this works for CentOS at least)
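To illustrate how those `ceph_dev_*` variables map onto a shaman repo URL, here is a small sketch (the URL layout is inferred from the examples above; the helper itself is hypothetical, not part of ceph-ansible):

```python
# Build the shaman repo URL that ceph_dev_branch / ceph_dev_sha1 resolve to.
def shaman_repo_url(branch, sha1, distro, release):
    return (f"https://shaman.ceph.com/api/repos/ceph/"
            f"{branch}/{sha1}/{distro}/{release}/repo")

# With ceph_dev_branch=octopus and ceph_dev_sha1=latest on Debian Buster:
print(shaman_repo_url('octopus', 'latest', 'debian', 'buster'))
# -> https://shaman.ceph.com/api/repos/ceph/octopus/latest/debian/buster/repo
```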

[1] https://github.com/ceph/ceph/commit/b5fb55457c4800bbb50e578931171d109b9916df [2] https://github.com/ceph/ceph/commit/084b252dbf78e908408ef882de7f2e0af494652a [3] https://bugzilla.redhat.com/show_bug.cgi?id=1872983

1 reaction
dsavineau commented, Dec 1, 2020

@NamrataSitlani Because the fix isn’t included in 15.2.7

15.2.6 was only released to fix a CVE in the msgr v2 protocol [1]; 15.2.7 was only released to fix a data loss issue in RGW [2].

The ceph-volume simple scan issue should be fixed in the next Octopus release: 15.2.8

[1] https://lists.ceph.io/hyperkitty/list/ceph-announce@ceph.io/thread/5ZF7ZILHLAD6O6RHSP6Q2O56VBSPBZAI/ [2] https://lists.ceph.io/hyperkitty/list/ceph-announce@ceph.io/thread/Y267KT2TQJ3VT7UQCC2ES4ZZV2OTL46P/
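Putting the timeline from these comments together (regression introduced in v15.2.5, fix landing in 15.2.8, with 15.2.4 unaffected), a quick helper — my own sketch, not part of ceph-ansible — to check whether a given Octopus version is affected:

```python
# Is this Octopus release hit by the "simple scan" regression?
# Affected range per the comments above: 15.2.5 <= version < 15.2.8.
def simple_scan_affected(version):
    parts = tuple(int(x) for x in version.split('.'))
    return (15, 2, 5) <= parts < (15, 2, 8)

print(simple_scan_affected('15.2.4'))  # False (the version to stay on)
print(simple_scan_affected('15.2.7'))  # True  (fix not included)
print(simple_scan_affected('15.2.8'))  # False (fix released)
```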
