[Feature] RAID Support
Is your feature request related to a problem? Please describe.
The reported disk usage is simply a sum of all disks on the system, not the actual usable storage.
Describe the solution you'd like
Storage should monitor a configurable list of volumes, not just block devices.
Additional context
For example, I have a test server that has two 240 GB SSDs and two 480 GB HDDs. Dashdot reports this as 1.4 TB of storage with a tiny sliver as "used." However, the two HDDs and one of the SSDs are in a ZFS pool together, so the actual state of storage on the server is one 240 GB volume with 5% used and one 480 GB volume with 33% used.
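Monitoring a configurable list of volumes instead of summing block devices can be approximated by querying each mount point directly. A minimal sketch, assuming Python's `os.statvfs`; the `VOLUMES` list and helper name are illustrative, not Dashdot's actual implementation:

```python
import os

# Hypothetical configurable list of mount points to monitor
# (instead of summing raw block devices).
VOLUMES = ["/", "/tank"]

def volume_usage(mount_point):
    """Return (size_bytes, used_bytes, used_percent) for one mounted volume."""
    st = os.statvfs(mount_point)
    size = st.f_blocks * st.f_frsize
    free = st.f_bavail * st.f_frsize
    used = size - free
    pct = 100.0 * used / size if size else 0.0
    return size, used, pct

for mp in VOLUMES:
    # Skip entries that aren't actually mounted on this machine.
    if os.path.ismount(mp) or mp == "/":
        size, used, pct = volume_usage(mp)
        print(f"{mp}: {used / 1e9:.1f} GB used of {size / 1e9:.1f} GB ({pct:.0f}%)")
```

With a volume list like this, a mirrored ZFS pool shows up once with its real capacity, rather than as the sum of its member disks.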
```
root@test:~# lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0 465.8G  0 disk
├─sda1     8:1    0 465.8G  0 part
└─sda9     8:9    0     8M  0 part
sdb        8:16   0 238.5G  0 disk
└─sdb1     8:17   0 238.5G  0 part /
sdc        8:32   0 238.5G  0 disk
├─sdc1     8:33   0 238.5G  0 part
└─sdc9     8:41   0     8M  0 part
sdd        8:48   0 465.8G  0 disk
├─sdd1     8:49   0 465.8G  0 part
└─sdd9     8:57   0     8M  0 part
```
```
root@test:~# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       234G  9.7G  213G   5% /
```
```
root@test:~# zpool status
  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 00:42:24 with 0 errors on Sun May  8 01:06:25 2022
config:

	NAME                                               STATE     READ WRITE CKSUM
	tank                                               ONLINE       0     0     0
	  mirror-0                                         ONLINE       0     0     0
	    ata-WDC_WD5002ABYS-02B1B0_WD-WCASYA237797      ONLINE       0     0     0
	    ata-WDC_WD5003ABYX-01WERA2_WD-WMAYP6798572     ONLINE       0     0     0
	cache
	  ata-Samsung_SSD_840_PRO_Series_S12RNEACC87965T   ONLINE       0     0     0

errors: No known data errors

root@test:~# zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank   464G   156G   308G        -         -    17%    33%  1.00x  ONLINE  -
```
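The pool capacity shown above can be read programmatically by parsing `zpool list` in its scripted mode (`-H` drops headers, `-p` prints exact byte counts, fields are tab-separated). A sketch, with a sample line mirroring the `tank` pool above; the parsing helper is illustrative, not Dashdot's actual code:

```python
# One line of `zpool list -Hp` output for the pool above:
# 464 GiB size, 156 GiB allocated, 308 GiB free, tab-separated exact bytes.
sample = "tank\t498216206336\t167503724544\t330712481792\t-\t-\t17\t33\t1.00\tONLINE\t-"

def parse_zpool_list(line):
    """Parse a single `zpool list -Hp` line into a usage dict."""
    fields = line.rstrip("\n").split("\t")
    name = fields[0]
    size, alloc, free = int(fields[1]), int(fields[2]), int(fields[3])
    return {"name": name, "size": size, "alloc": alloc, "free": free,
            "cap_percent": 100.0 * alloc / size}

pool = parse_zpool_list(sample)
print(f"{pool['name']}: {pool['alloc'] / 2**30:.0f} GiB of "
      f"{pool['size'] / 2**30:.0f} GiB used")  # -> tank: 156 GiB of 464 GiB used
```

Reading the pool this way yields the 33% capacity figure the dashboard should show, instead of the near-empty sum of the member disks.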
Issue Analytics
- Created: a year ago
- Reactions: 1
- Comments: 60 (33 by maintainers)

Indeed, ZFS doesn’t set a mount point for the blockDevices.
After investigating with sebhildebrant, it turned out the issue was on my end: the labels and types weren't fully erased and reset when I switched from linux-raid to ZFS.
Thank you for your time and help.
Technically there is an option for that, but I don’t think it will work for your setup as of right now.
Normally, every disk in `blockDevices` lists its partitions and mount points, but your mount point `/mnt/host_data` is not claimed by any partition, so it would get assigned to your overlay network, which would result in one big graph again.
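The "unclaimed mount point" situation described above can be checked by cross-referencing mount points against the partitions that claim them. A sketch using hard-coded data taken from this thread; the dict layout and matching logic are illustrative, not Dashdot's actual code:

```python
# Each dict mimics an entry from a blockDevices-style listing:
# partitions claim a device and may carry a mount point.
partitions = [
    {"device": "/dev/sdb1", "mount": "/"},
    {"device": "/dev/sda1", "mount": None},  # ZFS member: no mount point set
    {"device": "/dev/sdc1", "mount": None},  # ZFS member: no mount point set
]

# Mount points actually present on the system (e.g. read from /proc/mounts).
mounts = ["/", "/mnt/host_data"]

claimed = {p["mount"] for p in partitions if p["mount"]}
unclaimed = [m for m in mounts if m not in claimed]

# An unclaimed mount (here the ZFS dataset) can't be attributed to a
# physical disk, so a naive per-disk view lumps it into one big graph.
print("unclaimed mount points:", unclaimed)  # -> ['/mnt/host_data']
```

This is exactly why ZFS-backed mounts fall through: the member partitions carry no mount point, so nothing in the per-disk listing claims the dataset's mount.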