Software RAID5 size smaller than expected

Hi,

I created a RAID5 array during installation using three 8 TB devices and mounted it at /home. To be completely honest, I didn’t pay attention to its size, and after the installation I got the following info when running mdadm --detail:

/dev/md127:
           Version : 1.2
     Creation Time : Tue Jun 18 12:16:26 2024
        Raid Level : raid5
        Array Size : 7814023168 (7.28 TiB 8.00 TB)
     Used Dev Size : 3907011584 (3.64 TiB 4.00 TB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Jun 21 10:05:30 2024
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : localhost.localdomain:home
              UUID : cc0b3fcb:32f59208:89283bf9:d43e0ea9
            Events : 3919

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1

I’ve tried mdadm --grow /dev/md127 -z max, but the size remained unchanged.

What am I doing wrong? Shouldn’t the size be around 16 TB?

Thanks in advance.

I’ve already found the solution. This topic can be closed.

Out of curiosity, what is it?

(There is a tiny chance that someone encounters the same issue and finds this thread. It would be nice for them to then find the solution here too.)

Lacking a reply from the OP, and curious, I went in search of more information and found this web page: centos - raid10 in mdadm reports incorrect "Used Dev Size" - Server Fault, which states that “all of the devices has to be the same size. If they are not, then whatever device is smallest is used as the baseline. Used Dev Size is that number.”

This suggests that the original post, stating all drives are 8 TB, is likely an incorrect statement?
More likely three 4 TB drives, right?
(The capacity of one drive goes to RAID5 parity, leaving 8 TB available in the array.)

If the OP were incorrect about having three 8 TB drives, all it would take is for one of those drives to be 4 TB, and the array would function as if all the drives were 4 TB.
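To make the arithmetic explicit (a quick sketch in shell, using the rounded TB figures from the mdadm output above): RAID5 usable capacity is (number of devices − 1) × size of the smallest member.

# RAID5 usable capacity = (devices - 1) * smallest member size
# With the reported Used Dev Size (~4 TB) and 3 devices:
echo $(( (3 - 1) * 4 ))   # prints 8 (TB), matching the reported Array Size
# With three full 8 TB members, as the OP expected:
echo $(( (3 - 1) * 8 ))   # prints 16 (TB)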

Yes, Linde, exactly. Thanks for confirming the rationale.

I configured the RAID volume at installation and failed to see that it defaults to the size of the smallest drive (in this case 8TB) and that you have to manually resize it. I had to offline and resize the partition of each disk and then resize the array. I’ll post a detailed step-by-step as soon as I get to the office on Monday (it’s Saturday night down here).
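For anyone checking their own system, a quick way to confirm this situation before touching anything (a sketch; the device names match the array above) is to compare each partition’s size with its disk’s size:

lsblk -o NAME,SIZE,TYPE /dev/sdb /dev/sdc /dev/sdd
# or ask mdadm what each member actually contributes:
mdadm --examine /dev/sdb1 | grep -i 'dev size'

If the partitions report about 4 TB while the disks report 8 TB, it is the same problem.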

The solution is to take the RAID partition on each disk offline, resize it, and re-add it, one disk at a time. I did that using the following commands:

To fail and remove /dev/sdb1 from the array:

mdadm /dev/md127 --fail /dev/sdb1 --remove /dev/sdb1
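At this point the array runs degraded on two members. It’s worth confirming before proceeding (a hedged aside, not part of the original steps):

cat /proc/mdstat   # md127 should now show [3/2], i.e. one member missing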

To resize the partition:

parted
(parted) select /dev/sdb
(parted) resizepart
Partition number? 1
End? [4000GB]? 8002GB
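To double-check the new partition end before re-adding the disk, a non-interactive equivalent (adjust the device name per disk) would be:

parted /dev/sdb unit GB print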

Then add the device back to the array:

mdadm -a /dev/md127 /dev/sdb1
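One step the listing above leaves implicit: wait for the rebuild onto the re-added disk to complete before failing the next one. With RAID5, losing a second member mid-rebuild destroys the array. Either watch the progress or block until it finishes:

cat /proc/mdstat          # shows recovery progress for md127
mdadm --wait /dev/md127   # returns once the resync/recovery is done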

After this has been done for every disk, resize the array:

mdadm --grow /dev/md127 -z max
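A likely follow-up step the thread doesn’t mention (an assumption on my part, since the filesystem type isn’t stated): growing the array does not grow the filesystem on it, so /home still needs its own resize afterwards:

mdadm --detail /dev/md127 | grep 'Array Size'   # confirm the new array size
resize2fs /dev/md127    # if /home is ext4; works online
# xfs_growfs /home      # if /home is XFS; run against the mounted path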

Just for clarification, for anyone who visits this thread in the future: you wrote that it “defaults to the size of the smallest drive (in this case 8TB)”, but your resizing details show 4000GB (4 TB). So I think you meant that the smallest partition was 4 TB, and you grew it to 8 TB to fix the problem?

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.