mdadm RAID 5 speed issue

Hello. I am getting ready to start doing my YouTube videos in 4K, so I wanted more speed. I got 2.5 Gb network cards and a switch. I have a Rocky 8.8 server with an mdadm RAID 5 made up of four 2 TB Seagate Barracudas, and I share the files over Samba.

On 1 Gb I got an average of 80 MB/s in both directions. On 2.5 Gb I get a solid 100 MB/s on writes but around 280 MB/s on reads. I know that RAID 5 has more overhead on writes because of the parity calculations, so my question is about the writes: will faster disks (going SSD) speed this up, or is it more of a processor issue because of the parity work? I have a single WD 1 TB rotational drive shared as my scratch volume and I get reads and writes to it of about 170 MB/s, but it is just a single disk with no RAID.

I am running an older AMD FX-6100 6-core processor with 16 GB of RAM, so it may be time for a new motherboard and processor, or it may be time to pony up the $$$ for some 4 TB SSDs (because I am going to need more space soon).
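For reference, here is roughly how I check the array layout and chunk size locally (my array shows up as /dev/md127):

cat /proc/mdstat
mdadm --detail /dev/md127    # shows the RAID level, chunk size, and member disks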

Upgrading to SSDs will definitely help improve the writes, as they are faster than spindle drives. I am guessing the drives are connected to a SATA interface; if that is the case, the current CPU is fine, but the SATA interface would become the bottleneck for throughput.

Hello,
Software RAID with mdadm is fine for up to 4 physical drives.
Your problem is not in the processor or the memory.
I won’t comment on your choice of configuration!
I do not like Seagate SATA drives after v.9.
Can you give the status of this:
sysctl -A | grep dev.raid.speed_limit_max

And if it’s what I’m thinking (200 000), why not increase it to 95% of the buffer/cache value for a single disk?
Example: 2000000

Reboot and check again :wink:

I just did this. I set the minimum to 500,000 and the maximum to 3,000,000, put them in /etc/sysctl.conf, rebooted, and then confirmed they stuck with sysctl -p.
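For anyone following along, the persistent entries in /etc/sysctl.conf look roughly like this (same values as above; the units are KB/s):

# md RAID speed limits (KB/s)
dev.raid.speed_limit_min = 500000
dev.raid.speed_limit_max = 3000000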

It is still writing at 100 MB/s (sometimes it gets up to 105). I ran top while doing a large copy; the RAID process for /dev/md127 used at most about 25% of a CPU intermittently during the copy, and Samba spiked at about 12 percent now and then.

Would a dedicated RAID controller help? Would RAID 10 be better? Would SSDs fix the issue?

Feel free to comment on my configuration. I know this isn’t the most optimal setup, but it has served me well for years of 1080p video editing with iMovie. Now I am moving to 4K with Adobe Premiere. I know I am going to have to spend some money; I am just trying to determine where it needs to be spent.

I want to stay with a Linux server, because I also have my mail server and web server running on this box. I rsync all my data to a backup server at my mom’s house every night (which is nice and simple), and rsync to a LUKS-encrypted removable drive weekly and store it offsite.

I agree SATA could be a bottleneck, but I am running SATA 6 Gb/s off the motherboard for all the RAID drives. I would think that would be enough to handle a 2.5 Gb/s network connection.

Are you seeing these speeds locally or through Samba? In my experience (with older versions of EL), Samba was not able to write as fast as the drives. If I tested locally, the speed would exceed network speeds; with NFS it would reach network speeds; and with Samba it could not reach network saturation.
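If you want to rule the raw network path in or out first, a plain TCP test with iperf3 would look something like this (the hostname is just a placeholder):

iperf3 -s                        # on the server
iperf3 -c server.example.lan     # on the desktop; add -R to test the reverse direction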

I will not comment on Samba or the other services, since according to the user the problem is in the RAID 5.
And, as a hater of Seagate SATA discs, I will only pay attention to the following things:
lsblk
smartctl -x /dev/sdX (where X is the drive letter)
Check all 4 drives (a quick loop for this is sketched below).
Look up the exact model of each disk and check the cache and transfer buffer values on the manufacturer’s website.
Check the section - Vendor Specific SMART Attributes.
Any errors?
Is the current link speed 6.0 Gb/s?
If a drive is very old, remove the jumper!
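A quick way to pull those fields for all four drives might be something like this (the sdX names are assumptions; confirm them with lsblk first):

for d in sda sdb sdc sdd; do
  echo "== /dev/$d =="
  smartctl -x /dev/$d | grep -E 'Device Model|SATA Version|Rotation Rate|Error'
done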
Once all this data has been read carefully and analyzed, and you are willing to share it, then we can talk.
Do a simple test with the following commands:
cd /tmp
time dd if=/dev/urandom of=TEST bs=1M count=20000
time cat TEST > /dev/null
It is important for this elementary test that the size of the file is more than the total memory.
After that, the speeds and times will be visible :wink:
And everyone can draw their own conclusions!
It is a good idea to monitor the CPU, memory, and disk subsystem load on a second console while the test runs.

Only then will we comment on whether there is anything else to be squeezed out of these discs.
I already know there is nothing, especially in RAID 5.
Even before any results, I can tell you the maximum that such drives can do, without knowing the exact model:
Read max ~ 420 MB/s
Write max ~ 180 MB/s

I understand what the OP stated. I have run md RAID 5 on spinning drives and found the bottleneck to be Samba, not md. In my experience, md exceeded the network until the number of concurrent streams created enough contention that the nature of spinning drives became the issue. That’s why I asked how he tested the speed. If he tests locally via dd or fio, that will narrow in on the right issue rather than chasing the wrong thing.
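If fio is installed, a local sequential test against the array might look roughly like this (the file path and size are only examples; --direct=1 bypasses the page cache so the numbers reflect the disks rather than RAM):

fio --name=seqwrite --filename=/mnt/raid/fio.test --rw=write --bs=1M \
    --size=32G --direct=1 --ioengine=libaio --group_reporting
fio --name=seqread --filename=/mnt/raid/fio.test --rw=read --bs=1M \
    --size=32G --direct=1 --ioengine=libaio --group_reporting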

Never use RAID 5, particularly not with large-capacity disks. It is highly unreliable, and if you get a disk failure, rebuilds take days; in that time you risk other disks failing, as the stress on them is great. If another disk fails during the rebuild, you can forget your data.

With SSDs, RAID 5 might still be acceptable, as they should be fast enough that the rebuild does not take that long.

If you need speed, then use RAID 10 or something like that.

In theory, yes. But that’s assuming absolute best-case performance of each and every drive connected to 6 Gb/s interfaces.

You will never be able to saturate a 6 Gb/s SATA interface with a traditional spinning hard drive, especially a consumer one. Even high-performance SATA SSDs tend to have sustained transfer rates of ~500-550 MB/sec = 4.0-4.4 Gb/s for read operations, and somewhat lower for writes. In contrast, a typical 7200-rpm 6 Gb/s SATA hard drive might be able to reach somewhere between 150-200 MB/s (1.2-1.6 Gb/s), although it might have limited bursts at higher rates, depending on cache hits.
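As a rough worked example: 2.5 Gb/s ÷ 8 ≈ 312 MB/s of raw line rate, and after protocol overhead something like 280-290 MB/s of usable payload, which is about what the OP reports seeing on reads.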

Adding RAID to the mix complicates things a bit further, as different RAID levels/implementations perform differently, with some providing a boost to read operations, and others boosting write operations etc.

As Xino has already suggested, I would run those local dd tests to verify the speed of your RAID array with no network connection involved. If the local array maxes out at near-gigabit speeds (~110-120 MB/s), then a faster network connection won’t help.
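For a quick read-side number that bypasses the page cache, something like this works too (using the md device name mentioned earlier in the thread):

dd if=/dev/md127 of=/dev/null bs=1M count=20000 iflag=direct status=progress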

While I would recommend you perform some research on RAID in general, the WintelGuy RAID Performance Calculator is a handy web-based tool I’ve used over the years to make some quick, baseline configuration checks prior to purchasing new or redeploying existing equipment for storage in enterprise environments.

Note that this is a useful tool, but does not take into account more advanced details, such as hardware RAID controller and hard drive cache sizes, which also can have a pronounced effect on the end performance of the implemented system.

I can’t stress enough how unsafe RAID 5 generally is now, especially since most drives are larger than 1 TB. Sure, it protects you from a single drive failure, but if another drive fails before you can replace the first one, then your data is GONE. The larger the capacity of the drives, the longer the rebuild time for a replaced drive, and it can take longer than you might expect!

One other thing to add: since you upgraded to 2.5 Gb NICs, have you tried enabling jumbo frames? This can speed things up a bit for Ethernet connections faster than 1 Gb.
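On Rocky 8 with NetworkManager, that would be roughly the following (the connection name is just an example, and the switch plus both NICs all have to support an MTU of 9000):

nmcli connection modify "eno1" 802-3-ethernet.mtu 9000
nmcli connection up "eno1"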

Good luck!

I actually found the issue a while back. The 2.5 Gbps card in my desktop was bad. I replaced it with a new, slightly more expensive one, and it now runs at full speed on both reads and writes. The old card had started making a weird high-pitched whine when I put it under heavy load, which is what prompted me to try a replacement.

That is why I have a daily backup and a weekly archive taken to offsite storage. I have had drives fail, and yes, the array runs slower for a day while it is rebuilding, but that is better than being down.

Never EVER use RAID 5 (or 6)!!

As you found out, it is SLOW. No matter what hardware you use, it is S-L-O-W.
It is also not safe.

These guys explain:

BAARF

RAID 10 pumps, if you want very good performance and also good redundancy.