Sequential Read & Write Patterns
In this test IOMeter sends a stream of sequential read or write requests with a request queue depth of 4. The size of the requested data block changes each minute, so we can see how the array's sequential read/write speed depends on the data block size. This test is indicative of the highest speed a disk array can achieve.
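To make the access pattern concrete, here is a minimal sketch of the kind of request stream described above. The block-size ladder, the step duration, and the function names are assumptions for illustration, not IOMeter's actual implementation.

```python
# Hypothetical sketch of the sequential access pattern described above.
# Block sizes and queue depth mirror the test description; everything
# else (names, ladder of sizes) is an assumption for illustration.

BLOCK_SIZES = [512 * 2**i for i in range(12)]  # 512 B up to 1 MB, stepped each minute
QUEUE_DEPTH = 4  # up to four requests outstanding at once

def sequential_requests(block_size, count, start_offset=0):
    """Yield (offset, length) pairs for a purely sequential stream."""
    offset = start_offset
    for _ in range(count):
        yield (offset, block_size)
        offset += block_size  # next request starts where the last one ended

# With a queue depth of 4, the drive always sees a contiguous stream it
# can read ahead on, which is why this test shows peak linear speeds.
pattern = list(sequential_requests(4096, 4))
# pattern == [(0, 4096), (4096, 4096), (8192, 4096), (12288, 4096)]
```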
The controller delivers high sequential read speeds. The RAID10 arrays try to read even small data blocks from both disks of a mirror pair simultaneously, which increases the resulting performance. They cannot match the RAID0 arrays, but the result is good overall: this part of the controller's firmware is very efficient.
Take note that the degraded RAID10's graph coincides with that of a RAID0 built out of half the number of disks. It looks like the controller stops reading from both disks of a mirror pair simultaneously once the array is degraded, irrespective of whether a particular pair is healthy or not.
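The coincidence of the two graphs can be illustrated with a toy throughput model. The per-disk speed and layouts below are hypothetical; only the ratios matter.

```python
# Toy model of the observation above: if the controller stops splitting
# reads between the two halves of a mirror, a degraded RAID10 reads like
# a RAID0 of half as many disks. The per-disk figure is an assumption.

DISK_SPEED = 100.0  # MB/s per drive, an assumed figure

def raid0_read(disks):
    return disks * DISK_SPEED  # stripes are read from all disks in parallel

def raid10_read(mirror_pairs, split_reads=True):
    # With split_reads=True each mirror pair contributes like two RAID0
    # disks; without splitting, each pair contributes like a single disk.
    per_pair = 2 if split_reads else 1
    return mirror_pairs * per_pair * DISK_SPEED

healthy = raid10_read(4, split_reads=True)    # 8-disk RAID10, mirrors split
degraded = raid10_read(4, split_reads=False)  # controller stops splitting
assert degraded == raid0_read(4)  # equals a RAID0 of half the disks
```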
The full-featured RAID5 and RAID6 draw nice-looking graphs, whereas their degraded versions do well with small data blocks but slow down with large ones.
The RAID10 arrays behave somewhat oddly at sequential writing. The 4-disk RAID10 is very slow; this must be the reason for its low speed when writing large random-address data blocks in the earlier test. The 8-disk RAID10 acts up, too: its speed is somewhat lower than that of the degraded RAID10, which in its turn is, as expected, equal to the 4-disk RAID0.
The problems with writing large data blocks can be observed again: here, the 4-disk RAID6 is extremely slow. The other healthy arrays do well enough, producing neat graphs.
As for the degraded arrays, the RAID6 are almost as fast as their healthy counterparts even with rather small data blocks, far smaller than a full stripe. The RAID5, on the contrary, slows down greatly when one of its disks fails.
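The RAID5 slowdown has a straightforward cause: a block that lived on the failed disk must be rebuilt by XOR-ing the corresponding blocks of every surviving disk in the stripe. The 3+1 layout and block contents below are illustrative only.

```python
# Why degraded RAID5 reads are expensive: reconstructing one missing
# block requires reading the whole stripe from the surviving disks and
# XOR-ing it together. Layout (3 data disks + 1 parity) is illustrative.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"\x11" * 4, b"\x22" * 4, b"\x44" * 4]  # three data blocks in a stripe
parity = xor_blocks(data)                        # parity = XOR of all data blocks

# The disk holding data[1] fails; rebuild its block from the survivors.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]  # one degraded read touched every surviving disk
```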