Random Read & Write Patterns
Now let’s see how the performance of the disk subsystems in random read and write modes depends on the data chunk size.
We will discuss the random-address results of the disk subsystems in two ways, based on our updated methodology. For small data chunks we will draw graphs showing how the number of operations per second depends on the data chunk size. For large chunks we will compare performance in terms of data-transfer rate in megabytes per second. This approach lets us evaluate a disk subsystem in two typical scenarios: working with small data chunks is typical of databases, where the number of operations per second matters more than sheer speed, whereas working with large data blocks is much like working with small files, so the traditional measurement of speed in megabytes per second is more relevant for that kind of load.
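The two metrics are two views of the same quantity: throughput equals the number of operations per second multiplied by the chunk size. A minimal sketch of the conversion (the 8 KB chunk and the IOPS figure are arbitrary numbers for illustration, not results from our tests):

```python
def iops_to_mbps(iops: float, chunk_bytes: int) -> float:
    """Convert operations per second to megabytes per second."""
    return iops * chunk_bytes / 1_000_000

def mbps_to_iops(mbps: float, chunk_bytes: int) -> float:
    """Convert megabytes per second back to operations per second."""
    return mbps * 1_000_000 / chunk_bytes

# Example: 10,000 operations per second at 8 KB chunks
# amounts to 81.92 MB/s of throughput.
print(iops_to_mbps(10_000, 8_192))  # → 81.92
```

This is why IOPS graphs are more telling for small chunks (the per-operation overhead dominates) while MB/s graphs are more telling for large ones (the transfer itself dominates).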
We will start out with reading.
The two RAID10 arrays are beyond competition when reading data in small portions: even the eight-disk RAID0 cannot beat their highly efficient reading from the mirrors. The degraded RAID10 does well, too, suffering only a very small performance hit.
Data are read from the checksum-based arrays in a similar manner: each of them is somewhat slower than the single disk. The degraded arrays, even the RAID6 missing two disks, are only slightly inferior to the healthy ones, which means they recover data from checksums without losing much speed.
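To see why that recovery can be cheap, here is a minimal sketch of how a degraded RAID5 read rebuilds a lost block: the parity block is the XOR of the data blocks in a stripe, so any one missing block equals the XOR of all the surviving blocks. (RAID6 adds a second, differently computed checksum, which is what lets it survive two missing disks.) The tiny three-disk stripe below is a made-up example, not our test configuration:

```python
def xor_blocks(blocks):
    """XOR equal-sized blocks together byte by byte."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# A tiny 3-disk stripe: two data blocks plus their parity block.
d0, d1 = b"ABCD", b"WXYZ"
parity = xor_blocks([d0, d1])

# Disk 0 fails: rebuild its block from the survivors.
rebuilt = xor_blocks([d1, parity])
assert rebuilt == d0
```

The reconstruction is a single pass of XOR over data the controller has to read anyway, which matches the small performance hit we observe.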
When the data chunks are very big, the higher linear speed of the RAID0 arrays comes into play, helping them beat the same-size RAID10 arrays. The controller seems to be able to read from both disks in each mirror: otherwise, the eight-disk RAID10 would not be able to outperform the four-disk RAID0. The degraded RAID10 “realizes” that one of its mirrors is defective and reads from only one disk in each mirror.
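The reasoning above can be put as back-of-the-envelope arithmetic. Assuming an ideal scaling model and a hypothetical single-disk streaming speed S (the 100 MB/s figure is an assumption for illustration, not a measured result):

```python
S = 100.0  # assumed single-disk linear read speed, MB/s

raid0_4 = 4 * S            # four independent stripes
raid10_8 = 8 * S           # both disks of each of the 4 mirrors contribute
raid10_8_degraded = 4 * S  # only one disk per mirror is read

# The eight-disk RAID10 can only beat the four-disk RAID0 if the
# controller really does read from both halves of each mirror.
print(raid10_8 > raid0_4)  # → True
```

If the controller read from only one disk per mirror, the eight-disk RAID10 would top out at the same 4×S as the four-disk RAID0, which is not what we observe.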
Linear speed is the decisive factor for the RAID5 and RAID6 arrays, too, when they process very large data blocks. As a result, the eight-disk arrays take the lead. The degraded arrays are expectedly slower than the healthy ones; the performance hit is smaller for RAID5 and larger for RAID6.