Random Read & Write Patterns
Now let’s see how the performance of the disk subsystems in random read and write modes depends on the data chunk size.
We will discuss the results of the disk subsystems at processing random-address data in two ways, based on our updated methodology. For small data chunks we will draw graphs showing how the number of operations per second depends on the chunk size. For large chunks we will compare performance in terms of data-transfer rate in megabytes per second. This approach lets us evaluate the disk subsystem’s performance in two typical scenarios: working with small data chunks is typical of databases, where the number of operations per second matters more than sheer speed, while working with large data chunks is much like working with large files, where the traditional measurement of speed in megabytes per second is more relevant.
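The two metrics are related through the chunk size, which is why each one is more telling in its own range. A rough illustration (the function names here are ours, for explanation only, not part of any benchmark tool):

```python
# Conversion between the two metrics used in this test:
# operations per second (IOPS) and data-transfer rate (MB/s).

def iops_to_mbps(iops: float, chunk_size_bytes: int) -> float:
    """Data-transfer rate in MB/s for a given IOPS figure and chunk size."""
    return iops * chunk_size_bytes / 1_000_000

def mbps_to_iops(mbps: float, chunk_size_bytes: int) -> float:
    """Operations per second needed to sustain a given MB/s rate."""
    return mbps * 1_000_000 / chunk_size_bytes

# 10,000 operations per second with 4 KB chunks is only about 41 MB/s,
# while the same IOPS with 1 MB chunks would exceed 10 GB/s -- which is
# why MB/s becomes the more meaningful metric as chunks grow large.
print(iops_to_mbps(10_000, 4096))       # about 41 MB/s
print(iops_to_mbps(10_000, 1_048_576))  # about 10486 MB/s
```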
We will start out with reading.
The short queue depth lowers the speed of reading in small data chunks: the four-disk arrays are somewhat slower than the single drive, and the eight-disk arrays are much slower still. Funnily enough, the RAID10 arrays are a little faster than the RAID0 ones.
The four-disk arrays are also better than the eight-disk ones among the checksum-based array types. It is good, though, that the RAID5 and RAID6 arrays with the same number of disks deliver similar performance.
Sequential speed is what matters for large data chunks, so the multi-disk arrays take the lead here. The RAID0 arrays are now faster than the RAID10 ones.
The same goes for this group of arrays: the RAID6 arrays are slower because they have to process two checksums rather than one, so the amount of useful data read per unit of time is smaller than with the RAID5.
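The checksum overhead is easy to quantify. With n disks, a RAID5 stripe holds one checksum (parity) block and a RAID6 stripe holds two, so the share of each stripe that carries user data is (n−1)/n versus (n−2)/n. A minimal sketch of that arithmetic (the helper name is ours, for illustration):

```python
# Fraction of each full stripe that holds user data rather than checksums,
# given the disk count and the number of checksum (parity) blocks per stripe:
# one for RAID5, two for RAID6.

def useful_fraction(disks: int, parity_blocks: int) -> float:
    """Share of a stripe that is user data: (disks - parity) / disks."""
    return (disks - parity_blocks) / disks

# For the eight-disk arrays in this test:
print(useful_fraction(8, 1))  # RAID5: 0.875
print(useful_fraction(8, 2))  # RAID6: 0.75
```

So at the same raw sequential speed, an eight-disk RAID6 delivers only 6/7 of the useful data rate of an equally sized RAID5, which matches the gap seen in the graphs.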