Random Read & Write Patterns
Now let’s see how the performance of the disk subsystems in random read and write modes depends on the data chunk size.
We will discuss the results of the disk subsystems at processing random-address data in two ways, following our updated methodology. For small data chunks we will draw graphs showing how the number of operations per second depends on the data chunk size; for large chunks we will compare performance in terms of data-transfer rate in megabytes per second. This approach lets us evaluate the disk subsystem’s performance in two typical scenarios. Working with small data chunks is typical of databases, where the number of operations per second matters more than sheer speed. Working with large data blocks is close to working with small files, so the traditional measurement of speed in megabytes per second becomes more relevant.
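The two metrics are directly related: for a given chunk size, throughput in megabytes per second is just the number of operations per second multiplied by the chunk size. A minimal sketch (the figures below are made-up illustrations, not our measurements) shows why each metric suits its scenario:

```python
def mb_per_sec(iops: float, chunk_size_kb: float) -> float:
    """Convert operations per second into megabytes per second
    for a given data chunk size."""
    return iops * chunk_size_kb / 1024

# Database-like load: 200 random 4KB reads per second amount to
# less than 1 MB/s, so operations per second is the telling figure.
small = mb_per_sec(200, 4)      # 0.78 MB/s

# File-like load: a mere 50 operations per second at 1MB chunks
# already means 50 MB/s, so MB/s is the natural metric here.
large = mb_per_sec(50, 1024)    # 50 MB/s

print(f"{small:.2f} MB/s vs {large:.0f} MB/s")
```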
We will start out with reading.
The HighPoint controller is somewhat faster at reading small data chunks from mirror arrays. But, curiously enough, the Promise proves to be faster at reading very small data chunks from a single drive.
Random reading produces odd results with RAID0: the two-disk array proves to be the fastest on the Promise controller, while on the HighPoint it is the three-disk array that shows the highest performance. Otherwise, the six arrays all deliver very similar results, and the Promise controller enjoys but a small lead. The performance of each controller must be limited by its ability to process so many small data chunks.
The Promise is so much better with RAID5 that its degraded array is faster than the ordinary arrays on the HighPoint controller. Take note of the performance hit suffered by the degraded arrays!
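The performance hit of a degraded RAID5 array follows from how a read is served: a block that lived on the failed drive cannot simply be fetched, it has to be reconstructed by XOR-ing all the surviving blocks of the stripe. A minimal sketch with a hypothetical four-disk stripe illustrates the extra work:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-sized byte strings together, as RAID5 parity does."""
    return bytes(reduce(lambda a, b: a ^ b, tpl) for tpl in zip(*blocks))

# Hypothetical 4-disk RAID5 stripe: three data blocks plus their parity.
d0, d1, d2 = b"\x11" * 4, b"\x22" * 4, b"\x44" * 4
parity = xor_blocks([d0, d1, d2])

# Healthy array: reading d1 costs a single disk access.
# Degraded array (the disk holding d1 has failed): the controller must
# read ALL surviving blocks of the stripe and XOR them to rebuild d1,
# turning one read request into several.
rebuilt = xor_blocks([d0, d2, parity])
assert rebuilt == d1
```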
The RAID6 results are similar to what we have seen with RAID5: the Promise controller is better again.
Well, the HighPoint has problems reading large data blocks, at least from mirror arrays. As a consequence, this controller’s RAID10 is far slower than the single drive.
The Promise controller shows good, even though not ideal, scalability with RAID0, whereas the HighPoint has huge problems reading large data blocks: its performance is awful with every one of the three arrays.
The Promise behaves oddly and inexplicably with RAID5: the degraded array turns out to be faster than the three-disk array. The HighPoint controller still shows very low performance.
The Promise produces wonders with RAID6, too: the degraded arrays are faster than the normal array with very large data blocks, and there is very little difference between the minus-one and minus-two arrays. The HighPoint is overall similar but easier to explain: it looks like the degradation of the arrays on this controller brings their performance up to a more or less acceptable level (though they cannot overtake the competing controller’s arrays, of course).