Random Read & Write Patterns
Now let's see how the controllers' performance in random read and write modes depends on the size of the processed data block.
We will discuss the results in two ways. For small data chunks we will draw graphs showing how the number of operations per second depends on the chunk size; for large chunks we will compare performance in terms of data-transfer rate in megabytes per second. This approach lets us evaluate the disk subsystem in two typical scenarios. Working with small data chunks is typical of databases, where the number of operations per second matters more than sheer speed. Working with large data blocks is much like working with small files, and the traditional measurement of speed in megabytes per second becomes more relevant.
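The two metrics describe the same throughput at a given block size; which one is more readable depends on where you are on the block-size axis. A minimal illustration (our own helper functions, not part of the test suite used in the review):

```python
# Illustrative conversion between the two metrics used in this section.
# MB/s = IOPS x block size; each metric is clearer at one end of the range.

def iops_to_mbps(iops: float, block_size_kb: float) -> float:
    """Throughput in MB/s for a given request rate and block size."""
    return iops * block_size_kb / 1024

def mbps_to_iops(mbps: float, block_size_kb: float) -> float:
    """Request rate implied by a given throughput and block size."""
    return mbps * 1024 / block_size_kb

# A database-like load: 20,000 ops/s on 4KB blocks is only ~78 MB/s,
# so operations per second is the telling figure for small blocks.
print(iops_to_mbps(20000, 4))    # 78.125

# A file-like load: 200 MB/s on 2MB blocks is just 100 ops/s,
# so MB/s is the natural metric for large blocks.
print(mbps_to_iops(200, 2048))   # 100.0
```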
Let’s start with reading.
The controllers go neck and neck here. Indeed, we shouldn’t have expected a big difference with RAID0. On the other hand, the Areca and 3ware cope with small requests a little bit faster than the other controllers.
The LSI and HighPoint boast an indisputable advantage with RAID10. Their excellent algorithms of reading from mirror pairs leave no chance to their opponents. The 3ware is somewhat better than the rest of the controllers, getting further away from them as the size of the data block increases.
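What "reading from mirror pairs" means in practice: for every read the controller may dispatch the request to whichever disk of the pair is expected to serve it sooner. A minimal sketch of one common heuristic, shortest outstanding queue (our own illustration, not any vendor's actual firmware logic):

```python
# Sketch of a mirror-pair read scheduler: send each read to the less
# loaded disk of the pair, so random reads split across both members.

from dataclasses import dataclass, field

@dataclass
class Disk:
    name: str
    queue: list = field(default_factory=list)  # outstanding requests

def pick_mirror(pair: tuple) -> Disk:
    """Choose the disk of a mirror pair with the shorter queue."""
    return min(pair, key=lambda d: len(d.queue))

a, b = Disk("a"), Disk("b")
for lba in range(10):
    pick_mirror((a, b)).queue.append(lba)

# The reads split evenly across the pair, which is roughly how a good
# RAID10 algorithm can double random read throughput over a single disk.
print(len(a.queue), len(b.queue))  # 5 5
```

Real controllers can also factor in head position or rotational latency; the quality of exactly these heuristics is what separates the LSI and HighPoint from the rest here.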
When it comes to RAID5 and RAID6, all the controllers are close to each other again. There is actually nothing to optimize here: just take data from the disks one by one, checksum calculations not being a serious issue anymore.
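The reason reads are cheap here is that a random read in RAID5 touches exactly one data disk; parity is only consulted on failure. A simplified sketch of the block-to-disk mapping (our own illustration of a basic rotating-parity layout, not the layout any of these controllers necessarily uses):

```python
# Map a logical block to (disk, stripe) in a simple RAID5 layout with
# rotating parity: each stripe holds n-1 data blocks plus one parity
# block, and a read needs only the one disk holding the data.

def raid5_read_target(block: int, n_disks: int):
    """Return (disk, stripe) holding a logical block."""
    stripe, offset = divmod(block, n_disks - 1)
    parity_disk = (n_disks - 1) - (stripe % n_disks)
    # Data blocks skip over the slot occupied by parity in this stripe.
    disk = offset if offset < parity_disk else offset + 1
    return disk, stripe

# Four disks: stripe 0 keeps parity on disk 3, stripe 1 on disk 2, etc.
print(raid5_read_target(0, 4))  # (0, 0)
print(raid5_read_target(3, 4))  # (0, 1)
```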
The controllers differ even with RAID0 when processing large data blocks as their performance is influenced by sequential read speed and look-ahead reading. The LSI is in the lead, followed by the 3ware. It is harder to see the losers. The HighPoint and Areca are poor with blocks the size of a few megabytes but speed up after that, outperforming the Promise and Adaptec.
The Areca acts up, reading faster with 512MB of cache memory than with 2GB. The only explanation we can offer is that the larger cache has higher access latencies; we just cannot think of any other reason.
When reading rather large data blocks from RAID10, choosing the luckier disk of each mirror pair is no longer a winning strategy, and the leaders change as a result: the 3ware is first, followed by the LSI.
The Areca is downright disappointing: both versions of this controller suffer an inexplicable performance hit. There must be flaws in its firmware, and this is where they show.
The standings with RAID5 are the same as with RAID0, which is quite all right.
The overall picture remains the same with RAID6 as well. The Adaptec is the only exception, not accelerating quickly enough on very large data blocks.