Sequential Read & Write Patterns
In this test IOMeter sends a stream of read and write requests at a queue depth of 4. The size of the requested data block changes each minute, so we can see how the array's sequential read/write speed depends on the data block size. This test indicates the highest speed the array can achieve.
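The idea behind the pattern can be sketched in a few lines (a simplified illustration, not IOMeter itself): read the same data sequentially with different block sizes and compare the resulting throughput. The file, sizes, and function names below are assumptions for the sketch only.

```python
import os
import tempfile
import time

def sequential_read_speed(path, block_size):
    """Read the whole file sequentially in block_size chunks; return MB/s."""
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / max(elapsed, 1e-9)

# Create a small test file and sweep the block size, as the test pattern does.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(4 * 1024 * 1024))  # 4 MB of data
    path = tmp.name

for size in (512, 4096, 65536, 1048576):  # 0.5 KB .. 1 MB blocks
    speed = sequential_read_speed(path, size)
    print(f"block {size:>7} B: {speed:8.1f} MB/s")

os.remove(path)
```

A real benchmark would bypass the OS cache and sustain a queue depth of 4 with asynchronous I/O; this sketch only shows the block-size sweep itself.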
Both controllers deliver peculiar performance with the mirror arrays. The HighPoint's performance depends greatly on the size of the data chunk: ideally it would be as fast as its opponent, but in practice it only manages this with data chunks of certain sizes. We are sure of one thing only: the HighPoint is obviously better with very small data chunks. It seems able to merge several small-chunk requests into one large request. The Promise controller has a different problem: it is surprisingly slow with a single drive. On the other hand, you will hardly buy this controller to attach only one HDD to it.
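The merging behavior we suspect in the HighPoint's driver can be sketched as request coalescing: adjacent small sequential requests are combined into one large request before being issued to the disk. This is a hypothetical illustration, not the driver's actual code; the function name and limit are assumptions.

```python
def coalesce(requests, max_size=65536):
    """Merge (offset, length) requests that are back-to-back on disk.

    Hypothetical sketch of driver-side coalescing: requests whose end
    touches the next request's start are fused, up to max_size bytes.
    """
    merged = []
    for offset, length in sorted(requests):
        if merged and merged[-1][0] + merged[-1][1] == offset \
                and merged[-1][1] + length <= max_size:
            prev_off, prev_len = merged[-1]
            merged[-1] = (prev_off, prev_len + length)
        else:
            merged.append((offset, length))
    return merged

# Eight sequential 512-byte requests collapse into a single 4 KB request.
small = [(i * 512, 512) for i in range(8)]
print(coalesce(small))  # -> [(0, 4096)]
```

Issuing one 4 KB request instead of eight 512-byte ones cuts per-request overhead, which would explain the HighPoint's advantage on very small data chunks.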
It is simpler with RAID0: both controllers show good scalability on large data blocks and deliver similar speeds. Note that the HighPoint is again far faster at processing very small data chunks.
The speed of the HighPoint controller's RAID5 again depends greatly on the data block size, so it is only faster than its opponent on very small data chunks (a trait of its driver that will probably persist through the entire test session) and is far slower than the Promise controller elsewhere. Interestingly, the degraded array is almost as fast as the normal four-disk one on the Promise controller, whereas the HighPoint's degraded array is far slower than the normal ones.
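The penalty a degraded RAID5 pays comes from reconstruction: a block from the lost drive must be rebuilt by XOR-ing the corresponding blocks of every surviving drive, which turns one read into several reads plus computation. A simplified illustration (not either controller's actual firmware):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# A 4-disk RAID5 stripe: three data blocks plus their XOR parity.
data = [b"\x11" * 4, b"\x22" * 4, b"\x33" * 4]
parity = xor_blocks(data)

# A normal read of disk 1's block is direct; in degraded mode (disk 1
# lost) the same block must be rebuilt from all surviving disks.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # -> True
```

How well a controller hides this extra work is what separates the Promise (degraded array almost as fast as normal) from the HighPoint here.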
Almost every RAID6 array shows a huge dependence of read speed on the data chunk size. The two-disk RAID6 on the Promise controller is the only exception.