Sequential Read & Write Patterns
In this test IOMeter sends a stream of read or write requests with a request queue depth of 4. The size of the requested data block is changed each minute, so we can see how the array’s sequential read/write speed depends on the data block size. This test is indicative of the highest speed the array can achieve.
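IOMeter itself is a GUI benchmark, but the access pattern it generates here can be mimicked in a few lines. The sketch below (a simplification: it issues requests one at a time rather than keeping a queue depth of 4, and the file path and sweep of block sizes are our own illustrative choices) reads a file sequentially at several block sizes and reports the resulting throughput:

```python
import os
import tempfile
import time

def sequential_read_speed(path, block_size):
    """Read the file at `path` front to back in `block_size` chunks
    and return the observed throughput in bytes per second."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed

# Create a small scratch file and sweep the request size, the way
# the test steps through block sizes each minute.
with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(os.urandom(4 * 1024 * 1024))
    path = tf.name

for size in (512, 4096, 65536, 1 << 20):
    mbps = sequential_read_speed(path, size) / 1e6
    print(f"{size:>8} B blocks: {mbps:8.1f} MB/s")

os.unlink(path)
```

On real hardware the small-block figures would be dominated by per-request overhead, which is exactly the dependence the graphs in the review chart.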
Well, the arrays boast nearly ideal scalability in terms of sequential reading. The eight-disk RAID0 reaches a speed of over 900MBps as a result! The graphs of the eight-disk RAID10 and four-disk RAID0 nearly coincide, which is another sign that linear operations pose no problem for the controller.
Take note that the multi-disk arrays achieve their top speed on very large data blocks only. For example, with standard 64KB data blocks the eight-disk RAID0 is almost no different from the four-disk one – it simply doesn’t get to work at full speed. Of course, you can’t expect an array to deliver its maximum speed with small data blocks, but you should take this into account.
By the way, the controller is somewhat disappointing with very small data chunks, where it is slower than a single drive on the LSI controller.
Everything we’ve said in the previous paragraph is true for this group of arrays as well. That’s good because Promise’s controller was not so ideal in our last test session.
Things are somewhat worse at writing. Scalability is still good, yet the eight-disk RAID0 does not reach its top speed in this test: it would only do so on even larger data chunks. It is also somewhat odd that the eight-disk RAID10 is considerably slower than the four-disk RAID0.
The results of the four-disk RAID10 are poor. We saw this anomaly above, but now we can be sure it is due to low sequential speeds. We wonder what in the firmware could cause performance hits on specific array types only.
Here is the explanation of the second anomaly we saw above: it is the RAID6 arrays that have problems now. By the way, the eight-disk arrays seem to need even larger data chunks to reach their top speed.