Sequential Read & Write Patterns
In this test IOMeter sends a stream of read or write requests with a request queue depth of 4. The size of the requested data block is changed each minute, so that we can see how the array’s sequential read/write speed depends on the data block size. This test indicates the highest speed a disk array can achieve.
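For readers without IOMeter, a roughly equivalent access pattern can be produced on Linux with fio; the sketch below steps through a few block sizes at queue depth 4, as the test above does. The device path `/dev/sdX` and the runtime are placeholders, not taken from the review.

```shell
#!/bin/sh
# Approximate the IOMeter sequential-read pattern with fio.
# /dev/sdX is a placeholder for the array's block device (destructive if
# you switch --rw to write; run against a scratch device only).
for bs in 512 4k 64k 128k 1m; do
    fio --name=seqread-$bs --filename=/dev/sdX --direct=1 \
        --rw=read --bs=$bs --iodepth=4 --ioengine=libaio \
        --runtime=60 --time_based --output-format=terse
done
```

Replacing `--rw=read` with `--rw=write` gives the corresponding sequential-write pattern.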
The controller can obviously read from both disks in each mirror, as indicated by the characteristic speed increase on large data blocks after the flat stretches of the graphs. The eight-disk RAID0 is somewhat disappointing, though: while the Promise EX8650 delivered over 900MBps, the 3ware 9690SA cannot exceed 700MBps, and only on 128KB data blocks at that. On the other hand, the four-disk RAID0 and the flat stretch of the eight-disk RAID10’s graph look fine: 450MBps is close to the single disk’s performance multiplied by four. So this must be a peculiarity of the controller’s architecture.
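The scaling argument above can be made explicit with a little arithmetic. The per-disk figure below is inferred from the four-disk RAID0 result quoted in the text (450MBps over four drives); the 700MBps peak is the eight-disk figure reported for the 3ware 9690SA.

```python
# Sanity check of ideal RAID0 scaling against the figures in the text.
single_disk = 450 / 4           # ~112.5 MBps per drive, inferred from the 4-disk array
ideal_eight = 8 * single_disk   # ideal eight-disk RAID0: ~900 MBps
observed_eight = 700            # peak reported for the 3ware 9690SA

efficiency = observed_eight / ideal_eight
print(f"ideal: {ideal_eight:.0f} MBps, efficiency: {efficiency:.0%}")
# → ideal: 900 MBps, efficiency: 78%
```

The ~900MBps ideal matches what the Promise EX8650 actually achieved, which is why the 3ware result points at the controller rather than the disks.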
The four-disk RAID5 and RAID6 arrays have a performance peak on 128KB data blocks, too, and slow down afterwards. Interestingly, every eight-disk array, save for the RAID10, has the same speed at the peak and with the largest data blocks: 650MBps and 550MBps, respectively.
The degraded arrays show that read speed depends on the size of the requested data block. This dependence must stem from the need to restore data from checksums. Curiously enough, the same holds true for the degraded RAID6 missing two disks: the controller’s architecture seems able to restore data from two checksums simultaneously with almost zero performance loss.
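The reconstruction the controller performs is, in the single-failure case, plain XOR: the parity block is the XOR of the stripe’s data blocks, so any one missing block is the XOR of the survivors and the parity. A minimal sketch (RAID6 additionally keeps a second, differently computed syndrome, which is what lets it survive two failures; that part is omitted here):

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# A four-disk RAID5 stripe: three data blocks plus their XOR parity.
data = [b"\x11" * 4, b"\x22" * 4, b"\x33" * 4]
parity = xor_blocks(data)

# Disk holding data[1] fails: recover its block from the survivors + parity.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

Every degraded read that lands on the failed disk costs reads from all surviving disks plus this computation, which is why the dependence on block size appears.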
The eight-disk RAID0 is not fast enough at sequential writing either. However, the four-disk RAID0 is even worse: it is only half as fast as the single disk. The RAID10 arrays behave curiously. First, their performance fluctuates somewhat on large data blocks. Second, the degraded array is a little faster than the intact one. We can suppose the degraded array simply does not have to wait for a write confirmation from the failed disk, but that can hardly have such a big effect.
The eight-disk arrays deliver the same, quite high sequential write speed. The controller has enough performance for every disk, and the RAID6 does not differ much from the RAID5, just as it should. Interestingly, the degraded arrays are again somewhat faster than the intact ones. This does not apply to the degraded RAID6 missing two disks: its write speed is terribly low.