And here come the first problems. A RAID 0 array of two hard disk drives performs almost twice as fast as a single HDD, which is quite acceptable. However, the three-disk RAID 0 array performs almost identically to the two-disk one, and the four-disk RAID 0 array even falls slightly behind the others when writing data blocks larger than 16KB. As you may remember, we saw something similar during the Intel SRCS14L controller test session (see our Intel SRCS14L Four-Channel SerialATA RAID Controller Review). But while insufficient cache was to blame back then, in the case of today's FastTRAK S150 TX4 controller (which uses a software cache) the driver is the more likely culprit.
The RAID 1 array always lags behind a single hard disk drive, and the RAID 01 array starts falling behind all the other arrays once the data blocks reach 8KB. This is actually not surprising at all, since mirroring forces it to do twice as much work: every block has to be written to both halves of the mirror.
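The "twice as much work" argument can be put into a simple idealized model (our own back-of-the-envelope estimate, not measured data; it ignores controller and driver overhead): striping spreads one copy of the data over all disks, while mirroring writes every block twice, so the best-case sequential write speedup over a single drive is the disk count divided by the number of copies.

```python
# Idealized model (an assumption for illustration, ignoring all
# controller/driver overhead): best-case sequential write speedup
# of an array relative to a single drive.

def ideal_write_speedup(disks, mirrored):
    """Data is striped over all disks; mirroring writes each block twice."""
    data_copies = 2 if mirrored else 1
    return disks / data_copies

print(ideal_write_speedup(2, False))  # two-disk RAID 0  -> 2.0
print(ideal_write_speedup(4, False))  # four-disk RAID 0 -> 4.0 (never reached here)
print(ideal_write_speedup(2, True))   # RAID 1           -> 1.0
print(ideal_write_speedup(4, True))   # RAID 01          -> 2.0
```

The model shows why RAID 1 can at best match a single drive on writes, and why the measured results (three- and four-disk RAID 0 stuck at two-disk speed) point to a bottleneck outside the disks themselves.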
Now let's check how enabling WB caching affects the performance of our system:
Just as in the RandomRead pattern, enabled WB caching speeds up the processing of small data blocks quite significantly.
The write speed reaches its maximum with very small data blocks and doesn't change as they grow bigger. As we remember, this happens because the controller driver uses CPU resources to coalesce requests for sequentially located data. The driver therefore sends the disk requests of considerable size, which is convenient for both the disk drive and the interface.
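The coalescing step described above can be sketched as follows. This is an illustrative model only, not the Promise driver's actual code; the function name and the 128KB request cap are our assumptions.

```python
# Illustrative sketch (not actual driver code): a write-back cache
# merging small sequential writes into one large request before
# sending it to the disk.

def coalesce_writes(requests, max_request_size=128 * 1024):
    """Merge adjacent (offset, size) write requests into larger ones.

    Names and the 128KB cap are hypothetical, chosen for illustration.
    """
    merged = []
    for offset, size in sorted(requests):
        if merged:
            last_off, last_size = merged[-1]
            # Merge when the new request starts exactly where the previous
            # one ends and the combined request still fits under the cap.
            if last_off + last_size == offset and last_size + size <= max_request_size:
                merged[-1] = (last_off, last_size + size)
                continue
        merged.append((offset, size))
    return merged

# Eight sequential 8KB writes collapse into a single 64KB request:
small_writes = [(i * 8192, 8192) for i in range(8)]
print(coalesce_writes(small_writes))  # [(0, 65536)]
```

This is why the block size the benchmark uses stops mattering: by the time requests reach the disk, they are already large, at the cost of extra CPU work in the driver.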
The curious thing here is that the four-drive RAID 0 array is always slower than any of the other RAID 0 arrays.
The RAID 01 array is the slowest with data blocks of any size, while the RAID 1 array runs as fast as a single drive in all cases except 1KB and 2KB data blocks.
Now let's have a look at the patterns that are closer to real-life situations.