Pages: [ 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 ]

Performance in Intel IOMeter Sequential Read & Write Patterns

Now let’s see how well the controller copes with sequential reads and writes. Of course, it is also very interesting to find out whether the caching algorithms (WriteBack/WriteThrough) affect performance in this case, too.
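To make the WriteBack/WriteThrough distinction concrete, here is a minimal sketch (a toy model, not the controller's actual firmware; all names are hypothetical) of the two policies: WriteThrough commits every write to the disk before acknowledging it, while WriteBack acknowledges from cache and commits the dirty blocks later.

```python
class CachedDisk:
    """Toy model of a controller cache sitting in front of a disk."""

    def __init__(self, write_back: bool):
        self.write_back = write_back
        self.cache = {}    # block number -> data held in the cache
        self.disk = {}     # block number -> data committed to the platters
        self.dirty = set() # blocks cached but not yet on disk (WB only)

    def write(self, block: int, data: bytes) -> None:
        self.cache[block] = data
        if self.write_back:
            # WriteBack: acknowledge immediately, commit to disk later.
            self.dirty.add(block)
        else:
            # WriteThrough: the write completes only once it is on disk.
            self.disk[block] = data

    def flush(self) -> None:
        # WriteBack controllers write dirty blocks out in the background;
        # here we commit them all at once for simplicity.
        for block in self.dirty:
            self.disk[block] = self.cache[block]
        self.dirty.clear()
```

The model shows why WB can look much faster in benchmarks: the host sees the write as finished while the data is still only in cache.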

Using the Intel IOMeter program, we sent a stream of sequential reads (or writes) with a queue depth of 4, changing the data block size once per minute. By the end of the test session this yields the dependence of the linear read and write speed on the data block size. The obtained results are summed up in the tables below:
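The reason the block size matters at all can be illustrated with a simple model (an assumption for illustration, not measured data from this controller): every request pays a roughly fixed per-command overhead, so throughput only approaches the sustained media rate once the blocks are large enough to amortize it.

```python
# Assumed figures, chosen only to illustrate the shape of the curve.
OVERHEAD_S = 0.0001   # hypothetical per-request overhead, seconds
BANDWIDTH = 60e6      # hypothetical sustained media rate, bytes/s

def throughput(block_size: int) -> float:
    """Modelled speed (MB/s) for a sequential stream of block_size requests."""
    time_per_request = OVERHEAD_S + block_size / BANDWIDTH
    return block_size / time_per_request / 1e6

for size_kb in (0.5, 4, 64, 1024):
    size = int(size_kb * 1024)
    print(f"{size_kb:7.1f} KB -> {throughput(size):6.1f} MB/s")
```

With these numbers, small blocks are dominated by the per-request overhead and large blocks come close to the media rate, which is exactly the rising-then-flattening shape the graphs in this test show.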

Unlike with the Promise FastTRAK TX4000 controller, we see a very big difference between the WriteBack and WriteThrough graphs, so I suggest considering them separately. For a clearer picture, let’s split the tested RAID arrays into two smaller groups.

As we remember, some manufacturers interleave read requests between the two drives of a mirrored pair, even during linear reading. This way, a RAID 1 array behaves much like a RAID 0 array, and the read speed can theoretically double! However, our tests show that the Promise FastTRAK S150 TX4 controller uses this algorithm only for random reads (just like all the other Promise controllers we have reviewed).
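The interleaving idea can be sketched in a few lines. This is an assumed, generic scheme (not Promise's actual firmware logic, whose details are not public): sequential follow-on reads stay on the disk already serving that stream, while random reads alternate between the mirrors to split the load.

```python
class Raid1Reader:
    """Toy read scheduler for a two-disk mirror (disks 0 and 1)."""

    def __init__(self):
        self.next_seq_lba = {0: None, 1: None}  # expected next LBA per disk
        self.rr = 0                             # round-robin pointer

    def pick_disk(self, lba: int, length: int) -> int:
        # Sequential follow-on: keep a stream on the disk already reading it,
        # so neither drive has to seek over every other chunk.
        for disk in (0, 1):
            if self.next_seq_lba[disk] == lba:
                self.next_seq_lba[disk] = lba + length
                return disk
        # Otherwise treat the request as random and alternate mirrors,
        # which is where the near-doubled random-read speed comes from.
        disk = self.rr
        self.rr ^= 1
        self.next_seq_lba[disk] = lba + length
        return disk
```

Under this scheme a purely sequential stream uses only one drive, which matches what we observe: the S150 TX4 gains from the mirror only in the RandomRead pattern.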

The read speed of the RAID 1 array is almost always lower than that of a single hard disk drive, which is quite disappointing, I should say.

The RAID 01 array is always behind the two-drive RAID 0 array and even falls behind JBOD with 2KB-8KB data blocks. Besides, the graphs for RAID 01 and RAID 1 show a few performance drops at 32KB and 256KB data blocks, respectively.

