Multithreaded Read & Write Patterns

The multithreaded tests simulate one to four clients accessing the virtual disk simultaneously, with the number of outstanding requests per client varying from 1 to 8. The clients’ address zones do not overlap. We’ll discuss the diagrams for a request queue of 1 as the most illustrative ones: at a queue depth of 2 or more requests, the speed depends little on the number of applications.
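To make the load pattern concrete, here is a minimal sketch of this kind of test in Python, assuming a hypothetical raw device at /dev/sdx and synchronous 64 KB reads (a synchronous thread corresponds to a request queue of 1 per client). It only illustrates the access pattern; the review’s actual benchmark configuration is not reproduced here.

import os
import threading

DEVICE = "/dev/sdx"          # hypothetical device under test
BLOCK = 64 * 1024            # request size
ZONE = 1024 ** 3             # a 1 GB address zone per client, no overlap
REQUESTS = 4096              # requests issued by each client

def client(zone_index):
    """One 'application': sequential reads confined to its own zone."""
    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        base = zone_index * ZONE
        for i in range(REQUESTS):
            os.pread(fd, BLOCK, base + (i * BLOCK) % ZONE)
    finally:
        os.close(fd)

# One to four concurrent clients, as in the test; deeper per-client queues
# would require asynchronous I/O instead of plain threads.
threads = [threading.Thread(target=client, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()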

The one-thread read results are really shocking: all the arrays, excepting the degraded checksum-based ones, deliver the same speed of about 160 MBps. We saw the same thing with the Promise EX8650: that controller would run into a certain ceiling (210 MBps in its case) and deliver its maximum speeds only at request queues longer than 1. The tables above show that the 3ware controller only achieves its maximum speeds at a queue of 3 or 4 requests or longer (the Promise controller, by the way, often needed an even longer queue). Thus, you should not expect this controller to deliver much more than 150 MBps when reading even a very large file.

But let’s get back to the multithreaded load. When there are two threads to be processed, the RAID0, RAID5 and RAID6 arrays slow down somewhat, but the controller generally copes with multithreading well: it is only the four-disk RAID5 that suffers a performance hit of over 25%. The controller parallelizes the load effectively on the RAID10 arrays by reading different threads from different disks of the mirrors, the resulting speed being almost twice the one-thread speed. Surprisingly enough, the degraded RAID5 and RAID6 arrays also speed up when processing two threads.
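Here is a toy sketch of the scheduling that would produce this picture, assuming the controller simply pins each sequential stream to one half of a mirror pair (the policy is our guess; the firmware’s actual logic is not public):

def pick_mirror_half(stream_id, mirror_halves=2):
    """Pin each sequential read stream to one disk of the mirror pair."""
    return stream_id % mirror_halves

# Two streams land on different halves, so both disks read sequentially at
# full speed and the array nearly doubles its one-thread result:
assert pick_mirror_half(0) != pick_mirror_half(1)
# A third stream necessarily shares a disk with one of the first two, and
# that disk starts seeking between streams instead of reading sequentially.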

When there are three data threads to be read, the slow arrays improve their standings while the RAID10 arrays get somewhat worse. Judging by the results, the RAID10 arrays seem to forget how to read from both disks in a mirror: the controller must be confused as to which disk should get two threads and which should get the remaining third one. As a result, the number of disks becomes an important factor again, the four-disk arrays being the slowest. The eight-disk RAID6 breaks this rule somewhat: it should be faster than the four-disk RAID0, yet it is not.

When the number of threads is increased further to four, all the four-disk arrays except the RAID10 accelerate. Unfortunately, the RAID10 provides no performance gain: the controller must have been unable to identify the load and send two threads to each disk of a mirror.

The speeds are higher when the arrays are writing one thread, yet the maximum speeds are only achieved at longer request queue depths. The degraded RAID6 with two failed disks is expectedly poor, but why is the four-disk RAID0 so slow? We don’t know. When a second write thread is added, all the arrays save for the two problematic ones mentioned above speed up, and the eight-disk arrays speed up greatly. It is only the four-disk RAID5 and the RAID10 arrays that show a small performance growth, but that is no wonder with RAID10: each mirror works as a single disk when performing write operations, as the sketch below illustrates.
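A minimal sketch of why that is so, assuming a conventional RAID10 layout of striping across mirror pairs (the function name is illustrative): every write has to land on both disks of a pair, so each pair offers the write throughput of a single disk.

def mirror_write_targets(block_no, mirror_pairs):
    """A write to logical block N hits both disks of one mirror pair."""
    pair = block_no % mirror_pairs       # RAID0 striping across the pairs...
    return (2 * pair, 2 * pair + 1)      # ...duplicated inside the chosen pair

# Four-disk RAID10 = two mirror pairs: any write always occupies two disks.
print(mirror_write_targets(0, 2))   # (0, 1)
print(mirror_write_targets(1, 2))   # (2, 3)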

Take note of the good behavior of the degraded arrays: as required, the controller just does not notice that a disk has failed. And since their speeds are even higher than those of the intact arrays, the controller apparently skips the checksum calculation whenever the checksum would go to the failed disk.
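Here is a hedged sketch of that optimization, assuming a full-stripe RAID5 write with rotating parity (the functions and layout are illustrative, not the 3ware firmware’s actual logic): the checksum is the XOR of the stripe’s data blocks, and when the parity block maps to the failed disk, the XOR can be skipped entirely.

from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks: the RAID5 checksum."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def write_stripe(stripe_no, data_blocks, disks, failed_disk=None):
    """Physical writes for one full stripe of `disks - 1` data blocks."""
    parity_disk = (disks - 1) - (stripe_no % disks)  # parity rotates per stripe
    writes = {}
    data = iter(data_blocks)
    for d in range(disks):
        if d == parity_disk:
            continue  # handled below
        block = next(data)
        if d != failed_disk:
            writes[d] = block
        # a data block aimed at the failed disk is not written physically,
        # but it is still covered by parity and so stays recoverable
    if parity_disk != failed_disk:
        writes[parity_disk] = xor_blocks(data_blocks)
    # else: the checksum would go to the dead disk, so the XOR is skipped,
    # which is why a degraded array can write faster than a healthy one
    return writes

# Example: on stripe 0 of an 8-disk array the parity maps to disk 7, so with
# disk 7 failed the controller does 7 plain writes and no XOR at all.
blocks = [bytes([i]) * 4096 for i in range(1, 8)]
healthy = write_stripe(0, blocks, disks=8)                  # 7 writes + parity
degraded = write_stripe(0, blocks, disks=8, failed_disk=7)  # 7 writes, no XOR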

The speeds are somewhat lower when there are more write threads to be processed.

 