
A heavier workload changes the performance picture. The mirror (RAID 1 array) is now always faster than a single hard disk drive, even in RandomWrite (100% writes). Since we obtained the same result throughout all the experiments, we do not consider this performance difference a measurement error.

The RAID 01 array is faster than the two-drive RAID 0 everywhere except in RandomWrite. At a low share of write requests (less than 40%), it even manages to outperform the four-drive RAID 0 array. In theory, RAID 0 is the fastest array type, so RAID 01 should be slower at reads. However, there is a specific rule by which requests are distributed across a RAID 0 array: if a request falls into the address space of a given hard disk drive, that drive is the one that processes it. With random requests, it can therefore happen that two or more consecutive requests are sent to one and the same drive. In that case this drive is loaded with work while the other drives idle, and the array's performance is limited by the performance of a single HDD. In an array built of mirrored pairs, reads (and possibly writes) within a pair are strictly shared between the two drives, i.e. the drives process these requests in turns regardless of the data block size (in practice the drives can alternate in a somewhat "smarter" way). The drives are loaded more evenly this way, which raises the average performance of the array.
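The difference between the two routing rules can be illustrated with a minimal sketch. The snippet below is not the actual controller algorithm; it simply models the two policies described above under assumed parameters (a 64 KB stripe, a ~40 GB address space, and strict read alternation for the mirror) and counts how often two consecutive random reads land on the same drive, i.e. how often one drive queues work while the other idles.

import random

STRIPE_SIZE = 64 * 1024          # assumed 64 KB stripe for the striped array
DRIVES = 2                       # two-drive RAID 0 vs. one mirrored pair
VOLUME_BYTES = 40 * 10**9        # assumed ~40 GB address space
REQUESTS = 100_000

def raid0_target(address: int) -> int:
    """RAID 0: the target drive is fixed by the address of the request."""
    return (address // STRIPE_SIZE) % DRIVES

class MirrorPair:
    """RAID 1: reads are handed to the two drives strictly in turns."""
    def __init__(self) -> None:
        self.turn = 0
    def read_target(self) -> int:
        drive = self.turn
        self.turn ^= 1
        return drive

random.seed(0)
mirror = MirrorPair()
raid0_repeats = mirror_repeats = 0
prev_raid0 = prev_mirror = None
for _ in range(REQUESTS):
    address = random.randrange(VOLUME_BYTES)
    r0, m = raid0_target(address), mirror.read_target()
    raid0_repeats += (r0 == prev_raid0)
    mirror_repeats += (m == prev_mirror)
    prev_raid0, prev_mirror = r0, m

print(f"RAID 0: {raid0_repeats / REQUESTS:.1%} of reads hit the same drive as the previous one")
print(f"Mirror: {mirror_repeats / REQUESTS:.1%} of reads hit the same drive as the previous one")

With random addresses, roughly half of consecutive requests in the striped array land on the drive that served the previous request, while the alternating mirror never sends two consecutive reads to the same drive, which is exactly why the mirrored pairs keep both spindles busier on read-heavy random workloads.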

Having taken a look at the graphs for the 256-request workload, I was frankly pleased: here they are, the textbook performance curves! The graphs for the RAID 0 array and a single hard disk drive are nearly parallel. The mirrored RAID 1 array is twice as fast as a single drive in RandomRead and then slows down little by little until the graphs merge at RandomWrite. The RAID 01 array and the two-drive RAID 0 array behave in just the same way.
