 



Performance in Intel IOMeter Sequential Read and Write Patterns

IOMeter sends a stream of read/write requests to the array at a request queue depth of 4. The data block size changes every minute, so we can see how the linear read/write speed depends on the block size.
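The test pattern described above can be sketched roughly as follows. This is not IOMeter itself, just an illustration; the power-of-two block-size ladder and the 512-byte starting size are assumptions about a typical IOMeter sequential pattern.

```python
# Rough sketch of the sequential test pattern (not IOMeter itself).
# The block-size ladder below is an assumed power-of-two sequence.
BLOCK_SIZES_KB = [0.5, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
QUEUE_DEPTH = 4  # up to four requests are kept outstanding at once

def sequential_requests(block_kb, count):
    """Yield (offset_kb, size_kb) pairs for a back-to-back sequential stream."""
    offset = 0
    for _ in range(count):
        yield (offset, block_kb)
        offset += block_kb

# The first three 16 KB requests start at offsets 0, 16 and 32 KB.
```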

The dependence of the controller's read speed on the data block size is given in the table below:

Now let's build graphs for two groups of RAID arrays, showing how their performance depends on the data block size:

The advantage of a RAID0 array made of many hard drives only shows up when the data blocks are large, i.e. when the controller can split a large data block into several smaller ones and use the hard disk drives in parallel. Here the RAID0 arrays proved quite efficient: the arrays of two, three and four hard drives reach their maximum read speed at 16, 32 and 64KB data blocks, respectively. Moreover, the scalability of the read speed with the number of HDDs in the array is almost ideal.
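Why larger blocks engage more drives can be shown with a toy striping model. This is a sketch, not the controller's actual algorithm, and the 16 KB stripe unit is an assumption for illustration; the real stripe size of this controller is not stated here.

```python
# Toy model of RAID0 striping: one request is cut into stripe-sized chunks
# that land on consecutive drives, so larger requests keep more drives busy.
STRIPE_KB = 16  # assumed stripe unit, for illustration only

def drives_engaged(request_kb, num_drives):
    """How many distinct drives a single sequential request touches."""
    chunks = -(-request_kb // STRIPE_KB)  # ceiling division: stripe units needed
    return min(chunks, num_drives)

for drives in (2, 3, 4):
    for block_kb in (8, 16, 32, 64, 128):
        busy = drives_engaged(block_kb, drives)
        print(f"{drives} drives, {block_kb:>3} KB block -> {busy} drive(s) busy")
```

In this model a request smaller than the stripe unit hits only one drive, and the array's full parallelism is reached only once the block spans as many stripe units as there are drives, which is why bigger arrays need bigger blocks to peak.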

This group of arrays looks somewhat worse. All the arrays actually perform quite well until the data blocks reach a certain size (different for each array). The graphs for RAID1, RAID10, RAID5 of three drives and RAID5 of four drives coincide with the graphs for a single hard disk drive and for RAID0 arrays of two and three HDDs, respectively. However, when the data block size reaches 64, 128 or 256KB (again, a different size for each type of array), the array speeds drop quite sharply.
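The correspondence between these arrays and the simpler ones they track can be summed up in a small sketch of the number of data-bearing drives available for sequential reads. This is a simplified model under common assumptions (mirrors read like one drive here, RAID5 dedicates one drive's worth of capacity to parity), not a description of this controller's firmware.

```python
# Sketch: effective number of data-bearing drives for a sequential read,
# which is why these arrays track a single drive or a smaller RAID0 array.
def effective_data_drives(level, num_drives):
    """Simplified model of sequential-read parallelism per RAID level."""
    if level == "RAID0":
        return num_drives          # pure striping: every drive carries data
    if level == "RAID1":
        return 1                   # mirror pair reads like one drive here
    if level == "RAID10":
        return num_drives // 2     # stripe across mirror pairs
    if level == "RAID5":
        return num_drives - 1      # one drive's worth of space holds parity
    raise ValueError(f"unknown RAID level: {level}")
```

Under this model a three-drive RAID5 behaves like a two-drive RAID0 and a four-drive RAID5 like a three-drive RAID0, matching the graph overlap noted above.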

 
