Random Read & Write Patterns

Now we’ll see how the arrays’ performance in random read and write modes depends on the size of the processed data blocks.

We will discuss the arrays’ results at processing random-address data in two variants. For small data chunks we will draw graphs showing the number of operations per second as a function of the data chunk size. For large chunks we will compare performance in terms of data-transfer rate in megabytes per second. This approach lets us evaluate the disk subsystem in two typical scenarios: working with small data chunks is typical of databases, where the number of operations per second matters more than sheer speed, whereas working with large data blocks is close to working with small files, so the traditional measurement of speed in megabytes per second is more relevant for such load.
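
To make the two metrics comparable, here is a minimal sketch (the block sizes and operation rates are illustrative figures, not measured results) of the relation between operations per second and megabytes per second: throughput is simply the operation rate multiplied by the block size.

```python
# Relation between the two ways random-access results are presented:
# operations per second for small blocks, megabytes per second for large ones.
# All numbers below are illustrative, not measurements from the tested arrays.

def iops_to_mbps(iops: float, block_size_bytes: int) -> float:
    """Convert operations per second to throughput in MB/s."""
    return iops * block_size_bytes / 1_000_000

def mbps_to_iops(mbps: float, block_size_bytes: int) -> float:
    """Convert throughput in MB/s back to operations per second."""
    return mbps * 1_000_000 / block_size_bytes

# A drive doing 200 random 8KB operations per second moves only ~1.6 MB/s,
# so IOPS is the telling metric for database-like loads...
print(iops_to_mbps(200, 8 * 1024))        # ~1.64 MB/s
# ...while at 8MB blocks even 20 operations per second is already ~168 MB/s,
# which is why MB/s is more relevant for file-like loads.
print(iops_to_mbps(20, 8 * 1024 * 1024))  # ~167.8 MB/s
```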

We will start out with reading.

When the arrays are reading small random-address data blocks, the picture is typical overall except for one inexplicable detail: the degraded RAID10 is ahead of all the other arrays on 8KB data chunks.

There are no surprises among the RAID5 and RAID6 arrays. They are all slower than the single HDD; the large 8-disk arrays are the fastest while the degraded arrays are the slowest here.

The RAID10 arrays betray some problems on large data blocks. For example, they are slower than the single HDD on 8MB blocks and obviously slower than expected on larger blocks, too. This problem can be seen with every RAID10 array irrespective of the number of disks, which suggests the controller’s firmware is not perfect.

When reading large data blocks, the healthy RAID5 and RAID6 arrays show good performance whereas the degraded ones are all slow. The RAID5 is especially bad after losing a disk.

Random writing goes next.

Everything is all right when the RAID0 and RAID10 arrays are writing small data blocks: they show ideal scalability of performance with the number of disks, and the degraded array is no different from the healthy one.
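
This scalability can be illustrated with a rough model. The sketch below assumes writes are spread evenly across the disks, ignores the controller’s cache, and uses an arbitrary per-disk IOPS figure; it only shows why doubling the disk count roughly doubles the result for RAID0 and RAID10.

```python
# Rough scaling model for small random writes on striped arrays. The per-disk
# IOPS value is a placeholder, not a measured figure for the tested drives.

def raid0_write_iops(disks: int, per_disk_iops: float) -> float:
    # Every disk accepts writes independently.
    return disks * per_disk_iops

def raid10_write_iops(disks: int, per_disk_iops: float) -> float:
    # Each write lands on both halves of a mirror pair, so only half
    # of the spindles contribute unique write capacity.
    return disks / 2 * per_disk_iops

per_disk = 150  # placeholder single-HDD random-write IOPS
for n in (4, 8):
    print(n, "disks:", raid0_write_iops(n, per_disk), raid10_write_iops(n, per_disk))
# Going from 4 to 8 disks doubles both estimates, which matches the
# near-ideal scalability visible in the diagram.
```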

The number of disks still matters when the controller is writing to RAID5 and RAID6 arrays, but the checksum calculations affect the performance, too. Two checksums must be calculated for RAID6, which is why there is only a small difference between the 8-disk and 4-disk RAID6s. The degraded arrays all slow down: in order to write a block, they have to read previously written data, some of which has to be restored from checksums.
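
The read-before-write penalty is easy to count. The sketch below assumes the classic read-modify-write scheme for a small random write (no full-stripe optimization, cache effects ignored); these are the textbook figures, not measurements from this controller.

```python
# Back-of-the-envelope count of disk operations per small random write on a
# healthy parity array, assuming plain read-modify-write with no caching.

def rmw_ios_per_write(parity_disks: int) -> int:
    """Read the old data and old parity block(s), write the new data and new parity block(s)."""
    reads = 1 + parity_disks   # old data block plus each old checksum block
    writes = 1 + parity_disks  # new data block plus each new checksum block
    return reads + writes

print("RAID5:", rmw_ios_per_write(parity_disks=1))  # 4 disk operations per write
print("RAID6:", rmw_ios_per_write(parity_disks=2))  # 6 disk operations per write
# A degraded array is worse still: when the old data or checksum it needs sits
# on the failed disk, it has to be reconstructed by reading the rest of the
# stripe, which is why the degraded RAID5 and RAID6 slow down in this test.
```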

This group of arrays would be perfect in this test if the 4-disk RAID10 did not slow down that much on very large blocks. As a result, this array falls behind the single HDD.

The controller successfully copes with writing large blocks to RAID5 and RAID6 arrays. The large blocks allow it to reduce the number of disk accesses: the controller does not have to read anything in order to write a full stripe. As a result, the degraded arrays are almost as fast as their healthy counterparts. The only exception is the 4-disk RAID6, which has a very low speed, just like the RAID10 in the previous diagram.
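
A short sketch of the full-stripe case, assuming an example 64KB stripe unit (the controller’s actual setting may differ): once the write covers the whole stripe, the new checksums are computed from the data being written, so every disk is written exactly once and nothing has to be read back.

```python
# Why large writes suit RAID5/RAID6: a full-stripe write needs no reads at all.
# The 64KB stripe unit is an assumed example value, not the tested configuration.

def full_stripe_bytes(disks: int, parity_disks: int, stripe_unit: int = 64 * 1024) -> int:
    """Amount of user data that fills one complete stripe."""
    return (disks - parity_disks) * stripe_unit

def disk_writes_per_full_stripe(disks: int) -> int:
    """A full-stripe write touches every disk exactly once and reads nothing."""
    return disks

for disks, parity in ((4, 1), (8, 1), (4, 2), (8, 2)):
    size = full_stripe_bytes(disks, parity)
    print(f"{disks} disks, {parity} checksum(s): {size // 1024} KB of data per stripe, "
          f"{disk_writes_per_full_stripe(disks)} writes, 0 reads")
```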

 