As we know from our previous Promise controller reviews, enabling WB caching slightly increases the controller's response time during request processing. For write requests the reason for this delay is evident: the controller driver spends some valuable CPU time on caching strategy planning (it searches the deferred requests for one that could be combined with the incoming request). Why the controller also slowed down on read requests, however, remained a mystery…
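To illustrate the kind of work the driver does here, below is a minimal Python sketch of such write coalescing. It is not the Promise driver's actual code, and every name in it is hypothetical; the point is simply that the scan over the deferred queue is the CPU cost that shows up as extra response time.

    from dataclasses import dataclass

    @dataclass
    class WriteRequest:
        lba: int      # starting logical block address
        blocks: int   # length in blocks

    def try_coalesce(deferred: list[WriteRequest], new: WriteRequest) -> bool:
        """Merge `new` into an adjacent deferred write if possible.

        Returns True if the request was absorbed, False if it must be
        queued on its own. This linear scan is the planning overhead
        mentioned above.
        """
        for req in deferred:
            if req.lba + req.blocks == new.lba:   # new follows req on disk
                req.blocks += new.blocks
                return True
            if new.lba + new.blocks == req.lba:   # new precedes req on disk
                req.lba = new.lba
                req.blocks += new.blocks
                return True
        return False

    # Example: two adjacent 8-block writes become one 16-block write.
    queue = [WriteRequest(lba=0, blocks=8)]
    if not try_coalesce(queue, WriteRequest(lba=8, blocks=8)):
        queue.append(WriteRequest(lba=8, blocks=8))
    print(queue)  # [WriteRequest(lba=0, blocks=16)]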

Today, however, we see no slowdown in the RandomRead mode; instead, the controller loses speed as soon as write requests appear. It regains speed only when the share of writes grows large enough for WB caching to start working, or when the request queue gets deeper, i.e. when the controller driver has a rich choice of requests in front of it. Does that sound logical to you?

Well, now that we have analyzed the results of this pattern, we see that enabled WB caching affects only a few combinations of array type and number of HDDs. Evidently, the caching algorithms are tuned for a pair of drives in RAID 0 or RAID 1. They were also at work in the RAID 01 array, since the structure of this array type is a mirrored pair of stripe groups. One thing is absolutely clear, though: WB caching doesn't seem to be of any value for RAID 01, so I would suggest not enabling it, just in case :)

The efficiency of WB caching for RAID 0 and RAID 1 arrays depends a lot on the request type. For instance, a RAID 0 array with WB caching enabled works perfectly well when the share of reads is big enough, but doesn't feel quite at home under a heavy load with a lot of write requests. The situation with the RAID 1 array is just the opposite.

Intel IOMeter: Sequential Read & Write

Well, let's see how well the controller copes with sequential reading and writing. Of course, we are also curious to find out whether the caching mode (WB/WT) affects the read/write speed in this case.

IOMeter sends a stream of read and write requests to the array with a queue depth of 4. Once a minute the data block size changes, so when the test is complete we can see how the sequential read or write speed depends on the data block size. The results (the data transfer rate provided by the controller as a function of the data block size) are summarized in the table below:
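Just to make the procedure clear, here is a minimal Python sketch of this kind of measurement. It is not IOMeter: the file path, block sizes and duration are placeholders, it issues one request at a time (queue depth 1 rather than 4), and without direct I/O the numbers will largely reflect the OS file cache rather than the array itself.

    import time

    PATH = "/tmp/testfile"                        # hypothetical test file (create it first)
    BLOCK_SIZES = [2 ** n for n in range(9, 21)]  # 512 bytes ... 1 MB
    DURATION = 5.0                                # seconds per block size (IOMeter ran 60)

    def sequential_read_speed(path: str, block_size: int, duration: float) -> float:
        """Read `path` sequentially in `block_size` chunks for `duration`
        seconds and return the average throughput in MB/s."""
        bytes_read = 0
        deadline = time.monotonic() + duration
        with open(path, "rb", buffering=0) as f:  # unbuffered on the Python side
            while time.monotonic() < deadline:
                chunk = f.read(block_size)
                if not chunk:                     # end of file: wrap around
                    f.seek(0)
                    continue
                bytes_read += len(chunk)
        return bytes_read / duration / 2 ** 20

    for bs in BLOCK_SIZES:
        mbps = sequential_read_speed(PATH, bs, DURATION)
        print(f"{bs:>8} B blocks: {mbps:8.1f} MB/s")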

If we compare the controller's performance in the WB and WT modes, we will see that there is hardly any difference! Now come the graphs for the read speed in WB mode:

Well, the scalability of performance with the number of drives in the array is quite typical of this controller, but it only achieves the maximum read speed when the requests are very big. Besides, the four-HDD array didn't reach 160MB/s. As we will see a little later, though, the problem lies with the HDDs rather than with the controller.
