
Performance in Intel IOMeter DataBase Pattern

Well, we will start with the DataBase IOMeter pattern. Why this particular test, you may ask? Because it is excellent at revealing the peculiarities of HDD firmware, such as tagged command queuing, efficient lazy writing, and the like.

First, let's check how the performance of the Adaptec 29160N + Fujitsu MAS3735NP combination scales with queue depth. For this purpose we will plot on a single picture the request-processing speed for different read/write ratios under five types of workload, as measured with the Adaptec 29160N controller card.

You can clearly see that the dependence of request-processing speed on queue depth is not linear! At first the results jump up; under heavy workloads, however, the performance gains become much smaller, even though the request queue depth keeps growing geometrically.

As we have already pointed out in our article Ultra320 SCSI Interface: Highs and Lows. Part II, the SCSI controller drivers have two parameters that directly influence HDD performance: MAXTAGS and NumberOfRequests. The first determines the size of the request pack that can be handed to the hard disk drive for processing; within this pack the HDD may change the order in which requests are served. The second is the maximum number of requests that can be sent to the SCSI controller. By the way, when I talk about the request queue depth while discussing the benchmark results, I mean the queue going into the controller (I cannot address requests to the HDD directly; the controller cannot be bypassed).
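The interplay of these two parameters can be sketched as a two-tier queue: the OS keeps at most NumberOfRequests outstanding at the controller, and the controller hands the drive packs of at most MAXTAGS requests at a time. The parameter names come from the driver; the queue model below is our own simplified illustration, with arbitrary example values:

```python
# Toy model of the two-tier request queue: a controller-side cap
# (NumberOfRequests) and a drive-side pack size (MAXTAGS).

MAXTAGS = 64              # max tagged commands the drive accepts in one pack
NUMBER_OF_REQUESTS = 256  # max requests outstanding at the controller

def dispatch(pending, maxtags=MAXTAGS, limit=NUMBER_OF_REQUESTS):
    """Split a stream of requests into drive-sized packs.

    Only the first `limit` requests are admitted to the controller;
    the drive then receives them in packs of at most `maxtags`,
    within which it is free to reorder them.
    """
    admitted = list(pending)[:limit]          # controller-side cap
    return [admitted[i:i + maxtags]           # drive-side packs
            for i in range(0, len(admitted), maxtags)]

packs = dispatch(range(300))
print(len(packs), [len(p) for p in packs])   # 4 packs of 64 requests each
```

With these example values, 300 incoming requests are trimmed to 256 at the controller and reach the drive as four packs of 64, each of which the drive may reorder internally.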

So, what do we see on the diagram? While the request queue going into the controller is shallower than the maximum TCQ depth supported by the drive, any increase in workload brings a significant performance boost. This is exactly as it should be: the larger the request pack the controller sends to the HDD, the more opportunities the drive has to rearrange those requests into the order that is optimal from the performance point of view. Once the request queue exceeds the drive's internal queue depth, performance can keep growing only if the controller driver does the sorting and rearranging itself before sending the requests to the HDD; in other words, if the driver builds "optimal" N-request queues for the drive, where N = MAXTAGS. Since the controller is unfamiliar with the internal geometry of the HDD, request queue optimization performed by the controller is significantly less effective than the optimization carried out by the drive itself. That is why performance grows more slowly under heavy workloads: all the opportunities for request queue optimization have already been exhausted.
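Why a bigger pack gives the drive more room to optimize can be shown with a toy seek model. A real drive reorders requests using its knowledge of the platter geometry; here we use a crude stand-in, a greedy nearest-block-first policy over logical block addresses, with made-up LBA values, purely to illustrate the effect:

```python
import random

def head_travel(lbas, start=0):
    """Total head movement if requests are served in the given order."""
    pos, travel = start, 0
    for lba in lbas:
        travel += abs(lba - pos)
        pos = lba
    return travel

def reorder_nearest_first(lbas, start=0):
    """Greedy shortest-seek-first reordering within one request pack.

    A crude stand-in for the drive's geometry-aware TCQ reordering.
    """
    remaining, pos, order = list(lbas), start, []
    while remaining:
        nxt = min(remaining, key=lambda lba: abs(lba - pos))
        remaining.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

random.seed(1)
for pack_size in (4, 16, 64):
    pack = [random.randrange(1_000_000) for _ in range(pack_size)]
    fifo = head_travel(pack)
    tcq = head_travel(reorder_nearest_first(pack))
    print(pack_size, round(fifo / tcq, 2))  # ratio > 1: reordering wins
```

In this model the FIFO-to-reordered travel ratio grows with the pack size, which mirrors the diagram: gains are large while the queue still fits within the drive's TCQ depth, and level off once the pack size is capped at MAXTAGS.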


