
To make the test even more illustrative, I used two different controllers:

As you can see, the graphs are similar irrespective of whether NCQ is enabled, so I think this proves that NCQ doesn’t work for write operations. But maybe it is only this particular drive? No. We tested a few SATA300 drives from different manufacturers and they all behaved like the WD1500AHFD at high percentages of write operations.

But where does this dislike for random-address writes come from? Why are they not put into the common queue? I think the answer is very simple: it just isn’t profitable. There are no performance gains to be had from it.

And really, why squeeze write requests through the narrow bottleneck of a 32-command queue when there is a huge cache buffer at your disposal?

It means that when the drive receives write requests, it simply stores them in the cache and reports the operation as completed without having written anything to the platter yet! From an outside observer’s point of view, such a drive looks very, very fast.
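
To make the idea a bit more concrete, here is a minimal Python sketch of how such deferred (write-back) caching behaves. The CachedDrive class and its methods are purely illustrative names of my own, not anything taken from real firmware:

    class CachedDrive:
        """Toy model of a drive with deferred (write-back) caching."""

        def __init__(self, cache_slots):
            self.cache_slots = cache_slots
            self.pending = []            # writes held in the cache buffer
            self.platter = {}            # lba -> data actually on the platter

        def write(self, lba, data):
            """Accept a write request.

            The request is only stored in the cache, yet the drive reports
            success immediately -- which is why random writes look so fast
            from the outside.
            """
            if len(self.pending) >= self.cache_slots:
                self.flush()             # cache full: commit to the platter first
            self.pending.append((lba, data))
            return "OK"                  # acknowledged before anything hits the platter

        def flush(self):
            """Commit all deferred writes to the platter (the slow, mechanical part)."""
            for lba, data in self.pending:
                self.platter[lba] = data
            self.pending.clear()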

Generally speaking, deferred writing is a very helpful feature of all modern desktop hard drives, and it is largely thanks to the ever-improving algorithms of adaptive deferred writing that the performance of hard drives keeps growing year after year. A modern HDD already comes equipped with as much as 16 megabytes of cache memory and may get even more in the future. Even half that amount (supposing the other half of the cache is allocated to look-ahead reading and auxiliary tables) can hold as many as 16384 sectors if necessary.
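
The arithmetic behind that figure is easy to check (assuming the standard 512-byte sector size):

    cache_bytes = 16 * 1024 * 1024      # 16 MB of cache memory
    sector_size = 512                   # bytes per sector
    half_cache = cache_bytes // 2       # the other half goes to read-ahead and tables
    print(half_cache // sector_size)    # -> 16384 sectors available for deferred writes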

Of course, no one is going to create that many cache segments. The more segments there are, the bigger the overhead. Suppose we know where the drive’s heads are right now and need to determine the processing order for the deferred write requests: the more segments there are in the buffer, the more time it takes to calculate the optimal order.
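
As a rough illustration of that overhead, here is a sketch of the kind of calculation involved: a greedy shortest-seek-first ordering of the deferred writes, where every step has to scan all remaining requests, so the work grows roughly quadratically with the number of cached segments. The function name and the numbers are made up for the example; real firmware algorithms are proprietary and far more refined.

    def order_deferred_writes(head_position, pending_lbas):
        """Greedy shortest-seek-first ordering of deferred writes."""
        remaining = list(pending_lbas)
        order = []
        pos = head_position
        while remaining:
            # scan every remaining request to find the one closest to the heads
            nearest = min(remaining, key=lambda lba: abs(lba - pos))
            remaining.remove(nearest)
            order.append(nearest)
            pos = nearest
        return order

    # Example: heads at LBA 5000, a handful of cached random writes
    print(order_deferred_writes(5000, [120, 9800, 5100, 300, 7600]))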

Anyway, the number of buffer segments allotted for deferred writing is well over 32 in modern hard disk drives, so it makes no sense for them to use the Write FPDMA Queued mechanism. And they just don’t use it! :)

I can show you the results of a Maxtor drive with a 16MB buffer as an illustration of one funny side effect of this. The graph below shows the dependence of the Maxtor’s random read and write speeds on the data block size:

Do you see that strange hump at the beginning of the write graph? It appears because the drive realized it was being bombarded with small random-address requests and tried to withstand the DoS attack by collecting the requests in its cache. I don’t know how many cache lines it opens up for that, but the solution looks very elegant. This clever caching strategy of Maxtor’s drives has even misled some reviewers into confusing access time with seek time :).

But let’s get back to the subject of our review. In this section we’ve found out that the WD1500AHFD-00RAR0 hard disk drive does support NCQ technology.

At the same time, the speed characteristics of the Raptor X at high loads are worse than those of the previous-generation drive from WD. I should also acknowledge the fantastically high speed of the WD740GD-FLC0 drive (this must be a special, server-oriented version of that drive model…)

I’m not yet ready to claim that NCQ has failed to prove its superiority over TCQ. After all, I’ve only tested one drive so far.
