by Nikita Nikolaichev
05/05/2006 | 03:03 PM
It’s not easy to get back to business again, but I can’t help it: Western Digital has at last released its new Raptor! I personally have long been waiting for it. Hard drives with a spindle rotation speed of 7200rpm have been increasing their storage capacity at a rapid rate to 200, 300, 400 and now to as much as 500 gigabytes! And the choice between a Raptor and an ordinary drive with several times its capacity was getting ever harder to make.
But now Western Digital reminds us of itself once again with its 150GB Raptor. To be exact, there are two new drives with 150GB storage capacity. One of them is quite an ordinary, but larger, Raptor, while the other… Well, the other can store 150GB of data too, but it has a striking appearance due to a transparent window in its case through which you can watch the read/write heads at work. I don’t recommend doing that for more than 15 minutes per day. It’s just hypnotizing!
So this model with a transparent window has got a unique name, Raptor X, and even a personal website where you can learn about its development and read its tech specs, see some photos, etc.
And so it happened that it was a Raptor X that dropped in on our labs first.
So, here it is:
The window through which you can peep inside the drive has an irregular shape, but the fastening of the platter block, the actuator and even, partially, the top read/write head can be seen quite well.
And here are the standard bottom and top views:
Western Digital remains true to its tradition of placing the electronics board with the chips facing inward. You may note one detail missing here – there is no ATA-SATA converter chip on the visible side of the board. But it’s all right. The chip shouldn’t be there because the drive features a native SATA interface.
As you see, the PCB has only three microchips: a processor, a memory chip and a motor controller.
You can also see two acceleration sensors in the top corners of the PCB. The drive’s accelerometers measure vibration and correct the current disk operations accordingly.
By the way, about that window… Some three years ago a hard disk drive with a window was fabricated in our laboratories. I do not claim it was the world’s first device of such a kind (I’m old enough to remember the transparent cases with a pack of 5MB platters as the disk subsystem of an IBM/360), but we seem to have made it ahead of Western Digital:
In our quarrelsome times this might be enough for a lawsuit, but we are not sure whether we should bring an action against WD ourselves or prepare to defend against one instead. :)
But let’s get closer to serious matters now. Hard disk drives of the Raptor series have to meet tough requirements in the market sector Western Digital positions them in. HDDs that are to replace SCSI drives in workstations and entry-level servers must be fast, reliable and cheaper. The price factor isn’t a problem at all: Raptor drives cost less than SCSI ones and do not require an expensive controller (in fact, the controller is usually integrated into the mainboard and thus comes absolutely free). The speed characteristics of the new Raptor are what we are going to explore throughout this review. Reliability is a difficult question, of course, but the second generation of Raptor drives seemed to me more reliable than the first. I hope the third generation is going to be even better in this respect.
And now let’s take a look at the specs:
The new model boasts twice the storage capacity of the older one (it can store 150GB now), a 16MB cache buffer, and support for NCQ technology. The speed of data transfers from the drive’s buffer to the host controller has remained the same at 150MB/s. That’s not a cause for grumbling since this parameter is not at all crucial for the drive’s performance.
The HDD supports Western Digital’s exclusive technologies called TLER (Time-Limited Error Recovery) and RAFF (Rotary Acceleration Feed Forward). The former is meant to improve reliability of the error-correction algorithms in RAID arrays, while the latter improves the performance of the drive when under vibration.
It’s the first time I am going to use our new testbed for hard disk drive tests. Everything’s changing and we can’t resist progress.
We had grown dissatisfied with the old testbed because we had built it exactly like an average computer system of its time, and it then remained intact through all the subsequent changes of chipsets, processor sockets, memory types, etc. On one hand, that was good because we could accumulate a large database of results for comparison and analysis; on the other hand, we were getting farther each day from the average computer configuration of our readers (or rather, our readers’ computers were getting more and more advanced in comparison with our testbed) and the results of our tests were beginning to lose touch with reality.
When we were selecting benchmarks for our hard drive tests, we did try to stick to those that did not depend much on CPU performance or the amount of system memory. And our trial runs of the new platform have shown that the hardware configuration has a much smaller effect on the test results than the operating system used.
Yes, the main innovation of the new testbed is the operating system installed on it – Windows XP SP2. Unfortunately, we didn’t make it to Vista because the implementation of the new WinFS file system in the OS known under the codename Vista has been postponed indefinitely.
And this is how our new testbed is configured:
The hard drives were tested on a Promise SATA-II 150 TX2 Plus controller:
This way we put hard disk drives with ATA and SATA interfaces under the same conditions since the employed controller supports both interfaces. The reason why we took an external controller at all is quite simple. Thanks to Intel, there’s only one connector for PATA drives on mainboards based on the i945+ICH7R chipset and this connector has to be occupied by the system HDD.
The following benchmarks are used in our tests:
The tested drives are formatted in FAT32 and NTFS as one partition with the default cluster size. In some cases mentioned below we used 32GB partitions formatted in FAT32 and NTFS with the default cluster size.
In a majority of tests in this review three hard disk drives are going to be compared. That’s because WD’s Raptors are much faster than ordinary 3.5” drives with a spindle rotation speed of 7200rpm – a long-established fact, and I’m not going to waste my time and yours trying to prove it once again.
On the other hand, it would be interesting to see if the performance of the new generation of Raptors is closer to that of modern SCSI drives. But as I am perfectly aware of all the strong and weak aspects of SCSI drives, I will only add them into the tests that imitate the typical disk subsystem load of file and web servers.
I mentioned three drives to be compared in this review and here they are: WD1500AHFD aka Raptor X, WD740GD (a second-generation Raptor), and… one more WD740GD.
Yes, it would be logical to compare Raptors of all three generations – WD360GD, WD740GD and WD1500AHFD – but unfortunately only the second-generation models are still available in shops, while the original 36GB Raptor is no longer on sale (you can still check out our review called WD Raptor: First ATA Hard Disk Drive with 10,000rpm Speed).
But why are there two WD740GD drives in this test session? It’s all thanks to HighPoint Technologies, particularly the company’s PR department.
Soon after the publication of the HighPoint RocketRAID 2320 Controller Review, they hinted to me that the controller would only show its full might with many hard drives.
I’m quite a skeptical person generally, so I took a few more drives in addition to the set of four WD740GD units we use for testing RAID controllers (I confess I thus violated the unwritten rule that disks with different firmware versions must not be used to test controllers).
I carried out some tests and obtained quite interesting results. The controller’s RAID5 performance did indeed jump as soon as there were more than four drives in the array. But two facts made me suspicious. First, the controller disabled the option of choosing the stripe block size in that configuration. Second, the controller’s performance was higher only with requests to write sequential data blocks. All this makes me think that HighPoint’s programmers have simply implemented something like 3Ware’s R5 Fusion technology.
But let’s get back to our Raptors… So I found myself with some recently manufactured samples of the WD740GD drive, quite a lot of time, and an unoccupied testbed. Soon I had test results for two WD740GD drives differing greatly in their manufacture date and firmware version. And when I compared the results, I was actually shocked…
So I’m in fact feeling obliged to HighPoint – I couldn’t have come up with a bigger intrigue for this review.
Just in case, here are the full model names of the HDDs and their firmware versions:
The first test in this review is meant to check the average response time of a hard disk drive at reading/writing random sectors. The goal of the test is to measure the disk access time at reading and see how aggressive the drives’ deferred write policy is (roughly speaking, we are going to see how many segments are allocated in the drive’s cache for storing write requests).
So we make IOMeter send a stream of requests to read and write 512-byte data blocks with the request queue depth set at 1 for 10 minutes. The total number of requests processed by the drive is over 60 thousand. This ensures we get a sustained disk response time in the end, which doesn’t depend on the size of the drive’s cache buffer.
Here are the results of the drives in IOps (the number of requests processed per second). The more requests the drive manages to process, the faster the drive is, as you can easily guess.
The two WD740GD clearly use different read-ahead policies. More interestingly, they also take different approaches to processing write requests! The WD1500AHFD is between them in reading speed, but far faster than both WD740GD drives at processing write requests. Well, anything else would have been strange given its twice-larger cache.
The results can be translated from IOps into traditional parameters – read and write response time.
The results themselves do not change, of course. They are just represented in a different way.
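The conversion itself is a simple reciprocal: since the queue depth is 1, the drive handles exactly one request at a time, so the average response time in milliseconds is just 1000 divided by the IOps figure. A quick sketch (the sample IOps value is hypothetical, not one of the measured results):

```python
def response_time_ms(iops: float) -> float:
    """Average response time at queue depth 1: with a single request
    in flight, time per request is the reciprocal of throughput."""
    return 1000.0 / iops

# Hypothetical example: a 10,000rpm drive doing ~125 random reads per second
print(response_time_ms(125.0))  # -> 8.0 (milliseconds)
```

Note also that at roughly 8ms per request, a 10-minute run processes on the order of 600 / 0.008 = 75,000 requests, which agrees with the “over 60 thousand” figure quoted above.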
I’ll soon get back to examining the drives’ behavior at processing random read and write requests. Right now let’s check the speed of reading from and writing into the drive’s cache buffer.
There are a lot of benchmarks to measure the speed of reading from the drive’s buffer, but unfortunately few of them are free from certain deficiencies. For example, the burst rate may be measured using small-size data blocks (in compliance with the old ATA/ATAPI specifications). Or the number of iterations may be dubious, too.
That’s why I can do no better than to use a test whose operation is well known to me. Moreover, IOMark is capable of measuring the speed of writing into the buffer which is perhaps an even more interesting parameter than the read speed.
The first test was performed with the drive attached to a Promise S150 II TX2+ controller.
The speed seems to be limited by the controller’s bandwidth (the controller is plugged into an ordinary PCI slot). Let’s try to attach the drive through the chipset then.
This looks true to life indeed. At least, the burst rate is very near the theoretical maximum of the Serial ATA-150 interface.
And now let’s see what we have with the previous-generation Raptors.
The two samples behave like twin brothers in this test, and with both of them the write-into-buffer speed is lower than the read-from-the-buffer speed. As you’ve seen above, these two speeds are almost identical with the WD1500AHFD.
Here’s a summary for this test – the maximum read and write speeds are compared:
The Raptor X is obviously better than the previous-generation drives in this parameter, especially when it comes to writing into the buffer.
I used the good old WinBench 99 to draw the following linear read graphs (the pictures are clickable):
So, the new Raptor X is almost 15MB/s faster at linear reading. This is a very big step forward! On the other hand, the common notion that a drive’s performance depends directly on its linear read speed is not exactly true. It doesn’t, and you are going to see that right in the next section of the review.
After all is said and done, the main and most intriguing feature of this review is the new hard disk drive’s support for Native Command Queuing technology.
Western Digital’s Raptor drives used to support only the so-called legacy ATA queuing. And WD’s market opponents would regard the word legacy as synonymous with obsolete or archaic, promoting NCQ instead. But you may remember that even that old and archaic technology did quite well in our tests.
But now that the Raptors have acquired support for the most progressive commands-reordering technology, the competitors cannot really blame Western Digital (except for the company’s growing profits :)).
The WD740GD drives support Tagged Command Queuing (TCQ) whereas the Raptor X features NCQ. The Promise S150 II TX2+ controller supports both versions of this technology, thus giving us an opportunity for an all-around comparison.
As usual, I use IOMeter to bombard the drive with a stream of requests to read random-address sectors while steadily increasing the length of the request queue. Thus, the controller has an opportunity to send independent commands to the drive (commands whose result doesn’t affect the queuing or processing of other commands).
The drive receives the controller’s command to read a random-address sector and puts it in the queue (a special buffer that can store a maximum of 32 commands) and then processes the queued commands in the order it thinks most advantageous for the user. The drive may be trying to reduce the average time of response to a request or to make the maximum response time lower, but this is the know-how of the programmers that are developing the secret algorithms to be written in the drive’s firmware.
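The reordering itself can be pictured with a toy model. The sketch below is my own illustration, not WD’s actual firmware logic (real firmware also accounts for rotational position, not just seek distance): it greedily picks the queued request nearest to the current head position.

```python
def reorder_ncq(queue: list[int], head_pos: int) -> list[int]:
    """Toy command reordering: serve queued LBAs nearest-first.

    A crude stand-in for what NCQ firmware does - minimizing head
    movement instead of serving requests in arrival order."""
    order = []
    pos = head_pos
    pending = list(queue)
    while pending:
        nxt = min(pending, key=lambda lba: abs(lba - pos))  # nearest LBA
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt  # the head is now parked at the served request
    return order

print(reorder_ncq([100, 10, 60], 50))  # -> [60, 100, 10]
```

With a queue of LBAs [100, 10, 60] and the head at 50, the greedy order is [60, 100, 10] – a shorter total seek distance than serving the requests in the order they arrived.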
My task is to send requests to the drive’s input and have some output from it. And then I’ll explain the output as best as I can. So, the next diagram shows the results of the three tested drives (two WD740GD and one Raptor X).
And what do we have here? Even the old WD740GD-FLA1 is far faster than WD’s new Raptor once the length of the outstanding request queue exceeds 16. And the WD740GD-FLC0 is even better – this one is just wild, it’s so fast!
Can it be that NCQ just doesn’t work in the new drive? Or maybe the controller doesn’t send it the “magic” commands? On the other hand, there’s a definite performance growth as the outstanding requests queue is getting longer and the graph has a characteristic shape at loads of up to 32 requests, too.
Should I use the old trick with disabling the controller’s use of Read FPDMA Queued commands?
Using the PDCM utility supplied with the Promise controller, I can prohibit the queuing of commands. I will test the drive with commands queuing on and off to see how useful NCQ is for the WD1500AHFD drive.
For the results not to depend on a controller from only one manufacturer, I will also use the controller integrated in Intel’s ICH7R South Bridge in which case I disable NCQ support from the mainboard’s BIOS (by disabling the AHCI option).
There are four graphs in the next diagram, two for each controller. The graphs illustrate the reaction of the controller and drive to the increase of the request queue length.
So, NCQ does work in the new drive, that’s certain. And the drive’s performance with NCQ enabled is almost the same on both controllers. When NCQ is disabled, the graphs differ – and the one I got on the Intel controller looks somewhat better. I don’t mean this as a rebuke to the Promise controller – I guess only a tester would attach an NCQ-supporting disk to this controller only to then disable that very NCQ!
So, the Raptor X does support Native Command Queuing. Let’s now see how this hard drive is going to handle a stream of commands with requests to read as well as to write data. I created a few IOMeter patterns with a different percentage of write operations and ran them on the drive (which was connected via the Promise controller).
The graphs are quite interesting. You can see they are leveling out in the area of low loads as the share of write operations increases. At the same time, the graphs with a big share of writes degenerate almost into a flat line in the area of high loads.
There is only one explanation that suggests itself: this is all due to the smaller share of read requests. Can it be that NCQ works only for read requests? It looks that way, at least. Let’s check it out right now.
How? I’m just going to run the test with 100% of write requests with the controller’s support for NCQ turned on and off. If NCQ works, the resulting graphs will differ. If the graphs are similar, then my supposition is true.
To make the test even more illustrative, I used two different controllers:
As you can see, the graphs are similar irrespective of the status of NCQ. So I think this proves that NCQ doesn’t work for write operations. But maybe it doesn’t work in this particular drive only? No. We have tested a few SATA300 drives from different manufacturers and they always behaved like the WD1500AHFD at high percentages of write operations.
But where does this dislike of random-address writes come from? Why are they not put into the common queue? I think the answer is very simple: it isn’t profitable. You just don’t get any performance gains from it.
And indeed, why should you push write requests into the narrow bottleneck of the 32-command buffer when there is a huge cache buffer at your disposal?
It means that when the drive receives write requests, it just stacks them in the cache and reports a completed operation without yet writing anything to the platter! From an outside observer’s point of view, it is going to be a very, very fast drive.
Generally speaking, deferred writing is a very helpful feature of all modern hard drives for desktop computers, and it is largely due to the ever-improving algorithms of adaptive deferred writing that the performance of hard drives is getting higher year after year. A modern HDD already comes equipped with as much as 16 megabytes of cache memory and may call for even more in the future. Even half that size (supposing one half of the cache is allocated to look-ahead reading and auxiliary tables) can store as many as 16384 sectors if necessary.
Of course, no one is going to create so many cache segments. The more segments there are, the bigger the overhead is. Suppose we know where the drive’s heads are right now and we are to find out the processing order for the deferred write requests. The more segments there are in the buffer, the more time it is going to take to calculate the optimal order.
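The arithmetic behind that 16,384-sector figure is straightforward:

```python
cache_bytes = 16 * 1024 * 1024      # a modern drive's 16MB cache buffer
sector_bytes = 512                  # one sector

half_cache = cache_bytes // 2       # the other half is assumed to serve
                                    # read-ahead and auxiliary tables

print(half_cache // sector_bytes)   # -> 16384 sectors for deferred writes
```

Sixteen thousand potential deferred-write slots against a 32-slot NCQ buffer makes the firmware’s choice rather obvious.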
Anyway, the number of buffer segments allotted for deferred writing is over 32 in modern hard disk drives, so there is no sense for them to use the Write FPDMA Queued mechanism. And they just do not use it! :)
I can show you the results of a Maxtor drive with a 16MB buffer as an illustration of one funny side effect of this. The graph below shows the dependence of the Maxtor’s random read and write speeds on the data block size:
Do you see that strange hump at the beginning of the write graph? It’s because the drive found out it was being bombarded with small random-address requests and tried to withstand the DoS attack by collecting the requests in its cache. I don’t know how many cache lines it opens up for that, but the solution looks very elegant. Such a logical caching strategy of Maxtor’s drives has even misled some reviewers into confusing access time with seek time :).
But let’s get back to the subject of our review again. In this section we’ve found out that the WD1500AHFD-00RAR0 hard disk drive does support NCQ technology.
At the same time, the speed characteristics of the Raptor X at high loads are worse than those of the previous-generation drive from WD. I should also acknowledge the fantastically high speed of the WD740GD-FLC0 drive (this must be a special, server-oriented version of that drive model…)
I don’t claim yet that NCQ hasn’t proved its superiority over TCQ. After all, I’ve only tested one drive so far.
As I promised above, I’m going to discuss the drives’ speeds at processing random read/write requests at more length now.
Earlier I showed you how the new Raptor reacts to random requests the size of one sector, i.e. 512 bytes. But what if the data block is larger? We may see some anomaly or, if there is none, we simply get the dependence of the random read/write speed on the size of the data block. The drives may even end up doing de facto linear reading.
So, there’s a lot of food for thought again. These three hard disk drives are manufactured by the same company, yet how different they are! The read-ahead policy of the WD740GD-FLA1 seems to be very conservative in the sense that it doesn’t seem to be trying to read anything in advance. The two WD740GD drives have identical platters, so the difference in their random read speeds can only be explained by different read-ahead algorithms.
The WD1500AHFD seems to be inclined towards look-ahead reading as is indicated by the small performance loss on medium-size data blocks.
And now let’s see what we have at random writing.
And here’s a surprise for you: the WD740GD drives are faster at processing 1 to 32KB data blocks than at processing blocks the size of a sector (512 bytes). It looks like deferred writing doesn’t work for 512-byte blocks, or works very lazily (fewer than the maximum number of cache segments is created – perhaps in an attempt to save some cache space?)
A general observation from the two diagrams above is as follows: random reading/writing does not transform into linear operations even when 32MB data blocks are processed.
For linear operations – proceed to the next section!
It’s all simple here. The drive is receiving a stream of requests to read and write data blocks. The addresses of the data blocks are sequentially increasing. Once in a minute the size of the data block is changed so that we could see how the linear read/write speed depends on the data block size.
There’s only one thing to be said: the WD740GD drives are no match for the Raptor X when it comes to sequential reading/writing. The Raptor X’s advantage at writing small data blocks is most impressive (remember the Write Burst Rate graphs?)
Next goes a very interesting test that measures the drive’s performance in multi-threaded environments.
Unlike our colleagues, we do not use NBench for explorations of this kind. Since modern hard drives come with as much as 16 megabytes of cache memory, we need to push enough data through the drive’s cache to minimize the measurement error due to deferred writing. NBench would always report a write speed higher than the linear read speed because the drive reports the end of a write operation before the tail of the file in the cache is actually written to the platter.
So, we use IOMeter instead and run it long enough for the amount of transferred data to exceed the drive’s cache buffer at least twofold. We create four simultaneously running threads (each thread is handled by an individual worker with its own portion of disk space to operate in). The disk is accessed in 64KB blocks and the outstanding request queue depth is steadily changed from 1 to 8.
The diagrams below show the results for the queue length of 1 as representative of the typical load on the disk subsystem of a desktop computer.
The case with one thread is in fact equivalent to linear reading from the disk. It’s no wonder then that the two WD740GD have identical results and the WD1500AHFD is much faster than both.
It’s different when there are two threads to be processed. The WD1500AHFD is still in the lead while the two WD740GD models have split up. The “wild” WD740GD-FLC0 is suddenly much slower than its brother. This is the consequence of its optimization for random operations and of its reluctant look-ahead reading.
I usually take in the results of this test by ear. If, after switching from one thread to two, the drive starts to click, its heads running between the two work zones, its speed is going to be low. But if the drive stays almost silent, it is going to be fast in this test…
Here, the WD740GD-FLC0 drive didn’t believe that we’d keep reading in either of the work zones, and handled each read command without taking the statistics of previous requests into account.
The WD740GD-FLA1 is the fastest drive in the test at processing three threads; the WD740GD-FLC0 has accelerated and caught up with the WD1500AHFD.
The WD1500AHFD is unrivalled in the writing test. The WD740GD are delivering the same performance when processing one and two threads, but then the WD740GD-FLC0 suddenly leaves its opponent far behind. Or the WD740GD-FLA1 can be said to have fallen far behind its brother.
It’s hard to tell why the WD1500AHFD is superior in this test. It may be its larger cache or better firmware algorithms, but it is clearly faster than the previous-generation Raptors.
This test simulates the workload typical of a database server.
In this IOMeter pattern we are sending requests to read and write 8KB data blocks; the request queue length is steadily changing, as is the ratio of reads to writes.
The three graphs below show you the dependence of the drives’ performance on the percentage of write requests for three load modes (1, 16, and 256 outstanding requests).
The WD1500AHFD drive has no competitors under linear load. It’s only at high percentages of write requests that the WD740GD drives can overtake it.
Let’s see what we have at a higher load (remember the graphs from the NCQ test!)
The WD740GD are both faster than the WD1500AHFD when there are more reads than writes to be done. At high percentages of writes the WD1500AHFD is quite competitive against the WD740GD.
And now, the maximum load.
No comments are necessary here.
Except for linear load, the new Raptor X cannot compete with the previous-generation drives in such operating modes. And once again I should acknowledge the very high performance of the WD740GD-FLC0 drive. This drive seems to be set to win all of our server tests.
And I’m going to check the WD740GD-FLC0 right away in patterns that simulate the load on a file and web server. As I promised at the beginning of the article, I will add two SCSI drives into this section of the review, the fastest and slowest of 10,000rpm models: Maxtor Atlas 10K V and Seagate Cheetah 10K.7.
It’s all obvious here: the SCSI drives are beyond competition. Even the slowest of the new generation of 10,000rpm drives, Seagate Cheetah 10K.7, easily outperforms the “wild” WD740GD-FLC0, not to mention the WD1500AHFD. The latter did well under small loads, but didn’t feel like speeding up after that.
The Web Server results cannot surprise us: the WD1500AHFD is slower than the previous-generation Raptors as well as the SCSI drives. It looks good under small loads, but is generally not server-friendly.
Perhaps the WD1500AHFD can do well as a hard drive for a workstation? Why not? The disk subsystem of a workstation is characterized by small loads and a somewhat bigger share of writes. That’s why the new Raptor can show its best here.
At first I ran the test on the whole storage capacity of the drive (assuming that the useful data are spread through the whole disk).
The WD1500AHFD is really in the lead under small loads, but much slower than its opponents under big loads.
Next I will limit the operating zone of the test to 32 gigabytes and run it once again (this simulates the operation of a drive with a 32GB system partition; we can thus compare drives of different capacities and estimate the value of higher areal density).
The results of the WD1500AHFD are considerably better in this test. I wonder how this is going to affect its overall performance rating which we calculate by the following formula:
Performance = Total I/O (queue=1)/1 + Total I/O (queue=2)/2 + Total I/O (queue=4)/4 + Total I/O (queue=8)/8 + Total I/O (queue=16)/16 + Total I/O (queue=32)/32
As you can see from the formula, smaller loads have greater weights in the overall result because they are more typical of a workstation.
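In code, the rating boils down to a weighted sum where each Total I/O result is divided by its queue depth. A sketch (the sample scores below are made up purely for illustration, not measured results):

```python
def workstation_rating(total_io: dict[int, float]) -> float:
    """Workstation performance rating: each queue depth's Total I/O
    score is weighted by 1/depth, so shallow queues dominate."""
    return sum(score / depth for depth, score in total_io.items())

# Hypothetical Total I/O scores at queue depths 1..32
scores = {1: 120.0, 2: 150.0, 4: 180.0, 8: 200.0, 16: 210.0, 32: 215.0}
print(workstation_rating(scores))  # -> 284.84375
```

Note how the depth-1 score alone contributes 120 of the ~285 points – exactly the bias toward light, desktop-like loads the formula is designed for.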
And here’re the ratings we’ve got:
Thanks to its higher speed under low loads, the WD1500AHFD has got higher ratings irrespective of how much of the drive’s storage space is in use.
And now I will try to evaluate the performance of the drives using Futuremark’s PCMark05. Among other things, this benchmark includes a set of hard disk drive tests. The tests are simple: the user can run the drive along a prefabricated trace. A trace is in fact a log of accesses to some “reference” hard disk drive which was mercilessly tested in Futuremark’s laboratories.
The benchmark offers five traces in total:
The tests can be run separately or all in a batch. I chose the second option, perhaps wrongly.
The fact is, each decent hard disk drive wants to be a “black box” – one that reacts to a request at its input depending on the entire history of earlier requests. By running the tests in a batch, we pass the drive through all the traces one by one. It doesn’t have enough time to get used to one trace before we start another.
On the other hand, it would be strange to run the XP Startup trace a dozen times, because the OS is assumed to boot after a cold or warm restart of the computer, at which moment the hard drive is reinitialized, i.e. the accumulated access statistics are cleared and the drive is reset to some “initial” state.
Anyway, the test conditions were the same for all the participating devices, so we can compare the results.
So, the WD1500AHFD is again on top. The “wild” WD740GD can provide some competition to it on the XP Startup trace, but on all the other traces the Raptor X is unrivalled.
I’m rather doubtful about the results of the drives on the Virus Scan and File Write traces because the measured speed is higher than the drives can physically deliver.
But we’ve got a test that can measure the files processing speed somewhat better than PCMark does :). I mean FC-Test, of course (for details see our article called X-bit's FC-Test 1.0 or "System Rebooted").
It’s simple to use FC-Test: two 32GB partitions are created on the tested hard disk drive and are formatted in NTFS and then in FAT32. A file-set is created on the first partition and is then read from it, then copied into a folder on the same partition (Copy Near) and then into a folder on the second partition (Copy Far).
The computer is rebooted before each test to avoid the effect of the OS’s file caching on the results. Five file-sets are used:
The results of the drives in each test action are discussed below:
The WD1500AHFD is the best at writing large files, while the WD740GD-FLC0 is in the lead on smaller files, even though not by much. The WD740GD-FLA1 disappoints me as it is far behind the two other drives. Let’s now see how fast the drives are at reading the file-sets.
The Raptor X again has the highest read speed, although the WD740GD-FLC0 is very close in some cases. What’s interesting, the WD740GD-FLA1 is quite far behind its brother in all the patterns except ISO. The reading of the files is not done at random, so the difference seems to be due to better look-ahead read algorithms of the WD740GD-FLC0 model (you may recall the results of the Random Read test, too).
The Raptor X is beyond competition at copying files within the same partition. Its large cache and high read/write speeds are the winning factors here. The WD740GD-FLC0 wins the duel of the two WD740GD models, leaving its opponent far behind in four patterns out of five.
Copying from one partition to another doesn’t change anything. The Raptor X is confidently on top, and the WD740GD-FLC0 is much faster than the WD740GD-FLA1.
There’s no need to discuss the results of the drives in FAT32 as they don’t differ much from those in NTFS. Still, I built the following diagrams in case you are interested in the FAT32 file system:
So, the Raptor X has done very well in the file-processing tests. It is faster than its predecessors at reading, writing and copying files.
As I said above, files in our FC-Test scripts aren’t processed randomly. The files are written to the disk sequentially (because the disk is empty before the test and is formatted between the cycles of work with the pattern) and are read from the disk in a sequential order, too. Thus, the drive works under almost ideal conditions as is clearly indicated by the results in the ISO pattern where it almost reaches its linear read speed.
This ideal is seldom met in reality because of so-called fragmentation: a file may be split into several fragments rather than stored on the disk as a single whole. And of course a fragmented file is read at a somewhat lower speed than one that is not fragmented.
We can’t imitate file fragmentation at the sector level, but we can imitate it at the file level. That is, we can set up a situation in which the files are read in an order other than the one in which they are stored on the disk.
In other words, we want to read the files at random. This could be done in FC-Test 1.0 by preparing file-lists with a predetermined file-processing sequence and then using them to read the files from the disk, but that would be inconvenient and not very illustrative.
Imitating random reading of files in IOMeter is not a task I would try to solve. Writing a correct pattern for such a test would take a mathematical genius, and that genius would spend the rest of his life explaining that his pattern is indeed correct.
I’d rather use the new test utility, FC-Test 2, which is being beta-tested in our labs right now.
Among other things, this test can read files with a specified degree of randomness and locality. That is, you can specify the maximum length of the file chain to be read in one pass and the maximum length of the jump through the file-list.
I simplified the script’s work for this test: the jump length was limited to 100 files, while the chain length was varied from 1 to 10 and kept fixed within each run (the random-number generator was disabled).
The result is the dependence of the random read speed on the length of the file chain read in a single pass.
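FC-Test 2’s file-list format isn’t public, so here is only a rough sketch of how such a chained random read order could be generated. The helper `chained_order` is my own hypothetical construction: it jumps a bounded distance through the list (the “jump length”), then takes a run of consecutive files (the “chain length”), and repeats until every file has been scheduled exactly once:

```python
import random

def chained_order(n_files, chain_len, max_jump, seed=0):
    """Build a read order over file indices 0..n_files-1:
    jump at most `max_jump` entries forward (wrapping around),
    then schedule `chain_len` consecutive files, and repeat
    until every index has been scheduled exactly once."""
    rng = random.Random(seed)  # fixed seed: deterministic, repeatable runs
    order, visited, pos = [], set(), 0
    while len(order) < n_files:
        # bounded jump through the file-list
        pos = (pos + rng.randint(1, max_jump)) % n_files
        # read a chain of consecutive files from the new position
        for k in range(chain_len):
            idx = (pos + k) % n_files
            if idx not in visited:
                visited.add(idx)
                order.append(idx)
    return order

# e.g. 1000 files, chains of 10, jumps of up to 100 files
order = chained_order(n_files=1000, chain_len=10, max_jump=100)
print(order[:12])
```

With `chain_len=1` every read lands on a “random” file (maximum head movement per byte read); with longer chains more data is transferred per seek, which is exactly why the measured speed grows with chain length.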
And these are the results I’ve got.
With rare exceptions the WD1500AHFD is always in the lead. The WD740GD-FLC0 takes second place, and the WD740GD-FLA1 is third.
The fluctuation of the test results is easy to explain. We took the standard Programs pattern from FC-Test 1, and this pattern includes files of different sizes, so the total amount of data in a chain of files doesn’t depend strictly on the number of files in the chain. The random read speed depends on the speed of head movement as well as on the amount of data read in a pass. The first parameter is clear enough: the higher the areal density (and the more platters the drive has!), the fewer tracks the pattern’s files occupy. The narrowing of the zone in which the heads operate has a direct effect on performance.
The second parameter is more complex: the total size of the files read in one pass depends on exactly which files are selected.
Of course we could use same-size files to get perfectly scalable results, but would they have any practical sense?
In this review we have thoroughly explored the new hard disk that Western Digital targets at computer enthusiasts, the people who want to have the most advanced and fastest components in their systems. It’s none of my business whether they actually need that speed. My goal was to see how well the Raptor X suits the market niche WD is promoting it into.
And here’s my verdict – by all 100 percent and more! Each and every one of our tests shows that the Raptor X is sharpened for work in a single-user environment, where it really looks much better than its closest opponents. Yes, it didn’t have too many opponents in this review, but not because I was too lazy to test them. I just wanted to spare the reputation of the companies that don’t yet have products similar to WD’s Raptor! :)
WD’s decision to personalize the drive by exposing its hitherto hidden internals to the user’s eyes is a very happy one. The transparent window makes the drive memorable, a gem in the showcase of any computer shop. So I have no doubts about the commercial success of the Raptor X – you can’t stop wanting the best after you’ve tasted it.
P.S. And yet I would very much like to get a server-oriented version of the new Raptor.