Mission "Defragmentation"

In this article we’ll discuss whether defragmentation can be used as a performance test for hard disk drives. We will also see how the time it takes to perform the defragmentation procedure depends on support for Native Command Queuing technology. 16 hard disk drives participated in our test session.

by Aleksey Meyev
06/20/2007 | 11:22 PM

A hardware reviewer can best be characterized as “ever-searching”: constantly looking for exciting new devices to test as well as for new test methods that would better reveal the capabilities of a particular device. That’s why we added FC-Test to our synthetic benchmarks (IOMeter, WinBench, PCMark) to compare hard disk drives under real-life conditions. And now we want to compare HDDs using the defragmentation tool integrated into Windows XP. Why defragmentation? Because this application puts your hard disk under stress by copying large amounts of data. It ensures acceptable repeatability of results, yields visually comprehensible output and is employed by almost every PC user (we mean those users who care about how fast their PC is).


We also had to check out the influence of NCQ on the defragmentation speed after we had seen the results obtained by our colleagues. Their numbers were so shocking that we wanted to check them for ourselves. :)

So, what is defragmentation, by the way? When saved to the hard disk, files are not always written into contiguous clusters. More often than not, a file occupies several runs of adjacent clusters in different parts of the platter. This happens when a file stored on the hard disk grows as you edit it, when large files are written to an almost full hard disk, or when there is no run of adjacent free clusters long enough to hold the current file. The more often your files are modified, the more fragmented they become (i.e. a file is broken down into more fragments stored in different sections of the disk). As a consequence, reading the file takes longer since the HDD has to move its heads a longer way to collect the file’s fragments from the platters. The more fragmented your files are, the slower your PC works. Don’t you ever get annoyed at a game level or a heavy application (like Adobe Photoshop) loading up too slowly? Now you know one possible reason.
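
To make the extra head travel concrete, here is a toy model of our own (abstract block addresses, not a real drive model): if we approximate the cost of reading a file as the distance the heads cover between consecutive fragments, a scattered file is clearly more expensive than a contiguous one.

# Toy model: head travel for a contiguous vs. a fragmented file.
# Fragment positions are abstract addresses; real seek times are not
# linear in distance, so this only illustrates the trend.

def head_travel(fragment_starts):
    """Total distance covered between consecutive fragments."""
    return sum(abs(b - a) for a, b in zip(fragment_starts, fragment_starts[1:]))

contiguous = [1000]                      # one fragment: a single seek, then linear reading
fragmented = [1000, 52000, 8000, 91000]  # the same file scattered over the platter

print(head_travel(contiguous))  # 0: no travel between fragments
print(head_travel(fragmented))  # 178000: extra travel just to collect the fragments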

So, file fragmentation is bad. It is an evil you have to fight. How? Programs called defragmenters are your main weapon. There are a lot of them available, but they all follow the same principle: the hard disk is analyzed to create a file distribution map, and fragmented files are then moved into free disk space so that each file occupies adjacent clusters.
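
Here is a minimal sketch of that principle (a cluster-level simulation with invented file names and layout; real defragmenters are far more sophisticated about choosing what to move where):

# Toy defragmenter: move each fragmented file into the first free run of
# clusters long enough to hold it. `disk` maps each cluster to its owner
# (None = free); `files` maps file names to their cluster lists. The free
# run is located before the old clusters are released, just as a real tool
# must copy the data before freeing its source.

def find_free_run(disk, length):
    """Index of the first run of `length` free clusters, or -1."""
    run = 0
    for i, owner in enumerate(disk):
        run = run + 1 if owner is None else 0
        if run == length:
            return i - length + 1
    return -1

def defragment(disk, files):
    for name, clusters in files.items():
        if clusters == list(range(clusters[0], clusters[0] + len(clusters))):
            continue                   # already contiguous - leave it alone
        start = find_free_run(disk, len(clusters))
        if start == -1:
            continue                   # no suitable run; a real tool would consolidate free space first
        for c in clusters:             # release the old, scattered clusters
            disk[c] = None
        for i in range(len(clusters)): # occupy the new, adjacent ones
            disk[start + i] = name
        files[name] = list(range(start, start + len(clusters)))

# A 16-cluster disk where file "a" is split into two fragments:
disk = ["a", "a", None, "b", "b", "a", None, None] + [None] * 8
files = {"a": [0, 1, 5], "b": [3, 4]}
defragment(disk, files)
print(files["a"])  # [6, 7, 8] - one contiguous run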

Defragmentation in Windows XP is supposed to be done with the standard integrated tool, which is based on the commercial Diskeeper from Executive Software. Like most third-party defragmentation utilities, this program uses Microsoft’s API, the FSCTL command set. As opposed to earlier implementations, e.g. in Windows NT and Windows 2000, the version integrated into Windows XP can defragment the Master File Table (MFT) and supports clusters larger than 4KB, which are created when disk partitions larger than 4GB are formatted with the system’s default formatting tool. This application proved so successful that it was transferred into Microsoft’s newest OS, Windows Vista, without modifications.
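
For the curious, here is a sketch of what talking to that interface looks like: counting a file’s on-disk fragments with the FSCTL_GET_RETRIEVAL_POINTERS control code through Python’s ctypes. The buffer handling is simplified (one fixed-size call, no ERROR_MORE_DATA loop) and the path is hypothetical, so treat it as an illustration of the API rather than a production tool.

# Count a file's extents (fragments) via FSCTL_GET_RETRIEVAL_POINTERS.
import ctypes
from ctypes import wintypes

FSCTL_GET_RETRIEVAL_POINTERS = 0x00090073
GENERIC_READ = 0x80000000
FILE_SHARE_READ_WRITE = 0x00000003   # FILE_SHARE_READ | FILE_SHARE_WRITE
OPEN_EXISTING = 3

kernel32 = ctypes.windll.kernel32
kernel32.CreateFileW.restype = wintypes.HANDLE
kernel32.CreateFileW.argtypes = [
    wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD, wintypes.LPVOID,
    wintypes.DWORD, wintypes.DWORD, wintypes.HANDLE]
kernel32.DeviceIoControl.argtypes = [
    wintypes.HANDLE, wintypes.DWORD, wintypes.LPVOID, wintypes.DWORD,
    wintypes.LPVOID, wintypes.DWORD, ctypes.POINTER(wintypes.DWORD),
    wintypes.LPVOID]
kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

def count_extents(path):
    handle = kernel32.CreateFileW(path, GENERIC_READ, FILE_SHARE_READ_WRITE,
                                  None, OPEN_EXISTING, 0, None)
    if handle == wintypes.HANDLE(-1).value:      # INVALID_HANDLE_VALUE
        raise ctypes.WinError()
    try:
        start_vcn = ctypes.c_longlong(0)         # STARTING_VCN_INPUT_BUFFER
        out = ctypes.create_string_buffer(64 * 1024)
        returned = wintypes.DWORD(0)
        if not kernel32.DeviceIoControl(
                handle, FSCTL_GET_RETRIEVAL_POINTERS,
                ctypes.byref(start_vcn), ctypes.sizeof(start_vcn),
                out, ctypes.sizeof(out), ctypes.byref(returned), None):
            raise ctypes.WinError()              # e.g. tiny files resident in the MFT
        # The first DWORD of RETRIEVAL_POINTERS_BUFFER is the extent count.
        return ctypes.cast(out, ctypes.POINTER(wintypes.DWORD))[0]
    finally:
        kernel32.CloseHandle(handle)

print(count_extents(r"D:\some_large_file.bin"))  # hypothetical path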

Note that Microsoft recommends having at least 15% of free space on any disk volume you want to defragment.

In this article we’ll see whether defragmentation can be used as a performance test for hard disk drives. We will also see how the time it takes to perform the defragmentation procedure depends on the HDD’s support for Native Command Queuing technology.

Testing Methodology

First, the conditions of the test had to be created. We did that by creating a heavily fragmented file structure with a total size of 23.83GB on a 32GB disk partition with 4KB clusters (one way to provoke such fragmentation on purpose is sketched a little further below). In other words, the partition was 75% full:


Partition fragmentation

Partition parameters prior to defragmentation:

We used the GetSmart tool to transfer this partition sector by sector to the tested HDDs. Thus, we made sure the data was absolutely identical on each HDD, because the sector-by-sector transfer preserved the original structure of files within the partition.
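
As promised above, here is one simple way to provoke heavy fragmentation on purpose: append small chunks to many files in round-robin order, so the file system cannot give any one of them a long contiguous run. This minimal sketch uses an invented directory, file count and sizes, and the resulting degree of fragmentation depends on the file system’s allocator.

# Interleave appends across many files to scatter their clusters.
import os

TARGET_DIR = r"D:\fragtest"    # hypothetical directory on the test partition
FILES = 64                     # number of files grown in parallel
CHUNK = 64 * 1024              # 64KB appended per file per round
ROUNDS = 256                   # ~16MB per file, ~1GB in total

os.makedirs(TARGET_DIR, exist_ok=True)
handles = [open(os.path.join(TARGET_DIR, "f%03d.bin" % i), "ab")
           for i in range(FILES)]
chunk = os.urandom(CHUNK)
for _ in range(ROUNDS):
    for f in handles:
        f.write(chunk)
        f.flush()              # hand each chunk to the OS so allocations interleave
for f in handles:
    f.close()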

After that, the tested HDD was attached to a SATA port of the following computer:

Then, the following FC-Test script was launched on the PC:

Reboot
Pause 120
Comment beginning
System defrag d: -f
Comment end

According to this script, the PC reboots, waits 120 seconds for the OS to load up everything it needs, puts down a timestamp, launches the integrated Windows XP defragmenter in command-prompt mode, and puts down a second timestamp when the defragmenter finishes. Subtracting the first timestamp from the second gives us the duration of the defragmentation procedure on the tested HDD.
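
If you want to reproduce the measurement without FC-Test, a simple Python wrapper that times the very same command will do (the timing logic here is our own stand-in, not FC-Test’s):

# Time the Windows XP command-line defragmenter on drive D:.
import subprocess
import time

start = time.time()                                  # first timestamp
subprocess.run(["defrag", "d:", "-f"], check=True)   # same command as in the script
elapsed = time.time() - start                        # second timestamp minus the first
print("Defragmentation took %.0f seconds" % elapsed)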

The disk map looked much better after defragmentation:


After defragmentation

It’s impossible to achieve an ideal disk structure, but the file structure clearly improves greatly. Of course, an ordinary PC rarely gets into as horrible a state as our test partition, yet don’t forget to defragment your own HDD regularly, especially if you often install new software and your hard disk is over 50% full.

To check out the influence of NCQ technology on defragmentation speed, we performed our tests with AHCI support both enabled and disabled in the mainboard’s BIOS. We also tried to see the influence of HDD capacity on defragmentation speed (large-capacity HDDs should theoretically perform better: since the test partition has a fixed size, it occupies a narrower zone on their platters, so their heads have to travel a shorter distance). And finally, we checked out how the HDD’s quiet operation mode affects the speed of the defragmentation procedure.

Tested HDDs

We were interested to check out different HDDs, so we included a wide range of Seagate models, from simple 250GB ones to a 750GB monster. Server and workstation-oriented HDDs were represented by a Barracuda ES ST3500630NS. Samsung was represented by a three-platter 500GB model, and Hitachi had a 500GB model in this test, too. A 500GB 7H500F0 stood for the late Maxtor. Western Digital participated with its 500GB desktop HDDs as well as with new and old 10000rpm models.

You can compare the characteristics of the HDDs using the following table:

Defragmentation without NCQ

We’ll first show you the results of the HDDs without NCQ, i.e. with the AHCI mode disabled for the HDD in the mainboard’s BIOS. The numbers are shown in one diagram for better readability. Note that we measure the time it takes to defragment a 32GB partition; defragmenting larger partitions takes longer, of course.

As you can see, the new 10000rpm models from WD with increased areal density (WD740ADFD and WD1500ADFD) are the winners of this test. The Samsung HD501LJ has the best result among 7200rpm models and is closely followed by the Seagate ST3500630AS.

The worst result, rather surprisingly, comes from WD’s first 10000rpm model, the first-revision WD740GD. The high rotation speed didn’t help it much: it took about twice as long as the others. The Maxtor 7H500F0 has poor results due to its much lower areal density in comparison with its opponents. The WD5000AAKS and ST3250820AS aren’t good, either.

The results of the Seagate HDDs show that the number of platters does not have a big effect here: HDDs from the same series with a 16MB cache have almost the same speed irrespective of their capacity (250, 320, 400GB). But the other 250GB model is in the slow group due to its small cache size (8MB); among the 7200rpm models it is only ahead of the Maxtor. The server-oriented Barracuda ES ST3500630NS is slower than the ordinary desktop version. Is that the tradeoff for its increased reliability? On the other hand, comparing Western Digital’s 500GB models, the WD5000YS from the corporate RE2 class is as much as 5.5% faster than the desktop WD5000AAKS.

Defragmentation with NCQ

Next we tested each HDD once again, this time with NCQ enabled, i.e. with AHCI support enabled in the mainboard’s BIOS.

The overall picture didn’t change much, except for the Samsung drive’s sprint: with NCQ enabled it managed to overtake WD’s 10,000rpm models.

The following diagram combines the results of the HDDs in the two tests so that we could easily see which HDD profited (or lost) most from enabled NCQ.

This diagram looks discouraging.

NCQ doesn’t improve the HDDs’ performance much. Moreover, most of our HDDs performed worse with NCQ enabled than with it disabled. The few HDDs that profited at all from this technology are the Hitachi HDT725050VLA360, the Samsung HD501LJ and… the WD740GD, for which NCQ support is not even declared! The rest of the HDDs are slower by 10-30 seconds compared with their own results with NCQ disabled. This is very odd, because the technology should have shown its best under exactly such conditions, when data is copied within the same disk.

Perhaps the defragmenter we used does not generate disk requests that can be queued? Be that as it may, this defragmenter is the most popular one: an overwhelming majority of users prefer Windows’ integrated tools to third-party alternatives.

Quiet Mode on Hitachi HDDs

Additionally, we tested a Hitachi drive with its quiet seek mode enabled to see how its performance would differ from the standard operation mode. So, we copied the test partition to the HDT725050VLA360, changed the operation mode using IOMeter, and performed the test.

The speed reduction is quite clear, amounting to 5.8%, but it is not too big: even in quiet seek mode the Hitachi takes about the same 24 minutes that every Seagate HDD needs in its normal, not quiet, mode. Mark this if you want to keep your computer as quiet as possible.

Conclusion

Areal density and firmware algorithms are the main factors that determine how long it takes to defragment a hard disk. The amount of cache memory and the spindle rotation speed influence the speed of the process too, but to a lesser degree, since most products available today have the same amount of cache (16MB) and the same spindle speed (7200rpm). Considering current prices, there’s no sense in buying 8MB models unless you are looking for the cheapest HDD for an office computer or for a backup device, i.e. when the price factor is of primary importance.

The number of platters, to our surprise, has almost no effect on the defragmentation process.

The winners of today’s test are the 500GB Samsung HD501LJ and Seagate ST3500630AS, as well as the representatives of the more serious and speedy category, Western Digital’s WD740ADFD and WD1500ADFD.

We hope Seagate will make its other drives as fast as the leader among them, while Western Digital should take note that some of its drives are among the outsiders when it comes to defragmentation.

To our surprise, the defragmentation process took longer with NCQ enabled, even if not by much. We are going to return to this problem later on, but we’d like to ask for your opinion: which defragmenter should we use next? Please tell us in our forum.

If you want to keep your home computer quiet, take a look at Hitachi’s HDDs. You can reduce their noise greatly by sacrificing some 5% of performance.