by Aleksey Meyev
08/09/2007 | 01:01 PM
In our previous article, devoted to testing hard disk drives with the defragmentation tool built into Windows XP, we promised to return to this topic. This time we are going to check out defragmentation using a third-party application: we are going to compare hard disk drives in PerfectDisk 8.0 from Raxco Software.
Before we start, let’s recall what file fragmentation actually is and why we try to avoid it. When saved to the hard disk, files are not always written into contiguous clusters. More often than not, a file occupies several runs of adjacent clusters in different parts of the platter. This happens when a file stored on the hard disk grows as you work on it, or when a large file is written to an almost full disk that has no single run of free adjacent clusters big enough to hold it. As a consequence, reading such a file takes longer, since the HDD has to move its heads farther to collect the file fragments from the platters. So, the more fragmented your files are, the slower your PC works. It is file fragmentation that affects how fast new game levels load and how quickly heavy applications start.
Programs called defragmenters are your main weapon against file fragmentation. All of them build a map of how files are distributed over clusters; then each program uses its own algorithm to move fragmented files into free disk space so that every file ends up occupying adjacent clusters. Such programs differ in their special “skills” and features: the ability to defragment service files of the operating system (the file allocation table, the paging file, the hibernation file), performance, resource consumption, the ability to work with several storage devices at a time, and defragmentation management options.
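To make the idea concrete, here is a toy sketch of what a defragmenter does conceptually: count how many contiguous runs a file occupies, then relocate every file into a single run. This is purely illustrative and bears no relation to PerfectDisk’s actual (far more sophisticated) algorithms, which operate on real NTFS structures.

```python
# Toy model of file fragmentation and defragmentation.
# Files are represented as lists of cluster indices on an abstract disk.

def fragment_count(file_clusters):
    """Number of contiguous runs a file occupies (1 = not fragmented)."""
    runs = 1
    for prev, cur in zip(file_clusters, file_clusters[1:]):
        if cur != prev + 1:
            runs += 1
    return runs

def defragment(files, disk_size):
    """Relocate every file into one contiguous run, packed from cluster 0."""
    defragged = {}
    next_free = 0
    for name, clusters in files.items():
        length = len(clusters)
        defragged[name] = list(range(next_free, next_free + length))
        next_free += length
    assert next_free <= disk_size, "files do not fit on the disk"
    return defragged

# A file scattered over three separate runs of clusters:
files = {"report.doc": [0, 1, 7, 8, 9, 15]}
print(fragment_count(files["report.doc"]))   # 3 fragments
files = defragment(files, disk_size=32)
print(fragment_count(files["report.doc"]))   # 1 fragment
```

A real defragmenter must of course also avoid overwriting other files’ clusters while moving data; the sketch sidesteps that by packing into a clean layout.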
The PerfectDisk 8.0 program that we picked for today’s tests, following our readers’ recommendations, can be considered one of the most advanced tools of its kind. Besides defragmenting the service files mentioned above, it can also defragment NTFS metadata, consolidates all the free space on the hard disk drive into the largest possible blocks of adjacent clusters, supports fully functional management from the command prompt, and requires only 5% of the hard disk drive’s space to be free in order to operate. The latter feature sets it apart from the defragmentation tool built into Windows XP, for instance, which requires at least 15% of the drive’s storage capacity to be free for proper operation. This will definitely matter to users who are forced to fill up almost the entire capacity of their hard disk drives.
To estimate how long defragmentation takes, we prepared the hard disk drives in exactly the same way as for our previous article: we created a heavily fragmented file structure with a total size of 23.83GB on a 32GB disk partition with a 4KB cluster size. In other words, about 75% of the partition’s capacity was filled.
Partition parameters prior to defragmentation:
We used the GetSmart tool to transfer this partition sector by sector to the tested HDDs. This way we made sure the data was absolutely identical on each HDD, because the sector-by-sector transfer preserved the original layout of files within the partition.
After that, the tested HDD was attached to a SATA port of the following computer:
Then, the following FC-Test script was launched on the PC:
System C:\Program Files\RAXCO\Perfect Disk\PDCmd.exe D: /sp /w
According to this script, the PC records a timestamp, starts defragmentation by running PerfectDisk from the command prompt, and then records a second timestamp when the defragmentation application completes its work. By subtracting the first timestamp from the second we obtain the duration of the defragmentation procedure on the tested HDD.
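The same timing logic can be sketched in Python as a hypothetical stand-in for the FC-Test script. The PDCmd.exe path and switches are those quoted above; `time.monotonic()` plays the role of the two timestamps:

```python
import subprocess
import time

def time_defragmentation(cmd):
    """Run a defragmenter command and return its wall-clock duration in seconds."""
    start = time.monotonic()            # first timestamp
    subprocess.run(cmd, check=True)     # blocks until the defragmenter exits
    return time.monotonic() - start     # second timestamp minus the first

# The command line quoted in the article (PerfectDisk in command-prompt mode):
# elapsed = time_defragmentation(
#     [r"C:\Program Files\RAXCO\Perfect Disk\PDCmd.exe", "D:", "/sp", "/w"])
# print(f"Defragmentation took {elapsed:.0f} seconds")
```

`check=True` makes the measurement abort with an exception if the defragmenter exits with an error, so a failed run cannot be mistaken for a fast one.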
The disk map of files distribution over clusters looked almost ideal after defragmentation:
There are no fragmented files left at all, and the entire free space consists of adjacent sectors, which demonstrates a definite advantage of PerfectDisk over the defragmentation tool built into Windows XP. However, our goal today is not to compare the results of different defragmentation tools, but to compare defragmentation efficiency on different hard disk drives. So let’s not go too deep into these results: knowing that PerfectDisk uses different defragmentation algorithms is quite enough for today’s tests.
I would also like to say a few words about the repeatability of the test results. We performed 10 measurements on one of the tested hard disk drives, the Samsung HD501LJ (without NCQ):
As you can see, the spread of the measurements, i.e. the difference between the minimum and maximum values, is quite small: you really have to look closely to notice it on the diagram above. In numbers it equals 13 seconds, which is less than 1% of the measured time. We believe these results are quite trustworthy.
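As a sanity check, the relative spread can be computed like this. The ten run times below are invented for illustration; only the 13-second spread matches the figure quoted above:

```python
def spread(times):
    """Return (absolute spread, spread relative to the mean) of a set of runs."""
    delta = max(times) - min(times)
    mean = sum(times) / len(times)
    return delta, delta / mean

# Ten hypothetical runs clustered around ~1400 s with a 13 s spread:
runs = [1400, 1403, 1395, 1401, 1398, 1404, 1396, 1402, 1399, 1408]
delta, relative = spread(runs)
print(delta, f"{relative:.2%}")   # 13 s, which is under 1% of the mean
```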
Most of today’s testing participants are the same hard disk drives we used before: a wide range of Seagate models and a few representatives from other 3.5” HDD makers. Only the oldest models were eliminated from this test session: the Western Digital WD740GD and Seagate ST3500641AS, whose performance is too low for them to compete successfully with the rest of the field. However, we added a few large-capacity hard disk drives from Western Digital and Hitachi, as well as a few new Seagate models.
You can compare the characteristics of the HDDs using the following table:
We’ll first show you the results of the HDDs without NCQ, i.e. with the AHCI mode disabled for the HDD in the mainboard’s BIOS. The numbers are collected in one diagram for better readability. Note that we measure the time of defragmenting a 32GB partition on all HDDs; if you defragment the full capacity of a drive, the time will grow roughly in proportion to its size.
The situation has changed dramatically. The WD1500ADFD, which used to be the defragmentation leader when we tested with the built-in Windows XP tool, dropped to fourth place; even its 10,000rpm spindle rotation speed didn’t help. The winner’s laurels went to the 750GB newcomer from Hitachi. Second prize was won by the Samsung HD501LJ, and the proud third place belongs to the 500GB Seagate hard drive from the 7200.10 series. The very last to finish were the Maxtor drives, which suffered from much lower per-platter data density, and the WD5000AAKS. Note that the result of the latter is much worse than that of the other Western Digital hard disk drives with 7,200rpm spindle rotation speed. For example, the 400GB WD4000AAKS, which boasts a newer firmware version than its 500GB counterpart, finished almost 375 seconds faster. The server version of the 500GB model, the WD5000ABYS, also turned out faster, although just a little bit. By the way, it completed defragmentation over 1.5 minutes sooner than its predecessor, the WD5000YS, which has lower per-platter data density and as a result one platter more.
It is interesting that in the Seagate camp the server ST3500630NS model lost over 40 seconds to the “regular” model of the same generation, the ST3500630AS.
Now let’s take a look at the Seagate hard drives with identical data density and identical storage capacity but different cache buffer sizes. The dual-platter ST3250620AS with a 16MB buffer is 206 seconds faster than the ST3250820AS, whose buffer is half that size. The same goes for the new single-platter 250GB models: the ST3250410AS with a 16MB cache outperformed the ST3250310AS with an 8MB cache by 187 seconds.
Let’s take another look at the same four Seagate hard drives, but this time comparing pairs with identical cache buffers that differ in per-platter data density. The newer single-platter ST3250410AS and its younger brother, the ST3250310AS, are far ahead of their older dual-platter relatives, the ST3250620AS and ST3250820AS. The defragmentation time difference equaled 110 seconds for the models with a 16MB cache and 129 seconds for the models with an 8MB cache.
So, what is actually better: higher data density or a larger cache? The answer depends on the type of tasks the HDD performs most of the time, but if we try to draw a conclusion from this test, the larger cache wins. If we compare the results of the Seagate ST3250310AS with higher data density against the ST3250620AS with lower data density but twice as big a cache, we see that the latter completes the test 73 seconds faster. This was a pretty easy guess, though, because this type of test involves no streaming file copy operations, where higher data density could be the key to success. A larger cache buffer, on the other hand, helps a lot with distributing file fragments efficiently over the clusters.
I would like to draw your attention to the fact that in all the comparisons we have just discussed, the difference is much larger than the measurement spread.
Next we tested each HDD once again, this time with NCQ enabled, i.e. with AHCI support enabled in the mainboard BIOS. I would like to point out specifically that we don’t claim NCQ support should affect defragmentation speed: everything depends on how the defragmentation algorithms are implemented. However, enabling NCQ may affect the results (sometimes quite significantly), so we definitely had to test the hard disk drives with this option enabled as well.
You can see very clearly that the order of the testing participants on the chart has changed. Just as in the tests with the built-in Windows XP defragmentation tool, the Samsung HD501LJ became the winner, ousting the Hitachi HDD from the leading position. The 500GB Seagate drives that used to be in third and fifth place now require much more time and have moved down towards the middle of the list. The outsiders are the same again: the two old Maxtor drives and the WD5000AAKS.
In general, the conclusions we drew in the previous article remain valid here, although the overall results chart has been shuffled a little: the HDDs seem to have swapped positions within their families.
Let’s introduce an auxiliary Delta parameter that equals the difference between the defragmentation time obtained without AHCI activation (with NCQ disabled) and with it (with NCQ enabled):
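Expressed in code, the definition is simply the subtraction of the two timings. The drive names and times below are illustrative placeholders, not the article’s measurements:

```python
# Delta = time without NCQ (AHCI off) minus time with NCQ (AHCI on).
# A positive Delta means the drive defragmented faster with NCQ enabled;
# a negative Delta means NCQ slowed it down.

def delta(time_without_ncq, time_with_ncq):
    return time_without_ncq - time_with_ncq

# Hypothetical sample figures, in seconds:
times = {
    "Drive A": (1450, 1407),   # faster with NCQ: Delta = +43
    "Drive B": (1300, 1540),   # slower with NCQ: Delta = -240
}
for name, (without_ncq, with_ncq) in times.items():
    print(name, delta(without_ncq, with_ncq))
```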
The results are pretty ambiguous. On the one hand, the difference between defragmentation times with and without NCQ in PerfectDisk is small for most hard disk drives: it is about the same half a minute that we saw in the previous test session. On the other hand, some drives go so far beyond these limits that the measurement spread can no longer explain it.
First, let’s talk about the drives where Delta is positive, i.e. defragmentation with AHCI enabled went faster than without it. The Hitachi HDT725050VLA360 improved its result quite significantly, by more than 3 minutes, which pushed it about one third of the way up the list. All three new leaders also performed very well, especially the Samsung HD501LJ, which completed the test 43 seconds faster.
Now a few words about the testing participants that were hurt by enabled AHCI. The Seagate ST3500630AS and ST3500630NS are the first on this list: the "regular" and server modifications of these 500GB HDDs performed 4 and 2 minutes slower, respectively. Strange as it might seem, the second Hitachi HDD, the HDS721075KLA330, also slowed down: its result is 52 seconds longer.
So, what conclusions can we draw from the results of HDD defragmentation in PerfectDisk? As we have seen, hard disk drive performance in this type of work depends not only on the size of the cache buffer (16MB on most models these days) and the spindle rotation speed (7,200rpm for the majority of HDDs), but also on the per-platter data density and on the algorithms in the firmware of each particular drive. Unfortunately, you can only learn something about HDD firmware through testing. Even the per-platter data density may be pretty hard to track down for a mainstream user who doesn’t follow the news from the storage market and doesn’t know the model-number abbreviations very well.
Summing up the test results, I would like to once again mention the hard drives that performed best of all. According to the results of the two test sessions, the best ones are the Samsung HD501LJ, Hitachi HDS721075KLA330 and Western Digital WD1500ADFD. We would also like to specifically point out the new Seagate hard drives with higher per-platter data density: the ST3250410AS with its 16MB cache buffer performed very well. However, we are going to cover these drives in more detail in one of our upcoming articles.
Unfortunately, Maxtor solutions demonstrated pretty weak results (this brand still exists, but now belongs to Seagate). Even the excellent algorithms we discussed in previous articles cannot make up for the low data density. I also have to point out the very low results of the Western Digital WD5000AAKS and Hitachi HDT725050VLA360. I would strongly recommend that the manufacturers bring all their drives up to the mark, especially since they have proved they can do it: look at the WD4000AAKS and HDS721075KLA330.
We were slightly puzzled by the results of our comparison of defragmentation times with and without AHCI support. Of course, we can observe some logic there: similar hard drives from the same maker react identically to the AHCI mode being enabled or disabled. However, we still couldn’t explain the significant difference between the two Hitachi solutions, one of which improves its result by 3 minutes while the other works about a minute longer. Perhaps the reason lies in how the AHCI drivers interact with the firmware of particular models.
All in all, we were very pleased with the results of this test session, and from now on PerfectDisk will be our standard tool for defragmentation time tests of all HDDs in our lab.