HDTach 2.7: New Essence or New Exterior?

This short article is devoted to a new version of a well-known HDD benchmarking utility: HDTach 2.7, which was recently released. Have all the bugs we discovered in the previous version of the test been eliminated? Find out whether the new HDTach is accurate enough to be used for testing purposes!

by Nikita Nikolaichev
01/28/2004 | 11:46 PM

Although I decided to remove the HDTach benchmark from the list of tests we use for hard disk drives (the details behind this decision can be found in the following articles: WD Raptor: First ATA Hard Disk Drive with 10,000rpm Speed and Seagate Barracuda Serial ATA V Hard Disk Drive Review), we will have to deal with this test once again. So today we return to the question of whether HDTach is suitable for HDD testing or not.


One day announcements of a new HDTach 2.7 appeared all over the Web. Curiously, the new version was released by the well-known Simpli Software Inc. rather than by TCDLabs, which has always been known as the developer of HDTach. The tcdlabs.com domain now leads to simplisoftware.com, so I dare suppose there is some relationship between the TCDLabs and Simpli Software companies. Unfortunately, the simplisoftware.com site doesn't offer any explanation for this.

Either way, there is a curious new program to look at: HDTach 2.7!

Closer Look at HDTach 2.7

Right under the huge “Download Me” button we can find a brief description of the test:

"HD Tach is a low level hardware benchmark for random access read/write storage devices such as hard drives, removable drives (ZIP/JAZZ), flash devices, and RAID arrays. HD Tach uses custom device drivers and other low level Windows interfaces to bypass as many layers of software as possible and get as close to the physical performance of the device possible.

Wow, this is a truly universal benchmark: it is even suitable for RAID arrays!

OK, I will not keep you waiting for long; here is a screenshot of the working window:

HDTach 2.7


Hm, it looks just like the good old HDTach 2.61, doesn’t it?

HDTach 2.61


Well, I should say that the top picture looks nicer to me. But when I tell you that it took the new HDTach version only about 20 seconds to work through a 74GB hard disk drive, you will probably agree that the HDTach 2.7 screenshot deserves a closer look.

If I am not mistaken, the new version performed far fewer iterations during the read and write speed measurements than the previous version 2.61 did.

Let's open the log file to check whether this supposition is correct. What do we see there?

HD Tach version 2.70
Drive: PhysicalDrive1 74.3gb
Access time: 7.5ms
CPU utilization: 2.1%
64 zones to be tested (1123120kb zones).

Look at the very last line: across the entire HDD surface, the new benchmark version actually tested the read and write speed in only 64 spots! Isn't that too few for a test? Let's see how the previous benchmark version handled the same drive.

HD Tach version 2.61
Drive: WDC WD740GD-00FLX0 20.0
Access time: 7.7ms
CPU utilization: 0.0%
1107 zones to be tested (65536kb zones).

It turns out that HDTach 2.61 measured the read and write speeds about 17 times more thoroughly. Besides, it recognized the HDD model correctly. :)

So how do the old and the new HDTach versions measure the average access time, burst read speed, and read and write speeds? Can we actually trust the new benchmark version?

To check this out I ran the old and the new HDTach versions on the same testbed and saved the log files. The results turned out to be simply shocking!

It turned out that to measure the average random access time, the test sends only 256 requests to the hard disk drive. Moreover, the number of requests doesn't depend on the capacity of the HDD being tested. The only thing that speaks in the new test's favor is that the 256-request sample is generated based on the drive's actual storage capacity. Either way, according to IBM's specification you need at least 4096 requests with random addresses to measure the average access time correctly, which is 16 times more than the 256 we saw here.
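To make the idea of such a measurement more tangible, here is a minimal sketch in C using the Win32 API. This is not HDTach's actual code: the drive name (\\.\PhysicalDrive1), the 512-byte sector size and the way random addresses are picked are my assumptions; I simply use IBM's recommended 4096 requests instead of HDTach's 256. Opening a physical drive this way requires administrator rights.

/* avg_access_time.c - a hedged sketch of an average access time test */
#include <windows.h>
#include <winioctl.h>
#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

#define SECTOR_SIZE  512
#define NUM_REQUESTS 4096      /* IBM's recommended minimum; HDTach 2.7 uses only 256 */

int main(void)
{
    /* open the raw drive, bypassing the file system cache */
    HANDLE drive = CreateFileA("\\\\.\\PhysicalDrive1", GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                               OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
    if (drive == INVALID_HANDLE_VALUE)
        return 1;

    /* query the drive capacity so the random addresses span the whole surface */
    BYTE geom_buf[512];
    DISK_GEOMETRY_EX *geom = (DISK_GEOMETRY_EX *)geom_buf;
    DWORD bytes;
    DeviceIoControl(drive, IOCTL_DISK_GET_DRIVE_GEOMETRY_EX,
                    NULL, 0, geom_buf, sizeof(geom_buf), &bytes, NULL);
    LONGLONG sectors = geom->DiskSize.QuadPart / SECTOR_SIZE;

    BYTE *buf = (BYTE *)_aligned_malloc(SECTOR_SIZE, SECTOR_SIZE);
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);
    srand(GetTickCount());
    QueryPerformanceCounter(&start);

    for (int i = 0; i < NUM_REQUESTS; i++) {
        /* crude random sector pick across the drive's address space */
        LARGE_INTEGER pos;
        pos.QuadPart = (((LONGLONG)rand() * RAND_MAX + rand()) % sectors) * SECTOR_SIZE;
        SetFilePointerEx(drive, pos, NULL, FILE_BEGIN);
        ReadFile(drive, buf, SECTOR_SIZE, &bytes, NULL);
    }

    QueryPerformanceCounter(&end);
    double total_ms = (end.QuadPart - start.QuadPart) * 1000.0 / (double)freq.QuadPart;
    printf("Average access time: %.2f ms over %d requests\n",
           total_ms / NUM_REQUESTS, NUM_REQUESTS);

    _aligned_free(buf);
    CloseHandle(drive);
    return 0;
}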

The Burst Read Speed is measured using the following algorithm:

Note that the entire procedure involves only 35 iterations, and, tellingly, the procedure is the same in both benchmark versions. I would also like to point out that the Burst Read Speed is measured with requests of a single size: 128 sectors (the 4-sector request at the beginning is just a “warm-up”, I suppose). A rough sketch of such a procedure is given below.
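Here is what a burst-speed routine of this kind might look like, again in C with the Win32 API. The trace only tells us about the 4-sector warm-up and the 35 reads of 128 sectors; that the same area is re-read each time (so the data comes from the drive's own cache and the interface transfer rate is what gets timed) is my assumption about how a burst figure is normally obtained. The drive handle is assumed to be opened with FILE_FLAG_NO_BUFFERING, as in the previous sketch.

#include <windows.h>
#include <malloc.h>

#define SECTOR_SIZE 512

/* A hedged sketch of a burst-read measurement: one 4-sector "warm-up" read,
   then 35 timed reads of 128 sectors from the start of the drive.
   Returns the measured transfer rate in MB/s. */
double measure_burst(HANDLE drive)
{
    BYTE *buf = (BYTE *)_aligned_malloc(128 * SECTOR_SIZE, 4096);
    DWORD bytes;
    LARGE_INTEGER zero, freq, start, end;
    zero.QuadPart = 0;

    /* warm-up: a single 4-sector read */
    SetFilePointerEx(drive, zero, NULL, FILE_BEGIN);
    ReadFile(drive, buf, 4 * SECTOR_SIZE, &bytes, NULL);

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    for (int i = 0; i < 35; i++) {
        /* re-read the same 128 sectors so the drive serves them from its cache */
        SetFilePointerEx(drive, zero, NULL, FILE_BEGIN);
        ReadFile(drive, buf, 128 * SECTOR_SIZE, &bytes, NULL);
    }
    QueryPerformanceCounter(&end);

    double seconds = (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    _aligned_free(buf);
    return (35.0 * 128 * SECTOR_SIZE) / (1024.0 * 1024.0) / seconds;
}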

If we take another glance at the HDTach 2.7 screenshot, we will see that the Burst Read Speed scale once again failed to accommodate the performance of our “wild horse”. :)

The read speed measurements include the following steps:

So, during this test 33 requests to read 32KB data blocks at sequentially increasing addresses are sent to the drive; in other words, the total amount of data requested is slightly above 1MB. Then the test shifts ahead along the HDD address space and repeats the reading of 33 32KB blocks.

You can clearly see that the only difference between the speed-measuring routines of the two benchmark versions is the size of this “shift” (marked in bold in the trace), i.e. how far the test moves from a zone where measurements are complete to the next one. The old benchmark version shifts along the HDD address space by 64KB, while the new version jumps 1GB ahead. Well, HDTach 2.7 is really a fast runner. :)
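To make the procedure easier to follow, here is a sketch of how such a pack-by-pack read measurement could be coded (C, Win32 API). The 33 requests of 32KB and the two shift values are taken from the description above; how exactly HDTach applies the shift and averages the speeds is an assumption on my part, and the drive handle is again assumed to be opened with FILE_FLAG_NO_BUFFERING as in the first sketch.

#include <windows.h>
#include <malloc.h>

#define BLOCK_SIZE      (32 * 1024)   /* 32KB per read request, as in the trace */
#define BLOCKS_PER_PACK 33            /* 33 requests, slightly above 1MB per pack */

/* Sequential read speed measured pack by pack: read 33 x 32KB blocks,
   then skip step_bytes ahead and repeat. Per the article, step_bytes is
   the only difference between the versions: 64KB in HDTach 2.61 versus
   1GB in HDTach 2.7. Returns the average speed in MB/s. */
double measure_read_speed(HANDLE drive, LONGLONG drive_size, LONGLONG step_bytes)
{
    BYTE *buf = (BYTE *)_aligned_malloc(BLOCK_SIZE, 4096);
    DWORD bytes;
    LONGLONG offset = 0, total_read = 0;
    LARGE_INTEGER freq, start, end, pos;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);

    while (offset + (LONGLONG)BLOCKS_PER_PACK * BLOCK_SIZE <= drive_size) {
        pos.QuadPart = offset;
        SetFilePointerEx(drive, pos, NULL, FILE_BEGIN);

        /* 33 back-to-back reads at sequentially increasing addresses */
        for (int i = 0; i < BLOCKS_PER_PACK; i++) {
            ReadFile(drive, buf, BLOCK_SIZE, &bytes, NULL);
            total_read += bytes;
        }

        /* jump ahead before the next pack: this step is what differs
           between 2.61 (64KB) and 2.7 (1GB) */
        offset += (LONGLONG)BLOCKS_PER_PACK * BLOCK_SIZE + step_bytes;
    }

    QueryPerformanceCounter(&end);
    double seconds = (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    _aligned_free(buf);
    return (double)total_read / (1024.0 * 1024.0) / seconds;
}

With a 64KB step the loop samples far more spots than with a 1GB step, which is exactly why the new version can finish a 74GB drive in some 20 seconds.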

Now let’s check what’s going on in HDTach 2.61 and HDTach 2.7 during write speed measurements:

We see almost the same thing as during reading. Both benchmark versions send a pack of 33 requests of 32KB each with sequentially increasing addresses and finish the pack with a single read request. This final read request is probably meant to make the HDD execute all the preceding write requests, i.e. perform the lazy writes immediately.

If this supposition is correct, then the write speed measured this way is not a pure measurement of that parameter, because, as we have just seen, there is a read request mixed into the queue.

I wonder why they chose a single read request as the way to push all the writes through. The cached writes would sooner or later be pushed out by new incoming write requests anyway. Besides, there is the good old Flush Cache command, which could also be of some help here, I suppose… :)
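Just to illustrate the point, here is what one such write “pack” could look like in C with the Win32 API, with the Flush Cache alternative shown as a comment. The layout of the pack follows the trace description above; everything else (the buffer contents, the address of the trailing read, the function name) is assumed. Needless to say, writing to a raw physical drive destroys data, so treat this strictly as an illustration.

#include <windows.h>
#include <malloc.h>
#include <string.h>

#define BLOCK_SIZE      (32 * 1024)
#define WRITES_PER_PACK 33

/* One write "pack" as the trace describes it: 33 sequential 32KB writes
   followed by a single read, which presumably forces the drive to commit
   its lazily cached writes. WARNING: this overwrites data on the drive. */
void write_pack(HANDLE drive, LONGLONG offset)
{
    BYTE *buf = (BYTE *)_aligned_malloc(BLOCK_SIZE, 4096);
    DWORD bytes;
    LARGE_INTEGER pos;
    memset(buf, 0xAA, BLOCK_SIZE);          /* arbitrary test pattern */

    pos.QuadPart = offset;
    SetFilePointerEx(drive, pos, NULL, FILE_BEGIN);
    for (int i = 0; i < WRITES_PER_PACK; i++)
        WriteFile(drive, buf, BLOCK_SIZE, &bytes, NULL);

    /* the single trailing read (here aimed at the start of the pack -
       an assumption, since the trace does not say where it points) */
    SetFilePointerEx(drive, pos, NULL, FILE_BEGIN);
    ReadFile(drive, buf, BLOCK_SIZE, &bytes, NULL);

    /* the good old Flush Cache command the article mentions would be
       issued through the Win32 wrapper instead: */
    /* FlushFileBuffers(drive); */

    _aligned_free(buf);
}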

Well, it’s high time we made some conclusions.

Conclusion

Well, the “autopsy” revealed that the test algorithms in HDTach 2.7, which were already inadequate for contemporary controllers and hard disk drives, have remained unchanged since the times of HDTach 2.61. In fact, the new version of the benchmark differs from the old one only in its more polished interface and much shorter run time. Unfortunately, the precision of the read and write speed measurements has been sacrificed for the sake of faster testing, though I wouldn't claim that the previous HDTach 2.61 was particularly precise here either.

So I don't think it makes any sense to use HDTach 2.7 in its current state for hard disk drive testing. As for flash drives, it is worth using only if we fail to find anything better.

Now all we can do is wait patiently for the completely reworked HDTach version the developer promised a while ago (see this link for details) and hope that they will really take care to implement a proper HDD testing algorithm.