Let’s open the log file to make sure that our suppositions are correct. What do we see there?
HD Tach version 2.70
Drive: PhysicalDrive1 74.3gb
Access time: 7.5ms
CPU utilization: 2.1%
64 zones to be tested (1123120kb zones).
Look at the very last line: across the entire HDD surface, the new benchmark version actually tested the read and write speeds in only 64 spots! Isn’t that too few for a proper test? Let’s check how the previous benchmark version handled the same drive.
HD Tach version 2.61
Drive: WDC WD740GD-00FLX0 20.0
Access time: 7.7ms
CPU utilization: 0.0%
1107 zones to be tested (65536kb zones).
It turns out that HDTach 2.61 measured the read and write speeds in 1107 zones, i.e. about 17 times more thoroughly. Besides, it recognized the HDD model correctly. :)
So how do the old and the new HDTach versions measure the average access time, burst read speed, and read and write speeds? Can we actually trust the new benchmark version?
To check this out I ran the old and the new HDTach benchmark versions on the same testbed and saved the log files. The results turned out to be simply shocking!
It turned out that to measure the average random access time the test sends only 256 requests to the hard disk drive. Moreover, the number of requests doesn’t depend on the storage capacity of the HDD tested. The only point I could find in the new test’s favor is that the 256-request sample is generated based on the info about the hard disk drive’s storage capacity. Anyway, according to IBM’s specification, you need at least 4096 requests with random addresses to measure the AAT correctly, which is 16 times more than what we saw here.
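The procedure, as far as the log reveals it, can be sketched roughly as follows. This is a hypothetical Python reconstruction, not HDTach’s actual code: the `measure_access_time` name, the 512-byte sector size, and the timing details are all assumptions; only the 256-request count (and the 4096 minimum from the specification) comes from the text.

```python
import io
import random
import time

SECTOR = 512  # bytes; typical sector size (an assumption, not from the log)

def measure_access_time(dev, capacity_sectors, samples=256):
    """Average latency (ms) of `samples` single-sector reads at random LBAs.

    HDTach sends only 256 such requests regardless of drive capacity;
    per the IBM specification quoted in the text, at least 4096 would
    be needed for a correct measurement.
    """
    lbas = [random.randrange(capacity_sectors) for _ in range(samples)]
    start = time.perf_counter()
    for lba in lbas:
        dev.seek(lba * SECTOR)
        dev.read(SECTOR)
    elapsed = time.perf_counter() - start
    return elapsed / samples * 1000.0

# Demo on an in-memory "drive" (a real run would open the raw device):
fake_drive = io.BytesIO(bytes(2048 * SECTOR))
print("avg access time: %.4f ms" % measure_access_time(fake_drive, 2048))
```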
To measure the Burst Read Speed, the test uses the following algorithm:
Note that the entire procedure involves only 35 iterations, and it is the same in both benchmark versions. I would also like to point out that the Burst Read Speed is measured with requests of a single size, 128 sectors (the 4-sector request at the beginning is just a “warm-up” thing, I suppose).
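Based on that description, the burst-speed loop might look roughly like this. Again a hypothetical Python sketch: the iteration count and request sizes come from the text, while the re-reading of the same cached area, the function name, and the “fastest read wins” rule are assumptions.

```python
import io
import time

SECTOR = 512  # bytes (assumption)

def measure_burst_speed(dev, iterations=35):
    """Burst read speed in MB/s.

    One 4-sector "warm-up" read, then `iterations` reads of
    128 sectors each from the beginning of the drive (presumably so
    the data comes from the drive's cache); the fastest read wins.
    """
    dev.seek(0)
    dev.read(4 * SECTOR)                  # warm-up request
    best = 0.0
    for _ in range(iterations):
        dev.seek(0)                       # re-read the same cached area
        t0 = time.perf_counter()
        data = dev.read(128 * SECTOR)
        dt = time.perf_counter() - t0
        if dt > 0:
            best = max(best, len(data) / dt / 1e6)
    return best

# Demo on an in-memory "drive":
fake_drive = io.BytesIO(bytes(128 * SECTOR))
print("burst speed: %.1f MB/s" % measure_burst_speed(fake_drive))
```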
If we take another glance at the screenshot of the HDTach 2.7 test, we will see that the Burst Read Speed scale again failed to display the performance of our “wild horse”. :)
The read speed measurements include the following steps:
So, during this test the drive receives 33 requests to read 32KB data blocks at sequential addresses; in other words, the total amount of data requested is slightly over 1MB. Then the test shifts a few sectors ahead along the HDD address space and repeats the reading of 33 32KB blocks.
You can clearly see that the only difference between the speed-measurement routines of the two benchmark versions is the size of this “shift” in sectors (marked in bold), by which the test moves from a zone with completed measurements to the next one. The old benchmark version shifts along the HDD address space by 64KB, while the new version jumps 1GB ahead. Well, HDTach 2.7 is really a fast runner. :)
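Putting the reading loop and the shift together, the whole zone-by-zone measurement could be sketched like this. A hypothetical Python reconstruction: the 33 blocks of 32KB and the 64KB vs. 1GB shift values come from the text; the function name and whether the shift is counted from the end of the zone are assumptions.

```python
import io
import time

KB = 1024

def measure_zone_speed(dev, zone_start, shift_bytes, blocks=33, block_size=32 * KB):
    """Read `blocks` sequential blocks of `block_size` from `zone_start`;
    return (speed in MB/s, offset of the next zone to measure).

    Per the text: 33 x 32KB reads (just over 1MB), then shift ahead by
    `shift_bytes` -- 64KB for HDTach 2.61, 1GB for HDTach 2.70.
    Counting the shift from the end of the zone is an assumption.
    """
    dev.seek(zone_start)
    t0 = time.perf_counter()
    for _ in range(blocks):
        dev.read(block_size)
    dt = time.perf_counter() - t0
    speed = blocks * block_size / dt / 1e6 if dt > 0 else 0.0
    return speed, zone_start + blocks * block_size + shift_bytes

# Demo on an in-memory "drive", using the old version's 64KB shift:
fake_drive = io.BytesIO(bytes(2 * 1024 * KB))
speed, next_zone = measure_zone_speed(fake_drive, 0, 64 * KB)
print("zone speed: %.1f MB/s, next zone at %d" % (speed, next_zone))
```

With a 1GB shift on a 74.3GB drive, this loop visits only a few dozen zones, which matches the 64 zones reported in the HDTach 2.70 log above.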