Let’s do some math, recall our own test experience, and correlate the two:
- 60GXP vs. 75GXP: a one-third increase in capacity came with a 20-percent growth in the number (and density) of tracks. The number of SIDs grew by only 11 percent, and the number of sectors per track grew by the same 11 percent. As a result, the newer model was only slightly faster than its predecessor.
- 120GXP vs. 60GXP: the capacity doubled, while the track density grew by 66 percent. The number of SIDs also increased by 66 percent and, as we remember, the Vancouver was faster than the Ericsson.
- 180GXP vs. 120GXP: the capacity grew by half and the track density by 27 percent, but the number of SIDs rose by only 9 percent. You may remember the outcome of this imbalance between the three values: the 180GXP was slower than its predecessor in many applications.
- 7K250 vs. 180GXP: the track density increase accounts for the biggest share of the total 40-percent capacity growth. The 30-percent increase in the number of tracks is accompanied by a 29-percent increase in the number of SIDs. That’s why we can expect the 7K250 to be perceptibly faster. Well, we’ll soon see whether that’s really so.
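The back-of-the-envelope comparisons above can be sketched in code. The helper below is a hypothetical one: it assumes capacity is roughly the number of tracks times the number of sectors per track, and encodes the rule of thumb used in this article that the SID count should keep pace with the track density; the function name and the 0.95 tolerance are our own inventions for illustration.

```python
def compare_generations(capacity_growth, track_growth, sid_growth):
    """Estimate per-track changes between two drive generations.

    All arguments are relative growth factors (1.33 means +33%).
    Assumes capacity ~= tracks * sectors_per_track; this is a
    simplification for illustration, not a drive-geometry model.
    """
    # At a constant spindle speed, more sectors per track means a
    # proportionally higher linear read/write speed.
    sector_growth = capacity_growth / track_growth
    # Rule of thumb from the article: SID count should keep pace
    # with track density, or head positioning falls behind.
    balanced = sid_growth >= 0.95 * track_growth
    return sector_growth, balanced

# 7K250 vs. 180GXP: capacity +40%, tracks +30%, SIDs +29%
growth, ok = compare_generations(1.40, 1.30, 1.29)
print(f"sectors per track: +{growth - 1:.0%}, SIDs keep pace: {ok}")

# 180GXP vs. 120GXP: capacity +50%, tracks +27%, SIDs +9%
growth, ok = compare_generations(1.50, 1.27, 1.09)
print(f"sectors per track: +{growth - 1:.0%}, SIDs keep pace: {ok}")
```

Run against the four generation pairs above, the helper flags the 180GXP transition as the unbalanced one and the 7K250 transition as balanced, matching the observed results.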
But why don’t the manufacturers put as many of those important servo identifiers on the platter as possible, with some reserve? Because each “strip” of servo data reduces the useful area of the platter, where the user data are actually stored: the more SIDs there are on a track, the fewer sectors fit into it. A smaller number of sectors per track means a lower linear speed, the spindle rotation speed being constant. So the developers always face a choice: increase the number of sectors at the expense of the data seek time, increase the number of SIDs at the expense of the linear speed, or simply increase the number of tracks without changing the number of sectors and SIDs, getting a low-performance drive in the end.
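A toy model of that trade-off: treat a track as a fixed byte budget that servo wedges and user sectors compete for. All the sizes below are made-up illustrative numbers, not figures for any real drive.

```python
def sectors_per_track(track_capacity_bytes, sid_count,
                      sid_size_bytes=100, sector_size_bytes=512):
    """User sectors that fit on one track after servo wedges take
    their share. All sizes here are illustrative guesses."""
    servo_overhead = sid_count * sid_size_bytes
    return (track_capacity_bytes - servo_overhead) // sector_size_bytes

# Doubling the number of servo wedges on the same track
# costs user sectors, and with them linear speed:
print(sectors_per_track(600_000, sid_count=120))  # -> 1148
print(sectors_per_track(600_000, sid_count=240))  # -> 1125
```

Even in this crude model, every extra wedge of positioning data comes straight out of the sector count, which is exactly the dilemma described above.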
Testbed and Methods
Hard disk drives with the ATA interface were tested on a Promise Ultra133 TX2 controller (BIOS 126.96.36.199, driver 188.8.131.52), and drives with the Serial ATA interface were attached to a Promise SATA150 TX2 controller (BIOS 1.00.033, driver 184.108.40.206).
The testbed was configured as follows:
- Albatron PX865PE Pro II mainboard;
- Intel Pentium 4 2400 CPU (533MHz FSB);
- 256MB PC2700 DDR SDRAM, CL2;
- IBM DTLA 307015 system hard disk drive;
- ATI RADEON VE graphics card;
- Windows 2000 Pro SP4.
We used the following benchmarking applications:
- WinBench 99 2.0;
- Intel IOMeter 2003.02.15;
- FC-Test v.0.5.3.
We partitioned the drives in FAT32 (with the help of Paragon Partition Manager) and NTFS for the WinBench tests, leaving the cluster size at its default value. Each test was run seven times, and the best result was recorded. The hard disk drives didn’t cool down between the tests.
For FC-Test we created two logical volumes of 32GB capacity on the drives.
With IOMeter we used the Sequential Read, Sequential Write, Database, Workstation, File Server and Web Server patterns. The last two patterns coincide with those used by StorageReview, while the Workstation pattern was created by us based on the disk access statistics for an NTFS5 partition given in the StorageReview methodology. This pattern has a smaller range of loads and a higher share of write operations than the server patterns, because a serious workstation is supposed to have a large amount of memory.
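To give a feel for what such an IOMeter access specification contains, here is a sketch of the File Server size mix as it is commonly circulated (80% reads, 100% random, transfer sizes from 512B to 64KB); the exact figures should be treated as an assumption rather than a transcript of our test configuration.

```python
# File Server pattern: fraction of requests per transfer size (KB).
# Figures follow the widely circulated StorageReview/Intel spec;
# treat them as assumed, not as this article's exact settings.
FILE_SERVER_MIX = {
    0.5: 0.10, 1: 0.05, 2: 0.05, 4: 0.60,
    8: 0.02, 16: 0.04, 32: 0.04, 64: 0.10,
}

# Sanity check: the shares must describe the whole request stream.
assert abs(sum(FILE_SERVER_MIX.values()) - 1.0) < 1e-9

avg_kb = sum(size * share for size, share in FILE_SERVER_MIX.items())
print(f"average request size: {avg_kb:.2f} KB")  # -> 11.08 KB
```

A pattern like our Workstation one would differ mainly in using a fixed small transfer size, a lower queue-depth range, and a higher write share, as described above.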