Hitachi Deskstar 7K250: Vancouver 3 HDD Review

Today we would like to introduce the third generation of IBM/Hitachi Vancouver hard disk drives. We will test and review the entire HDD family, including models with storage capacities of 250GB, 200GB, 160GB, 120GB, 80GB, 60GB and 40GB, featuring 8MB or 2MB buffers and supporting two interface types: SATA and PATA. Find out more about these new Deskstar solutions from our detailed review!

by Sergey Romanov
11/19/2004 | 11:13 AM

Introduction: The Return of the Vancouver

When Norton had glimpsed Rama for the last time,
a tiny star hurtling outwards beyond Venus, he knew
that part of his life was over.
And on far-off Earth, Dr Carlisle Perera had as yet told no one
how he had woken from a restless sleep with the message
from his subconscious still echoing in his brain:
The Ramans do everything in threes.

Arthur C. Clarke, Rendezvous with Rama.

 

Back in September 2002, when IBM announced a hard disk drive codenamed Vancouver 2, we suspected it wasn’t the last time we would hear that name. A successful product shouldn’t leave the scene too hastily, since developing a new design from scratch is costly and, competition-wise, even risky.

Besides that, the 180GXP series didn’t include any Serial ATA models, but those were sure to appear, if only because of that same competition. When IBM handed its HDD business over to Hitachi, however, the future of the Vancouver became less certain. The Deskstar 180GXP lived on under the Hitachi brand, but time was passing by and Hitachi Global Storage kept silent about new products. Not only Maxtor and Seagate but also Samsung announced new series with 80GB platters and Serial ATA interfaces, yet there was still no news from Hitachi.

This long delay made us think that, against all common and marketing sense, Hitachi had started to develop a new design for the Deskstar series from scratch. But on June 25, 2003, a press release at last announced the new Deskstar 7K250 series. The model nomenclature changed completely, but the intrigue persisted – what had Hitachi been doing all that time? The answer came in August, when the first samples of the new models appeared in retail shops. Hitachi couldn’t help completing the cycle. So, the Vancouver is here again – welcome its third reincarnation!

To anticipate your obvious question, here is proof that the 7K250 does belong to the Vancouver family.

The family affiliation is marked in red, and the number of operational surfaces is marked in blue. Now that we have no doubts about the relation between the 180GXP and the 7K250, it’s time to find the differences between them.

What’s New?

First of all, the model numbering system has changed. Instead of the hereditary IBM nomenclature, Hitachi Global Storage has started to use a unified naming system for all its products:

The letter V in the Generation Code field most probably corresponds to the product’s codename – Vancouver.
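For clarity, here is a small decoder sketch (in Python) reflecting our reading of the new naming scheme. The field boundaries and value mappings below are our own interpretation, illustrated on model numbers that appear in this review, not an official reference:

    def capacity_gb(code: int) -> int:
        # Our reading for this series: codes below 40 drop a trailing zero
        # (25 -> 250GB, 16 -> 160GB), while 40 and above are literal
        # gigabytes (80 -> 80GB, 40 -> 40GB).
        return code * 10 if code < 40 else code

    def decode_model(model: str) -> dict:
        """Decode a Deskstar 7K250 model number, e.g. 'HDS722516VLSA80'."""
        return {
            "family":     model[0:3],                    # HDS = Hitachi Deskstar
            "rpm":        int(model[3:5]) * 100,         # 72 -> 7200rpm
            "series_gb":  capacity_gb(int(model[5:7])),  # series maximum capacity
            "model_gb":   capacity_gb(int(model[7:9])),  # this model's capacity
            "generation": model[9],                      # V = Vancouver
            "height":     model[10],                     # height code (our assumption)
            "interface":  {"AT": "ATA", "SA": "Serial ATA"}[model[11:13]],
            "buffer_mb":  int(model[13]),                # 8 -> 8MB, 2 -> 2MB
        }

    print(decode_model("HDS722516VLSA80")["interface"])  # Serial ATA
    print(decode_model("HDS722580VLAT20")["model_gb"])   # 80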

The new series is declared to include models of 40, 80, 120, 160 and 250GB capacity with 2MB or 8MB buffers and two interfaces (ATA and Serial ATA). The Deskstar 7K250 inherited all the traits and features of its ancestor (aluminum rather than glass platters, fluid-dynamic bearing motors), but offers a higher platter capacity and support for the Serial ATA interface.

The series still includes “low-profile” models with a slightly slower seek time. Curiously enough, the seek time info in the press release (8.5 and 8.8 milliseconds) differs somewhat from the numbers in the specs. We think that the press release is closer to reality, but we’re going to check that out soon.

This time Hitachi didn’t limit itself to two track-density variants (the Deskstar 180GXP series already featured low-TPI versions of the drives) but added one more. The maximum density is only employed in the flagship 250GB models, as it requires platters with a capacity of slightly over 80GB (83.4GB, to be precise). Platters with a reduced track density are used in the single-platter models designed in the “LP” case.

  

For comparison, here are snapshots of the senior Serial ATA model in the ordinary case. To avoid any misunderstanding, I want to emphasize that the case type has no connection to the interface type (ATA or Serial ATA), only to the number of platters in use.

  

So, the LP case looks more fragile than the ordinary one – it doesn’t even have a heat-spreading plate. We can only call this “extreme economy on the junior models”.

The electronics PCB of the Serial ATA model carries the well-known “serializer” chip from Marvell, i.e. the protocol isn’t native but artificially bolted on.

The declared acoustic characteristics exactly coincide with the data on the Deskstar 180GXP.

The fluid dynamic bearings have reduced the noise of the single-platter models most of all, whereas the noise reduction is negligible in the three-platter drives. This holds true in practice: the single-platter Deskstar 7K250 is among the best in idle noise, but I can’t say the same about the 80GB and bigger models. The seek noise from their actuators is strong, sharp and “clicking” in the performance mode. Moreover, the special effects typical of IBM drives, unknown to owners of other brands’ HDDs, are still here. They are hard to describe in words: some people call these sounds “meowing”, others “shrieking”. They occur quite often, they are oscillatory and very annoying, but they don’t affect the operability of the hard disk drive. Interestingly, these sounds are much louder in the LP case.

What’s new in terms of power consumption? Almost nothing, save for the much bigger appetite of the Serial ATA models on the +5V line.

One parameter in the specification looks strange: 128 segments in the read buffer is implausibly high, so we really doubt this number. Well, we’ll check it out in practice soon.

There’s only one characteristic left to discuss. It is not emphasized by the majority of the manufacturers, but it does seriously affect the performance of a hard disk drive. We mean the number of servo identifiers per track.

Servo-Sectors and Performance

We first encountered this parameter when investigating the WD2500JB model from Western Digital (see our review called Western Digital WD2500JB HDD: More than Drivezilla?!). We couldn’t understand then why the new drive had much better parameters than its predecessor, as the two differed only slightly in linear speed and other characteristics. We found an explanation, though, with the help of Mikhail Mavritsyn: the WD2500 simply had considerably more servo identifiers, or SIDs. To clarify what these identifiers are, we should delve deeper into the principles of operation of current hard disk drives.

Quite a long time ago (for the specific dates refer to our article called The Last IBM Drive: Deskstar 180GXP HDD Details), the growth of areal density led to the abandonment of a separate surface for storing the “navigational” data needed by the head-positioning mechanism. The so-called embedded servo was created instead. The problem was as follows: the data tracks had become so narrow that it was impossible to find and stay on a track using remote servo control. Even the thermal expansion of materials could make a track impossible to find where expected, not to mention assembly inaccuracies and mechanical wear. That’s why a control system was developed with the servo information built directly into the data storage area. The tracks were divided into groups of sectors with servo identifiers in between. The currently selected magnetic head is always reading, trying to identify servo information among the data proper. On finding it, the head learns its present bearings and calculates the way to the necessary data relative to its current position. If the data are not on the same track as the head, a precisely dosed impulse moves the head closer to the required track. If the data are on the same track, it only has to wait a little for the platters to rotate by the necessary angle.

It is here that a strong correlation between the number of SIDs and the HDD’s speed emerges. The higher the track density, the more tracks pass by the heads per unit of time during positioning operations. Since the speed of the electronics isn’t infinite, and the accuracy of the mechanics isn’t absolute, the drive doesn’t always land precisely on the necessary track. The electronics, of course, doesn’t know where the heads have arrived until a strip of servo data has passed beneath them, and thus cannot start to read/write data or correct the heads’ position any further. In other words, the growth of the track density should be accompanied by a proportional increase in the number of SIDs if we don’t want the product to perform worse. Let’s see how this theoretical premise correlates with practice. The information on the amount of servo data per track for several models of hard disk drives is again courtesy of Mikhail Mavritsyn (by the way, Maxtor sometimes puts this number in its documentation).
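To put a number on this reasoning, here is a toy calculation (all figures below are invented for illustration; they are not Hitachi’s specs). Between two servo strips the head “flies blind”, and the number of tracks it crosses in that interval grows in direct proportion to the track density:

    # Time between two servo samples is one revolution divided by the SID
    # count; during a seek, the head crosses tracks at its sweep speed.
    def tracks_crossed_between_samples(rpm, sids_per_track,
                                       seek_speed_mm_s, tracks_per_mm):
        sample_interval_s = 60.0 / rpm / sids_per_track
        return seek_speed_mm_s * tracks_per_mm * sample_interval_s

    # Doubling the track density at a constant SID count doubles the number
    # of tracks crossed "blindly" between two position fixes:
    print(tracks_crossed_between_samples(7200, 96, 500, 3000))   # ~130
    print(tracks_crossed_between_samples(7200, 96, 500, 6000))   # ~260
    print(tracks_crossed_between_samples(7200, 192, 500, 6000))  # ~130 again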

Let’s start with the above-mentioned case study of drives from Western Digital.

The total density growth of 25 percent was mainly achieved by increasing the number of tracks rather than the number of sectors per track. But the increase in the number of SIDs more than covered the track density growth, and we enjoyed the clearly higher performance of the new model. But what about the currently tested product, the Deskstar 7K250? Let’s examine its closest ancestors:

Let’s do some math, dig into our own memories, and correlate the recollections with the calculations:

But why don’t the manufacturers put as many of those important servo identifiers on the platter as possible, with some reserve? Because each “strip” of servo data reduces the useful area of the platter (where the user data are actually stored). The more SIDs there are on a track, the fewer sectors fit into it. A reduction in the number of sectors per track means a lower linear speed, the spindle rotation speed being constant. So the developers always face a dilemma: increase the number of sectors at the expense of the data seek time, increase the number of SIDs at the expense of the linear speed, or simply increase the number of tracks without changing the number of sectors and SIDs, getting a low-performance drive in the end.
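Here is the same dilemma as a toy track budget in code. The bit counts are assumptions made up for illustration, but they show how every added SID steals room from user sectors and thus from the linear speed:

    RPM = 7200
    TRACK_BITS = 5_000_000        # raw bits on one track (illustrative)
    SECTOR_BITS = 512 * 8 * 1.2   # user sector plus ECC/framing (assumed 20%)
    SID_BITS = 600                # one servo strip (assumed)

    def track_layout(sids):
        # Whatever the servo strips don't occupy is divided into sectors.
        sectors = int((TRACK_BITS - sids * SID_BITS) // SECTOR_BITS)
        linear_mb_s = sectors * 512 * (RPM / 60) / 1e6
        return sectors, linear_mb_s

    for sids in (96, 144, 192):
        sectors, speed = track_layout(sids)
        print(f"{sids} SIDs -> {sectors} sectors/track, {speed:.1f} MB/s")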

Testbed and Methods

Hard disk drives with the ATA interface were tested on a Promise Ultra133 TX2 controller (BIOS 2.20.0.14, driver 2.0.0.29), and drives with the Serial ATA interface were attached to a Promise SATA150 TX2 controller (BIOS 1.00.033, driver 1.0.0.27).

The testbed was configured as follows:

We used the following benchmarking applications:

We partitioned the drives in FAT32 (with the help of Paragon Partition Manager) and NTFS for the WinBench tests, leaving the cluster size at its default. Each test was run seven times, and the best result was chalked up. The hard disk drives didn’t cool down between the tests.

For FC-Test we created two logical volumes of 32GB capacity on the drives.

With IOMeter we used the Sequential Read, Sequential Write, Database, Workstation, File Server and Web Server patterns. The last two patterns coincide with those used by StorageReview, while the Workstation pattern was created by us based on the disk access statistics for an NTFS5 partition as given in the StorageReview methodology. This pattern has a smaller range of loads and a higher percentage of write operations compared to the server patterns, because a serious workstation is supposed to have plenty of memory.

Low-Level Characteristics

It’s our pleasure to introduce another development of ours (besides FC-Test) – a test that gathers precise information about the most interesting characteristics of hard disk drives. The utility hasn’t been officially released yet, but we can already use it to obtain some results. For example, we can measure the average seek time.

The seek time is calculated based on the average time it takes to access a random sector and the spindle rotation speed, both measured experimentally. The “Measured Buffer” column contains our measurements of the read buffer size, which sometimes doesn’t correspond to the size reported by the device itself. Segmentation is the number of independent data sequences that can be stored in the HDD’s read buffer. Look-ahead shows the maximum size of the data blocks the device can read anticipatorily; a dash in this column means we couldn’t measure this parameter.
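For reference, the arithmetic behind the seek-time calculation is simple; here is a minimal sketch of our reading of the method (note that any command overhead remains inside the result):

    # A random access is, on average, a seek plus half a revolution of
    # rotational latency, and the latency is fixed by the spindle speed.
    def avg_seek_ms(measured_access_ms, rpm=7200):
        rotational_latency_ms = 60_000.0 / rpm / 2   # half a revolution
        return measured_access_ms - rotational_latency_ms

    # 7200rpm means 8.33ms per revolution, i.e. 4.17ms average latency,
    # so a measured 12.5ms random access corresponds to ~8.3ms of seek:
    print(avg_seek_ms(12.5))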

Reading this table carefully, you can notice the record-breaking average access time of the two senior models with the classic ATA interface. The measurements are half a millisecond better than the specification, which is unprecedented, because the measured time also includes the command overhead, which Hitachi specifies at about 0.5 milliseconds for seek and read operations! No manufacturer has yet achieved such a low seek time, and this record belongs to the three-platter models with the heaviest actuator, of all things! On the other hand, the Serial ATA version of the same drive turns out to be somewhat slower. That’s strange. The dual-platter models have a typical speed, but also beat their own specs. The junior models, as the manufacturer declares, are slower still.

We should explain the results marked with an asterisk. The truth is that the first sample of the HDS722580VLAT20 didn’t pass the full test cycle – its performance began to degrade:

After its data-transfer graph took this shape, the participant dropped out of the race. The substitute was first run through the WinBench tests, but its average access time was much worse than the norm. Moreover, this second sample didn’t endure the whole marathon, either. Thus, the “7K250 80GB PATA 2MB” model we will be talking about is a composite image of two samples of this hard disk drive, neither of which made it through all of our tests.

Now let’s do the promised check of the read buffer segmentation. The measurements correlate well with the reference data for the previous models: where 12 segments are specified, 11 are measured; where 21 are specified, 20 are measured. For the new series, however, the specified value is as high as 128, and we see nothing close to that in our measurements. Yes, the senior models have more buffer segments, but not many more, while the number of segments in the junior models remained the same!
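For the curious, here is one plausible way such a segment count can be probed – a sketch under our own assumptions, not necessarily the exact method of our utility (read_block stands for a hypothetical raw-read primitive):

    import time
    from statistics import median

    CACHE_HIT_S = 0.002   # assumed boundary between buffer and media reads

    def estimate_segments(read_block, max_k=32, region_gap=1 << 22):
        # Read K sequential streams in round-robin. While K fits into the
        # number of buffer segments, every stream's next block is already
        # prefetched and returns at cache speed; the first K where the
        # latencies jump to media speed exposes the limit.
        for k in range(1, max_k + 1):
            offsets = [i * region_gap for i in range(k)]
            for off in offsets:
                read_block(off)                  # prime each stream
            latencies = []
            for step in range(1, 32):            # advance the streams in turn
                for off in offsets:
                    t0 = time.perf_counter()
                    read_block(off + step)
                    latencies.append(time.perf_counter() - t0)
            if median(latencies) > CACHE_HIT_S:  # buffer no longer keeps up
                return k - 1
        return max_k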

But how does our test check for the look-ahead feature? If, after reading a sector with the address N and a short pause, the drive returns the next sector, N+1, in less time than it takes the platter to make one revolution, then read look-ahead really is supported by the drive. And here we notice one curious detail about the 7K250 models with an 8MB buffer: they do not perform forced look-ahead reading at each access to the disk. They behave more like SCSI drives, which emphasize streaming operations rather than a fast response to a random request. Since a modern hard disk drive just cannot live without anticipatory reading, we can only suppose that the firmware of the 8MB-buffer models contains some new adaptive look-ahead algorithm – something like the hardware prefetch logic in the Pentium 4.
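The check described above translates into code almost literally (read_sector is again a hypothetical raw-access primitive):

    import time

    def has_read_look_ahead(read_sector, n, rpm=7200):
        # If sector N+1 comes back faster than one platter revolution,
        # it must have been waiting in the buffer already.
        one_revolution_s = 60.0 / rpm     # 8.33ms at 7200rpm
        read_sector(n)                    # give the drive a chance to prefetch
        time.sleep(0.05)                  # the short pause
        t0 = time.perf_counter()
        read_sector(n + 1)
        return time.perf_counter() - t0 < one_revolution_s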

Although the measurements of the look-ahead distance are not ideally precise (they only produce stable results with small values, as with the Deskstar 120GXP), we can state that the Vancouver 2 (180GXP) and the Vancouver 3 (7K250) have a more aggressive read look-ahead, as they read 6 times more data per access than the Deskstar 120GXP.

Now let’s examine the data-transfer graphs and say a few words on the zone distribution. The models of the highest capacity come first.


HDS722525VLSA80


HDS722525VLAT80


HDS722525VLAT80 200GB

Now, that’s the answer! The capacity of the senior Serial ATA model is higher than that of its ATA analog. To be exact, the ATA model was cut down to a round number of 250 billion bytes – Hitachi has never behaved like that before! Like IBM before it, Hitachi used to give the user a little bit extra, its drives being a few gigabytes larger than the competitors’. This time around, however, the two senior models are shortened, while the rest are not. That’s where the record-breaking average access time comes from – cutting off some of the tracks shortens the actuator’s movements, i.e. it reduces the average access time. We’ll keep this in mind when discussing the test results below.

To all appearances, the 200GB model is made by disabling one of the six surfaces of the 250GB drive. The resulting capacity was also truncated to a more or less round number of sectors. The graphs of the rest of the 7K250 series drives didn’t bring any surprises, save for one. Just take a look at THAT:


HDS722580VLAT20 60GB

The stretched shape of the graph suggests that this model is made by cutting off one fourth of the available tracks of the HDS722580VLAT20, and its name hints at that, too. The very low speed makes us suspect that the longest tracks were discarded, but a comparison of the zone distributions doesn’t reveal similar areas, and the average access time of this model doesn’t differ greatly from the others’. Thus, the 60GB model (which, by the way, is not mentioned in the specs) has an individual zone distribution. But why did Hitachi choose such a low sector density? For comparison, here’s the read graph of the first Vancouver, with an areal density of 40GB per platter.


IC35L120AVVA07-0

This model is almost three years old, yet it has a higher linear speed than the 60GB mutant called HDS722580VLAT20! By the way, due to a certain confusion in the nomenclature of the drives, we will refer to the devices not by their model numbers, but by collective names as follows:

So, let’s start our traditional cycle of tests to determine the operational properties of the hard disk drives.

Performance in Sequential Read/Write Operations

This is a simple test, but sometimes it yields surprises. We’ll see how well the drives’ electronics handle data blocks of varying size. Here are the summed-up results in the Intel IOMeter Sequential Table.

Since the table includes a lot of same-type devices that differ only in capacity, we took a few typical results and put them into our diagrams. Reading comes first:

The first notable thing is that all the 7K250 series models lag behind their predecessors on small data blocks. That’s rather a surprising start, isn’t it? By the way, we see a clear trend of the results worsening with each new model. Fortunately, reading in small blocks rarely occurs in real applications, and the engineers must have considered this fact when sacrificing the efficiency to… to what? It’s hard to say…

Note the behavior of the 7K250 models with a 2MB buffer – they are consistently faster than their own 8MB-buffer mates! Remember the supposition we made in the previous section that the latter devices have no forced read look-ahead? Here’s indirect evidence of that. Their results are similar overall, though, so we can say that Hitachi has made a good read look-ahead algorithm.

The red line in the diagram belongs to the 60GB model – it is the worst of all. To all appearances, we can dismiss it altogether, as there’s no hope of it delivering a good performance in any test.

Sequential writing brings less discouraging results, although we see the same trend here – every new generation of Vancouvers performs worse. Can it be that the engineers are slowly but steadily replacing the electronics? This is also indirectly confirmed by our measurements of the electronics’ response to various commands. But why? You may remember that the central controller chip in IBM’s hard disk drives used to run very hot, so it’s possible that Hitachi is keeping the heat generation low until it masters production of the new controller chip on a more advanced process.

Performance in Intel IOMeter: Average Access Time

Now let’s discuss the average read and write access times, taking the two extreme cases of the Database pattern (which will be discussed later on) under a constant load of 1 request. Click here for the Intel IOMeter Random Data Access Table.

The results are presented as a diagram, and we can compare these measurements to the numbers we got with the help of our own utility (see above):

It would be silly to expect the results to coincide exactly, since our utility works with 1-sector data blocks while IOMeter uses 8KB blocks. Still, the order of the results is very similar; only the four single-platter models in the middle of the diagram swapped places. That’s a good beginning for an early version of our benchmark, don’t you agree?

Now that we’ve ascertained the similarity between the two different measurements of the access time, we can spend some time analyzing the results. As we said above, all the Deskstar 7K250 models can be divided into three groups by the access time parameter, according to the number of data platters. The single-platter models confirm their own specifications by having the worst access time. The three-platter Vancouver 2 joins this group, too – we learned in one of our previous reviews that it was the slowest in seek operations of the whole Deskstar 180GXP family. Unlike in the previous generations, the three-platter models are the fastest of the Vancouver 3 series, but only thanks to having some tracks cut off, as we learned above. Overall, we can assume that the average seek time hasn’t changed since the first Vancouver.

Here we can also see confirmation of the theory that the number of servo sectors affects the data access speed. At least the Deskstar 180GXP, which has the lowest ratio of servo sectors to tracks, is just a little better than the single-platter 7K250 models, whose seek time is slowed down by 0.3 milliseconds.

The performance when writing data to random addresses depends on the average seek time, but also on the number of requests the firmware can hold in the buffer and on how efficiently it sorts them. Again, we can divide the devices into single-, dual- and three-platter groups, with only one result out of line. Honestly, we can’t explain the behavior of the 250GB SATA model.

The ratio of the average response time at read operations to that at write operations yields the efficiency of the lazy write implementation:

So, we see that the deferred-write algorithm has been modernized in the Vancouver 3, just like the read look-ahead. But this is only true for the senior models with two or more platters.
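In numbers, the metric works as follows (the figures below are illustrative, not our measurements):

    # A drive that acknowledges a random write in 6.5ms while a random
    # read takes 13.0ms scores 2.0; a drive that defers nothing scores
    # about 1.0, since its writes cost a full seek plus latency, like reads.
    def lazy_write_efficiency(avg_read_ms, avg_write_ms):
        return avg_read_ms / avg_write_ms

    print(lazy_write_efficiency(13.0, 6.5))    # 2.0 - efficient lazy write
    print(lazy_write_efficiency(13.0, 13.2))   # ~0.98 - no deferred writing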

The HDS722525VLAT80 model broke the record set long ago by a Western Digital drive. The two-platter models showed stable, high results, but the single-platter ones couldn’t even surpass the previous generations, which took their places right in the middle of the diagram. There’s Hitachi’s policy for you – the senior models get the best, while the junior models are left to fend for themselves.

By the way, the Serial ATA versions are always slower than their ATA analogs. Is this the price of the reduced speed of the electronics (the Vancouver 3 has no native support for the Serial ATA interface and has to use a translator chip)? The highest-capacity SATA model is especially slow at writing, which makes us suspect that it simply has raw, experimental firmware.

Performance in Intel IOMeter: Database Pattern

It’s time to run the Database pattern, which will help us examine the drives’ ability to combine read and write operations. Click here for the Intel IOMeter Database Table.

A linear load with a queue of just one request is the typical scenario of interaction between your applications and the hard disk drive, so it is most useful to watch the drives under such circumstances. The diagrams show you one member of each group of drives, whose results are typical for its group.

The newly-minted record-holder, the HDS722525VLAT80, is the fastest at every read/write combination. In fact, all the Vancouvers have similar characters, as relatives often do; only the 7K250 models with a 2MB buffer differ somewhat. They have less cache memory than the first Vancouver, but much more aggressive read look-ahead algorithms. As a result, the small cache buffer cannot accumulate and optimize requests, and lazy write only becomes efficient when the queue consists of 100% write requests.

The junior 7K250 models with an 8MB cache buffer and the previous generations of Vancouvers show similar results. It’s good when a new product bears some traces of its origin, but it’s a pity that models of different capacities are deliberately put into unequal conditions by the manufacturer. By the way, what about scalability under load? Until now, products from Hitachi (and, earlier, from IBM) have had an excellent appetite for it:

It’s all right at 100% reading – just like in theory. The gaps between the participants are negligibly small.

At 100% writing the gaps become obvious, but there are still no deviations from theory: all the drives behave the same way as the load grows, and we already identified the leaders and outsiders in the previous section.

A more curious picture can be observed in the mixed mode (the results averaged over all loads, except the two extreme cases described above). The very first Vancouver boasts the best scalability. The 7K250 models equipped with 2 megabytes of cache memory produce a graph of the same shape, but running lower; the difference between them seems to be due to the difference in the average seek time. The Serial ATA 7K250 drive resembles its ATA counterparts with the same 8MB buffer, while the Vancouver 2 finds itself somewhere between the past and the future.

It transpires that the 7K250 models with an 8MB buffer are less willing to increase their performance under higher loads. When measuring the low-level characteristics of the devices, we noted that they use an improved read look-ahead algorithm. We can also note that their read buffer has become smaller by a third. Whether or not this is the outcome of the adaptive algorithms, it has led to somewhat worse scalability. On the other hand, the difference is negligibly small up to four requests, and bigger loads are only typical of hard-working servers.

Let’s now see how the drives pass the patterns that simulate servers and workstations.

Performance in Emulating Patterns

Click here for Intel IOMeter Workload-Emulating Patterns Table.

Workstations don’t usually deal with long request queues, so we ran the Workstation pattern with small loads. Although some of our colleagues criticize this test, we keep using it, as a typical workstation doesn’t work under the ideal conditions imitated by the majority of other tests. In particular: 1) several applications, rather than one, usually access the hard disk drive at the same time; 2) the disk is usually filled with data; and 3) these data are often fragmented. It is this extreme case that our pattern emulates.

As expected, both variants of the HDS722525VLAT80 (of 250 and 200GB capacity) are in the lead; their smaller access time brings them better results. With that in mind, we should name the first Vancouver the winner and only then place the ATA variants of the 7K250 with an 8MB buffer. Most probably, their adaptive anticipatory reading let them slightly outperform the rest of their family, which formed a dense group headed by the Deskstar 180GXP. The Serial ATA variants are again slower than their ATA counterparts, and we again suspect the negative influence of the translator chips they use.

The single-platter models with a 2MB buffer are at the end of the line, right where Hitachi intended them to be. One model is out of line, though: the strange, undocumented 60GB model we already discussed in the Low-Level Characteristics section. This is no great event, though, so let’s move on to our server patterns.

The disk subsystem of a server meets much bigger loads than that of an ordinary PC, so the range of loads is different here.

Strangely enough, the situation is quite different here. First, all the Serial ATA models are down at the bottom of the list. This can be explained, though: when analyzing the results of the Database pattern we noted the worse scalability of the senior 7K250 models under load in the mixed modes, and this affects the results of the File Server pattern, too. The leader is the Deskstar 120GXP, closely followed by the dual-platter 7K250 models with a 2MB buffer. The rest of the participants trail behind.

Theoretically, if there were no writes at all, we should see a different picture. The Web Server pattern differs from the other patterns precisely in having no write requests.

And really, when there are no writes, the senior 250GB models take the lead, followed by the Deskstar 120GXP (what a vigorous oldie!). This time the Deskstar 180GXP rolls down to the end of the list, outpacing only the budget 7K250 models. On the other hand, the gaps between the models are too small here to be taken seriously. Moreover, they are almost fully explained by the differences in the average seek times of the different models.

Performance in WinBench

Here comes the old-timer of our reviews, Ziff Davis WinBench 99. Click here for the extensive WinBench 99 Table.

As you know from our previous articles, WinBench 99 reflects the specifics of a hard disk drive’s firmware quite honestly, if you take into account this benchmark’s strong dependence on the disk capacity. Thus, in order to make a correct analysis, we need to examine each of the Disk WinMark tests independently to get a truthful representation of the qualities of modern hard disk drives.

As usual, the Business WinMark subtest favors devices with 8 megabytes of cache memory – such drives get a bonus of about 27 percent to their results. Otherwise, it’s all quite natural, save for the surprising victory the Deskstar 120GXP achieved over its much younger analogs. Having a nominally same-size buffer (recalling the low-level tests, this is not exactly so) but a much higher data density, the 7K250 still manages to lose. We don’t have a ready explanation for this outcome.

Advanced Visualization Studio already shows the above-mentioned dependence of the results on the disk capacity; this dependence is less pronounced in FAT32 than in NTFS. Interestingly, the 120GB 7K250 once again couldn’t outperform its ancestor, while the Deskstar 180GXP successfully resists the pressure of the younger generation. It looks as if Hitachi didn’t change anything in the firmware algorithms, though we already know for certain that’s not so. Well, at least they didn’t spoil anything :).

As usual, FrontPage produces the most contradictory results. The dependence on the buffer size seems clear, but the Deskstar 120GXP wins. The relation between the disk capacity and the read/write speed seems clear, but there’s no orderly law. This is a very strange test indeed.

Things are simpler with MicroStation, though minor surprises like the sudden rush of the Serial ATA 80GB model or the surprisingly low results of the dual-platter 7K250 models in FAT32 do happen. These are mostly accidents, though, while the regularities remain clear: the bigger cache buffer prevails over the capacity dependence, providing a performance boost of up to 25 percent in FAT32, and the Serial ATA interface is a little slower than the classic ATA.

And once again the previous generations are not inferior to the new one! This is rather a dubious tendency.

Adobe Photoshop likes high linear speeds, as we learned long ago, and it breaks the above-mentioned tendency somewhat. But the advantage of the 7K250 drives is rather illusory here, since the Deskstar 180GXP competes with them as an equal. The 60GB “cripple” fell far behind here.

Moreover, the Deskstar 180GXP becomes the leader in Adobe Premiere under NTFS. The 60GB model is far behind, while 8 megabytes of cache memory is no guarantee of higher performance here. That’s all quite mysterious.

This is quite a different test, usually very sensitive to deferred write operations, but it can’t help us differentiate between the three Vancouver generations! The Deskstar 120GXP persists in its desire to be as fast as the 7K250, while the Deskstar 180GXP is still aiming to win! We must have been too hasty in supposing that Hitachi didn’t spoil anything.

At last we have some order! The bigger cache buffer provides a performance bonus of up to 20 percent in FAT32, and less in NTFS. The Deskstar 120GXP and 180GXP still don’t give up, and now we can say that this is no accident.

Summing up the WinBench section of the review, we show you the integral High-End Disk Winmarks result. So what do we have here?

Our hypothesis from the Average Access Time section – that the electronics have been slowed down in the Deskstar 7K250 – is confirmed in all the WinBench tests. Despite its better physical characteristics, the Vancouver 3 finds it very hard to outperform its predecessors. The situation within the series looks as follows:

Performance in FC-Test

Our last test analyzes the speed of real-life file operations. Click here for the FC-Test, FAT32 and FC-Test, NTFS Tables.

To keep to the basic facts, we don’t put the results of all the patterns into the diagrams, only those that are remarkable or useful.

Strangely enough, the previous generation of Vancouvers is the best at write speed (the first generation looks good, too), while the new devices show nothing impressive. We can’t characterize the write speed of the 7K250 as low, but we can’t call it high, either. Among the curiosities are the stable advantage of the 250GB models over the other representatives of the 7K250 series, and the strangely low results of the 200GB model. Frankly speaking, we couldn’t find the reason for the latter – a difference in firmware? Notice also that the increased buffer size has practically no effect on the write speed.

The read speed mostly depends on the linear data density rather than on the firmware and other factors, so the new Vancouver generation leaves its predecessors behind in this test, although the gap diminishes as the files become smaller. The sequential read tests in IOMeter showed that the efficiency of the electronics has degraded on small blocks, and the real-life test confirms it. The 8MB buffer is of little benefit here, probably because IBM’s drives have always been able to use a smaller buffer effectively.

The copying test brings somewhat better results. The senior models of the 7K250 series are faster than the Deskstar 180GXP in FAT32, while the difference between the 8MB and 2MB buffers becomes apparent in NTFS. On the other hand, the old Deskstar 120GXP doesn’t yield to the junior 7K250 models and sometimes outperforms them! The Serial ATA drives are always behind their ATA analogs, which is natural considering that the new interface is implemented with a “serializer” chip.

Copying files from one partition to another gives a more or less stable picture: the bigger buffer increases the copying speed by more than 50 percent, while the 7K250 models with a 2MB buffer cannot outperform the Deskstar 120GXP! The Deskstar 180GXP performs considerably worse than the senior 7K250 models. We also found the reason why the Serial ATA models were slower: the HDS722516VLSA80 was the only drive tested with the new version of the driver for the Promise SATA150 TX2 controller and, as you can see, it is even faster than its ATA analog.

We can also see the biggest gaps within the series itself: the 250GB model copies files from one partition to another at twice the speed of the 80GB model!

Conclusion

It seems the conclusion based on the results of this test session won’t be too bright. To all appearances, the era of rapid hard disk drive performance growth, when each new generation easily left the older one behind in any application, has come to an end. HDDs owed their speed boosts to the constant improvement of their firmware, but there’s a limit to the inventiveness of the programmers, and all the manufacturers have reached roughly the same level of performance. Further improvements come through purely quantitative methods: higher data densities, larger buffers, faster spindle rotation speeds. A sudden jump may only come with new technologies and a perfected protocol of interaction between the operating system and the hard disk drive. Yes, we mean the upcoming Serial ATA II and its main treat – Native Command Queuing. Let’s just wait a little.

But what about the subject of the review – the Hitachi Deskstar 7K250 series?

The main and paradoxical discovery we made is that the three Vancouver generations deliver roughly the same performance across the majority of our tests! IBM seems to have set the bar so high that neither IBM itself nor Hitachi has been able to clear it for almost three years now. Of course, there’s some progress, but it is always accompanied by regress.

The first difference between the 7K250 and its predecessors is plain for any observer to see – the central controller chip has moved to Infineon’s production facilities. So far, the transition has been made without a serious modification of the logic and without a change of the processor core – the command set of the 7K250 fully repeats the command set of IBM’s drives, well known to all service technicians. This change of controller manufacturer may account for the certain slowdown of the 7K250 electronics, which has led to worse performance on small data blocks. Time will show whether this affects stability in any way. For now, we can say that many single-platter 7K250 models suffer from the same problems we had with early samples of the Deskstar 180GXP – reduced performance due to unstable data reading that occurs after the drive has been tested under full load. All the senior models passed through our tests without problems.

Hitachi has also intentionally put the senior and junior models into unequal conditions by introducing a number of limitations at the firmware level. As a result, the topmost 250GB models look favorable in this series, and they are among the world’s fastest HDDs today. The other models of the series are slower to various degrees, so when comparing the 7K250 to its competitors we should always specify the capacity of the model in question.

The lowest-capacity model seems to be an unloved child. It was far behind the others in all the tests, so we advise you to disregard it when you’re shopping for an HDD.

The main innovation in the 7K250 series is the Serial ATA models, but nothing very interesting happened here, either – these solutions perform the same as their ATA counterparts.