Promise FastTrak TX2000 ATA/133 RAID Controller Review
Some time ago my colleagues from the Ukrainian weekly "ITC" shared with me the results of their tests of ATA/100 RAID controllers (among them the Promise FastTRAK100 TX2 and FastTRAK100 TX4). The results intrigued me all the more because the Promise controllers performed much better in my colleagues' tests than in my own. Putting aside the creeping thought of my own "clumsiness", I set out to find the cause of this performance leap. And I did find it! :)

It turned out that Promise had released new drivers for the TX2 and TX4 controllers after I completed my tests, and these drivers boosted performance considerably (by the way, the first of these "turbo" drivers was version 2.00 build 18).

So what am I driving at? The controller we are reviewing today, the Promise FastTrak TX2000, received as a birthday present drivers that follow in the footsteps of those for the FastTrak100 TX2 and TX4. That means this review has quite a few surprises in store.

Closer Look

Let's start with the exterior:

As the saying goes: if you've seen one Promise controller, you've seen them all :).

Host side interface: 32-bit, 33/66MHz PCI
Device side interface: ATA/133 (IDE)
RAID controller IC: Promise PDC20271
Number of IDE channels: 2
Maximum number of drives: 4 hard disk drives
Supported hard drives: up to ATA/133
Supported RAID levels: RAID 0 (2-4 disks), RAID 1 (2 disks), RAID 0/1 (4 disks), JBOD (2-4 disks)
Supported OSs: Windows 98/ME, Windows NT 4.0, Windows 2000/XP, Novell NetWare 4.1x/5.x, Red Hat Linux 7.0/7.1/7.2, TurboLinux Server 6.5, TurboLinux Workstation 7, SuSE Linux 7.2, OpenLinux 3.1
Additional features: Promise Array Management software, advanced monitoring, FastBuild BIOS auto-menu
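
To recap what these array types mean in practice, here is a minimal sketch (our own helper for illustration, not part of any Promise software) that computes the usable capacity of each supported level:

```python
# Minimal sketch (our own helper, not part of any Promise software):
# usable capacity of the array types the TX2000 supports.

def usable_capacity_gb(level: str, drive_sizes_gb: list[float]) -> float:
    """Usable capacity for RAID0, RAID1, RAID01 and JBOD."""
    n = len(drive_sizes_gb)
    smallest = min(drive_sizes_gb)
    if level == "RAID0":          # striping: 2-4 drives, no redundancy
        assert 2 <= n <= 4
        return n * smallest       # limited by the smallest member
    if level == "RAID1":          # mirroring: 2 drives, one copy usable
        assert n == 2
        return smallest
    if level == "RAID01":         # mirrored pair of stripes: 4 drives
        assert n == 4
        return 2 * smallest
    if level == "JBOD":           # simple concatenation: 2-4 drives
        assert 2 <= n <= 4
        return sum(drive_sizes_gb)
    raise ValueError(level)

# Four 20GB Maxtor D740X drives, as in our testbed:
drives = [20.0] * 4
print(usable_capacity_gb("RAID0", drives))   # 80.0
print(usable_capacity_gb("RAID01", drives))  # 40.0
```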

So, the main difference between the FastTrak TX2000 and the FastTrak100 TX2 is ATA/133 support. Promise claims it should boost the performance of RAID1 and RAID01 arrays.

I won't dwell upon the Promise software as you're sure to have seen the screenshots in previous Promise product reviews.

Testbed and Methods

The testbed remained the same as usual:

  • SuperMicro 370DLE mainboard;
  • Intel Pentium III (Coppermine) 600MHz CPU;
  • 2 x 128MB Registered PC133 ECC SDRAM by Micron;
  • Quantum FB EL 10GB HDD;
  • Matrox Millennium 4MB graphics card;
  • Windows 2000 Pro SP2.

Four Maxtor D740X-6L (6L020J1) drives were combined into an array. We tested the controller with 2.00.0.22 BIOS and 2.00 build 24 drivers.

The stripe block size was set to 64KB. For the WinBench tests each array was formatted in FAT32 and NTFS as one logical drive of maximum size with the default cluster size. All the tests were run four times, and the average results were taken for the diagrams. The HDDs were given no time to cool down between the tests.
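
For clarity, here is a minimal sketch (our own illustration, not the controller's actual firmware logic) of how a 64KB stripe block maps a logical offset onto the members of a RAID0 array:

```python
STRIPE_SIZE = 64 * 1024  # 64KB stripe block, as configured in our tests

def raid0_locate(offset: int, num_drives: int) -> tuple[int, int]:
    """Map a logical byte offset to (drive index, offset within that drive)."""
    stripe_no = offset // STRIPE_SIZE          # which stripe block overall
    drive = stripe_no % num_drives             # round-robin across members
    stripe_on_drive = stripe_no // num_drives  # full stripes already on this drive
    return drive, stripe_on_drive * STRIPE_SIZE + offset % STRIPE_SIZE

# A 100KB offset on a 4-drive array lands in the second stripe block,
# i.e. on drive 1, 36KB into its first stripe:
print(raid0_locate(100 * 1024, 4))  # (1, 36864)
```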

The benchmarks used include:

  • WinBench 99 1.2
  • Intel IOMeter 1999.10.20

To evaluate the controller's performance in different RAID arrays with IOMeter, we used the new StorageReview patterns, introduced in the third edition of their HDD testing methodology.

These patterns are intended for testing the performance of the disk subsystem under workloads typical of file and web servers.

Based on StorageReview's analysis of the disk subsystem workload in ordinary Windows applications, our colleague Sergey Romanov aka GReY created a pattern for IOMeter:

We'll use this pattern to find out how attractive the HDDs and RAID controllers are for an ordinary Windows user.

We also checked the controller's performance in different RAID arrays as the read-to-write ratio changed. In the pattern we created, 100% random 8KB data blocks are used, and the read-to-write ratio changes from 100/0 to 0/100 in 10% steps.
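
Expressed as code, the sweep looks like this (a sketch of how the eleven mixes are enumerated; the actual pattern is, of course, configured inside IOMeter itself):

```python
# Sketch of the read/write sweep in our DataBase-style pattern:
# 100% random 8KB requests, read share going from 100% down to 0%.
BLOCK_SIZE = 8 * 1024

def database_sweep():
    specs = []
    for write_pct in range(0, 101, 10):  # 0%, 10%, ... 100% writes
        specs.append({
            "block_size": BLOCK_SIZE,
            "random_pct": 100,           # fully random access
            "read_pct": 100 - write_pct,
            "write_pct": write_pct,
        })
    return specs

for spec in database_sweep():
    print(f"{spec['read_pct']}/{spec['write_pct']}")  # 100/0, 90/10, ... 0/100
```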

Finally, we checked the controller's ability to handle sequential read and write requests of varying size in different types of RAID arrays.

Performance

WinBench99 1.2 (FAT32)

WinBench99 should make clear what benefits the RAID controller brings in ordinary Windows applications.

We won't bother analyzing the table; let's turn to the diagrams instead:

In the Business test, a RAID0 array of two HDDs proved the most effective, while in the High-End test, the more HDDs, the better. Of course, performance doesn't grow by multiples, but there is still some growth.

The comparison of RAID0 and RAID1 arrays built on one and on two controller channels shows that dual-channel arrays are always faster. We saw the same picture in our HighPoint RocketRAID133 controller review. And just like then, the RAID01 array didn't boast high performance.

WinBench99 1.2 (NTFS)

The following diagrams confirm what we said above:


Well, the WinBench99 tests show that a RAID0 array of two HDDs is the most efficient. A further increase in the number of drives brings bigger storage capacity rather than higher performance. As for RAID1 arrays, note that they work at least as fast as a single HDD, whether built on one or two controller channels.

Intel IOMeter

So let's begin with the results of the new pattern:

Note that the workloads (request queue depths) for this pattern differ slightly from the standard set of 1, 4, 16, 64, and 256 outstanding requests. As this pattern emulates the work of a single user, the workload range we're interested in lies mostly at the smaller queue depths.

Considering the obtained results, we can state the following:

  1. Controller performance in RAID0 grows nonlinearly: RAID0 on three hard disks does worse than RAID0 on two HDDs.
  2. A RAID0 array built of two HDDs on separate channels is faster than one built on a single controller channel.
  3. A RAID1 array built on one channel performs no worse than its dual-channel analog under linear workload (queue=1), but falls behind as the workload increases.
  4. A RAID01 array is only a little faster than a single HDD under linear workload, but significantly faster once the workload gets serious. The RAID01 array loses in speed only to the RAID0 array of four HDDs.

Let's get to the tests in the patterns that emulate server workloads:

Look how the controller's performance grows in every mode (except JBOD, of course) as the request queue depth increases! That's what I call a job well done! :)

Just as in the new Workstation pattern, the RAID0 array of three HDDs may perform worse than the two-disk configuration. This is most likely because the third HDD is attached to the "ideal" two-drive configuration (two channels, two drives) on the same cable as one of the existing array members. Two HDDs on one cable, as we have seen many times, hinder each other and cannot deliver maximum performance. This shows especially well in the WebServer pattern with the 2-HDD RAID0 array whose drives are connected to a single channel: since this pattern performs only reads, the array turns out slower than a single hard disk drive.
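
A toy model illustrates why a shared cable hurts: parallel ATA services one command per channel at a time, so requests to two drives on one cable must take turns (the service time below is an assumed placeholder, not a measured value):

```python
# Toy model (our simplification, not a real ATA simulation):
# a parallel ATA channel services one command at a time, so two drives
# sharing a cable serialize, while drives on separate channels overlap.

SERVICE_TIME_MS = 10.0  # assumed average time to service one random read

def total_time_ms(requests_per_drive: int, drives_per_channel: list[int]) -> float:
    """Time to finish all requests; channels work in parallel,
    drives on one channel take turns."""
    per_channel = [n * requests_per_drive * SERVICE_TIME_MS
                   for n in drives_per_channel]
    return max(per_channel)  # overall time is set by the busiest channel

# 100 reads per drive:
print(total_time_ms(100, [1, 1]))  # two drives, separate channels: 1000 ms
print(total_time_ms(100, [2]))     # two drives on one cable: 2000 ms
```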

Note that under small workloads RAID1 arrays are faster than RAID0 arrays. This is most likely achieved by alternating read operations between the two HDDs of the mirror pair.
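
Here is a minimal sketch of the idea (our illustration of alternating reads, not Promise's actual scheduling algorithm):

```python
from itertools import count

# Sketch: alternate read requests between the two members of a mirror.
# Each drive then serves only every other request, which roughly
# doubles random read throughput.

class MirrorReader:
    def __init__(self):
        self._counter = count()

    def pick_drive(self) -> int:
        """Round-robin: even requests go to drive 0, odd ones to drive 1."""
        return next(self._counter) % 2

reader = MirrorReader()
print([reader.pick_drive() for _ in range(6)])  # [0, 1, 0, 1, 0, 1]
```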

It's time to look at the results of the most mysterious of our patterns (the new incarnation of DataBase). Let me remind you that this pattern uses absolutely random 8KB data blocks, with the read/write ratio changing from 100/0 to 0/100 in 10% steps, so we can observe how the controllers (or single HDDs) behave under different workloads. In the old DataBase pattern the share of write operations was fixed at 33%. If you miss the old DataBase pattern, its results can be found within those of the new one: just draw a vertical line at 33% writes and look for the points where it crosses the graphs.

If reading the table didn't strain your eyes too much, let's look at the pictures:

Looks familiar, doesn't it? We saw the same (or nearly the same) picture in our HighPoint RocketRAID133 controller review. Under a small workload, RAID0 array speed only increases when the share of write operations is 30% or more. With a smaller share of writes, array speed depends only slightly on the number of HDDs in it.

But the picture is quite different when the workload increases (as you remember, the RocketRAID showed nothing of the kind)! RAID0 arrays are always faster than a single HDD; the RAID0 array of three HDDs performs worse than RAID0 of two HDDs when the share of writes is below 50%, but better when it is above 50%.

It is interesting that with a further increase of the request queue depth the graphs clearly fall into two groups: the first comprises the RAID0 arrays of three and four HDDs, the other JBOD and RAID0 of two HDDs. Note that the speed of the arrays that have pairs of HDDs on one cable increases along with the share of write operations. At the same time, the graphs of the two-disk JBOD and RAID0 show the opposite trend: the speed of these arrays drops when the share of writes is high. Well, it seems the drivers were "optimizing" write requests only within one cable, right?
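
We can only guess what that optimization looks like, but one plausible form (purely our speculation, not anything confirmed by Promise) is reordering and merging the writes queued for one channel, so the drives sharing the cable receive fewer, more sequential commands:

```python
# Purely speculative sketch of "within the cable" write optimization:
# sort queued writes by target sector and merge adjacent ones, so the
# drives sharing a channel receive fewer, more sequential commands.

def coalesce_writes(queue: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """queue: (start_sector, sector_count) writes queued for one channel."""
    merged: list[tuple[int, int]] = []
    for start, nsec in sorted(queue):            # elevator-style ordering
        if merged and merged[-1][0] + merged[-1][1] == start:
            prev_start, prev_nsec = merged[-1]   # extend the previous run
            merged[-1] = (prev_start, prev_nsec + nsec)
        else:
            merged.append((start, nsec))
    return merged

# Three scattered writes, two of them adjacent, collapse into two commands:
print(coalesce_writes([(200, 8), (100, 8), (108, 8)]))  # [(100, 16), (200, 8)]
```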

Let's now examine RAID0 and RAID1 arrays built on one and on two controller channels.

Well, well, well… With a small share of writes, the RAID1 array was faster than the two-disk RAID0! The number of channels the array was built on hardly mattered here, as the results of single- and dual-channel RAID1 differ considerably only in RandomWrite mode.

As the request queue depth increases, the picture becomes surreal, but only at first glance. The dual-channel RAID0 array shows linear performance growth as the share of writes rises, then loses speed abruptly in RandomWrite mode. Single-channel RAID0 is more sensitive to the growing share of writes, and its speed grows much faster.

The real mystery is the RAID1 array: as the share of writes grows, its dual-channel variant slows down, while its single-channel variant, on the contrary, does better!

When we increase the request queue depth to 256, the picture gets more mysterious still…

The maximum difference between single- and dual-channel variants of RAID0 and RAID1 is seen in RandomRead mode, while in RandomWrite all the arrays perform about the same.

Let's see how the TX2000 copes with sequential reads and writes:


Well, this is the second controller in a row that cannot squeeze more than 86MB/s out of the Maxtor D740X-6L drives. On the one hand, 86MB/s is great; on the other, I'd like to see clear benefits from the ATA/133 protocol.
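
Some back-of-the-envelope arithmetic puts that figure in context (assuming the controller sat in a 32-bit/33MHz PCI slot for these tests; the exact cause of the ceiling cannot be pinned down from the numbers alone):

```python
# Back-of-the-envelope bus arithmetic (assumes a 32-bit/33MHz PCI slot;
# real buses never reach their theoretical peak).

pci_bw = 32 / 8 * 33.3   # 32-bit PCI at 33MHz: ~133 MB/s theoretical
ata133_bw = 133.0        # ATA/133: 133 MB/s burst per channel
ata100_bw = 100.0        # ATA/100: 100 MB/s burst per channel

observed = 86.0          # the most we squeezed out of the array
print(f"PCI theoretical : {pci_bw:.0f} MB/s")
print(f"Observed        : {observed} MB/s "
      f"({observed / pci_bw:.0%} of the PCI peak)")
# ~86 MB/s is roughly 65% of the theoretical PCI peak, a fairly typical
# real-world ceiling; the 33MHz PCI bus is thus a prime suspect, which
# is why we plan to retest in a 66MHz slot.
```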

Conclusion

The Promise TX2000 controller showed good results in the WinBench tests and simply excellent ones in all Intel IOMeter patterns. Most impressive is the controller's performance in "hard" modes, that is, under heavy workload on the disk subsystem. We'd like to give credit to Promise's software developers for their driver optimization work. As we could see, they did a great job optimizing the operation of two HDDs on one cable, the bottleneck of all dual-channel IDE RAID controllers.

But our adventures are not over yet! Get ready to meet one more ATA/133 RAID controller: the SiI 0680 from Silicon Image.

Then we'll check the performance of the HighPoint RocketRAID 133 (with new drivers) and Promise TX2000 controllers in a 32-bit/66MHz PCI slot, and compare the Promise TX2000 with the Promise FastTrak100 TX2 to understand the benefits of the new drivers and of ATA/133.

And of course, a comparison of four-channel HighPoint and Promise controllers lies ahead, along with a lot of other interesting things. So stay tuned! :)
