
To begin with, I would like to note that the FastTRAK100 TX4 controller has been available on the market for quite a long time already (if I am not mistaken, it has been out for about a year now). Why am I writing about this controller only now? The story is really quite exciting. When I tried this controller for the first time (it was last year), I faced some weird problems that defied all logic. Firstly, the controller didn't show any "correct" results, and secondly, it didn't work with some mainboards, such as the ASUS CUBX-E, for instance, which I used at that time to test controller cards. With the SuperMicro 370DLE mainboard the controller worked, but didn't perform as fast as it was supposed to. The driver installation was also full of surprises. Since I had already tested the FastTRAK100 TX2 controller in the same testbed by then, its drivers remained in Windows as a sign of its former presence there. As soon as the new FastTRAK100 TX4 controller was installed into this no longer "virgin" system, Windows couldn't tell the difference between the newly discovered TX4 and the formerly used TX2 (these devices use similar chips and belong to one family) and froze irrevocably…

When I attempted to install a newer driver version instead of the 2.00 b11 drivers shipped with the controller, the system froze even more eagerly. All in all, I didn't feel like sharing my boiling emotions with you, guys, so I decided to take a break and cool down a bit…

Some time passed, I pulled myself together and decided to repeat the experiment, and… thank goodness, it worked! By that time we already had the HighPoint controller at hand, so…

Anyway, let me introduce today's hero: the Promise FastTRAK100 TX4. The traditionally colored Promise box contained the following baby:

Note that the Promise controller is built of three chips, unlike the HighPoint RocketRAID 404, which is built around a single chip. The Promise PDC20270 is a dual-channel ATA/100 RAID controller chip, although here it serves simply as an ATA chip. Each of the two PDC20270 chips handles two IDE channels, and each channel allows connecting only one hard disk drive.

The Promise chips communicate with the bus via a PCI-to-PCI bridge from Intel (inherited from DEC, whose chip business Intel absorbed). It is really hard not to notice this chip, as it is the largest one on the controller PCB.

The official specification of this chip claims that it supports the 66MHz PCI bus:

And in fact, this is the second feature distinguishing this solution from the HighPoint controller (the first distinguishing feature was the multi-chip configuration, if you have already forgotten it).

Besides the controller card, the package also includes floppy disks with the drivers, ATA cables, a user's manual and a booklet :)

Testbed and Methods

We tested the four-channel Promise FastTRAK100 TX4 controller the same way as we did with the dual-channel controllers (even the testbed configuration remained the same). Read about it in our Three ATA/133 IDE RAID Controllers Comparison. As a result, we will be able to compare the performance of four-channel and dual-channel controllers.

Before we pass over to the tests and results, we would like to draw your attention to two things:

  1. FastTRAK100 TX4 is an ATA/100 RAID controller, so Maxtor D740X-6L HDDs used during the tests worked in ATA/100 mode (which hardly influenced the results, by the way).
  2. FastTRAK100 TX4 was tested with the 33MHz PCI bus (we decided to save the 66MHz battles for future experiments :)); see the quick bandwidth estimate right after this list.
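By the way, simple arithmetic shows why the 33MHz bus clock matters for a four-drive array. Here is a back-of-the-envelope Python sketch (the ~30MB/s per-drive linear speed is our assumption for a Maxtor D740X-class drive, not a figure measured in this article):

# Back-of-the-envelope PCI bandwidth estimate (all figures approximate).
PCI_CLOCK_MHZ = 33           # 33MHz PCI bus, as used in these tests
PCI_WIDTH_BYTES = 4          # 32-bit bus = 4 bytes per clock

pci_peak_mb_s = PCI_CLOCK_MHZ * PCI_WIDTH_BYTES   # ~133MB/s nominal peak

# Assumed sustained linear read speed of one drive (our assumption):
drive_mb_s = 30

for drives in (1, 2, 3, 4):
    demand = drives * drive_mb_s
    verdict = "OK" if demand < pci_peak_mb_s * 0.8 else "close to bus-limited"
    print(f"{drives} drive(s): ~{demand}MB/s wanted, "
          f"bus peak ~{pci_peak_mb_s}MB/s -> {verdict}")

Four drives already ask for roughly 120MB/s out of the ~133MB/s theoretical peak, so keep this in mind when we get to the sequential tests.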

Performance

WinBench99 1.2

The first test is WinBench99. We will not dwell on the results too much, as they are pretty typical:

Take a look at the average access time for the RAID1 and RAID01 arrays. As for any array with mirroring involved, the average access time to a random sector can be reduced by interleaving read requests between both drives of the array.
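The idea is easy to show in code. Below is a toy read dispatcher for a mirror, purely our illustration (the actual logic of the Promise firmware is not documented here): reads alternate between the two copies, so each drive services only half of the random reads, while writes would still have to go to both drives.

import itertools

class MirrorDispatcher:
    """Toy read dispatcher for a two-drive mirror (RAID1).

    Alternates read requests between the two copies so that, on average,
    each drive services only half of the random reads.
    """
    def __init__(self, drives=("hdd0", "hdd1")):
        self._next = itertools.cycle(drives)

    def route_read(self, lba):
        # Writes would go to BOTH drives; a read needs only one copy.
        return next(self._next), lba

d = MirrorDispatcher()
for lba in (120, 7, 9001, 42):
    drive, addr = d.route_read(lba)
    print(f"read LBA {addr} -> {drive}")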

The rest of the picture is already very familiar to us. The performance in the Business test hardly depends on the array type, while the results of the High-End test are, on the contrary, very sensitive to the array type and the number of hard disk drives used.

If we compare the arrays' performance in the two integral tests:

We will see that the RAID0 array has no competitors!

The RAID1 array performed as fast as a single HDD, while the RAID01 array yielded to the RAID0 array of 2 HDDs in High-End Disk WinMark.

Well, we see something very similar in NTFS:



If you are fond of nice pictures, you will enjoy the beautiful linear read graphs obtained in WinBench:

JBOD (Graph)
RAID1 (Graph)
RAID0 2 HDD (Graph)
RAID0 3 HDD (Graph)
RAID0 4 HDD (Graph)
RAID01 (Graph)

Intel IOMeter: DataBase

In this semi-synthetic pattern we will study how the controller drivers react to a varying share of write requests under different workloads. The data blocks are 8KB in size and have random addresses.

In the table below, the horizontal scale indicates the share of writes and the vertical one the queue depth.
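The pattern itself is easy to reproduce. Here is a small Python sketch of a DataBase-style request generator (the function and parameter names are ours, not IOMeter's):

import random

def database_pattern(write_share, disk_sectors, block_kb=8, seed=0):
    """Yield (op, lba) pairs mimicking the DataBase pattern:
    fixed 8KB blocks at random addresses, reads and writes mixed
    according to write_share (0.0 .. 1.0)."""
    rng = random.Random(seed)
    sectors_per_block = block_kb * 1024 // 512
    while True:
        op = "write" if rng.random() < write_share else "read"
        lba = rng.randrange(0, disk_sectors - sectors_per_block)
        yield op, lba

gen = database_pattern(write_share=0.4, disk_sectors=80_000_000)
for _ in range(5):
    print(next(gen))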

Well, and now get ready to enjoy the diagrams:

With a queue depth of 1, the performance of RAID0 arrays made of different numbers of HDDs remains nearly the same as long as the share of writes is not very big. As the writes get more numerous, the arrays quickly settle into fixed positions in the race: the more drives in the array, the faster it runs in modes close to RandomWrite (I wonder what would happen if we disabled lazy write on all the drives).

In fact, it is the unusual behavior of RAID1 and RAID01 that catches our eye immediately. We could foresee that they would perform faster than a single HDD or a RAID0 array of 2 drives in RandomRead mode, but the performance of RAID1 actually equaled that of RAID01 in all modes with a writes share below 40%, while at the same time exceeding the performance of the 2-HDD RAID0. The performance of the RAID01 array is higher than that of the RAID0 array of two hard disk drives in modes with a writes share below 50%; then their speeds level out.

Wow! What stability! What perfection!

It is not the "parallel" character of the RAID0 graphs that deserves attention here, although noticing it would be absolutely correct. Look at the RAID01 array! We have never seen anything like that before: in RandomRead mode, RAID01 proves faster than RAID0 of 4 hard disk drives. True, thanks to interleaving of read requests between both elements of the mirrored pair (which are stripe pairs in this case), we could theoretically squeeze out higher performance than a 4-HDD RAID0 array delivers in RandomRead mode. And now we see it in practice!
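A toy model makes the theoretical gain visible. Assuming (crudely) that access time is proportional to head travel, a read that can be served by either of two mirrored copies can always go to the drive whose head happens to be closer to the target. The Monte Carlo sketch below is our illustration, not a simulation of the actual controller:

import random

rng = random.Random(1)
N = 100_000

single = mirrored = 0.0
for _ in range(N):
    target = rng.random()                  # random track, normalized to 0..1
    head_a, head_b = rng.random(), rng.random()
    single += abs(target - head_a)         # only one copy of the data
    mirrored += min(abs(target - head_a),  # pick the closer of two copies
                    abs(target - head_b))

print(f"average travel, single drive:  {single / N:.3f}")    # ~0.333 of full stroke
print(f"average travel, mirrored pair: {mirrored / N:.3f}")  # ~0.208 of full stroke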

As the queue depth keeps growing, the convex graphs turn concave. This is most probably because the controller can select the most efficient processing order when the queue is deep (by the way, as the queue grows, the performance also increases significantly). However, it is only the appearance of the graphs that has changed, not their order. The JBOD and RAID0 graphs are still (almost :)) parallel to one another, and the RAID1 and RAID01 graphs outperform the RAID0 arrays of 2 and 4 HDDs respectively in RandomRead mode. When the writes share is big, you can clearly see that RAID1 and RAID01 arrays are faster than RAID0 arrays only because there are some read requests in the queue (and these requests can be distributed between both elements of the mirrored pair in RAID1 and RAID01 arrays).
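For the curious, here is what such reordering can look like. This shortest-seek-first scheduler is only a plausible sketch (the real driver's algorithm is unknown to us), but it illustrates why a deeper queue lets the controller cut head travel and raise performance:

def sstf_order(head, pending):
    """Shortest-seek-time-first: greedily service the pending request
    closest to the current head position. A toy model of the reordering
    a controller can do once the queue is deep enough."""
    order = []
    pending = list(pending)
    while pending:
        nearest = min(pending, key=lambda lba: abs(lba - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest
    return order

queue = [900, 10, 500, 15, 880]
print("FIFO order:", queue)
print("SSTF order:", sstf_order(head=0, pending=queue))
# SSTF yields [10, 15, 500, 880, 900] -- far less head travel than FIFO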

Intel IOMeter: SequentialRead

The second synthetic pattern is SequentialRead. Here the controller receives read requests (queue depth = 4) of varying size (from 512 bytes to 1MB).

Let's split the results into two groups. The first group includes the results shown by the JBOD and RAID0 arrays:

Contrary to all our expectations, the controller didn't show any super-performance. Although the controller carries two ATA chips and each hard drive gets its own IDE channel (which eliminates the situation when two HDDs sharing one cable clash with each other), the TX4's performance in the SequentialRead pattern appeared just a little bit higher than that of any dual-channel controller.

And in the next diagram we can see the graphs for RAID1 and RAID01 arrays:

Here we can see the following regularity: the graphs start having "problems" when the requested data block reaches 64KB for RAID1 (64KB is the stripe block size and, at the same time, the biggest data block that can be transferred by a single ATA command) and 128KB for RAID01 (i.e. again 64KB for each HDD of the stripe pair).
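To make the breaking points clearer, here is a sketch of how a stripe set has to split large requests (the 64KB stripe size is from the article; the splitting code itself is our illustration):

STRIPE_KB = 64  # stripe block size, also the largest single ATA command here

def split_request(offset_kb, length_kb, n_drives=2):
    """Split a logical read into per-drive chunks for a stripe set.
    Returns a list of (drive, drive_offset_kb, chunk_kb) tuples."""
    chunks = []
    while length_kb > 0:
        stripe_idx, within = divmod(offset_kb, STRIPE_KB)
        drive = stripe_idx % n_drives
        chunk = min(STRIPE_KB - within, length_kb)
        chunks.append((drive, (stripe_idx // n_drives) * STRIPE_KB + within, chunk))
        offset_kb += chunk
        length_kb -= chunk
    return chunks

# A 128KB read to a 2-drive stripe pair = one 64KB command per drive:
print(split_request(offset_kb=0, length_kb=128))

A 128KB request thus turns into separate 64KB ATA commands, one per drive of the stripe pair, which is exactly where the graphs start misbehaving.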

It is quite possible that the interleaving of reads between the elements of the mirrored pair implemented by Promise works "inversely" here. If that is true, there shouldn't be any problems with writes…

Intel IOMeter: SequentialWrite



As we see, there are no real problems here, but the write speed of the 2-, 3- and 4-HDD RAID0 arrays cannot make up for the effort and time spent on creating them.

The same thing happens to RAID1 and RAID01: the performance is low. It looks as if the PCI bridge were limiting the controller's performance in tasks requiring intensive data transfer.

Intel IOMeter: WorkStation

This pattern emulates the user's work with different applications in NTFS5:



What we see here contradicts the widespread opinion that RAID is unnecessary for work in Windows applications. The bigger the queue depth (read: the heavier the workload), the more evident the advantage of the RAID0 arrays. The RAID1 array competes with the 2-HDD RAID0, and RAID01 struggles with the 3-HDD RAID0.

Intel IOMeter: StorageReview Patterns 2002

So, now we have finally come to server patterns:



Note that as soon as the workload becomes more or less serious, the RAID0 arrays become much more efficient than the single HDD. The performance of the 4-HDD RAID0 appeared at least three times as high as that of a single hard disk drive. Do you remember the tests of IDE RAID controllers we carried out a year ago? Could we even dream of speeds like that back then?

The RAID01 performance is noticeably higher than that of the 3-HDD RAID0, and it is also a great step forward. Remember what Armstrong said: "That's one small step for a man, one giant leap for mankind."

As we remember, the WebServer pattern is very interesting because it includes only reads. Combined with Promise's alternating reads from the two elements of the mirrored pair, this produced very interesting results.

Look: the RAID1 array easily outpaces the RAID0 array of 2 drives under all workloads. The same is valid for RAID01 and the RAID0 of 4 HDDs. Unbelievable! :) But true.

As we have already seen when discussing the results of the DataBase pattern, the RAID01 and RAID1 arrays implemented by the Promise FastTRAK100 TX4 controller are faster than RAID0 arrays of 2 or 4 hard disk drives when the share of writes is small. And this is exactly the mode webservers work in.

Conclusion

Although this review is slightly late, it appeared right after the competitor's :)

This test session with the Promise FastTRAK100 TX4 helped me get rid of some highly unpleasant memories. However, tests are usually aimed not at moral satisfaction, but at figuring out the highs and lows of a product. So let's do just that:

Highs:

  • Excellent scalability in RAID0;
  • Beautiful performance in RAID1;
  • Brilliant operation in RAID01.

Lows:

  • Low performance in SequentialWrite modes.
 