
Testbed and Methods

The following benchmarks were used:

  • IOMeter 2003.02.15
  • WinBench 99 2.0
  • FC-Test 1.0

Testbed configuration:

  • Intel SC5200 system case
  • Intel SE7520BD2 mainboard
  • Two Intel Xeon 2.8GHz CPUs (800MHz FSB)
  • 2 x 512MB PC3200 ECC Registered DDR SDRAM
  • IBM DTLA 307015 hard disk drive as the system disk (15GB)
  • Onboard ATI Rage XL graphics controller
  • Windows 2000 Professional with Service Pack 4

We built all RAID arrays on the same controller, an Areca ARC1220 installed into a PCI Express x8 slot. The controller’s characteristics inevitably affect the results, but at least we don’t have to guess which part of a result is due to the HDDs and which part is due to the controller.


The HDDs were installed into the default cages of the SC5200 system case and fastened with four screws to the bottom of the cage. We decided to benchmark RAID arrays consisting of four disks in RAID0, RAID10 and RAID5 mode.

These modes load the disks very differently even when performing the same operations: RAID10 adds mirroring on top of striping, while in RAID5 each small write request translates into four operations on the drives (read the old data, read the old parity, write the new data, write the new parity). Of course, each array was built out of HDDs from the same batch and with the same firmware version.
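The write penalties mentioned above can be sketched as a quick back-of-the-envelope calculation. The following Python snippet is illustrative only: the per-disk IOPS figure is an assumed placeholder, not a measurement from our tests, and real arrays deviate from this ceiling because of caching and full-stripe writes.

```python
# Back-end disk operations generated by one random small-write request.
# RAID5's factor of four comes from the read-modify-write cycle:
# read old data, read old parity, write new data, write new parity.
WRITE_PENALTY = {"RAID0": 1, "RAID10": 2, "RAID5": 4}

def effective_write_iops(raid_level: str, disks: int, disk_iops: int) -> float:
    """Rough ceiling on random small-write IOPS, ignoring caching."""
    return disks * disk_iops / WRITE_PENALTY[raid_level]

# Four disks at an assumed 150 random IOPS each:
for level in ("RAID0", "RAID10", "RAID5"):
    print(level, effective_write_iops(level, disks=4, disk_iops=150))
```

This is why a four-disk RAID5 can, in the worst random-write case, deliver no more than the throughput of a single drive, while the same disks in RAID0 scale almost linearly.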

The controller was set to its Performance mode for maximum speed during the tests. This mode enables deferred writing and look-ahead reading both on the controller (in its own buffer memory, if available) and on the disks. Thus we will see whether the HDDs’ firmware is optimized for working in a RAID array.

We’ll refer to each array by the model of the HDDs it is made of, using the drives’ series names. Where a series is represented by two different disks (Samsung’s T133S and Western Digital’s RE2), we’ll add the specific model name in parentheses.
