
Testbed and Methods

Our testbed was configured as follows:

  • Intel SC5200 case;
  • Intel SHG2 mainboard;
  • 2 x Intel Xeon 2.8/400FSB CPUs;
  • 2 x 512MB PC2100 Registered DDR SDRAM with ECC;
  • IBM DTLA 307015 HDD;
  • Onboard ATi Rage XL graphics;
  • Windows 2000 Pro SP4.

For benchmarking we used WinBench 99 and Intel IOMeter.

For the WinBench 99 tests the hard drive was formatted as a single partition with the default cluster size. Each WinBench test was run seven times, and the best result was taken for further analysis.
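As a rough illustration of this best-of-seven rule, here is a minimal Python sketch; run_winbench below is a hypothetical stand-in, since WinBench 99 is a GUI benchmark and was not actually driven from a script.

    # Minimal sketch of the "best of seven runs" rule. run_winbench() is a
    # hypothetical stand-in for one complete WinBench 99 pass; the real
    # benchmark is a GUI application and was not scripted this way.
    def best_of(run_benchmark, runs=7):
        """Repeat a benchmark and keep the best (highest) score."""
        return max(run_benchmark() for _ in range(runs))

    # Example (hypothetical): best_score = best_of(run_winbench)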

To compare the hard disk drives' performance in Intel IOMeter we used the FileServer and WebServer patterns. These patterns are intended to measure disk subsystem performance under workloads typical of file and web servers.
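The original table with the pattern definitions is not reproduced here, so the sketch below only illustrates how such IOMeter workloads are typically specified. The request-size mixes are the commonly cited definitions of these patterns, quoted from memory, and should be treated as an assumption rather than the exact settings used in this test.

    # Assumed composition of the standard IOMeter patterns (not taken from
    # this article): request sizes with their shares, plus the read and
    # random shares of the access stream.
    FILESERVER = {
        "request_sizes": [(512, 0.10), (1024, 0.05), (2048, 0.05),
                          (4096, 0.60), (8192, 0.02), (16384, 0.04),
                          (32768, 0.04), (65536, 0.10)],
        "read_share": 0.80,    # 80% reads, 20% writes
        "random_share": 1.00,  # fully random access
    }

    WEBSERVER = {
        "request_sizes": [(512, 0.22), (1024, 0.15), (2048, 0.08),
                          (4096, 0.23), (8192, 0.15), (16384, 0.02),
                          (32768, 0.06), (65536, 0.07), (131072, 0.01),
                          (524288, 0.01)],
        "read_share": 1.00,    # read-only workload
        "random_share": 1.00,
    }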

Our colleague Sergey Romanov (aka GreY) developed a WorkStation pattern for Intel IOMeter based on StorageReview's study of disk subsystem workloads in ordinary Windows applications. The pattern uses the averaged IPEAK statistics StorageReview provided for the Office, High-End and Bootup work modes under the NTFS5 file system, as described in its Testbed3 methodology.

This pattern serves to show how attractive each HDD would be for an ordinary Windows user.
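The actual IPEAK figures behind the pattern are not quoted in this article, so the Python sketch below only shows the general idea of averaging per-trace statistics into a single synthetic workload; all numbers are placeholders.

    # How a combined "WorkStation" pattern can be derived from per-trace
    # statistics. The numbers are placeholders, not StorageReview's data.
    TRACES = {
        "Office":   {"read_share": 0.8, "random_share": 0.8},
        "High-End": {"read_share": 0.7, "random_share": 0.9},
        "Bootup":   {"read_share": 0.9, "random_share": 0.7},
    }

    def average_pattern(traces):
        """Average each statistic over all traces to get one combined pattern."""
        keys = next(iter(traces.values())).keys()
        return {k: sum(t[k] for t in traces.values()) / len(traces) for k in keys}

    WORKSTATION = average_pattern(TRACES)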

Finally, we checked the drives' ability to handle sequential read and write requests of variable size and tested their performance in the DataBase pattern, which imitates the operation of a disk subsystem processing SQL-like requests.
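These two test series can be thought of as simple parameter sweeps. The sketch below illustrates that idea; the concrete request sizes and read/write steps are assumptions made for illustration, not values taken from the article.

    # Assumed sweeps for the last two test series (illustration only).
    SEQUENTIAL_SIZES = [512 * 2 ** i for i in range(12)]  # 512 B .. 1 MB requests
    DATABASE_WRITE_SHARES = [i / 10 for i in range(11)]   # 0% .. 100% writes

    def database_steps(block_size=8192):
        """DataBase-style steps: random requests of one size, stepped write share."""
        return [{"size": block_size, "random_share": 1.0, "write_share": w}
                for w in DATABASE_WRITE_SHARES]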

The controller featured BIOS version 1.00.0.37, and we used driver version 1.00.0.37. To monitor the status of the arrays and to synchronize them with one another we used the Promise Array Management (PAM) utility, version 4.0.0.18.

The controller was installed into the PCI-X/133MHz slot (even though it supports only 32-bit PCI at 33/66MHz). The WD360GD Raptor hard disk drives were installed into the default HDD chassis of the SC5200 case and fastened at the bottom with four screws.

During the main test session lazy write was enabled for all drives. The driver's request caching modes (WriteBack and WriteThrough) were switched on the fly as the situation required.

 