


Testbed and Methods

Our testbed was configured as follows:

  • Intel SC5200 case;
  • Intel SHG2 mainboard;
  • 2 x Intel Xeon 2.8/400FSB CPUs;
  • 2 x 512MB PC2100 Registered DDR SDRAM with ECC;
  • IBM DTLA 307015 HDD;
  • Onboard ATi Rage XL graphics;
  • Windows 2000 Pro SP4.

We used the following software:

For the WinBench 99 tests the hard drive was formatted as a single partition with the default cluster size. Each WinBench test was run seven times; the best result was then taken for further analysis.
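The best-of-seven policy described above amounts to a simple selection step. A minimal sketch (the function name and the benchmark callable are ours, for illustration only):

```python
def best_of(run_test, repeats=7):
    """Run a benchmark `repeats` times and keep the best (highest) score.

    `run_test` is any zero-argument callable returning a numeric score,
    standing in for one WinBench 99 run.
    """
    return max(run_test() for _ in range(repeats))
```

Taking the best rather than the average of several runs filters out one-off slowdowns caused by background OS activity, at the cost of slightly optimistic numbers.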

To compare the performance of the hard disk drives in Intel IOMeter, we used the FileServer and WebServer patterns.

These patterns are intended to measure the disk subsystem performance under workloads typical of file- and web-servers.
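An IOMeter pattern of this kind is essentially an access specification: a read/write share, a random-access share, and a weighted distribution of request sizes. The sketch below shows that structure; the read shares (80% reads for FileServer, 100% reads for WebServer) follow the commonly cited descriptions of these patterns, while the size weights are illustrative placeholders, not Intel's exact definitions:

```python
import random

# Illustrative IOMeter-style access specifications (weights are NOT the
# exact Intel pattern definitions).
FILESERVER = {
    "read_share": 0.80,    # 80% reads, 20% writes
    "random_share": 1.0,   # 100% random accesses
    # (request size in bytes, weight) pairs
    "sizes": [(512, 10), (1024, 5), (2048, 5), (4096, 60),
              (8192, 2), (16384, 4), (32768, 4), (65536, 10)],
}
WEBSERVER = {
    "read_share": 1.0,     # 100% reads
    "random_share": 1.0,
    "sizes": [(512, 22), (1024, 15), (2048, 8), (4096, 23),
              (8192, 15), (16384, 2), (32768, 6), (65536, 9)],
}

def next_request(pattern, disk_size, rng=random):
    """Draw one (offset, size, op) request from an access spec."""
    sizes, weights = zip(*pattern["sizes"])
    size = rng.choices(sizes, weights=weights)[0]
    op = "read" if rng.random() < pattern["read_share"] else "write"
    offset = rng.randrange(0, disk_size - size, 512)  # sector-aligned
    return offset, size, op
```

A load generator draws requests from such a spec at a given queue depth and reports the resulting IOps and response times, which is what the tables on the following pages summarize.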

Our colleague Sergey Romanov (aka GreY) developed a WorkStation pattern for Intel IOMeter based on StorageReview's study of the disk subsystem workload in ordinary Windows applications. The pattern relies on the averaged IPEAK statistics StorageReview published for the Office, High-End, and Bootup work modes under the NTFS5 file system, as mentioned in their Testbed3 description.

This pattern serves to estimate how attractive each HDD is for the ordinary Windows user.

Finally, we checked how the drives handle sequential read and write requests of variable size, and tested their performance in the DataBase pattern, which imitates the work of a disk subsystem processing SQL-like requests.
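The sequential test described above steps the request size up from one sector to large blocks and records throughput at each size. A minimal sketch of the read half (the function name, file path handling, and size range are our assumptions; real testing is done against the raw device, not a file):

```python
import time

def sequential_sweep(path, total_bytes=8 * 2**20):
    """Read `path` with back-to-back requests, doubling the request size
    from 512 bytes to 1 MB, and return {size: throughput in MB/s}."""
    results = {}
    size = 512
    while size <= 2**20:
        with open(path, "rb") as f:
            start = time.perf_counter()
            done = 0
            while done < total_bytes:
                chunk = f.read(size)
                if not chunk:
                    f.seek(0)   # wrap around if the test file is short
                    continue
                done += len(chunk)
            elapsed = time.perf_counter() - start
        results[size] = done / elapsed / 2**20  # MB/s
        size *= 2
    return results
```

Plotting throughput against request size shows where the drive (or RAID array) saturates: small requests are limited by per-request overhead, large ones by the sustained transfer rate.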

The controller featured BIOS version . We used driver version . To monitor the arrays' status and to synchronize the arrays with one another, we used the special PAM (Promise Array Management) utility, version .

The controller was installed into the PCI-X/133MHz slot (even though it only supports 32-bit PCI at 33/66MHz). The WD360GD Raptor hard disk drives were installed into the default drive cage of the SC5200 case and fastened at the bottom with four screws.

During the main test session, lazy writing was enabled for all drives. The driver's request caching modes (WriteBack and WriteThrough) were switched on the fly as the situation required.
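The difference between the two caching modes mentioned above can be illustrated with a toy model (class and attribute names are ours): in WriteThrough mode every write request goes straight to the disk, while in WriteBack mode it is acknowledged as soon as it lands in the cache and is committed to the platters later.

```python
class CachedDisk:
    """Toy model of a controller cache with two write policies."""

    def __init__(self, write_back):
        self.write_back = write_back
        self.cache = {}       # dirty sectors awaiting flush
        self.disk = {}        # "persistent" storage
        self.disk_writes = 0  # physical writes actually performed

    def write(self, sector, data):
        if self.write_back:
            self.cache[sector] = data   # fast: acknowledged from cache
        else:
            self.disk[sector] = data    # slow: hits the platters now
            self.disk_writes += 1

    def flush(self):
        """Commit all dirty sectors to disk (WriteBack mode only has work here)."""
        for sector, data in self.cache.items():
            self.disk[sector] = data
            self.disk_writes += 1
        self.cache.clear()
```

Rewriting the same sector three times costs three physical writes in WriteThrough mode but only one (at flush time) in WriteBack mode, which is why WriteBack typically wins on write-heavy patterns at the price of data-loss risk on power failure.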


