Testbed and Methods
The testbed was configured as follows:
- Intel SC5200 case;
- Intel SHG2 mainboard;
- Two Intel Xeon 2.8GHz CPUs (400MHz FSB);
- 2x512MB PC2100 ECC Registered DDR SDRAM;
- IBM DTLA 307015 HDD;
- Onboard ATI Rage XL graphics;
- Windows 2000 Pro SP4 OS.
We tested the controller in two benchmarks:
- WinBench 99 2.0;
- Intel IOMeter 2003.02.15.
In WinBench 99, we created a single partition spanning the drive's total capacity. Each WinBench test was run seven times, and the best result was taken for further analysis.
To compare the arrays' performance in Intel IOMeter, we used the File Server and Web Server patterns.
These patterns serve for measuring the performance of the disk subsystem under a workload typical for file and web servers.
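For reference, the File Server specification that ships with IOMeter mixes several request sizes at fixed weights with an 80/20 read/write split and fully random access (the Web Server pattern is similar but 100% reads). The weights below are quoted from the stock pattern as we remember it, so verify them against your own IOMeter installation; the generator itself is just an illustrative sketch, not part of the benchmark:

```python
import random

# File Server access specification as commonly shipped with IOMeter
# (request size in bytes -> share of requests, %). Illustrative values:
# check them against the pattern file before relying on them.
FILE_SERVER_SIZES = {
    512: 10, 1024: 5, 2048: 5, 4096: 60,
    8192: 2, 16384: 4, 32768: 4, 65536: 10,
}
READ_SHARE = 0.8  # 80% reads, 20% writes, 100% random access

def next_request(rng: random.Random):
    """Draw one (operation, size) pair according to the pattern weights."""
    size = rng.choices(list(FILE_SERVER_SIZES),
                       weights=list(FILE_SERVER_SIZES.values()))[0]
    op = "read" if rng.random() < READ_SHARE else "write"
    return op, size

rng = random.Random(0)
reqs = [next_request(rng) for _ in range(100_000)]
reads = sum(1 for op, _ in reqs if op == "read") / len(reqs)
print(f"read share = {reads:.2f}")  # close to 0.80
```

Over a long run the generated stream converges to the specified shares, which is exactly what IOMeter does when it replays an access specification against the disk subsystem.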
We also used the Workstation pattern created by Sergey Romanov (aka GReY). It is based on statistical data about the disk subsystem workload given in the SR Testbed 3 description. The data were gathered for the NTFS5 file system in three operational modes: Office, Hi-End and Boot-up.
This pattern shows how well the controller performs in a typical Windows environment.
Lastly, we checked out the controller’s ability to process sequential read/write requests of variable size and its performance in the Database pattern, which loads the disk subsystem with SQL-like requests.
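The Database pattern referred to above is usually defined in IOMeter as 8KB transfers with a 67/33 read/write split and 100% random access; we quote those parameters as the commonly shipped defaults, not from the article itself. A minimal sketch of such an "SQL-like" request stream, under that assumption:

```python
import random

# Assumed stock IOMeter "Database" specification: 8KB transfers,
# 67% reads / 33% writes, fully random access. Verify against your
# copy of IOMeter before treating these numbers as authoritative.
BLOCK_SIZE = 8 * 1024   # bytes
READ_SHARE = 0.67

def database_request(rng: random.Random, capacity_blocks: int):
    """One SQL-like transaction: a random, block-aligned 8KB read or write."""
    op = "read" if rng.random() < READ_SHARE else "write"
    offset = rng.randrange(capacity_blocks) * BLOCK_SIZE
    return op, offset, BLOCK_SIZE

rng = random.Random(42)
sample = [database_request(rng, capacity_blocks=1 << 20)
          for _ in range(50_000)]
read_share = sum(1 for op, _, _ in sample if op == "read") / len(sample)
```

Because every request is the same small size and randomly placed, this pattern stresses the controller's ability to reorder and cache scattered writes, which is why the WB/WT caching settings discussed below matter most here.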
Our controller ran firmware version 9964, and we used driver version 220.127.116.1194.
The controller was installed into a PCI-X/133MHz slot (although the controller itself supports only PCI64/66MHz).
WD360GD (Raptor) hard disk drives were installed into the rails of the SC5200 system case and fastened at the bottom with four screws.
By default, we tested the controller with lazy writing into its cache enabled (WriteBack). Lazy writing was also enabled on the hard disk drives.
We tested the influence of the controller's caching mode – WriteBack (WB) or WriteThrough (WT) – on four-disk RAID0, RAID5 and RAID10 arrays. However, it turned out that the cache-buffer algorithm affects the controller's speed in only one specific operational mode.
Moreover, the lazy-write setup in the controller BIOS is designed so that when you change from WB to WT, you are by default offered to disable lazy write on all the drives as well. In other words, it is assumed that a user who prefers WT wants an array with maximum tolerance to failures, and disabling lazy write on the HDDs in the array contributes to this. But when you switch back from WT to WB, the lazy-write status of the disks doesn't change! Well, really, why should the controller think for you?
A series of experiments showed that the controller's caching mode influences the performance of the array much less than lazy write on the disks does. Just look at the diagrams:
These are the results of a RAID10 array in the Database pattern. Here, WriteBack and WriteThrough denote the controller's caching mode, while CacheOn/CacheOff refer to the lazy-write setting of the drives in the array.
As you can see, the controller's caching algorithm affects the result only when the queue contains nothing but read operations, while the influence of the disks' cache on performance is substantial in all modes.
We suppose the Adaptec 2410SA is not the only controller that changes the disks' cache mode (lazy write) when you switch the controller's lazy write on and off. In particular, the difference in speed of the 3ware 8500-8 Escalade controller between its WriteBack and WriteThrough modes cannot be explained by the controller cache alone, since that cache is rather small, only 2.25MB. In all probability, the disks' caching mode changes as you switch between these two controller modes. For that reason, and to avoid confusion in terminology, we always kept lazy write enabled for the controller's cache buffer, while "WriteBack" and "WriteThrough" denote enabled and disabled lazy write on the disks in the array.