Testbed and Methods
The following testing utilities were used:
- IOMeter 2003.02.15
- FC-Test 1.0
- Intel SC5200 system case
- Intel SE7520BD2 mainboard
- Two Intel Xeon 2.8GHz processors with 800MHz FSB
- 2 x 512MB registered DDR PC3200 ECC
- IBM DTLA hard disk drive as a system disk
- Eight Fujitsu MBA3073RC hard disk drives in the disk subsystem
- Integrated ATI Rage XL graphics
- Microsoft Windows Server 2008
We have had to change our testbed somewhat by switching to Windows Server 2008. The reason is simple: LSI just does not release drivers for the ancient Windows 2000.
Unfortunately, the hardware of our testbed has remained the same. We still use a mainboard with a PCI Express 1.0 slot and hard drives with a 3Gbps SAS interface. However, with our test method, in which each HDD is connected to an individual controller port, the interface should have no effect on performance.
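A back-of-the-envelope calculation shows why the older interfaces should not be a bottleneck in this one-drive-per-port configuration. The per-drive throughput figure below is an assumption for illustration, not a measured result:

```python
# Rough bandwidth sanity check for the one-drive-per-port setup.
# DRIVE_MBPS is an assumed sustained sequential speed, not a measurement.

DRIVE_MBPS = 125              # assumed throughput of one 15K SAS drive, MB/s
NUM_DRIVES = 8

SAS_PORT_MBPS = 300           # 3 Gbps SAS link after 8b/10b encoding
PCIE_X8_GEN1_MBPS = 8 * 250   # PCI Express 1.0: 250 MB/s per lane, 8 lanes

aggregate = DRIVE_MBPS * NUM_DRIVES   # total streamed by all eight drives

assert DRIVE_MBPS < SAS_PORT_MBPS        # no single SAS link is saturated
assert aggregate < PCIE_X8_GEN1_MBPS     # the host link has headroom, too
print(aggregate)  # 1000
```

Even with all eight drives streaming at once, the aggregate stays well under the 2GB/s of a PCI Express 1.0 x8 link, and each drive sits far below its own 300MB/s port.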
So, the controller was installed into the mainboard’s PCI Express x8 slot. We used Fujitsu MBA3073RC disks, installing them into the default rack of the SC5200 case, and tested the controller with eight HDDs. We have changed the set of arrays for our tests: to save time and effort, we no longer test 4-disk and degraded arrays, but we have added RAID50.
We will publish the results of a single Fujitsu MBA3073RC on an LSI SAS3041E-R controller for reference, but you should be aware that this controller/drive combination has a well-known problem: it is slow at writing in FC-Test.
The stripe size is set at 64KB for each array type.
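To illustrate what the 64KB stripe size means for data placement, here is a minimal sketch of how a simple striped (RAID0-style) layout maps a logical byte offset onto member disks. This is an illustration only; the controller’s actual internal mapping and parity rotation are not documented here:

```python
# Sketch: mapping a logical offset to (disk, on-disk offset) in a plain
# striped layout with the 64KB stripe size used in our tests.

STRIPE = 64 * 1024   # 64KB stripe size, as configured on the controller

def locate(offset, disks):
    """Return (disk index, offset within that disk) for a logical byte offset."""
    chunk = offset // STRIPE                 # which 64KB chunk the offset is in
    disk = chunk % disks                     # chunks rotate across the disks
    row = chunk // disks                     # full stripe rows completed so far
    return disk, row * STRIPE + offset % STRIPE

print(locate(0, 8))              # (0, 0)      first chunk, first disk
print(locate(64 * 1024, 8))      # (1, 0)      next chunk lands on disk 1
print(locate(8 * 64 * 1024, 8))  # (0, 65536)  wraps back to disk 0, next row
```

With a 64KB stripe, a sequential read larger than 512KB touches all eight disks, while a small random request is usually served by a single drive.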
The controller was tested with the latest BIOS and driver versions we could get: BIOS 12.0.1-008 and driver 22.214.171.124.
Before proceeding to our tests, we want to mention one peculiarity of this test session. In some cases you will see results of two test runs with absolutely identical arrays, because the controller behaved oddly in our first test session. Just take a look at the following diagram.
We had seen lots of weird things in our tests, but there was something fundamentally wrong about the 8-disk arrays being slower than a single disk at sequential writing, especially as there was no such problem with reading. At first we could not pinpoint the reason: everything was right with the BBU and caching, the logs were clean, but the speed refused to rise whatever we did with the settings.
We eventually found the cause of the problem: the order of our tests. Our standard procedure goes like this: a script launches various types of loads in IOMeter, and then we manually partition the disk and launch FC-Test. The sequence of IOMeter loads is as follows: the access time test goes first (a lot of operations with small random-address data blocks), then the random reading and writing tests with a varying data block size, then a group of sequential-request tests (including multithreaded loads), and finally the emulated server loads and the Database pattern.

We found that if we ran FC-Test first and then began the IOMeter part with the group of sequential-load tests, we got completely different results. The only explanation we could think of was that the controller adjusts itself for the current load. However, it seems to require some time, or a certain number of requests, to decide that its caching policy needs adjustment, and during our rather short tests the controller could not keep up with our rate of load change.
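The two orderings can be sketched as follows. The phase names are placeholders standing in for the real IOMeter patterns and FC-Test runs, not actual commands:

```python
# Sketch of the two test orderings described above (phase names are
# placeholders, not real IOMeter/FC-Test invocations).

RANDOM_FIRST = [        # our standard sequence
    "access time", "random read/write",
    "sequential", "multithreaded", "database", "fc-test",
]
SEQUENTIAL_FIRST = [    # the reordered run that produced different results
    "fc-test", "sequential", "multithreaded",
    "access time", "random read/write", "database",
]

def run_cycle(phases):
    for phase in phases:
        print(phase)    # stand-in for launching the real load

run_cycle(SEQUENTIAL_FIRST)
```

The point of publishing both cycles is precisely that the controller sees a different load history in each one and, apparently, settles on a different caching policy by the time the sequential tests run.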
And one more note: when changing arrays, we had to shut the server down after removing the previous array. Otherwise, we would get a low write speed on the newly created array as well.
It was not our goal to explore the controller’s operating algorithms in depth, yet we decided to publish data from two test cycles: one beginning with random-address loads, the other with sequential loads.
So, we can evaluate the controller’s performance in two scenarios: a file server and a video server.