Testbed and Methods
First, a few words about the hardware. Since the Radeon HD 5870 is a top-end product ($399), it is unlikely to be installed in a computer with a dual- or single-core processor. However, this is theoretically possible, so besides two quad-core processors from AMD and Intel I also added a dual-core processor, which produced some valuable results.
Here are the hardware components that we used to build our test systems:
- DFI LANParty DK X48-T2RS (Intel X48 Express), LGA 775, BIOS 10/03/2008;
- Gigabyte GA-MA790GP-DS4H (AMD 790GX), Socket AM2+, BIOS F7;
- ASUS P6T Deluxe (Intel X58 Express), LGA 1366, BIOS 1701;
- Intel Core 2 Duo E8400 3.0 GHz (Wolfdale, C0), 1.2 V, 6144 KB L2;
- AMD Phenom II X4 965 3.4 GHz (Deneb, RB-C3), 1.4 V, 4 x 512 KB L2, 6 MB L3;
- Intel Core i7-920, 2.67 GHz (Bloomfield, C0), 1.2 V, 4 x 256 KB L2, 8 MB L3;
- CPU cooler: Thermalright IFX-14 (two Thermalright TR-FDB fans at 1360 RPM);
- Thermal interface: Tuniq TX-2;
- System memory:
- DDR2 SDRAM 2 x 1 GB Corsair Dominator TWIN2X2048-9136C5D (1142 MHz / 5-5-5-18 / 2.1 V);
- DDR2 SDRAM 2 x 1 GB CSX DIABLO CSXO-XAC-1200-2GB-KIT (1200 MHz / 5-5-5-16 / 2.4 V);
- DDR3 PC3-12800 3 x 2GB OCZ Platinum Low-Voltage Triple Channel (Spec: 1600 MHz / 7-7-7-24 / 1.65 V);
- Graphics cards:
- HIS Radeon HD 5870 1024 MB, 850/4800 MHz;
- AMD Radeon HD 5870 1024 MB, 850/4800 MHz;
- Disk subsystem: Western Digital VelociRaptor (300GB, SATA-II, 10000 RPM, 16MB cache, NCQ) inside Scythe Quiet Drive 3.5” HDD silencer and cooler;
- Backup HDD: Western Digital Caviar Green WD10EADS (SATA-II, 1000 GB, 5400 RPM, 32 MB, NCQ);
- Optical drive: Samsung SH-S183L DVD-burner;
- System case: Antec Twelve Hundred (front panel: two Noiseblocker NB-Multiframe S-Series MF12-S1 fans at 900 RPM and Scythe Gentle Typhoon fan at 900 RPM; back panel: two Scythe SlipStream 120 fans at 900 RPM; top panel: standard 200 mm fan at 400 RPM);
- Control and monitoring panel: Zalman ZM-MFC2;
- Power supply: Zalman ZM1000-HP 1000 W (with a default 140 mm fan).
On each of the three platforms the graphics cards were benchmarked both singly and in CrossFireX mode at three CPU frequencies, with the other system frequencies kept as similar as possible. The quad-core Intel Core i7 was tested at its default frequency of 2.67 GHz as well as at increased frequencies of 3.4 GHz and 4.1 GHz (the maximum clock rate I could achieve with my hardware). Overclocking was done by changing the base frequency and the multiplier.
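The base-frequency-times-multiplier relationship can be sketched as follows. Note that the specific base clock/multiplier pairs below are illustrative assumptions for round numbers, not the exact BIOS settings used in these tests:

```python
# The resulting CPU clock is simply the base clock times the multiplier.
def cpu_clock_mhz(base_clock_mhz: float, multiplier: int) -> float:
    return base_clock_mhz * multiplier

# Core i7-920 at stock: ~133 MHz base clock x 20 multiplier = ~2.67 GHz.
print(cpu_clock_mhz(133.3, 20))   # ~2666 MHz
# Plausible (assumed) combinations for the overclocked states:
print(cpu_clock_mhz(170, 20))     # 3400 MHz
print(cpu_clock_mhz(205, 20))     # 4100 MHz
```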
I enabled the Load Line Calibration feature in the BIOS of my ASUS P6T Deluxe mainboard; the core voltage varied from the default 1.2 V up to 1.3825 V depending on the frequency. The system memory (DDR3) worked at 1.6 GHz in the first two cases and at 1.56 GHz in the third, its voltage set at 1.64 V. Thus, the memory frequency remained almost the same throughout. Coupled with identical timings (7-7-7-14_1T), this helped maintain very similar memory read and write speeds as well as access times. The other overclocking parameters were left at their defaults (Auto) in the mainboard’s BIOS.
It was harder with the AMD Phenom II X4 965 because my mainboard could not raise the base frequency above 256 MHz. Increasing the HT Link and North Bridge frequencies as high as possible without losing stability, I overclocked the CPU to 4.1 GHz at 1.525 V. I also tested the Phenom II X4 at its default 3.4 GHz (at a voltage of 1.4 V) and at 2.67 GHz (the default frequency of the Intel Core i7, which puts it between the Phenom II X4 910 and X4 920 models).
Although I used fast DDR2 modules, the integrated memory controller of the AMD processor imposed some limitations, resulting in reduced performance: I could not use CAS Latency 4, Command Rate 1T, or a step-up memory divisor. Unfortunately, I did not have two good 2 GB modules, and it did not seem right to sacrifice 2 GB of RAM (by removing two modules) just to benchmark modern games in Windows 7. So I tried to squeeze what I could out of my components: at a frequency of 908 MHz I set the main and secondary timings to their minimum stable values.
I want to stress that I did not deliberately slow down the AMD platform; I simply overclocked it as best I could with the components at hand. Looking ahead, I can say that my tests did not reveal any such slowdown.
Finally, the third platform is built around an Intel Core 2 Duo E8400, whose default frequency is 3.0 GHz and whose maximum frequency with air cooling is over 4.3 GHz at 1.55 V. However, I tested it at the same frequencies as the two previous CPUs. This will show the difference between CPUs of different architectures in games and benchmarks, as well as the practical benefit of having more processor cores at the same operating frequency.
I used the same DDR2 memory modules on this platform as on the AMD platform, but the frequencies and timings were different due to the peculiarities of overclocking an Intel CPU. At a CPU frequency of 4.1GHz, the memory worked at 1092MHz. At CPU frequencies of 3.4GHz and 2.67GHz, the memory frequencies were 1023MHz and 1066MHz, respectively. The main timings were higher in every case than on the AMD platform: 5-5-5-12_2T. I did not change the additional timings.
The use of different system memory, DDR3 on the Intel Core i7 platform and DDR2 on the AMD Phenom II X4 platform, makes a direct comparison of these two platforms impossible, but such a comparison was not the goal of this test session. Besides, the difference in performance between DDR2-800 (5-5-5-15_2T) and DDR3-1600 (7-7-7-20_1T) on AMD platforms is very small. And since my DDR2 ran at 908 MHz rather than the 800 MHz of the mentioned article, with the main timings set at 5-4-4-12, the difference from DDR3-1600 (7-7-7-20_1T) was even smaller.
One more pitfall for the AMD platform in this test session was PCI Express. The Gigabyte GA-MA790GP-DS4H mainboard is based on the AMD 790GX chipset, which supports two ATI Radeon cards in CrossFireX only as PCIe x16 + PCIe x8, whereas both Intel platforms offer two full PCIe x16 slots. I can’t say for certain whether this causes a performance hit on the AMD platform, because I could not carry out a comparative test on a mainboard with the AMD 790FX chipset (which supports two PCIe x16 slots). I will keep this fact in mind when analyzing the results.
By the way, I had planned to test with a smaller CPU frequency increment, but this proved unnecessary, as you’ll see in the Performance section.
I used two graphics cards, which we reviewed before: an AMD Radeon HD 5870 and a HIS Radeon HD 5870. One of them was tested as a single graphics card, and both were also tested together in CrossFireX mode.
Now let’s move on to the software and benchmarking tools we used. All tests were performed under Windows 7 Ultimate RTM x64 with the following drivers:
- Intel Chipset Drivers 18.104.22.1680 WHQL for the mainboard chipset;
- DirectX End-User Runtimes from August 2009;
- Catalyst 9.11 graphics card drivers for ATI based graphics solutions.
The graphics cards were tested at two resolutions: 1280x1024 and 1920x1200. Of course, the first resolution has no practical value for graphics cards as powerful as these, let alone for a CrossFireX configuration of them. However, the difference in platform performance and CPU dependence will be more obvious there than at 1920x1200. Our monitor doesn’t support resolutions above 1920x1200, but this is a minor issue: very few gamers use higher screen resolutions, and the tested graphics cards cannot deliver sufficient performance for a comfortable gaming experience at 2560x1600 anyway.
The tests were performed in two image-quality modes: “High Quality” (HQ) without any image-quality enhancements, and “HQ + AF16x + AA4/8x” with 16x anisotropic filtering and 4x full-screen anti-aliasing (or 8x FSAA if the average framerate was high enough for a comfortable gaming experience). We enabled anisotropic filtering and full-screen anti-aliasing from the game settings or configuration files; if the corresponding options were missing, we changed these settings in the Catalyst Control Panel. Vertical sync was always off in the driver control panel.
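The anti-aliasing rule above amounts to a simple threshold check. A minimal sketch, assuming a 60 fps comfort threshold (the article does not name an exact figure):

```python
# Pick the FSAA level per the rule above: 8x only when the average
# framerate still allows comfortable play, otherwise fall back to 4x.
# The 60 fps threshold is an assumption for illustration.
def pick_fsaa_level(avg_fps: float, comfortable_fps: float = 60.0) -> int:
    return 8 if avg_fps >= comfortable_fps else 4
```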
The list of applications and benchmarks includes three popular synthetic benchmarking suites and 15 games of various genres. All games were updated with the latest patches available at the time of testing. Here is the complete list of tests with their settings (games are listed in release order):
- 3DMark 2006 (Direct3D 9/10) – build 1.1.0, default settings and 1920x1200+AF16x+AA8x;
- 3DMark Vantage (Direct3D 10) – v22.214.171.124, Performance and Extreme profiles (basic tests only);
- Unigine Heaven Demo (Direct3D 11) – version 1.0, maximum graphics quality settings including shadows, activated tessellation;
- World in Conflict (Direct3D 10) – version 126.96.36.199 (b34), “Very High” graphics quality profile, UI texture quality = Compressed; Water reflection size = 512, other settings – by default;
- Crysis (Direct3D 10) – game version 1.2.1, “Very High” settings profile, two runs of “Assault harbor” test from Crysis Benchmark Tool version 188.8.131.52;
- Unreal Tournament 3 (Direct3D 9) – version 2.1, highest graphics quality settings (level 5), Motion Blur and Hardware Physics enabled, a FlyBy of the “DM-ShangriLa” map (two consecutive cycles) using HardwareOC UT3 Bench v184.108.40.206;
- Lost Planet: Colonies (Direct3D 10) – version 1.0, Maximum Quality settings, DX10 HDR Rendering, integrated benchmark including two scenes, but the results are provided from the first scene only (ARENA 1);
- S.T.A.L.K.E.R.: Clear Sky (Direct3D 10.1) – game version 1.5.10, Improved Full DX10 Lighting profile plus 16x anisotropic filtering and other maximum graphics quality settings, my own s04 demo record (a triple run of the test) on the first gaming level;
- Far Cry 2 (Direct3D 10) – version 1.03, Ultra High settings profile, two runs of the Ranch Small test from Far Cry 2 Benchmark Tool (v220.127.116.11);
- Call of Duty 5: World at War (Direct3D 9) – version 1.6, graphics and textures are set at “Extra” level, Breach demo from the same-name level;
- Left 4 Dead (Direct3D 9) – version 18.104.22.168 b3939, maximum quality, new d6 demo (two runs) on “Lighthouse” map in “Survival” game mode;
- Warhammer 40,000: Dawn of War II (Direct3D 10.1) – version 22.214.171.12434, image quality settings set to Ultra level in the game menu, two runs of built-in benchmark;
- BattleForge (Direct3D 11) – version 1.1, maximum image quality settings, shadows enabled, SSAO technology disabled, two runs of the built-in benchmark;
- Stormrise (Direct3D 10.1) – version 126.96.36.199, maximum effects and shadows quality, Ambient Occlusion disabled, two runs of the “$mn_sp05” mission demo scene;
- Tom Clancy’s H.A.W.X. (Direct3D 9) – version 1.03, maximum graphics quality settings; HDR, DOF and Ambient Occlusion enabled, two runs of the built-in benchmark;
- Call of Juarez: Bound in Blood (Direct3D 10.1) – version 188.8.131.52, maximum graphics quality settings, Shadow map size = 1024, 100-second demo in the beginning of “Miners Massacre” level;
- Wolfenstein (OpenGL 2.0) – version 1.2, maximum graphics quality settings, own d1 demo recording on Facility level;
- Resident Evil 5 (Direct3D 10.1) – variable benchmark with maximum graphics quality settings without motion blur; we took the AVG values from the third scene for further analysis because it was the most resource-hungry.
I’d like to add that if a game allowed recording the minimum fps, those readings were also added to the charts, which is of great importance for CPU-dependence analysis. Luckily, 8 games out of 15 allow it. We ran each game test or benchmark twice and took the better result for the diagrams, but only if the difference between the two runs didn’t exceed 1%. If it did, we ran the test at least once more to achieve repeatable results.
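The repeatability rule above can be expressed as a small loop; `run_benchmark` is a hypothetical callable standing in for one pass of a game benchmark, and the rerun cap is an assumption (the article only says "at least one more time"):

```python
# Run twice; accept the best result only if the two best runs agree
# within 1%, otherwise keep rerunning (up to a cap) until they do.
def best_repeatable_fps(run_benchmark, tolerance=0.01, max_runs=6):
    results = [run_benchmark(), run_benchmark()]
    while len(results) < max_runs:
        best, runner_up = sorted(results, reverse=True)[:2]
        # Relative spread between the two best runs so far.
        if (best - runner_up) / best <= tolerance:
            break
        results.append(run_benchmark())
    return max(results)
```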