Testbed Configuration and Testing Methodology
Our previous Trinity test session was dedicated solely to the graphics core and its performance. I believe no questions remain unanswered there: the performance of AMD’s integrated graphics accelerator is exceptionally good. However, the rest of the hybrid processor is just as important, and it will be the main topic of today’s test session. Therefore, the majority of the benchmarks discussed here were run with an external graphics accelerator, so the integrated GPU of the Trinity processors had no influence whatsoever on the results. In other words, we are going to find out how x86 cores with the new Piledriver microarchitecture handle typical everyday tasks.
In the real world we deal with mass-production processors, which are not only based on different microarchitectures, but also run at different frequencies, in different platforms, and with different automatic overclocking technologies. That is why we selected the participants for this round of tests based not only on processor features, but primarily on market positioning.
AMD provided us with an A10-5800K processor – the top desktop Trinity model. Moreover, AMD positions the entire A10 family as an alternative to Intel’s Core i3, as their recommended pricing clearly shows. Therefore, today’s hero will primarily compete against Intel’s dual-core CPUs from the still-popular Sandy Bridge as well as the newer Ivy Bridge generations. However, we have also included the results of the junior Core i5 models, which, just like the A10-5800K, are quad-core rather than dual-core processors. On top of that, there are AMD products for other platforms as well. Of course, Trinity was also compared against its predecessor – the Socket FM1 A8-3870K processor with the Llano design – as well as against two Socket AM3+ products. The first is the quad-core Bulldozer-based FX-4170, which matches the A10-5800K in price. The second is the six-core Bulldozer-based FX-6200, which, unusual as it may seem, is also priced comparably to the A10-5800K.
As a result, we used the following hardware and software components for today’s test session:
Processors:
- AMD A10-5800K (Trinity, 4 cores, 3.8-4.2 GHz, 4 MB L2);
- AMD A8-3870K (Llano, 4 cores, 3.0 GHz, 4 MB L2);
- AMD FX-6200 (Zambezi, 6 cores, 3.8-4.1 GHz, 6 MB L2 + 8 MB L3);
- AMD FX-4170 (Zambezi, 4 cores, 4.2-4.3 GHz, 4 MB L2 + 8 MB L3);
- Core i3-2125 (Sandy Bridge, 2 cores + HT, 3.3 GHz, 0.5 MB L2 + 3 MB L3);
- Core i3-2130 (Sandy Bridge, 2 cores + HT, 3.4 GHz, 0.5 MB L2 + 3 MB L3);
- Core i3-3240 (Ivy Bridge, 2 cores + HT, 3.4 GHz, 0.5 MB L2 + 3 MB L3);
- Core i3-3220 (Ivy Bridge, 2 cores + HT, 3.3 GHz, 0.5 MB L2 + 3 MB L3);
- Core i5-2320 (Sandy Bridge, 4 cores, 3.0-3.3 GHz, 1 MB L2 + 6 MB L3);
- Core i5-3330 (Ivy Bridge, 4 cores, 3.0-3.2 GHz, 1 MB L2 + 6 MB L3).
Mainboards:
- ASUS Crosshair V Formula (Socket AM3+, AMD 990FX + SB950);
- ASUS P8Z77-V Deluxe (LGA1155, Intel Z77 Express);
- ASUS F2A85-V Pro (Socket FM2, AMD A85);
- Gigabyte GA-A75-UD4H (Socket FM1, AMD A75).
- Memory: 2 x 4 GB, DDR3-1866 SDRAM, 9-11-9-27 (Kingston KHX1866C9D3K2/8GX).
- Graphics card: NVIDIA GeForce GTX 680 (2 GB/256-bit GDDR5, 1006/6008 MHz).
- System disk: Intel SSD 520 240 GB (SSDSC2CW240A3K5).
- Power supply unit: Corsair AX1200i (80 Plus Platinum, 1200 W).
- Operating system: Microsoft Windows 7 SP1 Ultimate x64.
Drivers:
- AMD Catalyst 12.8 Driver;
- AMD Chipset Driver 12.8;
- Intel Chipset Driver 184.108.40.2069;
- Intel Graphics Media Accelerator Driver 15.26.12.2761;
- Intel Management Engine Driver 8.1.0.1248;
- Intel Rapid Storage Technology 11.2.0.1006;
- Nvidia GeForce 301.42 Driver.
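A quick aside on the memory configuration listed above: the primary timings (9-11-9-27) are given in clock cycles, so their absolute value depends on the data rate. A minimal sketch of the standard conversion (cycles × 2000 / data rate in MT/s yields nanoseconds; the function name is ours):

```python
def timing_ns(cycles, data_rate_mts):
    """Convert a DDR3 timing in clock cycles to nanoseconds.

    The memory clock runs at half the data rate (DDR), so one clock
    cycle lasts 2000 / data_rate_mts nanoseconds.
    """
    return cycles * 2000.0 / data_rate_mts

# CAS latency of our CL9 DDR3-1866 kit:
print(round(timing_ns(9, 1866), 2))  # ~9.65 ns
```

Higher data rates thus offset higher cycle counts: the same 9 cycles at DDR3-1333 would amount to about 13.5 ns.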
For our tests of the AMD A10-5800K platform we installed the KB2645594 and KB2646060 OS patches, which adapt the Windows scheduler to the Bulldozer and Piledriver microarchitectures.
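Whether these hotfixes are actually present on a given Windows installation can be checked against the QFE (installed-updates) list. A hedged sketch (the helper name is ours; `wmic` exists only on Windows, so the function returns None elsewhere):

```python
import subprocess

HOTFIXES = ["KB2645594", "KB2646060"]  # Bulldozer/Piledriver scheduler patches

def installed_hotfixes(required):
    """Return the subset of `required` hotfixes reported by wmic,
    or None if wmic is unavailable (non-Windows system)."""
    try:
        out = subprocess.run(["wmic", "qfe", "get", "HotFixID"],
                             capture_output=True, text=True, check=True).stdout
    except (OSError, subprocess.CalledProcessError):
        return None
    present = {line.strip() for line in out.splitlines()}
    return [kb for kb in required if kb in present]

if __name__ == "__main__":
    found = installed_hotfixes(HOTFIXES)
    if found is None:
        print("wmic not available; run this on Windows")
    else:
        print("installed scheduler patches:", found)
```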
I have to say that Trinity processors are effectively no longer hybrid once a discrete graphics accelerator is installed (the same is true for the Intel CPUs). In this configuration the GPU integrated into the processor is disabled, so it becomes impossible to utilize its resources via OpenCL or DirectCompute. However, applications that support these interfaces can always take advantage of the resources of the discrete graphics accelerator.
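To illustrate that last point: an OpenCL-aware application simply enumerates whatever GPU devices the installed runtimes expose, so with the integrated GPU disabled it ends up on the discrete card. A minimal sketch, assuming the third-party `pyopencl` package (the function name is ours; it degrades gracefully where no OpenCL runtime is present):

```python
def list_gpus():
    """Return (platform name, device name) pairs for every OpenCL GPU found."""
    gpus = []
    try:
        import pyopencl as cl  # third-party: pip install pyopencl
        for plat in cl.get_platforms():
            try:
                devices = plat.get_devices(device_type=cl.device_type.GPU)
            except Exception:
                continue  # this platform exposes no GPU devices
            gpus.extend((plat.name, dev.name) for dev in devices)
    except Exception:
        pass  # pyopencl or the OpenCL runtime itself is missing
    return gpus

print(list_gpus() or "no OpenCL GPU runtime found")
```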