First you say the Xeon is actually 26% faster. Then you go on to say they used the maximum rated power consumption (TDP) for the Intel chip and compared it against the actual measured power consumption of the ARM SoC.
So, over 1 million requests, how could this chip be consuming 103 watts? That's its TDP. It would be far-fetched to believe the Intel chip was running at 100% load the whole time. You can't compare a measured value on one against a rated maximum on the other.
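To make the objection concrete, here's a rough sketch of the energy math. The 103 W (TDP) and 144 s figures are from the article; the 60 W average is a purely hypothetical number for a chip that isn't saturated the whole run, just to show how much assuming TDP inflates the result:

```python
# 103 W (TDP) and 144 s come from the article's numbers.
# 60 W is a HYPOTHETICAL average draw for a non-saturated chip,
# used only to illustrate the size of the error.
seconds = 144

energy_at_tdp = 103 * seconds       # 14832 J (~14.8 kJ) for 1M requests
energy_hypothetical = 60 * seconds  # 8640 J (~8.6 kJ) at a 60 W average

print(energy_at_tdp, energy_hypothetical)  # 14832 8640
```

Whatever the real average draw was, charging the Intel chip its full TDP for the entire run is the worst possible assumption for it.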
Furthermore, the Intel chip runs at three times the clock speed and has more advanced execution units. I'd like to know how the quad-core ARM SoC was supposedly able to match the performance of the quad-core Intel chip. The time to process 1 million requests is 182 seconds on the ARM vs 144 seconds on Intel. That seems a little weak for Intel.
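At least the quoted times are internally consistent with the 26% figure. A quick check using only the numbers above:

```python
# Throughput from the article's figures: 1M requests in 182 s (ARM) vs 144 s (Intel).
requests = 1_000_000
arm_time, intel_time = 182, 144  # seconds

arm_rps = requests / arm_time      # ~5495 requests/s
intel_rps = requests / intel_time  # ~6944 requests/s

speedup = intel_rps / arm_rps      # same as arm_time / intel_time
print(f"Intel is {(speedup - 1) * 100:.0f}% faster")  # Intel is 26% faster
```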
According to Wikipedia, a quad-core Cortex-A9 at 2.5 GHz is rated at 15 DMIPS (the SoC here is an A9 at only 1.1 GHz, so that reference part is substantially faster). The Xeon would be around 38 DMIPS (scaling the 2600K's 3.4 GHz figure down to 3.3 GHz). Per core that works out to 3.5 vs 9.43. So this SoC should be about 1/3 the performance, if you removed the bottlenecks.
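Working out the ratio from those per-core figures (units as quoted above):

```python
# Per-core DMIPS figures quoted above.
arm_per_core = 3.5
intel_per_core = 9.43

ratio = arm_per_core / intel_per_core
print(f"{ratio:.2f}")  # 0.37, i.e. a bit over 1/3 of Intel's per-core throughput
```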
How could this thing possibly match the Intel chip in performance? I would imagine the bottlenecks held back the Intel CPU somehow, but even then it wouldn't be running at max TDP the whole time. On a virtualized server there would be plenty to keep the Intel chip busy, even with the bottlenecks, whereas this SoC would only be good for lightly used servers.
This almost looks like a best-case scenario for ARM against an absolute worst-case scenario for Intel. Good marketing.