

Advanced Micro Devices on Tuesday publicly disclosed its strategy and roadmap to recapture market share in enterprise and data center servers by unveiling products that address key technologies and meet the requirements of the fastest-growing data center and cloud computing workloads.

AMD revealed details of its 2014 server portfolio including accelerated processing units (APUs), two- and four-socket CPUs, and details on what it expects to be the industry’s premier ARM server processor. These forthcoming AMD Opteron processors bring important innovations to the rapidly changing compute market, including integrated CPU and GPU compute (APU); high core-count ARM servers for high-density compute in the data center; and substantial improvements in compute per-watt per-dollar and total cost of ownership.

“Our strategy is to differentiate ourselves by using our unique IP to build server processors that are particularly well matched to a target workload and thereby drive down the total cost of owning servers. This strategy unfolds across both the enterprise and data centers and includes leveraging our graphics processing capabilities and embracing both x86 and ARM instruction sets. AMD led the world in the transition to multicore processors and 64-bit computing, and we intend to do it again with our next-generation AMD Opteron families,” said Andrew Feldman, general manager of the server business unit at AMD.

In 2014, AMD will introduce its first highly power-efficient server microprocessor, which it positions as the industry’s premier ARM server central processing unit. The 64-bit CPU, code-named Seattle, is based on the ARM Cortex-A57 core, will ship first in an 8-core and later a 16-core configuration, and is expected to run at 2GHz or higher while delivering category-leading throughput. The Seattle processor is expected to offer 2-4 times the performance of AMD’s recently announced AMD Opteron X-Series processor with a significant improvement in compute-per-watt. It will deliver 128GB RAM support, extensive offload engines for better power efficiency and reduced CPU loading, server-caliber encryption and compression, and legacy networking including integrated 10GbE. It will be the first processor from AMD to integrate AMD’s advanced Freedom Fabric for dense compute systems directly onto the chip. AMD plans to sample Seattle in the first quarter of 2014 with production in the second half of the year.

Also next year, AMD intends to deliver its first high-performance accelerated processing unit for servers, code-named Berlin. Berlin is an x86-based processor that will be available both as a CPU and as an APU. The processor boasts four next-generation “Steamroller” cores and, thanks to integrated Radeon HD streaming processors, will offer almost 8 times the gigaflops-per-watt of the current AMD Opteron 6386SE processor. It will be the first server APU built on AMD’s heterogeneous system architecture (HSA), which enables uniform memory access for the CPU and GPU and makes programming as easy as C++. Berlin will offer extraordinary compute capabilities per watt, enabling massive rack density. It is expected to be available in the first half of 2014.

The third processor announced today is code-named Warsaw, AMD’s next-generation 2P/4P offering. Warsaw is projected to provide significantly improved performance-per-watt over today’s AMD Opteron 6300 family even though it retains the same Piledriver cores. Warsaw is fully socket-compatible with its predecessors and carries identical software certifications, making it ideal for the AMD Open 3.0 server. Warsaw will not, however, bring any new capabilities at either the processor or the system-architecture level. It is expected to be available in the first quarter of 2014.

Tags: AMD, Opteron, berlin, seattle, warsaw, Cortex, Steamroller, Piledriver, x86, ARM, GPGPU


Comments currently: 9
Discussion started: 06/19/13 12:16:39 AM
Latest comment: 06/20/13 07:32:38 AM


It makes sense that AMD is not competing in the high end server race with Intel. The purchase of SeaMicro at the beginning of 2012 sealed that strategy. AMD has been designing products for the dense microserver market since. It stands to reason that since AMD is not aiming for the high end server ground, it would be unlikely to design high end consumer CPUs either. AMD is aiming for computation efficiency with HSA technology rather than drag car nitro performance. And they can do so because hardware has now really outstripped the vast majority of software in terms of computation needs - so-called good enough computing. But if this is the case, I'm not so sure how this would gel with their re-branding strategy as the maker of gaming hardware. I would have thought they would still need to continue to design FX CPUs. My personal opinion is that they should release a (one model) <= 165 Watt TDP Steamroller multicore CPU, way above their Kaveri APU products (that have TDPs of 100 Watts).
1 1 [Posted by: linuxlowdown  | Date: 06/19/13 12:16:39 AM]

Computers will never be good enough because programmers get lazier and lazier. There used to be a time when you would use 100% of your 16MHz processor for productive purposes and now I know plenty of people who complain 100GFLOP isn't enough.

Server side though, AMD is looking to make a killing on low performance web servers that are usually I/O bottlenecked, rather than on the less profitable mid-level service ones (when you include development and manufacturing glitches)
0 1 [Posted by: basroil  | Date: 06/19/13 07:45:30 AM]
What you just said does not make any logical sense. "There used to be a time when you would use 100% of your 16MHz processor for productive purposes" - and you know this how? "and now I know plenty of people who complain 100GFLOP isn't enough." - for some tasks 1 million GFLOPS is not enough, how about folding of amino acids/proteins research, don't you want to find the cure for cancer faster?
0 1 [Posted by: deepblue08  | Date: 06/19/13 11:38:49 AM]
He is actually quite right. The point is that programmers are not really programmers anymore. Very few write code by hand; it's all drag and drop (Visual this and that) and very low efficiency languages like Java. Java can be "fast", but nowhere near low level languages written by hand. It reminds me of when, at the end of the 90's, I was making websites. I tried Frontpage and it kinda worked for me, but a simple page was 100KB. I thought WTF? Re-wrote it by hand, with the SAME effect on screen, and it was like 1.5KB of HTML code. Processing speed on a Pentium 120 was somewhat adequate to the size of the code. While it was next to instant for the hand-written code, it was quite a few seconds for the one generated by Frontpage. Today I see similar cases, and not only with Java. Threads consuming a bucket-load of RAM and CPU resources to do something that 30KB of assembler code would get done 100x faster. As to folding@home, I know it was written by pretty smart people, but then how can you be sure that the algorithm is optimal? Perhaps they are using something that can be called a "brute force attack". It does not seem feasible to me that it requires hundreds of thousands of computers world-wide, working for years, to process some chemical reactions, whatever they may be, but probably I wouldn't know better than anyone here.
0 0 [Posted by: KumaN  | Date: 06/19/13 11:40:39 PM]
He is right on the money. I work at a major AV-vendor and the coders literally drag "elements" into a workspace, tie down which variables will be global and local, and then hand it over to the optimization team who then themselves feed it to the compilers...

As soon as you use a compiler like that, you are knowingly trading away efficiency for cost: you have a server farm run and compile your code rather than a team building highly optimized modules.
0 0 [Posted by: amdzorz  | Date: 06/20/13 07:32:38 AM]
The problem is that there is still a large gap between single-threaded programming and multi-threaded programming. More big languages like C++, Java, C# should have most of their library functions available in parallel form (even just array operations).
HSA will make GPU programming a lot easier, which is really needed.

Also, a 16MHz PC was simple: it had no pipeline, only one ALU and a limited instruction set. So try to hand-optimise for a 3-issue superscalar multi-core processor with a lot of other programs running, memory timings, etc. It's too hard.
0 0 [Posted by: massau  | Date: 06/19/13 04:17:13 PM]

AMD is completely reorganizing so that they have the best products for mainstream consumers in all product segments. That's good for consumers as AMD always offers the best bang for the buck and often the best bang for any price such as with APUs which will eventually replace 90+ % of all processors used in all PC products. Discrete CPU/GPU combos will be a small niche market for those with too much money and a large ego.

AMD may in fact deliver a Steamroller desktop CPU that is a big performance bump over Vishera. It won't be a 160W model; it is likely to be a 125W or lower design, yet it will deliver far better performance than the FX-9590.
2 1 [Posted by: beenthere  | Date: 06/19/13 05:42:36 AM]

But AMD will be up against an 8-core Haswell-E at 140 Watts within the next 12 months. So I believe a 165 Watter at 28nm is reasonable to stay remotely competitive. But we'll see.
0 0 [Posted by: linuxlowdown  | Date: 06/19/13 06:18:22 AM]

