

Advanced Micro Devices said Thursday that it would tape out the first products to be manufactured using 14nm FinFET and 20nm planar process technologies in the coming quarters. The company did not elaborate on actual products, but the names of the process technologies do indicate that the chip designer will work with both GlobalFoundries and Taiwan Semiconductor Manufacturing Co.

“We are typically at the leading edge across the technology nodes. We are fully top-to-bottom in 28nm now across all of our products, and we are transitioning to both 20nm and to FinFETs over the next couple of quarters in terms of designs. So we will continue to do that across our foundry partners. […] We will do 20nm first and then we will go to FinFETs,” said Lisa Su, senior vice president and general manager of global business units at AMD, during a quarterly conference call with financial analysts.

The high-ranking executive from AMD did not reveal any additional details about the planned products or the exact process technologies. Given the timing, Ms. Su referred to TSMC’s 20nm process technology, which goes online in February ’14 and is likely to be used to manufacture AMD’s next-generation graphics processing units. In addition, AMD will use the 14nm-XM FinFET process technology from GlobalFoundries to make certain low-power products; 14nm-XM should enter the mass-production stage sometime in calendar 2014.

While AMD did mention 20nm and 14nm FinFET process technologies, that does not mean products made using both will actually be available in 2014. Most likely, the company will only tape out new products that will reach the market in late 2014 or early 2015. Virtually all AMD roadmaps indicate that the vast majority of the company’s 2014 products will be made using 28nm and 32nm SOI process technologies. Still, AMD will likely introduce new graphics chips made using a 20nm fabrication process.

Tags: AMD, TSMC, Globalfoundries, 14nm, 14nm-XM, FinFET, 20nm, Semiconductor, Radeon, Volcanic Islands, Pirate Islands


Comments currently: 22
Discussion started: 10/18/13 08:49:25 PM
Latest comment: 07/13/16 11:07:03 AM


14nm should help AMD's mobile play, but either 20nm or 14nm would help get this 125W TDP monster under control.
1 1 [Posted by: KeyBoardG  | Date: 10/18/13 08:49:25 PM]

That's really the fault of AMD's architecture, considering Intel manages to use a lot less power on its 32nm designs for its high-end chips than AMD's current high-end chips on the same node.
2 0 [Posted by: SteelCity1981  | Date: 10/19/13 01:39:52 AM]
AMD still has a southbridge-based power management setup, while Intel has power management on die. That means much less latency on phase changes and thus much more granular control over power, which translates to much lower power usage. AMD cannot do that without ditching AM/F socket compatibility.
0 0 [Posted by: Andrew Ihegbu  | Date: 12/04/14 07:37:32 PM]

That's what I predicted two months back: AMD will use GlobalFoundries' 14nm-XM.
20nm is for the next GPU lineup.
14nm-XM is for ARM designs, because GlobalFoundries' 14nm comes in a single variant aimed only at low-power devices.
1 2 [Posted by: mudi1  | Date: 10/18/13 09:09:48 PM]

Aren't GPU transistors of a low-power design?
Hence the ~1GHz frequencies.
0 2 [Posted by: microbe  | Date: 10/18/13 10:28:22 PM]
High end desktop discrete GPUs often consume 200W+ power and contain many more transistors than a desktop CPU.

I suspect it has more to do with acceptable heat levels and power draw than with the transistors' switching capability.
1 0 [Posted by: JBG  | Date: 10/19/13 05:33:19 AM]
The switching capability is linked to the voltage applied, and therefore to power draw.
Mobile transistors are built for optimal performance per watt, which is perfect for GPUs in my opinion.
Either way, FinFETs aren't only for smartphones.
0 0 [Posted by: microbe  | Date: 10/19/13 11:16:24 AM]
Yes, voltage and frequency: as process resolution increases, the voltage can be reduced and the frequency can therefore increase with the same heat output. Theoretically, 14nm means four times the transistors at four times the frequency. However, to keep heating down, say in a 7" UD (4K) tablet, they would probably only double the number of transistors and double the frequency. With four times the total compute per watt, UD at four times the pixels would load the system about the same as today's FHD, i.e. 1080p. This would allow not only UD playback and recording but gaming too; remember, a UD Android TV HDMI stick currently costs $100. There are also economy-of-scale advantages, and 14nm GRAM further helps the process. In a desktop system you would compromise less on heat, so you would take an eight-times advantage rather than the full sixteen-times advantage, which would create a toaster oven. Based on, for example, Samsung's statements that 14nm, 64-bit, double-FHD 5" smartphones will be introduced next year, a 7" UD screen represents the same pixels per inch. Toshiba demonstrated a 10" UD tablet at CES 2013 at the same PPI as today's high-end smartphones, e.g. the Nexus 5 at $400 and the Note 3 with UD recording at $800.
0 0 [Posted by: Stuart Brown  | Date: 11/22/13 01:11:54 PM]
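The voltage/frequency arithmetic in the comment above can be sanity-checked against the standard dynamic-power relation for CMOS logic, P ≈ α·C·V²·f. A minimal sketch in Python; the capacitance, voltage and frequency values below are purely illustrative assumptions, not figures from the article:

```python
# Dynamic (switching) power of CMOS logic scales roughly as
# P = a * C * V^2 * f, where a is the activity factor, C the switched
# capacitance, V the supply voltage and f the clock frequency.

def dynamic_power(c_farads, v_volts, f_hz, activity=1.0):
    """Approximate dynamic power in watts."""
    return activity * c_farads * v_volts ** 2 * f_hz

# Hypothetical baseline chip: 1 nF switched capacitance, 1.2 V, 1 GHz.
base = dynamic_power(1e-9, 1.2, 1e9)

# Same design after a node shrink: capacitance and voltage both drop,
# so frequency can rise while power stays roughly flat.
shrunk = dynamic_power(0.7e-9, 1.0, 1.4e9)

print(f"baseline: {base:.2f} W, shrunk: {shrunk:.2f} W")
```

Because voltage enters squared, even a modest voltage reduction at a smaller node buys back a sizeable frequency increase at the same power, which is the trade-off the commenter is describing.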

125W TDP CPUs are not considered "monsters" by anyone with a technical clue. The primary benefits of node sizes below 32nm are reduced chip size via higher transistor density and lower power consumption. Cooling, however, becomes more challenging at the smaller nodes, as with FinFET.

Actual computing performance doesn't typically increase significantly with a node drop below 32nm, but AMD's new 28/20nm products will see a significant computing performance increase.
2 4 [Posted by: beenthere  | Date: 10/18/13 10:48:49 PM]
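The density claim above follows from simple geometry: ideal transistor density scales with the inverse square of the feature size, though real shrinks deliver less because node names no longer track every dimension. A quick sketch, illustrative only:

```python
# Ideal area scaling between process nodes: density ~ 1 / feature_size^2.
# Node names are partly marketing shorthand, so treat this as an upper bound.

def ideal_density_gain(old_nm, new_nm):
    """Ratio of transistor density between two nodes under ideal scaling."""
    return (old_nm / new_nm) ** 2

print(round(ideal_density_gain(28, 20), 2))  # 28nm -> 20nm: ~1.96x
print(round(ideal_density_gain(28, 14), 2))  # 28nm -> 14nm: ~4.0x
```

That ~2x and ~4x headroom is where the smaller-die and lower-cost-per-transistor arguments in this thread come from.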

Well, for us morons, doesn't lower power usage generally mean less heat generation? The 8350 throws off a lot of heat, and any step to bring that under control is welcome.
2 1 [Posted by: KeyBoardG  | Date: 10/18/13 11:19:04 PM]
You'd think that. However, as with Haswell, smaller dies mean greater heat concentration, making the chip still hard to cool. It will consume less energy, sure, but it will still run damn hot.
2 1 [Posted by: amdzorz  | Date: 10/18/13 11:55:28 PM]
Haswell isn't hot because of heat concentration alone; most of it comes from the cheap method used to attach the heatspreader, as Haswell-E shows, just as Ivy Bridge-E showed over Ivy Bridge.
0 0 [Posted by: Rollora  | Date: 10/20/13 07:54:24 PM]
125 watts is very high for modern CPUs. Most boards max out at 140W support, so even a slight bump in speed or voltage puts the mainboard 'out of spec'.
2 0 [Posted by: amdzorz  | Date: 10/18/13 11:56:45 PM]
Actually no...

AM3+ mobos are rated at 125W-140W and most will support an OC'd FX-8350. Yes, an 8-core FX runs hotter than CPUs with fewer cores, as you'd expect. High-end 140W-rated AM3+ mobos (ASRock E9, 990FX Fatal1ty, some Gigabyte boards and at least one Asus) will even run the just-released 220W FX-9000 series without issue.

A 125W CPU isn't an issue for anyone with a clue, as I already pointed out. My FX-8350, OC'd to 4.7GHz, has no cooling issues at all with an Aegir SD128256, even in P95 stress testing for 25+ hours.
1 2 [Posted by: beenthere  | Date: 10/19/13 12:19:59 PM]

Can't see why you make the connection between 20nm and TSMC. GF's 20nm is ramping too.
1 0 [Posted by: Bingle  | Date: 10/19/13 02:18:32 AM]

Is Steamroller on AM3?
1 0 [Posted by: Ben King  | Date: 10/19/13 10:25:27 PM]

No. Not ever.
0 0 [Posted by: jesh462  | Date: 10/24/13 05:55:17 PM]

I would want to see a Kaveri chip on 14nm with TrueAudio, GCN, ARM TrustZone, quad-channel memory, 10Gb LAN... 4 x86 cores and 1-2 ARM64 cores with 400 GCN stream processors and 4GB RAM,

in my next Note 4.
0 0 [Posted by: tcubed  | Date: 10/25/13 02:39:24 PM]

Correct, and I want to see it in a 7" UD (4K) tablet, as mentioned above. Sammy ought to buy AMD; they want to design their own 64-bit, 14nm chips after all. Further, AMD has experience with 400-core GPU graphics, ideal for UD gaming. Nvidia's Tegra 4 has 72 cores; at 14nm that's 288 for the T5, and AMD cores are half the transistors, so your figure of 400 is about right. That is, as long as Microsoft, Nokia or Apple don't snap them up first; Nvidia would be relatively cheap too, given these companies' capital availability. With Apple's cash reserves they could buy Intel, Qualcomm, ARM and both of the big graphics players, but the competition regulators would throw the book at them, as that would make them 64-bit, big-graphics, monopolist IP robber barons.
0 0 [Posted by: Stuart Brown  | Date: 11/22/13 01:44:10 PM]


