Nvidia Corp. reportedly plans to release its next-generation desktop graphics card, code-named “Titan”, sometime in late February, 2013, and if what is said to be the first benchmark result of the GeForce GTX 780 is correct, the new product will be yet another breakthrough for the company. However, if the information is not completely accurate, the picture of Nvidia’s next-generation flagship starts to look rather different.

Single-Chip Monster?

The consumer-oriented graphics cards powered by the Nvidia GK110 processor will be code-named Titan after the supercomputer that employs Tesla K20X compute cards based on the same chip. Nvidia’s GK110 has 15 SMX clusters with 2880 stream processors in total, but Tesla K20X compute cards utilize chips with one SMX disabled for redundancy and hence have 2688 active stream processors. According to the report, the GeForce GTX 780 “Titan” graphics card will also feature a chip with 2688 stream processors and a 384-bit memory controller, as well as 6GB of GDDR5 memory operating at 5.20GHz.
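The rumored memory configuration can be sanity-checked with simple arithmetic. The sketch below assumes the leaked figures (384-bit bus, 5.2GHz effective GDDR5) and the GeForce GTX 680’s known 256-bit/6GHz setup for comparison:

```python
# Peak GDDR5 bandwidth: bus width in bytes times the effective transfer rate.
def gddr5_bandwidth_gbps(bus_width_bits: int, effective_rate_gtps: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return (bus_width_bits / 8) * effective_rate_gtps

titan_bw = gddr5_bandwidth_gbps(384, 5.2)   # rumored GTX 780 "Titan" config
gtx680_bw = gddr5_bandwidth_gbps(256, 6.0)  # GeForce GTX 680 for reference

print(f"Titan:   {titan_bw:.1f} GB/s")   # ~249.6 GB/s
print(f"GTX 680: {gtx680_bw:.1f} GB/s")  # 192.0 GB/s
```

That works out to roughly a 30% bandwidth advantage over the GTX 680, not a doubling.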

Exact specifications of the consumer graphics solutions based on Nvidia’s most powerful GPU ever are not clear at the moment, and many industry sources imply that it is rather hard for Nvidia to make GK110 work at high clock-speeds (Tesla K20X runs its GK110 at 732MHz), which should result in performance slightly lower than that of the dual-chip GeForce GTX 690.

Nonetheless, according to a leaked screenshot from the 3DMark 11 benchmark suite, an unknown GeForce GTX graphics card, which is claimed to be the GTX 780 “Titan”, scores X7107 points in the “extreme” preset. For comparison, the Nvidia GeForce GTX 680 scores around X3200, whereas the GeForce GTX 690 hits approximately X5900 – X6000.

Since GK110 features a more flexible version of the Kepler architecture when it comes to programmability, the GeForce GTX “Titan” 700-series graphics solutions for consumers will probably offer a number of interesting performance optimizations and capabilities. However, even with this in mind, it is hard to believe that a chip that does not even double the number of stream processors of GK104/GeForce GTX 680 (2688 vs. 1536 SPs) could more than double the performance of its predecessor through architectural efficiencies and clock-speed alone (even if Nvidia has re-spun GK110 and managed to boost the frequency without reducing production yields). If the result is genuine, then the GeForce GTX 780 “Titan” is a colossal breakthrough.
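The scaling argument above can be made concrete with rough numbers. The clocks below are assumptions, not confirmed specs: Tesla K20X’s 732MHz for the Titan and the GTX 680’s 1006MHz base clock:

```python
# Plausibility check of the leaked score against raw shader throughput.
gk110_sps, gk104_sps = 2688, 1536            # rumored Titan vs. GTX 680 SP counts
sp_ratio = gk110_sps / gk104_sps             # 1.75x the stream processors
clock_ratio = 732 / 1006                     # if the Titan kept Tesla K20X clocks
raw_throughput_ratio = sp_ratio * clock_ratio  # ~1.27x at K20X clocks
extra_needed_for_2x = 2.0 / sp_ratio           # ~1.14x needed even at clock parity

print(f"SP ratio: {sp_ratio:.2f}x")
print(f"Throughput at K20X clocks: {raw_throughput_ratio:.2f}x")
print(f"Per-SP efficiency gain needed for 2x at equal clocks: {extra_needed_for_2x:.2f}x")
```

In other words, even granting GTX 680 clocks, a chip with 1.75x the shaders would still need over 14% more work per shader per clock to double the score.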

Or New Dual-Chip Flagship?

While Nvidia has had about a year to re-spin its monstrous GK110 chip with 7.1 billion transistors, many of which support functionality not needed by video games, and make it run at clock-speeds exceeding 1GHz (up from 732MHz), the company could also have re-spun the much less complex GK104 with 3.54 billion transistors and increased its performance.

Considering the fact that Nvidia gained a lot of experience with TSMC’s 28nm process technology in the first months of 2012, it is possible that the firm could have started to redesign the GK110 and/or GK104 in a bid to boost frequencies, improve yields, lower power consumption and so on. Since GK110 was originally designed with high-performance computing in mind, a redesign of it is not very probable. Meanwhile, the creation of a GK114 with a higher clock-speed and/or more stream processors, in addition to other improvements, sounds like a logical move.

Even with a 10% – 15% performance improvement over GK104, the GK114 would likely score around X3700 in the 3DMark 11 extreme test, whereas a dual-chip graphics card based on two such chips could approach the X7400 mark. If this is the case, then a dual-chip GeForce GTX 790 could easily score X7107 points in 3DMark 11 with pre-release drivers.
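The dual-chip projection is simple arithmetic; a minimal sketch, assuming the GTX 680’s roughly X3200 baseline, a hypothetical 15% GK114 uplift, and idealized 100% dual-chip scaling:

```python
# Score projection for a hypothetical GK114-based dual-chip card.
gtx680_score = 3200                    # approximate GTX 680 "extreme" score
gk114_score = gtx680_score * 1.15      # assumed 15% uplift over GK104
dual_score = gk114_score * 2           # idealized perfect dual-GPU scaling

print(round(gk114_score))  # 3680
print(round(dual_score))   # 7360
```

Real dual-GPU scaling is below 100%, but even so the projected range comfortably brackets the leaked X7107 figure.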

All in all, we do not know whether Nvidia’s next-generation top-of-the-range product is a single-chip monster or a dual-chip dragon. Hopefully, more information will show up as the alleged release time-frame of Nvidia’s new product approaches.

Nvidia did not comment on the news-story.

Tags: Nvidia, Geforce, GK110, GK104, GK114, 28nm, Kepler


Comments currently: 11
Discussion started: 01/31/13 09:13:10 PM
Latest comment: 02/02/13 03:55:25 PM


Maybe there are 2 high-end GPUs coming: a dual-GPU GTX 790, which explains this score, and the less powerful GTX 780 Titan, rumored to be 15% less powerful than the current GTX 690.
0 0 [Posted by: godrilla  | Date: 01/31/13 09:14:49 PM]

This slide is fake. You can run a test with 2 heavily overclocked GTX680s and then cover up the GPU name and post a score of X7000+. GTX680 uses nearly 190W of power by itself and K20X with just 2688 CUDA cores at 732mhz has a 235W TDP. Unless NV plans to launch a 300W TDP card and raise the GPU clocks to 1200mhz, how can a single GK110 chip more than double the performance of a GTX680? The 5.2ghz GDDR5 also is nowhere near doubling the memory bandwidth. That's only a 30% increase in bandwidth from GTX680's 192GB/sec.

If you look at it reasonably, you'd have to at least double the shading, texture, ROP and memory bandwidth to get more than double the performance of a GTX680 unless there is some strange formula behind 3DMark11's scores that we are not aware of, or GK104 has some major bottleneck inside of it that allows Kepler to scale 3% with every 1% increase in memory bandwidth on GK110.

Also, this rumor completely contradicts the previous one, where the Titan was estimated to offer 85% of the GTX690's performance. The GTX690 doesn't even score X6000 points.

This shows X7100 vs. X5600-5700 for the GTX690. The same source which leaked this slide states that the Titan is a single GPU 2688 CUDA core part, with no mention that it is a dual-GPU part made up of 2 flagship GPUs like GK114s. The mathematics and power consumption do not add up.

Don't forget the rumors for a 768 SP GTX580 before its launch, 2304 SP GTX680 before its launch, etc. At least 2 generations in a row, all the leaks for NV's next flagship card vastly overestimated/exaggerated its actual performance. This rumor seems to be barking up the same tree.
7 1 [Posted by: BestJinjo  | Date: 01/31/13 10:27:00 PM]

Everything about this GeForce GK110 silliness is a joke. The biggest red flag is the lack of context: no one is asking what kind of threat AMD's HD8970 poses that would leave nVidia no choice but to use their largest die, with clocks raised above the Tesla K20X, just to compete.

If this thing is projected to approach (let alone perform faster than) the GTX690, the HD8970 would need twice the stream processors of the HD7970. Yet from the available rumors, it looks more like 2560 SPs, which is just 20% more than the HD7970. 20% more than GK104 could be either a die with 10 SMXs (1920 CUDA cores) or any GTX680 overclocked to 1200MHz.

I think that was why nVidia limited overvolting on Kepler: nothing to do with degradation, but rather to make the GTX780 relevant from a sales perspective, seeing how capable the GTX680 could have been against what AMD is able to do.
5 1 [Posted by: lehpron  | Date: 02/01/13 09:35:09 AM]

I think there are 2 separate rumors:

- Titan launching as a limited edition enthusiast card to get rid of failed/unwanted K20X parts that won't be sold to corporate clients

- GK114 competing with HD8970, with both improving performance maybe 15-25%. New rumors point to those cards being delayed to Q4 2013.
2 1 [Posted by: BestJinjo  | Date: 02/01/13 11:13:28 AM]
What the hell are you talking about? Each Nvidia shader is 64-bit MIMD with 1 integer unit and 2 float units, while each AMD shader is 32-bit.
0 2 [Posted by: artivision  | Date: 02/02/13 03:41:03 PM]

Smells like fake...
6 0 [Posted by: TAViX  | Date: 02/01/13 10:27:57 AM]

It has already been proven to be 100% fake. The test was run on an overclocked GTX690.
5 0 [Posted by: BestJinjo  | Date: 02/01/13 11:06:48 AM]

Really?

According to Google, more sources say the same.
0 2 [Posted by: neeles  | Date: 02/01/13 10:46:54 AM]

LOL! The worst made-up specs I've ever seen: 1.4GHz shader hot clocks on a 2880 CUDA core Kepler with a TDP of 245W, when a 732MHz 2688 CUDA core K20X has a TDP of 235W.
4 1 [Posted by: BestJinjo  | Date: 02/01/13 11:09:43 AM]
When investigating claims, whether in journalism or in legal circles, it is important to find multiple independent sources. But with rumors, most websites just source each other, so it isn't any more realistic just because it is spread around. Everyone in on this rumor is running on the assumption no one is going to check facts, and as if they will never be held responsible for straight lying to people because "they are all just rumors in the end".

All of this GeForce Titan GK110 stuff can be traced back to the same single source: sweclockers. There is no multiple independent confirmation of this, unless a website were to make up sources and agree with a popular rumor; that can and will happen, and has happened before. Look up 'Core i9', which originated at one site and spread from there, except they openly admitted to making it up. People like brand names better than codenames; a brand name seems more real.

Take your YouTube link, for instance: they claim a 245W draw out of a pair of 6-pins and the slot. Why not use the same 6+8 pin configuration as the 244W GTX580? Looking at the uploader, you can see that they don't specialize in rumors and tech news; they make programs, so it is advertising. They are taking advantage of a rumor to get attention on themselves -- and I fell for it too.
5 0 [Posted by: lehpron  | Date: 02/02/13 03:55:25 PM]



