Discussion

Discussion on Article:
Nvidia’s Next-Generation GK110-Based GeForce Code-Name “Titan”.

Started by: lehpron | Date 01/22/13 01:18:06 AM
Comments: 16 | Last Comment:  02/25/14 01:13:54 AM



1. 
I think someone via sweclockers took EK's recent announcement that a Tesla K20 (GK110) full-cover waterblock would arrive in February 2013 to mean that GeForce versions must be arriving too (as if demand for a $3,500 compute card can't be high enough to justify a waterblock on its own), and then looked for evidence to support it.

http://www.ekwb.com/news/...uadro-Tesla-water-blocks/

It is a good theory, and it implies an incredible gamble on EK's part to stay ahead of the competition with a template, even though GeForce cards tend to have more circuitry to support higher power levels.

But copying K20/K20X's specs for the supposed "GTX780" disregards the costs and gains from a business perspective on nVidia's behalf, as well as the utter lack of rumored or potential AMD Radeon counterparts that would necessitate such a response (where is the threat?). It isn't well thought out.

nVidia will undoubtedly take a LEAN approach in responding to the HD8970; they aren't going to go overboard for us. If anyone doesn't know about LEAN: http://www.lean.org/whatslean/
1 0 [Posted by: lehpron  | Date: 01/22/13 01:18:06 AM]

 
LEAN is what AMD has been doing since RV770, and what nVidia has done with one SKU, the 680, which is a good match for bandwidth, ROPs, TMUs, compute/SFU units, and the clock yields of the process within 225W. It's a pretty perfect storm. I imagine the refresh will be very similar but use 7GHz RAM (the old 6GHz chips ran at 1.6V; the new 7GHz chips use 1.5V), because the 680 at stock (1112/6008) uses every ounce of bandwidth; hence the higher clocks the process is capable of would be better matched with faster RAM within that TDP. In theory it could scale perfectly to 1300/7000, granted I don't think that is realistic on any consistent basis, especially within 225W.

On the subject of LEAN, AMD realized long ago that the nVidia logic of maximizing for lower-voltage (and/or best-yielding) clocks while using more, better-matched logic wasn't the best business model for them. They (I assume) estimate a target performance level and then work backwards to find the smallest amount of logic that, at higher voltages/clocks, can reach that level on a consistent basis within the set TDPs for each market (i.e. 150, 225, 300W), planning for salvage parts to run at more power-efficient, better-yielding clocks one notch (one PCI-E connector) down. That is LEAN. nVidia's attempt at such a strategy (the 500 series), with stock TDPs right at the PCI-E thresholds to limit connectors so that overclocking went out of spec, was a nice value but pretty shady. Square peg, round hole. You can see they gave up on any residual planning with their three 150-225W GK104 parts and the 140W (<150W) part, shoe-horned to beat competing products.

Everything we've seen points to AMD using 2560sp, or 40 CUs, probably clocked in the ~1050MHz/6400 range (and probably overclocking to 1200MHz or so, like other high-end products). Why does it make sense? Because in theory 2560sp at 1.175V should be able to hit 1166MHz, which would saturate 7GHz on a 384-bit bus; all reasonable clocks/voltages for 28nm and GDDR5. I.e., lean.

OTOH, everything points to nVidia using more logic, probably less well-matched for 48 ROPs than GK104 is for 32, to keep power down and/or yields up at lower clocks; one would assume something similar to the 670 (980MHz). In theory, 14 SMX/2688 shaders could be clocked up to GTX680 levels (1112MHz) with 7GHz RAM and still have the same compute/bandwidth ratio, which could probably be done at low voltage; and because of the separate SFUs and the different cache/bandwidth setup (or at least current optimizations), one would think it would perform around 10%+ better per clock than AMD's 2560sp. It probably won't clock very high before running into the TDP wall, but it will probably be enough regardless. We shall see.

IOW though, it's probably the same old story. Logic versus voltage/clocks. Yields versus die size. $899 versus likely a hell of a lot less. Careful planning vs shoehorning.
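The bandwidth-matching argument above can be sanity-checked with a few lines of Python. Card specs are the ones quoted in this thread; the 2 FLOPs per shader per clock figure assumes FMA throughput, and "saturate" is read as "same compute/bandwidth ratio as a stock GTX 680":

```python
# FLOP-per-byte ratios for the configurations discussed above.

def sp_flops(shaders, clock_mhz):
    # Single precision: 2 ops (one FMA) per shader per clock.
    return shaders * 2 * clock_mhz * 1e6

def mem_bw(bus_bits, gbps):
    # GDDR5 effective bandwidth in bytes/s.
    return bus_bits / 8 * gbps * 1e9

ratio_680   = sp_flops(1536, 1112) / mem_bw(256, 6.008)  # GTX 680 at stock
ratio_amd   = sp_flops(2560, 1166) / mem_bw(384, 7.0)    # rumored 2560sp part
ratio_gk110 = sp_flops(2688, 1112) / mem_bw(384, 7.0)    # rumored 14-SMX GK110

print(round(ratio_680, 2), round(ratio_amd, 2), round(ratio_gk110, 2))
```

All three land at roughly 17.8 FLOPs per byte, which is why 2560sp at 1166MHz and 2688 shaders at 1112MHz both line up with 7GHz memory on a 384-bit bus.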
1 1 [Posted by: turtle  | Date: 01/22/13 03:31:52 AM]
 
I have a problem with the potential power levels of what is being suggested. The implied specs of this "GTX780" with raised frequencies would put it close to 300W for a single-GPU card, considering that the Tesla K20X with 2688 CUDA cores at 732MHz and 6GB at 5.2GHz is 235W. The HD7970 is already a high-wattage card; how much higher would it go with 2560sp at an even higher frequency? Probably the same range. But 2560sp makes the HD8970 barely 20% faster than the HD7970, so why would nVidia need a GK110 when a GK104 can overclock to 1200MHz and match it (in gaming)?
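As a rough illustration of the concern, scale K20X's 235W TDP linearly with clock. This is a crude model: it holds voltage constant and treats the whole board TDP as clock-scalable, and the 900MHz target is a made-up GeForce-level clock, not a spec:

```python
# Naive first-order estimate: dynamic power scales ~linearly with frequency
# at a fixed voltage; memory/board power is lumped in, so this is only a
# ballpark figure.
k20x_tdp = 235        # W at 732 MHz (Tesla K20X, per this thread)
target_clock = 900    # MHz -- hypothetical GeForce-level clock, an assumption
est_tdp = k20x_tdp * target_clock / 732
print(round(est_tdp))  # ~289 W
```

Even this simple scaling puts a higher-clocked GK110 near the 300W mark before any voltage increase.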

It makes more sense for nVidia to either rebrand the GTX680 as a GTX780 with raised frequencies, similar to the HD7970 GHz Edition, or make a GK114 with 9 SMX and raised frequencies and call it even; those are the leanest routes, with the absolute minimum expenditure of resources.

I never bought nVidia's claim that overvolt options were locked because of degradation; I think they did it to keep their next generation relevant, as if GK104's voltage were that flexible to begin with. They wanted to protect their bottom line.
0 0 [Posted by: lehpron  | Date: 01/22/13 11:32:58 AM]
 
HD8970 is not replacing a 925mhz HD7970, but a 1050mhz HD7970GE. GTX680 at 1200mhz will not come close to HD8970 if HD8970 is 20% faster than HD7970GE because HD7970GE is already 7-11% faster than GTX680. At high resolutions, HD7970GE pulls away even more (http://www.computerbase.d...x-hd-7970-mit-i7-3970x/5/)

That means GTX680 would need at least a 30% faster clock to match an HD8970, or almost 1400mhz.
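Chaining the two cited gaps shows where "almost 1400mhz" comes from. This sketch uses the GTX680's stock 1058MHz boost clock and assumes performance scales linearly with core clock, which it only roughly does:

```python
gtx680_boost = 1058    # MHz, stock boost clock
low  = 1.07 * 1.20     # HD7970GE +7% over the 680, HD8970 +20% over the GE
high = 1.11 * 1.20     # same chain, using the +11% end of the range
print(round(gtx680_boost * low), round(gtx680_boost * high))
```

That gives roughly 1358-1409MHz, i.e. a ~28-33% clock bump over stock boost.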

Nvidia wants to retake the performance crown convincingly and make $ doing so. You can't price the Titan at $900 unless it's at least 25-30% faster than a $549 HD8970; NV's pricing would look outrageous if it were only 15% faster.

Nvidia will probably release both GK110 and GK114. GK114 will go against HD8970 while GK110 will be a Halo $900 card beating everything else by 25%+. If not, the Titan would be a fail/money grab.

All of this is just rumor though, as we don't even know the HD8970's performance, specs, or pricing.
0 0 [Posted by: BestJinjo  | Date: 01/23/13 09:32:09 AM]
 
A 2688 CUDA core chip clocked at 1112MHz in a 235W power envelope? Yeah, OK; how? The GTX680 uses nearly 190W at load. Even with the 28nm node being more mature, you think you can just increase throughput by 83%+ ((2688 CUDA x 1112MHz) / (1536 CUDA x 1058MHz)) while boosting memory bandwidth 50% by going to a wider, more power-hungry 384-bit bus of 7GHz GDDR5, on the same 28nm node? Not a chance. Never in the history of GPU making has any company increased performance 80%+ on the same node from one flagship card to the next. This chip is coming in clocked well under the GTX680, or the TDP has to go up beyond 235W.

Also, this statement in the article:

"The monstrous GK110 chip (pictured) contains whopping 7.1 billion of transistors and has potential to deliver 8 – 9TFLOPS of single precision compute performance."

That's impossible. For a 2688 CUDA core GK110 to reach 8 TFLOPS, the clock speed would need to be 1489MHz.

800-850mhz GPU clocks sound reasonable at 235W TDP.
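Both numbers in this comment check out arithmetically. A quick verification, assuming 2 FLOPs per CUDA core per clock (FMA); the 1488 vs 1489MHz difference is just rounding:

```python
# 1) Raw shader-throughput ratio of the rumored GK110 config vs a GTX 680:
ratio = (2688 * 1112) / (1536 * 1058)   # cores x MHz for each card
# 2) Clock needed for a 2688-core chip to hit 8 TFLOPS single precision:
clock_mhz = 8e12 / (2688 * 2) / 1e6
print(round(ratio, 2), round(clock_mhz))  # 1.84 and 1488
```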
0 0 [Posted by: BestJinjo  | Date: 01/23/13 09:27:06 AM]

2. 
Nvidia can afford to produce these specialty monsters for the high end as they are much stronger financially than AMD. No doubt enthusiasts will SLI them with super cooling to set new records.
0 0 [Posted by: beck2448  | Date: 01/22/13 03:17:41 AM]

 
Sure, there are people willing to pay for it and upgrade their PSUs accordingly, but that logic is shot down when Intel still caps the upper end of the mainstream segment at quad-cores. Intel could afford its own specialty monsters in any segment it currently addresses, but it isn't making them.
0 0 [Posted by: lehpron  | Date: 01/22/13 12:04:27 PM]
 
It does not make any business sense; both AMD's and Nvidia's bread and butter is not the high-end GPU segment but the low-to-mid segments.
1 0 [Posted by: redeemer  | Date: 01/22/13 02:03:03 PM]
 
Actually, Nvidia makes huge profits from its pro line, which is all high end. The halo effect from having the current king of GPUs is very valuable.
2 0 [Posted by: beck2448  | Date: 01/22/13 02:52:57 PM]

3. 
Why would you pay so much to play console ports anyways???
3 0 [Posted by: TAViX  | Date: 01/22/13 05:41:19 AM]

4. 
These are probably going to be for workstations and the high-end desktop market. They will be rarer than the 690, but nothing dual 7950s won't stomp for half the price.

Another nVidia fail if that price is anything close to real.
1 0 [Posted by: keysplayer  | Date: 01/22/13 07:01:37 PM]

5. 
For the Nvidia GTX Titan review and performance report:
http://www.gamingtron.com...s-and-performance-report/
0 0 [Posted by: sohilmemon  | Date: 02/18/13 01:32:23 PM]

6. 
I'll buy two and post a Gamers and Overclockers review here http://www.hotoverclockers.com
0 0 [Posted by: techjesse  | Date: 02/20/13 01:50:46 PM]

