Nvidia Corp. said that high-performance computing (HPC) customers are eagerly waiting for the next-generation processors code-named Kepler to upgrade their supercomputers or deploy new systems. In addition, the company confirmed, albeit indirectly, that the highly anticipated Kepler graphics processing units will be available in the second quarter of calendar 2012, but did not elaborate.

"Our professional solutions business is expected to have another record year. Maximus enables us to sell more than one GPU on to a workstation, and new supercomputer centers around the world are anticipating the shipment of Kepler," Rob Csongor, vice president of investor relations at Nvidia, during the most recent conference call with financial analysts.

One of the main advantages of the Kepler architecture for supercomputers is more than two times higher double-precision GFLOPS per watt compared to the Fermi architecture in use today. Among the technologies Nvidia has promised to introduce in Kepler and Maxwell (the architecture that will succeed Kepler) are a virtual memory space (which will allow CPUs and GPUs to share "unified" virtual memory), pre-emption, enhancements to the GPU's ability to process data autonomously without the help of the CPU, and so on.
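For a rough sense of what a twofold improvement means in absolute terms, the sketch below applies the claimed factor to a current Fermi-based Tesla board (the M2090 is rated at roughly 665 double-precision GFLOPS at 225 W; the Kepler figure is purely the claimed multiple, not a disclosed specification):

```python
# Rough illustration of the ">2x double-precision GFLOPS per watt" claim.
# The Fermi figures are the published Tesla M2090 ratings; the Kepler line
# simply applies the claimed factor, since no real Kepler specs were public yet.
fermi_dp_gflops = 665.0    # Tesla M2090, peak double precision
fermi_board_watts = 225.0  # Tesla M2090 board power

fermi_perf_per_watt = fermi_dp_gflops / fermi_board_watts
kepler_perf_per_watt = 2.0 * fermi_perf_per_watt  # the claimed ">2x" improvement

print(f"Fermi : {fermi_perf_per_watt:.2f} DP GFLOPS/W")
print(f"Kepler: >{kepler_perf_per_watt:.2f} DP GFLOPS/W (claimed)")
# At the same 225 W board power, that would imply >1,330 DP GFLOPS per board.
print(f"Implied Kepler DP throughput at 225 W: >{kepler_perf_per_watt * fermi_board_watts:.0f} GFLOPS")
```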

Although Nvidia started to ramp up manufacturing of chips based on the Kepler architecture in calendar 2011, the company will only release actual products powered by those chips in March, April or even later. In fact, Nvidia's interim chief financial officer implied that the next-generation products will be launched in the coming months, but did not directly state that Kepler will be available within the first quarter of the company's fiscal 2013 (which ends in late April, 2012).

"Looking ahead, while we anticipate continued negative effects from the hard drive shortage, we believe gaming demand will continue to be robust, driven by the combination of our next-generation Kepler architecture and new hit games, such as Mass Effect 3 [due in March - X-bit labs] and Diablo III [due in Q2 2012 - X-bit labs], both highly anticipated PC games coming in early calendar year 2012," said Karen Burns, interim CFO of Nvidia.

The company remains cautious about its Q1 FY2013 results: for the quarter it expects revenue between $900 million and $930 million, below the $962 million reported in Q1 FY2012 (ended April 30, 2011). The projected drop in revenue may signal extremely limited availability of Kepler in the March - April timeframe, or Nvidia's decision to start the mass roll-out of next-generation products in May, which falls in calendar Q2 but not within Q1 of Nvidia's FY2013.
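For reference, the guidance implies a year-over-year revenue decline of roughly 3% to 6%; the short calculation below uses only the figures quoted above:

```python
# Year-over-year comparison of Nvidia's Q1 FY2013 revenue guidance against
# Q1 FY2012 revenue, using the figures quoted in the article.
q1_fy2012_revenue = 962.0                    # $ million, quarter ended April 30, 2011
guidance_low, guidance_high = 900.0, 930.0   # $ million, Q1 FY2013 outlook

decline_best = (q1_fy2012_revenue - guidance_high) / q1_fy2012_revenue * 100
decline_worst = (q1_fy2012_revenue - guidance_low) / q1_fy2012_revenue * 100
print(f"Implied year-over-year decline: {decline_best:.1f}% to {decline_worst:.1f}%")
# Roughly a 3.3% to 6.4% drop, depending on where in the guided range revenue lands.
```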

Tags: Nvidia, Geforce, Kepler, 28nm, Maxwell, Tesla, Quadro

Discussion


1. 
show the post
[Posted by: redeemer | Date: 02/17/12 03:22:33 PM]

show the post
[Posted by: BestJinjo | Date: 02/17/12 07:26:22 PM]

I am sorry But AMD has never been behind Nvidia not for a very long time. Large complex die size is the reason for delay issues with yields. AMD has always been first to the punch when it comes to new architecture and fab process. Original Fermi launch was a disaster. AMD is much more efficient per mm2 than Nvidia and that's a fact.
[Posted by: redeemer | Date: 02/18/12 07:06:19 AM]
 
"AMD has always been first to the punch when it comes to new architecture and fab process"

What are you talking about?

The 8800GTX launched on 90nm a full 6 months ahead of the HD2900XT. Sure, HD2900XT was made on 80nm, but the card was total garbage. G80 was actually by FAR the more advanced architecture.

3800 series was also lacklustre. HD4800 series was great but AMD made almost no $ on it. HD5800 series launched 6 months ahead of Fermi but NVidia still managed to retain ~59% market share on the desktop while AMD barely made $ on those cards. Makes sense, they were selling HD5850 for $269 and HD5870 for $369, while GTX480 was $499. Who had the right strategy exactly?

Before that, the GeForce 6800 series launched 1 month earlier than the X800XT series and had a more advanced architecture since it supported SM3.0 (both were on the same 130nm process):

http://www.gpureview.com/...hp?card1=179&card2=46

"I am sorry But AMD has never been behind Nvidia not for a very long time."

Performance wise, HD2900XT was horrible. It was one of the worst cards ever made. HD3870 tried to fix HD2900XT but its anti-aliasing hit was massive. It was beaten by a mid-range 8800GT card. AMD fixed the HD3870's anti-aliasing issues but had to regain gamers' mind share with the $299 HD4870 after the lacklustre performance of the HD2900/3800 series. The $199 HD4850 and $299 HD4870 hurt their profits. Performance wise, despite being on the 65nm node, GTX280 launched on June 16, 2008 and was still faster than the 55nm HD4890 that launched on April 1, 2009. That means it took AMD almost a year and a 55nm process to catch NV's card made on a 65nm process...

http://www.gpureview.com/...p?card1=567&card2=608

Then it took 9 months with HD6970 just to catch up to GTX480. But GTX580 actually beat HD6970 to launch and was 20% faster.
http://www.computerbase.d...schnitt_leistung_mit_aaaf

GTX480 launched March 26, 2010, while HD6970 launched December 14, 2010.

HD5870 beat GTX480 to launch by 6 months but had a single tessellation engine that tanks in modern games because it cannot handle tessellation. GTX480/570 are often 50-100% faster than the HD5870 in games like Crysis 2 or Batman AC.
Batman: AC = horrible performance by HD5870
http://www.anandtech.com/...-radeon-hd-7970-review/20

Crysis 2 = horrible performance by HD5870
http://www.techspot.com/r...rce-gtx580-soc/page7.html

It took AMD a full node shrink to 28nm and a brand new architecture just to beat a 14-month-old 40nm Fermi GTX580 - a 2-year-old architecture - by 20-25%. That's impressive to you?

Finally, AMD's graphics division has always been behind Nvidia when it comes to financials. It's very easy to make a videocard and sell it as a price/performance leader while making almost no $ or actually losing $ on it. That's not how a good business should be run. Despite all the delays, large die sizes, etc., Nvidia is actually making $ and maintains higher market share on the desktop.

The only company that ever made a lot of $ selling videocards against Nvidia was ATI Technologies. AMD's "small die" strategy has so far been only good on paper. Since HD2900XT series, AMD has traded earnings/profits for market share gains against NV while NV has been making $$$ and maintaining leadership on the desktop.

2011 = Nvidia made $581 Million
2011 = AMD lost $1 million

"Original Fermi launch was a disaster"

Maybe for you. Fermi laid the foundation for a GPGPU architecture that allowed Nvidia to win a lot of corporate customers with its Tesla line. It took AMD years to figure out that GPGPU is the future and finally release the GCN architecture. Nvidia refreshed Fermi and made tons of $$$ on it. The Fermi architecture was too advanced for the 40nm process. Still, GTX460 was one of the best cards of that generation for gamers. Even now HD7770 still loses badly to GTX560/HD6870. Pretty sad, since you keep arguing about more advanced process nodes but here we have a $160 28nm card losing to a last-generation $170 40nm card:

http://www.legionhardware...adeon_hd_7770_7750,3.html

Efficiency per mm^2 is a useless metric that geeks argue over on forums. For example, HD7970 is more efficient per mm^2 than HD6970 series but it is actually less efficient per transistor (HD7970 is 40% faster than HD6970 but has 65% more transistors). So is HD7970 more efficient or less efficient than HD6970 series? That depends on what metric you use. But again, unless you can secure more corporate contracts, gain market share and actually make $, performance/mm^2 is just a nice metric to wave in the air.
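To see how the two metrics can point in opposite directions, here is the arithmetic using the ratios quoted in the paragraph above plus approximate die sizes (roughly 389 mm2 for Cayman/HD6970 and 365 mm2 for Tahiti/HD7970; treat both as ballpark figures):

```python
# HD7970 vs. HD6970: the same speedup looks "more efficient" per mm^2 but
# "less efficient" per transistor. Ratios are taken from the post above;
# the die sizes are approximate public figures and only illustrative.
perf_ratio = 1.40          # HD7970 claimed ~40% faster than HD6970
transistor_ratio = 1.65    # HD7970 claimed ~65% more transistors
die_hd6970_mm2 = 389.0     # approximate Cayman (HD6970) die size
die_hd7970_mm2 = 365.0     # approximate Tahiti (HD7970) die size after the 28nm shrink

perf_per_transistor = perf_ratio / transistor_ratio
perf_per_mm2 = perf_ratio / (die_hd7970_mm2 / die_hd6970_mm2)

print(f"Performance per transistor vs. HD6970: {perf_per_transistor:.2f}x")  # ~0.85x, i.e. worse
print(f"Performance per mm^2 vs. HD6970:       {perf_per_mm2:.2f}x")         # ~1.49x, i.e. better
```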

The only thing that matters in this business is making $ and NVidia has consistently done so with 44-52% Gross Margins and positive cash flow. AMD has struggled despite their "superior" small die size strategy and advantage in performance/watt.

Nvidia's strategy clearly works better and they are a much better managed company financially. ATI by itself was worth $4B+. AMD mismanaged ATI to the point of losing $ on the graphics division for years, resulting in goodwill writedowns.
[Posted by: BestJinjo | Date: 02/18/12 07:32:11 AM]
 
show the post
[Posted by: redeemer | Date: 02/18/12 08:03:46 AM]
 
HD5970 is a dual GPU card. Why are you comparing GTX580 vs. HD5970? People who want the fastest single GPU card don't particularly care for HD5970, GTX590, HD6990, etc.

"Fermi beat itself with high power draw and heat it was the most critized gpu architecture to date, forced Nvidia to make a refresh"

Most criticized by who? Nerds on the interwebs, Charlie of S/A? Fermi was one of the most successful GPU architectures of all time. Here is why: it fundamentally changed the way the industry thought of and approached GPU design. Graphics cards were no longer created for just one task - graphical acceleration. Fermi laid a foundation for the entire GPU industry to move towards general-purpose GPGPU solutions. We are seeing that trend as AMD has followed in the same direction with its GCN architecture. However, why did AMD wait until Nvidia took this risk first? Who is the market leader trying to innovate, and who waits for another company to take a risk?

Fermi's 1st-generation tessellation was also revolutionary. It beat AMD's 8th-generation tessellation unit in the HD6900 series. Look how well GTX480 has endured while HD5870 is now struggling. Isn't that a testament to how much more forward-looking GTX480 was?

You brought up an anecdotal example of a few GTX590s burning up in smoke as a way to discredit the card. You are quick to bring out the 5970 against the GTX580, yet you fail to acknowledge that the GTX590 beats it? Sounds contradictory.

http://www.computerbase.d...schnitt_leistung_mit_aaaf

mm^2 efficiency is useless. At the end of the day, no one cares that a GTX460 has a smaller die than HD7970 does, or that HD3000 has a smaller die than Llano's APU. People walk into a store or buy online based on their budget and what performance/features they want. End of story.

BTW, I own an AMD card at the moment. So much for your fanboy argument. I bought my 6950s in CF because GTX570 runs out of VRAM at my resolution of 2560x1600. But that doesn't mean I won't acknowledge that NV:

1) Has released 2 very advanced/revolutionary GPU architectures with G80/GT200B and Fermi, while Cayman and Cypress were just reworked R600 designs. Although VLIW-5/4 did well for games at the time, they don't lend themselves well to games or applications that benefit from compute;

2) Makes way more $, despite large die sizes and delays.

3) Has at least attempted to take business risks by providing scientific and financial solutions through graphics products, while AMD waited on the sidelines until NV led the way for GPGPU compute. AMD waited until they saw that the industry's demand for GPGPU was growing - but why was it growing? Nvidia was growing that market;

4) Die sizes are more or less irrelevant, mm^2 is irrelevant, unless it negatively impacts the business. But this argument cannot possibly be made, since NV continues to have more desktop discrete GPU market share and makes more $ than AMD as a whole. So clearly, large die sizes have not negatively affected NV to the extent that they "should" abandon the large die strategy.

Off the top of your head, please tell me the die sizes of X1900XTX, HD2900XT, HD3870, HD4870. You can probably tell me that on average AMD's die sizes were smaller than Nvidia's GPUs in all of those generations. Ya, so what? None of that matters because Nvidia had competitive cards in each of those generations.

Price/performance, ultimate performance for a single high-end GPU, how well a card runs modern games, how good the drivers are, features (Eyefinity, PhysX, 3D surround gaming, etc.) -- those things actually matter to gamers.
[Posted by: BestJinjo | Date: 02/18/12 10:31:04 AM]
 
The X1900XT GPU was larger than the 7900GTX (384 vs. 278 million transistors), but apart from that everything is correct.
[Posted by: cosminmcm | Date: 02/19/12 11:25:28 PM]
 
Yes, mm^2 efficiency is not really that important for you - especially if it is not making you or the company any profit. If it were everything, how could it be that Nvidia is making 100x the profit of AMD with such huge and inefficient silicon?

As to power consumption efficiency, GTX 570 is actually more efficient than HD 6970 (both perform about the same, while GTX 570 consumes a few watts less; more reviews show this to be the case than vice versa).

HD 5970 is not really any faster than GTX 580 overall. Plus you lose true triple buffering (which is invaluable for those who do not want to suffer screen tearing with 60Hz monitors, without experiencing "fractioned" frame rates that come at such a noticeable hitch). For some games, you have to wait 1-2 months until a compatible Crossfire profile comes out. For some other games, multi-GPU scaling is never really efficient (see Skyrim, Batman:AC, Starcraft II, etc.). And there is also the dreaded microstuttering: http://www.youtube.com/watch?v=emG7ZNIsxw8
(see how HD 5870 is actually smoother and faster than HD 5970 in some parts - even if the video isn't 60fps to really show the stuttering). Tessellation, PhysX, CUDA (Just Cause 2 and Civilization V), and Stereo3D are some other things that make GTX 580 that much more desirable than HD 5970 for me personally.

Take a look at the Voodoopower ratings and look at where GTX 580 lies in comparison to HD 5970:
http://alienbabeltech.com...hp?f=6&t=21797#p41174
(the ratings do not account for PhysX, CUDA, or Stereo3D..)

.....And I swear by Stereo 3D (that I've been using for 5+ years, now with 3D Vision and 65" DLP HDTV with 0.05ms response time for ZERO ghosting)!
[Posted by: Bo_Fox | Date: 02/18/12 11:27:34 AM]
 
 
Whoa!!!!!!!!!!!!!!!!!!!!!!!!

That's one of the best history "books" I've ever read on Nvidia vs ATI/AMD.

You're pretty much correct on all counts!

Eric Demers, the brains behind Radeon development, is now leaving the company.

What is painful for AMD is that while they did the right thing and bought ATI to ensure that they are well positioned for the graphics direction of the future (which is becoming more and more important compared to the CPU), they have not really been making any $$ ever since. At least the 5xxx series kept them from going bankrupt, as they gained a few percentage points of market share with new DX11 cards.

And what's even more painful is that the AMD fanboys who have AMD stock are not receiving any dividends at all - while the shares have been losing value for the most part.
[Posted by: Bo_Fox | Date: 02/18/12 11:13:51 AM]
 
Thanks!

Eric is moving on to Qualcomm. Rumor has it that because Read wants to focus on Fusion (HSA) and thus allocate engineering and financial resources away from high-end graphics, both Demers and Killebrew have left AMD to pursue other opportunities, since they lived and breathed high-end graphics.

What really surprises me the most is the argument that financials shouldn't count.

There is another way to look at why financials should count. Imagine if Mercedes-Benz was selling its cars for 20-30% less than BMW? Imagine if Godiva was selling its chocolates for 20-30% less than Lindt? Imagine if Cartier or Breitling lowered the prices of its watches by 20-30% against Rolex?

In each of those instances, the automotive, chocolate and watch customers would be ecstatic. However, how long can a company sustain such operations? Companies cannot continue to survive by simply manufacturing good products just to maintain market share. What this means is foregoing profits FOR market share. Most companies cannot afford to do so long-term.

AMD has "won the hearts of many gamers" with $299 HD4870, $259 HD5850/ $369 HD5870, ~$370 HD6970, etc. But that's the equivalent of Mercedes, Godiva and Breitling practically giving their product for "free". Nvidia could have easily done the same, but in the process made little $.

I actually respected ATI more for releasing high-end 9800XT, X800XT/PE, X1900XTX, etc. Back then ATI was never content with being 2nd best. They wanted to have the best high-end GPU and the best mid-range and low-end GPUs. Gamers also never thought of ATI as the "lower-priced" brand.

The difference is that the HD7970 is back to "ATI's" pricing model but has only brought 20-25% more performance over the previous fastest high-end card. That is not at all ATI's or Nvidia's business model. When ATI or NVidia bring out a new $500-600 flagship card, it's generally at least 40-50% faster than the previous fastest card from either camp. If HD7970 were 40-50% faster out of the box than GTX580, that would be returning to the old roots of ATI, so to speak.
[Posted by: BestJinjo | Date: 02/18/12 01:11:14 PM]
 
show the post
[Posted by: Aragorn | Date: 02/18/12 03:40:59 PM]
 
Newsflash. In most industries in the world, companies make different design decisions to achieve results. Here is an example:

1) Lamborghini Aventador LP700-4 produces 690 hp from a naturally aspirated 6.5 Litre V12 and finishes 1/4 mile in 10.6 seconds.

2) McLaren MP4-12C produces 591 hp from a twin-turbo 3.8 Litre 6-cylinder engine and does 1/4 mile in ~ 10.7 seconds.

3) Ferrari 458 Italia produces 592 hp from a naturally aspirated 4.5 Litre V8 engine and does 1/4 mile in ~ 10.8 seconds.

Are you one of those people who would argue that MP4-12C has the best engine because it achieves similar performance out of only 3.8 Litres? That's exactly how your small die size > large die size argument sounds. Ferrari using V8 and Lamborghini using V12 are part of those firms' respective strategies and heritage. It's what they stand for at the moment.
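A quick calculation of specific output (hp per litre) for the three cars listed above makes the same point: the engine that extracts the most power per litre is not the one that wins the quarter mile. All figures are the ones quoted in the post.

```python
# Specific output (hp per litre) vs. quarter-mile time for the three cars above.
cars = {
    "Lamborghini Aventador LP700-4": (690, 6.5, 10.6),
    "McLaren MP4-12C":               (591, 3.8, 10.7),
    "Ferrari 458 Italia":            (592, 4.5, 10.8),
}

for name, (hp, litres, quarter_mile_s) in cars.items():
    print(f"{name}: {hp / litres:.0f} hp/L, 1/4 mile in ~{quarter_mile_s} s")
# The MP4-12C has by far the highest specific output (~156 hp/L), yet all three
# cars finish the quarter mile within ~0.2 s of each other.
```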

Similarly, Nvidia has been making large monolithic-die GPUs for a long time (barring a few exceptions). Making large-die GPUs is part of their strategy. OTOH, AMD changed their focus to making small-die GPUs. However, this happened a long time ago, not just starting with the HD5800 series. Both strategies can coexist.

Notice how all 3 supercars achieve amazing performance but are using 3 different engine designs/displacement/technologies? Their design variations all result in success and coexist in this world.

Comparing die sizes is like comparing engine displacement in cars. Without other pertinent information, it tells us nothing about overall performance, power/energy consumption, overclocking abilities, features, how well the product will sell, its final price, or how it will impact the profitability of a firm.

Why is it so hard for people to understand that large die vs. small die is a design choice? Small die sizes are only better if the company can make more $ off them, or produce better performance. AMD hasn't delivered in either of these cases since it purchased ATI. Since the 8800GTX (G80), Nvidia has made more $ and made the faster single-GPU card in every generation.

Please explain why AMD's strategy is better using logical facts and I'll listen. Until then, it's just a normal day at the office for Nvidia, a company that continues to dominate desktop discrete GPU market share and make $ every year, unlike AMD (that actually lost $177 million last quarter and lost $ overall in 2011...).

Also, are you forgetting the part where HD7970 was shrunk to 28nm? How large would HD7970 be if it incorporated its GPGPU compute features and improved tessellation engines but was made on 40nm? Alternatively, how small would GTX580 be today if it were shrunk to 28nm? Comparing 2 GPU die sizes on different nodes is meaningless. And yet, that's exactly what you are doing, because you haven't even seen Nvidia's response yet. Let's compare HD7970 to Kepler and then see how it turns out.

Unless Kepler on a 500-550mm^2 die is slower than HD7970, your small-die argument doesn't hold water. Even if Kepler is only 10% faster than HD7970 on a 550mm^2 die but Nvidia makes more $ than AMD, why would the consumer/gamer care? Corporate clients won't care either. Most gamers don't pull the heatspreader off their GPUs to measure die sizes when they buy their videocards; only computer geeks on forums do. I suppose arguing performance/die or performance/mm^2 gives them more reasons to support their "loyal dedication" to one brand and/or more ammunition to deprecate another brand.

But ask yourself this: do you buy graphics cards at a certain price to play games? Or do you buy hardware to compare specs on paper? I buy videocards for games/features, not based on die sizes or numbers of TMUs, SPs, or transistor density.....

As to your last comment, only a blind fanboy would want Nvidia to fail (or AMD for that matter). We all want healthy competition in this industry. So far AMD has launched first, now it's Nvidia's turn. If you must upgrade today, go ahead and get the HD7970. It's a good card if you've got $ to burn.
[Posted by: BestJinjo | Date: 02/18/12 07:40:52 PM]

2. 
[Posted by: redeemer | Date: 02/18/12 09:34:42 AM]

 
What am I supposed to take from the link you just sent? That the author is completely incompetent in strategy and financial statement analysis? That NVidia is going bankrupt?

As an investor, that write-up is meaningless to me since Charlie is not a financial expert or an equity research analyst. He has no background in finance or as an investor that I am aware of. I'll find real information from equity research reports. I don't need his useless opinion on Nvidia's financial statements.

As a gamer, I am not impressed by the HD7970 series given the price and how little performance this new series has brought to the table without a 20-30% overclock to make it shine. I am waiting for Kepler to either 'encourage' AMD to respond with a faster clocked HD7970, or lower their prices, or for Kepler to blow HD7970 out of the water.

Most importantly, I am actually waiting for next generation games. So far, I see nothing in 2012 that warrants upgrading even if Kepler is 2-3x faster than HD7970. Just being honest.

I just wanted you to understand that, contrary to your opinion that Nvidia should stop producing 500-550 mm^2 GPUs, it is their strategy that has actually produced good-performing GPUs while also making $ for the firm. AMD's small-die strategy has resulted in slower GPUs in general, in AMD having to compete on price just to gain market share, and in generally poor profitability for investors as a result of their graphics division under-performing.

Imagine if Ferrari sold its cars for $100,000 instead of $300,000. Sure, you might say Ferrari makes "better" cars than Lamborghini then, but who is laughing at the end of the day? That's why Ferrari doesn't sell you premium product @ low prices.

Obviously, as a gamer I'll acknowledge that HD6950 series was a great card. But as a business product, it was an utter failure of epic proportions since it allowed many gamers to unlock it into a $370 card, taking away even more profits from AMD.

And that brings us to HD7970 card. I have no problem with its $550 price and would gladly pay $500+ on a new AMD GPU, but not if it's barely faster than the previous generation without massive overclocking.
[Posted by: BestJinjo | Date: 02/18/12 10:35:50 AM]
 
If you see nothing in 2012, just try out Stereo3D (Nvidia has a new 3D Vision 2 kit that is enhanced with a nice 27" monitor).

You'll want to be playing your favorite games all over again (well, most of them, since not all are ideal with S3D) in 2012, and scratching your head at how much you've missed out on for the past few years. Playing Dirt (an awesome racing game) in S3D gave it so much more life - that is, depth and immersion. Going back to 2D makes it look so flat and lifeless. Plus I was able to race better, due to better depth perception of the tracks in the distance, letting me make better judgments. 1920x1080x1920 (3D) is better than 2560x1600 for most games.

Why wait until Microsoft finally unlocks the Stereo3D feature in DX11.1 with Windows 8, when Nvidia offers excellent S3D support right now? The depth and convergence can be customized for most games, so that you feel comfortable with the perception. Sometimes, it has to be adjusted so that it looks perfectly natural.

The horizontal resolution is effectively doubled since you're now seeing both sides to the same image (left and right eyes). This effectively gives you free 2x1 SSAA which works a bit like temporal AA since it's 60Hz per eye, but without the glitches of temporal AA that ATI introduced a few years ago. And the depth itself makes the image look so much bigger, making your 24" monitor feel more like a 50"+ monitor once your eyes are drawn into the expanding "trapezoidal" cube.

Sh!t, I swear by S3D even more than color vs black-and-white. All of the Source-based games (HL2 EP1/2, Sin Episodes, Portal 1/2, L4D 1/2, etc..) were breathtaking in S3D. Batman:AA/AC are superb. RE5 is jaw-dropping but needs better graphics like RE6. Bioshock.. ooh.. Dead Space is that much more scary in 3D, even if turning off motion blur and a couple other effects - the experience alone makes it more than worth it. (See my review on Dead Space 2: http://alienbabeltech.com...ace-2-an-alien-view/all/1 )
Many many other games I've done in S3D... Far Cry was played with max graphics and 3D goodness back in 2006. 3D greatly enhances many of the older games. Oh well. The 3D revolution is happening over the course of a few years just like with HDTV of the 2000's.

EDIT: Actually, the "depth resolution" should be doubled, on top of just the horizontal resolution. If your eyes are seeing both sides, you are seeing a left image of 1920 pixels, and then a right image of 1920 pixels, alternated so fast that it fools your brain into a 3D image. 1920x2 = 3840 horizontal pixels. Since the number of pixels also shows how detailed the depth can be (if you set it to 100%, or "full" depth, while maintaining proper convergence - usually I like to adjust the convergence too if I can for most games out there), the depth resolution would also be 1920 pixels. So, the "effective" resolution is either 1920x1080x1920 (with a 2x1 SSAA effect) or 3840x1080x1920 "horizontally interlaced". The frame rates usually drop by about half (or just a bit more), but you get not only 2 sides to the horizontal resolution but also an entire "depth" stereo resolution, which is like magic given that the "input" information to your eyes is "cubed", or increased by an order of magnitude, while the frame rates still manage.
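Setting the "depth resolution" framing aside, the plain pixel-throughput arithmetic behind the "frame rates drop by about half" observation (and the earlier 1920x1080-in-3D vs. 2560x1600 comparison) looks like this; it is only a rough sketch that ignores AA and per-frame overhead:

```python
# Rendering workload: 2560x1600 mono at 60 fps vs. 1920x1080 stereo
# (two views per displayed frame, 60 fps per eye). Illustrative only;
# real costs also depend on AA, shaders and scene complexity.
mono_pixels_per_sec = 2560 * 1600 * 60
stereo_pixels_per_sec = 1920 * 1080 * 2 * 60   # left + right view each frame

print(f"2560x1600 @ 60 fps:            {mono_pixels_per_sec / 1e6:.0f} Mpixels/s")
print(f"1920x1080 stereo @ 60 fps/eye: {stereo_pixels_per_sec / 1e6:.0f} Mpixels/s")
# ~246 vs. ~249 Mpixels/s: rendering two 1080p views per frame costs roughly as
# much as one 1600p view, which is also why stereo roughly halves the frame rate
# at any fixed resolution.
```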

Plus people who wear sunglasses should never complain. Optimize the convergence so that you never get headaches.
[Posted by: Bo_Fox | Date: 02/18/12 11:51:18 AM]
 
The 6950 did not take away profits; bread and butter come from the low to mid segments anyway. If anything there will be significantly more profit, since the 6950s will sell in more volume. High-end enthusiasts will still buy the 6970 no matter what. You must understand the kind of buyers the gtx 680 and 7970 will attract; just because you find no reason to buy the 7970 at this time doesn't mean it's a waste. Multi panel high resolution gaming will still require more than the 7970 is capable of.
[Posted by: Aragorn | Date: 02/18/12 04:29:50 PM]
 
"Multi panel high resolution gaming will still require more than the 7970 is capable of."

Ya, which means spending $1,200 on graphics to run such a setup. Not many people can afford that, and those who can probably wanted a much higher performance upgrade from their dual GTX580s or HD6970s, don't you think?

[Posted by: BestJinjo | Date: 02/20/12 11:59:35 AM]
 
According to Charlie, NV should have been dead at least a year ago.
Take my advice for free: do not read Charlie too much, especially before bed, otherwise you will end up with broken health.
[Posted by: Azazel | Date: 02/20/12 05:03:15 AM]

3. 
Thank you BestJinjo for bringing facts to a debate, instead of a lot of people just saying NV sucks or AMD sucks but having nothing at all to back up their claims. Still can't believe dude was trying to compare the 580 and 5970. Comparing the 590 to the 5970, however, is a whole different story.
[Posted by: blackdragon1230 | Date: 02/18/12 11:38:04 AM]

 
So it is OK to compare the 6970 to the gtx580?? The gtx580 is a bigger chip, for christ's sake, and it costs $100 more - of course it's faster, especially in pro Nvidia based games!! Being a dual-GPU card, though, the 5970 is in the same price bracket as the gtx580.
[Posted by: Aragorn | Date: 02/18/12 04:23:39 PM]
 
What are "pro Nvidia based games"?

Let me guess, the same games where HD7970 now handily beats GTX580? Battlefield 3 with 4x MSAA, Crysis 2 and Lost Planet 2 with Heavy Tessellation, Civilization 5 with compute shaders? Yes, the same games where GTX480/580 handily beat HD5870/6970 cards are now "Pro-AMD"...

Interesting how those "pro-NV" games are now the very games where HD7970 has the biggest lead over HD6970. Those "pro-NV" games highlight the advancements AMD made with GCN with respect to compute and tessellation hardware.

The reason GTX580 was $100-130 more than HD6970 was that it was the fastest single GPU. You should know, considering it was 15-20% faster than HD6970 for 14 months. Now HD7970 is 20% faster than GTX580 and sells for $100 more ($550, while GTX580s can be found for $450). The shocking part is that it took AMD GCN plus a 28nm shrink and more than a year to get there. Name a single generation in the last 10 years, going all the way back to Radeon 8500, when the next-generation ATI/AMD card was only 20% faster than the previous high-end Nvidia card. It has never happened. ATi/AMD always brought more than a 20% improvement over NV's previous fastest card.
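For what it's worth, the price/performance arithmetic implied by those numbers (using the post's own approximate street prices) works out to roughly a wash:

```python
# Price premium vs. performance lead for HD7970 over GTX580, using the
# street prices and the ~20% figure quoted in the post above.
hd7970_price = 550.0   # $
gtx580_price = 450.0   # $ (typical street price cited above)
perf_lead = 1.20       # HD7970 vs. GTX580, as claimed

price_premium = hd7970_price / gtx580_price
relative_perf_per_dollar = perf_lead / price_premium

print(f"Price premium:       {price_premium:.2f}x")
print(f"Relative perf per $: {relative_perf_per_dollar:.2f}x")
# ~1.22x the price for ~1.20x the performance: essentially flat price/performance
# at the high end, which is the crux of the complaint about HD7970's positioning.
```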

Again, your focus on die sizes shows you clearly don't understand Nvidia's business model. The reason the GTX580 die was large is that the Fermi architecture incorporated a lot of GPGPU hardware required for the Tesla product lines and as a foundation for Kepler, Maxwell and beyond. Think about how large HD7970 would be if it were made on a 40nm process. It would be MUCH larger than HD6970 was in order to "fit" GPGPU and its advanced tessellation engines. The difference is Nvidia introduced advanced GPGPU and tessellation on 40nm, hence the gargantuan die size of GF100/110. One could argue Nvidia should have waited until 28nm, but it is what it is, and it paid off for them since AMD's GCN redesign of VLIW-4/5 more or less validates Nvidia's business bet on GPGPU as the future of GPUs.

Maybe this will help you understand that chip design is far more complex than the small-vs-large-die choice you make it sound like:

http://www.trefis.com/company#/NVDA?from=search

About 30% of Nvidia's stock price is related to its Professional Graphics division. That's your Quadro and Tesla lines. It's logical that certain features incorporated into the "gaming GPU" serve a purpose for the professional line. Nvidia would probably love to design gaming-only GPUs and professional-only GPUs. However, given their limited R&D funds and manufacturing abilities, they design one "multi-purpose chip" that does it all. As a result, what you end up with is a large GPU, because it must be good at more than just videogames. Nvidia wants to provide solutions to financial services companies, to scientists, etc. These applications require completely different functional units in GPUs than we gamers require.

AMD is not as dependent on revenues in this space. Their businesses are run differently. For Nvidia, desktop graphics is actually slightly less important than their professional customers. Essentially, when you are comparing die sizes, you are comparing apples to oranges, since NV has a very large lead in performance in professional applications and is also good at games. OTOH, until GCN, AMD primarily focused on designing gaming GPUs. This is why your statement that GTX580 "should be much faster than HD6970" given the die sizes needs to be taken in this context: certain functional units within GTX580 are not there for games.
[Posted by: BestJinjo | Date: 02/18/12 08:18:56 PM]
 
You were correct when you said that Nvidia took a step in the right direction with Fermi by focusing more on GPGPU capabilities. You were also correct when you said that AMD is following in their footsteps with the GCN architecture. But you don't give AMD's vector unit approach enough credit. Remember that in order to unlock all the cores (in the 480) to make the GTX 580 Nvidia had to remove some gpgpu specific hardware. In other words, Nvidia took a step backwards in order to get better gaming performance. The Fermi architecture was a great performer but it was too inefficient.

Have you read the rumors about Kepler? Apparently Nvidia is getting rid of hot clocks. It seems like Nvidia will be making their GPUs more like AMD's GPUs. So, if the rumors are true, most of your arguments fall apart... You are right about Nvidia making more $$$; yet that's only because there are more Nvidia fanboys out there than AMD fanboys (and all that driver talk that doesn't apply today).
...By the way, I'm switching from AMD to Nvidia this time around only because Nvidia is focusing more on performance/watt...unless it's a huge failure.
[Posted by: rwwot | Date: 02/19/12 12:11:02 AM]
 
"Remember that in order to unlock all the cores (in the 480) to make the GTX 580 Nvidia had to remove some gpgpu specific hardware."

This seems to be a popular misconception and I am pretty sure it has been shown to be incorrect.

http://www.anandtech.com/...nvidias-geforce-gtx-580/3

"For GF110, NVIDIA included a 3rd type of transistor, which they describe as having “properties between the two previous ones”. Or in other words, NVIDIA began using a transistor that was leakier than a slow transistor, but not as leaky as the leakiest transistors in GF100. In fact this is where virtually all of NVIDIA’s power savings come from, as NVIDIA only outright removed few if any transistors considering that GF110 retains all of GF100’s functionality."

In fact, both GTX480 and GTX580 have 3B transistors. No GPGPU functionality was ever removed from GTX580.

To address your 2nd point: Kepler has been in development since 2009 - Nvidia announced back then that they were working on the follow-up to Fermi. Nvidia didn't just decide 6 months ago to "copy" AMD's GPU by removing hot clocks. It was likely a design decision reached a long time ago as a result of a complete redesign of the architecture. Nvidia stated years ago, before anyone even knew about GCN, that their goal was to improve double-precision performance/watt by 3-4 times from Fermi to Kepler. Nvidia has promised to introduce in Kepler and Maxwell a virtual memory space (which will allow CPUs and GPUs to share "unified" virtual memory), pre-emption, enhancements to the GPU's ability to process data autonomously without the help of the CPU, and so on. These features were proposed by them years ago.

Fundamentally, Nvidia switched to a fully scalar architecture starting with the 8800GTX, 5 years ago. In other words, even 5 years ago Nvidia was already, slowly, laying the foundation for GPGPU compute.
http://www.anandtech.com/show/2116/6

It's highly unlikely that Nvidia decided out of the blue to remove hot clocks to "follow" AMD. These design decisions take place at the theoretical level before anyone begins the design of the chip on the computer.

On your 3rd point, about Nvidia having a leading position primarily because of fanboys: well, every company has "fanboys". Apple, Porsche, they all have them. However, wouldn't you agree that while AMD fans tout Eyefinity as the "killer" feature of AMD cards, they completely dismiss PhysX, 3D surround gaming (or, really, properly working 3D gaming), SLI drivers that work better, NV's ability to support SSAA in DX10 that took AMD forever to bring, and custom game profiles that NV has had forever? These advantages existed for a long time in NV's camp.

Nvidia also had good price/performance with 8800GT/S, 9800GT/X, GTX460/560. Nvidia also had the fastest single-GPU cards with 8800GTX/Ultra, GTX280/285, GTX480/580. There are a lot of valid reasons - features and performance - why people have purchased NV cards that have nothing to do with blind fanboyism. Also, in my personal experience of owning 4 ATI/AMD cards and 3 Nvidia cards, all of my Nvidia cards overclocked better in % terms on air cooling. Obviously that depends on the model (e.g., 5850 and 7970 overclock well). But still, all these advantages are almost always dismissed by AMD supporters.
[Posted by: BestJinjo | Date: 02/19/12 07:59:47 AM]
 
Man, you are stubborn! Logic was never one of AMD fans' strengths. I admire you for what you are doing now; I don't have the nerve to do that. You have a thumbs up from me for all your comments.
[Posted by: cosminmcm | Date: 02/19/12 11:51:39 PM]
 
show the post
[Posted by: veli05 | Date: 02/20/12 09:27:14 AM]
 
Honest question: You don't think AMD engages in viral marketing activities or has closer ties with certain reviewers?

It's ironic that you despise such tactics, yet you link a site from one of the most anti-NV reviewers in the world, who is also incompetent in hardware testing. Did you even read the pathetic HD7770 review from [H]? Notice how it disagrees with every other review on the Internet? That guy's credibility with reviewing graphics is 0. HD6870 and GTX560Ti pummel HD7770 and he still thinks HD7770 is a good value. What a joke [H] is. Kyle Bennett is a vocal NV hater as well, going way back. He tries to provide "real world playability" but finds playing FPS games at 30-35 fps and racing games at 40-45 fps average acceptable. Also, time after time his results don't agree with any other website that also does manual game testing runs - Bit-Tech, Hardware Canucks, GameGPU, Anandtech.

Obviously it's impossible for anyone on the Internet to "prove" to you that they are not an NV/AMD paid marketer.

For example, even those who might prefer NV cards probably agree that AMD's 7900 series performs very well once it's overclocked to ~1100MHz+. At the same time there are many AMD card owners who feel disappointed; they would have liked AMD to release the 7900 series with more aggressive clocks. I've seen 7950s reach 1200MHz overclocks on air cooling, which amounts to an incredible 50% overclock.

However, I think it's also unreasonable when AMD supporters continuously bash the large-die strategy while ignoring financials, ignore any advantages NV's cards have in terms of image quality/features, and produce absolute claims such as that the only reason NV still exists is that it has millions of customers who are all fanboys.

Frankly, the post below mine by mosu just highlights that blatant fanboism seems to exist on both sides of the camp.

Both NV and ATi/AMD have produced awesome cards over the years. Only blind AMD fans would deny that GeForce 3, GeForce 4, 6800GT, 7950GT, 8800GT/S, 9800GT, GTX460, GTX560Ti and GTX570 were great cards.

The anti-NV propaganda is just as pathetic. The same fans that used to dismiss Tessellation and Compute as the future of gaming now use those features to show how much superior AMD cards are.

Irony at its finest.
[Posted by: BestJinjo | Date: 02/20/12 11:28:32 AM]
 
show the post
[Posted by: veli05 | Date: 02/20/12 01:44:29 PM]
 
When did NVidia publicly put down AMD? Please share.

Still, you'd rather support a company that had posters such as JF-AMD spreading claims for months before Bulldozer's launch that its IPC would be superior to Phenom II's and that Bulldozer would redefine price/performance for enthusiasts?

Both AMD and NV will downplay their own weaknesses (e.g., AMD saying tessellation usage was too extreme during the HD5800/6900 series but now touting HD7970 as the best GPU for tessellation, or Nvidia not focusing on the fact that its GPUs can't natively support >2 displays yet touting that it has the only workable 3D surround gaming solution).

AMD is no charity either. The minute they saw an opportunity, they released an HD6970 "replacement" priced at $550. It's in your best interest that Kepler is actually better than HD7970.
[Posted by: BestJinjo | Date: 02/21/12 05:05:23 PM]
 
show the post
[Posted by: rwwot | Date: 02/20/12 08:42:09 PM]
 
The article doesn't contradict itself at all. It states that NVidia used different transistors in GF110 to achieve better power consumption. Also, GF110 was released on a more mature 40nm process. So in fact, there is no evidence to support the view that any GPGPU functionality was ever removed from GF110.

In fact, benchmarks still show GTX580 being faster in all Tessellation and Compute scenarios.

Did you run CUDA related programs that support your theory that GTX580 has worse performance in GPGPU compute?

Notice how much faster GTX570 and 580 are compared to GTX470 in compute? That would be impossible if GTX570 were gimped.
http://www.anandtech.com/...amd-radeon-7950-review/15

Notice how GTX580 is faster than GTX480 in Tessellation:
http://techreport.com/articles.x/22192/7

Notice how GTX580 beats GTX480 in all Compute benchmarks in another review?
http://techreport.com/articles.x/22192/8

Can you find any review showing that GTX580 had its compute performance neutered, with benchmarks to prove it?
[Posted by: BestJinjo | Date: 02/21/12 04:55:50 PM]
 
An unlocked (512 CUDA core) GTX 480 with 64 texture filtering units would perform better than a GTX 580. If the article you used as evidence did not contradict itself, this would not be true.

The less leaky transistors in the 580 do not perform as well as the leakier transistors in the 480... Also, the GTX 480 has 3.2 billion transistors while the GTX 580 has 3 billion transistors. Saying that the GTX 480 and the GTX 580 both have 3B transistors is very deceptive. Nvidia butchered the GTX 480 and enabled some of the cores that were disabled so that it would both perform better than GF100 and not consume as much power.
[Posted by: rwwot | Date: 02/21/12 10:39:50 PM]

4. 
show the post
[Posted by: mosu | Date: 02/20/12 12:26:51 AM]

 
I think the "truth" is distorted in your mind. The only time in the last 12 years where NV lost badly was GeForce 5 series. In every other generation they have been competitive with AMD/ATI in both performance and performance/$. There are a few exceptions such as unlockable HD6950 or insanely overclockable GTX460 where each camp had an unbeatable card temporarily. Similarly, AMD lost horribly during HD2900. 3800 series was also rather poor.

I can't even believe the ignorance of comparing NV and Apple fans considering Apple charges a very large premium for its products. Does NV charge a large "premium" over AMD cards?

Currently,

GTX560 > HD7770 in price/performance
GTX560 Ti ~ HD6950 in price/performance
GTX560 Ti 448 ~ HD6950 2GB in price/performance
GTX570 ~ HD6970 in price/performance (actually 570 is cheaper now)
GTX580 costs slightly less than HD7950 in most places. That's reasonable since HD7950 is barely faster without overclocking.

The only cards without direct competition are HD6850/6870. But those 2 are so good that they even make AMD's own HD7770 seem underwhelming. You make it sound as if generation after generation NV releases more expensive and worse-performing cards.

Are you denying that:

GeForce 3 was better than Radeon 8500?
GeForce 4 had no competition at all from ATI until 9700 trounced Geforce 5
GeForce 6 competed well with X800 series
GeForce 7 competed well with X1800/1900 series

Recent examples have Nvidia totally dominating AMD with the entire GeForce 8 series (there was not a single card worth buying from AMD in all of the 2900 or 3800 series), GTX260 216 was competitive with HD4870, GTX275 with HD4890, and GTX470/480 outlasted the HD5800 series (which tanks in DX11 games with tessellation); see the GTX500 series above.
[Posted by: BestJinjo | Date: 02/20/12 11:33:15 AM]

5. 
show the post
[Posted by: mitunchidamparam | Date: 02/21/12 10:03:05 AM]

