Dear forum members,
We are delighted to inform you that our forums are back online. Every topic and post is back in its place, and everything works just as before, only better. Welcome back!


Discussion on Article:
Super Hero: Gigabyte GeForce GTX 580 Super Overclock Graphics Card Review

Started by: eldemoledor1409 | Date 09/26/11 06:04:35 PM
Comments: 18 | Last Comment:  09/24/16 08:14:25 AM



1 4 [Posted by: eldemoledor1409  | Date: 09/26/11 06:04:35 PM]

Well, just imagine that 570 is around the same performance as 6970 and there you have it. They tested the 580 against its closest competitor, and that is 6970.
3 0 [Posted by: cosminmcm  | Date: 09/27/11 06:38:29 AM]

1 6 [Posted by: JuicyOrange  | Date: 09/27/11 05:12:34 AM]


1920x1080 8xMSAA in Shogun 2:
GTX580 1.5GB = 27.2 fps
HD6970 2GB = 25.2 fps

Both are a slideshow.

If you want to play this game smoothly at those settings, grab 2x HD6950 2GBs for the price of a single GTX580.

Ever since the introduction of the HD6950 2GB (which can be unlocked/overclocked to an HD6970), 2x HD6950s have been the better buy than a GTX580 anyway.

Of course, most users with a GTX580 are probably running two or more. At $500+, a single GTX580 is a horrible price/performance card. You can pick up a GTX570 for $280-300 or a GTX480 Lightning for $300. I think most understand that it carries a 30-40% price premium aimed at people who don't want multi-GPU setups and are willing to pay extra for the "fastest single GPU solution".

Personally, I'll stick with the 6950 and reinvest the savings into the HD7000/GTX600 series. After building computers for >10 years, I've learned that future-proofing is a laughable concept.

Twelve months from now, a next-generation $299 HD7000/GTX600 series card will smoke this one.
0 1 [Posted by: BestJinjo  | Date: 09/27/11 01:29:17 PM]

You can reduce other settings, like shadow detail from Ultra to Medium, which gives a big performance boost with very little difference in appearance. With those tweaks, Shogun 2 at high resolution + high AA would be perfectly playable if it weren't for the VRAM limitation. Also, Shogun 2 is not an FPS; you don't need 60+ fps.

Well, a card that is at least $100 more expensive should be faster. The point that the 1.5GB of VRAM craps out in some games/mods is still valid.

My point is that buying a $500 graphics card that is already at its limits in VRAM size with existing games is not a good investment. Future games/mods will certainly not have lower VRAM requirements. Another 1.5GB of VRAM would not have made a big difference in price, but it would have made a big difference in longevity, imho. And that is why I regard this card as a failure.
0 1 [Posted by: JuicyOrange  | Date: 09/28/11 12:55:45 AM]

I'm just wondering why AMD/ATI chips have had a maximum 256-bit memory interface so far (I'll skip the 2x256-bit dual-GPU cards), and whether they could make, for example, a 6970 with a 384-bit memory interface. Would it matter, given AMD's architecture?
I remember the days when we could see the huge performance difference between 64-bit and 128-bit video cards.
Could someone please tell me why AMD tops out at a 256-bit memory interface? Anyone? Sergey?
0 0 [Posted by: Pouria  | Date: 09/28/11 08:44:31 AM]

There are two different approaches you can use to arrive at a given memory bandwidth.

The first is NV's current approach: use a wider memory interface but slower memory chips (this makes the card's PCB more expensive, but slower GDDR5 memory is cheaper to purchase). It also avoids the need for a more robust memory controller, which might have trouble working at much higher GDDR5 frequencies (exactly what happens with Fermi). With a less complex memory controller, you can allocate more transistors to other facets of the GPU, such as tessellation/geometry processing.

For example, GTX580:

GDDR5 4008 MHz x 384-bit / 8 / 1000 = 192.38 GB/sec

AMD has a better memory controller that can work at much faster GDDR5 frequencies. As such, they can save money by using the cheaper 256-bit memory interface. The downside is that this memory controller takes up about 2x more GPU die area than the one found in Barts (i.e., HD6870).


For example, HD6970:

GDDR5 5500 MHz x 256-bit / 8 / 1000 = 176 GB/sec

If you were only looking at the 384-bit vs. 256-bit numbers, you might assume the GeForce has up to 50% more memory bandwidth. But you have to look at memory frequency too.
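The bandwidth arithmetic quoted above can be sketched in a few lines of Python (the effective GDDR5 clocks and bus widths are the figures from this post):

```python
def bandwidth_gbps(effective_mhz: float, bus_width_bits: int) -> float:
    """Memory bandwidth in GB/sec: effective clock (MHz) times bus width
    (bits), divided by 8 bits per byte, then by 1000 (MB/s -> GB/s)."""
    return effective_mhz * bus_width_bits / 8 / 1000

# GTX580: 4008 MHz effective GDDR5 on a 384-bit bus
print(round(bandwidth_gbps(4008, 384), 2))  # 192.38
# HD6970: 5500 MHz effective GDDR5 on a 256-bit bus
print(round(bandwidth_gbps(5500, 256), 2))  # 176.0
```

As the numbers show, the HD6970's much higher memory clock nearly closes the gap left by its narrower bus.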

The truth is, in many previous generations both NVIDIA and AMD/ATI have weighed a combination of factors in deciding which way to go. Both approaches have their advantages and disadvantages.

1) Do not assume that a wider memory interface (e.g., 320-bit > 256-bit) automatically results in a faster card. For example, the 8800GT 512MB (256-bit) is actually faster than the 8800GTS 640MB (320-bit) or the HD2900XT (512-bit).

2) Do not assume that higher memory bandwidth results in a much faster video card. There could be other, greater bottlenecks: shader computation, texture fill-rate, VRAM limitations, tessellation performance, etc.

For example, HD4890 has 124.8 GB/sec memory bandwidth vs. 128.3 GB/sec for the GTX560Ti. But the GTX560 Ti is way faster in all modern games.
1 0 [Posted by: BestJinjo  | Date: 09/28/11 09:26:41 AM]
Thank you for your information, BestJinjo!
I didn't know that ATI had already reached 512-bit with the HD2900XT.
Sure, I was considering the other specs too, but I meant to ask why these chips are still at 256-bit. From what you say, I think price is the most important reason... Thanks again.
0 0 [Posted by: Pouria  | Date: 09/28/11 04:24:32 PM]


