An important characteristic of every graphics card, the amount of graphics memory determines the card’s ability to store a large number of textures and other data locally, avoiding the need to access system memory or the hard disk. Each such access hurts overall performance because even on inexpensive graphics cards local graphics memory is usually faster to access than system memory, let alone the hard disk. It is therefore highly desirable to keep all textures and other data in the card’s local memory in order to achieve maximum performance in games.
Today, top-end gaming graphics cards come with 512MB to 1GB of graphics memory. Such solutions usually have a 256-bit or wider memory bus, from 16 to 32 texture-mapping units (TMUs), and from 16 to 24 raster operation units (ROPs). For such devices the amount of graphics memory can be a significant performance-determining factor, especially at high resolutions with full-screen antialiasing enabled. But is the same true for cheaper and less advanced solutions?
Graphics cards weaker than the ATI Radeon HD 2900 and Nvidia GeForce 8800 usually have 8-12 (occasionally 16) TMUs and ROPs and a 128-bit memory bus, and their memory is also clocked at a rather low frequency. These factors may act as a bottleneck in games, negating the influence of the amount of graphics memory, while the memory frequency plays a much greater part.
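To see why bus width and memory frequency weigh so heavily on such cards, note that peak memory bandwidth is simply the bus width (in bytes) multiplied by the effective transfer rate. A minimal sketch of the arithmetic; the clock figures below are hypothetical but typical for the two classes of card discussed:

```python
def peak_bandwidth_gbs(bus_width_bits: int, effective_clock_mhz: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer times transfers per second."""
    return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

# Hypothetical but representative figures:
# a budget card with 128-bit DDR2 vs. a high-end card with 256-bit GDDR3.
budget = peak_bandwidth_gbs(128, 800)     # 800 MHz effective DDR2
high_end = peak_bandwidth_gbs(256, 2000)  # 2000 MHz effective GDDR3
print(f"128-bit DDR2:  {budget:.1f} GB/s")   # 12.8 GB/s
print(f"256-bit GDDR3: {high_end:.1f} GB/s") # 64.0 GB/s
```

The five-fold gap illustrates why, on a 128-bit card with slow memory, the chips simply cannot be fed fast enough for a large frame buffer to pay off.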
However, graphics card makers sometimes install quite a lot of memory on inexpensive cards, and it is often slow memory working at a reduced frequency compared with the reference version. They pursue two goals at once by doing so: they make the product more appealing, since an inexperienced user is likely to respond to big numbers, and they also clear out their stocks of slow memory chips. Is this approach justifiable for graphics cards priced below $100? We’ll try to find out by benchmarking a new card called the PowerColor HD 2600 Pro 512MB DDR2.