Nvidia Corp.'s next-generation Kepler architecture for graphics processing units (GPUs) and compute accelerators promises new levels of performance and a new set of DirectX 11.1 capabilities. However, it will take Nvidia almost a year to fully roll out the Kepler family, according to newly published information.

In a bid to avoid a massive delay of the whole family, Nvidia will start by introducing relatively simple products, code-named GK107 and GK106, according to information published by the 4Gamer.net web-site. Although the GK107 (128-bit memory bus) will support DirectX 11.1, it will not feature PCI Express 3.0, unlike the GK106 (256-bit memory bus). The more powerful GK104 will feature PCIe 3.0 and a 384-bit bus, whereas the GK110 is projected to carry two such chips. Both the GK104 and the GK110 will be available later than the less advanced parts. The most advanced Kepler-family chip will be code-named GK112; the product is projected to feature a 512-bit memory bus. This flagship single-chip solution will be the last in the Kepler 1.0 family and will presumably be released towards the end of 2012.

All Kepler-generation chips will be made using 28nm process technology at Taiwan Semiconductor Manufacturing Company. Thanks to the new fabrication process, the Kepler products have proved to be very efficient and competitive in the mobile space, according to Nvidia. The decision to address mobile computers first is part of what forced Nvidia to concentrate on developing entry-level solutions to begin with.

Kepler is Nvidia's next-generation graphics processor architecture that is projected to bring considerable performance improvements and will likely make the GPU more flexible in terms of programmability, which should speed up development of applications that take advantage of GPGPU (general-purpose processing on GPU) technologies. Some of the technologies that Nvidia has promised to introduce in Kepler and Maxwell (the architecture that will succeed Kepler) include virtual memory space (which will allow CPUs and GPUs to use a "unified" virtual memory), pre-emption, and an enhanced ability of the GPU to process data autonomously without the help of the CPU. Entry-level chips may not get all the features that the Kepler architecture has to offer.
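
To make the "unified" virtual memory idea concrete, below is a minimal sketch of what such sharing could look like to a programmer. It is written against the managed-memory call (cudaMallocManaged) that CUDA eventually shipped; treat it as an illustration of the concept under that assumption, not a description of how Kepler actually implements the feature.

// Hypothetical sketch: a single allocation visible to both CPU and GPU,
// so data moves without explicit cudaMemcpy calls. Uses the CUDA
// managed-memory API for illustration; not Kepler-specific behavior.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;   // GPU writes through the shared pointer
}

int main() {
    const int n = 1 << 20;
    float *data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));    // one shared virtual address
    for (int i = 0; i < n; ++i) data[i] = 1.0f;     // CPU writes directly
    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f); // GPU uses the same pointer
    cudaDeviceSynchronize();                        // wait before CPU reads back
    printf("data[0] = %f\n", data[0]);              // prints 2.000000
    cudaFree(data);
    return 0;
}

The point of the example is what is absent: there are no separate host and device buffers and no explicit copies, which is the kind of programming-model simplification the unified virtual memory promise is about.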

Nvidia did not comment on the news story.

Tags: Nvidia, Geforce, 28nm, Kepler

Discussion

Comments currently: 7
Discussion started: 11/29/11 02:19:14 PM
Latest comment: 12/05/11 04:08:41 AM


2. 
So NVIDIA's Kepler is the same thing as Graphics Core Next from AMD.
✓ Virtual memory.
✓ Pre-emption.
✓ GPU autonomy (interrupts, etc.).

This time AMD has a good head start, if those roadmaps are correct.
[Posted by: Filiprino | Date: 11/29/11 02:43:57 PM]

Making games that will crush yer supercomputer is trivial. The most talented minds compromise to painful levels to bring creativity with crippled innovation. Waiting out this generation of consoles is all that separates your machine from utter humiliation. Lowest-common-denominator greed on a corporate level. "If you do not buy the hardware, the game will not be made" vs. "if you do not make the game, the hardware will not be bought": an endless Catch-22.

A good creative-centric post by industry veterans suffering the stale end of consolitis:

http://www.polycount.com/...ad.php?t=91152&page=3


[quote]I think an important factor which is completely ignored with graphics is its effect on gameplay.

Every one of these fantastic-looking games on the consoles today has earned its looks by using a ton of tricks, having a very focused scene of gameplay, and faking everything outside of it. A ton of budget goes into developing systems that can do these things or stream content, and a ton of artists have to spend time on things that really don't add to the art at all: optimizations, planning to make something cost less than it does, building LOD steps, and remaking stuff because they already baked something.

It's not just about the art; it's about what kind of worlds we can create that are actually real and going on, without having to rely on faking everything outside of the player's near view. It's about having the same kind of fidelity but being able to make these Elder Scrolls types of games where you can just build the world and not have to worry about how to solve the backdrop.



There's a ton of money going into trying to fake things to seem like what they are not, while in fact, with current PC hardware having reached levels exponentially higher than the consoles, we could do some really real experiences without having to cut down on the fantastic level of technical achievement we have managed to reach, and the artwork we could do with those tools.[/quote]

If you are an artist, then grow a pair and be an artist. The very thought of what I might be able to do with shadows and light in a future without the current console economic ball and chain is liberating in itself. A beautiful hyperreal future better than realism does not suffer from anything uncanny:


[Video: Geomerics Enlighten Demo at GDC 2010 - YouTube]


I am not sure how a new generation of consoles will introduce any concerns that would make visuals more expensive (executing new technologies may require more programming development), but an army of artists with an interest in, and a savvy for, next-gen 2.0 is just that: the team does not have to grow if you already have one that was hired for "new" and has proven it can handle it. It's all gravy now (or should be); the pain of transitioning happened already. You as a specialist have already been hired (back when the industry concentrated on elevated graphics in the first place).


[quote]To some extent, the art content being produced for your average game is already way overdetailed for console spec. Just look at the high-resolution sub-d and sculpted meshes for characters, weapons, etc. in most games. Then come to the realization that these are being sized down to 512x512 to fit on consoles. This is excess on the source pipeline in the first place, but considering it, it's not going to be all that crazy to simply pump out models with higher poly budgets and texture resolutions.

Personally, I have to do a lot more work when I know I have a low polygon budget and texture resolution; much more work goes into attention to detail and scale, as well as complexity of silhouette. You may think creating a 2,000-poly model vs. a 4,000-poly one is easier, but that's not really the case.

In addition to all of that, better-spec'd consoles can help automate and just generally improve a lot of visual aspects with little or no need for extra content: realtime radiosity, reflections, better dynamic animations, better shaders, etc.

Not to mention the VAST potential for improvement in all of that stuff that, you know, makes games fun, like AI, gameplay systems and physics that are incredibly processor-intense.[/quote]

What irks me is the complete ignorance of what could be possible by those who prefer the comforting thought that their hardware is still "boss".

It is not.
That day is not even close.
I have seen behind the curtain,
and "trust me... you WILL want to buy that upgrade when that day comes".
[Posted by: claydough | Date: 12/05/11 04:08:41 AM]

