Microsoft Corp. plans to release a patch for the Xbox One game console that will boost the performance of the system’s graphics processing unit by 8% to 10%. The update will lift the compulsory reservation of GPU horsepower for processing Kinect’s video data.

The graphics sub-systems of the Microsoft Xbox One and Sony PlayStation 4 have a lot of similarities at the architectural level, as both are powered by AMD GCN [graphics core next] technology, but Sony’s solution offers higher performance. The Xbox One’s graphics processing unit features 768 stream processors, 48 texture units and 16 render output units, all running at 853MHz. To compensate for the relatively slow quad-channel DDR3 memory sub-system (with up to 68GB/s of bandwidth), the GPU is equipped with 32MB of embedded ESRAM offering 102.4GB/s of low-latency throughput. By contrast, Sony PlayStation 4’s graphics engine operates at 800MHz, but features 1152 stream processors across 18 compute units (each supposed to contain several texture address (TA) and texture filtering (TF) blocks) and a high-bandwidth GDDR5 memory bus (with 176GB/s of peak bandwidth).
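For context, peak single-precision throughput can be estimated as stream processors × 2 FLOPs per clock (one fused multiply-add) × clock speed. The short sketch below derives the gap purely from the figures quoted above:

```python
# Peak single-precision throughput: stream processors x 2 FLOPs/clock x clock.
def peak_gflops(stream_processors, clock_mhz):
    return stream_processors * 2 * clock_mhz / 1000.0

xbox_one = peak_gflops(768, 853)    # ~1310 GFLOPS (1.31 TFLOPS)
ps4 = peak_gflops(1152, 800)        # ~1843 GFLOPS (1.84 TFLOPS)

print(f"Xbox One: {xbox_one:.0f} GFLOPS, PS4: {ps4:.0f} GFLOPS")
print(f"Xbox One shortfall: {(1 - xbox_one / ps4) * 100:.0f}%")  # ~29%
```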

Not only does the Xbox One’s GPU have about 33% fewer stream processors than the PlayStation 4’s (roughly 29% lower peak throughput once clock speeds are factored in), but it also reserves 2% of GPU performance for processing Kinect’s audio data and 8% for processing Kinect’s video data. Basically, 10% of the GPU remains idle even at times when a game does not use the motion sensor. As a consequence, in some games (Tomb Raider 2013, for example) the PlayStation 4 renders 60 frames per second, whereas the Xbox One can only hit 30 frames per second.
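The promised 8% to 10% gain follows directly from those reservation figures: with 10% of the GPU set aside, games see 90% of its capacity, and returning the 8% video share raises that to 98%. A quick check:

```python
# GPU share available to games before and after the patch.
reserved_audio = 0.02   # Kinect audio share, still reserved after the patch
reserved_video = 0.08   # Kinect video share, freed by the patch

before = 1.0 - reserved_audio - reserved_video   # 0.90
after = 1.0 - reserved_audio                     # 0.98

print(f"Effective uplift: {(after / before - 1) * 100:.1f}%")  # ~8.9%
```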

Quite naturally, game developers need to learn how to use the Xbox One’s 32MB of ESRAM efficiently to speed up rendering of graphics-intensive video games; however, this will take time. In a bid to boost performance now, Microsoft is working on a patch that will make the 8% reservation of GPU horsepower for Kinect optional, reports HotHardware. As a result, games that do not use Kinect will be able to spend the additional resources on higher frame rates and/or better graphics quality. While 8% is not a lot, more tweaks here and there to improve the Xbox One’s GPU utilization can be expected over time.
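To see why the ESRAM takes effort to exploit, consider the footprint of a typical set of 1080p render targets against the 32MB budget. The buffer list below is an illustrative deferred-rendering setup, not a documented Xbox One configuration:

```python
# Approximate 1080p render-target footprints versus the 32MB ESRAM budget.
# The formats are illustrative assumptions, not an actual title's setup.
WIDTH, HEIGHT = 1920, 1080

targets_bpp = {
    "color (RGBA8)": 4,             # bytes per pixel
    "normals (RGBA8)": 4,
    "depth/stencil (D24S8)": 4,
    "HDR light buffer (RGBA16F)": 8,
}

total_mb = 0.0
for name, bpp in targets_bpp.items():
    size_mb = WIDTH * HEIGHT * bpp / 2**20
    total_mb += size_mb
    print(f"{name:28s} {size_mb:5.1f} MB")

print(f"{'total':28s} {total_mb:5.1f} MB of 32 MB ESRAM")  # ~39.6 MB
```

Even this plain setup overshoots 32MB, which is why developers have to tile render targets or shuffle some of them out to the slower DDR3 pool.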

It is worth mentioning that less than a quarter after launch, Microsoft is rebalancing the Xbox One platform towards higher performance in video games. The new Kinect 2 sensor remains an important part of the whole Xbox One project, but it is obvious that right now the software giant needs to concentrate on improving the key part of the platform: the graphics processing unit.

Tags: Microsoft, Xbox One, Xbox, AMD, Radeon, GCN

Discussion

Comments currently: 7
Discussion started: 01/31/14 09:36:47 AM
Latest comment: 02/04/14 11:44:49 AM


1. 
30 fps vs 60 fps is a really really BIG difference.
[Posted by: TAViX | Date: 01/31/14 09:36:47 AM]

 
Yeah, I think they are going to need a bit more performance tweaking before they get to the point where they can run 60fps in all games, because I do not think that 10% more performance is going to be enough to reach 60fps.
[Posted by: william fryman | Date: 01/31/14 11:27:49 AM]

2. 
If they boost performance to accommodate the Kinect controller, it means they have had complaints that it interferes with performance.
That is something they should have noticed during the testing period, instead of rushing it to market.
[Posted by: caring1 | Date: 01/31/14 08:35:01 PM]

3. 
Both M$ and Sony had better start ordering more powerful processors from AMD, or others, because there is only a short window remaining before AMD and Nvidia begin to merge the CPU with the discrete GPU and produce an entire gaming console on a PCI card. Maxwell is Nvidia's reply to the AMD dedicated gaming console APUs of the Xbone and PS4! Just as the API wars are beginning between Mantle, DX*.*, and OpenGL, there will be a war to get CPUs up close and connected to the GPU, low-latency-wise, sharing the same fat data bus. And the best way to reduce latency between the CPU and a discrete GPU is to merge the two on the graphics card, making the discrete GPU into a complete APU with its own OS, GDDR5 memory, large on-die RAM (to boost the on-card gaming OS and engine), and a CPU/GPU combo sharing the same on-die memory controller and a unified memory address space.

The beginnings of this are already here: Nvidia's Maxwell may start out with a single Denver ARM ISA based core, but competition will force Nvidia to add more Denver cores to Maxwell to compete with AMD, as AMD begins to rework its gaming console APUs into more powerful discrete PCI-based complete gaming platforms on a card. This is where gaming is going to evolve, driven by the need for lower latency between CPU and GPU. That need is currently the driving force behind the API improvements of Mantle, but the real solution to the latency issue is to merge the CPU with the GPU, and discrete GPUs will evolve into complete PCI-based gaming APUs that are consoles/computers unto themselves, all on one card.
[Posted by: BigChiefRunAmok | Date: 02/02/14 11:40:36 AM]

 
It would be really dumb to make discrete PCI card machines. Not only would you have to pay out the butt for such a compact item but you would have to purchase a motherboard, proc, and ram just to be able to run the machine. It's not like you can just have a bare PCI express slot with nothing holding it.

On GPU latency, the best way to address that is to continue improving the PCIe standard. Above that, minor hardware improvements like hUMA and new APIs are what they should push for on the software side.
[Posted by: evernessince | Date: 02/03/14 11:55:47 PM]
 
People pay out the nose, ears and butt for graphics cards, and Maxwell is getting a Denver ARM ISA based core added to the GPU. PCIe requires encoding/encapsulating and decoding/de-encapsulating of data, and that will always carry overhead and introduce latency! Getting a CPU into as close a proximity to the GPU as possible, and having the CPU/GPU share a large on-die RAM, a fat data bus, and the GDDR5 memory (on the PCI card!) is the way to go! UMA (hUMA is just a fancy AMD marketing term) stands for unified memory access, and it saves having to move huge amounts of data between non-unified memory address spaces (one 64-bit pointer takes much less time to transfer than a whole butt-load of data takes the old non-unified way).

Discrete GPUs already have 90% of what it takes to be a full general-purpose computer: memory, a memory controller, GDDR5, a data bus, and other on-die control blocks. So adding a general-purpose CPU to the vector processor (GPU) is just a matter of adding another on-die functional block of logic, and both the CPU and the GPU can share the GPU's on-die memory controller; hell, most memory controllers are almost, if not actually, CPUs in their own right! How much space would an on-die CPU take up on the discrete GPU's die? Not very much: AMD crams 8 cores onto its console APUs, and one of them has a large on-die RAM, so the CPU/GPU combo (APU, or whatever name the marketing monkeys think up) is not going to take up more than a few square millimeters of die.

If you do not think a discrete GPU has a de facto motherboard on that PCI card, then how does it use the GDDR5 memory and the fat GPU-style data bus? It is the same thing; it is just referred to as a daughter card with its own memory and address bus, and daughters do become mothers: just look at the big-iron servers and HPC/supercomputers, with thousands of motherboards each hosting independent PCI-based computer systems, 8 or more PCI slots per motherboard. If the CPU and the GPU are both on die, they can communicate over the internal on-die data bus, and if the CPU/GPU has a large on-die RAM, then most of the time they will not have to deal with PCI-based transfers at all, since the on-die RAM is attached directly to the high-speed internal bus. If some code does not already reside in the large on-die RAM, the memory controller will do its job, and if the gaming engine and vital OS functions reside in the on-die RAM (with proper caching functionality they will), then games will run a whole butt-load faster on a PCI-based gaming platform.
[Posted by: BigChiefRunAmok | Date: 02/04/14 08:50:18 AM]
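A minimal sketch of the bandwidth arithmetic behind the unified-memory argument above, assuming a hypothetical 256MB buffer and roughly 15.75GB/s of practical PCIe 3.0 x16 throughput (both figures are assumptions for illustration, not from the comment):

```python
# Handing a buffer to the GPU: full copy over PCIe versus passing a pointer
# in a unified address space.
buffer_bytes = 256 * 2**20      # hypothetical 256MB of texture/vertex data
pcie3_x16_bps = 15.75e9         # assumed practical PCIe 3.0 x16 bandwidth

copy_ms = buffer_bytes / pcie3_x16_bps * 1000
print(f"full copy over PCIe: ~{copy_ms:.0f} ms")   # ~17 ms
print("unified memory: hand over one 8-byte pointer instead")
```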
 
"It would be really dumb to make discrete PCI card machines. Not only would you have to pay out the butt for such a compact item but you would have to purchase a motherboard, proc, and ram just to be able to run the machine. It's not like you can just have a bare PCI express slot with nothing holding it."

Where in the name of [insert favorite deity here, or other] did you get the idea that I meant just the PCI-based gaming system, without a motherboard to hold up the slot? The motherboard is there to host the general-purpose OS; the complete gaming system on a PCI card has its own gaming-optimized OS distro (SteamOS, or other) and the game/gaming engine loaded at the time, so it does not need the assistance of the motherboard CPU or anything else once the PCI-based gaming system is booted up and has loaded its gaming OS and game. In fact, the motherboard OS does not have to do any work other than monitoring the system, or assisting at boot-up by passing the game/gaming engine to the gaming OS on the PCI card; the PCI-based gaming platform is quite capable of doing its own disk I/O and OS booting by itself, as any computer can, via the bus-mastering and DMA circuitry that have been part of motherboard standards for years.

Just a question, evernessince: have you ever looked at the chips on a discrete GPU? Do you not see the memory on the PCI card, the GDDR5 memory ICs, the data and address bus traces, and the GPU (vector processor) with its on-die memory controller (Nvidia since Fermi, AMD with its APUs/discrete GPUs)? And just because current discrete GPUs do not have branch prediction units and such, you think they are not processors in their own right for vector computing tasks (gaming graphics), and that they do not require memory controllers and the like for their PCI-based graphics computing, when in fact the difference between the CPU-and-GPU motherboard platform and a complete PCI daughter-card computing platform may be as little as a branch prediction unit and a few other bits of logic added to the GPU. GPUs are computers, just not general-purpose computers, and discrete GPU-only cards are still computing platforms (vector computers).

Most people own a desktop or two, and a laptop, and having a PCI gaming platform (AMD gaming APU based, or an Nvidia Denver ARM ISA APU equivalent) on a PCI card would make any old desktop a gaming console, or two (for desktops with more PCI slots). Who would not like that? And for those with no desktop, get a Steambox; they are mostly complete desktops in a small form factor (for some SKUs), or larger for others.
[Posted by: BigChiefRunAmok | Date: 02/04/14 11:44:49 AM]

