Nvidia Corp. said that it had not disabled support for multi-core central processing units (CPUs) in its PhysX application programming interface (API) and that the allegations made by Advanced Micro Devices were not true.

“This is yet another completely unsubstantiated accusation made by an employee of one of our competitors.  I am writing here to address it directly and call it for what it is, completely false.  Nvidia PhysX fully supports multi-core CPUs and multithreaded applications, period.  Our developer tools allow developers to design their use of PhysX in PC games to take full advantage of multi-core CPUs and to fully use the multithreaded capabilities,” said Nadeem Mohammed, director of PhysX product management at Nvidia.

Mr. Mohammed worked on PhysX first at Ageia and then at Nvidia, so he is more than familiar with the physics effects processing technology. According to Mr. Mohammed, since the merger with Nvidia there have been no changes to the software development kit (SDK) code “which purposely reduced the software performance of PhysX or its use of CPU multi-cores”.

In fact, according to the representative from Nvidia, it is technically impossible for the API developer to disable support for execution on a certain number of CPU cores.

“Our PhysX SDK API is designed such that thread control is done explicitly by the application developer, not by the SDK functions themselves. One of the best examples is 3DMarkVantage which can use 12 threads while running in software-only PhysX. This can easily be tested by anyone with a multi-core CPU system and a PhysX-capable GeForce GPU. This level of multi-core support and programming methodology has not changed since day one. And to anticipate another ridiculous claim, it would be nonsense to say we “tuned” PhysX multi-core support for this case,” said Mr. Mohammed.
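
A minimal sketch of the threading model described in the quote above: the application, not the SDK, decides how many worker threads run the physics workload. The stepIslandOnCpu() function below is a hypothetical placeholder for whatever per-scene or per-island simulation call a real PhysX-based title would make; it is not part of the PhysX API.

```cpp
// Sketch only: the application explicitly fans its physics work out across
// its own worker threads, which is the thread-control model Mr. Mohammed
// describes. stepIslandOnCpu() is a placeholder workload, not a PhysX call.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

static std::atomic<int> islandsProcessed{0};

// Placeholder for the per-thread physics workload the application schedules.
static void stepIslandOnCpu(int islandId, float dt)
{
    // ... rigid-body integration, collision resolution, etc. would go here ...
    (void)islandId; (void)dt;
    islandsProcessed.fetch_add(1, std::memory_order_relaxed);
}

int main()
{
    const float dt = 1.0f / 60.0f;          // one 60 Hz simulation step
    const unsigned hw = std::thread::hardware_concurrency();
    const unsigned workers = hw ? hw : 2;   // use every core the CPU exposes

    // The game, not the SDK, creates and joins the worker threads.
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < workers; ++t)
        pool.emplace_back(stepIslandOnCpu, static_cast<int>(t), dt);
    for (auto& th : pool)
        th.join();

    std::printf("Stepped %d islands on %u worker threads\n",
                islandsProcessed.load(), workers);
    return 0;
}
```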

Last week Richard Huddy, a software developer relations manager at AMD, accused Nvidia of tweaking the PhysX API so that physics effects are processed on only two microprocessor cores in order to artificially make graphics processing units seem like a better solution for computing physics effects in games that use PhysX. Back in 2009 Nvidia disabled support for PhysX on the company's own GeForce GPUs as well as on Ageia PhysX physics processing cards when its drivers detected ATI Radeon hardware in the system.

Tags: Nvidia, PhysX, ATI, AMD, Radeon, Geforce, Ageia

Discussion

Comments currently: 8
Discussion started: 01/21/10 06:17:58 AM
Latest comment: 01/26/10 12:55:54 AM


1. 
Now we need some journalists to ask the game developers why they chose not to use the multithreading capabilities.
0 0 [Posted by: uibo  | Date: 01/21/10 06:17:58 AM]

 
Of course, bro, I don't believe AMD's accusation against Nvidia...
0 0 [Posted by: samueldiogo  | Date: 01/21/10 10:50:28 AM]

2. 
Nvidia is a monopolist and that is very clear. Now a court must say something.
0 0 [Posted by: Blackcode  | Date: 01/21/10 07:07:51 AM]

3. 
Really? Says the company that disables PhysX when an ATi card is in the system.
0 0 [Posted by: RtFusion  | Date: 01/21/10 07:43:57 AM]

4. 
Seems like an easy theory to prove or disprove. I vote all ATI users set up and contribute to a fund for the purchase and implementation of the PhysX SDK!
0 0 [Posted by: Borat Demerjian  | Date: 01/21/10 04:25:16 PM]

5. 
The API allows multi-core and leaves it to the devs to pick that - they don't, because the game was most likely written for the triple-core Xbox, and hence they tweak the physics so it runs fine on one core, leaving the other two to do the rest of the game. This stays when the game gets ported to the PC. It doesn't matter, as the physics are simple enough to run fine on one core.

If a dev were making a PC-only game and wanted fancier software physics, I am sure they could use more threads, like the Nvidia bloke says.

However, the special extensions to games like Batman: AA aren't really written by the game devs; they are largely done by Nvidia. They obviously run in hardware on an Nvidia GPU, but the software fallback for when you don't have an Nvidia GPU doesn't seem to be multi-threaded. I suspect making it multi-threaded would take more work, and since the whole point is to sell Nvidia GPUs, not AMD/Intel multi-core CPUs, they aren't going to put in that work.

Hence, if it's the dev's own work, integral to the game, you are fine; if it's a specific Nvidia extension put into the game to sell Nvidia GPUs, you'd better have one or you will suffer.
0 0 [Posted by: Dribble  | Date: 01/22/10 02:53:23 AM]

6. 
This was demonstrated by TomsHardware a while ago. They tested CPUs - single-, dual-, triple- and quad-core. The article was dedicated to CPU and GPU performance in Batman with PhysX enabled/disabled. What they noticed was that, when PhysX was disabled, most CPUs were utilized up to 90% most of the time and the FPS was high. When they enabled PhysX, the CPU utilization would drop to 30%. This signals two things: 1) on Nvidia GPU powered systems, the GPU takes care of the PhysX processing and the CPU may be free; but 2) on ATi GPU powered systems the FPS is capped, as dual-, triple- and quad-core CPUs get the same FPS number while the CPU utilization is around 30-40%. Wasn't the CPU supposed to work at 100% just to keep up with the Nvidia GPU's PhysX processing? Wasn't there supposed to be higher CPU utilization with PhysX enabled, because there is no Nvidia GPU present to do the PhysX work?

Also, looking at their graphs, the guys @ Tom's noticed that even on a Core i7, with PhysX disabled only two cores are used. With PhysX enabled, a single core is working. There you have it: in the TWIMTBP program the games are worked on by Nvidia so that, even if you don't get better performance with their cards, they'll make sure that you get lower performance if you have an ATi card or if you don't use their PhysX accelerator and want to use a quad-core instead.

The simple explanation is that, just as GearBox did with Borderlands, the game developers don't really involve themselves with the Nvidia optimization. If they're in the TWIMTBP program, they just send the source to Nvidia, let Nvidia's men do the optimizations, and get the game engine source back. I know this about Borderlands because Nvidia compiled physxcudart_20.dll with their compiler only for SSE2-endowed CPUs, so anybody with an AMD CPU that had no SSE2 was getting an ugly set of 5 errors. I just took the file recompiled for generic x86 CPUs and finished Borderlands on a non-SSE2 CPU, a Barton at 2300 MHz with a HiS X1950 Pro 512 MB AGP.

Nvidia has a tradition of cheating. ATi cheated in the past too, but ATi's cheating was reserved to hardcore "optimizations" in their drivers, and no handicap was included in games that were optimized for their GPUs. Nvidia, on the other hand, is famous for the fact that any game included in the TWIMTBP program is capped on ATi cards. Try studying the CPU utilization graph on your own system if you have an ATi GPU and a TWIMTBP game (a rough way to log it is sketched below). You'll see that the CPU is not fully utilized, meaning that the performance is capped. Then ask a friend to lend you an Nvidia card and watch as the CPU utilization will not go below 80-90%.
0 0 [Posted by: East17  | Date: 01/25/10 02:50:31 AM]
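
A rough, hypothetical way to do what this comment suggests - log whole-system CPU utilization while a game runs and see whether it stays pinned low with PhysX effects enabled - is sketched below. It is Windows-only, relies on the Win32 GetSystemTimes() call, and is not tied to whatever tools Tom's Hardware actually used.

```cpp
// Sketch: print overall CPU utilization once per second while a game runs.
// Start it, play for a while, stop it with Ctrl+C, and compare the numbers
// with PhysX effects enabled vs. disabled.
#include <windows.h>
#include <cstdio>

// Convert a FILETIME (100 ns ticks) into a plain 64-bit integer.
static unsigned long long toU64(const FILETIME& ft)
{
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart;
}

int main()
{
    FILETIME idle0, kern0, user0;
    GetSystemTimes(&idle0, &kern0, &user0);

    for (;;) {
        Sleep(1000);                                   // sample once per second
        FILETIME idle1, kern1, user1;
        GetSystemTimes(&idle1, &kern1, &user1);

        const unsigned long long idle  = toU64(idle1) - toU64(idle0);
        const unsigned long long kern  = toU64(kern1) - toU64(kern0); // includes idle
        const unsigned long long user  = toU64(user1) - toU64(user0);
        const unsigned long long total = kern + user;

        if (total)
            std::printf("CPU utilization: %.1f%%\n",
                        100.0 * double(total - idle) / double(total));

        idle0 = idle1; kern0 = kern1; user0 = user1;
    }
    return 0;
}
```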

7. 
ATI is developing an open physics standard and a free implementation, especially on ATI GPUs and AMD CPUs. Nvidia is making the same mistakes as 3DFX: being too proprietary, closed and limited in support, versatility and flexibility towards developers, customers and the whole IT industry. As a result Nvidia PhysX, which is a great piece of technology, is extremely limited in exposure on the market and towards customers - gamers - e.g. by disabling Nvidia PhysX when working with ATI cards.

On the bright side, Nvidia Fermi is scheduled for March. The Fermi 360 and 380, aka GeForce 360 and GeForce 380, are expected between the 2nd and 10th of March 2010. The dual Fermi 380x2, aka GeForce 395, is at this stage expected May 2nd-10th, 2010. Expect Nvidia's refreshed Fermi lineup of video cards in Q3 2010 - a die shrink to 32nm or 28nm, refined GPUs, optimization, better scalability, higher clocks, an increased number of shaders and other tweaks of the existing Fermi.

ATI's refreshed GPU lineup, scheduled for Q3 2010 around September, is a shrunk Evergreen core, likely 32nm, with increased GPU core clocks, increased memory clocks and an increased number of shaders, under new naming variants of the Evergreen lineup, e.g. 5860 etc.

Competition is good as it gives customers more choice and a lower price tag ;-D.
0 0 [Posted by: Aleksa  | Date: 01/26/10 12:55:54 AM]

