
PhysX uses x87 code?

Joined
Apr 4, 2008
Messages
4,686 (0.80/day)
System Name Obelisc
Processor i7 3770k @ 4.8 GHz
Motherboard Asus P8Z77-V
Cooling H110
Memory 16GB(4x4) @ 2400 MHz 9-11-11-31
Video Card(s) GTX 780 Ti
Storage 850 EVO 1TB, 2x 5TB Toshiba
Case T81
Audio Device(s) X-Fi Titanium HD
Power Supply EVGA 850 T2 80+ TITANIUM
Software Win10 64bit
I didn't see anyone else post this, and I found it entertaining, or depressing. http://realworldtech.com/page.cfm?ArticleID=RWT070510142143&p=1

x87 has been deprecated for many years now, with Intel and AMD recommending the much faster SSE instructions for the last 5 years. On modern CPUs, code using SSE instructions can easily run 1.5-2X faster than similar code using x87. By using x87, PhysX diminishes the performance of CPUs, calling into question the real benefits of PhysX on a GPU.

The bottom line is that Nvidia is free to hobble PhysX on the CPU by using single threaded x87 code if they wish. That choice, however, does not benefit developers or consumers, and casts substantial doubts on the purported performance advantages of running PhysX on a GPU, rather than a CPU.

There were all sorts of reasons to hate PhysX already; this just makes the whole thing seem like a scam...
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,049 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
sigh .. already saw discussion of this at xs ..

deprecated? author needs to learn what deprecated means in developer speak. floating point is a feature of x86 cpus and not going away

sse is not just a magical checkbox that you can turn on and off at compile time, you have to actively port your code which costs time and money.

you could make the same argument why microsoft doesnt just compile windows for the iphone
 
Joined
Jan 2, 2008
Messages
3,296 (0.55/day)
System Name Thakk
Processor i7 6700k @ 4.5Ghz
Motherboard Gigabyte G1 Z170N ITX
Cooling H55 AIO
Memory 32GB DDR4 3100 c16
Video Card(s) Zotac RTX3080 Trinity
Storage Corsair Force GT 120GB SSD / Intel 250GB SSD / Samsung Pro 512 SSD / 3TB Seagate SV32
Display(s) Acer Predator X34 100hz IPS Gsync / HTC Vive
Case QBX
Audio Device(s) Realtek ALC1150 > Creative Gigaworks T40 > AKG Q701
Power Supply Corsair SF600
Mouse Logitech G900
Keyboard Ducky Shine TKL MX Blue + Vortex PBT Doubleshots
Software Windows 10 64bit
Benchmark Scores http://www.3dmark.com/fs/12108888
But it sure will be a blast if they rewrite/optimize the code utilizing SSE.. we would see GPU-accelerated-like physics effects using the CPU.
 

ahmedz_1991

New Member
Joined
Mar 9, 2009
Messages
6 (0.00/day)
Location
Egypt
System Name Windows® 7 x64 Ultimate
Processor Intel® Core2 Duo E4400 @ 2.00 GHz
Motherboard Intel® D946GZIS
Memory Nanya® 2 GBs PC2-5300
Video Card(s) Built-in Intel® 946GZ
Storage WD® 160 GBs
Audio Device(s) Built-in Intel® D946GZISSL
That's what I wanted to know: is NVidia trying to better the graphics, or just better the image of its GPUs?
 

Kreij

Senior Monkey Moderator
Joined
Feb 6, 2007
Messages
13,817 (2.20/day)
Location
Cheeseland (Wisconsin, USA)
The fact that the PhysX libraries and drivers use x87 code is really not that big of a deal.

This however ...
Moreover, PhysX code is automatically multi-threaded on Nvidia GPUs by the PhysX and device drivers, whereas there is no automatic multi-threading for CPUs

... would seem to indicate that before Nvidia snatched up Ageia, they (Ageia) perhaps limited the CPU performance (by limiting thread execution) so their PPUs would outperform CPUs. Nvidia seems to have simply followed suit.
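
Just to illustrate what explicit CPU multi-threading of a physics step would look like, here's a rough sketch of my own (not PhysX SDK code; Body, integrateBody and stepPhysics are made-up names):

#include <algorithm>
#include <thread>
#include <vector>

struct Body { float pos[3], vel[3]; };      // placeholder rigid-body state

void integrateBody(Body& b, float dt) {     // hypothetical per-body update
    for (int i = 0; i < 3; ++i) b.pos[i] += b.vel[i] * dt;
}

// Split one physics step across all hardware threads: the kind of explicit
// multi-threading the quoted article says the CPU path never gets.
void stepPhysics(std::vector<Body>& bodies, float dt) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    size_t chunk = (bodies.size() + n - 1) / n;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t) {
        size_t begin = t * chunk, end = std::min(bodies.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back([&bodies, dt, begin, end] {
            for (size_t i = begin; i < end; ++i) integrateBody(bodies[i], dt);
        });
    }
    for (auto& w : workers) w.join();
}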

Nvidia purchased Ageia in 2008, so they would have had time to change the CPU code path, but that would not really make much sense as they are not promoting CPUs, and it would not have been a wise decision from a (GPU) marketing standpoint.

That being said, Nvidia owns the code and I stand behind their right to do whatever they want with it. They are, after all, in the business to make money.
 
Joined
Nov 2, 2008
Messages
887 (0.16/day)
Processor Intel Core i3-8100
Motherboard ASRock H370 Pro4
Cooling Cryorig M9i
Memory 16GB G.Skill Aegis DDR4-2400
Video Card(s) Gigabyte GeForce GTX 1060 WindForce OC 3GB
Storage Crucial MX500 512GB SSD
Display(s) Dell S2316M LCD
Case Fractal Design Define R4 Black Pearl
Audio Device(s) Realtek ALC892
Power Supply Corsair CX600M
Mouse Logitech M500
Keyboard Lenovo KB1021 USB
Software Windows 10 Professional x64
The fact that the PhysX libraries and drivers use x87 code is really not that big of a deal.

Read page 4, "Why x87?", again:

x87 uses a stack of 8 registers with an extended precision 80-bit floating point format. However x87 data is primarily stored in memory with a 64-bit format that truncates the extra 16 bits. Because of this truncation, x87 code can return noticeably different results if the data is spilled to cache and then reloaded. x87 instructions are scalar by nature, and even the highest performance CPUs can only execute two x87 operations per cycle.

In contrast, SSE has 16 flat registers that are 128 bits wide. Floating point numbers can be stored in a single precision (32-bit) or double precision (64-bit) format. A packed (i.e. vectorized) SSE2 instruction can perform two double precision operations, or four single precision operations. Thus a CPU like Nehalem or Shanghai can execute 4 double precision operations, or 8 single precision operations per cycle. With AVX, that will climb to 8 or 16 operations respectively. SSE also comes in a scalar variety, where only one operation is executed per instruction. However, scalar SSE code is still somewhat faster than x87 code, because there are more registers, SSE instructions are slightly lower latency than the x87 equivalents and stack manipulation instructions are not needed. Additionally, some SSE non-temporal memory accesses are substantially faster (e.g. 2X for AMD processors) as they use a relaxed consistency model.
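
To make the packed-vs-scalar point concrete, here's a minimal sketch of my own (not from the article): one packed SSE instruction adds four single-precision floats at once, where x87 would need four scalar operations plus stack shuffling for the same work.

#include <xmmintrin.h>  // SSE intrinsics (GCC, MSVC)

// Four float additions performed by a single packed SSE instruction (addps).
void add4(float* dst, const float* a, const float* b) {
    __m128 va = _mm_loadu_ps(a);             // load 4 unaligned floats
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(dst, _mm_add_ps(va, vb));  // 4 adds in one instruction
}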

The only reason to use those antiquated x87 instructions in this day and age is to artificially skew the performance differences between PhysX on the GPU versus the CPU. This is as bad as cooking your graphics drivers so they perform better in the benchmarks but not in real-world use. :shadedshu
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
For me that article lacks any relevance, since it's the GPU code path that is being used. That code has been optimized to run on the GPU, not on the CPU, so taking it, benchmarking it on a CPU and claiming that it's not optimized for the CPU is totally pointless. Of course it's not.

Until I see the same tests made on a game with the "advanced GPU PhysX" checkbox disabled, I'm not going to take anything seriously. CPU PhysX is at no disadvantage to other physics APIs, so it is very possible that the CPU code path does use SSE. Why Nvidia should optimize for the CPU something that is supposed to run on the GPU, I don't know. I'll wait until he makes the other tests he mentions. Considering that Havok is not any faster than software PhysX, we may even discover that Havok uses x87 too, or that it does use SSE but it makes no difference.

This article has been discussed in many forums, and quite a few people have mentioned that the PhysX that Nvidia is using is the one that came from Meqon, which was developed long before SSE was out, and like W1zzard said, porting something is not a checkbox you can turn on and off. So again, if the GPU code path, the one that kicks in when you turn the "GPU accelerated PhysX" checkbox on, was going to be used on GPUs and the CPU would only be used for small tasks, why on earth should they spend weeks and loads of money optimizing it for a CPU? What does it matter if 3% (good optimization) or 6% (bad) of the CPU is being used?
 

EastCoasthandle

New Member
Joined
Apr 21, 2005
Messages
6,885 (0.99/day)
System Name MY PC
Processor E8400 @ 3.80Ghz > Q9650 3.60Ghz
Motherboard Maximus Formula
Cooling D5, 7/16" ID Tubing, Maze4 with Fuzion CPU WB
Memory XMS 8500C5D @ 1066MHz
Video Card(s) HD 2900 XT 858/900 to 4870 to 5870 (Keep Vreg area clean)
Storage 2
Display(s) 24"
Case P180
Audio Device(s) X-fi Platinum
Power Supply Silencer 750
Software XP Pro SP3 to Windows 7
Benchmark Scores This varies from one driver to another.
All technical information aside, I think people are fully aware that the CPU should be able to do what they say can only happen on the GPU, without unnecessary performance penalties. For years they've tried to convince the masses of GPU physics and have IMO not been very successful. Although many do not know the technical aspects of it, they clearly see the proprietary nature and marketing spin of it. And for them that's more than enough to weigh in on their opinion to not support such endeavors.

Heck, even Nvidia doesn't want you to support it, when they themselves lock out consumers who use their hardware as a PPU when the primary GPU is not their own. Only later to release a beta driver without the lockout, then to reinstate the lockout in future drivers.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
All technical information aside, I think people are fully aware that the CPU should be able to do what they say can only happen on the GPU, without unnecessary performance penalties. For years they've tried to convince the masses of GPU physics and have IMO not been very successful. Although many do not know the technical aspects of it, they clearly see the proprietary nature and marketing spin of it. And for them that's more than enough to weigh in on their opinion to not support such endeavors.

Heck, even Nvidia doesn't want you to support it, when they themselves lock out consumers who use their hardware as a PPU when the primary GPU is not their own. Only later to release a beta driver without the lockout, then to reinstate the lockout in future drivers.

The CPU is NOT able to do it, or we would already have plenty of CPU implementations matching PhysX's capabilities. The fact that Havok does not have something that comes even close to PhysX in terms of particles/bodies used, etc., is proof enough. Intel has been trying hard to say their CPUs are better than the GPU, so if it was as easy as taking a physics API they own and making it run on SSE, well, they would have done it. It's been 4 years since PhysX and nothing has been released in that regard, which means it cannot be done.

That's how I see it and I know I am right.
 

EastCoasthandle

New Member
Joined
Apr 21, 2005
Messages
6,885 (0.99/day)
System Name MY PC
Processor E8400 @ 3.80Ghz > Q9650 3.60Ghz
Motherboard Maximus Formula
Cooling D5, 7/16" ID Tubing, Maze4 with Fuzion CPU WB
Memory XMS 8500C5D @ 1066MHz
Video Card(s) HD 2900 XT 858/900 to 4870 to 5870 (Keep Vreg area clean)
Storage 2
Display(s) 24"
Case P180
Audio Device(s) X-fi Platinum
Power Supply Silencer 750
Software XP Pro SP3 to Windows 7
Benchmark Scores This varies from one driver to another.
The CPU is NOT able to do it, or we would already have plenty of CPU implementations matching PhysX's capabilities. The fact that Havok does not have something that comes even close to PhysX in terms of particles/bodies used, etc., is proof enough. Intel has been trying hard to say their CPUs are better than the GPU, so if it was as easy as taking a physics API they own and making it run on SSE, well, they would have done it. It's been 4 years since PhysX and nothing has been released in that regard, which means it cannot be done.

That's how I see it and I know I am right.
No one cares about what you think the CPU can or cannot do, match or cannot match, nor which engine is used. All people care about is the outcome of how the game plays. And so far, no developer can create a game where it is completely dependent on GPU physics without the help of the CPU. ;) All PhysX has done in most of the games offered is remove some of the load from the CPU and apportion it to a specific video card. That is what people don't care for, and it's why it's IMO so unpopular.

Now if PhysX used the CPU more efficiently, allowing everyone with an adequate CPU to play the game as intended (like consoles, for example), I don't think so many would put up such a fuss about it all. But tidbits of news trickling down the interweb that PhysX isn't using the CPU as well as it should only confirm the opinions of others, who are already aware of the marketing angle of PhysX and its proprietary requirements in order to run it.
 
Joined
Mar 11, 2010
Messages
25 (0.00/day)
sigh .. already saw discussion of this at xs ..

deprecated? author needs to learn what deprecated means in developer speak. floating point is a feature of x86 cpus and not going away

sse is not just a magical checkbox that you can turn on and off at compile time, you have to actively port your code which costs time and money.

you could make the same argument why microsoft doesnt just compile windows for the iphone

Deprecated means that the use of those instructions is discouraged. x87 was deprecated by Intel in 2000 and by AMD in 2003. PhysX code is much younger.
SSE2 can actually be turned on or off at compile time. If you write your code in C++ and enable the compiler to use SSE for floating point operations, the compiler generates only SSE code and no x87 code. In GCC you can use the optimization flag "-mfpmath=sse" to do such a thing.
For hand-written assembly code it wouldn't be too hard to port the code from x87 to SSE2. SSE is actually a much simpler instruction set than x87, which requires a lot of register stack management. Also ... writing optimized assembly code and using x87 really defeats the purpose of hand-writing assembly ...
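
For example (my own sketch, not anything from the PhysX source), the same C++ function compiles to either x87 or scalar SSE code purely depending on compiler flags:

// saxpy.cpp: one source, two builds (assuming g++ targeting 32-bit x86):
//   g++ -O2 -m32 -mfpmath=387 -S saxpy.cpp        -> x87 code (fld/fmulp/faddp)
//   g++ -O2 -m32 -msse2 -mfpmath=sse -S saxpy.cpp -> scalar SSE (mulss/addss)
void saxpy(float* y, const float* x, float a, int n) {
    for (int i = 0; i < n; ++i)
        y[i] += a * x[i];   // generated FP math follows -mfpmath
}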

This, together with the multithreading limitations, makes it seem deliberate ...
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
No one cares about what you think the CPU can or cannot do, match or cannot match nor which engine is used. All people care about is the outcome of how the game plays. And so far, no developer can create a game were it is completely dependant on the GPU physics without the help of the CPU. ;) All physx has done in most of the games offered is remove some of the load from the CPU then proportionating it to a specific video card. That is what people don't care for and is why it's IMO so unpopular.

Now if physx was using the CPU more efficiently, allowing everyone with an adequate CPU to play the game as intended (like consoles for example), I don't think so many would put up such a fuss about it all. But with tidbites of news trickling down the interweb that phsyx isn't using the CPU as good as it should only confirms the opinions of others. Who are aware of the already marketing angle of physx and it's propitiatory needs in order to run it.

Oh wow, sorry if something I said offended you. Pathetic response. Try making a point instead of attacking me. ;)

PhysX on GPU does much more than off-loading; it adds effects that no other physics API has shown yet (in real time). And years pass and we still have no alternatives. Again, if it's so easy and the CPU is soooo able to do it, why doesn't Havok, an Intel-owned API and major competitor to PhysX (which btw has lost like 20% market share to PhysX in the last 2 years...), have something similar, when it's clear that Intel is so desperate to show that the CPU is superior? What's more, why did Havok rely on AMD GPUs to try and offer some competition to Nvidia? :laugh:
 

EastCoasthandle

New Member
Joined
Apr 21, 2005
Messages
6,885 (0.99/day)
System Name MY PC
Processor E8400 @ 3.80Ghz > Q9650 3.60Ghz
Motherboard Maximus Formula
Cooling D5, 7/16" ID Tubing, Maze4 with Fuzion CPU WB
Memory XMS 8500C5D @ 1066MHz
Video Card(s) HD 2900 XT 858/900 to 4870 to 5870 (Keep Vreg area clean)
Storage 2
Display(s) 24"
Case P180
Audio Device(s) X-fi Platinum
Power Supply Silencer 750
Software XP Pro SP3 to Windows 7
Benchmark Scores This varies from one driver to another.
Oh wow, sorry if something I said offended you. Pathetic response. Try making a point instead of attacking me. ;)
I think it's clear who's upset here and it's certainly not me. Also, disagreeing with your post isn't an attack. But clearly it didn't stop you from making your own. :slap:

PhysX on GPU does much more than off-loading; it adds effects that no other physics API has shown yet (in real time). And years pass and we still have no alternatives. Again, if it's so easy and the CPU is soooo able to do it, why doesn't Havok, an Intel-owned API and major competitor to PhysX (which btw has lost like 20% market share to PhysX in the last 2 years...), have something similar, when it's clear that Intel is so desperate to show that the CPU is superior? What's more, why did Havok rely on AMD GPUs to try and offer some competition to Nvidia? :laugh:
I think you are missing the obvious: no one cares about that. People are simply content with the way physics is currently implemented in games. Just look at how popular console gaming has become ;). The truth of the matter is simple: evidence has been presented showing that PhysX on the CPU isn't as optimized as it should be. People have been saying that for years, whether they had proof of it or not. Now that the subject has once again surfaced, more information has been presented as to what is actually going on.

Regardless of how much disdain you have for the CPU for physics, it is still an important aspect of today's games. Be that as it may, it's apparent that we are going to have to agree to disagree on the subject. Furthermore, since your replies have started with name calling, our conversation has pretty much wrapped up.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
Saying that no one cares, when it's so obvious you do care or you wouldn't be here, is, well...

Playing the console card isn't wise either, since we are here discussing GPUs and CPUs in a tech forum, not consoles.

The fact of the matter is that there are many people who care, or they wouldn't be so upset and so willing to jump onto Nvidia's neck every time something related to PhysX appears on the net, and it's not as if they waited for any kind of proof or confirmation. The article made by Kanter this time is completely lacking, as he only tested 2 demos, and those demos have always been aimed at the GPU. Take Mirror's Edge, take Batman, take a game at least and not a GPU PhysX demo... and test it in all its forms, with GPU PhysX acceleration turned on and off. He has not demonstrated that x87 is used by PhysX to run on the CPU, because he has not even run a game with a CPU code path, a game (or setting) that is supposed to run on the CPU. If he had at least used the game Cryostasis instead of the tech demo which was released to show off GPU PhysX, he would at least have half a point.
 
Joined
Aug 17, 2009
Messages
1,585 (0.30/day)
Location
Los Angeles/Orange County CA
System Name Vulcan
Processor i5 6600K
Motherboard GIGABYTE Z170X UD3
Cooling Thermaltake Frio Silent 14
Memory 16GB Corsair Vengeance LPX 16GB (2 x 8GB)
Video Card(s) ASUS Strix GTX 970
Storage Mushkin Enhanced Reactor 1TB SSD
Display(s) QNIX 27 Inch 1440p
Case Fractal Design Define S
Audio Device(s) On Board
Power Supply Cooler Master V750
Software Win 10 64-bit
If NVIDIA is making claims about the superiority of the GPUs over CPUs based on code that is unfairly ported to each platform, then that is deception. And that is wrong.
 
Joined
Jan 14, 2009
Messages
2,644 (0.47/day)
Location
...
System Name MRCOMP!
Processor 5800X3D
Motherboard MSI Gaming Plus
Cooling Corsair 280 AIO
Memory 64GB 3600mhz
Video Card(s) GTX3060
Storage 1TB SSD
Display(s) Samsung Neo
Case No Case... just sitting on cardboard :D
Power Supply Antec 650w
I for one am NOT happy with the current physics in games at all... (To say people are happy with current physics is wrong. They're not, but no one offers any games that have very nice physics. People haven't seen how much more in-depth a game is with working, realistic physics.)


If someone offered a much better physics option than PhysX I would buy it asap.


Games with well-done physics really stand out for me.



If NVIDIA is making claims about the superiority of the GPUs over CPUs based on code that is unfairly ported to each platform, then that is deception. And that is wrong.

They coded for the GPU, not the CPU. Nvidia doesn't make CPUs. Why would Nvidia code physics for CPUs?
Nvidia is not going to go out of their way to optimise the code to run on a CPU as best it can... and it's obvious that they can't run the code as well on CPUs as on GPUs... it's been said twice further up in this thread.

PhysX is ahead in what it can do compared to Havok. If what PhysX can do on a GPU can also be done on a CPU... why can't Havok do it?
 
Joined
Aug 17, 2009
Messages
1,585 (0.30/day)
Location
Los Angeles/Orange County CA
System Name Vulcan
Processor i5 6600K
Motherboard GIGABYTE Z170X UD3
Cooling Thermaltake Frio Silent 14
Memory 16GB Corsair Vengeance LPX 16GB (2 x 8GB)
Video Card(s) ASUS Strix GTX 970
Storage Mushkin Enhanced Reactor 1TB SSD
Display(s) QNIX 27 Inch 1440p
Case Fractal Design Define S
Audio Device(s) On Board
Power Supply Cooler Master V750
Software Win 10 64-bit
They coded for the GPU, not the CPU. Nvidia doesn't make CPUs. Why would Nvidia code physics for CPUs?

OK, then why was the article written? Is the author just making up the fact that there is CPU PhysX code?
 
Joined
Feb 24, 2009
Messages
3,516 (0.63/day)
System Name Money Hole
Processor Core i7 970
Motherboard Asus P6T6 WS Revolution
Cooling Noctua UH-D14
Memory 2133Mhz 12GB (3x4GB) Mushkin 998991
Video Card(s) Sapphire Tri-X OC R9 290X
Storage Samsung 1TB 850 Evo
Display(s) 3x Acer KG240A 144hz
Case CM HAF 932
Audio Device(s) ADI (onboard)
Power Supply Enermax Revolution 85+ 1050w
Mouse Logitech G602
Keyboard Logitech G710+
Software Windows 10 Professional x64
The fact that the PhysX libraries and drivers use x87 code is really not that big of a deal.

This however ...


... would seem to indicate that before Nvidia snatched up Ageia, they (Ageia) perhaps limited the CPU performance (by limiting thread execution) so their PPUs would outperform CPUs. Nvidia seems to have simply followed suit.

Nvidia purchased Ageia in 2008, so they would have had time to change the CPU code path, but that would not really make much sense as they are not promoting CPUs, and it would not have been a wise decision from a (GPU) marketing standpoint.

That being said, Nvidia owns the code and I stand behind their right to do whatever they want with it. They are, after all, in the business to make money.

That's the first thing I thought of when I saw the article and saw some interjections by others. nVidia bought the company lock, stock, and barrel. Ageia developed the code for their stuff, and if I'm not mistaken, rolling the code over to nVidia hardware was effortless.

So nVidia has been keeping the proprietary API code updated for their hardware. Why is this a shock? I'd be much more pissed over something like no AA for ATI cards in Batman than this. Did 3Dfx optimize Glide for other cards? Why would they?

Btw, on the physics-in-games thing, I'm sure that everyone who plays games would agree that in general it has made gameplay better. With that said, I'd take Crysis-type physics over nVidia's version any day of the week. In my opinion the Crysis team did a better job and still does compared to what nVidia has. nVidia's physics through PhysX just does not look natural in comparison. Saying that, I realize this is relative and others may disagree on this last part.
 

ctrain

New Member
Joined
Jan 12, 2010
Messages
393 (0.08/day)
Deprecated means that the use of those instructions is discouraged. x87 was deprecated by Intel in 2000 and by AMD in 2003. PhysX code is much younger.
SSE2 can actually be turned on or off at compile time. If you write your code in C++ and enable the compiler to use SSE for floating point operations, the compiler generates only SSE code and no x87 code. In GCC you can use the optimization flag "-mfpmath=sse" to do such a thing.
For hand-written assembly code it wouldn't be too hard to port the code from x87 to SSE2. SSE is actually a much simpler instruction set than x87, which requires a lot of register stack management. Also ... writing optimized assembly code and using x87 really defeats the purpose of hand-writing assembly ...

This, together with the multithreading limitations, makes it seem deliberate ...

No, that's not what the flag does. MSVC will not pump out vectorized SSE code. It WILL use handwritten SSE versions of runtime functions though. You're not going to see fast SSE stuff magically popping up by flipping a switch.

MS provides intrinsics to help keep you out of assembly land, but you're still writing it yourself anyway. The compiler will attempt to optimize SSE stuff if you use intrinsics, but it won't ever optimize inline assembly.
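
Something like this, say (a sketch of my own, not PhysX code): a 4-wide dot product written with intrinsics stays in C++, and the compiler can still register-allocate and schedule it, unlike an __asm block.

#include <xmmintrin.h>  // SSE intrinsics

// 4-element dot product via intrinsics instead of inline assembly.
float dot4(const float* a, const float* b) {
    __m128 m = _mm_mul_ps(_mm_loadu_ps(a), _mm_loadu_ps(b)); // 4 products
    __m128 s = _mm_add_ps(m, _mm_movehl_ps(m, m));           // m0+m2, m1+m3
    s = _mm_add_ss(s, _mm_shuffle_ps(s, s, 1));              // sum all four
    return _mm_cvtss_f32(s);
}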
 

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.18/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
They coded for the GPU, not the CPU. Nvidia doesn't make CPUs. Why would Nvidia code physics for CPUs?

Because if it doesn't run well on a CPU, game devs aren't going to risk integrating it in their games properly.

If a game won't work on older hardware, ATI hardware, Intel GPU hardware, etc... why make it necessary to the game?

This attitude of 'Nvidia GPU or nothing' is what forces PhysX to be nothing more than a gimmick for things that don't change gameplay at all.

If it runs on all systems acceptably and faster/more detailed on Nvidia hardware, then it will be more widely supported.
 
Joined
Jan 2, 2008
Messages
3,296 (0.55/day)
System Name Thakk
Processor i7 6700k @ 4.5Ghz
Motherboard Gigabyte G1 Z170N ITX
Cooling H55 AIO
Memory 32GB DDR4 3100 c16
Video Card(s) Zotac RTX3080 Trinity
Storage Corsair Force GT 120GB SSD / Intel 250GB SSD / Samsung Pro 512 SSD / 3TB Seagate SV32
Display(s) Acer Predator X34 100hz IPS Gsync / HTC Vive
Case QBX
Audio Device(s) Realtek ALC1150 > Creative Gigaworks T40 > AKG Q701
Power Supply Corsair SF600
Mouse Logitech G900
Keyboard Ducky Shine TKL MX Blue + Vortex PBT Doubleshots
Software Windows 10 64bit
Benchmark Scores http://www.3dmark.com/fs/12108888
Can they do the same test on Havok, Bullet, etc.? Because if those APIs use SSE, then I'd say PhysX is quite optimized for an x87-laced app..
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
Can they do the same test on Havok, Bullet, etc.? Because if those APIs use SSE, then I'd say PhysX is quite optimized for an x87-laced app..

That's my reasoning too. If PhysX on the CPU is so crippled, how is it that it runs just as well as any other game using other APIs like Havok or Bullet? Before any claim is made, all three APIs have to be compared, and the CPU PhysX path has to be used.


Btw, on the physics-in-games thing, I'm sure that everyone who plays games would agree that in general it has made gameplay better. With that said, I'd take Crysis-type physics over nVidia's version any day of the week. In my opinion the Crysis team did a better job and still does compared to what nVidia has. nVidia's physics through PhysX just does not look natural in comparison. Saying that, I realize this is relative and others may disagree on this last part.

I agree that Crysis makes better use of physics than other games, including those with PhysX, but it uses the CPU to run them, and everybody knows how the fps drops when there's just a bunch of explosions and such.

Also: http://www.youtube.com/watch?v=YG5qDeWHNmk - Crysis physics, 3000 barrels: read the description; the actual framerate with 3000 barrels falling was 0.2 FPS, or 1 frame every 5 seconds.

Or if you prefer Havok: http://www.youtube.com/watch?v=7f33GYOC2as

Compare those to: http://www.youtube.com/watch?v=s_2Klve_2VQ - the PhysX screensaver running off a 9500GT doing both graphics and physics at smooth fps.

To date the PhysX screensaver continues to be the best example of what could be done with PhysX. For example, if Bad Company 2 used GPU PhysX instead of what it uses, the buildings could be destroyed more realistically and not in the same crappy way every single time; just like the piles of bricks in the screensaver, the walls could be destroyed into realistic bricks. Everything would be the same as it is in BC2, except that every explosion would have different results and the bricks could be thrown off the building and cause damage like real explosions do, where it's not the explosion that does most of the damage but the shrapnel, which reaches a far greater radius than the blast.
 

ctrain

New Member
Joined
Jan 12, 2010
Messages
393 (0.08/day)
Can they do the same test on Havok, Bullet, etc.? Because if those APIs use SSE, then I'd say PhysX is quite optimized for an x87-laced app..

Pretty sure Havok is threaded and has been optimized by both Intel and AMD.

Intel owns Havok now so I'd expect that it's pretty fast under the hood.

For example, if Bad Company 2 used GPU PhysX instead of what it uses, the buildings could be destroyed more realistically and not in the same crappy way every single time; just like the piles of bricks in the screensaver, the walls could be destroyed into realistic bricks. Everything would be the same as it is in BC2, except that every explosion would have different results and the bricks could be thrown off the building and cause damage like real explosions do, where it's not the explosion that does most of the damage but the shrapnel, which reaches a far greater radius than the blast.

This would be hellish from a rendering standpoint, let alone physics. How do you plan to get that working online?
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.50/day)
Location
Reaching your left retina.
Pretty sure Havok is threaded and has been optimized by both Intel and AMD.

Intel owns Havok now so I'd expect that it's pretty fast under the hood.

Havok is as threaded as PhysX is (that is, completely multithreaded), but in every single game using Havok that I have tested, only one core is used. I have tested all the Source engine games, Oblivion, Fallout, Assassin's Creed, Bioshock, Company of Heroes, Dead Space and many, many others. See, it's not as if I only tested a bunch: ALL of them used only one core on my quad for physics*, and most used one core for everything.

*If at all. I mean, 2 cores were used on my quad, so I assume Havok was using one of them...

This would be hellish from a rendering standpoint, let alone physics. How do you plan to get that working online?

The rendering part is already being done, and are you telling me that a LAME 9500GT can do what it does in the screensaver, but a mainstream DX11 card would not be able to run BC2 plus what I said??

Also, what I'm asking for is that they stop trying to improve graphics and improve physics instead. BC2 requires much, much more resources than e.g. L4D, and it doesn't look so much better. I mean that we can say it looks 2x better, which is certainly a lot, but it requires a card that is 8 times faster to do so. Give me a game that requires 6x more GPU for the graphics and uses the other 2x for proper physics. That probably won't ever happen, because they would have to work. Uggg! Work! Vade retro!
 
Joined
Jan 2, 2008
Messages
3,296 (0.55/day)
System Name Thakk
Processor i7 6700k @ 4.5Ghz
Motherboard Gigabyte G1 Z170N ITX
Cooling H55 AIO
Memory 32GB DDR4 3100 c16
Video Card(s) Zotac RTX3080 Trinity
Storage Corsair Force GT 120GB SSD / Intel 250GB SSD / Samsung Pro 512 SSD / 3TB Seagate SV32
Display(s) Acer Predator X34 100hz IPS Gsync / HTC Vive
Case QBX
Audio Device(s) Realtek ALC1150 > Creative Gigaworks T40 > AKG Q701
Power Supply Corsair SF600
Mouse Logitech G900
Keyboard Ducky Shine TKL MX Blue + Vortex PBT Doubleshots
Software Windows 10 64bit
Benchmark Scores http://www.3dmark.com/fs/12108888
Pretty sure Havok is threaded and has been optimized by both Intel and AMD.

Intel owns Havok now so I'd expect that it's pretty fast under the hood.
Precisely.. then why don't we see ludicrous amounts of debris, smoke, and movable cloth in titles using other physics middleware? If SSE would theoretically bump up PhysX performance in software (2x, like they said it should), then I'd say they should go for it.. but 2x more physics calculating power won't show an equal amount of processing to what a dedicated physics add-on card would do.. as was evident (can't find that graph on that liquid physics thing), it would require more than twice the processing power... not just the 2x benefit of SSE.
 