
NVIDIA GeForce 4XX Series Discussion

First of all, I was not pitting it deliberately, and second, I extended my argument to include the HD4890; the result is the same, and you decided to forget about that little fact. How convenient: the HD4890 is not doing any better than if it had 512MB, but let's just forget about that and say the point is flawed.

Third, the GTX285 doesn't need to run at 850 MHz to be 40% faster than the HD4890, so perf/transistor is about the same.

And finally, you are just trolling. My original claim is that we don't know the clocks of Fermi. If it is 600 MHz like GT200, then it's twice the GT200, but if it runs at, say, 750 MHz (still much less than 850 MHz), it will smoke the HD5870 big time.

In fact, I chose 2560x1600 precisely so that my claim would NOT be flawed. ROPs make a card faster at higher resolutions, and GT200 has twice as many. Where do all those extra transistors come from? A lot come from those extra ROPs, so we have to find a setting where both chips are using all their power if we want to compare perf/transistors.
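
For what it's worth, here is a back-of-the-envelope version of that perf/transistor argument as a quick Python sketch. The transistor counts are the published die figures for GT200b and RV790; the 1.40x performance ratio is the claim above, not a measured number:

```python
# Rough perf-per-transistor check at 2560x1600.
# Transistor counts are the published figures for each die;
# the 1.40x performance ratio is the claim from the post above.
GT200_TRANSISTORS = 1.4e9    # GTX 285 (GT200b)
RV790_TRANSISTORS = 0.959e9  # HD 4890

perf_ratio = 1.40  # claimed GTX 285 lead over the HD 4890
transistor_ratio = GT200_TRANSISTORS / RV790_TRANSISTORS

# Perf/transistor parity would mean perf_ratio == transistor_ratio.
print(f"transistor ratio: {transistor_ratio:.2f}")                 # ~1.46
print(f"perf per transistor: {perf_ratio / transistor_ratio:.2f}") # ~0.96
```

Under those assumptions the two chips land within a few percent of each other on perf/transistor, which is exactly the point being argued.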

Anyway I'm done with the topic so don't bother replying.

Plz don't call people trolls based on that.

Since the GT300 is based on a totally new architecture, saying "if it's ??? MHz it will be faster than an HD5870" doesn't mean anything in reality, because nobody knows exactly how that architecture will perform. For all we know it will be a bust, or maybe it will be the next step (which it probably will be).

As for ROPs giving more performance at higher resolutions: ROPs, rather than pixel rate, are more tied to how fast the card can write to the frame buffer, and that changes depending on the memory bus they are connected to. Since GDDR3 and GDDR5 don't have similar write clocks and the two cards use different architectures, comparing ROPs is iffy.
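
To put rough numbers on the GDDR3 vs GDDR5 point: the two cards reach comparable bandwidth through very different bus widths and data rates, which is why raw ROP counts don't compare cleanly across them. A minimal sketch using the stock reference specs (bus width and effective per-pin data rate):

```python
# Peak memory bandwidth = (bus width in bytes) x (effective data rate).
def bandwidth_gb_s(bus_bits: int, effective_mt_s: float) -> float:
    return bus_bits / 8 * effective_mt_s / 1000  # GB/s

print(bandwidth_gb_s(512, 2484))  # GTX 285: 512-bit GDDR3 -> ~159 GB/s
print(bandwidth_gb_s(256, 3900))  # HD 4890: 256-bit GDDR5 -> ~125 GB/s
```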

Does the bulk of the transistors come from ROPs? No (not much of the die at all is part of the ROP domain); the bulk of the transistors are processing cores (i.e., shaders).

And from what I have seen, having had a 9800GTX+ 512MB and a GTS250 1GB, an extra 512MB can do wonders at high resolutions. By the same token, though, ATI cards have been better at higher resolutions with less memory (but more always helps) since the HD4K series, so saying "if the card were 1GB, it would perform better" is not quite right. An HD4870 with 1GB isn't going to perform much differently from an HD4870 with 512MB. Why? Look at most of the benchmarks on TPU and you will see: it's obvious that HD4K cards only need 512MB, and any more than that doesn't give much of a boost. The reason you see the HD4890s being a good bit faster is their clocks. Ben is right in my mind (of course, I only look at TPU info since this is TPU).
 

[Image: gt200b.jpg (GT200 die shot)]


A lot of GT200's die area goes to the ROPs and attached silicon like the frame buffer logic, as you can see above.

Saying that ROPs only count for writes to memory is absolutely wrong. How about Z testing? Blending? The fact is that higher fill rate helps at higher resolutions, and you only need to look at the benchmarks. Apart from Wizzard's reviews, I suggest you take a look at these:

http://www.anandtech.com/video/showdoc.aspx?i=3539&p=16
http://www.anandtech.com/video/showdoc.aspx?i=3539&p=17
http://www.anandtech.com/video/showdoc.aspx?i=3539&p=18
http://www.anandtech.com/video/showdoc.aspx?i=3539&p=19
http://www.anandtech.com/video/showdoc.aspx?i=3539&p=20
http://www.anandtech.com/video/showdoc.aspx?i=3539&p=21

http://techreport.com/articles.x/16681/6
http://techreport.com/articles.x/16681/7

As you can see in those benches and in Wizzard's review, the HD4890 ranks high at 1920x1200 and then falls behind at 2560x1600.

Of course, you could achieve the same fill rate with half the ROPs and twice the clock speed, but that is not the case here.
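
That fill-rate trade-off is easy to put numbers on. A minimal sketch using the stock ROP counts and core clocks of the two cards (theoretical peak pixel fill rate = ROPs x core clock; real throughput is lower):

```python
# Theoretical peak pixel fill rate = ROP count x core clock.
def fill_rate_gpix_s(rops: int, core_mhz: float) -> float:
    return rops * core_mhz / 1000  # Gpixels/s

print(fill_rate_gpix_s(32, 648))  # GTX 285: ~20.7 Gpix/s
print(fill_rate_gpix_s(16, 850))  # HD 4890: ~13.6 Gpix/s
# Half the ROPs would need roughly double the clock to match.
```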

Where memory and writing to memory count most is anti-aliasing. AA needs much more memory than the frame itself, and the same goes for the bandwidth used.
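
A rough illustration of why AA eats memory, using a deliberately simplified model (4 bytes of color plus 4 bytes of depth/stencil per sample, ignoring the compression that real GPUs apply heavily):

```python
# Simplified frame-buffer footprint: one color + depth/stencil pair per sample.
# Real hardware compresses these buffers, so actual usage is lower.
def framebuffer_mb(width: int, height: int, samples: int) -> float:
    bytes_per_sample = 8  # 4 B color + 4 B depth/stencil (assumed)
    return width * height * samples * bytes_per_sample / 2**20

print(framebuffer_mb(2560, 1600, 1))  # no AA:   ~31 MB
print(framebuffer_mb(2560, 1600, 4))  # 4x MSAA: ~125 MB
```

And every one of those extra samples has to be written and blended, which is where the bandwidth goes.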
 
Charlie pulls nvidia apart over fake cards & first silicon

He says they're lying about when they had first silicon, because the sample shown just a few days ago was A1 silicon, and this was claimed to have been ready months ago, apparently.

He shows several pictures proving that the cards are fake, when Jen-Hsun had claimed they were real. He explains in great detail, as usual. The dodgy 90-degree power connectors especially are a dead giveaway. Can't argue with the evidence there. :D

He ends with this great line: "In the end, what you have here is a faked Fermi board. Jen-Hsun held up a scam card. If you watch the video here, he says, 'This puppy here, is Fermi'. Bullshit."

Awesome!

Read about it at SemiAccurate (for once it's not Fudzilla, hehe).

I will now brace myself for the inevitable anti-Charlie flames... :laugh:
 

There's a thread on this, with my post in it: http://forums.techpowerup.com/showpost.php?p=1578613&postcount=83

Nvidia can't be dumb enough to think we're that dumb and still let us take close-up pictures of it. I think a company of that caliber must have done it for a reason (explained in my linked post).
 

When they start selling fake cards I'll be concerned. This nvidia announcement seems to be a marketing spoiler for ATI's launch (if I were nvidia, I'd do it too), and Charlie's comments seem to be FUD (if I were ATI, I'd encourage Charlie to write more). Why should 'knowledgeable' people even care? For me, I'm just really keen to see the new products benched against each other.
 

It was a show they put on for shareholders. Showing nothing would have been worse than showing whatever they were showing.
 

I don't see how Charlie's article is FUD at all. nvidia tried to dupe people by saying it's the real thing; he simply outed their lie, and he cites all his sources too. What's wrong with that?

Is it his writing style that people really object to, perhaps? With the lies and spin that nvidia put out, I think he's right to be pissed at them all the time.

And before I'm called an ATI fanboy, please note that my main card is an nvidia one and my collection of graphics cards includes high-end models from both camps.
 
Strategic or not, ATi is not going to fall into this trap.
They learned quite a lesson from the G80 vs R600 period, and I doubt they are dumb enough to relax against nVidia.
More importantly, what's the point of grabbing the crown if you lose the mainstream market?
The HD 5700 series at sub-$200 is right around the corner; that is where the gold is.
ATi, which is now AMD, changed their strategy to target the mainstream with bang for the buck.
AMD hasn't really been known for going muscle to muscle with its competition for a while now.
 
And talking of the low chip-yield problems, aren't people a little disappointed that nvidia designed their new card with only six clusters instead of eight, leading to a 384-bit bus and not the full 512-bit one? Wanna take a bet that it's because the manufacturing defect rate was too high (and perhaps the clock speed too low as well)? In rough terms we're losing 2/8, or 25%, of the chip's potential performance because of this decision, which is quite significant.
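
Taking the post's numbers at face value (each of the eight partitions contributing a 64-bit memory controller, which is how these chips have historically been organized), the arithmetic looks like this:

```python
# Bus width from 64-bit memory controllers, one per enabled partition.
def bus_width_bits(partitions: int) -> int:
    return partitions * 64

print(bus_width_bits(6))  # 384-bit, as announced for Fermi
print(bus_width_bits(8))  # 512-bit, the 'full' configuration
# Dropping 2 of 8 partitions cuts the bus, and peak bandwidth with it, by 25%.
```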
 
I never said ROPs only write to memory. I am saying that ROPs writing to memory is one of the most important tasks they handle, because at the end of the pipeline that's the last thing that's done; no matter how fast a card is, if it can't write to memory fast enough, then everything else means nothing.

Erocker is right. Nvidia is doing what's expected of a business: show some stuff to their shareholders and the other fat cats, and get some more money and time for development. ATI would do the exact same thing for AMD.
 

Indeed, that's the straight-line speed; if I remember correctly, the technical term is fill rate, or bandwidth.

In a perfect world, the memory bandwidth would exactly match the processing power of the GPU. In practice, it's better to have slightly more bandwidth than the GPU can fill, to prevent bottlenecks.

A good example of this bottleneck appears to be the new 5870. From what I can see, not quite enough of this precious bandwidth is available to it and hence its performance suffers at high resolutions.
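
For reference, the published numbers behind that suspicion: the HD 5870 roughly doubled the HD 4870's shader count (800 to 1600 SPs) while bandwidth grew far less, so the ratio of compute to bandwidth got much worse:

```python
# Peak bandwidth = (bus width in bytes) x (effective per-pin data rate in Gbps).
def bandwidth_gb_s(bus_bits: int, effective_gbps: float) -> float:
    return bus_bits / 8 * effective_gbps

print(bandwidth_gb_s(256, 3.6))  # HD 4870: ~115 GB/s
print(bandwidth_gb_s(256, 4.8))  # HD 5870: ~154 GB/s
# Shaders doubled, bandwidth grew only ~33%: the bottleneck theory in a nutshell.
```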
 

So, in your opinion, blending the different fragments into a single picture, Z testing, and stencil are not important? ROPs do a lot of important things, and they are important no matter how you look at it. They are the heart of rasterization.

Also, I have always based my opinion on the HD5870 memory bottleneck on the fact that you don't R&D a 2.15-billion-transistor, 1600 SP, 32 ROP beast just to let the comparably cheap memory subsystem make it run like a 1200 SP, 24 ROP card (that's just poetic license). You would have made a 1200 SP, 24 ROP card from the beginning, wouldn't you? It's not as if they build the different parts and then throw them together to see what happens.

I have said more regarding that here: http://forums.techpowerup.com/showpost.php?p=1579461&postcount=1427
 

What's so funny?

What is that thing? Is it the wreck of some '50s computer, perhaps?

No, that is actually Fermi. The problem is that they haven't put all of those wires onto a board. They have the GPU core, but the rest of the wiring from the unit to the memory etc. is like you see in that photo.
 

HAHAHAHAHAHAHAHA :laugh: :laugh:

EDIT: Comment retracted: see next two posts.
 
Did you even read the source post? Laughing for Post Count +1, right? :shadedshu

I think we have ourselves a misunderstanding here. I thought your answer was in jest, as that thing looks nothing like a graphics card.

I certainly wasn't trying to cause you offense and I'm sorry if I did. :toast:

I also managed to miss the xtremesystems link under the picture (I'm good at missing the obvious sometimes ;) ). Now that I've looked at that post, I'm very dubious that it's a prototype Fermi. It just doesn't look anything like a graphics card and is far too large; just look at the switches at the top of the picture. Think how big they must be for human hands to operate, and how that compares to the size of the rest of the rig. Also, where the hell does it plug into a PCI-E slot, for example?

It still looks like some computer component or prototype from the '50s-'70s, which used to have masses of wiring like that and those sorts of old-fashioned switches.

I've seen modern prototype computer boards before, and they don't have a mass of wires like that. Ever. A prototype looks similar to the production board, but with various differences in how the main chip looks, in component layout, and perhaps just a few patch wires. There will be other differences too, but it will be recognizable as the target product.

I'd sure like to know what that thing is, though.
 
GT300 to launch in late November

Courtesy of Fudzilla again.
 
Wait... You...

You actually thought that joke pic saaya posted on XS is the real thing?
:eek:
You should know Sascha K, a.k.a. saaya, can be a real jester sometimes.
:p

Now the joke's on you, Blinge.
:D
 
[Image: 34925239510d4107fc15.jpg]


At first, I thought that was the Loom of Fate from the movie "Wanted" (the one with Angelina Jolie and Morgan Freeman in it)!
 

The joke is on Blinge alright...

Did you read the rest of the thread? Fermi's working form (during the keynote) was a bundle of sockets, boards, and wires. Read between the lines.

,|||. Thumb, index, middle, ring, pinky, ya hack.
 