Fermi

skgiven (Volunteer moderator, Volunteer tester)
Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
Message 16119 - Posted: 1 Apr 2010, 21:46:51 UTC - in response to Message 16117.  
Last modified: 1 Apr 2010, 21:56:27 UTC

Also MSI and EVGA...

Can't see any special card designs, so I would suggest you buy based on price and warranty.

GTX470 (Manufacturer, Warranty, Price):

Gainward   24 months   £299
VGA        24 months   £316
Asus       36 months   £319
MSI        24 months   £328
Gigabyte   24 months   £333
EVGA       9999 months?!?   £334

I am not sure about the EVGA 9999 months (lifetime warranty, perhaps); will they exist in 10 or 20 years, do you need a warranty that long, and what does their small print say?

The Gainward price looks good, but their design reputation is poor (personally, I have had bad experiences with them too).
The Asus warranty looks reasonably attractive for the price.
Obviously the EVGA warranty, if valid, is better; but how many of us would want such a card for more than 3 years, and is it worth it?
I see no reason to consider the others, other than the reputation of Gigabyte (but not for £1 less than EVGA)!

The GTX 480 cards are similarly warrantied.

CTAPbIi
Joined: 29 Aug 09 · Posts: 175 · Credit: 259,509,919 · RAC: 0
Message 16120 - Posted: 1 Apr 2010, 22:22:11 UTC

Gainward usually sucks (I've personally run into that), but since all of them use the reference design, it makes no difference at all.

skgiven (Volunteer moderator, Volunteer tester)
Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
Message 16122 - Posted: 1 Apr 2010, 22:29:19 UTC - in response to Message 16120.  

Perhaps Gainward went out of their way to find the worst capacitors around, again. I Won't Go Near Another Gain-Ward!

skgiven (Volunteer moderator, Volunteer tester)
Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
Message 16127 - Posted: 2 Apr 2010, 8:02:26 UTC - in response to Message 16122.  

The first run of cards are all actually built by NVIDIA (via sub-contract), so all the cards are the same.
So it's down to warranty really, and EVGA are likely to be offering a 10-year or lifetime warranty. As the temperatures are high (90°C+), it has been suggested that it would be better to get one with a lifetime warranty!

Beyond
Joined: 23 Nov 08 · Posts: 1112 · Credit: 6,162,416,256 · RAC: 0
Message 16130 - Posted: 2 Apr 2010, 12:27:15 UTC - in response to Message 16127.  

EVGA are likely to be offering a 10-year or lifetime warranty. As the temperatures are high (90°C+), it has been suggested that it would be better to get one with a lifetime warranty!

XFX will probably have their usual double lifetime warranty. The lifetime warranties are a bit silly in a way, since the usefulness of the card will most likely be nil within 5 years or so. What's nice about the XFX warranty, though, is that it transfers to the second owner.

CTAPbIi
Joined: 29 Aug 09 · Posts: 175 · Credit: 259,509,919 · RAC: 0
Message 16203 - Posted: 8 Apr 2010, 16:33:47 UTC
Last modified: 8 Apr 2010, 16:36:42 UTC

IMPORTANT! Nvidia's 400 series crippled by Nvidia

In an effort to increase sales of their Tesla C2050 and Tesla C2070 cards, Nvidia has intentionally crippled the FP64 compute ability by a whopping 75%. If left alone, the GTX 480 would have performed FP64 at 672.708 GFLOPS, almost 119 GFLOPS more than ATI's 5870. Instead, the GTX 480 comes in at 168.177 GFLOPS. Here is a link to Nvidia's own forum where we have been discussing this. You will see there is also a CUDA-Z performance screenshot confirming it, on top of the confirmation by Nvidia's own staff. Nvidia is not publicly making anyone aware that this is the case. Anandtech also snuck the update onto page 6 of a 20-page review of the GTX 480/470.


proof:
http://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=1662#38378
http://forums.nvidia.com/index.php?showtopic=164417&st=0
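
As a rough sanity check on the 75% figure, the GFLOPS numbers quoted above can be reproduced from the GTX 480's core count and shader clock. The clock value and the 1/2-rate vs 1/8-rate FP64 ratios below are assumptions matched to the figures in this post, not taken from an official source:

    # Rough FP64 sanity check for the GTX 480, using the figures quoted above.
    CORES = 480                  # CUDA cores on a GTX 480
    SHADER_CLOCK_GHZ = 1.4015    # assumed shader clock in GHz
    FLOPS_PER_CLOCK = 2          # one fused multiply-add counts as 2 FLOPs

    sp_peak = CORES * SHADER_CLOCK_GHZ * FLOPS_PER_CLOCK   # single precision
    fp64_native = sp_peak / 2    # Fermi's native FP64 rate: half the FP32 rate
    fp64_capped = sp_peak / 8    # GeForce cap: a quarter of the native FP64 rate

    print(f"FP32 peak:       {sp_peak:7.1f} GFLOPS")
    print(f"FP64 uncapped:   {fp64_native:7.1f} GFLOPS")   # ~672, as quoted above
    print(f"FP64 as shipped: {fp64_capped:7.1f} GFLOPS")   # ~168, i.e. a 75% cut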

The worst "dreams" came true...

Sure, it's not my business, but IMO the project should speed up OpenCL development in order to be able to use ATI cards.

liveonc
Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
Message 16204 - Posted: 8 Apr 2010, 17:52:10 UTC - in response to Message 16203.  
Last modified: 8 Apr 2010, 18:16:49 UTC

This is the crummy side of capitalism. It's no longer survival of the fittest, it's life support. I'm not a Communist, but when companies purposefully cripple their own baby with the intent of making their favorite child look better, it leaves a bitter taste in my mouth & sends chills down my spine.

So the mainstream GPU isn't the favorite child, but instead of just giving it all the love & support to make it grow strong & lead the way, they have to shoot it in the leg so that the favorite child can win the race.

Even if they had to sell 10 GTX480s for every Tesla, they'd get their money back, generate lots of sales & inspire confidence in Nvidia. But no, they had to sell their Teslas, even if it runs the risk of taking the whole Fermi family down (& possibly also Nvidia).

I've seen a country crumble under the weight of 3 generations of nepotism & collusion, which bred corruption. It was family first, those who knew the first family, & what you had to do to get to know the first family. Inbreeding led to the fall of Rome, stupid wars in Europe, & "The Mad King is dead, long live the Mad King!" They'd spend 1 billion on a project that only cost 500 million & didn't even care if they ever got their money back. They didn't need the highway, you see, they WANTED the highway! They say jump, you say how high, & you NEVER ask why you had to do all that jumping...

CTAPbIi
Joined: 29 Aug 09 · Posts: 175 · Credit: 259,509,919 · RAC: 0
Message 16205 - Posted: 8 Apr 2010, 18:08:11 UTC

liveonc,
+1, exactly what I was thinking about...

right now I see 2 options for myself:
- try to get a GTX295 built on one PCB for cheap
- wait for GPUGRID to be able to use ATI cards for crunching, and meanwhile continue to use my GTX275



GDF (Volunteer moderator, Project administrator, Project developer, Project tester, Volunteer developer, Volunteer tester, Project scientist)
Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0
Message 16206 - Posted: 8 Apr 2010, 19:15:47 UTC - in response to Message 16205.  
Last modified: 8 Apr 2010, 19:16:48 UTC

Double precision, as well as ECC, is not needed for gaming. That's behind their choice. They don't expect this to have any impact on the real market of a GTX480.

As far as computing is concerned, double precision is twice as slow as single precision anyway, but the code must be able to use it.

GPUGRID uses single precision because it is coded for it and it is faster. [SETI does the same.]

Long term they might relax the double-precision restriction, maybe with just a driver update. I don't think it is in the hardware.
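
To illustrate the point that using double precision is a choice made in the code, here is a small, purely illustrative NumPy sketch of the same toy calculation run at both precisions; the kernel and its names are hypothetical, not GPUGRID code:

    import numpy as np

    def pairwise_energy(pos, dtype):
        """Toy O(N^2) pairwise 1/r 'energy', computed in the requested precision."""
        p = pos.astype(dtype)
        diff = p[:, None, :] - p[None, :, :]      # all pairwise displacements
        r = np.sqrt((diff * diff).sum(axis=-1))
        np.fill_diagonal(r, np.inf)               # ignore self-interaction
        return (1.0 / r).sum() / 2                # each pair is counted twice

    pos = np.random.default_rng(0).random((512, 3))
    e_sp = pairwise_energy(pos, np.float32)       # single precision, as sp code would run
    e_dp = pairwise_energy(pos, np.float64)       # same algorithm, double precision
    print(e_sp, e_dp, abs(float(e_sp) - float(e_dp)))

The algorithm is identical in both calls; only the declared precision (and, on a GPU, the throughput) changes.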

gdf

ExtraTerrestrial Apes (Volunteer moderator, Volunteer tester)
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Message 16208 - Posted: 8 Apr 2010, 20:31:22 UTC
Last modified: 8 Apr 2010, 20:32:53 UTC

For BOINC this is a really bad move. There are projects which could really benefit from widespread fast double-precision capability. But for the existing projects it's not that bad:

GPU-Grid
We're using single precision, so crippling dp doesn't change Fermi performance at all.

Milkyway
It runs almost at peak efficiency on ATIs, so previously any nVidia GPU or CPU was much less energy efficient. These shouldn't run MW, since their crunching power can be put to much better uses (projects the ATIs / nVidias can't run). Fermi could have been the first nVidia GPU to actually be of use at MW, but not in this crippled form.

Collatz
It runs very well on ATIs without dp (all but the top chips) and on compute capability 1.0 and 1.1 cards, whereas 1.2 and 1.3 cards are not any faster per theoretical FLOPS. This project should be left to the cards which can't be used anywhere else efficiently [and as a backup for the MW server..] .. i.e. all the smaller ATIs and nVidias which are too slow for GPU-Grid (or others). CC uses integers, so crippled dp on Fermi doesn't change the fact that it shouldn't be used here anyway.

Einstein
Their app offloads only a mild amount of work to the GPU. Credit-wise it's absolutely not worth it. Science-wise I think we'd need proper load balancing to make it feasible on GPUs. That's a software problem, so I don't know how Fermi hardware affects this.

SETI
EDIT: just read GDF's post that SETI is using "only" sp as well.

BTW: ATI does not support dp on anything but their highest-end chips. They're free to do this, but consider the following: Juniper (HD57x0) is almost exactly half a Cypress (HD58x0). Everything is cut in half, except some fixed-function hardware like the display-driving circuitry, which is present in both chips. Therefore one would expect Juniper's transistor count to be about half of Cypress's, and that's indeed the case: 2154 million versus 1040 million. Two Junipers would need 2080 million transistors. That means it would have cost them about (2154-2080)/2 = 37 million transistors to put dp support into Juniper, which would have increased its transistor count, and thereby area and cost, by approximately 3.5%. And leaving the feature in would not have cost them any development effort: just reuse the same shaders for the entire product family. Removing the feature did require a redesign and thus caused cost.
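
MrS's estimate, restated as a small calculation (the transistor counts are the ones quoted in the post above; the conclusion is only as reliable as those inputs):

    # Transistor counts in millions, as quoted above for AMD's chips.
    cypress = 2154    # HD58x0
    juniper = 1040    # HD57x0, roughly "half a Cypress"

    # If Juniper were exactly half a Cypress it would need cypress / 2 transistors;
    # the shortfall, split per half-chip, roughly bounds what dp support costs.
    dp_cost = (cypress - 2 * juniper) / 2
    print(f"~{dp_cost:.0f} M transistors, ~{100 * dp_cost / juniper:.1f}% of Juniper")
    # -> ~37 M transistors, about 3.5% of Juniper's 1040 M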

It's a legitimate business decision: reducing cost on all of their mainstream chips does count, even if it's as small as ~4%. And enabling dp on Redwood and Cedar would probably have provided very little benefit anyway. I would have preferred it in Juniper, though.

MrS
Scanning for our furry friends since Jan 2002

ExtraTerrestrial Apes (Volunteer moderator, Volunteer tester)
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Message 16209 - Posted: 8 Apr 2010, 20:41:44 UTC - in response to Message 16206.  

Double precision, as well as ECC, is not needed for gaming. That's behind their choice. They don't expect this to have any impact on the real market of a GTX480.


That's what they say... but, seriously, it takes effort on their side to disable it. Disabling it costs them something, whereas just leaving it enabled would have required no action at all.

Their motivation is clear: pushing Tesla sales.

And I doubt it's a simple driver issue. Anyone buying non-ECC Geforces over Teslas for number crunching is running non-mission-critical apps and would probably just use a driver hack (which would undoubtedly follow soon after the cards are released).
I read nVidia went to some lengths to disable the installation of Quadro drivers (better CAD performance) on recent Geforce models the hard way. Not sure if a BIOS flash could help, though.

MrS
Scanning for our furry friends since Jan 2002

CTAPbIi
Joined: 29 Aug 09 · Posts: 175 · Credit: 259,509,919 · RAC: 0
Message 16210 - Posted: 8 Apr 2010, 21:10:26 UTC

ouch... if GPUGRID uses sp, that's a different story :-) but it's a real pity that if one day GPUGRID would like to use dp, I'll have to spend $$$

ExtraTerrestrial Apes (Volunteer moderator, Volunteer tester)
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Message 16211 - Posted: 8 Apr 2010, 21:30:43 UTC - in response to Message 16210.  

In the absolute best case you're "only" 50% slower in dp than in sp. The only reason to use dp is if the precision provided by sp is not enough - a circumstance GPU-Grid can luckily avoid. Otherwise they wouldn't have been CUDA pioneers ;)

MrS
Scanning for our furry friends since Jan 2002

GDF (Volunteer moderator, Project administrator, Project developer, Project tester, Volunteer developer, Volunteer tester, Project scientist)
Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0
Message 16212 - Posted: 8 Apr 2010, 21:34:09 UTC - in response to Message 16211.  

We don't really avoid it. We do use double-precision emulation in software in some specific parts of the code where it is needed; it is just implemented in terms of single precision.
Of course, if you have to use it all the time, then it makes no sense.
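
GDF doesn't spell out which emulation scheme is used, but a common way to recover extra precision on single-precision hardware is "double-single" (float-float) arithmetic built on error-free transformations such as Knuth's two-sum. The sketch below is a generic illustration of that idea in NumPy float32, not GPUGRID's actual code:

    import numpy as np

    def two_sum(a, b):
        """Knuth's error-free addition: returns (s, e) so that s + e == a + b
        exactly, where s is the rounded float32 sum and e the rounding error."""
        a, b = np.float32(a), np.float32(b)
        s = np.float32(a + b)
        v = np.float32(s - a)
        e = np.float32((a - (s - v)) + (b - v))
        return s, e

    # Add many small values to a large one: plain float32 silently loses them,
    # while a (hi, lo) pair maintained with two_sum keeps the lost bits.
    big, small, n = np.float32(1e8), np.float32(0.0625), 10000
    naive, hi, lo = big, big, np.float32(0.0)
    for _ in range(n):
        naive = np.float32(naive + small)
        hi, err = two_sum(hi, small)
        lo = np.float32(lo + err)
    print(float(naive), float(hi) + float(lo))   # exact answer is 100000625.0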

gdf

liveonc
Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
Message 16214 - Posted: 8 Apr 2010, 22:14:16 UTC - in response to Message 16209.  

Double precision, as well as ECC, is not needed for gaming. That's behind their choice. They don't expect this to have any impact on the real market of a GTX480.


That's what they say... but, seriously, it takes effort on their side to disable it. Disabling it costs them something, whereas just leaving it enabled would have required no action at all.

Their motivation is clear: pushing Tesla sales.

And I doubt it's a simple driver issue. Anyone buying non-ECC Geforces over Teslas for number crunching is running non-mission-critical apps and would probably just use a driver hack (which would undoubtedly follow soon after the cards are released).
I read nVidia went to some lengths to disable the installation of Quadro drivers (better CAD performance) on recent Geforce models the hard way. Not sure if a BIOS flash could help, though.

MrS


It's not the first time & it's getting really annoying. Everything from rebranding to disabling/crippling with the intent of pushing Teslas. As if it wasn't hard enough for them to justify people paying $500 for a GTX480 that eats lots of electricity, produces tonnes of heat & only slightly outperforms a much cheaper ATI. They're shooting themselves in the foot every time they try to pull these tricks. I'm really considering ATI, but I'd hate to see Nvidia go away. One person is no loss, but I'm not the only one thinking these thoughts.

skgiven (Volunteer moderator, Volunteer tester)
Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
Message 16215 - Posted: 8 Apr 2010, 22:39:00 UTC - in response to Message 16214.  

Again, double precision is not required to run GPUGrid tasks; accuracy & statistical reliability of the models is achieved in different ways here.

That said, I broadly agree with what you are saying.
If NVidia had had the sense to create a single-precision core design, it would have been less expensive to design, manufacture and sell, and might have been in the shops last Christmas. This would also have separated gaming cards from professional cards. This still needs to be done.

All-your-eggs-in-one-basket strategies always have their downfalls.

Beyond
Joined: 23 Nov 08 · Posts: 1112 · Credit: 6,162,416,256 · RAC: 0
Message 16217 - Posted: 8 Apr 2010, 23:00:30 UTC

Yet it seems that AMD/ATI has no problem at all implementing fast double precision in their consumer grade cards.

fractal
Joined: 16 Aug 08 · Posts: 87 · Credit: 1,248,879,715 · RAC: 0
Message 16218 - Posted: 9 Apr 2010, 4:01:02 UTC

From http://forums.nvidia.com/index.php?showtopic=165055
- Tesla products are built for reliable long running computing applications and undergo intense stress testing and burn-in. In fact, we create a margin in memory and core clocks (by using lower clocks) to increase reliability and long life.

OK, did you catch that... increase reliability by using lower clocks!

Now, if you were making a consumer card, where all the market information (including statements from your competition) says that the consumer market does not need double-precision floating point, what would you choose to increase reliability: reduce clocks all around, or just reduce clocks on double-precision floating point, a feature not needed by the market? I know which I would select. And we know what ATI selected when they removed double-precision floating point from all but the top-end models of their mainstream line of GPUs.

Maybe they will work out the technical hitches in the future so they can crank up the clock speed on double-precision floating point. But for now, all these conspiracy theories are making me remember the grassy knoll...


liveonc
Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
Message 16219 - Posted: 9 Apr 2010, 5:02:13 UTC - in response to Message 16218.  
Last modified: 9 Apr 2010, 5:39:37 UTC

It's not about WHY ATI removed double precision, but HOW Nvidia crippled the GTX470/480. If the logic behind the Teslas being so great - "memory and core clocks (by using lower clocks) to increase reliability and long life" - were true, why didn't they choose to cripple the Teslas instead?

GPUs have warranties, some even longer than people care to have them. Personally, I'd use a GPU for 2-3 years, and I even OC them; sure, they are prone to errors, but GPUGRID.net is just a hobby for me.

GDF (Volunteer moderator, Project administrator, Project developer, Project tester, Volunteer developer, Volunteer tester, Project scientist)
Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0
Message 16221 - Posted: 9 Apr 2010, 7:45:53 UTC - in response to Message 16219.  

A question about ATI removing double precision in the lower cards.

As far as I know, Fermi and the 5000-series ATI cards use the same compute cores to compute single and double precision, just processing the data differently. There is no specific hardware for double precision (maybe a little in the control units). I don't think there is much to gain by removing double precision in terms of saving transistors. So maybe ATI is doing marketing as well when they remove double precision.

gdf