Message boards : Graphics cards (GPUs) : Fermi
skgiven (Joined: 23 Apr 09, Posts: 3968, Credit: 1,995,359,260, RAC: 0)
Also MSI and EVGA... I can't see any special card designs, so I would suggest you buy based on price and warranty.
GTX470 (manufacturer, warranty, price):
- Gainward: 24 months, £299
- VGA: 24 months, £316
- Asus: 36 months, £319
- MSI: 24 months, £328
- Gigabyte: 24 months, £333
- EVGA: 9999 months?!?, £334
I am not sure about the EVGA 9999 months (lifetime warranty, perhaps); will they exist in 10 or 20 years, do you need a warranty that long, and what does their small print say? The Gainward price looks good but their design reputation is poor (I have had bad experiences with them too). The Asus warranty looks reasonably attractive for the price. Obviously the EVGA warranty, if valid, is better; but how many of us would want such a card for more than 3 years, and is it worth it? I see no reason to consider the others, apart from the reputation of Gigabyte (but not for £1 less than EVGA)! The GTX 480 cards carry similar warranties.
Joined: 29 Aug 09, Posts: 175, Credit: 259,509,919, RAC: 0
Gainward is usually terrible (I've been burned by them personally), but since all of these cards use the reference design, it makes no difference at all.
skgiven (Joined: 23 Apr 09, Posts: 3968, Credit: 1,995,359,260, RAC: 0)
Perhaps Gainward went out of their way to find the worst capacitors around, again. I won't go near another Gainward!
skgiven (Joined: 23 Apr 09, Posts: 3968, Credit: 1,995,359,260, RAC: 0)
All the first run of cards are actually built by NVIDIA (via a sub-contractor), so all the cards are the same. So it's really down to warranty, and EVGA are likely to be offering a 10-year or lifetime warranty. As the temperatures are high (90°C+), it has been suggested that it would be better to get one with a lifetime warranty!
Beyond (Joined: 23 Nov 08, Posts: 1112, Credit: 6,162,416,256, RAC: 0)
> EVGA are likely to be offering a 10-year or lifetime warranty. As the temperatures are high (90°C+), it has been suggested that it would be better to get one with a lifetime warranty!

XFX will probably have their usual double lifetime warranty. The lifetime warranties are a bit silly in a way, since the usefulness of the card will most likely be nil within 5 years or so. What's nice about the XFX warranty, though, is that it transfers to the 2nd owner.
Joined: 29 Aug 09, Posts: 175, Credit: 259,509,919, RAC: 0
IMPORTANT! Nvidia's 400 series has been crippled by Nvidia. Proof:
http://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=1662#38378
http://forums.nvidia.com/index.php?showtopic=164417&st=0
The worst "dreams" have come true... Sure, it's not my business, but IMO the project should speed up OpenCL development in order to be able to use ATI cards.
liveonc (Joined: 1 Jan 10, Posts: 292, Credit: 41,567,650, RAC: 0)
This is the crummy side of Capitalism. It's no longer survival of the fittest, it's life support. I'm not a Communist, but when companies purposefully cripple their own baby with the intent of making their favorite child look better, it leaves a bitter taste in my mouth & sends chills down my spine. So the mainstream GPU isn't the favorite child, but instead of just giving it all the love & support to make it grow strong & lead the way, they have to shoot it in the leg so that the favorite child can win the race. Even if they had to sell 10 GTX480s for every Tesla, they'd get their money back, generate lots of sales & inspire confidence in Nvidia. But no, they had to sell their Teslas, even if it runs the risk of taking the whole Fermi family down (& possibly also Nvidia). I've seen a country crumble under the weight of 3 generations of nepotism & collusion, which bred corruption. It was family first, then those who knew the first family, & then whatever you had to do to get to know the first family. Inbreeding led to the fall of Rome, stupid wars in Europe, & "The Mad King is dead, long live the Mad King!" They'd spend 1 billion on a project that only cost 500 million & didn't even care if they ever got their money back. They didn't need the highway, you see, they WANTED the highway! They say jump, you say how high, & you NEVER ask why you had to do all that jumping...
Joined: 29 Aug 09, Posts: 175, Credit: 259,509,919, RAC: 0
liveonc, +1, exactly what I was thinking... Right now I see two options for myself:
- try to get a single-PCB GTX295 for cheap
- wait for GPUGRID to be able to use ATI cards for crunching, and meanwhile keep using my GTX275
GDF (Joined: 14 Mar 07, Posts: 1958, Credit: 629,356, RAC: 0)
Double precision, as well as ECC, is not needed for gaming. That's what is behind their choice. They don't expect this to have any impact on the real market for the GTX480. As far as computing is concerned, double precision is half the speed of single precision anyway, and the code must be written to use it. GPUGRID uses single precision because it is coded for it and it is faster. [SETI is the same.] Long term they might relax the double precision restriction, maybe with just a driver update; I don't think it is in the hardware.
gdf
Joined: 17 Aug 08, Posts: 2705, Credit: 1,311,122,549, RAC: 0
For BOINC this is a really bad move. There are projects which could really benefit from widespread fast double precision capability. But for the existing projects it's not that bad:

- GPU-Grid: We're using single precision, so crippling dp doesn't change Fermi performance at all.
- Milkyway: It runs almost at peak efficiency on ATIs, so previously any nVidia GPU or CPU was much less energy efficient. These shouldn't run MW, since their crunching power can be put to much better use (projects the ATIs / nVidias can't run). Fermi could have been the first nVidia GPU to actually be of use at MW, but not in this crippled form.
- Collatz: It runs very well on ATIs without dp (all but the top chips) and on compute capability 1.0 and 1.1 cards, whereas 1.2 and 1.3 cards are not any faster per theoretical FLOPS. This project should be left to the cards which can't be used anywhere else efficiently [and as a backup for the MW server], i.e. all the smaller ATIs and nVidias which are too slow for GPU-Grid (or others). CC uses integers, so crippled dp on Fermi doesn't change the fact that it shouldn't be used here anyway.
- Einstein: Their app offers only mild GPU assistance to the CPU. Credit-wise it's absolutely not worth it. Science-wise I think we'd need proper load balancing to make it feasible on GPUs. That's a software problem, so I don't know how Fermi hardware affects this.
- SETI: EDIT: just read GDF's post that SETI is using "only" sp as well.

BTW: ATI does not support dp on any but their highest-end chips. They're free to do this, but consider the following: Juniper (HD57x0) is almost exactly half a Cypress (HD58x0). Everything is cut in half, except some fixed-function hardware like display driving circuitry, which is present in both chips. Therefore one would expect the transistor counts to be about half of each other, and that's indeed the case: 2154 million versus 1040 million. Two Junipers would need 2080 million transistors. That means it would have cost them about (2154 - 2080)/2 = 37 million transistors to put dp support into Juniper. That would have increased its transistor count, and thereby area and cost, by approximately 3.5%. And leaving the feature in would not have cost them any development effort: just reuse the same shaders for the entire product family. Removing the feature did require a redesign and thus caused cost. It's a legitimate business decision: reducing cost on all of their mainstream chips does count, even if it's as small as ~4%. And enabling dp on Redwood and Cedar would probably have provided very little benefit anyway. I would have preferred it in Juniper, though.

MrS
Scanning for our furry friends since Jan 2002
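The transistor arithmetic above, spelled out as a quick back-of-the-envelope script (it uses only the counts quoted in the post; the per-die cost interpretation is the poster's estimate, not an official figure):

```python
# Rough estimate of the transistor cost of double-precision support in a
# Juniper-class chip, following the reasoning in the post above.
cypress_m = 2154   # HD 58x0 transistor count, millions (has dp)
juniper_m = 1040   # HD 57x0 transistor count, millions (no dp), ~half a Cypress

extra_in_cypress = cypress_m - 2 * juniper_m   # 2154 - 2080 = 74
dp_cost_for_juniper = extra_in_cypress / 2     # ~37 million transistors

print(f"dp support for Juniper: ~{dp_cost_for_juniper:.0f} million transistors, "
      f"about {100 * dp_cost_for_juniper / juniper_m:.1f}% of the die")
```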
Joined: 17 Aug 08, Posts: 2705, Credit: 1,311,122,549, RAC: 0
> Double precision, as well as ECC, is not needed for gaming. That's what is behind their choice. They don't expect this to have any impact on the real market for the GTX480.

That's what they say... but, seriously, it took effort on their side to disable it. Disabling it costs them something, whereas just leaving it enabled would have required no action at all. Their motivation is clear: pushing Tesla sales. And I doubt it's a simple driver issue. Anyone buying non-ECC Geforces over Teslas for number crunching is running non-mission-critical apps and would probably just use a driver hack (which would undoubtedly follow soon after the cards are released). I read nVidia went to some lengths to disable the installation of Quadro drivers (better CAD performance) on recent Geforce models the hard way. Not sure if a BIOS flash could help, though.
MrS
Scanning for our furry friends since Jan 2002
Joined: 29 Aug 09, Posts: 175, Credit: 259,509,919, RAC: 0
Ouch... if GPUGRID uses sp, that's a different story :-) But it's a real pity that if one day GPUGRID wants to use dp, I'll have to spend $$$.
Joined: 17 Aug 08, Posts: 2705, Credit: 1,311,122,549, RAC: 0
In the absolute best case you're "only" 50% slower in dp than in sp. The only reason to use dp is if the precision provided by sp is not enough - a circumstance GPU-Grid can luckily avoid. Otherwise they wouldn't have been CUDA pioneers ;)
MrS
Scanning for our furry friends since Jan 2002
GDF (Joined: 14 Mar 07, Posts: 1958, Credit: 629,356, RAC: 0)
We don't really avoid it. We do use software double precision emulation in some specific parts of the code where it is needed; it is just implemented in terms of single precision. Of course, if you had to use it all the time, it would make no sense.
gdf
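GDF doesn't say which routine GPUGRID uses, so the snippet below is only a sketch of the standard technique such emulation is usually built on: "double-single" (two-float) arithmetic based on Knuth's error-free two_sum, shown here in Python with numpy float32 standing in for the GPU's single-precision type:

```python
import numpy as np

def two_sum(a, b):
    """Knuth's error-free addition: returns (s, e) so that s + e == a + b exactly,
    using only single-precision operations."""
    a, b = np.float32(a), np.float32(b)
    s = np.float32(a + b)
    bb = np.float32(s - a)
    e = np.float32((a - np.float32(s - bb)) + np.float32(b - bb))
    return s, e

def ds_add(x, y):
    """Add two double-single numbers, each stored as a (hi, lo) pair of float32."""
    s, e = two_sum(x[0], y[0])
    e = np.float32(e + x[1] + y[1])
    return two_sum(s, e)

# Example: accumulate a value far too small to register in plain float32.
acc = (np.float32(1.0), np.float32(0.0))
for _ in range(1000):
    acc = ds_add(acc, (np.float32(1e-9), np.float32(0.0)))
print(float(acc[0]) + float(acc[1]))   # ~1.000001; plain float32 would stay at 1.0
```

Each value is carried as a (hi, lo) pair whose sum represents it to roughly twice single precision, at the cost of several sp operations per emulated dp operation, which is why it only pays off in the few code paths that actually need the extra precision.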
liveonc (Joined: 1 Jan 10, Posts: 292, Credit: 41,567,650, RAC: 0)
> Double precision, as well as ECC, is not needed for gaming. That's what is behind their choice. They don't expect this to have any impact on the real market for the GTX480.

It's not the first time & it's getting really annoying. Everything from rebranding to disabling/crippling with the intent to push Teslas. As if it wasn't hard enough for them to justify asking people to pay $500 for a GTX480 that eats lots of electricity, produces tonnes of heat & only slightly outperforms a much cheaper Ati. They're shooting themselves in the foot every time they try to pull these tricks. I'm really considering Ati, but I'd hate to see Nvidia go away. One person is no loss, but I'm not the only one thinking these thoughts.
skgiven (Joined: 23 Apr 09, Posts: 3968, Credit: 1,995,359,260, RAC: 0)
Again, double precision is not required to run GPUGrid tasks; accuracy & statistical reliability of the models are achieved in different ways here. That said, I broadly agree with what you are saying. If NVidia had had the sense to create a single-precision core design, it would have been less expensive to design, manufacture and sell, and might have been in the shops last Christmas. This would also have separated gaming cards from professional cards. This still needs to be done. All "eggs in one basket" strategies have their downfalls.
Beyond (Joined: 23 Nov 08, Posts: 1112, Credit: 6,162,416,256, RAC: 0)
Yet it seems that AMD/ATI has no problem at all implementing fast double precision in their consumer-grade cards.
Joined: 16 Aug 08, Posts: 87, Credit: 1,248,879,715, RAC: 0
From http://forums.nvidia.com/index.php?showtopic=165055 - "Tesla products are built for reliable long running computing applications and..." OK, did you catch that? Increase reliability by using lower clocks! Now, if you were making a consumer card, where all the market information including statements from your competition says that the consumer market does not need double-precision floating point, what would you choose to do to increase reliability: reduce clocks all around, or just reduce clocks on double-precision floating point, a feature not needed by the market? I know which I would select. And we know what ATI selected when they removed double-precision floating point from all but the top-end models of their mainstream line of GPUs. Maybe they will work out the technical hitches in the future so they can crank up the clock speed on double-precision floating point. But for now, all these conspiracy theories are making me remember the grassy knoll...
liveonc (Joined: 1 Jan 10, Posts: 292, Credit: 41,567,650, RAC: 0)
It's not about WHY Ati removed double precision, but HOW Nvidia crippled the GTX470/480. If the logic behind Teslas being so great, because they use lower memory and core clocks "to increase reliability and long life", were true, why didn't they choose to cripple the Teslas instead? GPUs have warranties, some even longer than people care to have them. Personally, I'd use a GPU for 2-3 years and I even OC them; sure, they are prone to errors, but GPUGRID.net is just a hobby for me.
GDF (Joined: 14 Mar 07, Posts: 1958, Credit: 629,356, RAC: 0)
A question about ATI removing double precision in the lower cards. As far as I know, Fermi and the 5000-series ATI cards use the same compute cores to compute single and double precision, just processing the data differently. There is no specific hardware for double precision (maybe a little in the control units). I don't think there is much to gain, in terms of saved transistors, by removing double precision. So maybe ATI is doing marketing as well when they remove double precision.
gdf