Message boards : Graphics cards (GPUs) : Fermi released
Joined: 2 Mar 09 Posts: 28 Credit: 4,975,808 RAC: 0
Now that benchmarks are flourishing all over the net, I hope that someone will very soon get their hands on a 470 (or even a 480) and crunch GPUGRID WUs. GPU computing certainly isn't part of a "standard" benchmark...
Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0
Anand says that at folding, a GTX480 is 3.5 times faster than a GTX285. Impressive!

MrS
Scanning for our furry friends since Jan 2002
Zydor Joined: 8 Feb 09 Posts: 252 Credit: 1,309,451 RAC: 0
Excellent review (as always) from Guru3d at: GTX 480 Review Guru3d

The hardware-based tessellation is superb, and the new 32x AA mode promises much better graphics quality. Power, as widely flagged up pre-release, is horrendous: adding 250 W to base system + single card for each card in SLI is going to eat up PSUs, and will mean going up a PSU in many cases. That will give many pause for thought. GTX 480 Power needs

The clear performance improvement of 10% or so over a 5870 is not reflected in 3DMark. Usually with Guru3d reviews that benchmark is not far off the real-world game tests they do; not so this time. As it's a benchmark it's academic, since real-world use obviously has more credence, but interesting nonetheless. 3DMark Vantage (DirectX 10) Performance

The GTX295 appears to be EOL in practice, if not formally, as they don't make them any more. So in the world of NVidia and CUDA, Fermi is it. Let's hope there are others lower down the totem pole (aka 420/430/440, whatever); I can't believe there will not be, because without them they will have a hard time against the "average" PC user and ATI on cost and power.

I get the impression it's "unfinished business" and there is much more to come if they can only sort out the power needs. At 250 W TDP it's going to be one hell of a hot mother, and will need a PSU one step above the "norm" and a review of case cooling for high-end users. As for 2/3/4 of them in SLI...

Personally, given the funds and availability, I'll probably end up getting one, as my second box has a 600 W PSU anyway - but only because of the impending new GPUGrid project. Without any CUDA needs I would not buy one: the power, heat, and cost are just too much in comparison to the 5970/5870, and those aspects are not offset enough by the performance increases to be worth the pain in those areas in my personal situation - everyone has their own needs and drivers, of course.

Overall, a nice card that performs well; shame about the heat and power, and I wish they had come out with this last year. It will keep them in touch with ATI, but will not set the graphics race alight until the mid-term refresh or Fermi2.

Regards
Zy
Joined: 18 May 09 Posts: 10 Credit: 200,701,509 RAC: 0
I'll be sticking with my 216-core GTX260 for now because I can't afford an upgrade, but the specs are incredible. It looks like the card is made using 40 nm manufacturing tech; maybe they'll be able to shrink the die to 32 nm by the time I have enough $ to get myself one.
Joined: 29 Aug 09 Posts: 175 Credit: 259,509,919 RAC: 0
Not really excited. Performance-wise against my OC'ed GTX275: from 0% to 50%, mostly 25-35%. Really not that much. I'm not sure what Anand is talking about; I see only tens of percent.

Power consumption: I've got a good 720 W PSU, so it's not an issue for me. But in moving to the 40 nm process half a year ago, ATI managed to get below 200 W even for the 5870; Nvidia's lower-end GTX470 consumes more...

Heat: this is a problem. Running such a hot GPU 24/7 IMO requires certain measures for better cooling. I've got a CM HAF 932, so it should not be that bad for me, but still...

Voltage tweaking: another problem. It's a real pity that even if I fix the heat problem somehow, I cannot OC the card beyond the granted 100 MHz limit.

Price: well overpriced against ATI.

In general, I was not disappointed, because I was ready :-) See the next-door thread; that information was right, even about Vantage :-)

Will I consider buying it? At least not the A3 revision. And again, it makes sense to get a dual-GPU card for GPUGRID, which will be available in Q3 this year. BTW, in early Q3 ATI is releasing 28 nm cards (5890 & 5990?), while Nvidia will release 28 nm cards only next year (Fermi2, maybe).

Zydor, thx for the link :-)
Joined: 2 Mar 09 Posts: 28 Credit: 4,975,808 RAC: 0
> Anand says at folding a GTX480 is 3.5 times faster than a GTX285. Impressive!

It truly is! But I'll wait for a revision: consumption and heat are too big issues to buy a Fermi now.
skgiven Joined: 23 Apr 09 Posts: 3968 Credit: 1,995,359,260 RAC: 0
The heat block on the GTX480 just begs to have a system fan blowing directly onto it.
Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0
> I'm not sure what Anand talking about - just tens %s only.

When I said "folding" I meant Folding@home. In games it's of course only as fast as you said.

Regarding power consumption: nVidia is using many more transistors than ATI; that's why their power consumption is naturally higher.

Voltage tweaking: with a card like the GTX480 I'd probably consider voltage tweaking... lowering it to reduce heat, noise and power consumption! There's probably not much room left, though. And I wouldn't increase the voltage with anything less than water cooling, though an Accelero Extreme could manage the heat if the software doesn't use the card 100% (e.g. Seti) and there's very good case ventilation.

> BTW, early Q3 ATI releasing 28nm cards (5890 & 5990?)

Let's wait and see ;) Sure, it's on some leaked roadmap, but 28 nm is not exactly easier than 40 nm, is it? I'll remain sceptical until I see more solid evidence. Besides, I read that ATI plans a refresh around summer. This would still be made at TSMC, as there hasn't yet been enough time to work with GF and to develop their new general-purpose processes. And 32 nm has disappeared from TSMC's roadmaps, so I suppose the refresh will be based on TSMC's 40 nm. And the next release (28 nm, probably a 6000 series, since they'd be running out of numbers in the 5000 series) would not come until at least half a year after that summer refresh. Whatever they end up doing, it's going to be interesting :)

MrS
Scanning for our furry friends since Jan 2002
Joined: 29 Aug 09 Posts: 175 Credit: 259,509,919 RAC: 0
ExtraTerrestrial Apes, I know that you are talking about F@H :-) I cannot understand where the 3.6x figure comes from; I see no reason for it... But let's wait and see what real RAC a 480 can provide.

> nVidia is using many more transistors than ATI

So what? They are only 5-10% faster than the 5870...

About voltage tweaking: I absolutely agree that it's necessary to do something about the heat, but if I've got water cooling or an Accelero Extreme, why can I do nothing?

About ATI: Global Foundries (Fab 1 in Dresden) is almost done with the 28 nm process and will be ready in Q3 to produce the 5890 & 5990. Furthermore, ATI is not happy with TSMC (the famous story with the 4770 last summer), but at that time Fab 1 was not ready for the 5xxx cards and ATI could not wait to release them, so they went with TSMC.
Joined: 29 Aug 09 Posts: 175 Credit: 259,509,919 RAC: 0
http://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=1626

> NVIDIA is limiting the double-precision speed of the desktop GF100 part to one-eighth of single-precision throughput, rather than keep it at half-speed, as per the Radeon HD 5000-series

No comments... and a bit more "sugar in the beer": http://www.youtube.com/watch?v=WOVjZqC1AE4
Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0
> I can not understand from where 3.6 times appearing. I see no reasons for this... But let's wait to see real RAC 480 can provide.

It's right there ;) The point is that a GTX480 has "only" about twice as much raw single-precision shader power as a GTX285, but that power can be used more efficiently: the increased caches should help hide memory latency, and the parallel execution of different kernels could speed things up tremendously. It all depends on the application, though, and games are not where these features give the most benefit.

> nVidia is using many more transistors than ATI, so what? they are 5-10% only faster then 5870...

The number of transistors directly results in the much higher power consumption (for such otherwise similar chips); that's the only reason I mentioned it here.

> but I got water cooling or Accelero Extreme - why I can do nothing?

What do you mean?

Regarding your link to the MW forums: that would be a catastrophe for us, but it would go a long way towards pushing the scientific community towards Teslas. At the same time they'd devalue CUDA, as the end user also benefits from double precision. And anyone who runs mission-critical apps is probably already running Teslas anyway. They could achieve a similar market segmentation by disabling ECC on the GeForce cards. And to quote myself: that report looks dodgy. The author seems confused about the benefits of IEEE compliance and quotes ATI's cards as having 1/2 the sp speed in dp. It should actually be 2/5, which is 40% instead of 50%. Just because I think it would be a stupid move, and the messenger is imprecise on other points, does not necessarily make it untrue. It's certainly a point to keep an eye on; just don't take it for granted yet.

MrS
Scanning for our furry friends since Jan 2002
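The 1/8, 1/2 and 2/5 rates being argued over are easier to follow as peak-throughput arithmetic. A minimal sketch: the shader counts and clocks below are approximate launch specs as reported in reviews, and the double-precision ratios are the claims made in this thread, not confirmed figures.

```python
# Rough peak-throughput arithmetic for the DP-rate debate above.
# Specs are approximate launch figures; ratios are the thread's claims.

def peak_sp_gflops(shaders, shader_clock_mhz, flops_per_clock=2):
    """Peak single-precision GFLOPS: shaders * clock * FLOPs/clock (MAD = 2)."""
    return shaders * shader_clock_mhz * flops_per_clock / 1000.0

gtx480_sp = peak_sp_gflops(480, 1401)   # ~1345 GFLOPS single precision

# Desktop GF100 reportedly capped at 1/8 of SP for double precision,
# versus the 1/2 rate of the uncrippled chip:
dp_capped = gtx480_sp / 8               # ~168 GFLOPS
dp_half = gtx480_sp / 2                 # ~672 GFLOPS

# The HD 5870 DP rate as quoted in the thread: 2/5 of SP, not 1/2.
hd5870_sp = peak_sp_gflops(1600, 850)   # ~2720 GFLOPS
hd5870_dp = hd5870_sp * 2 / 5           # ~1088 GFLOPS by that claim

print(f"GTX480: SP ~{gtx480_sp:.0f}, DP capped ~{dp_capped:.0f}, "
      f"DP at 1/2 rate ~{dp_half:.0f} GFLOPS")
print(f"HD5870: SP ~{hd5870_sp:.0f}, DP at 2/5 rate ~{hd5870_dp:.0f} GFLOPS")
```

If the 1/8 cap were real, the desktop GTX480's peak DP would land far below even the 2/5-rate figure claimed for the HD 5870, which is why the MW thread caused such a stir.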
Zydor Joined: 8 Feb 09 Posts: 252 Credit: 1,309,451 RAC: 0
The discussion has split into two threads, and I have a feeling neither side has realised the different mindset being used for the figures and logic in response.

The figures and discussion re the MW thread, the Anand review et al turn on the DirectCompute/CUDA aspects of the card, and the crippling or otherwise of the various elements to do with the compute functions (DirectCompute/CUDA software et al). Hence the figures of 6x or 3.5x or 40% et al.

That has nothing to do with the other side of the thread, where individuals are talking about the real-world make-up of the card: shader performance, memory speed, power usage, and end-user delivered performance figures.

Take one set of figures from the compute side and refer to them against the user-end side of the card (or vice versa), and confusion reigns supreme :) It's chalk and cheese... The scientific/developer focus is on the compute capabilities; by and large, us mere crunching mortals focus on the real-world end performance of the beast. Two different mindsets, two unrelated sets of figures, and two conversations going on in parallel where neither side has twigged the basis of the figures from the other side of the conversation :)

Regards
Zy
Joined: 29 Aug 09 Posts: 175 Credit: 259,509,919 RAC: 0
Guys, I do understand what you are talking about. I'm really curious to see the real RAC of a GTX400 in GPUGRID or F@H or whatsoever, because these reviews are all about fps in games, and that's not what we are looking for, right? :-) Please post here about the real productivity of the GTX400.
|
liveoncSend message Joined: 1 Jan 10 Posts: 292 Credit: 41,567,650 RAC: 0 Level ![]() Scientific publications ![]() ![]() ![]() ![]() ![]()
|
So there are two different arguments. But like it or not, they do correlate. How? Many who crunch on GPUGRID.net are said to have mid- to high-end GPUs. Not everybody has the funds, or the interest, to buy a high-end GPU simply to crunch 24/7, which brings to mind the motivation to buy a Fermi. GPUGRID.net uses these Nvidia GPUs with superior compute capabilities, and that's great for GPUGRID.net and great for Nvidia. But if it's only good for GPUGRID.net, then unless I'm a super fan, I'm probably not going to buy a Fermi, if the only thing good about it is a compute capability I only use for GPUGRID.net. If the selling argument is CUDA, then more software needs to support and use CUDA in order for a niche to be the reason and motivation behind spending $350-$500 on a new GPU that eats lots of power and doesn't offer something that other GPUs can't. That's just my opinion of things.
Joined: 29 Aug 09 Posts: 175 Credit: 259,509,919 RAC: 0
Me personally, I don't mind spending $200 on a GTX470 (get it for $350 and sell my GTX275 for $150), but only if the GTX470 is worth those $200, i.e. it is at least twice as fast as the GTX275. Other than that, it's just a waste of money and makes no sense at all. Games: I'm not that great a gamer, just 2-3 hours on Friday and Saturday nights to shoot bastards, and I'm done :-) But again, heat is a real issue for us. The GTX470's cooling system is much weaker than the GTX480's, so I doubt the GTX470 can handle that much. And to buy a GTX480 just for the cooling system is ridiculous :-) Voltage tweaking: I agree; if the GPU is that hot at stock voltage, I'm pretty scared to go beyond stock. So, to make a long story short, I'm curious to see real results of the GTX470 in computation.
Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0
I don't see much of a controversy: for games GF100 is clearly not the best solution, period. For compute there's potential, but we don't know enough yet.

MrS
Scanning for our furry friends since Jan 2002
GDF Joined: 14 Mar 07 Posts: 1958 Credit: 629,356 RAC: 0
We have not had any benchmarks yet, though we are already able to compile for CUDA 3.0. The ~3x performance for F@H seems reasonable and is in line with what we said in the past. For compute, Fermi will be amazing, in particular for GPUGRID. We had heard that double precision and ECC were disabled (or something similar) on GeForce; not a real issue for us anyway. The new ACEMD application (100% faster once we release the new beta) on Fermi should be 6 times faster than what we had just a few months ago!

GDF
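The 6x figure above is just the two stated factors composed; as a quick sanity check (both numbers come straight from the post, not from any benchmark):

```python
# Combined speedup: independent factors multiply.
app_speedup = 2.0    # "100% faster" new ACEMD beta (from the post)
hw_speedup = 3.0     # ~3x Fermi vs. the previous generation (from the post)
total_speedup = app_speedup * hw_speedup
print(total_speedup)  # 6.0
```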
Joined: 29 Aug 09 Posts: 175 Credit: 259,509,919 RAC: 0
> I don't see much of a controversy: for games GF100 is clearly not the best solution, period. For compute there's potential, but we don't know enough yet.

101% agree :-) Let's be patient and wait for April 12.
|
SandroSend message Joined: 19 Aug 08 Posts: 22 Credit: 3,660,304 RAC: 0 Level ![]() Scientific publications ![]() ![]() ![]()
|
If you test a GTX480 or GTX470, be very careful: it may kill your Fermi. On a German hardware site they tested a GTX470 with Folding@home: http://www.hartware.de/review_1079_16.html

In short: during the tests the fan turned up to max (3000 rpm) because of the great heat; then the fan control failed, the rpm dropped, and the chip went up to 112°C, followed by a black screen and a freeze. Fortunately, the chip was not damaged. So crunching seems to hit a Fermi chip very hard; if you have no good cooling in your case, you will be in trouble.
Joined: 29 Aug 09 Posts: 175 Credit: 259,509,919 RAC: 0
Sandro, thx a lot for the post :-) On top of what you just told us, at the bottom of that page there is a nice table comparing the GTX285 and GTX470. I'm not that good at German, but as far as I understood, the GTX470 is only 17-22% faster than the GTX285. That's it, not that much at all. And I'm still not sure what Anand was talking about with "3.6 times faster"...
©2026 Universitat Pompeu Fabra