Message boards : Graphics cards (GPUs) : GPUGRID and Fermi
Joined: 11 Jul 09 · Posts: 21 · Credit: 3,021,211 · RAC: 0

Thanks for the help, everyone, in getting XP Pro set up. I have completed two work units significantly faster on XP than on Win7, and that is without SWAN_SYNC=0 enabled. I'll post some comparisons once a few work units complete with XP Pro and SWAN_SYNC enabled, and I'll try to find the same work units to compare between XP and Win7. I'm sure my results will be similar to what others have already posted, but maybe there have been some optimizations. It's nice to see XP at over 97% GPU usage, compared to a maximum of ~70% (if lucky) on Win7.
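A note for anyone replicating this: SWAN_SYNC has to be visible to the process that launches the BOINC client (a client running as a service typically won't see per-user variables). A minimal Python sketch to check what a process started from the current environment would inherit - the variable name comes from this thread, the rest is standard library:

```python
import os

# Report whether SWAN_SYNC would be inherited by a process launched from here.
# Per this thread, SWAN_SYNC=0 tells the ACEMD app to keep a CPU core busy
# feeding the GPU (hence the ~95% CPU usage figures reported below).
value = os.environ.get("SWAN_SYNC")
if value is None:
    print("SWAN_SYNC is not set in this environment.")
else:
    print(f"SWAN_SYNC={value}")
```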
Joined: 18 Sep 08 · Posts: 368 · Credit: 4,174,624,885 · RAC: 0

The future will be brighter when ATI also works nicely. As I no longer own any NVIDIA cards, I keep waiting. I'm almost tempted to go out and buy a couple of Fermis ... NOT ... haha.

STE\/E
Joined: 11 Jul 09 · Posts: 21 · Credit: 3,021,211 · RAC: 0

Looks like the work unit pool for 6.73 has dried up. I can't get any more on either of my systems.
Joined: 11 Jul 09 · Posts: 1639 · Credit: 10,159,968,649 · RAC: 2

> Looks like the work unit pool for 6.73 has dried up. I can't get any more on either of my systems.

Go fishing for v6.05 instead. The Fermi version seems to work fine.
Joined: 4 Apr 09 · Posts: 450 · Credit: 539,316,349 · RAC: 0

My experience with WinXP 32 has shown that there is only a very minor difference between HT ON and HT OFF for an i7-920 when you have SWAN_SYNC=0.

Thanks - Steve
Joined: 11 May 10 · Posts: 68 · Credit: 12,531,253,875 · RAC: 2,388,659

What voltage did you set to drive the GTX470 with those settings? At stock voltage the application crashed, so I increased it to 1.050 V. Any other recommendations? Thanks in advance! :-)
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0

> Any other recommendations?

Well... yes: better not to increase the voltage ;) It reduces chip longevity and drives power consumption up. On a chip like Fermi there's also the considerable factor that the higher temperature (due to the voltage increase) raises leakage quite a bit, so your card becomes less power efficient (= higher electricity cost).

I'd rather try to improve cooling and temperatures. That would also give you some extra frequency headroom - probably not as much as a voltage increase would, but without the negative side effects.

MrS

Scanning for our furry friends since Jan 2002
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

My GTX470 is at 704 MHz GPU, 1407 MHz shaders and 854 MHz RAM (×4). Voltages are not increased!
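For scale, those settings are a sizeable bump over a reference GTX 470. A quick sketch, assuming NVIDIA's stock clocks of 607 MHz core, 1215 MHz shader and 837 MHz memory (the stock figures are my assumption, not from the post):

```python
# Overclock margin of skgiven's settings over assumed GTX 470 reference clocks.
reference = {"core": 607, "shader": 1215, "memory": 837}    # MHz (assumed stock)
overclocked = {"core": 704, "shader": 1407, "memory": 854}  # MHz (from the post)

for domain, stock in reference.items():
    gain = overclocked[domain] / stock - 1
    print(f"{domain}: +{gain:.1%}")
# core: +16.0%, shader: +15.8%, memory: +2.0%
```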
Joined: 22 Jul 09 · Posts: 21 · Credit: 195 · RAC: 0

> Any other recommendations?

OCing is not as detrimental as you make it sound. Modern processes are designed to handle very high and very low temperatures. A major consideration when designing a chip is robustness; in fact it is so important that they are overly conservative with clocks and volts, especially with server/professional chips. That's free performance for us. The 400 series is great for OCing too, partially from the architecture and partially from high leakage.
Joined: 11 May 10 · Posts: 68 · Credit: 12,531,253,875 · RAC: 2,388,659

Thanks to all for your help. skgiven, I am using your settings now.

Edit: Sorry, but with the standard setting of 0.962 V the application crashes. I have to select at least 0.975 V to have a stable GTX470 with skgiven's clock settings :-/
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0

@roundup: well, if you want to overclock to a given frequency, there's no guarantee that your chip can make it at a certain voltage. Chips vary. The increase to 0.975 V is not too much, though :) The jump from 0.962 to 1.05 V would have increased your power consumption by 19%, i.e. would have brought you from 210 W TDP to 250 W. That's not even factoring in the increased leakage due to higher temperatures, or the power consumption increase due to frequency. Going to 0.975 V instead is a much more modest 2.7% increase. BTW: the frequency increase itself raises your power consumption by 15% - but that's OK, since you're also getting 15% more performance out of the card.

@chumbucket843: actually, chips become more vulnerable to damage as transistor dimensions shrink. Having a dopant atom swap places with a neighbouring atom starts to hurt when your channel is only a couple of atoms long. We're not that small yet, but it serves to illustrate the problem.

For CPUs I'd agree that they can take quite a beating. So far I've personally seen only one single chip fail (and I've seen many). However, the situation is different for GPUs: the high-end chips are already being driven quite hard, at the edge of stability. And they usually run at 80 - 95°C, much higher than CPUs - not because they could take it by design, but because it's too expensive and loud to cool them any better, and because no one's gaming 24/7. They're made so that most of them survive the warranty period under typical loads - which does not include BOINC.

And the GTX-Fermi cards are not professional cards; they're meant for gamers (hence the crippling of double precision performance). The high leakage is actually something NVIDIA would love to get rid of, but it's a byproduct of the process variation in TSMC's infamous 40 nm process, so there's not much they could do about it without crippling performance. They've got one thing going for them, though: on an absolute scale their stock voltage is quite low, to keep power consumption somewhat in check (one may argue whether they succeed at that - but that's not the point). Hence the chip degradation due to voltage alone (not talking about temperature here) is not as strong as for other chips, and is thus less of a concern when increasing the voltage. So practically you'll only have to deal with temperature and power consumption increases when you raise the voltage.

BTW: I think overclocking and overvolting have to be clearly distinguished. I love overclocking, as it provides performance basically for free and actually increases efficiency. But I don't generally recommend high voltages, as they reduce chip lifetime and efficiency (past a certain point).

MrS

Scanning for our furry friends since Jan 2002
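The percentages above follow from the first-order dynamic power model, P ∝ f · V². A minimal sketch reproducing the figures (voltages from the posts; leakage and temperature effects deliberately ignored, which as MrS notes only add on top):

```python
def relative_power(v_old, v_new, f_old=1.0, f_new=1.0):
    """First-order dynamic power model: P ~ f * V^2 (leakage ignored)."""
    return (f_new / f_old) * (v_new / v_old) ** 2

# Voltage-only changes discussed above, at fixed frequency:
print(f"0.962 V -> 1.050 V: +{relative_power(0.962, 1.050) - 1:.0%}")  # +19%
print(f"0.962 V -> 0.975 V: +{relative_power(0.962, 0.975) - 1:.1%}")  # +2.7%
```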
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

Fortunately the Fermis can be over-volted in very small increments. I initially tried increasing the voltage when I was struggling to use a Fermi with Win7 (I failed to get reasonable performance, and that is still the picture). I was able to take my GTX470 to about 750 MHz, if I remember correctly, and stay under 1 V - not that I was ever going to keep it there!

I think a small tweak in voltage is reasonable if it allows a reasonable performance gain for the relative power usage. So a 15% increase in performance for a 15% increase in power usage seems fair enough; if you take into consideration the full power used by the system, it might be more like an 8% increase in power consumption. I agree that increasing the voltage too much is not just wasteful in terms of power consumption, but also reduces the longevity of your card. I forked out £320 to crunch with a Fermi; I want to crunch, not crash. - Getting burnt is bad, but burning yourself is just stupid!
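The ~8% system-level figure is consistent with the GPU drawing roughly half of total system power. A back-of-the-envelope check - the 215 W and 400 W numbers are illustrative assumptions, not from the post:

```python
gpu_watts, system_watts = 215, 400  # assumed GPU draw and total system draw
gpu_increase = 0.15                 # 15% more GPU power from the overclock
system_increase = gpu_increase * gpu_watts / system_watts
print(f"system-level increase: +{system_increase:.0%}")  # ~+8%
```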
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0

Agreed.. just want to clarify a detail, in case it wasn't clear before: 15% power for 15% performance is only due to the frequency increase. Touching the voltage adds to this (or, in fact, multiplies). So in roundup's example he'd get 1.15 × 1.19 = 1.37, i.e. a 37% increase in power consumption, if he went for the higher clocks at 1.05 V.

MrS

Scanning for our furry friends since Jan 2002
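A self-contained check of that multiplication, using the same P ∝ f · V² model as in the sketch above:

```python
# Combined effect of a 15% frequency bump and a 0.962 V -> 1.050 V increase:
freq_factor = 1.15                  # 15% higher clocks (from the post)
volt_factor = (1.050 / 0.962) ** 2  # ~1.19, dynamic power scales with V^2
print(f"combined: +{freq_factor * volt_factor - 1:.0%}")  # ~+37%
```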
Retvari Zoltan · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0

> Yes, just cuda3.1.

When do you plan to release this CUDA 3.1 (beta) client?
GDF · Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0

Only when CUDA 3.1 is out.

gdf
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

GeForce/ION Release 256 BETA, 257.15, May 24, 2010:

- http://www.nvidia.com/Download/Find.aspx?lang=en-us
- http://www.nvidia.com/object/winxp-257.15-beta.html
- http://www.nvidia.com/object/win7-winvista-32bit-257.15-beta.html

Adds support for CUDA Toolkit 3.1, which includes significant performance increases for double precision math operations. See CUDA Zone for more details.

The XP driver has been working fine for weeks.
Joined: 11 Jul 09 · Posts: 1639 · Credit: 10,159,968,649 · RAC: 2

I've recently moved my Fermi GTX 470 from host 71413 (Windows 7, 32-bit) to host 43404 (Windows XP, 32-bit, running as a service), and moved the 9800GTX+ in the opposite direction.

On both hosts, I started the Fermi with driver 197.75 and then upgraded to driver 257.15_beta to test some CUDA 3.1 stuff for another project. I don't think the speed of the current cuda30 v6.05 ACEMD2 changed significantly with the driver change: if anything, it was slightly slower on the beta driver (as you might expect). I think we'll have to wait for a new cuda31 app as well before we see any benefit from the driver.

What was significant was the increase in speed when I put the Fermi into the WinXP box. Times went down from 19,000+ seconds (with SWAN_SYNC and 95% CPU usage) to 11,000+ seconds and under 15% CPU. It's difficult to tell how much of that is due to the more modern hardware platform and how much to the operating system, but it was a dramatic change.
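Taken at face value, those times imply a large per-task throughput gain; a trivial check using only the figures from the post:

```python
# Speed-up implied by the reported per-task run times (figures from the post).
win7_seconds, winxp_seconds = 19_000, 11_000
print(f"XP is ~{win7_seconds / winxp_seconds:.2f}x faster per task")   # ~1.73x
print(f"run time reduced by ~{1 - winxp_seconds / win7_seconds:.0%}")  # ~42%
```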
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

Until the apps are compiled with the CUDA 3.1 toolkit, you will see no change from using the 257.15 driver.

It has been well reported (by me and others) that Fermi cards don't work well under Vista or Win7. They work, but at 60% speed. I don't know why, but it is either the driver or the app. Basically, if you have a Fermi, use XP or Linux.
Joined: 11 Jul 09 · Posts: 1639 · Credit: 10,159,968,649 · RAC: 2

Those reports are largely what prompted me to make the swap - I just thought I'd post some more actual figures.

The 9800GTX+ which moved in the opposite direction also slowed down, but to a lesser extent - maybe 20% - and started using a lot more CPU time. Now, its new host has a slower CPU, so you would expect it to need more seconds - but not a four-fold increase. That must be down to Windows 7, too.
Beyond · Joined: 23 Nov 08 · Posts: 1112 · Credit: 6,162,416,256 · RAC: 0

> Those reports are largely what prompted me to make the swap - I just thought I'd post some more actual figures.

Yep, you got it. The slowdown on Win7 was reported as soon as the OS was officially released. It's specific to GPUGRID, and nothing has been done to resolve the problem so far. Here's a thread on the subject: http://www.gpugrid.net/forum_thread.php?id=1729