Overclocking

Message boards : Graphics cards (GPUs) : Overclocking

Profile MJH
Joined: 12 Nov 07
Posts: 696
Credit: 27,266,655
RAC: 0
Message 23225 - Posted: 3 Feb 2012, 10:34:25 UTC

Hi,

We're interested in hearing about your experiences overclocking cards - in particular the top-end *70s and *80s.
* Do you overclock?
* What settings do you use?
* Which clocks are most relevant to GPUGrid performance?
* Do you downclock certain clocks?
* How does it impact stability (failed WUs, system crashes)?
* What OS do you use?

Thanks!

MJH
JLConawayII
Joined: 31 May 10
Posts: 48
Credit: 28,893,779
RAC: 0
Message 23239 - Posted: 3 Feb 2012, 22:01:51 UTC
Last modified: 3 Feb 2012, 22:05:35 UTC

Not exactly a high end card, but here it is:

Zotac GTX 260
base clock 576 core/1242 shader/1000 memory
OC to 701 core/1512 shader/1100 memory
73°C max @ 55% fan

At these settings the card has excellent stability; I've only had one WU error, and I'm not convinced it was related to the overclock. TBH I'm not sure the memory OC is even necessary - I don't think this project uses enough memory bandwidth for it to be relevant. I believe this card can go higher, but I haven't tested it yet. This is under Windows 7 using MSI Afterburner.
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 23240 - Posted: 3 Feb 2012, 22:36:55 UTC - in response to Message 23239.  

- increase core & shader clock
- memory plays a minor role, but not so much that downclocking would be worth it
- OS does not really matter for OC
- stability: it depends ;)
Clock too high and stability suffers; stay within the limits of what your card can do and RAC will increase.

And some more:
- keep temperatures down, fans as high as your noise tolerance permits
- if you really want to go for it, you can clock higher by increasing the GPU voltage... this lowers the power efficiency of the GPU, though

MrS
Scanning for our furry friends since Jan 2002
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 23242 - Posted: 4 Feb 2012, 0:21:24 UTC - in response to Message 23240.  

My first Gigabyte GTX470 was reference design and 1st edition. It could barely overclock, and due to the single-fan exhaust cooling system it got very hot very quickly. From 607 to around 615MHz was about all that was safe. Some tasks could run reliably at 620MHz, but some would fail. At stock it was very reliable.

My second Gigabyte GTX470 is 2nd edition and can OC a bit more (630MHz is safe for all tasks). It draws less current and power and has a better fan, but is still of reference design. The default fan speed is higher and more powerful, so it runs a bit cooler.

I now have a Zotac AMP GTX470 which is factory overclocked to 656MHz and uses a Twin Frozr fan system. When running a Nathan task at 99% GPU utilization the fans can run at around 66% and keep temperatures below 70°C. A Gianni task with 88% GPU utilization keeps the card around 65°C at 60% fan speed (~1800rpm). It can comfortably OC to >680MHz (without upping the voltage), which is 12% more than reference.
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
Profile Retvari Zoltan
Joined: 20 Jan 09
Posts: 2380
Credit: 16,897,957,044
RAC: 0
Message 23243 - Posted: 4 Feb 2012, 0:45:26 UTC - in response to Message 23225.  
Last modified: 4 Feb 2012, 1:38:54 UTC

I'm using Windows XP x64 (and x86 too).
I've overclocked my GTX 480s to 800MHz@1050mV, my GTX 580 to 850MHz@1063mV, and my GTX 590 to 720MHz@925mV and 913mV (memory clocks at factory settings on all cards).
Also, I've changed the GPU coolers to Arctic Cooling Accelero Xtreme Plus, so the GPU temps are well below 70°C (except on my GTX 590, which still has the stock cooler plus a 12cm fan directly above the card; it goes up to 83°C sometimes).
An 80+ Gold (or even an 80+ Platinum) certified power supply is recommended for overclocking, with adequate headroom for the extra power needed. Considering efficiency and longevity, it's not recommended to load a PSU above 75% of its nominal wattage in the long term.
Remember: power consumption scales linearly with frequency but quadratically with voltage (so 10% more voltage causes 21% more power consumption, since 1.1*1.1=1.21), and these add up.
At higher temperatures the power consumption of a chip goes even higher.
At higher temperatures a chip tolerates less overclocking.
A 10°C (or 10K) rise in the chip's operating temperature halves its lifetime (more precisely, its MTBF).
A higher overclock may work with workunits that have lower GPU utilization, but tasks with high GPU utilization may fail at those settings.
To make overclocking worth the effort, workunits should not fail at all. It's easy to lose the 10-15% gain of overclocking through failed workunits, because the running time of a workunit is long (even for the short ones).
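The rules of thumb above can be put into a quick back-of-the-envelope calculation. A sketch in Python: the scaling laws and the 10-15% figure come from this post, but the example clock, voltage, and failure-rate numbers are made up for illustration:

```python
def relative_power(freq_ratio, volt_ratio):
    """Rule of thumb from the post: power scales linearly with
    frequency and quadratically with voltage, P ~ f * V^2."""
    return freq_ratio * volt_ratio ** 2

def effective_throughput(speedup, failure_rate):
    """A failed WU wastes its whole (long) run, so real throughput is
    the overclocked speed times the fraction of WUs that succeed."""
    return speedup * (1.0 - failure_rate)

# 10% more clock at 10% more voltage costs ~33% more power (1.1 * 1.21)
print(relative_power(1.10, 1.10))

# A 15% overclock is almost entirely wiped out once ~13% of WUs fail
print(effective_throughput(1.15, 0.13))
```

In other words, the break-even failure rate is small: a few percent of failed long workunits is enough to cancel a typical stable overclock.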
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 23244 - Posted: 4 Feb 2012, 1:07:23 UTC - in response to Message 23243.  
Last modified: 5 Feb 2012, 14:36:40 UTC

As well as the outright losses and outages from continuous task failures, there are 'recoverable' losses, which can be identified by reduced performance: longer run times at higher clocks.

Driver-related downclocking is also more likely to occur at higher temperatures and clocks.
Paul Raney
Joined: 26 Dec 10
Posts: 115
Credit: 416,576,946
RAC: 0
Message 25176 - Posted: 20 May 2012, 13:32:56 UTC - in response to Message 23244.  

All of my GTX cards are overclocked, with different levels of success.

GTX 570
Voltage 1.1V
924MHz

GTX 570HD
Voltage 1.075V
881MHz

GTX 580
Voltage 1.138V
944MHz

GTX 580
Voltage 1.138V
953MHz

Now looking for a GTX 590!

Thx - Paul

Note: Please don't use driver version 295 or 296! Recommended versions are 266 - 285.
Profile TarHeal
Joined: 24 Sep 11
Posts: 9
Credit: 10,103,862
RAC: 0
Message 25206 - Posted: 23 May 2012, 5:00:19 UTC - in response to Message 23244.  

It would be very helpful if ACEMD could flag "recoverable" errors in the Event Log. Would that be feasible? When a work unit takes longer than it was initially estimated to take, how can we know when to attribute that to recoverable errors rather than the many other variables that could be to blame? How reliable are those initial time estimates from GPUGrid?

How aware is the GPUGrid app of the hardware it is being run on? If a card's settings are changed, by the user or by the driver, in the middle of a WU, could that information be recorded by BOINC?

On the subject of downclocking - if overclocking tends to be energy inefficient, does downclocking ever improve energy efficiency? That might be a worthwhile area of study, especially if it means your PCs generate less heat and your A/C doesn't have to run 24/7.
Profile TarHeal
Joined: 24 Sep 11
Posts: 9
Credit: 10,103,862
RAC: 0
Message 25207 - Posted: 23 May 2012, 6:18:52 UTC - in response to Message 23243.  

A higher overclock may work with workunits that have lower GPU utilization, but tasks with high GPU utilization may fail at those settings.
To make overclocking worth the effort, workunits should not fail at all. It's easy to lose the 10-15% gain of overclocking through failed workunits, because the running time of a workunit is long (even for the short ones).


This is a great point. I've only recently figured out that overclocking to get a higher framerate for games is not the same as overclocking for science. Most video cards are engineered for a consumer market, not to run at 100% processor load 365 days a year. So if you do GPU computing, Nvidia Inspector is your friend. I hope that by closely monitoring GPU utilization and temps for a while I can do a better job of figuring out what my hardware is capable of under the worst-case scenario. If I'm gonna have any more errors and wasted hours, I want them to be Nvidia's fault. (Seriously guys, if you're gonna charge $1000 for a graphics card, you should provide drivers that aren't broken.)

Also, having RMA'd a card for a fan that died after only five months of heavy BOINC usage, I've learned to be very careful when buying low- to mid-range cards; most aren't built to last under such a workload.
TheFiend
Joined: 26 Aug 11
Posts: 100
Credit: 2,863,609,686
RAC: 265
Message 25208 - Posted: 23 May 2012, 6:23:42 UTC - in response to Message 25206.  


On the subject of downclocking - if overclocking tends to be energy inefficient, does downclocking ever improve energy efficiency? That might be a worthwhile area of study, especially if it means your PCs generate less heat and your A/C doesn't have to run 24/7.


You can improve energy efficiency and reduce heat output by undervolting your cards.
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 25213 - Posted: 23 May 2012, 20:15:16 UTC

@Energy efficiency: a lot of research has already been done on this.

Pure underclocking does not increase power efficiency. If you consider only the GPU and there were no leakage, power efficiency would stay constant as you underclock: you lower power consumption and performance linearly. However, there's a certain amount of sub-threshold leakage which a chip always draws, regardless of whether the transistors are switching, and this is significant. By underclocking you only lower the power consumed by switching the transistors, not this constant draw. And then there's the system around the GPU: CPU, mainboard, chipset, HDD, memory etc. They all consume a fixed amount of power, no matter how far you underclock your GPU. The result: underclocking actually decreases system power efficiency, whereas overclocking increases it.

However, GPU power consumption scales approximately quadratically with voltage, and that even includes leakage currents. Therefore the best way to improve power efficiency is to lower the voltage. The chip needs voltage to reach high frequencies, so the ultimate efficiency of the GPU itself is reached at the minimum voltage, together with the highest clock achievable at that setting (this is going to be lower than stock). The peak power efficiency for the entire system may be reached at higher voltage-clock combinations, though, as the fixed amount of power around the GPU is still required.
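This argument can be sketched as a toy model in Python. The fixed system draw, leakage share, and dynamic power figures below are invented for illustration, not measurements; only the scaling relations come from the post:

```python
def system_efficiency(clock, voltage,
                      fixed_system_w=100.0,  # CPU, board, disks: constant
                      leakage_w=30.0,        # GPU leakage at stock voltage
                      dynamic_w=150.0):      # GPU switching power at stock
    """Performance per watt for the whole system, in arbitrary units.
    clock and voltage are ratios relative to stock (1.0 = stock).
    Dynamic power ~ f * V^2, leakage ~ V^2, system draw is fixed."""
    performance = clock  # throughput tracks core clock
    power = (fixed_system_w
             + leakage_w * voltage ** 2
             + dynamic_w * clock * voltage ** 2)
    return performance / power

stock = system_efficiency(1.0, 1.0)
underclocked = system_efficiency(0.8, 1.0)  # slower, voltage unchanged
undervolted = system_efficiency(0.9, 0.9)   # lower clock AND lower voltage

# Pure underclocking lowers system efficiency: the fixed draw dominates more
assert underclocked < stock
# Undervolting (at the clock it still sustains) beats stock efficiency
assert undervolted > stock
```

Whatever the exact numbers, the shape of the result is the same: the fixed overhead penalizes pure underclocking, while lowering voltage attacks the quadratic term directly.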

Another way to slightly increase efficiency is to lower the GPU temperature. This reduces power consumption, as a rough rule of thumb, by a few W for every 10 K difference (for contemporary large GPUs).

BTW: as a German I find the thought of crunching with private PCs in rooms with A/C quite strange. I already pay enough for the power consumption of the PCs; I wouldn't want to pay extra just to remove that heat. But we don't usually use A/Cs at home anyway.

MrS
TheFiend
Joined: 26 Aug 11
Posts: 100
Credit: 2,863,609,686
RAC: 265
Message 25217 - Posted: 24 May 2012, 9:58:25 UTC - in response to Message 25213.  



BTW: as a German I find the thought of crunching with private PCs in rooms with A/C quite strange. I already pay enough for the power consumption of the PCs, I wouldn't want to pay extra just to remove that heat. But we don't usually use A/Cs at home anyway.

MrS


Living in Germany you don't need A/C in your house, but a lot of crunchers live in countries where A/C is more common in residences.
Paul Raney
Joined: 26 Dec 10
Posts: 115
Credit: 416,576,946
RAC: 0
Message 25433 - Posted: 2 Jun 2012, 5:50:28 UTC - in response to Message 25217.  

EVGA GTX 590 @ 664MHz, standard memory clock. I will be pushing the memory clock a bit higher over the next few days.
Thx - Paul

Profile Retvari Zoltan
Joined: 20 Jan 09
Posts: 2380
Credit: 16,897,957,044
RAC: 0
Message 25434 - Posted: 2 Jun 2012, 7:36:09 UTC - in response to Message 25433.  

I will be pushing the memory clock a bit higher over the next few days.

Overclocking memory won't increase the performance of the GPUGrid client much.
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 25435 - Posted: 2 Jun 2012, 7:41:30 UTC - in response to Message 25433.  
Last modified: 2 Jun 2012, 8:00:50 UTC

As Zoltan said, there is probably no point in increasing your GPU's memory clock; for GPUGrid tasks you will gain little or nothing in performance, but you could overheat the card or cause task failures. Leave it as is, or increase the core (607MHz) and shaders very modestly. It's probably best not to touch the voltage unless you have to. Sometimes you can reduce the voltage slightly, increase the fan rate, and increase the clocks slightly (though this is less likely to work on a GTX590 than on a single-GPU card).
Paul Raney
Joined: 26 Dec 10
Posts: 115
Credit: 416,576,946
RAC: 0
Message 25442 - Posted: 2 Jun 2012, 11:04:40 UTC - in response to Message 25435.  

Thx for the insight on memory clocks and GPUGrid performance. I really did not want to spend hours today tuning memory clocks.
Thx - Paul

Paul Raney
Joined: 26 Dec 10
Posts: 115
Credit: 416,576,946
RAC: 0
Message 25554 - Posted: 7 Jun 2012, 12:46:20 UTC - in response to Message 25442.  

My GTX 580s are stable at a 972MHz core clock and 2000MHz memory at 1.138V. I want to go to 1000MHz, but my initial attempt was a failure.

Has anyone used the new EVGA BIOS that unlocks the fans to 100% and raises the max voltage to 1.150V? I think with 1.150V I could get to 1GHz on my core clock.

What is the BIOS upgrade process? Has anyone had a BIOS upgrade fail? Does the card fall back to a default BIOS, or is it an RMA issue?

Any help is appreciated.
Thx - Paul

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 25555 - Posted: 7 Jun 2012, 15:37:19 UTC - in response to Message 25554.  

1GHz may work with the additional voltage, but it's going to be close (either way).

Backup BIOS: as far as I know, only recent high-end AMD cards provide a dual BIOS. So you'd better not unplug anything during flashing...

MrS
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 25556 - Posted: 7 Jun 2012, 16:34:32 UTC - in response to Message 25555.  
Last modified: 7 Jun 2012, 16:35:11 UTC

An 85% fan rate is quite high, and 972MHz definitely is: a 26% OC.
For such a pricey GPU, flashing just to attempt another 3% isn't worth it IMO, at least not until it's got a lot of miles under its belt.
Paul Raney
Joined: 26 Dec 10
Posts: 115
Credit: 416,576,946
RAC: 0
Message 25557 - Posted: 7 Jun 2012, 17:54:14 UTC - in response to Message 25556.  

It isn't worth the 3% gain. Better to wait and buy the big Kepler, then flash those. These are eBay cards, so EVGA will likely not want to provide RMAs :-)

The new BIOS opens the fan to 100% and provides a max voltage of 1.150V. Just enough to get to 1GHz!
Thx - Paul


©2025 Universitat Pompeu Fabra