Lowering VDRAM frequency saves energy

Message boards : Graphics cards (GPUs) : Lowering VDRAM frequency saves energy

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 26143 - Posted: 3 Jul 2012, 18:21:10 UTC

Thanks SK, I think that's as much analysis as we need :)

@Ratanplan: power consumption obviously depends on load. Some variation in background tasks could cause noticeable differences if it changed average GPU utilization. Different GPUGrid WUs achieve different GPU utilization levels, so this affects power draw as well. And there's always temperature: if the ambient temperature varied by 10°C between measurements, leakage currents will change measurably.
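To make the two effects concrete, here is a minimal sketch with purely illustrative, assumed numbers (the 30 W figure and the clocks are not measurements from this thread): at a fixed voltage, dynamic power scales roughly linearly with clock frequency (P_dyn ~ C·V²·f), which is why lowering the VDRAM clock can save energy even before any voltage change.

```python
def dynamic_power(p_ref, f_ref, f):
    """Estimate dynamic power at clock f, given reference power p_ref
    measured at clock f_ref, assuming P_dyn scales linearly with f
    at constant voltage."""
    return p_ref * (f / f_ref)

# Assumed example: 30 W of memory-related dynamic power at 2500 MHz,
# memory underclocked to 2000 MHz.
p_low = dynamic_power(30.0, 2500.0, 2000.0)
print(f"Estimated memory power at 2000 MHz: {p_low:.1f} W")  # 24.0 W
```

Leakage is the separate, temperature-dependent term the post mentions; it is not captured by this linear model.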

MrS
Scanning for our furry friends since Jan 2002
ID: 26143
Chilean
Joined: 8 Oct 12
Posts: 98
Credit: 385,652,461
RAC: 0
Message 27391 - Posted: 23 Nov 2012, 19:28:40 UTC
Last modified: 23 Nov 2012, 19:29:46 UTC

I OC'ed my card from 850 to 1256 MHz (core) and 2500 to 2902 MHz (memory) and shaved about an hour off the normal runs (from 18K seconds to no more than 14K seconds). On the long runs the absolute gain is even bigger (probably the same % improvement, though). I doubt the whole performance gain came from just OC'ing the core. Then again, computers work in mysterious ways.

97-98% Utilization, fed by a single HT thread (3610QM i7 CPU).

GPU: nVidia 660M.

I think I hit a wall @ 1260 MHz for the core (which is why I left it @ 1256), so now I'm gonna bump the memory by a few MHz every day and see if I get anything out of it.

BTW, does the Memory Control Unit (MCU) have anything to do with this?
ID: 27391
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 27415 - Posted: 25 Nov 2012, 21:31:00 UTC - in response to Message 27391.  

If GPUGrid performance scaled linearly with core clock, raising the core from 850 to 1256 MHz should cut runtime from 18k to about 12.2k seconds. That's roughly what you're seeing, although your 14k seconds is somewhat worse than the ideal. I postulate you'd see the same reduction to 14k seconds at 2500 MHz memory as at 2900 MHz.

MrS
Scanning for our furry friends since Jan 2002
ID: 27415

©2025 Universitat Pompeu Fabra