Message boards : Graphics cards (GPUs) : Lowering VDRAM frequency saves energy
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Thanks SK, I think that's as much analysis as we need :)

@Ratanplan: power consumption obviously depends on load. Some variation in background tasks might cause noticeable differences if it changed average GPU utilization. Different GPUGrid WUs achieve different GPU utilization levels, so this will affect power draw as well. And there's always temperature... if the ambient temperature varied by 10°C between measurements, leakage currents will change measurably.

MrS
Scanning for our furry friends since Jan 2002
Chilean · Joined: 8 Oct 12 · Posts: 98 · Credit: 385,652,461 · RAC: 0
I OC'ed my card from 850 to 1256 MHz (core) and 2500 to 2902 MHz (memory) and shaved about an hour off the normal runs (from 18K seconds to no more than 14K seconds). On the long runs the gain is even bigger in absolute terms (probably the same % of improvement, though). I doubt the whole performance gain came from just OC'ing the core. Then again, computers work in mysterious ways.

97-98% utilization, fed by a single HT thread (3610QM i7 CPU). GPU: nVidia 660M.

I think I hit a wall @ 1260 MHz on the core (which is why I left it @ 1256), so now I'm going to bump the memory by a few MHz every day and see if I get anything out of it.

BTW, does the Memory Control Unit (MCU) have anything to do with this?
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
With linear scaling of GPUGrid performance with core clock, raising the core clock from 850 to 1256 MHz should have cut the runtime from 18k to about 12.2k seconds. That's a fair approximation of what you're actually seeing, although your quoted 14k seconds is somewhat worse than the prediction. I postulate you'll see the same reduction to 14k seconds at 2500 MHz memory just as at 2900 MHz.

MrS
Scanning for our furry friends since Jan 2002
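For reference, here is a minimal sketch of the linear-scaling estimate used above. The clock and runtime figures are the ones quoted in this thread; treating runtime as inversely proportional to core clock (with memory clock having no effect) is an assumption, not a measured law.

```python
# Linear-scaling estimate (assumption): runtime is inversely proportional
# to the GPU core clock, and the memory clock is assumed not to matter.

old_clock_mhz = 850      # stock core clock quoted in this thread
new_clock_mhz = 1256     # overclocked core clock quoted in this thread
old_runtime_s = 18_000   # normal-run time at stock clocks (~18K seconds)
observed_runtime_s = 14_000  # measured after the overclock

predicted_runtime_s = old_runtime_s * old_clock_mhz / new_clock_mhz

print(f"predicted: {predicted_runtime_s:.0f} s")  # ~12,182 s
print(f"observed:  {observed_runtime_s} s")
print(f"shortfall: {observed_runtime_s / predicted_runtime_s - 1:.1%}")  # ~15% slower than predicted
```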