Message boards : Graphics cards (GPUs) : Low load on GPU
Joined: 9 Aug 11 | Posts: 5 | Credit: 16,225,539 | RAC: 0
I just noticed that on my GTX 580 the load is only 70%. Isn't that very low? Is there any way to increase it? I am not using my CPU, so it is downclocking itself; could that be the reason? When I do use the CPU for another project, the GPU load goes to 85%.
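For anyone wanting to read the load figure straight from the driver rather than a monitoring tool, `nvidia-smi` can report it. The sketch below parses a sample line in the same format the CSV query returns (the hard-coded sample stands in for the live query, since not every machine has the tool available):

```shell
# The live query would be:
#   nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader
# A sample line stands in for that output here.
sample="70 %"
util=$(echo "$sample" | awk '{print $1}')
echo "GPU load: ${util}%"
```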
Joined: 5 Dec 11 | Posts: 147 | Credit: 69,970,684 | RAC: 0
GPU load will vary from project to project; it also varies from workunit to workunit. Most GPUGRID projects use between 70% and 90% GPU load. <edit> Regarding your CPU: it shouldn't matter, as GPUGRID only needs 0.5 of one core to operate. For example, on a 4-core machine it will only load the CPU to 12.5%. I would suggest that the increased GPU load you see when you add in another project comes from either a viewer for the other project, or simply the increased load from using your web browser. It's not actually GPUGRID increasing its work rate.
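The 12.5% figure above is just half of one core spread across four cores:

```shell
# 0.5 of one core, as a percentage of a 4-core CPU's total capacity.
awk 'BEGIN { printf "%.1f%%\n", 100 * 0.5 / 4 }'
```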
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Does the crunching time decrease (for similar WUs) at the higher GPU utilization?

MrS
Scanning for our furry friends since Jan 2002
Joined: 26 Dec 10 | Posts: 115 | Credit: 416,576,946 | RAC: 0
GPU utilization does affect crunching time. On Windows XP, the GPU usually runs at 98% and the work units complete faster. With Win 7, the GPU usually runs at 89% and the work units run a little longer. XP and Linux appear to offer the highest GPU utilization, and thus the best performance.

Thx - Paul
Note: Please don't use driver version 295 or 296! Recommended versions are 266 - 285.
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
I meant in his specific case, where GPU utilization increases if he's crunching CPU projects alongside GPU-Grid.

MrS
Scanning for our furry friends since Jan 2002
Joined: 26 Dec 10 | Posts: 115 | Credit: 416,576,946 | RAC: 0
When using the computer for both CPU and GPU crunching, use the SWAN_SYNC=0 environment variable. This will dedicate CPU resources to the GPU(s). In my case, Rosetta@home and GPUGRID.net cohabit on all of my computers. SWAN_SYNC dedicates about 50% of a CPU core to each graphics card. All of my processors are at 100% and none of my GPUs are starved.

Thx - Paul
Note: Please don't use driver version 295 or 296! Recommended versions are 266 - 285.
Joined: 9 Aug 11 | Posts: 5 | Credit: 16,225,539 | RAC: 0
> Does the crunching time decrease (for similar WUs) at the higher GPU utilization?

Yes, when using the CPU, the time for a GPU unit decreases! So it does seem that the GPU workload drops because the CPU downclocks.
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
This is interesting, as 1.6 GHz is not exactly slow.. and the CPU quickly increases its clock if performance is demanded. Anyway, you might want to either:
- crunch on the CPU as well, or
- create an environment variable "SWAN_SYNC", set its value to 0, and reboot. Thereby you dedicate a logical core to GPU-Grid, and GPU performance should increase.

MrS
Scanning for our furry friends since Jan 2002
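As a sketch, on Linux the variable can be set for the current session as below (on Windows it would go under System Properties > Environment Variables instead; either way BOINC must be restarted, or the machine rebooted, for it to take effect):

```shell
# Dedicate a logical CPU core to the GPUGRID app by setting SWAN_SYNC=0.
# For a permanent setting, this line would go in a shell profile or
# the BOINC service environment rather than an interactive session.
export SWAN_SYNC=0
echo "SWAN_SYNC=${SWAN_SYNC}"
```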
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Some time ago I speculated that this would be the case. CPU behaviour could vary substantially by model. It would be difficult to spot the CPU being used, increasing the clock and then downclocking again, and it might not always occur. I can open large apps without the CPU moving from 1.6 GHz to 3.8 GHz, as it uses threads from different cores, but sometimes the clock does rise, so it's maybe not even predictable. Using SWAN_SYNC does keep the clocks high.

Another thing is that, as well as seeing a performance reduction from CPU thread saturation, you would see the clocks drop on many models; the various i7 models all have turbo boost steps, which tend to be reduced in increments of usually 100 MHz each time. So using one thread might keep full turbo (say 3.8 GHz), but this would reduce to 3.4 GHz when all the threads are in use (an ~11% drop before even considering thread saturation). Different tasks require different amounts of CPU use too, so the impact of freeing more or fewer CPU cores, or of using SWAN_SYNC, varies from task to task.

FAQ's
HOW TO: - Opt out of Beta Tests - Ask for Help
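The turbo-step figure quoted above is simply the relative clock drop between single-thread and all-core turbo:

```shell
# Relative drop from 3.8 GHz (single-thread turbo) to 3.4 GHz (all-core turbo).
awk 'BEGIN { printf "%.1f%% drop\n", 100 * (3.8 - 3.4) / 3.8 }'
```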
Joined: 9 Aug 11 | Posts: 5 | Credit: 16,225,539 | RAC: 0
OK, so the best option for me is to use the CPU for another project when crunching GPU units. If I understand correctly, the other option would prevent the CPU from clocking down, wasting energy anyway, so I might as well use the CPU then. Thanks.
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Well.. yes. However, SWAN_SYNC=0 would only use one (logical) core and might increase your GPU speed. Anyway, you could also easily combine both: 7 other CPU threads, plus one dedicated to servicing your GPU. Although I'm not sure how much this still helps with the current client, and a new one has been in beta testing for a few weeks now..

MrS
Scanning for our furry friends since Jan 2002
Joined: 9 Aug 11 | Posts: 5 | Credit: 16,225,539 | RAC: 0
For now I have put Docking on the CPU and GPU time is decreasing, so that is good. I hope that the new app for the GTX 6xx series on Windows 7 is released soon, because I have a GTX 670 arriving tomorrow.
Joined: 8 Mar 12 | Posts: 411 | Credit: 2,083,882,218 | RAC: 0
It's up and running. Currently the short queue only; next week, hopefully, the long queue as well.
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
SWAN_SYNC=0 only made ~3% difference on Windows the last time I checked (CUDA 3.1 app). These days its impact on performance would probably vary more from task to task, but overall be less. I think it was initially ~9%.

SWAN_SYNC=0 doesn't presently work with the CUDA 4.2 app; Michael said he disabled it, and the task results don't suggest it has been re-enabled. If it were available, it might help keep the GPU clocks high on the Keplers, and the CPU clock high too. It would certainly need to be tested again, for both Fermi and Kepler cards, and for both Windows and Linux (when there's a CUDA 4.2 Linux app).

BTW, the performance improvement I'm seeing on my CC2.0 Fermi between running tasks on the CUDA 4.2 and CUDA 3.1 apps varies from ~23% to ~56%:
- PAOLA_3EKO: ~36% faster
- MJHARVEY_MJHXA1: ~56% faster

FAQ's
HOW TO: - Opt out of Beta Tests - Ask for Help
Joined: 9 Aug 11 | Posts: 5 | Credit: 16,225,539 | RAC: 0
OK, I will start using the GTX 670 next week, when the long units are available and it's out of beta. For now I'm testing it on PrimeGrid to see if it's running OK.
©2025 Universitat Pompeu Fabra