GT240 and Linux: niceness and overclocking

Lem Novantotto
Message 20399 - Posted: 11 Feb 2011, 19:53:18 UTC

Hi guys!
I've just set up a system with an NVIDIA GT240 GPU and a dual-core CPU, running 64-bit Ubuntu Linux 10.04.

The NVIDIA driver is 195.36.24. A 6.12 app has been downloaded, which I think is right for this CUDA card, and progress is climbing by about 0.1% every minute, so the whole workunit should take about 60,000 seconds on average.

I'd like to let you know about something I've noticed, and to ask you a couple of questions.

First, my "report". LOL.
The GPUGRID WU now being crunched wants 0.16 CPUs (though in practice it doesn't go beyond 2-3% load) plus the GPU. The app runs with a niceness of 10 by default, whilst the other CPU BOINC apps (boincsimap, WCG...) run with a niceness of 19. I've found that 10 versus 19 is not enough of a gap: when the CPU is saturated, even by "idle" applications running at 19, the GPU almost stops working, its temperature falls, and the GPUGRID app becomes many times slower. Renicing the GPUGRID app to -1 has given it back its normal speed.
I have not tested any other values for now.

So my first question is: is there a simple way to tell BOINC to set the niceness of GPUGRID apps to -1?

My second question is about overclocking the GPU. I know about the

Option "Coolbits" "1"

line in the Device section of /etc/X11/xorg.conf.
But it only gives the chance to overclock the core clock and the memory clock, while I happen to know that it is the shader frequency that matters most. How could I raise it?
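
For reference, a minimal sketch of such a Device section (the Identifier here is only an example; match it to whatever is already in your xorg.conf):

Section "Device"
    Identifier "Device0"          # example name; use your existing one
    Driver     "nvidia"
    Option     "Coolbits" "1"     # unlocks the core/memory clock controls in nvidia-settings
EndSection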

Thanks in advance for everything,
and bye.

skgiven (Volunteer moderator)
Message 20400 - Posted: 11 Feb 2011, 22:30:08 UTC - in response to Message 20399.  
Last modified: 11 Feb 2011, 22:39:51 UTC

If you free up a CPU core, use swan_sync=0, and report tasks immediately, it should help a lot:

FAQ: Best configurations for GPUGRID
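
For a client launched by hand from a shell, setting the variable can be as simple as the sketch below (~/BOINC is an example data directory; note that package installs start BOINC as a service, which does not read your shell environment - more on that later in this thread):

# sketch for a manually launched client; ~/BOINC is an example data dir
export SWAN_SYNC=0
cd ~/BOINC && ./boinc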

I don't know of a way to only overclock the shaders from within Linux.

From what I read, the problem is that nice/renice settings tend to be lost on restart, but I read about a method over at WCG that might stick. Unfortunately I'm not a Linux expert and I cannot test it at the minute (I don't even have a Linux system right now). Have a look, and if you can work something out, post it up so others can benefit. This is worth a read.

If anyone has definite answers to these problems please post your methods.

Good luck,

Lem Novantotto
Message 20402 - Posted: 11 Feb 2011, 23:25:51 UTC - in response to Message 20400.  
Last modified: 11 Feb 2011, 23:42:30 UTC

> If you free up a CPU core, use swan_sync=0, and report tasks immediately, it should help a lot:


First of all, thanks a lot for your support! :)

Uhm... I think... no, I definitely do not want to waste 98% of a CPU thread (I have two cores without hyper-threading) if I can get the exact same GPU efficiency through a niceness adjustment (verified), while happily crunching two other CPU tasks that will only be a tiny bit slower than usual.


> I don't know of a way to only overclock the shaders from within Linux.


I suspected and feared as much. :(
I'll keep searching for a while, but I think I'm going to surrender.


> From what I read, the problem is that nice/renice settings tend to be lost on restart


Sure, but I've already put this line in /etc/rc.local:

renice -1 $(pidof acemd2_6.12_x86_64-pc-linux-gnu__cuda)

and this line in /etc/crontab:

*/5 * * * * root renice -1 $(pidof acemd2_6.12_x86_64-pc-linux-gnu__cuda) > /dev/null 2>&1

which is quite enough for me for now.
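
A slightly more defensive variant (just a sketch; the script name and path are hypothetical) exits quietly when the app isn't running, so there is nothing for cron to silence:

#!/bin/sh
# /usr/local/bin/renice-gpugrid (hypothetical) - call this from cron
# instead of the inline command; does nothing if the app is not running
PID=$(pidof acemd2_6.12_x86_64-pc-linux-gnu__cuda) || exit 0
renice -1 $PID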

Thanks again. :)
Bye.

Kirby54925
Message 20404 - Posted: 12 Feb 2011, 11:06:15 UTC

Nope, that didn't work for me. I tried changing the niceness to -1 and then let Rosetta@home run on all four cores of my i5 750, but Rosetta@home effectively shut out the GPUGrid application (no meaningful work was being done by the GPU). This happened even with the Rosetta@home apps running at a niceness of 19 and GPUGrid running at -1.

Lem Novantotto
Message 20405 - Posted: 12 Feb 2011, 13:20:15 UTC - in response to Message 20404.  

> Nope, that didn't work for me. I tried changing the niceness to -1 and then let Rosetta@home run on all four cores of my i5 750, but Rosetta@home effectively shut out the GPUGrid application (no meaningful work was being done by the GPU). This happened even with the Rosetta@home apps running at a niceness of 19 and GPUGrid running at -1.


Sorry about that. But you're dealing with a GTX570 (fine card!) and the 6.13 app, aren't you? Maybe that makes the difference.

The niceness trick is actually working for me with boincsimap 5.10 and WCG (FightAIDS@Home 6.07 and Help Conquer Cancer 6.08) on the CPU side.

You said Rosetta... it works for me with Rosetta Mini 2.17 too.

However, my next try, probably tomorrow, will be to test the newest 270.18 NVIDIA driver and see what happens with the 6.13 GPUGRID app (someone is getting fine results even with a GT240 and 6.13).

Bye.

skgiven (Volunteer moderator)
Message 20406 - Posted: 12 Feb 2011, 16:07:22 UTC - in response to Message 20405.  

When I use swan_sync=0 and free up a CPU core on my GT240s, performance now improves by around 7.5% (on a Phenom II 940, compared to running 4 CPU tasks and not using swan_sync). It used to be higher, but recent tasks seem less reliant on the CPU (the project sets the GPU task priority to below normal, while the CPU tasks get a lower priority still: low). I'm using the 6.12 app. The 6.13 app is substantially slower for the GT240 cards, and while that might have changed, I doubt it. I have not tested the 270 driver, as I don't have a Linux platform, but also because none of the 260.x drivers I previously tested offered any improvement for the 6.13 app, and some caused my cards to drop their speed. I would be very reluctant to install Linux just to test the 270.18 beta for a GT240, but let us know how you get on should you choose to (I suggest you don't if you are unsure how to uninstall it and revert to your present driver).

CPU usage depends on the GPU:
If, for example, you have a GT240 and a Q6600, and it takes 20 h to complete one GPUGRID task using, say, 350 s of CPU time, then the total CPU time needed to support a GTX470 would be about 4 times that, as a GTX470 would do 4 similar tasks in 20 h. It now appears more important to use swan_sync=0 and free up a CPU core/thread for high-end GPUs, and less so for entry-level GPUs.
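In rough numbers, that is 4 × 350 s ≈ 1,400 s of CPU time to keep a GTX470 fed over the same 20 h, versus 350 s for the GT240.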

With high-end CPUs this is not too much of a problem: you just need 1 core/thread out of 6 to 12 cores/threads, and if you have a GTX570 or GTX580 the choice is clear. If I had a single system with a dual-core CPU and an entry-level GPU, I think I would be more likely to crunch on both CPU cores, unless that caused the GPU tasks to extend beyond 24 h.

Kirby54925
Message 20408 - Posted: 12 Feb 2011, 18:34:56 UTC

To elucidate further: I am currently using Linux Mint 10 with kernel version 2.6.35-25-generic. My GTX 570 is on the latest stable Linux driver, version 260.19.36. And yes, I am using version 6.13 of the GPUGrid CUDA app. It certainly would be nice if the Rosetta app running on that fourth core slowed down a little so that the GPUGrid app could get some CPU time in.

skgiven (Volunteer moderator)
Message 20414 - Posted: 13 Feb 2011, 10:50:54 UTC - in response to Message 20408.  

If that were my system, I would be inclined to do some calculations:
How much quicker would your GTX570 be if you freed up one of those CPU cores, and how much faster would the other 3 CPU cores run Rosetta?
(Compare Elapsed Time against CPU Time in the task details.)

Carlesa25
Message 20417 - Posted: 13 Feb 2011, 12:02:55 UTC - in response to Message 20414.  

Hi. I change the priority of the GPU tasks through the "System Monitor" GUI in Ubuntu 10.10 each time a new task loads, i.e. every 5 to 8 hours (on my GTX295); it's no problem or nuisance.

Personally, I move the two GPU tasks from the default priority of 10 that GPUGRID starts them at to 0 when I'm working normally on the PC, and to -10 during the hours when I'm not using it; even at the higher priority the overall responsiveness of the computer (i7-930, 6 GB RAM) doesn't suffer. Greetings.

Lem Novantotto
Message 20418 - Posted: 13 Feb 2011, 12:24:46 UTC - in response to Message 20400.  
Last modified: 13 Feb 2011, 12:25:52 UTC


> I don't know of a way to only overclock the shaders from within Linux.


Found this possible workaround (not yet tested): http://www.nvnews.net/vbulletin/showthread.php?t=158620

Bye.

Saenger
Message 20420 - Posted: 13 Feb 2011, 16:23:13 UTC - in response to Message 20417.  

> Hi. I change the priority of the GPU tasks through the "System Monitor" GUI in Ubuntu 10.10 each time a new task loads, i.e. every 5 to 8 hours (on my GTX295); it's no problem or nuisance.
>
> Personally, I move the two GPU tasks from the default priority of 10 that GPUGRID starts them at to 0 when I'm working normally on the PC, and to -10 during the hours when I'm not using it; even at the higher priority the overall responsiveness of the computer (i7-930, 6 GB RAM) doesn't suffer. Greetings.

Same here, except on my GT240 it's every 28 h ±5 h, depending on the WU.
All the stuff with swan_sync, freeing of a core or such doesn't change anything on this machine, it's just a smokescreen to pretend giving clues.
Changing the priority from 10 to 0 or even -3 increases the crunch speed big time; it's the only solution for my GT240 under Linux.

Fortunately Einstein now provides a reliable application for CUDA crunching under Linux that does worthy science as well. So every other day in the evening I manually stop Einstein, download a fresh GPUGrid WU, manually set it to -3, let it crunch for the next ~28 h, set GPUGrid to NNW (No New Work) asap, and set Einstein working again by hand once the GPUGrid task is through.

Unfortunately, sometimes Linux decides to set the nice factor back to 10 during crunching. I don't know why or when; it looks unpredictable, and so I lose precious crunching time because of the app's stubbornness in not doing what I want. I would very much appreciate a setting in my account or in BOINC (or in Ubuntu, if there is a more permanent way of doing it outside System Monitor) that would keep the app at the desired nice level.

Lem Novantotto
Message 20422 - Posted: 13 Feb 2011, 17:33:15 UTC - in response to Message 20420.  
Last modified: 13 Feb 2011, 17:34:09 UTC

> I would very much appreciate a setting in my account or in BOINC (or in Ubuntu, if there is a more permanent way of doing it outside System Monitor) that would keep the app at the desired nice level.


I have put this line in my /etc/crontab:

*/5 * * * * root renice -1 $(pidof acemd2_6.12_x86_64-pc-linux-gnu__cuda) > /dev/null 2>&1

So every 5 minutes a cron job renices the task named acemd2_6.12_x86_64-pc-linux-gnu__cuda to -1 (if it exists and its niceness differs; otherwise it does nothing).

Modify the line according to your app name (6.13). You'll probably find the proper name by executing (as root!):

# ls /var/lib/boinc-client/projects/www.gpugrid.net/ | grep ace

You can also choose another niceness, if -1 doesn't satisfy you. :)
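
To verify that the renice actually took effect, something like this (adjust the app name to yours) shows the live nice value:

# pidof finds the PID by full command name; the NI column is the niceness
ps -o pid,ni,comm -p $(pidof acemd2_6.13_x86_64-pc-linux-gnu__cuda)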

HTH.
Bye.

Saenger
Message 20423 - Posted: 13 Feb 2011, 18:00:35 UTC - in response to Message 20422.  

>> I would very much appreciate a setting in my account or in BOINC (or in Ubuntu, if there is a more permanent way of doing it outside System Monitor) that would keep the app at the desired nice level.


> I have put this line in my /etc/crontab:

> */5 * * * * root renice -1 $(pidof acemd2_6.12_x86_64-pc-linux-gnu__cuda) > /dev/null 2>&1

It seems to work fine - thanks a lot!

I still have to switch manually between Einstein and GPUGrid, as otherwise I won't make the deadline here if BOINC switches between the apps, but that's nothing GPUGrid can do anything about (besides setting the deadline to the needed 48 h); it's a BOINC problem.

skgiven (Volunteer moderator)
Message 20424 - Posted: 13 Feb 2011, 18:38:01 UTC - in response to Message 20420.  

Saenger,
> All the stuff with swan_sync, freeing of a core or such doesn't change anything on this machine, it's just a smokescreen to pretend giving clues.

Yet another misleading and disrespectful message!

Your 98.671 ms per step performance for a GIANNI_DHFR1000 task is exceptionally poor for a GT240.

Using the recommended config with swan_sync and the 6.12 app, I get 22.424 ms per step (and that's on Vista, which is over 11% slower than Linux).

Lem Novantotto, thanks for the Linux tips.
That hardware overclock method should work for your GT240, as it was tested on a GT220. Several users at GPUGrid have performed hardware OCs for Linux in the past. If it's any help, I tend to leave the core clock and voltage at stock and only OC the shaders to around 1600 MHz, usually 1599 MHz (this is stable on the 6 GT240 cards I presently use). Others can OC to over 1640 MHz, but it depends on the GPU.

Saenger
Message 20425 - Posted: 13 Feb 2011, 18:53:45 UTC - in response to Message 20424.  
Last modified: 13 Feb 2011, 18:54:09 UTC

> Saenger,
>> All the stuff with swan_sync, freeing of a core or such doesn't change anything on this machine, it's just a smokescreen to pretend giving clues.
>
> Yet another misleading and disrespectful message!
>
> Your 98.671 ms per step performance for a GIANNI_DHFR1000 task is exceptionally poor for a GT240.
>
> Using the recommended config with swan_sync and the 6.12 app, I get 22.424 ms per step (and that's on Vista, which is over 11% slower than Linux).

Why do you ignore my messages?
I'm using this stupid swan_sync thingy; it's no f***ing use.
I've tried to "free a whole CPU" for it; the only effect was an idle CPU.

So don't talk to me about misleading and disrespectful!

skgiven (Volunteer moderator)
Message 20427 - Posted: 13 Feb 2011, 19:32:27 UTC - in response to Message 20425.  

...and yet there is no mention of swan_sync in your task result details.

If it were in use, the result details would say
SWAN: Using synchronization method 0
and you would not have an idle CPU!

For example,
# Total amount of global memory: 497745920 bytes
# Number of multiprocessors: 12
# Number of cores: 96
SWAN: Using synchronization method 0
# Time per step (avg over 795000 steps): 22.424 ms
# Approximate elapsed time for entire WU: 44848.953 s
called boinc_finish

</stderr_txt>
]]>

Validate state: Valid
Claimed credit: 7491.18171296296
Granted credit: 11236.7725694444
Application version: ACEMD2: GPU molecular dynamics v6.12 (cuda)

Your configuration is more suited to running Einstein and CPU tasks than GPUGrid tasks, so that is what you should do. What is the point in messing about every other day to run a different project at half efficiency or less?

Saenger
Message 20429 - Posted: 13 Feb 2011, 20:52:15 UTC - in response to Message 20427.  
Last modified: 13 Feb 2011, 20:54:45 UTC

> ...and yet there is no mention of swan_sync in your task result details.

I don't know how this stupid swan_sync stuff is supposed to work; it's your invention, not mine.

As I posted in this post 66 days ago, and before that as well 82 days ago, and as I just tested again, my SWAN_SYNC is "0":

saenger@saenger-seiner-64:~$ echo $SWAN_SYNC
0


So if your precious swan_sync isn't working with my WUs, as you claim, it's not my fault.

Kirby54925
Message 20430 - Posted: 14 Feb 2011, 4:03:55 UTC

I'm beginning to suspect that the reason swan_sync isn't working is that the environment variable is associated with the wrong user. GPUGrid tasks don't run at the user level; rather, they run as the boinc user.
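
One way to test that suspicion (a sketch; it assumes the client process is simply named boinc, as in the packaged install) is to read the running daemon's environment straight out of /proc:

# /proc/<pid>/environ is NUL-separated, hence the tr;
# no output means the daemon never saw SWAN_SYNC
sudo cat /proc/$(pidof boinc)/environ | tr '\0' '\n' | grep SWAN_SYNC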

skgiven (Volunteer moderator)
Message 20431 - Posted: 14 Feb 2011, 7:14:01 UTC - in response to Message 20429.  
Last modified: 14 Feb 2011, 7:23:41 UTC

Using Linux is not easy, especially if you use several versions. As you know, you can add export SWAN_SYNC=0 to your .bashrc file, but that is easier said than done, and it depends on how/where you install BOINC. With the 10.10 versions it is especially difficult; when I tried, the repository only had the 260 driver, and some of the familiar commands did not work.
If you can't overclock or tune the fans properly, and you have niceness/swan_sync problems, the lure is not so strong - but this is down to a lack of Linux knowledge/detailed instructions.

Kirby54925
Message 20433 - Posted: 14 Feb 2011, 10:38:19 UTC - in response to Message 20431.  

I installed BOINC using the package manager. Rather than adding
export SWAN_SYNC=0
to my .bashrc file, I added it to /etc/bash.bashrc instead. I changed it because, looking back at all of the tasks I've done, even though I set swan_sync in my .bashrc file, the SWAN synchronization message has never shown up in any of them. That tells me the GPUGrid task is not picking up the environment variable set in .bashrc. Perhaps placing it in /etc/bash.bashrc will help.
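
One caveat worth flagging: bashrc files are only read by interactive shells, so a client started as a system service will see neither of them. A possible alternative (a sketch, assuming the Debian/Ubuntu boinc-client package, whose init script sources /etc/default/boinc-client) is to export the variable there and restart the service:

# append the export to the file the init script sources at daemon start
echo 'export SWAN_SYNC=0' | sudo tee -a /etc/default/boinc-client
sudo /etc/init.d/boinc-client restart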