
Message boards : Graphics cards (GPUs) : GPUGRID and app_info

CTAPbIi
Message 21618 - Posted: 6 Jul 2011 | 4:42:25 UTC

GPU load varies from 65% to 80% on my OC'd GTX570. I think different WUs use the GPU more or less.

I wonder if the app_info trick exists on GPUGRID, in order to run 2 WUs concurrently and thus maximize the output.

And one more question: does the Linux app still speed up the calculation like it used to? I was away from GPUGRID for some time and now I'm looking to come back :-)

Dagorath
Message 21619 - Posted: 6 Jul 2011 | 5:03:32 UTC - in response to Message 21618.

I get about 85% to 95% usage on my GTX 570, no OC, running Linux with SWAN_SYNC=0. Yes, it seems the usage depends on the app or task.

I haven't done any solid comparisons between OS's but they say the Linux app is 15% faster than the Windows app on Win7.

I've heard it said that 2 tasks running concurrently don't run in parallel, they run serially. If that's true then I don't see much benefit from running 2 tasks concurrently. I could be wrong. I've never tried it.

skgiven (volunteer moderator)
Message 21621 - Posted: 6 Jul 2011 | 8:15:47 UTC - in response to Message 21619.

I've tested the app_info in the past on tasks with various GPU utilization.
By the time you get to about 85% utilization there is little or no point.
When you go up to about 95% you would see an overall loss.
At 65% there should be some gain, but then you have to consider task turn around time and the credit system. The problem with having multiple tasks is that you could gain from running one type, but lose from running the other.
I'm not sure what the ratio of task types is, and the utilization might change when new batches are created.
It may also be the case that different operating systems benefit differently. So for example, there might be less gain on Linux than W7.

CTAPbIi
Message 21623 - Posted: 6 Jul 2011 | 12:40:27 UTC - in response to Message 21619.

I get about 85% to 95% usage on my GTX 570, no OC, running Linux with SWAN_SYNC=0. Yes, it seems the usage depends on the app or task.

I haven't done any solid comparisons between OS's but they say the Linux app is 15% faster than the Windows app on Win7.

I've heard it said that 2 tasks running concurrently don't run in parallel, they run serially. If that's true then I don't see much benefit from running 2 tasks concurrently. I could be wrong. I've never tried it.

I've got a couple of questions:
1. SWAN_SYNC=0 - where should I input that? AFAIK it's on by default in Linux...
2. How can I check GPU load on Linux? AFAIK, the nvclock project has been dead since the G200 series (beta support; my GTX570 never worked properly with it though) and the year 2008 :-(

That's good news that Linux is faster. I need to spend some time on W7 though to find stable clocks/voltage and then flash them, and then - "home, sweet home" :-)

Theoretically speaking, 2 tasks running concurrently will increase GPU load to 100% (I see no reason why not). That will slow down each individual task for sure, but running 2 tasks should increase the overall output.

CTAPbIi
Message 21624 - Posted: 6 Jul 2011 | 12:48:21 UTC - in response to Message 21621.

I've tested the app_info in the past on tasks with various GPU utilization.
By the time you get to about 85% utilization there is little or no point.
When you go up to about 95% you would see an overall loss.
At 65% there should be some gain, but then you have to consider task turn around time and the credit system. The problem with having multiple tasks is that you could gain from running one type, but lose from running the other.
I'm not sure what the ratio of task types is, and the utilization might change when new batches are created.
It may also be the case that different operating systems benefit differently. So for example, there might be less gain on Linux than W7.

Please, please, where can I get that app_info? I want to try it - both on W7 x64 and Linux (Ubuntu 10.04 x64).

That's weird. In my understanding, as you increase GPU load you get more output. That approach works on any other GPU project and I see no reason why it should not work on GPUGRID. But that's theory; I'd like to try it in real life.

So, if you've got that app_info, could you please PM it to me and give the link to where it's been discussed on the forum?

Retvari Zoltan
Message 21625 - Posted: 6 Jul 2011 | 15:52:06 UTC - in response to Message 21623.
Last modified: 6 Jul 2011 | 15:54:24 UTC

Theoretically speaking, 2 tasks running concurrently will increase GPU load to 100% (I see no reason why not). That will slow down each individual task for sure, but running 2 tasks should increase the overall output.

That's correct, but the WU turnaround (return) time is more important in GPUGrid than the overall output.

That's weird. In my understanding, as you increase GPU load you get more output. That approach works on any other GPU project and I see no reason why it should not work on GPUGRID. But that's theory; I'd like to try it in real life.

GPUGrid is different than other GPU projects. Here in GPUGrid a new workunit continues the calculation from where the previous WU finished. Therefore the new workunit depends on the result of the previous workunit; that's why turnaround time is very important, and honored by +50% bonus credit if you return the result within 24 hours (and by +25% bonus credit if you return the result within 48 hours). If you run two workunits simultaneously on the same GPU, your WU turnaround time will miss the 24 hour deadline (with the long workunits), and you will lose the +50% bonus, and receive only the +25% bonus. All in all, there is no point in increasing the overall output by 5-15% if you lose 25% of the credit bonus.
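The bonus trade-off can be put in rough numbers. This is only a sketch: the base credit value and the +10% output gain are assumed figures for illustration, not real GPUGrid numbers.

```python
# Sketch of the credit trade-off; "base" and the +10% output gain are
# assumed numbers for illustration, not real GPUGrid values.
base = 10000.0                       # credit for one WU before bonus

one_at_a_time = base * 1.50          # returned within 24h: +50% bonus
two_at_a_time = base * 1.25 * 1.10   # only +25% bonus, assumed +10% output

print(one_at_a_time, two_at_a_time)
```

Under these assumptions the dropped bonus outweighs the throughput gain, which is the point being made above.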

BTW your WUs are failing because you're overclocking your GTX570 too much. The higher the GPU utilization is, the lower the overclocking can be. GPUs cannot be overclocked for 5-8 hour long GPUGrid tasks as much as for gaming (or for shorter workunits). You should also increase your GPU's fan speed. The lower the GPU temperature is, the more stable the GPU will be.

Dagorath
Message 21626 - Posted: 6 Jul 2011 | 17:54:43 UTC - in response to Message 21623.

I've got couple of questions:
1. SWAN_SYNC=0 - where I should input that? AFAIK it's on in linux by default...


It might be on by default in Linux, I don't know. The stderr on the website for each task shows the value for SWAN_SYNC. If it says there the value is 0 and you have no "export SWAN_SYNC=0" statement anywhere then assume it's 0 by default. I put "export SWAN_SYNC=0" in ~/.bashrc. Note the period, it's a hidden file. I'm the only user on the system so that's adequate for me. If you have multiple users you'll want to put the statement in a script that runs for every user.
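For reference, here's a minimal way to do that (per-user only; this assumes BOINC actually runs under your account):

```shell
# Persist the setting for future shells of this user
echo 'export SWAN_SYNC=0' >> ~/.bashrc

# And set it for the current shell right away
export SWAN_SYNC=0
echo "$SWAN_SYNC"
```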

2. How I can check GPU load on linux? AFAIK, nvclock project is dead since G200 series (beta support, my GTX5275 never worked properly though) and year 2008 :-(


Yah, nvclock doesn't work here either. The nvidia-settings utility doesn't give GPU load either. I've been able to read GPU usage only with the nvidia-smi utility and only with the 270.xx drivers. It doesn't report usage here with the 260.xx driver.
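A guarded way to check utilization with nvidia-smi (assumes a driver recent enough to report it; falls back gracefully when the tool is absent):

```shell
# Print GPU utilization if nvidia-smi is available (270.xx+ drivers report it)
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi -q -d UTILIZATION
else
    echo "nvidia-smi not found"
fi
```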


That's good news that linux is faster. I need to spend some time on w7 though to find stable clocks/voltage and then to flash them and then - "home, sweet home" :-)

Theoretically speaking, 2 tasks running concurrently will increase GPU load to 100% (I see no reason why not). That will slow down each individual task for sure, but running 2 tasks should increase the overall output.


There's only one way to be sure and that's to try it and see. I'm sure skgiven and others have been down that road before so keep in mind what they say when you're testing. And as Retvari said, make sure your tasks return within 24 hrs. if you want the credit bonus. I think a GTX 570 will have no problem returning 2 concurrently running tasks in 24 hrs. but ya never know.

Also, I had troubles with the temperature on my GTX570 when I started running it on Linux. The fan would not speed up as the temp rose so I had to manually force it to run at high speed permanently. That required some research and playing around. I expect you and others will run into the same problem so I wrote up what I did and posted it here on the BOINC dev forum. I recommend getting a handle on that BEFORE you run your first CUDA task on Linux.

CTAPbIi
Message 21627 - Posted: 6 Jul 2011 | 18:38:01 UTC - in response to Message 21625.

That's correct, but the WU turnaround (return) time is more important in GPUGrid than the overall output.

No doubt about return time :-)

GPUGrid is different than other GPU projects. Here in GPUGrid a new workunit continues the calculation from where the previous wu finished. Therefore the new workunit depends on the result of the previous workunit

Not really - MilkyWay is pretty much the same. They use the WU results to calibrate their model, and based on the new model they issue a new batch of WUs.

that's why turnaround time is very important, and honored by +50% bonus credit if you return the result within 24 hours (and by +25% bonus credit if you return the result within 48 hours). If you run two workunits simultaneously on the same GPU, your WU turnaround time will miss the 24 hour deadline (with the long workunits), and you will lose the +50% bonus, and receive only the +25% bonus. All in all, there is no point in increasing the overall output by 5-15% if you lose 25% of the credit bonus.

Let's take the worst case scenario (a theoretical case) when GPU load is 100%. If I'm running a 2nd WU concurrently, the duration for both of them is twice as long as for a single WU. Am I right?

Let's come back to the real world. I finished one WU in 6hrs, so if I'm running 2 WUs it should take me 12hrs to complete both of them. But remember - GPU load is less than 100%, so in reality it will be not 12hrs but, let's say, 9-11hrs. All in all, I'm pretty much OK to be within the 24hr limit to get the +50% bonus. That's why IMHO it makes sense to go with the app_info trick.

In this case the project is OK in terms of return time as well as productivity from this video card.
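That estimate can be sanity-checked with a crude model. The utilization figures below are assumptions, not measurements:

```python
# Crude timing model: two concurrent WUs carry twice the work of one,
# delivered at a higher (assumed) GPU utilization.
single_time_h = 6.0   # hours for one WU running alone (from the post above)
single_load = 0.75    # assumed average GPU load with one WU
paired_load = 0.95    # assumed GPU load with two WUs running concurrently

paired_time_h = 2 * single_time_h * single_load / paired_load
print(round(paired_time_h, 1))  # both WUs finish together, well inside 24h
```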

BTW your WUs are failing because you're overclocking your GTX570 too much. The higher the GPU utilization is, the lower the overclocking can be. GPUs cannot be overclocked for 5-8 hour long GPUGrid tasks as much as for gaming (or for shorter workunits). You should also increase your GPU's fan speed. The lower the GPU temperature is, the more stable the GPU will be.

I was failing due to high OCing, that's correct. It's no big secret that different projects allow different levels of OCing. On PrimeGrid this card has been rock solid stable for 7 months now @880/0.988V, and another two @900/1.0000V.

That task I completed @860/1.0000V - way lower than on PrimeGrid, but that's OK. I'll play around to figure out the balance between higher clocks, voltage and heat on these long-run WUs. While GPU load is way less than 100% I can slightly increase voltage and clock and still get adequate temps. When I'm done I'll flash the cards with those clocks/voltage and move to Linux.

So, my question is: where can I get app_info? Don't get me wrong: I'm NOT trying to BS the project. In fact, I want to do more.

CTAPbIi
Message 21628 - Posted: 6 Jul 2011 | 19:08:08 UTC - in response to Message 21626.

It might be on by default in Linux, I don't know. The stderr on the website for each task shows the value for SWAN_SYNC. If it says there the value is 0 and you have no "export SWAN_SYNC=0" statement anywhere then assume it's 0 by default. I put "export SWAN_SYNC=0" in ~/.bashrc. Note the period, it's a hidden file. I'm the only user on the system so that's adequate for me. If you have multiple users you'll want to put the statement in a script that runs for every user.

Here's the link to my completed task,
but I cannot see any mention of SWAN_SYNC…

I'll put "export SWAN_SYNC=0" in ~/.bashrc when I'm on Linux. Thanks a lot, man :-)

Yah, nvclock doesn't work here either. The nvidia-settings utility doesn't give GPU load either. I've been able to read GPU usage only with the nvidia-smi utility and only with the 270.xx drivers. It doesn't report usage here with the 260.xx driver.

I never heard of this utility. I'll try it for sure. Thanks again :-)
I installed the latest 275.xx driver, so hopefully it will work with this version.

BTW, which driver version is the fastest - 260, 270 or 275?

There's only one way to be sure and that's to try it and see. I'm sure skgiven and others have been down that road before so keep in mind what they say when you're testing. And as Retvari said, make sure your tasks return within 24 hrs. if you want the credit bonus. I think a GTX 570 will have no problem returning 2 concurrently running tasks in 24 hrs. but ya never know.

That's exactly what I want - to try it myself. No doubt skgiven and the other guys did that before, but look - we're all helping science, so we're all scientists of a sort. So let's use the scientific approach and try to reproduce the results :-)

I also think I should be pretty much OK to meet the 24hr limit.

Also, I had troubles with the temperature on my GTX570 when I started running it on Linux. The fan would not speed up as the temp rose so I had to manually force it to run at high speed permanently. That required some research and playing around. I expect you and others will run into the same problem so I wrote up what I did and posted it here on the BOINC dev forum. I recommend getting a handle on that BEFORE you run your first CUDA task on Linux.

That's a nice manual. I tried Coolbits 1, but never 4. Maybe that's why neither fan speed nor GPU OCing ever worked for me. When I select "Thermal Settings", nothing really happens. But I found my own way: find the proper clocks/voltage in Windows using MSI Afterburner, modify the BIOS using NiBiTor and then flash it. BTW, you cannot set the fan speed to less than 40% or greater than 85% on GTX5x0 cards. But starting from NiBiTor version 6.0 you can adjust fan speed in the BIOS as well.

If you need a modded BIOS for your card, just let me know - I'll do it for you.


Dagorath
Message 21629 - Posted: 6 Jul 2011 | 21:12:12 UTC - in response to Message 21628.
Last modified: 6 Jul 2011 | 21:13:05 UTC

BTW, which driver version is the fastest - 260, 270 or 275?


I'm not sure, I've never tested to see which is fastest.

If you need a modded BIOS for your card, just let me know - I'll do it for you.


Thank you for the offer :)

CTAPbIi
Message 21630 - Posted: 6 Jul 2011 | 23:19:42 UTC

OK, I'll use 275.xx drivers, NP at all :-)

Kenneth Larsen
Message 21631 - Posted: 7 Jul 2011 | 8:51:33 UTC - in response to Message 21628.

But I cannot see any mention of SWAN_SYNC…


It only says so if you are running SWAN_SYNC, it doesn't mention anything if not (at least when looking at my results).

Be careful when putting SWAN_SYNC in .bashrc, it will only work if boinc is running as that user.
To set it system-wide, look in /etc/environment or /etc/env.d/
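For example, a system-wide setting could go in /etc/environment. This is a sketch; note there is no `export` keyword in that file, and behaviour may vary by distro:

```shell
# /etc/environment -- read by PAM at login for all users
SWAN_SYNC=0
```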

CTAPbIi
Message 21632 - Posted: 7 Jul 2011 | 12:56:35 UTC - in response to Message 21631.

It only says so if you are running SWAN_SYNC, it doesn't mention anything if not (at least when looking at my results).

Stupid me... That rig is on Win7 now, and that's why SWAN_SYNC is not there :-)

Be careful when putting SWAN_SYNC in .bashrc, it will only work if boinc is running as that user.
To set it system-wide, look in /etc/environment or /etc/env.d/

That's not a problem because there's only one user account on that PC.

BUT: I think it's worth the effort to put that in the FAQ. Maybe it's worth talking to someone about it.

All in all, where can I get an app_info to try?

Dagorath
Message 21633 - Posted: 7 Jul 2011 | 16:16:35 UTC - in response to Message 21632.
Last modified: 7 Jul 2011 | 16:18:08 UTC

Be careful when putting SWAN_SYNC in .bashrc, it will only work if boinc is running as that user.
To set it system-wide, look in /etc/environment or /etc/env.d/

That's not a problem because there's only one user account on that PC.


If you installed BOINC from repositories then it is NOT set up to run on your account. It will run under a special boinc user's account, usually named boinc or something similar. In that case the boinc user will not see the SWAN_SYNC environment variable if you put it in your .bashrc. If you installed BOINC from the Berkeley installer (the .sh script) then putting it in your .bashrc is adequate.

CTAPbIi
Message 21635 - Posted: 7 Jul 2011 | 16:54:14 UTC - in response to Message 21633.

If you installed BOINC from repositories then it is NOT set up to run on your account. It will run under a special boinc user's account, usually named boinc or something similar. In that case the boinc user will not see the SWAN_SYNC environment variable if you put it in your .bashrc. If you installed BOINC from the Berkeley installer (the .sh script) then putting it in your .bashrc is adequate.

Normally I use repos, but BOINC is that rare exception: I download it from Berkeley and then run the .sh script. So it should work for me :-)

Dagorath
Message 21639 - Posted: 8 Jul 2011 | 3:39:17 UTC

Try this message from the Einstein forums. It has an app_info.xml used for running 4 Einstein tasks on a GTX480. It should give you the general idea. Maybe you can modify it to work with GPUgrid.

CTAPbIi
Message 21640 - Posted: 8 Jul 2011 | 4:39:22 UTC - in response to Message 21639.

Try this message from the Einstein forums. It has an app_info.xml used for running 4 Einstein tasks on a GTX480. It should give you the general idea. Maybe you can modify it to work with GPUgrid.

I'll try it, but... different apps take different arguments, and they aren't necessarily the same across projects... I've got app_info files for MilkyWay, PrimeGrid and Collatz, but I'm not sure it will work.

skgiven (volunteer moderator)
Message 21641 - Posted: 8 Jul 2011 | 10:57:53 UTC - in response to Message 21640.
Last modified: 8 Jul 2011 | 14:00:41 UTC

You could try this,

<app_info>
    <app>
        <name>acemd2</name>
    </app>
    <file_info>
        <name>acemdlong_6.15_windows_intel86__cuda31.exe</name>
        <executable/>
    </file_info>
    <file_info>
        <name>cudart32_31_9.dll</name>
        <executable/>
    </file_info>
    <file_info>
        <name>cufft32_31_9.dll</name>
        <executable/>
    </file_info>
    <file_info>
        <name>tcl85.dll</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>acemd2</app_name>
        <version_num>615</version_num>
        <avg_ncpus>0.025</avg_ncpus>
        <max_ncpus>0.050</max_ncpus>
        <flops>1089000000000</flops>
        <plan_class>cuda31</plan_class>
        <coproc>
            <type>CUDA</type>
            <count>0.5</count>
        </coproc>
        <gpu_ram>1280000000</gpu_ram>
        <file_ref>
            <file_name>acemdlong_6.15_windows_intel86__cuda31.exe</file_name>
            <main_program/>
        </file_ref>
        <file_ref>
            <file_name>cudart32_31_9.dll</file_name>
        </file_ref>
        <file_ref>
            <file_name>cufft32_31_9.dll</file_name>
        </file_ref>
        <file_ref>
            <file_name>tcl85.dll</file_name>
        </file_ref>
    </app_version>
</app_info>

Retvari Zoltan
Message 21642 - Posted: 8 Jul 2011 | 18:35:28 UTC - in response to Message 21641.

<name>acemdlong_6.15_windows_intel86__cuda31.exe</name>

There is no ".exe" extension at the end of the filename.

The correct line is:

<name>acemdlong_6.15_windows_intel86__cuda31</name>

Back in February I tried something similar to what you've just posted, and all I got was errors.

PS: also, there are no ".dll" files on Linux, so this app_info.xml is only for Windows.

skgiven (volunteer moderator)
Message 21643 - Posted: 8 Jul 2011 | 20:10:47 UTC - in response to Message 21642.
Last modified: 8 Jul 2011 | 20:11:18 UTC

I did run using an app_info file for a week or more on a dual GPU setup, but that was over 6 months ago. Others also did this. For a short time it was worth it, but that was before 6.14 and only when there were lots of less GPU utilizing tasks around (~50%). I did post a bit about this and sent several PMs about my findings. Others concurred. I did not start a thread about this because using app_info at GPUGrid is definitely not the recommended way to go; by and large the tasks are fast enough. You have to be savvy and hands-on when setting it up, and then the opposite - hands off, just let it run.

You could create a Linux app_info file, but you would have to alter several things. Obviously the app, which is still the 6.14 (cuda31) version. That was not changed as thread priority does not work the same way on Linux, AFAIK. No idea if or when the team will update the Linux app, but given that Linux is faster anyway, I don't see the point in even trying an app_info setup.

PS. You can tell my app_info was Windows only by the app name; acemdlong_6.15_windows_intel86__cuda31 [despite the Windows executable file extension] :))

Retvari Zoltan
Message 21644 - Posted: 8 Jul 2011 | 20:24:47 UTC - in response to Message 21627.

Let's take the worst case scenario (a theoretical case) when GPU load is 100%. If I'm running a 2nd WU concurrently, the duration for both of them is twice as long as for a single WU. Am I right?

You're right. Theoretically. :)

Let's come back to the real world. I finished one WU in 6hrs, so if I'm running 2 WUs it should take me 12hrs to complete both of them. But remember - GPU load is less than 100%, so in reality it will be not 12hrs but, let's say, 9-11hrs. All in all, I'm pretty much OK to be within the 24hr limit to get the +50% bonus.

There is a prerequisite for this scenario: your PC should not keep a spare pair of WUs in its queue for too long, because if those two spare WUs sit in the queue for 12 hours and are then processed for another 12 hours, they will just miss the 24 hour deadline. But if your queue is short and a WU fails, your GPU will run only one WU (or in the worst case: nothing) until a new one (or two) is downloaded.

That's why IMHO it makes sense to go with the app_info trick.

You forget about the reason behind low GPU utilization:
GPUGrid uses both the GPU and the CPU to process a WU. The higher a WU's CPU usage, the lower its GPU usage, because of the heavy data transmission on the PCIe bus. A GIANNI_KKFREE WU (82% GPU usage) uses 5.5 times more CPU than a TONI_AGGsoup WU (98% GPU usage). I learned from experience that I have to overclock a Core2Duo by 33% to get the performance (say 5-10% more GPU usage) of a Core i3 (its integrated PCIe controller is 33% faster than the X48 chipset's). I think the bus between the GPU and the CPU (i.e. the PCIe) is the bottleneck, and it can be overloaded even by processing a single GIANNI_KKFREE-like WU. If this is right, there will be no significant rise in GPU usage from running two of them simultaneously (although there can be a significant rise from running a low and a high GPU utilizing task at the same time, but maybe the lower GPU utilizing task would run slower than expected).
Don't get me wrong, I'm curious about this, but at the same time I'm very skeptical. Let's find out...

Retvari Zoltan
Message 21645 - Posted: 8 Jul 2011 | 20:29:44 UTC - in response to Message 21643.

I did run using an app_info file for a week or more on a dual GPU setup, but that was over 6 months ago. Others also did this. For a short time it was worth it, but that was before 6.14 and only when there were lots of less GPU utilizing tasks around (~50%).

Then maybe I'm wrong about low GPU utilizing tasks overloading the PCIe bus, and in that case it's worth making an app_info.xml.
I'm getting totally confused.

skgiven (volunteer moderator)
Message 21646 - Posted: 8 Jul 2011 | 20:45:50 UTC - in response to Message 21645.
Last modified: 8 Jul 2011 | 20:51:05 UTC

You have to remember this was six months ago, and back then SWAN_SYNC was much more important (a full CPU core/thread vs 2.5% of the total CPU; 16% of an i7 thread). A lot has changed and now running 2 tasks without SWAN_SYNC might be different, but only so long as these tasks are relatively low GPU utilizing tasks. There will be no benefit when just one task is a 95% utilizing task. At 85%, who knows (things have changed, so you would need to check). At 50% (and there is nothing near that) we would benefit, and I would be running on an app_info setup already.

PS. This is definitely for Fermi only; CC1.3 cards will fail if you try this!

--
Double checked, and I did have the .exe in my old app_info file (it might not make any difference in Windows).

Retvari Zoltan
Message 21648 - Posted: 8 Jul 2011 | 22:34:44 UTC - in response to Message 21646.

Double checked, and I did have the .exe in my old app_info file (it might not make any difference in Windows).

The .exe has been omitted since version 6.14.
I know this because I'm using eFMer's Priority.

CTAPbIi
Message 21649 - Posted: 9 Jul 2011 | 1:45:22 UTC - in response to Message 21644.

Nice discussion, guys :-) Right now I'm waiting for a WU to complete, and then I'll try app_info - both with and without the .exe.

You're right. Theoretically. :)

That's good news:-)

There is a prerequisite for this scenario: your PC should not keep a spare pair of WUs in its queue for too long, because if those two spare WUs sit in the queue for 12 hours and are then processed for another 12 hours, they will just miss the 24 hour deadline. But if your queue is short and a WU fails, your GPU will run only one WU (or in the worst case: nothing) until a new one (or two) is downloaded.

Anyway, right now I've got 0 tasks in the queue, so I'm not really worried about that.

You forget about the reason behind low GPU utilization:
GPUGrid uses both the GPU and the CPU to process a WU. The higher a WU's CPU usage, the lower its GPU usage, because of the heavy data transmission on the PCIe bus. A GIANNI_KKFREE WU (82% GPU usage) uses 5.5 times more CPU than a TONI_AGGsoup WU (98% GPU usage). I learned from experience that I have to overclock a Core2Duo by 33% to get the performance (say 5-10% more GPU usage) of a Core i3 (its integrated PCIe controller is 33% faster than the X48 chipset's). I think the bus between the GPU and the CPU (i.e. the PCIe) is the bottleneck, and it can be overloaded even by processing a single GIANNI_KKFREE-like WU. If this is right, there will be no significant rise in GPU usage from running two of them simultaneously (although there can be a significant rise from running a low and a high GPU utilizing task at the same time, but maybe the lower GPU utilizing task would run slower than expected).
Don't get me wrong, I'm curious about this, but at the same time I'm very skeptical. Let's find out...

That's a good point. I was wondering what the hell is going on: CPU and GPU usage are way under 100%, so what's the bottleneck? Now it's clear. If the reason is the PCIe bus, then you're right - there's nothing I can gain from that trick.

On the other hand, PCIe 2.0 x16 is a hellishly fast bus and I'm not really sure it is the problem. But anyway - let's wait 20 minutes and we'll get the answer.

CTAPbIi
Message 21650 - Posted: 9 Jul 2011 | 2:59:37 UTC
Last modified: 9 Jul 2011 | 3:00:29 UTC

OK, I tried both versions (with .exe and without .exe). They deleted everything (including the .exe and all the .dll files) from the BOINC folder, so something is wrong with the app_info.

Guys, don't get me wrong - I'm not complaining, I'm just reporting :-)

skgiven (volunteer moderator)
Message 21653 - Posted: 9 Jul 2011 | 11:10:54 UTC - in response to Message 21650.
Last modified: 9 Jul 2011 | 11:37:34 UTC

I don't know what the problem is, but I guess the issue may be related to the fact that priority is now being controlled at the thread level rather than the process level.

Richard Haselgrove
Send message
Joined: 11 Jul 09
Posts: 1626
Credit: 9,379,166,723
RAC: 18,990,592
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21656 - Posted: 9 Jul 2011 | 12:13:59 UTC - in response to Message 21653.

I don't know what the problem is, but I guess the issue may be related to the fact that priority is now being controlled at the thread level rather than the process level.

Eh?

App_info will work if both the structure and contents of the file are accurate. Just "guessing" the name of a file, library or DLL is never going to work - you need to apply some serious comprehension to the issue too. Then it works - and whatever an application does to control its own thread priority will work just the same, whether launched under app_info or otherwise.

skgiven posted an app_info for Windows yesterday. I haven't tested it, but it looks OK to me. Linux users can study it, see what the basic shape is, and adapt it to their own needs. You need:

<app>
The name GPUGrid uses internally to identify the application. 'Long' and 'normal' length tasks may use different app names.

<file_info>
One section for each executable, library, or other supporting file you're going to use. (You also need to have the actual files themselves, of course!)

<app_version>
A control structure which links the components together and tells BOINC how to use them - e.g. the <coproc><count> values which started this conversation.

Have a look at the app_info documentation. Note in particular the line at the bottom of that page:

Generally this should match the corresponding elements in a scheduler reply message (sched_reply_URL.xml)

If you are already running the project successfully in its normal, automatic download, mode you can read all the information you need to construct an app_info.xml file that will work on your own machine (including the urls for downloading the necessary files) from either sched_reply...xml or client_state.xml
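As a sketch of that approach, the following hypothetical helper pulls out the app names, file names and plan classes an app_info.xml would need. The element layout is a cut-down sample following the BOINC 6.x client_state.xml conventions; check your own file, as newer clients may differ:

```python
import xml.etree.ElementTree as ET

# Cut-down sample of the relevant parts of a BOINC 6.x client_state.xml.
SAMPLE = """<client_state>
  <app><name>acemd_long</name></app>
  <file_info><name>acemdlong_6.15_windows_intel86__cuda31.exe</name></file_info>
  <file_info><name>cudart32_31_9.dll</name></file_info>
  <app_version>
    <app_name>acemd_long</app_name>
    <version_num>615</version_num>
    <plan_class>cuda31</plan_class>
  </app_version>
</client_state>"""


def extract_app_info(xml_text: str) -> dict:
    """Collect the names an app_info.xml would have to reproduce."""
    root = ET.fromstring(xml_text)
    return {
        "apps": [a.findtext("name") for a in root.iter("app")],
        "files": [f.findtext("name") for f in root.iter("file_info")],
        "versions": [
            (v.findtext("app_name"), v.findtext("version_num"),
             v.findtext("plan_class"))
            for v in root.iter("app_version")
        ],
    }
```

Run it against the real client_state.xml (with BOINC stopped) and you have every name and version number needed to fill in the three sections Richard describes.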

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21663 - Posted: 10 Jul 2011 | 11:50:00 UTC - in response to Message 21656.
Last modified: 10 Jul 2011 | 13:43:40 UTC

The app name is wrong:

<app>
<name>acemd2</name>
</app>

should be,

<app>
<name>acemd_long</name>
</app>


Corrected app_info file:

<app_info>
<app>
<name>acemd_long</name>
</app>
<file_info>
<name>acemdlong_6.15_windows_intel86__cuda31.exe</name>
<executable/>
</file_info>
<file_info>
<name>cudart32_31_9.dll</name>
<executable/>
</file_info>
<file_info>
<name>cufft32_31_9.dll</name>
<executable/>
</file_info>
<file_info>
<name>tcl85.dll</name>
<executable/>
</file_info>
<app_version>
<app_name>acemd_long</app_name>
<version_num>615</version_num>
<avg_ncpus>0.025</avg_ncpus>
<max_ncpus>0.050</max_ncpus>
<flops>1089000000000</flops>
<plan_class>cuda31</plan_class>
<coproc>
<type>CUDA</type>
<count>0.5</count>
</coproc>
<gpu_ram>1280000000</gpu_ram>
<file_ref>
<file_name>acemdlong_6.15_windows_intel86__cuda31.exe</file_name>
<main_program/>
</file_ref>
<file_ref>
<file_name>cudart32_31_9.dll</file_name>
</file_ref>
<file_ref>
<file_name>cufft32_31_9.dll</file_name>
</file_ref>
<file_ref>
<file_name>tcl85.dll</file_name>
</file_ref>
</app_version>
</app_info>
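Since a bad app_info.xml makes BOINC delete the project files, it is worth sanity-checking the file for exactly the mismatch discussed here before restarting the client. A minimal sketch (the `check_app_info` helper is illustrative, not part of BOINC) that verifies every <app_name> and <file_ref> has a matching declaration:

```python
import xml.etree.ElementTree as ET


def check_app_info(xml_text: str) -> list:
    """Return a list of problems found; an empty list means consistent."""
    root = ET.fromstring(xml_text)
    problems = []
    app_names = {a.findtext("name") for a in root.iter("app")}
    file_names = {f.findtext("name") for f in root.iter("file_info")}
    for v in root.iter("app_version"):
        # Each <app_version> must point at a declared <app>.
        if v.findtext("app_name") not in app_names:
            problems.append("app_version/app_name %r has no matching <app>"
                            % v.findtext("app_name"))
        # Each <file_ref> must point at a declared <file_info>.
        for ref in v.iter("file_ref"):
            if ref.findtext("file_name") not in file_names:
                problems.append("file_ref %r has no matching <file_info>"
                                % ref.findtext("file_name"))
    return problems


# The mistake skgiven corrected: <app> says acemd2 while <app_version>
# says acemd_long.
BAD = """<app_info>
<app><name>acemd2</name></app>
<file_info><name>tcl85.dll</name></file_info>
<app_version>
  <app_name>acemd_long</app_name>
  <file_ref><file_name>tcl85.dll</file_name></file_ref>
</app_version>
</app_info>"""

GOOD = BAD.replace("acemd2", "acemd_long")
```

This only catches internal inconsistencies; missing files on disk or a wrong plan class will still trip BOINC up.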

Richard Haselgrove
Send message
Joined: 11 Jul 09
Posts: 1626
Credit: 9,379,166,723
RAC: 18,990,592
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21664 - Posted: 10 Jul 2011 | 12:00:48 UTC - in response to Message 21663.

If you're going to change it in one place, you have to change it in the other, too.

The app name is wrong:

<app>
<name>acemd2</name>
</app>

should be,

<app>
<name>acemd_long</name>
</app>


Corrected app_info file:

<app_info>
<app>
<name>acemd_long</name>
</app>
<file_info>
<name>acemdlong_6.15_windows_intel86__cuda31.exe</name>
<executable/>
</file_info>
<file_info>
<name>cudart32_31_9.dll</name>
<executable/>
</file_info>
<file_info>
<name>cufft32_31_9.dll</name>
<executable/>
</file_info>
<file_info>
<name>tcl85.dll</name>
<executable/>
</file_info>
<app_version>
<app_name>acemd_long</app_name>
<version_num>615</version_num>
<avg_ncpus>0.025</avg_ncpus>
<max_ncpus>0.050</max_ncpus>
<flops>1089000000000</flops>
<plan_class>cuda31</plan_class>
<coproc>
<type>CUDA</type>
<count>0.5</count>
</coproc>
<gpu_ram>1280000000</gpu_ram>
<file_ref>
<file_name>acemdlong_6.15_windows_intel86__cuda31.exe</file_name>
<main_program/>
</file_ref>
<file_ref>
<file_name>cudart32_31_9.dll</file_name>
</file_ref>
<file_ref>
<file_name>cufft32_31_9.dll</file_name>
</file_ref>
<file_ref>
<file_name>tcl85.dll</file_name>
</file_ref>
</app_version>
</app_info>

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21665 - Posted: 10 Jul 2011 | 14:07:05 UTC - in response to Message 21664.

Thanks Richard.

If anyone tests this, post your findings.

CTAPbIi
Send message
Joined: 29 Aug 09
Posts: 175
Credit: 259,509,919
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 21670 - Posted: 11 Jul 2011 | 11:16:16 UTC - in response to Message 21665.

Thanks Richard.

If anyone tests this, post your findings.

I'll test it later today
____________

CTAPbIi
Send message
Joined: 29 Aug 09
Posts: 175
Credit: 259,509,919
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 21673 - Posted: 13 Jul 2011 | 23:55:26 UTC

Same story - almost everything is gone from the www.gpugrid.net project folder...
____________

Profile Mad Matt
Send message
Joined: 29 Aug 09
Posts: 28
Credit: 101,584,171
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwat
Message 21833 - Posted: 14 Aug 2011 | 0:28:52 UTC

Probably too late, but maybe helpful for future tests:

Make a copy of all the files needed for the app_info, e.g. cudart32_31_9.dll and so on, put them all into a folder together with the app_info, and copy everything into your GPUGRID folder, overwriting the existing files. Alternatively, you can get the files somewhere from the GPUGRID download server (sorry, but I forgot the link).

Not sure why, but BOINC does not recognize those files as present even if they have the same file names.

Hope this helps.
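That recipe can be scripted. A minimal sketch, assuming BOINC is stopped first so it cannot delete the files again (`stage_files` is a hypothetical helper; the source and project folder paths are yours to fill in):

```python
import shutil
from pathlib import Path


def stage_files(staging_dir, project_dir, names):
    """Copy app_info.xml and its referenced files into the project folder.

    Overwrites any existing files of the same name, as Mad Matt suggests.
    Returns the list of file names copied.
    """
    src = Path(staging_dir)
    dst = Path(project_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for name in names:
        shutil.copy2(src / name, dst / name)  # copy2 keeps timestamps
        copied.append(name)
    return copied
```

Call it with your own paths, e.g. `stage_files("staging", "BOINC/projects/www.gpugrid.net", ["app_info.xml", "tcl85.dll", ...])`, then restart BOINC.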


____________

Profile Mad Matt
Send message
Joined: 29 Aug 09
Posts: 28
Credit: 101,584,171
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwat
Message 21834 - Posted: 14 Aug 2011 | 9:43:30 UTC

Obviously it does not; I tried it myself. But I also tried to translate a working app_info from 6.14 to 6.15, and that did not work either. :(
____________

Post to thread

Message boards : Graphics cards (GPUs) : GPUGRID and app_info
