app_config.xml

Dylan

Message 29247 - Posted: 25 Mar 2013, 20:11:21 UTC

I haven't really gone through this thread, so I don't know if someone has covered this already, but it seems that many workunits use a bit more CPU time than one core or thread. Therefore, I leave an additional core open so that the workunits, whether CPU or GPU based, won't compete for time.


Here is my configuration:

I have 8 threads (an i7-3820: 4 cores with Hyper-Threading) and two GTX 670s.

Furthermore, I tell BOINC to use 75% of the processors, or 5 cores, and 100% CPU time. I also use the SWAN_SYNC variable, which dedicates one core per GPU, so 2 more cores are allocated to the 670s. That should leave one core unused, but BOINC still seems to use it: my total CPU usage is 97%, not the 87.5% it should be with one core free.

Looking in Task Manager, I see that both the GPUGRID workunits and the CPU workunits (World Community Grid's Clean Energy Project) each use somewhere around 13.4-13.8% CPU, which is more than the 12.5% share of one thread on an 8-thread CPU.

To conclude, a one-core-per-workunit configuration, whether the workunit is CPU- or GPU-based, doesn't always guarantee that workunits won't compete for CPU time. Another approach might be needed, such as leaving one core unused so that every workunit gets as much CPU time as it needs.

Sorry if this seemed confusing; if you have questions, please ask and I will try to answer them. I should also add that I didn't check the GPU usage to see whether this arrangement made any difference in GPU performance.
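
For anyone who wants to try reserving a whole core per GPU task through app_config.xml rather than (or in addition to) SWAN_SYNC, a minimal sketch might look like the block below. It uses acemdlong, one of the GPUGrid app names discussed later in this thread, and the values are illustrative rather than tested:

<app_config>
    <app>
      <name>acemdlong</name>
      <gpu_versions>
          <gpu_usage>1.0</gpu_usage>
          <cpu_usage>1.0</cpu_usage> <!-- illustrative: budget a full CPU core per GPU task -->
      </gpu_versions>
    </app>
</app_config>

With <cpu_usage> set to 1.0, the client should budget a full CPU core for each running GPU task, so one fewer CPU task is started alongside it.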
Operator

Message 29248 - Posted: 25 Mar 2013, 20:19:53 UTC

Thanks to both Carlesa25 and JacobKlein I managed to coax my machine with the 2x GTX 590s to process 2 WUs per GPU.

But there is a problem, and so far it seems to have something to do with Heisenberg's uncertainty principle, because it only happens when I'm not looking.

To wit: everything is crunching along just fine with 8 WUs. I log off and come back later to find that several of the previously happy WUs, most of them more than halfway through, have decided they are finished and upload without the 'output file' that then can't be found.

I have gone back to crunching just one WU per GPU until I have time to investigate further.

Hey, it was worth a shot....

Operator
skgiven (Volunteer moderator)

Message 29249 - Posted: 25 Mar 2013, 21:29:16 UTC - in response to Message 29248.  

For stability reasons I almost always tell BOINC to crunch using one or two fewer CPUs than the processor has, and I usually dedicate one to the GPU. So for an 8-thread system with a GPU, I tell BOINC to use 75% of the CPUs (6).

If testing app_config with more than 1 GPUGrid task per GPU, I suggest you don't use the CPU for anything else for a few days, to properly test stability. Then add 2 threads/cores, then another 2, but don't saturate the CPU.
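
For reference, a sketch of the kind of app_config.xml that would run two tasks on each GPU during such a test might look like this (using the acemdlong app name as an example; the gpu_usage and cpu_usage values are illustrative, not a tested recommendation):

<app_config>
    <app>
      <name>acemdlong</name>
      <gpu_versions>
          <gpu_usage>0.5</gpu_usage> <!-- 0.5 of a GPU per task, i.e. 2 tasks per GPU -->
          <cpu_usage>0.5</cpu_usage> <!-- illustrative CPU budget per task -->
      </gpu_versions>
    </app>
</app_config>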
Jacob Klein

Message 29251 - Posted: 25 Mar 2013, 22:03:42 UTC - in response to Message 29249.  
Last modified: 25 Mar 2013, 22:04:11 UTC

For the record, I've had no stability problems saturating the CPU, telling BOINC to use 100% of the processors (all 8/8) while using a <cpu_usage> value of 0.001 for the GPUGrid applications. So, for me, 8 other CPU tasks run just fine alongside the GPUGrid GPU task.

I haven't tried doing multiple GPUGrid tasks on the same GPU yet.
Operator

Message 29252 - Posted: 26 Mar 2013, 1:41:55 UTC - in response to Message 29249.  
Last modified: 26 Mar 2013, 1:43:00 UTC

For stability reasons I almost always tell BOINC to crunch using one or two fewer CPUs than the processor has, and I usually dedicate one to the GPU. So for an 8-thread system with a GPU, I tell BOINC to use 75% of the CPUs (6).

If testing app_config with more than 1 GPUGrid task per GPU, I suggest you don't use the CPU for anything else for a few days, to properly test stability. Then add 2 threads/cores, then another 2, but don't saturate the CPU.



If this was a response to my post (I'm not sure if it was meant for me or JacobKlein): I leave all 24 threads available for GPU support. I realize this makes me a bit of a slacker in the 'multi-tasking' area, but I had some difficulty early on when first running GPUGrid and other projects together, so I just focused on GPUGrid.

I am a bit disappointed that I can't run more than one GPUGrid long WU per GPU without things going sideways, and I'm not familiar enough with the error logs to conduct a post-mortem, so it will take a while to figure out.

Operator
Carlesa25

Message 29260 - Posted: 27 Mar 2013, 11:45:25 UTC - in response to Message 29231.  
Last modified: 27 Mar 2013, 11:59:03 UTC

Carlesa, I know what I'm talking about, and your syntax is wrong. I've done some local testing on my machine to conclusively prove it, too.

Basically, by supplying multiple <name> tags within the same <app> block, the program only applies the requested <gpu_usage> and <cpu_usage> tags to the LAST <name> tag that was provided. The prior <name> tags are essentially ignored. You can prove it by testing this, yourself. Please don't spread incorrect information.

The correct syntax is the one that I've provided, where the different apps each have their own <app> block, and each <app> block only has 1 <name> tag.

Thanks,
Jacob



Hello Jacob: I do not intend to start a pointless argument, simply to confirm that my syntax is correct. You can put multiple <name> tags in app_config.xml and it reads them fine; this is easy to verify by putting in false names, for example.

It works perfectly for me with:

<name>acemd2</name>
<name>acemdshort</name>

I see the changes depending on whether a short task is assigned to one name or the other, and it applies the settings that I have in my app_config.xml.

Let everyone draw their conclusions. Greetings.
Jacob Klein

Message 29262 - Posted: 27 Mar 2013, 13:08:13 UTC - in response to Message 29260.  
Last modified: 27 Mar 2013, 13:16:08 UTC

Carlesa:

This isn't about drawing conclusions. It's about proving that your syntax is wrong, and showing that if you put 2 name tags in a row, the first ones are ignored.

I didn't want to do this, but I feel I must, in the name of science. I encourage you to use the steps here to prove that what I am saying is correct. Here goes:

I am currently processing a GPUGrid.net task, of type "acemdlong". In the BOINC Manager, it currently says:
"Running (0.001 CPUs + 1 NVIDIA GPU (device 0))"
... because of my current app_config.xml file.
... But let's say I wanted to change those settings.

If I change the app_config.xml file in the \projects\www.gpugrid.net directory, and I change it to:
<app_config>
    <app>
      <name>acemdlong</name>
      <name>acemdbeta</name>
      <max_concurrent>9999</max_concurrent>
      <gpu_versions>
          <gpu_usage>1</gpu_usage>
          <cpu_usage>0.002</cpu_usage>
      </gpu_versions>
    </app>
</app_config>
... and close BOINC, and restart BOINC, the end result is:
"Running (0.001 CPUs + 1 NVIDIA GPU (device 0))"
The CPU usage did NOT change, because the syntax is WRONG.

If I change it to:
<app_config>
    <app>
      <name>acemd2</name>
      <name>acemdlong</name>
      <name>acemdbeta</name>
      <max_concurrent>9999</max_concurrent>
      <gpu_versions>
          <gpu_usage>1</gpu_usage>
          <cpu_usage>0.003</cpu_usage>
      </gpu_versions>
    </app>
</app_config>
... and close BOINC, and restart BOINC, the end result is:
"Running (0.001 CPUs + 1 NVIDIA GPU (device 0))"
The CPU usage did NOT change, because the syntax is WRONG.

If I change it to:
<app_config>
    <app>
      <name>acemd2</name>
      <name>acemdbeta</name>
      <name>acemdlong</name>
      <max_concurrent>9999</max_concurrent>
      <gpu_versions>
          <gpu_usage>1</gpu_usage>
          <cpu_usage>0.004</cpu_usage>
      </gpu_versions>
    </app>
</app_config>
... and close BOINC, and restart BOINC, the end result is:
"Running (0.004 CPUs + 1 NVIDIA GPU (device 0))"
The CPU usage DID CHANGE, but only because the last <name> tag was my app's name.

If I change it to:
<app_config>
    <app>
      <name>acemdlong</name>
      <max_concurrent>9999</max_concurrent>
      <gpu_versions>
          <gpu_usage>1</gpu_usage>
          <cpu_usage>0.005</cpu_usage>
      </gpu_versions>
    </app>
</app_config>
... and close BOINC, and restart BOINC, the end result is:
"Running (0.005 CPUs + 1 NVIDIA GPU (device 0))"
The CPU usage DID CHANGE, because the syntax is correct.

Here's the CORRECT SYNTAX for the GPUGrid.net project's app_config.xml file, per the documentation found here:
http://boinc.berkeley.edu/wiki/Client_configuration#Application_configuration
<app_config>
   <app>
      <name>acemdbeta</name>
      <max_concurrent>9999</max_concurrent>
      <gpu_versions>
          <gpu_usage>1</gpu_usage>
          <cpu_usage>0.001</cpu_usage>
      </gpu_versions>
    </app>
   <app>
      <name>acemdlong</name>
      <max_concurrent>9999</max_concurrent>
      <gpu_versions>
          <gpu_usage>1</gpu_usage>
          <cpu_usage>0.001</cpu_usage>
      </gpu_versions>
    </app>
   <app>
      <name>acemd2</name>
      <max_concurrent>9999</max_concurrent>
      <gpu_versions>
          <gpu_usage>1</gpu_usage>
          <cpu_usage>0.001</cpu_usage>
      </gpu_versions>
    </app>
   <app>
      <name>acemdshort</name>
      <max_concurrent>9999</max_concurrent>
      <gpu_versions>
          <gpu_usage>1</gpu_usage>
          <cpu_usage>0.001</cpu_usage>
      </gpu_versions>
    </app>
</app_config>


... and then you can adjust the <max_concurrent> and <gpu_usage> and <cpu_usage> values to whatever you desire, for each of the 4 different apps.

I still recommend the values:
<max_concurrent>9999</max_concurrent>
<gpu_usage>1</gpu_usage>
<cpu_usage>0.001</cpu_usage>

Regards,
Jacob
Richard Haselgrove

Message 29263 - Posted: 27 Mar 2013, 13:34:09 UTC - in response to Message 29262.  

Exactly. The whole purpose of the nested XML syntax is to make it absolutely clear which settings apply to which application. Everything from

<app>
...
</app>

has to be bracketed together - with a name, and one or more settings.

If you want to apply more settings to more applications, you have to add a second <app> ... </app> block.

You can test that even more easily by using BOINC v7.0.54 or later and re-reading the config files while BOINC is running - no need to keep stopping and restarting BOINC.
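
(For command-line use: assuming boinccmd is installed alongside the client, something like

boinccmd --read_cc_config

should trigger the same re-read without restarting; whether app_config.xml is picked up by that call depends on the client version, so verify it on yours.)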
Jacob Klein

Message 29266 - Posted: 27 Mar 2013, 20:06:58 UTC - in response to Message 29246.  
Last modified: 27 Mar 2013, 20:46:14 UTC

I have tested it already. While watching Task Manager's Details tab, sorted by CPU descending, I see that the acemd process is never starved for CPU, even though other running CPU tasks are (at times) starved.

So the ACEMDs are still grabbing CPU time. Now the important question is: are they able to do this just at the right time to avoid the GPU running dry, i.e. does GPU performance suffer (I suppose so) and by how much? The latter would be needed for people to decide if this trade-off is worth it for them.

Watching GPU utilization, maybe averaging it, would be a nice indicator. Although the best performance indicator would be WU runtimes for similar tasks.

MrS



So there is a trade-off, as you say. I think the best way to determine how to maximize GPUGrid performance, while not sacrificing CPU task performance, is to run GPUGrid tasks at various levels of BOINC's "% of the processors" setting and then compare task run times. I haven't done this yet, and might not get around to it, as I believe that keeping the CPU busy outweighs slightly overloading it and slightly sacrificing % GPU load.


Richard,
I have completed some actual performance testing using my eVGA GTX 660 Ti 3GB FTW. Here is what I found:

========================================================================
Running with no other tasks (every other BOINC task and project was suspended, so the single GPUGrid task was free to use up the whole CPU core):

Task: 6669110
Name: I23R54-NATHAN_dhfr36_3-17-32-RND2572_0
URL: http://www.gpugrid.net/result.php?resultid=6669110
Run time (sec): 19,085.32
CPU time (sec): 19,043.17

========================================================================
Running at <cpu_usage>0.001</cpu_usage>, BOINC set at 100% processors, along with a full load of other GPU/CPU tasks:

Task: 6673077
Name: I11R21-NATHAN_dhfr36_3-18-32-RND5041_0
URL: http://www.gpugrid.net/result.php?resultid=6673077
Run time (sec): 19,488.65
CPU time (sec): 19,300.91

Task: 6674205
Name: I25R97-NATHAN_dhfr36_3-13-32-RND4438_0
URL: http://www.gpugrid.net/result.php?resultid=6674205
Run time (sec): 19,542.35
CPU time (sec): 19,419.97

Task: 6675877
Name: I25R12-NATHAN_dhfr36_3-19-32-RND6426_0
URL: http://www.gpugrid.net/result.php?resultid=6675877
Run time (sec): 19,798.77
CPU time (sec): 19,606.33
========================================================================

So, as expected, there is some minor CPU contention while under full load, but not much (task run time is maybe ~3% slower). It's not affected much because the ACEMD process actually runs at a higher priority than other BOINC task processes and is therefore never starved for CPU; it is likely only slightly delayed by contention during CPU process context switching.

This, to me, is conclusive reasoning to keep my CPUs loaded, by using the following settings:
BOINC setting "% of the Processors": 100%
GPUGrid app_config <cpu_usage> setting: 0.001

Kind regards,
Jacob
skgiven (Volunteer moderator)

Message 29267 - Posted: 27 Mar 2013, 22:26:15 UTC - in response to Message 29266.  

Your findings apply to NATHAN_dhfr36 WU's...
Jacob Klein

Message 29269 - Posted: 27 Mar 2013, 23:51:32 UTC - in response to Message 29267.  

Ah, so are you saying I should test against the Short Run units also, to see if I get similar results?
Jacob Klein

Message 29273 - Posted: 28 Mar 2013, 15:38:33 UTC - in response to Message 29267.  

Your findings apply to NATHAN_dhfr36 WU's...

Well, to further test against short units, I changed my account settings to only receive "Short 4.2" units.
Here are the results:

========================================================================
Running with no other tasks (every other BOINC task and project was suspended, so the single GPUGrid task was free to use up the whole CPU core):

Task: 6678769
Name: I1R110-NATHAN_RPS1_respawn3-10-32-RND4196_2
URL: http://www.gpugrid.net/result.php?resultid=6678769
Run time (sec): 8,735.43
CPU time (sec): 8,710.61

Task: 6678818
Name: I1R42-NATHAN_RPS1_respawn3-12-32-RND1164_1
URL: http://www.gpugrid.net/result.php?resultid=6678818
Run time (sec): 8,714.75
CPU time (sec): 8,695.18

========================================================================
Running at <cpu_usage>0.001</cpu_usage>, BOINC set at 100% processors, along with a full load of other GPU/CPU tasks:

Task: 6678817
Name: I1R436-NATHAN_RPS1_respawn3-13-32-RND2640_1
URL: http://www.gpugrid.net/result.php?resultid=6678817
Run time (sec): 8,949.63
CPU time (sec): 8,897.27

Task: 6679874
Name: I1R414-NATHAN_RPS1_respawn3-7-32-RND6785_1
URL: http://www.gpugrid.net/result.php?resultid=6679874
Run time (sec): 8,828.17
CPU time (sec): 8,786.48

Task: 6679828
Name: I1R152-NATHAN_RPS1_respawn3-5-32-RND8187_0
URL: http://www.gpugrid.net/result.php?resultid=6679828
Run time (sec): 8,891.22
CPU time (sec): 8,827.11
========================================================================

So, again, as expected, there is only slight contention while under full CPU load, because the ACEMD process actually runs at a higher priority than other BOINC task processes and is therefore never starved for CPU; it is likely only slightly delayed by contention during CPU process context switching.

If you'd like me to perform some other test, please let me know, and I'll see if I can try it.

To maximize my crunching efforts, I still plan on keeping my settings of:
BOINC setting "% of the Processors": 100%
GPUGrid app_config <cpu_usage> setting: 0.001

Kind regards,
Jacob
skgiven (Volunteer moderator)

Message 29275 - Posted: 28 Mar 2013, 20:10:07 UTC - in response to Message 29273.  

Thanks Jacob,
That's a fairly solid set of results for Nathan's present WU's, Long and Short.

I'm not seeing any other task types at present, so it acts as a good guide, but Noelia, Gianni and Toni WU's would also need to be tested as and when they arrive.

The driver might have a slight influence (up to 3% with older drivers tending to be faster - more noticeable on lesser systems).
A lesser processor would also have a negative impact as would slower RAM and disk I/O. The CPU type and the way the CPU handles multiple threads might also have a small impact, as might the PCIE bus. Together these things could add up (5 or 10%). More of a concern if you have multiple GPU's in the one system.

From what the researchers said about the present apps I expected something similar, but wouldn't be surprised if there is a slightly increased difference (5 to 8%) when new tasks arrive.

The super-scalar cards are now better catered for, so perhaps there is more of a difference with the older CC2.0 cards?

Last time I looked there was still a large (11%+) performance difference between XP/Linux and Windows Vista/7/8. On the newer Windows servers (2008/2008R2/2012) it was less, around 3 to 8%.

The CPU apps that are running can have a massive or minimal impact on GPU performance. You really don't want to be running 8 climate models and a GPU task, or perhaps a full suite of WCG's CEP tasks.
Jacob Klein

Message 29276 - Posted: 28 Mar 2013, 20:44:47 UTC - in response to Message 29275.  

Thanks Jacob,
That's a fairly solid set of results for Nathan's present WU's, Long and Short.

I'm not seeing any other task types at present, so it acts as a good guide, but Noelia, Gianni and Toni WU's would also need to be tested as and when they arrive.
You're welcome. When I see tasks of type Noelia, Gianni, or Toni, I'll see if I can try to isolate some results on them, but I'm betting the results will be the same.

The CPU apps that are running can have a massive or minimal impact on GPU performance. You really don't want to be running 8 climate models and a GPU task, or perhaps a full suite of WCG's CEP tasks.
I don't agree. From what I am seeing, performance-wise, it doesn't matter what type of CPU tasks are running; the Below Normal Windows priority given to the ACEMD process ensures that it gets the CPU whenever it wants it. If the CPU tasks somehow ran at a Windows priority higher than Low, that might make a difference. Are you suggesting I should set up a test where I run a GPUGrid GPU task alongside 8 CPU tasks of a certain type? I really don't think it would make a difference.

-- Jacob
skgiven (Volunteer moderator)

Message 29277 - Posted: 28 Mar 2013, 20:58:48 UTC - in response to Message 29276.  

The aforementioned CPU task types are quite extreme; they require massive I/O, so much so that they interfere with their own performance, never mind other tasks. The Run Time to CPU Time delta tends to expand rapidly with such tasks, and in the past this has impacted GPU performance for several projects (albeit more so for the OpenCL apps) and other CPU projects. Anything that hammers the CPU kernel, memory and hard drive will have a negative impact on almost any crunching. It's my understanding that disk I/O is OS-controlled and high priority, irrespective of the app requiring the disk I/O.
Jacob Klein

Message 29279 - Posted: 28 Mar 2013, 21:42:32 UTC - in response to Message 29277.  
Last modified: 28 Mar 2013, 21:54:58 UTC

You really don't want to be running 8 climate models and a GPU task, or perhaps a full suite of WCG's CEP tasks.

The aforementioned CPU task types are quite extreme; they require massive I/O, so much so that they interfere with their own performance, never mind other tasks. The Run Time to CPU Time delta tends to expand rapidly with such tasks, and in the past this has impacted GPU performance for several projects (albeit more so for the OpenCL apps) and other CPU projects. Anything that hammers the CPU kernel, memory and hard drive will have a negative impact on almost any crunching. It's my understanding that disk I/O is OS-controlled and high priority, irrespective of the app requiring the disk I/O.
Could you please be more specific? I'd like to test your theory if I could.

You mention "8 climate models".. Does that mean project "climateprediction.net" and if so does it mean any specific app? Or are all of their apps intensive?

You mention "WCG CEP"... Does that mean project "World Community Grid" and if so does it mean app "cep2" which is "The Clean Energy Project - Phase 2"?
skgiven (Volunteer moderator)

Message 29282 - Posted: 29 Mar 2013, 13:50:24 UTC - in response to Message 29279.  

This is going way off topic, and is more to do with other projects than GPUGRID!

My theory was fact, but it might no longer be the case; apps and WU's get improved all the time, such as the apps here, which now perform much better on the super-scalar cards. It used to be the case that CC2.0 was much better than CC2.1; that's no longer so, and when app development/testing is complete this would need to be looked at again.

You mention "8 climate models".. Does that mean project "climateprediction.net" and if so does it mean any specific app? Or are all of their apps intensive?

Yes, I mean 8 climate models from climateprediction.net. I don't know what apps/models are currently available, if any, and I've recently had mixed results running WU's (lots of failures), as have most others. Note there are some site issues.

You mention "WCG CEP"... Does that mean project "World Community Grid" and if so does it mean app "cep2" which is "The Clean Energy Project - Phase 2"?

Yes, phase 2 of the CEP supported by WCG. I haven't picked up any CEP2 tasks recently, so they might be out of work? You could check/ask in their forums. It's possible that recent development in the app has changed things, but in the past, when I ran lots of these tasks, it was noticeable (I could hear the hard drive clicking away). WCG did a lot to support that project. You have to specifically select the project and set the number of tasks to run (the default is 1). Plenty of RAM, and LAIM (Leave Applications In Memory) switched on, are recommended.
See The Clean Energy Project - Phase 2 technical project details, the CEP2 Thread List, and the app_config CEP2 thread.

Note that your LGA1366 CPU and motherboard support 3 memory channels, which helps with such projects, as do fast memory and a SATA 3 drive. In many ways your rig's design is better than an i7-3770K based system (the main disadvantage is CPU/motherboard performance/power).

Good luck,
Jacob Klein

Message 29283 - Posted: 29 Mar 2013, 14:09:47 UTC - in response to Message 29282.  
Last modified: 29 Mar 2013, 14:10:08 UTC

You're right, we're off topic, and I apologize. I greatly appreciate your input, and if I manage to perform some tests on those applications, I'll reply privately and consider making a public post.

Back to the original "app_config.xml" topic, until I have any conclusive proof/reason to believe otherwise...

To maximize my crunching efforts, I'm using and recommending:

BOINC settings:
% of the processors: 100%
% CPU time: 100%

app_config settings for each of the 4 GPUGrid projects:
<max_concurrent>9999</max_concurrent>
<gpu_usage>1</gpu_usage>
<cpu_usage>0.001</cpu_usage>

Best of luck,
Jacob
nanoprobe

Message 29284 - Posted: 29 Mar 2013, 22:25:43 UTC - in response to Message 29283.  

FWIW, you can do away with the max_concurrent tag when running GPU tasks; it's only needed when running CPU tasks. The gpu_usage tag controls how many tasks run on each GPU.
Jacob Klein

Message 29285 - Posted: 29 Mar 2013, 23:33:51 UTC - in response to Message 29284.  
Last modified: 29 Mar 2013, 23:45:29 UTC

Through my testing, I have been able to conclude that it DOES have an effect even on GPU tasks, i.e. if you specify a <max_concurrent> of 1 for a given GPU app type, it will only run 1, regardless of having a <gpu_usage> of 0.5 or less. So the setting DOES apply to GPU apps; feel free to test and verify.
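
As an illustration of that interaction (illustrative values, again using the acemdlong app name), a block like the one below should run only one task at a time, even though a <gpu_usage> of 0.5 would otherwise allow two per GPU:

<app_config>
    <app>
      <name>acemdlong</name>
      <max_concurrent>1</max_concurrent> <!-- caps this app at 1 running task -->
      <gpu_versions>
          <gpu_usage>0.5</gpu_usage>
          <cpu_usage>0.001</cpu_usage>
      </gpu_versions>
    </app>
</app_config>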

I understand that it still appears to work even without setting a <max_concurrent> value, but... the documentation doesn't say that it's optional.

If that tag is optional, or if there are default values for tags not specified by the user, then the documentation should say so.

As it is, it doesn't say, and so I assume that it'll use whatever the project chooses as its default, unless the user specifies otherwise. And so, I explicitly specify otherwise, 9999.

http://boinc.berkeley.edu/wiki/Client_configuration#Application_configuration

- Jacob