Message boards :
Number crunching :
Does anyone have an app_config.xml that works?
**Rick** · Joined: 20 Jan 09 · Posts: 52 · Credit: 2,518,707,115 · RAC: 0
Does anyone have an app_config.xml that works for GPUGRID? I'm running multiple machines with dual GTX690's, dual GTX590's, and two mixed machines with GTX690/660Ti, and GTX590/GTX660Ti. I would like to experiment with .5/gpu configurations to see if I could squeeze more work out of all this computing power. Thanks in advance to anyone/everyone who responds. Rick
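For reference, the kind of 0.5-per-GPU entry being asked about looks like the sketch below. This is a minimal illustration, not a recommendation: the app name `acemdlong` is the long-run app discussed later in this thread, and the numeric values are placeholders to experiment with.

```xml
<app_config>
  <app>
    <name>acemdlong</name>
    <gpu_versions>
      <!-- 0.5 tells the BOINC scheduler it may run 2 tasks per GPU -->
      <gpu_usage>0.5</gpu_usage>
      <!-- reserve a full CPU core per GPU task -->
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

Save it as app_config.xml in the GPUGRID project directory and use "Read config files" in the BOINC Manager to apply it without restarting.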
Joined: 9 May 13 · Posts: 171 · Credit: 4,594,296,466 · RAC: 127
**Beyond** · Joined: 23 Nov 08 · Posts: 1112 · Credit: 6,162,416,256 · RAC: 0
> I would like to experiment with .5/gpu configurations to see if I could squeeze more work out of all this computing power.

Hi Rick. We've been through this extensively. Short answer: not a good idea.
**MrS** · Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
See this post from just this morning - it doesn't seem like a good idea on typical long-run WUs. And you wouldn't want to run short-runs on this hardware as long as any long-runs are left.

MrS
Scanning for our furry friends since Jan 2002
**Rick** · Joined: 20 Jan 09 · Posts: 52 · Credit: 2,518,707,115 · RAC: 0
> Try this one:

Thanks for the help, CaptinJack, and for the comments from the rest of you. I tried the app_config from your link and it does work. Based on your comments, rather than try to run multiple GPUGrid tasks per GPU, I'm trying to run GPUGrid at ~0.7 and other projects such as SETI, Einstein, and Milkyway at ~0.3. We'll see if this works, but with the first 24 hrs. under my belt, it seems to be worth the experiment.

Rick
**skgiven** · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
Don't overburden the GPU with other apps or the time the GPUGrid WU takes will increase dramatically. GPU usage is a bad indicator; go by runtime, tasks returned per day, and credit. Overall you might return more work or get better GPU credit running 2 tasks (1 GPUGrid and 1 from another light project), but when I tried more than 1 WU from another project the GPUGrid WU times increased massively (two or three times as long). You should also note that if you over-commit the CPU, GPU performance may drop sharply. If, for example, you wanted to run POEM WUs + GPUGrid WUs on the same card, don't use more than 50% of the CPU cores. If you start seeing failures, all gains will be lost.

FAQ's
HOW TO:
- Opt out of Beta Tests
- Ask for Help
**Rick** · Joined: 20 Jan 09 · Posts: 52 · Credit: 2,518,707,115 · RAC: 0
I can confirm: a 30% reduction in GPU core allocation (from 1.0 to 0.7) resulted in a 100% increase in the time required to complete a GPUGRID WU. Basically the same results on my GTX 590s, GTX 660 Tis, and GTX 690s. I can't explain why, but that's what happened in my tests.

Rick
**MrS** · Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Hi Rick, what exactly did you run alongside GPU-Grid set to 0.7 GPU? Note that this value is only used to help the BOINC scheduler decide which WUs to launch; it does not affect actual GPU usage. So if you run e.g. GPU-Grid at 0.7 and Einstein at 0.3, they'll probably share the GPU 0.5 : 0.5. Or each might get as many time slices as the other, but their length would depend on the WU and app, so sharing would be asynchronous in some hard-to-predict way. Hence just looking at the GPU-Grid time is only half of the story: if the throughput of the other project increased, it might still be worth it.

MrS
Scanning for our furry friends since Jan 2002
**Rick** · Joined: 20 Jan 09 · Posts: 52 · Credit: 2,518,707,115 · RAC: 0
Wow, I was not expecting that behavior (50/50) in BOINC with a 0.7 GPU allocation to GPUGrid. I've run GPUGrid with SETI, Einstein, and Milkyway. All these other projects seem to benefit from sharing a GPU; that's why I was trying the same for GPUGrid. I'll continue to experiment, but it doesn't look good. Thanks for the explanation, MrS.

Regards, Rick
**MrS** · Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Actually I suspect this 50/50 only applies to running 2 similar WUs at once; even for different WUs from GPU-Grid it probably differs. The problem is that currently not even the OS can enforce GPU usage. A thread is allowed to send its instructions to the GPU, the GPU processes them, sends the results back... and only then does the GPU become available for other work. That's why the display becomes choppy under heavy load: these individual "work chunks" take too long. This is the "time per step" reported by GPU-Grid. And when not even the OS has proper scheduling and multitasking for GPUs yet, what could BOINC do about it? I suppose mixing one GPU-Grid and one Einstein will be fine, as GPU utilization is significantly higher this way. Although the new Nathan KIXors load the GPU so heavily that running anything alongside them can not possibly help.

MrS
Scanning for our furry friends since Jan 2002
**Rick** · Joined: 20 Jan 09 · Posts: 52 · Credit: 2,518,707,115 · RAC: 0
OK, I'm getting WUs again, but I give up on figuring out what new app names to use in the app_config.xml file. Can anyone help me? Thanks in advance, Rick
**Retvari Zoltan** · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0
> Ok, I'm getting wu's again, BUT I give up on what new app names to use in the app_config.xml file. Can anyone help me? Thanks in advance, Rick

acemd.800-55.exe
Joined: 24 Oct 11 · Posts: 4 · Credit: 433,680,314 · RAC: 0
You can get the app_name by looking in the BOINC Manager Event Log for "Starting task" events; the name following "using" is the app_name.
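A small sketch of the tip above: pull the app_name out of a "Starting task" Event Log line. The sample line (timestamp, task name, version number) is made up for illustration; the exact wording of your log may differ slightly.

```python
import re

def app_name_from_event(line):
    """Return the app_name that follows 'using' in a 'Starting task' event, or None."""
    m = re.search(r"Starting task \S+ using (\S+)", line)
    return m.group(1) if m else None

# Illustrative Event Log line (task name and version are invented examples):
example = ("07-May-2013 10:15:01 | GPUGRID | Starting task "
           "NATHAN_example_task_0 using acemdlong version 618")
print(app_name_from_event(example))  # -> acemdlong
```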
**Rick** · Joined: 20 Jan 09 · Posts: 52 · Credit: 2,518,707,115 · RAC: 0
That was what I thought, but when I put "acemd.800-55.exe" or "acemd.800-55" or "acemd_800_55" and about 10 other variations, BOINC did not recognize the app name. Curiously, the new long app seems to accept the values set for the old "acemdlong" app name in my app_config.xml file.

Rick
**nenym** · Joined: 31 Mar 09 · Posts: 137 · Credit: 1,429,587,071 · RAC: 0
You must use the `<name>` from the `<app>` section (i.e. acemdlong, acemdbeta), not the `<name>` of the application version. See http://boinc.berkeley.edu/trac/wiki/ClientAppConfig. If you want to change anything depending on the application name, you must use app_info.xml.
**Jacob** · Joined: 11 Oct 08 · Posts: 1127 · Credit: 1,901,927,545 · RAC: 0
> That was what I thought, but when I put "acemd.800-55.exe" or "acemd.800-55" or "acemd_800_55" and about 10 other variations, BOINC did not recognize the app name. Couriously, the new long app seems to accept the variables of the old "acemdlong" app name in my app_config.xml file. Rick

I get the app names by looking in the client_state.xml file, within the `<app>` blocks.
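The lookup described above can be scripted. A minimal sketch, assuming a standard client_state.xml layout (`<app>` blocks containing `<name>` and `<user_friendly_name>` children); the hardcoded path in the comment is the Windows default and will differ on other platforms.

```python
import xml.etree.ElementTree as ET

def list_app_names(client_state_path):
    """Return (name, user_friendly_name) for every <app> block in client_state.xml."""
    tree = ET.parse(client_state_path)
    return [(app.findtext("name"), app.findtext("user_friendly_name"))
            for app in tree.getroot().iter("app")]

# Default location on Windows is C:\ProgramData\BOINC\client_state.xml;
# adjust the path for your platform, then:
#   for name, friendly in list_app_names(path):
#       print(name, "-", friendly)
```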
**Rick** · Joined: 20 Jan 09 · Posts: 52 · Credit: 2,518,707,115 · RAC: 0
> That was what I thought, but when I put "acemd.800-55.exe" or "acemd.800-55" or "acemd_800_55" and about 10 other variations, BOINC did not recognize the app name. Couriously, the new long app seems to accept the variables of the old "acemdlong" app name in my app_config.xml file. Rick

Thanks, Jacob. That's exactly what I was looking for. Regards, Rick
**Jacob** · Joined: 11 Oct 08 · Posts: 1127 · Credit: 1,901,927,545 · RAC: 0
No problem. For full disclosure, here is my GPUGrid app_config.xml file (including tons of personal notes in XML comments at the top that briefly describe my testing and results over the course of the last several months):

```xml
<!-- GPUGrid.net -->
<!-- GPU tasks do properly use higher process and thread priorities, compared to CPU tasks. -->
<!-- GPU tasks sometimes use CPU, sometimes don't, based on the type of GPU the task runs on. -->
<!-- Recommend 1 gpu_usage, if user also has CPU projects. -->
<!-- Recommend 0.001 cpu_usage, but might try 0.5, since if 2 are running, I KNOW the Kepler is using CPU -->
<!-- Also might try 1 cpu_usage, so as not to overcommit per Task Manager's CPU Utilization -->
<!-- Although x-at-a-time provides the best per-task throughput, it ends up using a lot more CPU -->
<!-- Switching to 0.4995, such that if an 8-CPU MT job is running, 2 GPUGrid jobs and 1 0.001-GPU job can all run together -->
<!-- 0.5 cpu_usage so that 2+ GPU tasks will intentionally reserve at least 1 core -->
<!-- 1.0 cpu_usage because, when SETI tasks run on 3rd GPU reserving a core, they still aren't getting enough CPU -->
<app_config>
  <!-- Short runs (2-3 hours on fastest card) -->
  <app>
    <name>acemdshort</name>
    <max_concurrent>0</max_concurrent>
    <gpu_versions>
      <gpu_usage>1</gpu_usage>
      <cpu_usage>1</cpu_usage>
    </gpu_versions>
  </app>
  <!-- Long runs (8-12 hours on fastest card) -->
  <app>
    <name>acemdlong</name>
    <max_concurrent>0</max_concurrent>
    <gpu_versions>
      <gpu_usage>1</gpu_usage>
      <cpu_usage>1</cpu_usage>
    </gpu_versions>
  </app>
  <!-- ACEMD beta version -->
  <app>
    <name>acemdbeta</name>
    <max_concurrent>0</max_concurrent>
    <gpu_versions>
      <gpu_usage>1</gpu_usage>
      <cpu_usage>1</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```
©2025 Universitat Pompeu Fabra