
Message boards : Number crunching : Python apps for GPU hosts 4.03 (cuda1131) using a LOT of CPU

lohphat
Message 59124 - Posted: 18 Aug 2022 | 9:35:11 UTC

My recent WUs are using 60-100% CPU on a Ryzen 9 3900X (12 cores), running BOINC 7.20.2 on Windows 11 (64 GB DDR4), with the task using 7 GB.

The GPU (GTX 980 Ti) is using 4 GB of its 6 GB of local VRAM, but the CPU_0 and Copy graphs are spiky: running low (20%), then spiking to 80% every 7 seconds.

KAMasud
Message 59128 - Posted: 18 Aug 2022 | 14:35:09 UTC

Same on my six-core Intel. The WU is consuming five cores.

Ian&Steve C.
Message 59129 - Posted: 18 Aug 2022 | 17:12:41 UTC - in response to Message 59128.

this is normal for the Python tasks. they use a lot of CPU resources.

lohphat
Message 59131 - Posted: 18 Aug 2022 | 23:24:32 UTC - in response to Message 59129.

Well then the status text should be updated to set expectations.

Resources
0.983 CPUs + 1 NVIDIA GPU

Traditionally this SEEMED to mean one CPU core, not 98% of the whole CPU.

csbyseti
Message 59274 - Posted: 19 Sep 2022 | 16:05:52 UTC

Started GPUGrid again after a long pause and got 4.03 Python apps for GPU hosts (cuda1131) on my RTX 3080.

But these WUs use 60% of the CPU cores and put only 7% load on the GPU.

This is not a GPU WU; it's a CPU WU which is wasting power on a GPU.

Ian&Steve C.
Message 59275 - Posted: 19 Sep 2022 | 16:50:41 UTC - in response to Message 59274.

Started GPUGrid again after a long pause and got 4.03 Python apps for GPU hosts (cuda1131) on my RTX 3080.

But these WUs use 60% of the CPU cores and put only 7% load on the GPU.

This is not a GPU WU; it's a CPU WU which is wasting power on a GPU.


it uses both. but you need a powerful CPU to properly feed a GPU. my system has 2x RTX 3060 and each GPU sees 25-100% load.

csbyseti
Message 59279 - Posted: 20 Sep 2022 | 8:16:04 UTC

OK, you mean a 3900X is too slow to feed an RTX 3080.

Perhaps your RTX 3060 is slow enough to show some GPU load.

kksplace
Message 59282 - Posted: 20 Sep 2022 | 12:11:32 UTC

As another reference, I have an i7-7820X (8 cores, 16 threads) overclocked to 4.4 GHz with an EVGA 3080 (memory clock set to +500, no other overclock) running 2x of these WUs. The GPU is loaded between 50 and 85% (with rare dips down to 35%). (Linux Mint OS.)

Ian&Steve C.
Message 59283 - Posted: 20 Sep 2022 | 14:34:38 UTC - in response to Message 59279.

OK, you mean a 3900X is too slow to feed an RTX 3080.

Perhaps your RTX 3060 is slow enough to show some GPU load.


well my 3060 is vastly outproducing your 3080 (and everyone else's), so maybe I'm on to something?

switch to Linux, and use a CPU that's strong enough. you'll get the best production running multiples so that you can increase GPU utilization, but you'll need enough CPU, system memory, and GPU memory to handle more tasks. my 24-core CPU feeds 3x tasks to each of my 3060s. effective task times are 4-4.3 hrs each.

you'll be able to run 2x tasks on your 3080, but you probably won't be able to run 3. with its higher core count, it will want to reserve more memory than my 3060, and even the 12GB model wouldn't be enough for 3 tasks. IMO the power of a 3080 goes to waste: "slower" lower-end cards are just as productive because you're ultimately CPU-bound.
____________

gemini8
Message 59300 - Posted: 23 Sep 2022 | 8:06:53 UTC - in response to Message 59283.

Good morning folks.

my 24-core CPU feeds 3x tasks to each of my 3060s. effective task times are 4-4.3 hrs each.

You're talking about six Pythons running on your system, right?
Do you use four threads per workunit, or is it actually eight threads?
When I run a Python with eight threads on my Ryzen 7 2700 with GTX 1080, the CPU isn't at its full potential. I just switched to only four threads to see what happens.
I also run other GPU work alongside GPUGrid to get more stable GPU temperatures, as the GPU isn't fully used either.
Greetings, Jens

Ian&Steve C.
Message 59301 - Posted: 23 Sep 2022 | 13:07:28 UTC - in response to Message 59300.
Last modified: 23 Sep 2022 | 13:15:10 UTC

Good morning folks.
my 24-core CPU feeds 3x tasks to each of my 3060s. effective task times are 4-4.3 hrs each.

You're talking about six Pythons running on your system, right?
Do you use four threads per workunit, or is it actually eight threads?
When I run a Python with eight threads on my Ryzen 7 2700 with GTX 1080, the CPU isn't at its full potential. I just switched to only four threads to see what happens.
I also run other GPU work alongside GPUGrid to get more stable GPU temperatures, as the GPU isn't fully used either.


Yes, 6 total pythons running on the system. 3x on each GPU.

I’m not running any other projects on this system, only GPUGRID Python, so I haven’t changed the CPU used value from the default. Since no other work is running changing that value will have no effect.

This is a common misconception about this setting. You cannot control how much CPU is used by an application through this setting. The application will use what it needs regardless. All this setting does is tell BOINC how many threads to set aside or reserve for this task. Your system isn’t using less CPU for the Python tasks by lowering this value. You’re just allowing more work from other projects to run.

Each Python task will spawn 32 multiprocessing "spawn" processes, plus n more processes from the main run.py program, where n is the number of cores. If you have enough cores, each process will run on a separate thread; if not, they will get timesliced by the OS scheduler and tetris'd in with the other processes. No setting in BOINC can change this.
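A minimal sketch of that oversubscription pattern (illustrative only, not GPUGRID's actual run.py; the worker function and counts here are assumptions):

# Python: the app sizes its own worker pool, regardless of how many
# CPUs BOINC has "reserved" for the task.
import multiprocessing as mp

def rollout(i):
    # stand-in for one worker's share of the computation
    return i * i

if __name__ == "__main__":
    mp.set_start_method("spawn")   # spawned workers, as the tasks use
    n_workers = 32                 # fixed by the app, not by BOINC
    with mp.Pool(processes=n_workers) as pool:
        total = sum(pool.map(rollout, range(n_workers)))
    print(total)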

Aurum
Message 59302 - Posted: 23 Sep 2022 | 15:46:48 UTC - in response to Message 59283.

my 3060 is vastly outproducing your 3080 (and everyone else's)
I'm up for the challenge :-)
What figure of merit are you using to decide the winner?

Aurum
Message 59303 - Posted: 23 Sep 2022 | 15:52:40 UTC - in response to Message 59283.

my 24-core CPU feeds 3x tasks to each of my 3060s.
How do you get GG to send you 3 WUs per GPU???
I have no control over how many WUs GG sends me; it's 2 per GPU even when I only want one. What's the trick?

Keith Myers
Message 59304 - Posted: 23 Sep 2022 | 17:03:14 UTC - in response to Message 59303.

Use an app_config.xml file.

<app_config>
    <app>
        <name>acemd3</name>
        <gpu_versions>
            <gpu_usage>1.0</gpu_usage>
            <cpu_usage>1.0</cpu_usage>
        </gpu_versions>
    </app>
    <app>
        <name>acemd4</name>
        <gpu_versions>
            <gpu_usage>1.0</gpu_usage>
            <cpu_usage>1.0</cpu_usage>
        </gpu_versions>
    </app>
    <app>
        <name>PythonGPU</name>
        <gpu_versions>
            <gpu_usage>0.33</gpu_usage>
            <cpu_usage>3.0</cpu_usage>
        </gpu_versions>
    </app>
</app_config>
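(The file goes in the project's directory under the BOINC data directory, e.g. projects/www.gpugrid.net, and takes effect after Options → Read config files in BOINC Manager; paths assume a standard install.)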

Ian&Steve C.
Message 59305 - Posted: 23 Sep 2022 | 17:12:08 UTC - in response to Message 59303.

I'm up for the challenge :-)
What figure of merit are you using to decide the winner?


I guess total RAC per GPU? or shortest effective task completion time? my longest task times are ~13 hrs, but I'm running 3x so it completes a task about every 4.3 hrs, excluding early completions. I haven't seen anyone able to beat that. at bare minimum, if I were to only get full-run tasks, each of my GPUs can do about 580,000/day. and based on current trends, maybe up to ~800,000/GPU including the early-completion bonus tasks.

these are not normal tasks. there are a lot of factors that determine overall production. not all run for the same length of time, but all are awarded the same credit. so the daily reward can vary a lot depending on how many short-running tasks you're lucky enough to get. so I've really only been paying attention to how long the longest full-run tasks take, excluding early completions. if you look for the longest-running tasks in your list it will give you an idea of how long the longest ones are.

the other factor is CPU power. these tasks really are primarily CPU tasks with a small GPU component. that's why a lower-end GPU pairs nicely with a beefy CPU. CPU power is the ultimate bottleneck. even when the CPU isn't maxed out, you can start to hit a limit in how many processes the CPU can handle when it doesn't have enough threads to service them all. if you were to run 3x on your 18-core Intel system on a single GPU, the CPU would have to juggle about 150 running processes, timeslicing them into your 36 available threads, not even accounting for other tasks or processes running.

another factor is what other projects, if any, you are running and how much of the CPU is available. these tasks will spin up more processes than you have threads available. so if you're running any other work, it will fight for priority with the other projects and likely hurt your production. that's why my system is dedicated to these Python tasks and does nothing else.

you're free to try, but I don't see your systems being able to overtake mine with the less powerful Intel CPUs you have. look at the last few days' production numbers: my single system with 2x 3060s has outproduced all of yours combined. if you stop running other projects you can probably overtake me with all of them together, but not with any single one, unless you put together a different system.

How do you get GG to send you 3 WUs per GPU???
I have no control over how many WUs GG sends me; it's 2 per GPU even when I only want one. What's the trick?


probably my custom BOINC client, which gives me better control over how many tasks a project sends. you can't run more than 2x on any of your GPUs anyway: not enough VRAM. yes, your 3080 Ti has the same 12GB as my 3060, but a 3080 Ti requests more VRAM than a 3060 does because of its larger number of cores, so 3x on a 3080 Ti would exceed the available VRAM where it doesn't on a 3060.

you could "trick" GG into sending you more tasks with a stock BOINC client by making it look like you have more than one GPU. but that is another can of worms that requires additional configuration to prevent tasks from running on the "fake" GPU. it's possible, but it won't be an elegant solution and would likely have impacts on other projects.


gemini8
Message 59306 - Posted: 23 Sep 2022 | 20:07:44 UTC - in response to Message 59301.

Yes, 6 total pythons running on the system. 3x on each GPU.

I’m not running any other projects on this system, only GPUGRID Python, so I haven’t changed the CPU used value from the default. Since no other work is running changing that value will have no effect.

This is a common misconception about this setting. You cannot control how much CPU is used by an application through this setting. The application will use what it needs regardless. All this setting does is tell BOINC how many threads to set aside or reserve for this task. Your system isn’t using less CPU for the Python tasks by lowering this value. You’re just allowing more work from other projects to run.

Thanks for your answer.
Even if your explanation is technically much more precise than the idea of using some number of threads per workunit, setting aside some threads to keep CPU capacity reserved for Python work is what I'm doing to be able to do other things on my system, which utterly lacks the kind of GPUs yours features. Running two Pythons beside two Milkyway workunits was pushing the VRAM limits of the GPU, and at one point a lot of the Milkyway tasks errored out, probably because of too little GPU memory left beside the Pythons. Since then I've been running one Python and one other GPU task side by side without further problems.
Maybe I'll have to adjust the number of CPU threads again until I get some sort of 'best' performance out of my system.
And maybe I'll just use another system for GPUGrid Python. I ran some tasks on an i7-6700K combined with a 1060 3GB some time ago. I don't know if that still fits (possibly not), so I'm prepared to swap a 1050 Ti into the system.
Thanks again for giving me additional ideas for this kind of Boinc work!
Thanks again for giving me additional ideas for this kind of Boinc work!
Greetings, Jens

Aurum
Message 59313 - Posted: 25 Sep 2022 | 15:00:46 UTC

Bug: PythonGPU does not know how to share two GPUs. If the CPU is 18c/36t, it shares all 36 threads between two WUs but will not start a third because of a lack of CPU threads. Fine. But the problem is that it assigns both WUs to GPU d0 and ignores the second GPU d1. Maybe this is not actually a problem if both WUs can never need the GPU at the same time, but it seems that at some point over the long course of completing a PythonGPU WU they will. It should use the next available GPU, or at least assign one WU to d0 and the second to d1.

This is another example of how denying us any control over how many WUs get downloaded is inefficient. With two GPUs, GG insists on DLing four WUs when only two can run. Those WUs could be running on another computer rather than wasting half a day idle. This may be harder to remedy than the other case, where only a single PythonGPU WU is allowed to run on a computer. It would really be donor-friendly to give us the ability to specify the number of WUs we want DLed for a given preferences group, as many other BOINC projects do. It just needs to be turned on.

Ian&Steve C.
Message 59314 - Posted: 25 Sep 2022 | 15:38:04 UTC - in response to Message 59313.
Last modified: 25 Sep 2022 | 15:48:54 UTC

You CAN do that. You just don’t know how. It has nothing to do with GPUGRID, it’s in how you configure the BOINC settings.

If you want to run two tasks on the same GPU, but from different projects, never having two GPUGRID tasks on the same GPU? Easy.

Set the app_config for GPUGRID to 0.6 gpu_usage.
Set the app_config for the other project to 0.4 gpu_usage.

That way it will mix the projects, one from each on one GPU. 0.6+0.4=1. But 0.6+0.6>1 so it won’t start two from GPUGRID on the same GPU. It will go to the next GPU with open resources.
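As a concrete sketch (the second project and its app name are placeholders; check the real app name in that project's client_state.xml entries):

app_config.xml in projects/www.gpugrid.net:

<app_config>
    <app>
        <name>PythonGPU</name>
        <gpu_versions>
            <gpu_usage>0.6</gpu_usage>
            <cpu_usage>4.0</cpu_usage>
        </gpu_versions>
    </app>
</app_config>

app_config.xml in the other project's directory (Einstein@Home shown as an example):

<app_config>
    <app>
        <name>hsgamma_FGRPB1G</name>
        <gpu_versions>
            <gpu_usage>0.4</gpu_usage>
            <cpu_usage>1.0</cpu_usage>
        </gpu_versions>
    </app>
</app_config>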

Ian&Steve C.
Message 59315 - Posted: 25 Sep 2022 | 15:45:51 UTC - in response to Message 59313.

It would really be donor-friendly to give us the ability to specify the number of WUs we want DLed for a given preferences group, as many other BOINC projects do. It just needs to be turned on.


I’ve not seen this functionality on any other BOINC project. They all go by your local cache (number of days) setting in BOINC itself. What project lets you explicitly specify the number of tasks to download?


Aurum
Message 59319 - Posted: 25 Sep 2022 | 17:10:55 UTC - in response to Message 59314.

You CAN do that. You just don’t know how. It has nothing to do with GPUGRID, it’s in how you configure the BOINC settings.

If you want to run two tasks on the same GPU, but from different projects, never having two GPUGRID tasks on the same GPU? Easy.

Set the app_config for GPUGRID to 0.6 gpu_usage.
Set the app_config for the other project to 0.4 gpu_usage.

That way it will mix the projects, one from each on one GPU. 0.6+0.4=1. But 0.6+0.6>1 so it won’t start two from GPUGRID on the same GPU. It will go to the next GPU with open resources.
You did not read what I actually wrote before making your snide remark. That will not fix anything.

Aurum
Message 59320 - Posted: 25 Sep 2022 | 17:12:25 UTC - in response to Message 59315.

It would really be donor-friendly to give us the ability to specify the number of WUs we want DLed for a given preferences group, as many other BOINC projects do. It just needs to be turned on.


I’ve not seen this functionality on any other BOINC project. They all go by your local cache (number of days) setting in BOINC itself. What project lets you explicitly specify the number of tasks to download?

WCG and LHC have it.

Ian&Steve C.
Message 59321 - Posted: 25 Sep 2022 | 17:50:05 UTC - in response to Message 59319.
Last modified: 25 Sep 2022 | 17:58:23 UTC

You did not read what I actually wrote before making your snide remark. That will not fix anything.


I read it. Maybe you didn’t explain the problem well enough or include enough relevant details about your current configuration.

GPU assignment happens through BOINC. It has nothing to do with the science app. The application gets assigned a GPU and sticks to it; it cannot just “decide” to use a different GPU if the pre-selected one is already in use by another process. This is the case for all projects, not something specific to GPUGRID, so it’s a little strange that you would think GPUGRID could somehow act this way when no other project does.

If you want the Python tasks to use separate GPUs, you can employ a strategy like I outlined in my previous post. Or just reconfigure it to use 1 task per GPU.

Not sure what issue you’re having with CPU resources. In earlier testing with an 8-core Intel CPU, I had no problem running 3 tasks on a single GPU. If my 8-core CPU had enough resources, surely your 18-core one does as well. So it’s most likely down to your BOINC settings and the other projects being run.

Ian&Steve C.
Message 59322 - Posted: 25 Sep 2022 | 18:09:52 UTC - in response to Message 59320.

I’ve not seen this functionality on any other BOINC project. They all go by your local cache (number of days) setting in BOINC itself. What project lets you explicitly specify the number of tasks to download?

WCG and LHC have it.


I can't say for LHC, but I do not see any setting like that for WCG. In the device profiles there is only the similar WU cache setting in terms of days.

if LHC has this setting, it's the exception to what every other BOINC project does.

but of course, BOINC is open source. the code is freely available to change whatever you don't like and compile your own.


Keith Myers
Message 59323 - Posted: 25 Sep 2022 | 18:22:18 UTC - in response to Message 59315.

It would really be donor-friendly to give us the ability to specify the number of WUs we want DLed for a given preferences group, as many other BOINC projects do. It just needs to be turned on.


I’ve not seen this functionality on any other BOINC project. They all go by your local cache (number of days) setting in BOINC itself. What project lets you explicitly specify the number of tasks to download?

WCG does that. Only one that I know of.


The app defaults to Device 0 but you can kick it off that device by using an exclude_gpu statement in cc_config.
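A sketch of such an exclude in cc_config.xml (URL and device number are examples; adjust to the host):

<cc_config>
    <options>
        <exclude_gpu>
            <url>https://www.gpugrid.net/</url>
            <device_num>0</device_num>
            <type>NVIDIA</type>
            <app>PythonGPU</app>
        </exclude_gpu>
    </options>
</cc_config>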

Ian&Steve C.
Message 59324 - Posted: 25 Sep 2022 | 19:00:17 UTC - in response to Message 59323.
Last modified: 25 Sep 2022 | 19:44:32 UTC

It would really be donor-friendly to give us the ability to specify the number of WUs we want DLed for a given preferences group, as many other BOINC projects do. It just needs to be turned on.


I’ve not seen this functionality on any other BOINC project. They all go by your local cache (number of days) setting in BOINC itself. What project lets you explicitly specify the number of tasks to download?

WCG does that. Only one that I know of.


The app defaults to Device 0 but you can kick it off that device by using an exclude_gpu statement in cc_config.


Again, WHERE? I don’t see anything like that in WCG settings. Only day cache settings.

This app does not default to device 0. It will use whatever device BOINC tells it to. This isn’t like the SRBase app that is hard-coded for device 0.

Saying “defaults to 0” when you mean “uses the first GPU available” is redundant on a normal system where GPU 0 is allowed to be used: 0 will always be first and always the default. but it will still use the others as more tasks spin up or when the first GPU isn’t available.

gemini8
Message 59325 - Posted: 25 Sep 2022 | 19:05:06 UTC - in response to Message 59320.

It would really be donor-friendly to give us the ability to specify the number of WUs we want DLed for a given preferences group, as many other BOINC projects do. It just needs to be turned on.

I’ve not seen this functionality on any other BOINC project. They all go by your local cache (number of days) setting in BOINC itself. What project lets you explicitly specify the number of tasks to download?


WCG and LHC have it.

PrimeGrid, too.

You can also set the cc_config tag <fetch_minimal_work>0|1</fetch_minimal_work> (fetch one job per device) to receive no further work as long as something's running (at least in theory; I didn't try it myself).
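For reference, a minimal cc_config.xml carrying that flag (re-read config files or restart the client afterwards):

<cc_config>
    <options>
        <fetch_minimal_work>1</fetch_minimal_work>
    </options>
</cc_config>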
Greetings, Jens

gemini8
Message 59326 - Posted: 25 Sep 2022 | 19:11:00 UTC - in response to Message 59324.

Again, WHERE? I don’t see anything like that in WCG settings. Only day cache settings.

https://www.worldcommunitygrid.org/ms/device/viewBoincProfileConfiguration.do?name=Default
Just scroll down.
Greetings, Jens

Ian&Steve C.
Message 59327 - Posted: 25 Sep 2022 | 19:19:07 UTC - in response to Message 59326.

Again, WHERE? I don’t see anything like that in WCG settings. Only day cache settings.

https://www.worldcommunitygrid.org/ms/device/viewBoincProfileConfiguration.do?name=Default
Just scroll down.

thanks for the precise reply. I finally see it now. (I had landed on this page before, but missed the exact section.)

not infinitely configurable, but it provides limits up to 64 tasks. it won't let you limit to, say, 100 if you wanted. but better than nothing.

but this is still the exception rather than the rule for BOINC. only projects running customizations of the BOINC server platform are going to be doing this.

gemini8
Message 59328 - Posted: 25 Sep 2022 | 19:29:05 UTC - in response to Message 59319.
Last modified: 25 Sep 2022 | 19:32:30 UTC

That way it will mix the projects, one from each on one GPU. 0.6+0.4=1. But 0.6+0.6>1 so it won’t start two from GPUGRID on the same GPU. It will go to the next GPU with open resources.

That will not fix anything.

I'm not certain what exact problem you are referring to, so I can only give as many pointers as I can imagine. This might or might not help:
You might just try setting the GPUGrid tasks to <gpu_usage>0.6</gpu_usage> and not running a second project.
I actually don't know what happens then, but I'm interested in it.
So, if you try, please tell us. :-)
Else experiment with the tags <fetch_minimal_work>, <ngpus>, <max_concurrent>, try setting <gpu_usage> to 1.1 or 2.0, or whatever else comes to mind regarding client configuration.
You might also set up two Boinc instances and give one GPU to each of them.

*edit*
With Boinc client 7.14.x you could also set the resource share and the Boinc caches to 0. That way you should not get any new work as long as a device is occupied.
*end edit*
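Of the tags above, <max_concurrent> goes in the project's app_config.xml; a sketch (the limit of 1 is just an example):

<app_config>
    <app>
        <name>PythonGPU</name>
        <max_concurrent>1</max_concurrent>
    </app>
</app_config>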
Greetings, Jens

Ian&Steve C.
Message 59329 - Posted: 25 Sep 2022 | 19:39:10 UTC - in response to Message 59328.
Last modified: 25 Sep 2022 | 19:42:28 UTC


You might just try setting the GPUGrid tasks to <gpu_usage>0.6</gpu_usage> and not running a second project.
I actually don't know what happens then, but I'm interested in it.
So, if you try, please tell us. :-)


this is exactly what my suggestion was. I've done it. what happens is that BOINC will run 1 GPUGRID task on the GPU, but not two, and will allow another project to run alongside GPUGRID.

in BOINC's resource-accounting logic, it sees that 0.4 of the GPU resources remain free, so it can only fill that spot with tasks defined to use 0.4 or less. that is why it won't spin up a second job: another GPUGRID task is defined to use 0.6, which is too large to fit in the 0.4 "hole". that is also why I suggested running the other project's tasks at 0.4.

now if you also want that secondary project not to be able to run 2x on the same GPU, then you're sort of stuck and probably need to employ the multiple-client solution. but personally I don't like to do that.

but you're right, if this doesn't solve his problem, then he should be more specific about what the problem actually is.

Keith Myers
Message 59330 - Posted: 25 Sep 2022 | 21:25:46 UTC - in response to Message 59324.
Last modified: 25 Sep 2022 | 21:26:26 UTC


This app does not default to device 0. It will use whatever device BOINC tells it to. This isn’t like the SRBase app that is hard-coded for device 0.

On my multi-GPU hosts, the task always runs on BOINC device #0 when only a single task is running. BOINC assigns device #0 to the most capable card.

But I don't want the task on my best cards, because I want them to run other GPU projects.

So I exclude GPUs #0 and #1 so that the Python tasks run on my least capable card, since card capability has almost no effect on these tasks.

Ian&Steve C.
Message 59331 - Posted: 25 Sep 2022 | 23:24:16 UTC - in response to Message 59330.

it’s likely that, with the crazy long estimated run times, the task is going into high-priority mode. In that case it has an equal claim on any GPU, and BOINC grabs the first one. That’s a function of BOINC’s high-priority mode, not the app specifically trying to run on GPU 0 for any particular reason.

Keith Myers
Message 59332 - Posted: 26 Sep 2022 | 2:23:46 UTC - in response to Message 59331.
Last modified: 26 Sep 2022 | 2:27:48 UTC

it’s likely that, with the crazy long estimated run times, the task is going into high-priority mode. In that case it has an equal claim on any GPU, and BOINC grabs the first one. That’s a function of BOINC’s high-priority mode, not the app specifically trying to run on GPU 0 for any particular reason.

My PythonGPU tasks always run in high-priority mode from start to finish.

I've never seen a Python task EVER occupy any GPU other than #0 unless I explicitly prevent it with a gpu_exclude statement.

Countless tasks have run through my GPUs, with countless opportunities to run on GPUs #1 and #2. But I have seen brand-new tasks in the "waiting to run" state wait until GPU #0 came free of running Einstein and MW tasks so they could jump onto GPU #0.

Ian&Steve C.
Message 59333 - Posted: 26 Sep 2022 | 2:51:45 UTC - in response to Message 59332.
Last modified: 26 Sep 2022 | 3:08:34 UTC

I know. I’m saying it’s the high priority, and the fact that you’re only running one task, that’s putting it on GPU 0. It’s not inherent to the application. High priority will supersede ALL lower-priority tasks and make every device available to it. So it picks the first one, which is 0.

If you allowed two tasks to download and run (at 1x task per GPU, with no excludes), the second task would run on the next GPU. They won’t all stack up on GPU 0, as they would if this were the application’s doing. It’s just running where BOINC tells it to under the circumstances.

How do you think any tasks run on my second GPU? I’m not doing anything special. Just only running these tasks and everything acts normally as you’d expect since all tasks are equally “high priority” and it’s business as usual.

Keith Myers
Message 59334 - Posted: 26 Sep 2022 | 4:33:28 UTC

OK, I understand what you are describing.

I, on the other hand, have never seen any other behavior, so I am describing only what I have seen.

I've set the tasks up for 0.5 per GPU and still only see tasks on #0.

So how do you get your tasks to not run high-priority?

I haven't figured out that magic recipe yet.

Ian&Steve C.
Message 59338 - Posted: 26 Sep 2022 | 13:20:43 UTC - in response to Message 59334.

I've set the tasks up for 0.5 per GPU and still only see tasks on #0.


That’s not what I meant. Leave it at 1.0 gpu_usage, but allow Pandora to download 2 tasks. Since GPU 1 will be occupied, it will spin up a task on GPU 2. Setting 0.5 gpu_usage is the same situation with high priority: they will both go to the first GPU.

So how do you get your tasks to not run high-priority?

I haven't figured out that magic recipe yet.


Mine are running high priority. But when all six tasks are labeled high priority, it doesn’t really make any difference. They run the same as if there were no high priority, because there are no low-priority tasks for them to fight with. This system ONLY runs PythonGPU. No other tasks and no other projects.


Keith Myers
Message 59339 - Posted: 26 Sep 2022 | 13:35:39 UTC - in response to Message 59338.

I already tried the download limit set to 2, but I was still at 0.5 gpu_usage because I'm sharing the cards with Einstein, MW and WCG.

It just spun up both tasks on GPU #0.

I am not arguing that the tasks will always run on GPU #0 in all situations, just that so far I have not been able to get them to move anywhere else in my computing environments.

I am going to stop messing with the configuration, as I have finally reduced the large impact on CPU task running times now that I've moved to 1 Python task running with 0.5 gpu_usage.

Ian&Steve C.
Message 59342 - Posted: 26 Sep 2022 | 14:37:49 UTC - in response to Message 59339.

there's no problem with you setting your config that way with excludes if that works for you.

I just wanted it to be clear that the reason it's going to GPU 0 is high priority and BOINC behavior, NOT anything inherent to or hard-coded in the application itself. the root cause is the incorrect remaining-time estimate, but the direct cause is how BOINC manages tasks in high-priority mode. the app is just doing what BOINC tells it to.

captainjack
Message 59344 - Posted: 26 Sep 2022 | 20:49:08 UTC

Keith,

I hope I am understanding your question correctly.

Have you set the <use_all_gpus> flag in the cc_config.xml file to "1"?

Requires a restart if the flag needs to be turned on.
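A minimal cc_config.xml with that flag set:

<cc_config>
    <options>
        <use_all_gpus>1</use_all_gpus>
    </options>
</cc_config>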

Keith Myers
Message 59345 - Posted: 26 Sep 2022 | 21:11:18 UTC - in response to Message 59344.

Keith,

I hope I am understanding your question correctly,

Have you set the <use_all_gpus> flag in the cc_config.xml file to "1"?

Requires a restart if the flag needs to be turned on.

Thanks for the reply. No, that is all fine.

This has to do with some advanced configuration stuff beyond the standard client. Ian and I were discussing things having to do with our custom team client.

You don't even have to have that option flag in your cc_config file if your GPUs are all the same; they all get used automatically. This daily driver has three 2080 cards of the same make, and they all get used without a config parameter.

You only need that parameter if your cards are dissimilar enough that BOINC considers them of different compute capability. Then, without the parameter, BOINC will only use the most capable card.

Erich56
Message 59346 - Posted: 27 Sep 2022 | 13:33:46 UTC - in response to Message 59304.
Last modified: 27 Sep 2022 | 14:13:29 UTC

Keith Myers wrote:

Use an app_config.xml file.

<app_config>
    ...
    <app>
        <name>PythonGPU</name>
        <gpu_versions>
            <gpu_usage>0.33</gpu_usage>
            <cpu_usage>3.0</cpu_usage>
        </gpu_versions>
    </app>
</app_config>


I put the above in app_config.xml, intending to run 3 Pythons on one GPU.
However, after downloading 2 tasks (which started right away) and trying to download a third, BOINC told me "the computer has reached a limit on tasks in progress", and it was not possible to download a third one :-(

At this point I remembered that it has always been said that only 2 tasks per GPU can be downloaded from GPUGRID.

So, how did you manage to download 3 tasks per GPU?

Keith Myers
Message 59349 - Posted: 27 Sep 2022 | 16:59:49 UTC - in response to Message 59346.
Last modified: 27 Sep 2022 | 17:01:14 UTC

Keith Myers wrote:

Use an app_config.xml file.

<app_config>
    ...
    <app>
        <name>PythonGPU</name>
        <gpu_versions>
            <gpu_usage>0.33</gpu_usage>
            <cpu_usage>3.0</cpu_usage>
        </gpu_versions>
    </app>
</app_config>


I put the above in app_config.xml, intending to run 3 Pythons on one GPU.
However, after downloading 2 tasks (which started right away) and trying to download a third, BOINC told me "the computer has reached a limit on tasks in progress", and it was not possible to download a third one :-(

At this point I remembered that it has always been said that only 2 tasks per GPU can be downloaded from GPUGRID.

So, how did you manage to download 3 tasks per GPU?


By spoofing the card count using a custom client, by editing coproc_info.xml and locking it down, or by having more than one card in a host.

Erich56
Message 59350 - Posted: 27 Sep 2022 | 18:08:12 UTC - in response to Message 59349.

...
So, how did you manage to download 3 tasks per GPU?

By spoofing the card count using a custom client, by editing coproc_info.xml and locking it down, or by having more than one card in a host.

when you say editing coproc_info.xml, are you talking about the entry

<warning>NVIDIA library reports 1 GPU</warning>

near the bottom? After changing this to "2", how would I lock it down?

Profile Bill F
Message 60054 - Posted: 10 Mar 2023 | 4:27:28 UTC

Here is a URL to a BOINC message-board thread on hardware-accelerated GPU scheduling. Is this something that might give Windows-based systems a minor improvement?

https://boinc.berkeley.edu/dev/forum_thread.php?id=14235#104003

Thanks
Bill F

Keith Myers
Message 60055 - Posted: 10 Mar 2023 | 8:14:57 UTC - in response to Message 59350.

...
So, how did you manage to download 3 tasks per GPU?

By spoofing the card count using a custom client, by editing coproc_info.xml and locking it down, or by having more than one card in a host.

when you say editing coproc_info.xml, are you talking about the entry

<warning>NVIDIA library reports 1 GPU</warning>

near the bottom? After changing this to "2", how would I lock it down?

No, not entirely. You would just duplicate the card detection section and increment the card count in that line.
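Roughly like this (a sketch, not a verbatim coproc_info.xml; the real <coproc_cuda> detection blocks are much longer, and the placeholder lines are mine):

<coproc_cuda>
    ...first card's detection block, exactly as BOINC wrote it...
</coproc_cuda>
<coproc_cuda>
    ...a verbatim copy of the same block...
</coproc_cuda>
<warning>NVIDIA library reports 2 GPUs</warning>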

But you also have to prevent BOINC from rewriting the file afterwards, which would reset the count to the true detection.

You make the edit in the file and then mark the file immutable. In Linux, you would execute:

sudo chattr +i coproc_info.xml
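To edit the file again later, remove the immutable flag first:

sudo chattr -i coproc_info.xml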

Erich56
Message 60056 - Posted: 10 Mar 2023 | 15:18:53 UTC - in response to Message 60055.

...
So, how did you manage to download 3 tasks per GPU?

By spoofing the card count using a custom client, by editing coproc_info.xml and locking it down, or by having more than one card in a host.

when you say editing coproc_info.xml, are you talking about the entry

<warning>NVIDIA library reports 1 GPU</warning>

near the bottom? After changing this to "2", how would I lock it down?

No, not entirely. You would just duplicate the card detection section and increment the card count in that line.

But you also have to prevent BOINC from rewriting the file afterwards, which would reset the count to the true detection.

You make the edit in the file and then mark the file immutable. In Linux, you would execute:

sudo chattr +i coproc_info.xml

hi Keith, thanks for your reply.
Meanwhile I had found out, via a video on YouTube, how to spoof a GPU; in fact, that was months ago. And it's been working well since then :-)

Aurum
Message 60057 - Posted: 10 Mar 2023 | 15:56:15 UTC - in response to Message 59328.
Last modified: 10 Mar 2023 | 16:22:58 UTC

That way it will mix the projects, one from each on one GPU. 0.6+0.4=1. But 0.6+0.6>1 so it won’t start two from GPUGRID on the same GPU. It will go to the next GPU with open resources.

That will not fix anything.

I'm not certain what exact problem you are referring to, so I can only give as many pointers as I can imagine. This might or might not help:
You might just try setting the GPUGrid tasks to <gpu_usage>0.6</gpu_usage> and not running a second project.
I actually don't know what happens then, but I'm interested in it.
So, if you try, please tell us. :-)
Else experiment with the tags <fetch_minimal_work>, <ngpus>, <max_concurrent>, try setting <gpu_usage> to 1.1 or 2.0, or whatever else comes to mind regarding client configuration.
You might also set up two Boinc instances and give one GPU to each of them.

*edit*
With Boinc client 7.14.x you could also set the resource share and the Boinc caches to 0. That way you should not get any new work as long as a device is occupied.
*end edit*
Thanks. My problem with GG is that I want 2 ACEMD, 1 Python, and 4 or more ATM. I know this is just a pipe dream on my part, since GG hasn't lifted a finger to improve or repair their UI in years.
I see complaints that I'm not being clear. Note that I only install one GPU per computer. Let me try it this way for a single computer:
IF ACEMD tasks ready to send >= 1 THEN DL+RUN 2 ELSE
IF ATM tasks ready to send >= 1 THEN DL+RUN 4 ELSE
IF Python tasks ready to send >= 1 THEN DL+RUN 1
So imagine a miracle occurs and there's a plethora of all manner of GG WUs. Then each of my computers would be running 2 ACEMD WUs. If ACEMD runs low, each computer would run from 1 to 4 ATM WUs. And if there's a dearth of both ACEMD and ATM WUs, each computer might run some combination like 1 ACEMD + 1 ATM, or 1 ACEMD + 1 Python, or 1 ATM + 1 Python, or 4 ATM.
If GG offered the same minimal project preferences that LHC does, I could make a very productive compromise to get the most out of each generation of my computers.

Ian&Steve C.
Message 60058 - Posted: 10 Mar 2023 | 16:06:21 UTC - in response to Message 60057.
Last modified: 10 Mar 2023 | 16:06:53 UTC

Thanks. My problem with GG is that I want 2 ACEMD, 1 Python, and 4 or more ATM. I know this is just a pipe dream on my part, since GG hasn't lifted a finger to improve or repair their UI in years.


the only way to do this is to run multiple BOINC clients and manage their caches separately. it will require a good bit of manual intervention on your part.

Aurum
Message 60059 - Posted: 10 Mar 2023 | 16:18:45 UTC - in response to Message 60058.

Thanks. My problem with GG is that I want 2 ACEMD, 1 Python, and 4 or more ATM. I know this is just a pipe dream on my part, since GG hasn't lifted a finger to improve or repair their UI in years.


the only way to do this is to run multiple BOINC clients and manage their caches separately. it will require a good bit of manual intervention on your part.
No, it's not the only way it can be done. I can do that on LHC today and have been able to do it for a long time.

Ian&Steve C.
Message 60060 - Posted: 10 Mar 2023 | 16:37:18 UTC - in response to Message 60059.
Last modified: 10 Mar 2023 | 16:38:24 UTC

Thanks. My problem with GG is that I want 2 ACEMD, 1 Python, and 4 or more ATM. I know this is just a pipe dream on my part, since GG hasn't lifted a finger to improve or repair their UI in years.


the only way to do this is to run multiple BOINC clients and manage their caches separately. it will require a good bit of manual intervention on your part.
No, it's not the only way it can be done. I can do that on LHC today and have been able to do it for a long time.

custom/modified BOINC server software.

but I was saying it's the only way to do that on GPUGRID. which is true.

Greger
Message 60061 - Posted: 10 Mar 2023 | 16:48:58 UTC - in response to Message 60059.

If this works on LHC, you would need to point out HOW those preferences are set on that project.

I have run LHC for many years, and it uses the same user-preferences logic as GG. What it adds is a separate application layer on top of the work distribution, so you can select vbox or native separately for each sub-project.

An IF-THEN-ELSE statement would require additional coding, and it would not work in unit distribution without a reject/abort script.

Accomplishing such a thing, a specific number of units for each host or combination, would require app_config but also 3 instances of BOINC clients.

The project cannot handle these requests from the server by default, and would not.

Ian&Steve C.
Message 60062 - Posted: 10 Mar 2023 | 17:08:16 UTC - in response to Message 60061.

projects like LHC and WCG (and others) run custom or modified BOINC server software. it gives them additional functionality that isn't in the base code normal BOINC projects use. usually it's in the project preferences somewhere.

but it's really disingenuous to compare projects with custom software and say something like "they can do it, why can't you!?". every project has its own priorities and idiosyncrasies (and budget). each user should just find what works for them to work around project-specific oddities.

Profile JStateson
Message 60063 - Posted: 11 Mar 2023 | 20:49:13 UTC

I just downloaded my first Python app. It's been running half an hour now on a 2080 Ti.

TechPowerUp shows a GPU load average of 14% and a power average of 74 watts.
That seems really low compared to Einstein's 88% and 190 watts.

Is this normal?

CUDA: NVIDIA GPU 0: NVIDIA GeForce RTX 2080 Ti (driver version 528.24, CUDA version 12.0, compute capability 7.5, 11264MB, 11264MB available, 13448 GFLOPS peak)


try my performance program, the BoincTasks History Reader.
Find and read about it here: https://forum.efmer.com/index.php?topic=1355.0

Ian&Steve C.
Message 60064 - Posted: 11 Mar 2023 | 21:24:10 UTC - in response to Message 60063.

yes, it's normal. these tasks are mostly a CPU/memory app and only use the GPU intermittently, for a small part of the overall computation.

running two concurrently can help overall production.

Aurum
Message 60077 - Posted: 14 Mar 2023 | 14:58:07 UTC - in response to Message 60061.

IF THEN ELSE statement
It was only used in a futile attempt to explain what I'm asking GDF to do.

Aurum
Message 60078 - Posted: 14 Mar 2023 | 14:59:03 UTC - in response to Message 60062.

projects like LHC and WCG (and others) run custom or modified BOINC server software. it gives them additional functionality that isn't in the base code normal BOINC projects use. usually it's in the project preferences somewhere.

but it's really disingenuous to compare projects with custom software and say something like "they can do it, why can't you!?". every project has its own priorities and idiosyncrasies (and budget). each user should just find what works for them to work around project-specific oddities.
I wasn't even talking to you, I was talking to GDF.

Ian&Steve C.
Message 60079 - Posted: 14 Mar 2023 | 15:34:31 UTC - in response to Message 60078.
Last modified: 14 Mar 2023 | 15:35:45 UTC

projects like LHC and WCG (and others) run custom or modified BOINC server software. it gives them additional functionality that isn't in the base code normal BOINC projects use. usually it's in the project preferences somewhere.

but it's really disingenuous to compare projects with custom software and say something like "they can do it, why can't you!?". every project has its own priorities and idiosyncrasies (and budget). each user should just find what works for them to work around project-specific oddities.
I wasn't even talking to you, I was talking to GDF.

ok. and? my post wasn't even in reply to yours lol.

but the point still stands.
