Experimental Python tasks (beta) - task description

Erich56

Message 59392 - Posted: 3 Oct 2022, 10:53:17 UTC - in response to Message 59389.  

kksplace wrote:

Let me offer another possible "solution". (I am running two Python tasks on my system.) I found I had to set my Resource Share for GPUGrid much, much higher for it to effectively share with other projects. ...

Well, my goal on this machine is in fact not to share Pythons with other projects.
It would simply make me happy if I could run 2 (or perhaps 3) Pythons simultaneously. The hardware should easily meet the requirements for that.

That said, I guess the resource share would not play any role in this case.

BTW: as mentioned before, until early last week I did run two Pythons simultaneously on this PC. I have no idea, though, what the indicated remaining runtimes were. Most probably not as high as now, otherwise I could not have downloaded and started two Pythons in parallel.

So, any idea what I can do to make this machine run at least 2 Pythons (if not 3)?
ID: 59392
kksplace

Message 59393 - Posted: 3 Oct 2022, 17:05:04 UTC - in response to Message 59392.  

I am limited in technical knowledge and can only speak to how I got mine working with 2 tasks, so sorry I can't help more than that. As for getting 3 tasks, my understanding from other posts and my own attempt is that you can't without a custom client or some other behind-the-scenes work. The '2 tasks at a time' limit is a GPUGrid restriction somewhere.
ID: 59393
Keith Myers
Message 59394 - Posted: 3 Oct 2022, 17:48:02 UTC - in response to Message 59393.  

Yes, the project has a limit of 2 tasks per GPU, with a project-wide maximum of 16 tasks.

You would normally just implement an app_config.xml file to get two tasks running concurrently on a GPU.

<app_config>

<app>
<name>PythonGPU</name>
<gpu_versions>
<gpu_usage>0.5</gpu_usage>
<cpu_usage>3.0</cpu_usage>
</gpu_versions>
</app>

</app_config>

That quota has been the same since project inception. The only way to get around it is to spoof the GPU count by locking down the coproc_info.xml file in the BOINC data folder.
ID: 59394
Erich56

Message 59395 - Posted: 3 Oct 2022, 19:19:15 UTC - in response to Message 59394.  

...
<app_config>

<app>
<name>PythonGPU</name>
<gpu_versions>
<gpu_usage>0.5</gpu_usage>
<cpu_usage>3.0</cpu_usage>
</gpu_versions>
</app>

</app_config>
...

Keith, just for my understanding:

what exactly does the entry
<cpu_usage>3.0</cpu_usage>
do?


ID: 59395
Ian&Steve C.

Message 59396 - Posted: 3 Oct 2022, 19:33:31 UTC - in response to Message 59395.  
Last modified: 3 Oct 2022, 19:34:47 UTC

...
<app_config>

<app>
<name>PythonGPU</name>
<gpu_versions>
<gpu_usage>0.5</gpu_usage>
<cpu_usage>3.0</cpu_usage>
</gpu_versions>
</app>

</app_config>
...

Keith, just for my understanding:

what exactly does the entry
<cpu_usage>3.0</cpu_usage>
do?




Exactly what I said in my previous message.

adjust your app_config file to reserve more CPU for your Python task to prevent BOINC from running too much extra work from other projects.


What Keith suggested would tell BOINC to reserve 3 whole CPU threads for each running PythonGPU task.
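
With gpu_usage at 0.5, two PythonGPU tasks run at once, so that works out to 2 x 3 = 6 CPU threads budgeted for GPUGrid; BOINC will then only schedule other CPU work on whatever threads remain.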
ID: 59396
abouh

Message 59397 - Posted: 4 Oct 2022, 7:09:44 UTC

Hello!

Today I will deploy the changes tested last week in PythonGPUbeta to the PythonGPU app. The changes only affect Windows machines, and should result in smaller initial downloads and slightly lower memory requirements.

As we discussed, for now the initial data unpacking still needs to be done in two steps, but now using a more recent version of 7za.exe.

I did not detect any errors in the PythonGPUbeta tasks, so hopefully this change will not affect jobs in PythonGPU either.
ID: 59397
Keith Myers
Message 59398 - Posted: 4 Oct 2022, 7:24:25 UTC - in response to Message 59395.  


Keith, just for my understanding:

what exactly does the entry
<cpu_usage>3.0</cpu_usage>
do?


It tells BOINC to take 3 CPUs away from the available resources that BOINC thinks it has to work with.

That way BOINC does not commit resources it doesn't have to other projects, so you aren't running the CPU overcommitted.

It only affects BOINC's scheduling of available resources. It does not directly impact the running of the Python task in any way. Only the scientific application itself determines how much CPU the task and application will use.

You should never run a CPU in an overcommitted state, because that means EVERY application, including internal housekeeping, is constantly fighting for available resources and NONE of them are running optimally. IOW's . . . . slooooowwwly.

You can check your average CPU loading or utilization with the uptime command in the terminal. You should strive to get numbers that are less than the number of cores (threads) available to the operating system.

If you have a CPU that has 16 cores/32 threads available to the OS, you should strive to use only up to 32 threads over the averaging periods.

Besides printing out how long the system has been up and running, the uptime command also prints out the 1-minute / 5-minute / 15-minute system load averages.

As an example, on the AMD 5950X CPU in this daily driver, this is my uptime report.

keith@Pipsqueek:~$ uptime
00:15:16 up 7 days, 14:41, 1 user, load average: 30.16, 31.76, 32.03

The CPU is right at the limit of maximum utilization of its 32 threads.
So I am running it at 100% utilization most of the time.

If the averages were higher than 32, that would show the CPU is overcommitted, trying to do too much all the time and not running applications efficiently.
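
For anyone who wants to script that same check, here is a minimal sketch using standard Linux tools (nproc and /proc/loadavg); treat it as an illustration rather than anything official:

#!/bin/sh
# Compare the 15-minute load average with the number of threads the OS sees.
threads=$(nproc)
load15=$(cut -d ' ' -f 3 /proc/loadavg)
echo "threads available:    $threads"
echo "15-min load average:  $load15"
# Flag overcommitment when the load average exceeds the thread count.
awk -v load="$load15" -v t="$threads" 'BEGIN {
    if (load + 0 > t + 0) print "CPU looks overcommitted";
    else                  print "CPU load is within the thread count";
}'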
ID: 59398
Keith Myers
Message 59399 - Posted: 4 Oct 2022, 7:28:03 UTC - in response to Message 59397.  

Thanks for the notice, abouh. Should make the Windows users a bit happier with the experience of crunching your work.
ID: 59399
Erich56

Message 59400 - Posted: 4 Oct 2022, 9:41:13 UTC - in response to Message 59398.  


Keith, just for my understanding:

what exactly does the entry
<cpu_usage>3.0</cpu_usage>
do?


It tells BOINC to take 3 CPUs away from the available resources that BOINC thinks it has to work with.

...

You can check your average CPU loading or utilization with the uptime command in the terminal. You should strive to get numbers that are less than the number of cores (threads) available to the operating system.
...

Thanks, Keith, for the thorough explanation. Now everything is clear to me.
As for CPU loading/utilization, so far I have been looking at the Windows Task Manager, which shows a (rough?) percentage at the top of the "CPU" column.

However, the question for me is still how I can get my host with its vast hardware resources (as described here:
https://www.gpugrid.net/forum_thread.php?id=5233&nowrap=true#59383) to run at least 2 Pythons concurrently - as it already did before?

Isn't there a way to get these much too high "remaining time" figures back to realistic values?
Or any other way to get more than 1 Python downloaded despite these high figures?
ID: 59400
Keith Myers
Message 59403 - Posted: 4 Oct 2022, 16:50:27 UTC - in response to Message 59400.  
Last modified: 4 Oct 2022, 16:53:15 UTC


Isn't there a way to get these much too high "remaining time" figures back to realistic values?
Or any other way to get more than 1 Python downloaded despite these high figures?


There isn't any way to get the estimated time remaining down to reasonable values, as far as we know, without a complete rewrite of the BOINC client code.

Or ask @kksplace how he managed to do it.

Try increasing your work cache to 10 days and see if you pick up the second task.

Are you running with 0.5 gpu_usage via the app_config.xml example I posted?

You can spoof 2 GPUs being detected by BOINC, which would automatically increase your GPU task allowance to 4 tasks. You need to modify the coproc_info.xml file and then lock it down to an immutable state so BOINC can't rewrite it.

Google "spoofing GPUs" in the Seti and BOINC forums for how to do that.
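
For reference, a minimal sketch of the lock-down step on a Linux host, assuming a standard Debian/Ubuntu BOINC install (the service name and /var/lib/boinc-client data path may differ on other setups); the actual edit that duplicates the GPU entry is described in those forum threads:

# Stop the client before touching its files.
sudo systemctl stop boinc-client
# After editing coproc_info.xml to list the extra (spoofed) GPU,
# mark the file immutable so BOINC can't rewrite it on startup.
sudo chattr +i /var/lib/boinc-client/coproc_info.xml
sudo systemctl start boinc-client
# To undo the lock later: sudo chattr -i /var/lib/boinc-client/coproc_info.xml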
ID: 59403
Ian&Steve C.

Message 59404 - Posted: 4 Oct 2022, 17:21:36 UTC - in response to Message 59403.  

Try increasing your work cache to 10 days and see if you pick up the second task.


Counterintuitively, this can actually cause the opposite reaction on a lot of projects.

If you ask for "too much" work, some projects will just shut you out and tell you that no work is available, even when it is. I don't know why, I just know it happens. This is probably why he can't download work.

I would actually recommend keeping this value no larger than 2 days.
ID: 59404
Keith Myers
Message 59405 - Posted: 4 Oct 2022, 19:52:23 UTC - in response to Message 59404.  

I was assuming that GPUGrid was the only project on his host.

I agree that increasing the value when there is more than a single project on the host is often deleterious.
ID: 59405
Ian&Steve C.

Message 59406 - Posted: 4 Oct 2022, 20:01:57 UTC - in response to Message 59405.  

I think GPUGRID is one of the projects that reacts negatively to having the value too high.

But no, based on his daily contributions for this host via FreeDC, he's contributing to several projects.
ID: 59406
Erich56

Message 59407 - Posted: 4 Oct 2022, 20:35:18 UTC - in response to Message 59405.  

I was assuming that GPUGrid was the only project on his host.

At the time I was trying to download and crunch 2 Pythons: YES - no other projects were running at that time.

Meanwhile, until the problem gets solved, I am running 1 CPU and 1 GPU project on this host.
ID: 59407
[CSF] Aleksey Belkov

Message 59408 - Posted: 4 Oct 2022, 21:25:02 UTC - in response to Message 59397.  
Last modified: 4 Oct 2022, 21:34:41 UTC

Today I will deploy the changes tested last week in PythonGPUbeta to the PythonGPU app. The changes only affect Windows machines, and should result in smaller initial downloads and slightly lower memory requirements.

Thank you, abouh!
Let's try the new tasks :)

Now the disk space requirements for PythonGPU tasks probably need to be adjusted too, don't they?
ID: 59408
Ian&Steve C.

Message 59409 - Posted: 4 Oct 2022, 21:56:21 UTC - in response to Message 59407.  

I was assuming that GPUGrid was the only project on his host.

At the time I was trying to download and crunch 2 Pythons: YES - no other projects were running at that time.

Meanwhile, until the problem gets solved, I am running 1 CPU and 1 GPU project on this host.


Even if you solve the problem, you won't get more tasks until you change the GPUGRID tasks to use 0.5 GPU for 2x.
ID: 59409
Erich56

Message 59410 - Posted: 5 Oct 2022, 3:13:48 UTC - in response to Message 59409.  

Even if you solve the problem, you won't get more tasks until you change the GPUGRID tasks to use 0.5 GPU for 2x.

This is what I did anyway.
ID: 59410
jjch

Message 59414 - Posted: 9 Oct 2022, 17:10:30 UTC

Good news since the recent changes to the Windows environment: I have seen a great increase in successful tasks. It seems others have too, as my ranking has dropped a bit.
ID: 59414
abouh

Message 59416 - Posted: 10 Oct 2022, 6:01:45 UTC - in response to Message 59414.  

So good to hear that!
ID: 59416
kotenok2000

Message 59417 - Posted: 10 Oct 2022, 10:36:38 UTC

When I paused a workunit and restarted BOINC, BOINC copied the pythongpu_windows_x86_64__cuda1131.txz file into the slot directory again.
The file had already been extracted to pythongpu_windows_x86_64__cuda1131.tar and deleted.
ID: 59417