Message boards : Number crunching : QM jobs in progress increasing?

**Joined: 5 Mar 13 · Posts: 348 · Credit: 0 · RAC: 0**

Out of curiosity: over the last few days we've gone from 1,000 jobs in progress to 5,000, and the number is still increasing, yet there has been no increase in users on the queue (stable at around 100). What's going on? Are people buffering/hogging more WUs, or is it an actual increase in computing power from users releasing CPUs from the GPU queue to the CPU queue?

**Joined: 8 May 18 · Posts: 190 · Credit: 104,426,808 · RAC: 0**

I have one QC task and one Long run task running on my main Linux box. So far I have completed 42 QC tasks and 27 Long run tasks. Tullio

**Joined: 3 Jul 18 · Posts: 22 · Credit: 2,758,801 · RAC: 0**

I'm crunching several CPU-only projects. Not all of them have work available at all times; some release batches that are finished within a week (e.g. SixTrack), and others take regular maintenance breaks (though not this past week). Consequently my system hops between projects, trying to even out their quotas. I usually buffer fewer than 10 tasks and have never exceeded that number, although I must admit I was playing around with this project (emptying and refilling the queue), because its tasks are peaky and run hot on my CPU. If there is anything you could do to fix that without impacting performance, I would appreciate it.

EDIT: Keep in mind GPU tasks had issues lately. Perhaps more volunteers tried to set up QC but abandoned their tasks?

LAST EDIT: I know several other projects have disappointed CPU volunteers, so maybe those volunteers increased their work share with GPUGRID?

DANG IT BOBBY I SAID LAST EDIT: I did switch from the default 4-CPU tasks to 1-CPU tasks a week ago with an app_config.xml modification. Of course the number of tasks I buffer increased accordingly, but that change was last week.
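
For anyone wanting to try the same thing, a minimal app_config.xml sketch is below. It assumes the QC application's short name is `QC` and that it uses the `mt` plan class; both are assumptions here, so check client_state.xml in your BOINC data directory for the actual values. The file goes in the GPUGRID project directory, after which you reload config files (or restart the client).

```xml
<!-- app_config.xml: place in the GPUGRID project directory.
     Assumptions: app short name "QC" and plan class "mt";
     verify both in client_state.xml before relying on this. -->
<app_config>
  <app_version>
    <app_name>QC</app_name>
    <plan_class>mt</plan_class>
    <!-- Budget one CPU per task instead of the default four. -->
    <avg_ncpus>1</avg_ncpus>
    <!-- BOINC's standard switch asking a multithreaded app to run one worker thread. -->
    <cmdline>--nthreads 1</cmdline>
  </app_version>
</app_config>
```

Note that `avg_ncpus` only changes how many CPUs the client budgets per task; whether the application actually reduces its thread count depends on it honoring `--nthreads`.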

**Joined: 21 Mar 16 · Posts: 513 · Credit: 4,673,458,277 · RAC: 0**

My guess is that some users have BOINC set to store 10 days of work plus up to an additional 10 days, which can equate to hundreds if not thousands of WUs. With multiple people doing that, it adds up. I keep my queue at 0.5 days so I don't get bombarded or hog all the WUs.
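
To put rough numbers on that (illustrative only, not measured): a host completing 50 QC tasks a day with a 10 + 10 day buffer would try to hold about 50 × 20 = 1,000 tasks by itself, so even a handful of hosts configured that way could account for the jump from 1,000 to 5,000 jobs in progress.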

**Joined: 21 Mar 16 · Posts: 513 · Credit: 4,673,458,277 · RAC: 0**

I think the only way to tell whether there has been an increase in computing power is on your end: compare how many WUs are coming back per day now versus previously.

**Joined: 5 Mar 13 · Posts: 348 · Credit: 0 · RAC: 0**

Yes, I was looking at the throughput on my end, but it's hard to judge: the batches take varying amounts of time to simulate, so comparing one day with the next is difficult. In any case the WUs do seem to be finishing faster than before, so I'm happy either way :)

**Joined: 28 Jul 12 · Posts: 819 · Credit: 1,591,285,971 · RAC: 0**

> What's going on? Are people buffering/hogging more WUs or is it an actual increase in computing power by the users releasing CPUs from the GPU queue to the CPU queue?

I saw that myself. My optimistic guess is that when you fixed the multiple-startup problem, people started running more work units at a time. I am doing so myself, at least when the weather is cooler; I am down to one QC machine now.