Bad batch of WU's
Joined: 5 Jan 09 · Posts: 670 · Credit: 2,498,095,550 · RAC: 0
Why are these work units still in the queue? Is anyone running this program? Where are the moderators? This place is like an airplane on autopilot; it seems like some of these projects have no more enthusiasm. Here is a similar post of mine about this project's engagement with its contributors: http://www.gpugrid.net/forum_thread.php?id=4585#47369 and another one: http://www.gpugrid.net/forum_thread.php?id=4368&nowrap=true#48039
Joined: 18 Jun 12 · Posts: 297 · Credit: 3,572,627,986 · RAC: 0
I'm under the impression they think we're all getting paid for crunching. I will never do data mining, ever!! I'll bet a lot of the dedicated crunchers who have been here for a while are now miners for hire.
Joined: 10 Nov 17 · Posts: 7 · Credit: 154,876,594 · RAC: 0
Richard, thank you for your post. I received a single further failing task this morning and, following the link as you advise, it seems to have originated in yesterday's bad batch at 11:17 (UTC). I notice that by following the links for all ten of my failed tasks, seven arising from the 13 April bad batch (around 17:50 UTC) and three from the 16 April bad batch (around 11:20 UTC), they now all show exactly eight 'Error while computing' failures, so perhaps there is some automatic mechanism whereby they are pulled after eight failures on different computers? I also notice on the Server Status page that PABLO_p27_wild_0_sj403_ID currently shows 740 successes and a 92.33% error rate which, if my maths is correct, suggests over 9,600 failures, potentially resulting in a large number of computers having been temporarily locked out. Frustrating though that is for us donors, it doesn't appear to have created a (GPU) processing backlog, and perhaps it is an indication that the processing resource offered by donors currently far exceeds the requirements of the available work.
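As a rough back-of-the-envelope check of those figures, assuming the Server Status page defines the error rate as failed results divided by all completed results (the page does not document this, so treat the numbers as approximate):

```python
# Rough check of the PABLO_p27_wild_0_sj403_ID figures quoted above.
# Assumption: error_rate = failures / (failures + successes).
successes = 740
error_rate = 0.9233

total_completed = successes / (1 - error_rate)   # successes are the remaining 7.67%
failures = total_completed - successes

print(f"completed results: {total_completed:,.0f}")  # roughly 9,600
print(f"failed results:    {failures:,.0f}")         # roughly 8,900
```

Under that assumption, the ~9,600 figure is closer to the total number of completed results, with roughly 8,900 of them failures; either way the scale of the bad batch is the same.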
Tuna Ertemalp · Joined: 28 Mar 15 · Posts: 46 · Credit: 1,547,496,701 · RAC: 0
So, one by one, my seven hosts will be jailed, wasting 12x 1080Ti and 2x TitanX... Something is very non-ideal about that picture. Given that bad batches like this happen with some relevant, non-ignorable frequency, there should be a way to unblock in bulk the machines that were blocked/limited due to bad batch issues, methinks. Case in point: when my hosts are fully utilized, I see "State: In progress (28)" under my account's Tasks page (I have a custom config file that tells BOINC that GPUGRID tasks each use 1 CPU + 0.5 GPU, which I found works well for these cards, so 14 cards = 28 tasks). Last night I saw it at 26, then 22; I went to sleep; this morning it was at 14; now it is 12. For instance, when one of my single-TitanX machines (http://www.gpugrid.net/results.php?hostid=205349), which has NOTHING ELSE going on in BOINC, contacts GPUGRID, it gets:
4/17/2018 8:23:54 AM | GPUGRID | Sending scheduler request: To fetch work.
4/17/2018 8:23:54 AM | GPUGRID | Requesting new tasks for CPU and NVIDIA GPU
4/17/2018 8:23:56 AM | GPUGRID | Scheduler request completed: got 0 new tasks
4/17/2018 8:23:56 AM | GPUGRID | No tasks sent
Quite ironic when the Server Status says "Tasks ready to send 34,375"... :(
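For context, the "1 CPU + 0.5 GPU" setting mentioned above is usually done with a BOINC app_config.xml file in the GPUGRID project directory. Below is a minimal sketch; the application name "acemdlong" is an assumption here, so check the names your own client reports before using it:

```xml
<!-- Minimal app_config.xml sketch for running two GPUGRID tasks per GPU.
     Assumption: "acemdlong" is the application name; check your client's
     event log or client_state.xml for the actual names on your system. -->
<app_config>
    <app>
        <name>acemdlong</name>
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>
            <cpu_usage>1.0</cpu_usage>
        </gpu_versions>
    </app>
</app_config>
```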
Joined: 11 Jul 09 · Posts: 1639 · Credit: 10,159,968,649 · RAC: 318
> ...perhaps there is some automatic mechanism whereby they are pulled after eight failures on different computers?

Yes, on the workunit page you should see a red banner above the task list saying "Too many errors (may have bug)". Once that appears (at this project, after 8 failures), no more copies are sent out.
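To make that mechanism concrete, here is a minimal sketch of the standard BOINC server-side cutoff, assuming GPUGRID uses the stock max_error_results behaviour; the names below are illustrative, not taken from the project's actual configuration:

```python
# Sketch of BOINC's per-workunit error cutoff, assuming stock server behaviour.
# Each workunit carries a max_error_results limit; this project appears to use 8.
MAX_ERROR_RESULTS = 8

def retire_if_too_many_errors(error_count: int) -> bool:
    """Return True when a workunit should be flagged
    'Too many errors (may have bug)' and no longer reissued."""
    return error_count >= MAX_ERROR_RESULTS

# Example: the workunits discussed in this thread each reached 8 failures,
# so retire_if_too_many_errors(8) is True and no further copies are sent.
```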
Joined: 18 Jun 12 · Posts: 297 · Credit: 3,572,627,986 · RAC: 0
> Quite ironic when the Server Status says "Tasks ready to send 34,375"...

Those are Quantum Chemistry WU's for your CPU.
Joined: 10 Nov 17 · Posts: 7 · Credit: 154,876,594 · RAC: 0
Hi Tuna,

The 'Tasks ready to send' figure on the Server Status page is the total of all task types ready to send. There is a table beneath it showing a breakdown of 'Tasks by application' (although for some reason the totals always differ by one). You should see that almost all, if not all, of the unsent tasks are currently Quantum Chemistry tasks, which do not run on GPUs or on Windows; they run on Linux CPUs only. The unsent figures for Short runs and Long runs show the work available for GPUs.

I cannot remember the exact wording, but when my own PC was temporarily jailed, requests for new work in the event log reported that the daily quota (3) had been exceeded. As the log you have posted above does not say this, I suspect you might not be locked out, and the reason you are not receiving tasks is simply that currently, most of the time, there are no GPU tasks available to send. As I understand it, tasks are only sent in response to requests from the client, so it is a matter of luck whether tasks are available when your PC makes its requests.
Joined: 2 Jul 16 · Posts: 338 · Credit: 7,987,341,558 · RAC: 193
I had 8 fail on the 13th at 18:38:35 UTC and received 4 more on the 15th at 6:17:47 UTC. So if they were jailed, it was for less than 2 days.
Tuna Ertemalp · Joined: 28 Mar 15 · Posts: 46 · Credit: 1,547,496,701 · RAC: 0
Ooops. Yup. I didn't scroll down far enough, I guess... :) |
Joined: 10 Nov 17 · Posts: 7 · Credit: 154,876,594 · RAC: 0
> I had 8 fail on the 13th at 18:38:35 UTC and received 4 more on the 15th at 6:17:47 UTC. So if they were jailed, it was for less than 2 days.

Yes, since in these circumstances the event log refers to a daily quota being exceeded, I guess the lockout only lasts for a day, or the remaining part of a day.
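That matches how BOINC's per-host daily quota normally behaves. The sketch below is only a rough illustration of the stock mechanism as generally described; the base quota of 3 and any GPUGRID-specific bookkeeping are assumptions:

```python
# Rough sketch of BOINC's per-host daily task quota, as commonly configured.
# Assumptions: base quota of 3 and exact adjustment rules may differ at GPUGRID.
BASE_DAILY_QUOTA = 3

class HostQuota:
    def __init__(self):
        self.max_tasks_per_day = BASE_DAILY_QUOTA

    def on_task_error(self):
        # Repeated errors shrink the quota down to a floor of 1,
        # which is what makes a host look "jailed" after a bad batch.
        self.max_tasks_per_day = max(1, self.max_tasks_per_day - 1)

    def on_task_success(self):
        # Successful results grow the quota back toward the base value.
        # The count of tasks issued resets each day, so the lockout
        # effectively lasts no longer than the current day.
        self.max_tasks_per_day = min(BASE_DAILY_QUOTA,
                                     self.max_tasks_per_day * 2)
```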
Joined: 1 Jan 15 · Posts: 1166 · Credit: 12,260,898,501 · RAC: 1
Richard wrote on April 16th:
> All the failed / failing tasks over the last four days have had the exact string

These faulty tasks are still in the queue; I got one this morning: http://gpugrid.net/result.php?resultid=17472190
Again, as before:
ERROR: file mdioload.cpp line 81: Unable to read bincoordfile
Joined: 1 Jan 15 · Posts: 1166 · Credit: 12,260,898,501 · RAC: 1
Just now, the next faulty PABLO_p27_wild task :-((( The fourth one within 6 hours, which is really annoying. What I don't understand: don't the people at GPUGRID monitor what's happening? This specific problem has been known for 6 days now, and these faulty tasks are still in the queue. Not nice at all :-(((
Joined: 10 Nov 17 · Posts: 7 · Credit: 154,876,594 · RAC: 0
Hi Erich,

As Richard confirmed earlier in the thread, there is a mechanism whereby tasks are automatically withdrawn after eight failures, and it seems this is being relied on to remove the faulty batches (for example http://www.gpugrid.net/workunit.php?wuid=13443881). If you look at the history of the work unit that you picked up, http://gpugrid.net/workunit.php?wuid=13443681, you can see that it originated in the bad batch released on 13 April. After it failed for you, it was reissued to flashawk (who aborted it) and is currently reissued to an anonymous user, but it still requires three further failures to trigger automatic removal.

I guess that because there is currently so little GPU work available, these remaining bad units are still a significant proportion of the available work. I agree that it is disappointing, and seems a little disrespectful to donors, that no action has been taken to remove them proactively.
Joined: 1 Jan 15 · Posts: 1166 · Credit: 12,260,898,501 · RAC: 1
@STFC9F22, thank you for your explanations and insights. You are right, the way this problem is being handled by GPUGRID is a little disappointing for us donors, particularly since a donor is punished by not getting any more tasks for a certain timespan because the host is considered "unreliable". Exactly this happened to me last Friday with one of my hosts, and I think it's not okay at all. The mechanism of "host punishment" should definitely be suspended in cases where the cause of the problem is a faulty task.
Joined: 18 Jun 12 · Posts: 297 · Credit: 3,572,627,986 · RAC: 0
Hi Erich,

Most of us are, and have been, aware of this for some time. The problem is that when a computer gets too many errors it's put on a blacklist for a time; I know that right after I get an error I can't download any WU's for a while. We shouldn't be taking these hits. It's as though he loaded up these last WU's, packed his bags and left.
Joined: 21 Mar 16 · Posts: 513 · Credit: 4,673,458,277 · RAC: 0
> It's as though he loaded up these last WU's, packed his bags and left.

You know, he might be on vacation. Scientists have real lives too.
robertmiles · Joined: 16 Apr 09 · Posts: 503 · Credit: 769,991,668 · RAC: 0
Hi Erich,

When the tasks are withdrawn after eight failures, they should also no longer be counted as failures for the eight computers that ran them. While this is fixed, compute errors in 2015 and earlier years should be removed from the lists of failures for computers, even for workunits that never had a successful task.
Joined: 18 Jun 12 · Posts: 297 · Credit: 3,572,627,986 · RAC: 0
I have 8 failed WU's from 2013 through 2014 that won't go away; how do I get those removed? I've asked twice and got no response. Are there mods here anymore?
Joined: 1 Jan 15 · Posts: 1166 · Credit: 12,260,898,501 · RAC: 1
I just noticed that many more of these faulty PABLO_P27_wild tasks are being distributed, although they have an error rate of 88% (which means that nearly 9 out of 10 are bad). Can anyone explain what sense this makes?
Joined: 18 Jun 12 · Posts: 297 · Credit: 3,572,627,986 · RAC: 0
> I just noticed that many more of these faulty PABLO_P27_wild tasks are being distributed, although they have an error rate of 88% (which means that nearly 9 out of 10 are bad).

I have 4 at over 50% right now and they seem fine; they may have been reworked.