Message boards :
Number crunching :
Bad batch of WU's
Joined: 18 Jun 12 · Posts: 297 · Credit: 3,572,627,986 · RAC: 0
I just shot up to 21 errors. I was watching a group of WUs start up, and all of them pushed my three 1080s to 2100 MHz before failing. They are failing on everyone's computers; they are Adria's WUs.
Joined: 30 Apr 13 · Posts: 106 · Credit: 3,805,237,860 · RAC: 65
I just had two Pablos crash as soon as they started:
e175s2_e44s10p0f2-PABLO_p27_wild_0_sj403-0
e174s112_e63s8p1f198-PABLO_p27_wild_0_sj403_IDP-0
Update: I just checked my tasks list. I've had seven bad Pablos today. Win
Joined: 18 Jun 12 · Posts: 297 · Credit: 3,572,627,986 · RAC: 0
Strange, both of their WUs are crashing. You have seven from today; I was poking around, and everyone is getting errors on the long WUs.
Retvari Zoltan · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0
All tasks error out immediately with:
ERROR: file mdioload.cpp line 81: Unable to read bincoordfile
This should be fixed by the staff ASAP.
Joined: 18 Jun 12 · Posts: 297 · Credit: 3,572,627,986 · RAC: 0
Any word yet? Has anyone gotten any WUs, and did they fail? Where are the moderators? I haven't seen any mods since I've been back. Also, has anyone heard anything about CPDN? Their forum and everything else has been down for almost two weeks now: no WUs, not a word. Their main page is up, but there's no mention of what happened. I know there are some people here who crunch for them; I'm just wondering if they might have heard something.
Joined: 11 Jul 09 · Posts: 1639 · Credit: 10,159,968,649 · RAC: 428
There was a batch of bad PABLO tasks created between about 17:30 and 18:00 UTC yesterday afternoon. I've watched some crash, and I've aborted some others (after checking that they had failed on other machines first). But there are good tasks created before and after.
Joined: 20 Apr 15 · Posts: 285 · Credit: 1,102,216,607 · RAC: 0
I had quite a few today as well... scared me to death. Frankly, I suspected my four-month-old ASUS ROG GTX 1070 of being defective and was (figuratively) about to throw it out of the window... when I stumbled across the same error:
ERROR: file mdioload.cpp line 81: Unable to read bincoordfile
Saved by the bell :) I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.
Joined: 10 Nov 17 · Posts: 7 · Credit: 154,876,594 · RAC: 0
It seems another bad batch of PABLO_p27_wild_0_sj403_ID has just been released. On 13 April around 18:10 I received seven of these, which all failed after about 6 seconds (the event log reporting three absent files), and I was then locked out, according to the event log, for exceeding my daily quota. I have just (16 April at 11:34) received two more tasks failing in the same manner, but have temporarily set the project to 'No New Tasks' to avoid being locked out again. The files shown as absent in the event log are:
16/04/2018 12:33:53 | GPUGRID | Output file e174s447_e62s29p1f212-PABLO_p27_wild_0_sj403_IDP-0-2-RND7636_1_1 for task e174s447_e62s29p1f212-PABLO_p27_wild_0_sj403_IDP-0-2-RND7636_1 absent
16/04/2018 12:33:53 | GPUGRID | Output file e174s447_e62s29p1f212-PABLO_p27_wild_0_sj403_IDP-0-2-RND7636_1_2 for task e174s447_e62s29p1f212-PABLO_p27_wild_0_sj403_IDP-0-2-RND7636_1 absent
16/04/2018 12:33:53 | GPUGRID | Output file e174s447_e62s29p1f212-PABLO_p27_wild_0_sj403_IDP-0-2-RND7636_1_3 for task e174s447_e62s29p1f212-PABLO_p27_wild_0_sj403_IDP-0-2-RND7636_1 absent
...although as these are output files, I guess this might be a symptom of the failure rather than the cause.
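When the same batch fails repeatedly, the event log fills with "absent" lines like those above. A minimal sketch (not a GPUGRID or BOINC tool) of how to tally them per task with a regular expression; the sample lines are the ones quoted in this post:

```python
import re

# Event-log excerpt from the post above (three absent outputs, one task).
LOG = """\
16/04/2018 12:33:53 | GPUGRID | Output file e174s447_e62s29p1f212-PABLO_p27_wild_0_sj403_IDP-0-2-RND7636_1_1 for task e174s447_e62s29p1f212-PABLO_p27_wild_0_sj403_IDP-0-2-RND7636_1 absent
16/04/2018 12:33:53 | GPUGRID | Output file e174s447_e62s29p1f212-PABLO_p27_wild_0_sj403_IDP-0-2-RND7636_1_2 for task e174s447_e62s29p1f212-PABLO_p27_wild_0_sj403_IDP-0-2-RND7636_1 absent
16/04/2018 12:33:53 | GPUGRID | Output file e174s447_e62s29p1f212-PABLO_p27_wild_0_sj403_IDP-0-2-RND7636_1_3 for task e174s447_e62s29p1f212-PABLO_p27_wild_0_sj403_IDP-0-2-RND7636_1 absent
"""

# Capture the file name and the task name from each "absent" line.
ABSENT = re.compile(r"Output file (\S+) for task (\S+) absent")

# Count absent output files per task.
counts = {}
for m in ABSENT.finditer(LOG):
    counts[m.group(2)] = counts.get(m.group(2), 0) + 1

for task, n in counts.items():
    print(f"{task}: {n} absent output file(s)")
```

Feeding it a real stdoutdae.txt instead of the quoted excerpt would make batch-wide failures like this one easy to spot at a glance.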
Joined: 11 Jul 09 · Posts: 1639 · Credit: 10,159,968,649 · RAC: 428
> although as these are output files I guess it might be that this is a symptom of the failure rather than the cause.

Yes, those are symptoms, not causes. If you follow through your account (name link at top of page) / computer / workunit / task, you should be able to see something like workunit 13443713; that one was well worth aborting. And if you look at one of the errored tasks, the real cause:
ERROR: file mdioload.cpp line 81: Unable to read bincoordfile
That was the earlier batch. Today's are possibly similar, but we need to see one to be sure. Your computers are hidden, and we don't have the 'find task by name' tool here, so we'll have to ask you to look it up for us.
Edit: thanks for the heads-up, I've got one of those too. WU 13451812 is indeed the same as before, created 16 Apr 2018 11:22:43 UTC. That can go in the bit-bucket with the others.
Joined: 11 Jul 09 · Posts: 1639 · Credit: 10,159,968,649 · RAC: 428
I've just been sent another from today's bad batch: e174s49_e48s25p1f219-PABLO_p27_wild_0_sj403_IDP-0-2-RND8766. The files downloaded were:
16/04/2018 14:16:44 | GPUGRID | Started download of e174s49_e48s25p1f219-PABLO_p27_wild_0_sj403_IDP-0-LICENSE
For comparison, I'm working on an older one, resent this morning but created on 11 April: e82s5_e80s15p1f298-PABLO_p27_W60A_W76A_0_IDP-1-2-RND9196. Those files were called:
16/04/2018 09:06:22 | GPUGRID | Started download of e82s5_e80s15p1f298-PABLO_p27_W60A_W76A_0_IDP-1-LICENSE
Quite a difference.
robertmiles · Joined: 16 Apr 09 · Posts: 503 · Credit: 769,991,668 · RAC: 0
Two of my recent PABLO tasks gave "Error while computing", with this error message in the stderr file:
ERROR: file mdioload.cpp line 81: Unable to read bincoordfile
Could you check if this is due to a missing file that should have been sent with the task?
Joined: 18 Jun 12 · Posts: 297 · Credit: 3,572,627,986 · RAC: 0
I just got 10 more bad Pablos, which brings me to 34 total. The server is going to give me the boot for too many errors in such a short amount of time; I hope they figure this out soon. Richard, do you have any idea what's going on over at CPDN? Everything has been down for two weeks or so, and I was curious when they might be back up. Thanks.
Joined: 11 Jul 09 · Posts: 1639 · Credit: 10,159,968,649 · RAC: 428
> Richard, do you have any idea what's going on over at CPDN? Everything has been down for 2 weeks or so and I was curious when they might be back up, thanks.

I received the same emails as have been quoted on the BOINC message board in the thread "CPDN project going offline this afternoon", but I've had no more specific news than that. Better to consolidate all the news that we do get in that thread, I think.
Joined: 11 Jul 09 · Posts: 1639 · Credit: 10,159,968,649 · RAC: 428
> Could you check if this is due to a missing file that should have been sent with the task?

All the failed/failing tasks over the last four days have had the exact string PABLO_p27_wild in their name. I can see that I've completed at least one task successfully with that string, and also at least one other with PABLO_p27_O43806_wild. I'll go and search the message logs to see what I can find, but I think any completely missing files would show up as a problem at the download stage and never get as far as attempting to run. I think it's more likely that the contents are badly formatted in some way, and it won't be possible to compare good and bad after the event.
Edit: well, e173s16_e149s4p1f23-PABLO_p27_wild_0_sj403_IDP-1-2-RND2043 had file names with the workunit name embedded, like the second example in my comparison earlier. I think that Pablo, or whoever is submitting the work on Pablo's behalf, might be using the wrong script/template when preparing the workunits.
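The wrong-template hypothesis above suggests a simple sanity check: flag a workunit as suspect when its downloaded file names do not embed the workunit name. A hedged sketch, not project code; the "good" names come from this thread, while the bad-batch file names below are invented for illustration:

```python
def suspect(workunit_name, downloaded_files):
    """True if any downloaded file name lacks the workunit-name prefix,
    which (per the hypothesis above) may indicate a mis-templated batch."""
    return not all(f.startswith(workunit_name) for f in downloaded_files)

# Good workunit: file names embed the workunit name (example from the thread).
good_wu = "e82s5_e80s15p1f298-PABLO_p27_W60A_W76A_0_IDP-1"
good_files = ["e82s5_e80s15p1f298-PABLO_p27_W60A_W76A_0_IDP-1-LICENSE"]

# Hypothetical bad workunit: generic, template-style file names (assumed).
bad_wu = "e174s49_e48s25p1f219-PABLO_p27_wild_0_sj403_IDP-0"
bad_files = ["generic_input.coor", "generic_structure.psf"]

print(suspect(good_wu, good_files))  # False: passes the check
print(suspect(bad_wu, bad_files))    # True: flagged as suspect
```

This only tests the naming pattern Richard describes; it says nothing about whether the file contents themselves are badly formatted.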
Joined: 2 Jul 16 · Posts: 338 · Credit: 7,987,341,558 · RAC: 259
Second batch of bad tasks: 8 on the 13th and 4 more today.
Tuna Ertemalp · Joined: 28 Mar 15 · Posts: 46 · Credit: 1,547,496,701 · RAC: 0
So, one by one, my seven hosts will be jailed, wasting 12x 1080 Ti and 2x Titan X... Something is very non-ideal about that picture. Given that bad batches like this happen with some non-ignorable frequency, there should be a way to unblock in bulk the machines that were blocked/limited by a bad batch, methinks.
Joined: 18 Jun 12 · Posts: 297 · Credit: 3,572,627,986 · RAC: 0
Why don't they know what's going on? Don't they monitor their project? Thanks Richard, and sorry to bother you; I didn't think to check the BOINC forums. I hadn't heard anything on the CPDN forum even before they went down.
robertmiles · Joined: 16 Apr 09 · Posts: 503 · Credit: 769,991,668 · RAC: 0
> Could you check if this is due to a missing file that should have been sent with the task?

I'd expect the download stage to fail if the file was missing on the server, but only if the name of the file was included in the list of files sent with the task to tell the client what to download before starting. If the name of the file was missing from that list, I'd expect the download stage to download all the files on the list and report success, and the problem to become visible only when the application tries to open the file.
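That two-stage behaviour can be sketched with a toy model (not BOINC code): the client fetches only the files named in the task's file list, so a file omitted from the list produces no download error, and the failure surfaces only when the application tries to read it, much like the mdioload.cpp message in this thread:

```python
def run_task(file_list, files_the_app_needs):
    """Toy model: download stage fetches only the listed files and
    'succeeds'; the application then tries to open what it needs."""
    downloaded = set(file_list)  # download stage completes without error
    for name in files_the_app_needs:
        if name not in downloaded:
            # Application-stage failure, analogous to
            # "ERROR: file mdioload.cpp line 81: Unable to read bincoordfile"
            raise FileNotFoundError(f"Unable to read {name}")
    return "completed"

# All needed files are on the list: the task runs.
print(run_task(["input.pdb", "input.coor"], ["input.pdb", "input.coor"]))

# A needed file missing from the list: download "succeeds", the app fails.
try:
    run_task(["input.pdb"], ["input.pdb", "input.coor"])
except FileNotFoundError as e:
    print(e)
```

The file names are illustrative; the point is only that a list-level omission is invisible until run time, exactly as described above.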
Joined: 18 Jun 12 · Posts: 297 · Credit: 3,572,627,986 · RAC: 0
Why are these work units still in the queue? Is anyone running this project? Where are the moderators? This place is like an airplane on autopilot; it seems like some of these projects have no more enthusiasm.
Joined: 18 Jun 12 · Posts: 297 · Credit: 3,572,627,986 · RAC: 0
Double post
©2025 Universitat Pompeu Fabra