Message boards : Graphics cards (GPUs) : Full-atom ...is not available for your type of computer.
**Lazarus-uk** · Joined: 16 Nov 08 · Posts: 29 · Credit: 122,821,515 · RAC: 0

I've run 45 tasks now without error or any other problems. Last night I started getting this in BOINC Manager 6.4.2:

    12/12/2008 08:23:55|GPUGRID|Message from server: No work sent

So last night I upgraded to 6.4.5 to see if that would fix the problem, but I'm still getting the same messages. I'm running Win XP Pro 32-bit, Q9450 @ 3.4GHz, GTX 260 OC; WUs take a little under 6 hrs and I usually complete 2 or 3 a day. I still have one task running which should complete in ~1 hr. Hopefully it will give me another when this one finishes. I'll keep you posted.
*(name not shown)* · Joined: 17 Apr 08 · Posts: 113 · Credit: 1,656,514,857 · RAC: 0

I'm getting this as well across a couple of hosts. I did have some odd DCF values, which now seem to have started to rectify themselves. I got more work by suspending all other projects on the PC and then manually updating GPUGRID; once I got my allocation, I restarted the other work. GPU WUs run normally when I have some - they don't wait. It doesn't feel like I have a reliable work flow at the moment; somewhere between client and application, manual intervention is needed.
**Lazarus-uk** · Joined: 16 Nov 08 · Posts: 29 · Credit: 122,821,515 · RAC: 0

> I'm getting this as well across a couple of hosts. I did have some odd DCF values which now seem to have started to rectify.

Great, thanks Burdett. I manually suspended all other work, reduced the cache to 0.2 days and manually updated GPUGRID. I then got another task. I can safely go off to work for a few hours now, although I hope I don't have to do this every time I need a new WU.

Mark
*(name not shown)* · Joined: 17 Apr 08 · Posts: 113 · Credit: 1,656,514,857 · RAC: 0

...... under 6 hours on that Black Edition - that's good going.

P.
**Lazarus-uk** · Joined: 16 Nov 08 · Posts: 29 · Credit: 122,821,515 · RAC: 0

> ...... under 6 hours on that Black Edition - that's good going.

I think I average around 5.85 hrs, although the last one ran to 6.4 hrs. 700 MHz core linked to shaders @ 1475 MHz, 1200 MHz memory clock. Only problem is they don't sell them anymore, so I can't get a pair ;)
*(name not shown)* · Joined: 17 Apr 08 · Posts: 113 · Credit: 1,656,514,857 · RAC: 0

.... you sure? I bought a second at eBuyer last week - although they don't have any at the moment - and Dabs are showing them in stock, although at £252 (ouch). Great cards.

P.
*(name not shown)* · Joined: 9 Nov 08 · Posts: 3 · Credit: 226,922 · RAC: 0

Thanks, Burdett, for the tip. The workaround did it here too. I'm using BOINC client 6.3.21 for windows_x86_64 and a GeForce GTX 260.

However, this message (and its sudden appearance) is a bit odd. I remember getting the same message once when I attached the project on a PC with no compatible coprocessor at all. I also noticed that the communication delay is slightly below 24 hrs, so my PC would only poll once a day without forcing it manually. Furthermore, I got only 1 workunit and not 2 on my dual-core as before. When I poll manually after receiving one workunit (and after restarting the other CPU-bound project, currently ABC@home), I now get:

    12.12.2008 13:46:52|GPUGRID|Message from server: No work sent
    12.12.2008 13:46:52|GPUGRID|Message from server: (won't finish in time)

BOINC runs 99.8% of the time, with computation enabled 100.0% of that. When I suspend the CPU-bound project and keep the first GPUGRID workunit running, I receive a second workunit from GPUGRID, and after that I get the usual message that the per-CPU limit was reached:

    12.12.2008 13:52:53|GPUGRID|Message from server: No work sent
    12.12.2008 13:52:53|GPUGRID|Message from server: (reached per-CPU limit of 1 tasks)

After receiving two GPUGRID workunits and restarting the CPU-bound project again - funny enough - I still get the message about reaching the per-CPU limit, without any ~24 hr polling delay. Well, now crunching two ABC and one GPUGRID unit again for the time being. It seems the project team changed something on the server side, so just for the record: "It does not work well, could you please take a look again, folks?" Thank you.

Regards,
Alex
**Lazarus-uk** · Joined: 16 Nov 08 · Posts: 29 · Credit: 122,821,515 · RAC: 0

Yes, as mentioned in the other forum thread "BOINC 6.4.5 released....", the DCF seems to be way out. When I downloaded a WU, BOINC Manager said 9.30 hrs to completion, which I thought didn't seem too bad, but as soon as it started, it jumped to 180 hrs. Hopefully this will correct itself fairly quickly and then work will flow a lot more smoothly.

@Burdett: You are right, they seem to be back in stock. More expensive now, though.
**GDF** · Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0

The server should now be ignoring DCF for the time being. I will re-enable it in a couple of weeks when things have stabilized. Let me know if the situation improves.

gdf
**Krunchin-Keith [USA]** · Joined: 17 May 07 · Posts: 512 · Credit: 111,288,061 · RAC: 0

> The server should now be ignoring DCF for the time being.

OK. It has started to drop from 100; the last check shows DCF at 99.011805. I will leave my host 6133 untouched so it can be monitored to see whether it corrects back to near 1, although it will be slow, as it takes 16 hours per task. I will check again before I leave on vacation in about 5 days; then it has 10 days to run untouched.
**GDF** · Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0

What is the estimated time now when a WU starts?

gdf
*(name not shown)* · Joined: 17 Apr 08 · Posts: 113 · Credit: 1,656,514,857 · RAC: 0

On my i7 with 2x 260s, the estimated time at download now shows as 259 hours. This has 'dropped' from 278 hours over the past 5 or 6 WUs returned. If this is the pace of correction by WU return, it isn't going to rectify itself anytime soon - and this particular host normally returns 7 or 8 WUs a day.

P.
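The slow pace described above is consistent with how a damped correction behaves. The sketch below is a simplification, not the actual BOINC 6.x code: it assumes each faster-than-estimated result moves DCF only 10% of the way toward the target ratio, which is why a DCF near 100 needs dozens of returned WUs to approach 1.

```python
# Sketch of DCF correction pace. Assumption (not the exact BOINC 6.x
# algorithm): each result that finishes faster than estimated moves the
# duration correction factor 10% of the way toward the target ratio.
def results_until_corrected(dcf, target=1.0, step=0.10):
    """Count completed WUs needed before DCF is within 5% of target."""
    n = 0
    while dcf > target * 1.05:
        dcf += step * (target - dcf)  # move 10% of the way down
        n += 1
    return n

n = results_until_corrected(100.0)
print(n)  # roughly 70 results; at 7-8 WUs a day, well over a week
```

Under that assumed 10% step, the host above would indeed take more than a week of continuous returns to correct on its own.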
**Lazarus-uk** · Joined: 16 Nov 08 · Posts: 29 · Credit: 122,821,515 · RAC: 0

I think my last WU dropped a little over 2 hrs from the previous one: from 180 hrs to ~178 hrs. At this rate it will take weeks before mine corrects itself. Is there no way to manually set the estimated time to, say, 16 hrs or 20 hrs and then let the DCF correct from there? That would take into account users with slower cards and still benefit those with faster ones.
**Nightlord** · Joined: 22 Jul 08 · Posts: 61 · Credit: 5,461,041 · RAC: 0

**Do not do this if you are not comfortable editing XML. Any errors may trash your BOINC installation - use at your own risk!**

Manual adjustment of the DCF is not advised, but it is possible: https://setisvn.ssl.berkeley.edu/beta/forum_thread.php?id=1308 - substitute the references to SETI Beta with GPUGRID.

/edit: a WU running on an 8800GT started with 1375 hrs to completion; using the above method, I returned it to 14 hrs.
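For those comfortable with the risk, the manual edit amounts to rewriting one value in client_state.xml. A minimal sketch follows: `<duration_correction_factor>` is the per-project tag BOINC stores, but the matching logic, URL fragment, and target value here are illustrative assumptions - treat this as a starting point, not a turnkey tool, stop the client first, and keep a backup.

```python
import re

# Rewrite the <duration_correction_factor> of one project inside a
# client_state.xml document. Run only with the BOINC client stopped.
def set_dcf(xml_text, project_url, new_dcf):
    def fix(block_match):
        block = block_match.group(0)
        if project_url not in block:
            return block  # leave other projects untouched
        return re.sub(
            r"<duration_correction_factor>[^<]*</duration_correction_factor>",
            "<duration_correction_factor>%.6f</duration_correction_factor>" % new_dcf,
            block,
        )
    # Each project has its own <project>...</project> block.
    return re.sub(r"<project>.*?</project>", fix, xml_text, flags=re.DOTALL)
```

For example, `set_dcf(text, "gpugrid.net", 1.0)` resets only the GPUGRID block; write the result back only after checking the file still parses.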
*(name not shown)* · Joined: 17 Apr 08 · Posts: 113 · Credit: 1,656,514,857 · RAC: 0

Tried it on one host - got the estimated time down to about 14 hours, which I thought seemed a reasonable starting point to let it correct/refine itself from .... subsequent requests for work are now greeted with the 'no work for your type of computer' and 'work won't finish in time' messages as above. Any ideas as to why?
*(name not shown)* · Joined: 17 Apr 08 · Posts: 113 · Credit: 1,656,514,857 · RAC: 0

.... take it all back. After a couple of false starts, work flows again. Thanks Nightlord - great tip. Mark, you should try this.....
**Lazarus-uk** · Joined: 16 Nov 08 · Posts: 29 · Credit: 122,821,515 · RAC: 0

> .... take it all back. After a couple of false starts, work flows again. Thanks Nightlord - great tip.

Yes, thanks Nightlord, it worked a treat. It took me a few attempts to get it down to a reasonable level; the estimated time is now at 10.5 hrs. I'll leave it there for now and see how it goes.

Mark

As previously stated: anyone not confident editing client_state.xml, please make sure you know what you are doing before attempting this.
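The "few attempts" can be avoided: since the completion estimate the client displays scales linearly with DCF, the value to enter can be computed in one step. The figures below are taken from this thread and are purely illustrative.

```python
# The displayed completion estimate is proportional to the project's
# DCF, so the DCF needed for a desired estimate is a simple ratio.
def dcf_for_target(current_dcf, current_estimate_hrs, target_hrs):
    return current_dcf * target_hrs / current_estimate_hrs

# e.g. estimate shows 180 hrs with DCF around 100; to show ~10.5 hrs:
print(round(dcf_for_target(100.0, 180.0, 10.5), 3))  # 5.833
```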
**Phoneman1** · Joined: 25 Nov 08 · Posts: 51 · Credit: 980,186 · RAC: 0

> .... subsequent requests for work are now greeted with the 'no work for your type of computer' and 'work won't finish in time' messages as above.

I've noticed that too. The problem seems to be that BOINC treats the GPU project like any other project when it works out its work buffer. To see this for yourself, take a computer running just one project (not GPUGRID) and check the work buffer on the project tab in BoincView. Now attach GPUGRID on that computer: the work buffer will double (assuming the resource share for both projects is the same). The solution (pending any change to the BOINC code) is to adjust your 'maintain enough work' parameters down, or possibly to fiddle with the resource share (I've not tried that with GPUGRID yet).

Phoneman1
*(name not shown)* · Joined: 21 Dec 07 · Posts: 47 · Credit: 5,252,135 · RAC: 0

> .... subsequent requests for work are now greeted with the 'no work for your type of computer' and 'work won't finish in time' messages as above.

Phoneman1, that doesn't work either..... Manually changing the DCF got me 2 tasks for the moment, but thereafter I get the 'won't finish in time' message. I have ample STD and LTD to be able to keep a 4-task cache on a quad, but until they fix the BOINC client and/or the task estimated time, my boxes will run out of work unless constantly babysat, which can't happen.... The 24-hour back-off time increases the chances of idle time :(
**Phoneman1** · Joined: 25 Nov 08 · Posts: 51 · Credit: 980,186 · RAC: 0

I've just been fiddling with the resource share between two projects - it looks as if that just shifts the problem to the other project. Reducing the 'maintain enough work' time helped me, but if you only have 4 tasks in your queue there doesn't seem much to cut..... Guess we're still waiting on another BOINC change :-(

Phoneman1
©2025 Universitat Pompeu Fabra