Message boards : Number crunching : PABLO_redo_goal_RMSD2_KIX_CMY tasks are swamping other projects
Author | Message |
---|---|
I have a PABLO_redo_goal_RMSD2_KIX_CMY task, and I think I have figured out that it is swamping my normal Seti work requests on the host it is on. | |
ID: 50969 | Rating: 0 | rate: / Reply Quote | |
I have run those work units, but as you know I run dedicated machines for each project, so I will not be able to comment on the swamping. Hopefully someone else will be able to help you with this. | |
ID: 50970 | Rating: 0 | rate: / Reply Quote | |
Something major is going wrong on this host. I can't maintain my Seti gpu cache; it is not requesting any gpu work. | |
ID: 50971 | Rating: 0 | rate: / Reply Quote | |
This had me stumped all afternoon: after Seti came back from its outage, my task cache continued to fall and my Seti gpu work requests were for a pitiful number of seconds. I did not know whether the graphics driver upgrade had anything to do with it or was a coincidence. I have similar issues with the CPU jobs running WCG and QC concurrently with equal priorities. WCG tends to swamp out QC over time, forcing me to intervene periodically by suspending the WCG jobs, letting the cache refill with QC tasks, then unsuspending WCG. The WCG tasks are about an hour each, and currently the QC jobs are taking about 1-2 hours each. My cache level settings are fairly low at 0.5 days. Checking the event log for why QC is not requesting more work when none are in the queue shows that WCG has filled the cache, which is why I suspend it to get more QC jobs. Your situation appears to be the opposite of mine: your Pablo tasks are a lot longer than the Seti jobs, implying Seti should swamp GPUGrid, except your priorities are 900/50 Seti/GPUGrid, which may be why it doesn't. So my general theory is: whoever has the shorter jobs (WCG in my case) asks for work more often and gradually fills the cache, so the other project never gets work because the cache is always full. With your priorities, if GPUGrid has more time left to complete than your cache size, Seti won't send more work. I do not know how often Seti requests work, but if it is infrequent, GPUGrid hogs the show. I haven't figured out a cure for this yet, except maybe to do what Zalster does. I could be way off base, but it is something to look at, and I hope it helps. | |
ID: 50975 | Rating: 0 | rate: / Reply Quote | |
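The starvation theory above can be sketched as a toy simulation. This is not BOINC's real work-fetch scheduler; the cache size, job lengths, and half-hour time step are made-up numbers purely to illustrate how the shorter-job project can keep the shared cache full:

```python
# Toy model of the cache-starvation theory: two projects share one work
# cache measured in hours, and the project whose jobs are shorter finishes
# (and re-requests) more often. NOT BOINC's actual work-fetch logic.

def simulate(cache_hours=12.0, short_job=1.0, long_job=2.0, steps=200):
    queue = []                        # (project, hours_remaining) pairs
    fetched = {"short": 0, "long": 0}
    for _ in range(steps):
        # Half an hour of crunching passes; finished jobs leave the queue.
        queue = [(p, r - 0.5) for p, r in queue if r - 0.5 > 0]
        # The short-job project asks first (it finishes more often) and
        # keeps topping the cache up until nothing more fits...
        while sum(r for _, r in queue) + short_job <= cache_hours:
            queue.append(("short", short_job))
            fetched["short"] += 1
        # ...so by the time the long-job project asks, the cache is full.
        while sum(r for _, r in queue) + long_job <= cache_hours:
            queue.append(("long", long_job))
            fetched["long"] += 1
    return fetched

counts = simulate()
print(counts)  # the long-job project never manages to fetch any work
```

In this toy model the short-job project refills the cache every step and the long-job project never fits, which mirrors the "suspend WCG so QC can refill" workaround described above.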
I normally have a maximum of only 6 long/short tasks on any host at any time. That is what my 1-day cache and 900/50 Seti/GPUGrid resource share split nets me. I always have either 400 or 500 Seti tasks on board on each of my hosts. None of my other hosts had any trouble getting work; only the 477434 host was having issues. I think the scheduler has messed up that host: I show 12 tasks in progress, so 6 of them are 'ghosts', and I am pretty sure that is what messed things up. BOINC thought I had much more GPUGrid work than I actually have; I only have 6 actual tasks on board that host, as normal. | |
ID: 50976 | Rating: 0 | rate: / Reply Quote | |
The "ghost" task issue is perplexing. The GPUGrid task scheduler seems to ignore cache settings and sends a maximum of two jobs per gpu installed on any given system, if tasks are available. In your case, with 4 tasks (2 of them ghosts) per gpu, I can see why you are not receiving WUs from Seti, as you pointed out. | |
ID: 50980 | Rating: 0 | rate: / Reply Quote | |
By resetting the project I got rid of the "ghosts" in the client_state.xml file. That allowed BOINC to correctly assess each project's credit debt, so I could once again start receiving Seti tasks. | |
ID: 50982 | Rating: 0 | rate: / Reply Quote | |
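Short of a full project reset, one way to spot ghosts like this is to count the `<result>` entries per project in client_state.xml and compare against what BOINC Manager and the project website show. A minimal sketch, assuming the usual flat layout where `<workunit>`/`<result>` entries follow the `<project>` section they belong to; the file path and element names here should be checked against your own install:

```python
# Hedged sketch: attribute each <result> in client_state.xml to the most
# recently seen <project> section and count them. Compare the counts to
# the tasks you can actually see; a surplus suggests "ghost" tasks.
import xml.etree.ElementTree as ET
from collections import Counter

def results_per_project(path="client_state.xml"):
    root = ET.parse(path).getroot()
    counts = Counter()
    current = None
    for el in root:                       # iterate top-level children in order
        if el.tag == "project":
            # project_name is friendlier; fall back to the master URL
            current = el.findtext("project_name") or el.findtext("master_url")
        elif el.tag == "result" and current:
            counts[current] += 1
    return counts

if __name__ == "__main__":
    for project, n in results_per_project().items():
        print(f"{project}: {n} result(s)")
```

Run it against a copy of client_state.xml (with BOINC stopped, to get a consistent snapshot) and compare the per-project totals with the task list in the Manager.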