No GPUGRID Tasks Running
Joined: 3 Apr 11, Posts: 11, Credit: 2,052,552, RAC: 0
I've been running a number of BOINC projects, including GPUGRID, all with a resource share of 100. I recently noticed that none of my hosts has completed a GPUGRID task in over a week. Checking the BOINC event log on one host, I discovered that my hosts aren't even requesting GPUGRID tasks, much less running them. I decided I'd like to prioritize GPUGRID work over everything else, so I set GPUGRID's resource share to 100 and every other project's share to 0. However, even after forcing the clients to update and pick up the new shares, they are still requesting and running tasks from other projects and not from GPUGRID. Is this some sort of carryover effect from when all the resource shares were equal, with BOINC trying to ensure every project has done an equivalent amount of work before it moves to the updated shares? If so, is there any way to expedite the process and get BOINC prioritizing GPUGRID tasks now? Thanks in advance for any replies.
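For checking this from the command line, here is a minimal sketch, assuming boinccmd ships with your BOINC install and is on the PATH, the client accepts local RPC requests, and GPUGRID is attached under the http://www.gpugrid.net/ master URL (adjust to whatever URL your client shows):

```python
import subprocess

def boinccmd(*args):
    # Run a boinccmd RPC against the local BOINC client and return its text output.
    return subprocess.run(
        ["boinccmd", *args], capture_output=True, text=True, check=True
    ).stdout

# Show the per-project status the client is actually using, including each
# project's resource share and any scheduler backoff / "no new tasks" state.
print(boinccmd("--get_project_status"))

# Force an immediate scheduler contact with GPUGRID rather than waiting for
# the normal backoff to expire; updated shares are picked up on contact.
# (URL below is an assumption: use the master URL shown by your client.)
print(boinccmd("--project", "http://www.gpugrid.net/", "update"))
```

If --get_project_status still reports the old shares after an update, the client simply has not heard back from the projects yet; if it shows the new shares and still fetches no GPUGRID work, the problem is likely elsewhere (GPU detection, driver, or server-side work availability).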
MarkJ (Joined: 24 Dec 08, Posts: 738, Credit: 200,909,904, RAC: 0)
Your computers are hidden, so we can't see any detail to help. The first thing is to check that BOINC thinks you have a compatible graphics card (according to the BOINC startup messages). Make sure "Use GPU always" is selected, and make sure you aren't running one of the recent buggy NVIDIA drivers (i.e. 295.x or 296.x under Windows).

BOINC blog
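A rough way to check the first point without scrolling through the Manager, under the same assumption that boinccmd is available, is to pull the client's message log and filter for the GPU detection lines it prints at startup:

```python
import subprocess

# Fetch the client's message log (the same text shown in the BOINC event log).
messages = subprocess.run(
    ["boinccmd", "--get_messages"], capture_output=True, text=True, check=True
).stdout

# Keep only lines that mention GPU detection; on an NVIDIA host the startup
# lines normally name the card, driver version and CUDA compute capability.
for line in messages.splitlines():
    if any(token in line for token in ("CUDA", "NVIDIA", "GPU")):
        print(line)
```

If no NVIDIA/CUDA device appears in those startup lines, the client will never request GPUGRID work no matter how the resource shares are set.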
dskagcommunity (Joined: 28 Apr 11, Posts: 463, Credit: 958,266,958, RAC: 34)
That's not a problem with GPUGRID alone; I'm interested in a solution too, because I can't define any backup GPU projects on any of my GPU machines (it doesn't really matter which). POEM, for example, is stronger than MilkyWay.

DSKAG Austria Research Team: http://www.research.dskag.at
Joined: 3 Apr 11, Posts: 11, Credit: 2,052,552, RAC: 0
Thanks for your reply, MarkJ. I checked everything you listed and it all looks good so far.
Joined: 3 Apr 11, Posts: 11, Credit: 2,052,552, RAC: 0
It looks like one of my hosts did finally download a GPUGRID task, but it failed shortly after starting computation with the error I've posted below. Anyone know what the error message means?

    <core_client_version>7.0.25</core_client_version>
    <![CDATA[
    <message>
    The system cannot find the path specified. (0x3) - exit code 3 (0x3)
    </message>
    <stderr_txt>
    # Using device 0
    # There are 2 devices supporting CUDA
    # Device 0: "GeForce GTX 460"
    # Clock rate: 1.80 GHz
    # Total amount of global memory: 805306368 bytes
    # Number of multiprocessors: 7
    # Number of cores: 56
    # Device 1: "GeForce GTX 460"
    # Clock rate: 1.44 GHz
    # Total amount of global memory: 805306368 bytes
    # Number of multiprocessors: 7
    # Number of cores: 56
    MDIO: cannot open file "restart.coor"
    SWAN: FATAL : swanMemcpyDtoH failed
    Assertion failed: 0, file swanlib_nv.c, line 390
    This application has requested the Runtime to terminate it in an unusual way.
    Please contact the application's support team for more information.
    </stderr_txt>
    ]]>
Damaraland (Joined: 7 Nov 09, Posts: 152, Credit: 16,181,924, RAC: 0)
It would be useful if you gave more info about your system: driver version, OS, and so on. A link to one of the failed tasks would help too.
Joined: 8 Mar 12, Posts: 411, Credit: 2,083,882,218, RAC: 0
Off the top of my head, I can also see that you have SLI enabled, and that one of your cards is overclocked much higher than the other.
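For the clock mismatch, one quick cross-check is to ask the driver directly. A sketch, assuming the installed driver's nvidia-smi supports the --query-gpu interface (the field names here are nvidia-smi's, not anything GPUGRID-specific):

```python
import subprocess

# Print each GPU's index, name and current graphics/memory clocks so a large
# gap between two "identical" cards (like the 1.80 GHz vs 1.44 GHz reported
# in the task's stderr above) stands out immediately.
result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=index,name,clocks.current.graphics,clocks.current.memory",
     "--format=csv"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```

A device-to-host copy that dies mid-run, as in the swanMemcpyDtoH failure above, is a common symptom of an unstable overclock, so returning the faster card to stock clocks (and disabling SLI while crunching) is a reasonable first experiment.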