GTX 770 won't get work

Michael H.W. Weber
Joined: 9 Feb 16 · Posts: 78 · Credit: 656,229,684 · RAC: 0
Message 47212 - Posted: 14 May 2017, 7:40:21 UTC

I don't know what's wrong, but on this system I haven't been able to get work for weeks, although the server indicates that tasks are available - and I have completed a lot of tasks on this same machine before:

https://www.gpugrid.net/show_host_detail.php?hostid=342877

Michael.
President of Rechenkraft.net - Germany's first and largest distributed computing organization.
Erich56
Joined: 1 Jan 15 · Posts: 1166 · Credit: 12,260,898,501 · RAC: 1
Message 47215 - Posted: 14 May 2017, 14:01:06 UTC
Last modified: 14 May 2017, 14:03:24 UTC

Could you please tell us which OS this is, which graphics card (including driver version), and which crunching software (acemd ...)?

When did the machine stop crunching? On April 14?
Michael H.W. Weber
Joined: 9 Feb 16 · Posts: 78 · Credit: 656,229,684 · RAC: 0
Message 47222 - Posted: 15 May 2017, 11:59:53 UTC - in response to Message 47215.  
Last modified: 15 May 2017, 12:00:52 UTC

Could you please tell us which OS this is, which graphics card (including driver version), and which crunching software (acemd ...)?

Ubuntu Linux 16.04 LTS x64, kernel: 4.4.0-75-generic
BOINC version: 7.6.31
CPU: GenuineIntel Intel(R) Core(TM) i7 CPU 870 @ 2.93GHz [Family 6 Model 30 Stepping 5] (8 processors)
GPU: NVIDIA GeForce GTX 770 (1998MB)
GPU driver: 375.39

Credits generated on this machine: 73,260,609

When did the machine stop crunching? On April 14?

Unfortunately, I can't specify the date precisely, but the problem has existed for several weeks (I can't find the last successfully processed task in the GPUGRID database of my user account anymore).


Michael.
Seek the Truth: Jesus Is LORD...
Joined: 21 Mar 15 · Posts: 10 · Credit: 48,092,354 · RAC: 0
Message 47242 - Posted: 16 May 2017, 20:34:30 UTC
Last modified: 16 May 2017, 20:37:39 UTC

I have the very same issue (no longer getting any work), only with a different GPU.
I bought a new machine six weeks ago (4 April 2017):
Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz [Family 6 Model 94 Stepping 3]
(8 processors)
with
NVIDIA GeForce GTX 950M (2048MB) driver: 369.9

http://gpugrid.net/show_host_detail.php?hostid=421103

For about two weeks I was getting WUs just fine. Then suddenly, out of the blue (I made no changes), I was no longer getting ANY tasks.
I have tried about everything I can: I set the computing preferences
store at least __ days and
store up to an additional ___ days
each to various larger values (e.g., 2, 3, 4, 6.6, etc.).
I have reset the project (several times).
I have even 'Removed' the project and then added it again (I did this twice).
ALL to no avail. I have sort of given up and have been using my GPU for another project.
However, I would prefer to be able to run GPUgrid tasks.
(Since this thread is about 770s, I think I need to post to a new thread.)

Thanks,
LP
Essential biomedical science:
At fertilization, a new and unique member of the species homo sapiens is formed.
Abortion wounds the Mother, and kills a very tiny baby girl or baby boy.
Life!
Les P., PhD Prof. Engr.
Betting Slip
Joined: 5 Jan 09 · Posts: 670 · Credit: 2,498,095,550 · RAC: 0
Message 47248 - Posted: 16 May 2017, 21:37:55 UTC - in response to Message 47242.  
Last modified: 16 May 2017, 21:41:14 UTC

Update the driver and you should get the new app and WUs.
Michael H.W. Weber
Joined: 9 Feb 16 · Posts: 78 · Credit: 656,229,684 · RAC: 0
Message 47258 - Posted: 17 May 2017, 11:35:30 UTC - in response to Message 47248.  

Update the driver and you should get the new app and WUs.

In my case, the driver is certainly not the problem: it is the same driver that I use for my GTX 970, and that machine receives one WU after another without any issues.
Moreover, this proprietary NVIDIA driver (375.39) is the latest you get via Ubuntu's console update.

Michael.
Seek the Truth: Jesus Is LORD...
Joined: 21 Mar 15 · Posts: 10 · Credit: 48,092,354 · RAC: 0
Message 47263 - Posted: 17 May 2017, 20:15:24 UTC - in response to Message 47248.  
Last modified: 17 May 2017, 20:21:48 UTC

The driver is not the problem in my case either.
First, it is the same driver I used for the first two weeks I had the PC (the first two weeks of April 2017), when I was getting WUs just fine.
Second, I am getting and running WUs from other projects (primarily Einstein-at-home).
Third, I did update drivers yesterday, and I am still not able to get any work (even when GPUGRID has generated plenty of tasks).

Moreover, I do wish that GPUGRID had better 'diagnostic' (or whatever) messages when no WUs are sent to a host even though work is available.

I really do wish some project administrator would see these messages!

Keeping my fingers crossed,
LP
Michael H.W. Weber
Joined: 9 Feb 16 · Posts: 78 · Credit: 656,229,684 · RAC: 0
Message 47285 - Posted: 19 May 2017, 13:42:44 UTC
Last modified: 19 May 2017, 13:43:33 UTC

Today I updated to Linux kernel 4.4.0-78-generic keeping NVIDIA driver 375.39. Still no tasks for my GTX 770 even after resetting GPUGRID.

Curiously, when auto-updating the GTX 970 machine to the same kernel, the NVIDIA driver got additionally updated to version 375.51.

Michael.
Jim1348
Joined: 28 Jul 12 · Posts: 819 · Credit: 1,591,285,971 · RAC: 0
Message 47293 - Posted: 20 May 2017, 12:51:50 UTC - in response to Message 47285.  

Today I updated to Linux kernel 4.4.0-78-generic keeping NVIDIA driver 375.39. Still no tasks for my GTX 770 even after resetting GPUGRID.

I know little about Linux drivers, except that they must be matched to the Linux version, and work for me when I get them from the Ubuntu software center. And even if the drivers are apparently installed properly, they must implement CUDA properly to work here.

Given that the GTX 770 is an older card and you are using the 375.39 drivers, it is much more likely a Linux problem than a GPUGrid problem.
Michael H.W. Weber
Joined: 9 Feb 16 · Posts: 78 · Credit: 656,229,684 · RAC: 0
Message 47294 - Posted: 20 May 2017, 16:23:59 UTC - in response to Message 47293.  

Given that the GTX 770 is an older card and you are using the 375.39 drivers, it is much more likely a Linux problem than a GPUGrid problem.

No, because otherwise the other machine (GTX 970) with the same Linux kernel would also not receive tasks. But it does on a daily basis.

I know for sure that the GTX 970 box does use CUDA 8.
I am not sure whether the GTX 770 also does (originally that machine was setup using CUDA 7), but assume(d) that the autoupdate (apt update & apt upgrade) will also update CUDA as it does update the NVIDIA drivers.

Michael.
Jim1348
Joined: 28 Jul 12 · Posts: 819 · Credit: 1,591,285,971 · RAC: 0
Message 47295 - Posted: 20 May 2017, 17:20:48 UTC - in response to Message 47294.  

You seem to be assuming that a Kepler card runs CUDA, or even has it available in those drivers, the same way a Maxwell card does, just because the Linux and driver version numbers are comparable for the two cards. I would not assume that.
Retvari Zoltan
Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0
Message 47296 - Posted: 20 May 2017, 19:21:19 UTC

Please restart your PC, and check the first lines of the event log of BOINC manager for the GPU report.
It should look similar to this:
2017. 05. 17. 3:44:59  CUDA: NVIDIA GPU 0: GeForce GTX 1080 (driver version 382.05, CUDA version 8.0, compute capability 6.1, 4096MB, 3557MB available, 9654 GFLOPS peak)
2017. 05. 17. 3:44:59  OpenCL: NVIDIA GPU 0: GeForce GTX 1080 (driver version 382.05, device version OpenCL 1.2 CUDA, 8192MB, 3557MB available, 9654 GFLOPS peak)
Could you please post yours?
Michael H.W. Weber
Joined: 9 Feb 16 · Posts: 78 · Credit: 656,229,684 · RAC: 0
Message 47306 - Posted: 22 May 2017, 9:15:09 UTC - in response to Message 47296.  

Please restart your PC, and check the first lines of the event log of BOINC manager for the GPU report.
It should look similar to this:
2017. 05. 17. 3:44:59  CUDA: NVIDIA GPU 0: GeForce GTX 1080 (driver version 382.05, CUDA version 8.0, compute capability 6.1, 4096MB, 3557MB available, 9654 GFLOPS peak)
2017. 05. 17. 3:44:59  OpenCL: NVIDIA GPU 0: GeForce GTX 1080 (driver version 382.05, device version OpenCL 1.2 CUDA, 8192MB, 3557MB available, 9654 GFLOPS peak)
Could you please post yours?

Here it is:

Mo 22 Mai 2017 11:06:31 CEST |  | CUDA: NVIDIA GPU 0: GeForce GTX 770 (driver version 375.39, CUDA version 8.0, compute capability 3.0, 1999MB, 1948MB available, 3693 GFLOPS peak)
Mo 22 Mai 2017 11:06:31 CEST |  | OpenCL: NVIDIA GPU 0: GeForce GTX 770 (driver version 375.39, device version OpenCL 1.2 CUDA, 1999MB, 1948MB available, 3693 GFLOPS peak)
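Incidentally, the fields that matter in this discussion (driver version, CUDA version, compute capability) can be pulled out of such a startup line automatically. This is a small illustrative sketch, not part of BOINC; the regex assumes the exact log format shown above:

```python
import re

# Matches BOINC's CUDA startup line, e.g.
# "CUDA: NVIDIA GPU 0: GeForce GTX 770 (driver version 375.39,
#  CUDA version 8.0, compute capability 3.0, ...)"
CUDA_LINE = re.compile(
    r"CUDA: NVIDIA GPU \d+: (?P<model>[^(]+) \("
    r"driver version (?P<driver>[\d.]+), "
    r"CUDA version (?P<cuda>[\d.]+), "
    r"compute capability (?P<cc>[\d.]+)"
)

def parse_gpu_line(line):
    """Return model/driver/CUDA/compute-capability from a BOINC CUDA log line."""
    m = CUDA_LINE.search(line)
    if m is None:
        return None
    info = m.groupdict()
    info["model"] = info["model"].strip()
    return info
```

Run against the GTX 770 line above, this yields driver 375.39, CUDA 8.0 and compute capability 3.0, which is the combination under discussion in this thread.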

Michael.
Betting Slip
Joined: 5 Jan 09 · Posts: 670 · Credit: 2,498,095,550 · RAC: 0
Message 47307 - Posted: 22 May 2017, 10:40:38 UTC - in response to Message 47306.  
Last modified: 22 May 2017, 10:43:29 UTC

You have a CC 3.0 card and need an earlier, pre-CUDA 8.0 driver; then you will get the CUDA 6.5 app.

I use the 359.6 driver for Windows but don't know what the equivalent driver is for Linux.

If you look at my computers, one of them (the 660 Ti) has an earlier driver, as it is also a CC 3.0 card.
Retvari Zoltan
Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0
Message 47308 - Posted: 22 May 2017, 16:33:04 UTC - in response to Message 47307.  

You have a CC 3.0 card and need an earlier, pre-CUDA 8.0 driver; then you will get the CUDA 6.5 app.
According to the applications page, there's no CUDA 6.5 client for Linux, so this won't work under Linux.
Retvari Zoltan
Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0
Message 47309 - Posted: 22 May 2017, 17:05:50 UTC - in response to Message 47306.  
Last modified: 22 May 2017, 17:07:35 UTC

You should check the venue of your host #342877. Then check in the "GPUGrid settings" of your profile that this venue has "Use NVidia GPU" selected, and that "Run only the selected applications" has "ACEMD long runs (8-12 hours on fastest GPU)" selected.

If the settings are OK and you still don't receive work, you should edit/create the cc_config.xml file in the BOINC data folder to include the work_fetch_debug option.
If there's no cc_config.xml, you should create one with the following content:
<cc_config>
   <log_flags>
       <work_fetch_debug>1</work_fetch_debug>
   </log_flags>
</cc_config>

If there's already a cc_config.xml, then you should insert the following after its first line:
   <log_flags>
       <work_fetch_debug>1</work_fetch_debug>
   </log_flags>

Then click Settings -> Re-read configuration files, and update the GPUGrid project.
Then post the messages that appear in the event log after the line:
GPUGRID | update requested by user
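For those who prefer to script the change, the same edit can be made programmatically. This is an illustrative sketch, not part of BOINC; the data-directory path varies by installation (e.g. /var/lib/boinc-client on Ubuntu), so the path below is only an example:

```python
import os
import xml.etree.ElementTree as ET

def enable_work_fetch_debug(path):
    """Create or update cc_config.xml so <work_fetch_debug> is set to 1."""
    if os.path.exists(path):
        tree = ET.parse(path)
        root = tree.getroot()
    else:
        # No config yet: start a fresh <cc_config> document.
        root = ET.Element("cc_config")
        tree = ET.ElementTree(root)
    log_flags = root.find("log_flags")
    if log_flags is None:
        log_flags = ET.SubElement(root, "log_flags")
    flag = log_flags.find("work_fetch_debug")
    if flag is None:
        flag = ET.SubElement(log_flags, "work_fetch_debug")
    flag.text = "1"
    tree.write(path)

# Example (path is an assumption, adjust to your install):
# enable_work_fetch_debug("/var/lib/boinc-client/cc_config.xml")
```

The function is idempotent, so running it repeatedly won't duplicate the log_flags section.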
Michael H.W. Weber
Joined: 9 Feb 16 · Posts: 78 · Credit: 656,229,684 · RAC: 0
Message 47310 - Posted: 22 May 2017, 20:05:24 UTC
Last modified: 22 May 2017, 20:09:39 UTC

As said above, all the settings are correct, as that machine picked up tasks on a daily basis. Nothing was changed on the website's end.

Also, I have never used an .xml configuration file for GPUGRID.
What exactly is this work_fetch_debug about? Or does it just report in more detail what's actually happening when no tasks are received?

Something has changed at GPUGRID's end such that my card does not receive work anymore.
That card is listed as being supported by the project (GTX 770, CC/SM: 3.0).

This Linux machine completed tasks using CUDA 7.5. So, why can't I just backport to CUDA 7.5, reset the project and receive the former app which worked just perfectly?
It appears to me that the auto-downloaded CUDA 8 app just won't work under Linux when shader model 3 is in use. I have no idea why this should be the case, though...

Michael.
President of Rechenkraft.net - Germany's first and largest distributed computing organization.
Retvari Zoltan
Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0
Message 47311 - Posted: 22 May 2017, 22:42:51 UTC - in response to Message 47310.  

Also, I never used an .xml configuration files for GPUGRID.
What exactly is this work_fetch_debug about? Or is it just reporting more elaborate what's actually happening when not receiving tasks?
Exactly. cc_config.xml lets you set options that are not accessible through the GUI of the BOINC manager (this parameter does not change the settings of GPUGrid, or any project). See the BOINC client configuration wiki for details, and see this post on how to read the elaborate info provided by work_fetch_debug.

Something has changed at GPUGRID's end such that my card does not receive work anymore.
That card is listed as being supported by the project (GTX 770, CC/SM: 3.0).

This Linux machine completed tasks using CUDA 7.5. So, why can't I just backport to CUDA 7.5, reset the project and receive the former app which worked just perfectly?
The previous GPUGrid client was CUDA 6.5; your host has processed 126 short runs + 249 long runs with it.
The CUDA 6.5 client has been deprecated; only the CUDA 8.0 client is available for Linux.
I think you wouldn't receive CUDA 6.5 tasks either.

It appears to me that the auto-downloaded CUDA 8 app just won't work under Linux when shader model 3 is in use.
I have no idea why this should be the case, though...
The single CUDA 8.0 task your host has received finished successfully, so the CUDA 8 app is working on your host.
We need to find the reason why your host does not ask for / is not provided with new tasks.

BTW, how many GPU projects is your host attached to?
Michael H.W. Weber
Joined: 9 Feb 16 · Posts: 78 · Credit: 656,229,684 · RAC: 0
Message 47313 - Posted: 23 May 2017, 8:33:50 UTC - in response to Message 47309.  
Last modified: 23 May 2017, 8:42:49 UTC

You should check the venue of your host #342877. Then you should check the "GPUGrid settings" in your profile that the venue of your host #342877 should have the "Use NVidia GPU" selected, also the "Run only the selected applications" should have the "ACEMD long runs (8-12 hours on fastest GPU)" selected.

If the settings are OK, and you still don't receive work, then you should edit/create the cc_config.xml file in the BOINC manager's data folder to include the work_fetch_debug option.
If there's no cc_config.xml, you should create one with the following content:
<cc_config>
   <log_flags>
       <work_fetch_debug>1</work_fetch_debug>
   </log_flags>
</cc_config>

If there's already a cc_config.xml, then you should insert the following after its first line:
   <log_flags>
       <work_fetch_debug>1</work_fetch_debug>
   </log_flags>

Then click settings -> re-read configuration files, and update the GPUGrid project.
Then post us the messages in the event log after the line:
GPUGRID | update requested by user

Here are the messages:

Di 23 Mai 2017 10:27:18 CEST |  | Re-reading cc_config.xml
Di 23 Mai 2017 10:27:18 CEST |  | Config: GUI RPCs allowed from:
Di 23 Mai 2017 10:27:18 CEST |  | log flags: file_xfer, sched_ops, task, work_fetch_debug
Di 23 Mai 2017 10:27:18 CEST |  | [work_fetch] Request work fetch: Core client configuration
Di 23 Mai 2017 10:27:22 CEST |  | [work_fetch] ------- start work fetch state -------
Di 23 Mai 2017 10:27:22 CEST |  | [work_fetch] target work buffer: 86400.00 + 0.00 sec
Di 23 Mai 2017 10:27:22 CEST |  | [work_fetch] --- project states ---
Di 23 Mai 2017 10:27:22 CEST | GPUGRID | [work_fetch] REC 6457.612 prio -1.000 can request work
Di 23 Mai 2017 10:27:22 CEST |  | [work_fetch] --- state for CPU ---
Di 23 Mai 2017 10:27:22 CEST |  | [work_fetch] shortfall 299527.82 nidle 0.00 saturated 24073.06 busy 0.00
Di 23 Mai 2017 10:27:22 CEST | GPUGRID | [work_fetch] share 0.000 account manager prefs
Di 23 Mai 2017 10:27:22 CEST |  | [work_fetch] --- state for NVIDIA GPU ---
Di 23 Mai 2017 10:27:22 CEST |  | [work_fetch] shortfall 84909.97 nidle 0.00 saturated 1490.03 busy 0.00
Di 23 Mai 2017 10:27:22 CEST | GPUGRID | [work_fetch] share 0.000 project is backed off  (resource backoff: 693.27, inc 600.00)
Di 23 Mai 2017 10:27:22 CEST |  | [work_fetch] ------- end work fetch state -------
Di 23 Mai 2017 10:27:22 CEST |  | [work_fetch] No project chosen for work fetch
Di 23 Mai 2017 10:27:26 CEST | GPUGRID | update requested by user
Di 23 Mai 2017 10:27:26 CEST |  | [work_fetch] Request work fetch: project updated by user
Di 23 Mai 2017 10:27:27 CEST |  | [work_fetch] ------- start work fetch state -------
Di 23 Mai 2017 10:27:27 CEST |  | [work_fetch] target work buffer: 86400.00 + 0.00 sec
Di 23 Mai 2017 10:27:27 CEST |  | [work_fetch] --- project states ---
Di 23 Mai 2017 10:27:27 CEST | GPUGRID | [work_fetch] REC 6457.612 prio -1.000 can request work
Di 23 Mai 2017 10:27:27 CEST |  | [work_fetch] --- state for CPU ---
Di 23 Mai 2017 10:27:27 CEST |  | [work_fetch] shortfall 299556.98 nidle 0.00 saturated 24061.56 busy 0.00
Di 23 Mai 2017 10:27:27 CEST | GPUGRID | [work_fetch] share 0.000 account manager prefs
Di 23 Mai 2017 10:27:27 CEST |  | [work_fetch] --- state for NVIDIA GPU ---
Di 23 Mai 2017 10:27:27 CEST |  | [work_fetch] shortfall 84914.93 nidle 0.00 saturated 1485.07 busy 0.00
Di 23 Mai 2017 10:27:27 CEST | GPUGRID | [work_fetch] share 1.000
Di 23 Mai 2017 10:27:27 CEST |  | [work_fetch] ------- end work fetch state -------
Di 23 Mai 2017 10:27:27 CEST | GPUGRID | [work_fetch] set_request() for NVIDIA GPU: ninst 1 nused_total 0.00 nidle_now 0.00 fetch share 1.00 req_inst 1.00 req_secs 84914.93
Di 23 Mai 2017 10:27:27 CEST | GPUGRID | [work_fetch] request: CPU (0.00 sec, 0.00 inst) NVIDIA GPU (84914.93 sec, 1.00 inst)
Di 23 Mai 2017 10:27:27 CEST | GPUGRID | Sending scheduler request: Requested by user.
Di 23 Mai 2017 10:27:27 CEST | GPUGRID | Requesting new tasks for NVIDIA GPU
Di 23 Mai 2017 10:27:29 CEST | GPUGRID | Scheduler request completed: got 0 new tasks
Di 23 Mai 2017 10:27:29 CEST | GPUGRID | No tasks sent
Di 23 Mai 2017 10:27:29 CEST |  | [work_fetch] Request work fetch: RPC complete
Di 23 Mai 2017 10:27:34 CEST |  | [work_fetch] ------- start work fetch state -------
Di 23 Mai 2017 10:27:34 CEST |  | [work_fetch] target work buffer: 86400.00 + 0.00 sec
Di 23 Mai 2017 10:27:34 CEST |  | [work_fetch] --- project states ---
Di 23 Mai 2017 10:27:34 CEST | GPUGRID | [work_fetch] REC 6457.612 prio 0.000 can't request work: scheduler RPC backoff (25.93 sec)
Di 23 Mai 2017 10:27:34 CEST |  | [work_fetch] --- state for CPU ---
Di 23 Mai 2017 10:27:34 CEST |  | [work_fetch] shortfall 299601.76 nidle 0.00 saturated 24047.75 busy 0.00
Di 23 Mai 2017 10:27:34 CEST | GPUGRID | [work_fetch] share 0.000 account manager prefs
Di 23 Mai 2017 10:27:34 CEST |  | [work_fetch] --- state for NVIDIA GPU ---
Di 23 Mai 2017 10:27:34 CEST |  | [work_fetch] shortfall 84921.89 nidle 0.00 saturated 1478.11 busy 0.00
Di 23 Mai 2017 10:27:34 CEST | GPUGRID | [work_fetch] share 0.000
Di 23 Mai 2017 10:27:34 CEST |  | [work_fetch] ------- end work fetch state -------
Di 23 Mai 2017 10:27:34 CEST |  | [work_fetch] No project chosen for work fetch
Di 23 Mai 2017 10:27:45 CEST |  | Contacting account manager at https://bam.boincstats.com/
Di 23 Mai 2017 10:27:47 CEST |  | Account manager: BAM! User: 3739, Michael H.W. Weber
Di 23 Mai 2017 10:27:47 CEST |  | Account manager: BAM! Host: 653689
Di 23 Mai 2017 10:27:47 CEST |  | Account manager: Number of BAM! connections for this host: 7467
Di 23 Mai 2017 10:27:47 CEST |  | Account manager contact succeeded
Di 23 Mai 2017 10:28:00 CEST |  | [work_fetch] Request work fetch: Backoff ended for GPUGRID
Di 23 Mai 2017 10:28:04 CEST |  | [work_fetch] ------- start work fetch state -------
Di 23 Mai 2017 10:28:04 CEST |  | [work_fetch] target work buffer: 86400.00 + 0.00 sec
Di 23 Mai 2017 10:28:04 CEST |  | [work_fetch] --- project states ---
Di 23 Mai 2017 10:28:04 CEST | GPUGRID | [work_fetch] REC 6457.612 prio -1.000 can request work
Di 23 Mai 2017 10:28:04 CEST |  | [work_fetch] --- state for CPU ---
Di 23 Mai 2017 10:28:04 CEST |  | [work_fetch] shortfall 299801.40 nidle 0.00 saturated 23987.94 busy 0.00
Di 23 Mai 2017 10:28:04 CEST | GPUGRID | [work_fetch] share 0.000 account manager prefs
Di 23 Mai 2017 10:28:04 CEST |  | [work_fetch] --- state for NVIDIA GPU ---
Di 23 Mai 2017 10:28:04 CEST |  | [work_fetch] shortfall 84953.52 nidle 0.00 saturated 1446.48 busy 0.00
Di 23 Mai 2017 10:28:04 CEST | GPUGRID | [work_fetch] share 1.000
Di 23 Mai 2017 10:28:04 CEST |  | [work_fetch] ------- end work fetch state -------
Di 23 Mai 2017 10:28:04 CEST | GPUGRID | [work_fetch] set_request() for NVIDIA GPU: ninst 1 nused_total 0.00 nidle_now 0.00 fetch share 1.00 req_inst 1.00 req_secs 84953.52
Di 23 Mai 2017 10:28:04 CEST | GPUGRID | [work_fetch] request: CPU (0.00 sec, 0.00 inst) NVIDIA GPU (84953.52 sec, 1.00 inst)
Di 23 Mai 2017 10:28:04 CEST | GPUGRID | Sending scheduler request: To fetch work.
Di 23 Mai 2017 10:28:04 CEST | GPUGRID | Requesting new tasks for NVIDIA GPU
Di 23 Mai 2017 10:28:05 CEST | GPUGRID | Scheduler request completed: got 0 new tasks
Di 23 Mai 2017 10:28:05 CEST | GPUGRID | No tasks sent
Di 23 Mai 2017 10:28:05 CEST | GPUGRID | [work_fetch] backing off NVIDIA GPU 740 sec
Di 23 Mai 2017 10:28:05 CEST |  | [work_fetch] Request work fetch: RPC complete
Di 23 Mai 2017 10:28:10 CEST |  | [work_fetch] ------- start work fetch state -------
Di 23 Mai 2017 10:28:10 CEST |  | [work_fetch] target work buffer: 86400.00 + 0.00 sec
Di 23 Mai 2017 10:28:10 CEST |  | [work_fetch] --- project states ---
Di 23 Mai 2017 10:28:10 CEST | GPUGRID | [work_fetch] REC 6457.612 prio 0.000 can't request work: scheduler RPC backoff (25.92 sec)
Di 23 Mai 2017 10:28:10 CEST |  | [work_fetch] --- state for CPU ---
Di 23 Mai 2017 10:28:10 CEST |  | [work_fetch] shortfall 299843.54 nidle 0.00 saturated 23976.43 busy 0.00
Di 23 Mai 2017 10:28:10 CEST | GPUGRID | [work_fetch] share 0.000 account manager prefs
Di 23 Mai 2017 10:28:10 CEST |  | [work_fetch] --- state for NVIDIA GPU ---
Di 23 Mai 2017 10:28:10 CEST |  | [work_fetch] shortfall 84959.51 nidle 0.00 saturated 1440.49 busy 0.00
Di 23 Mai 2017 10:28:10 CEST | GPUGRID | [work_fetch] share 0.000 project is backed off  (resource backoff: 734.50, inc 600.00)
Di 23 Mai 2017 10:28:10 CEST |  | [work_fetch] ------- end work fetch state -------
Di 23 Mai 2017 10:28:10 CEST |  | [work_fetch] No project chosen for work fetch
Di 23 Mai 2017 10:28:37 CEST |  | [work_fetch] Request work fetch: Backoff ended for GPUGRID
Di 23 Mai 2017 10:28:41 CEST |  | [work_fetch] ------- start work fetch state -------
Di 23 Mai 2017 10:28:41 CEST |  | [work_fetch] target work buffer: 86400.00 + 0.00 sec
Di 23 Mai 2017 10:28:41 CEST |  | [work_fetch] --- project states ---
Di 23 Mai 2017 10:28:41 CEST | GPUGRID | [work_fetch] REC 6457.298 prio -1.000 can request work
Di 23 Mai 2017 10:28:41 CEST |  | [work_fetch] --- state for CPU ---
Di 23 Mai 2017 10:28:41 CEST |  | [work_fetch] shortfall 300039.40 nidle 0.00 saturated 23916.62 busy 0.00
Di 23 Mai 2017 10:28:41 CEST | GPUGRID | [work_fetch] share 0.000 account manager prefs
Di 23 Mai 2017 10:28:41 CEST |  | [work_fetch] --- state for NVIDIA GPU ---
Di 23 Mai 2017 10:28:41 CEST |  | [work_fetch] shortfall 84989.53 nidle 0.00 saturated 1410.47 busy 0.00
Di 23 Mai 2017 10:28:41 CEST | GPUGRID | [work_fetch] share 0.000 project is backed off  (resource backoff: 704.19, inc 600.00)
Di 23 Mai 2017 10:28:41 CEST |  | [work_fetch] ------- end work fetch state -------
Di 23 Mai 2017 10:28:41 CEST |  | [work_fetch] No project chosen for work fetch
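For convenience, the outcome of such a log can be summarized with a short script. This is an illustrative sketch; the regexes assume the exact phrasing in the log above:

```python
import re

def summarize_work_fetch(log_text):
    """Summarize scheduler results and GPU backoffs from a work_fetch_debug log."""
    # "Scheduler request completed: got N new tasks"
    got = [int(n) for n in re.findall(r"got (\d+) new tasks", log_text)]
    # "[work_fetch] backing off NVIDIA GPU N sec"
    backoffs = [float(s) for s in
                re.findall(r"backing off NVIDIA GPU (\d+(?:\.\d+)?) sec", log_text)]
    return {
        "requests": len(got),            # how many scheduler requests completed
        "tasks_received": sum(got),      # total tasks sent by the server
        "gpu_backoffs_sec": backoffs,    # backoff durations imposed afterwards
    }
```

Applied to the log above, it would report two completed requests, zero tasks received, and a 740-second GPU backoff, i.e. the client is asking correctly but the server keeps answering with nothing.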

Remember: All GPUGRID projects are chosen for this machine.

Michael.
Retvari Zoltan
Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0
Message 47318 - Posted: 25 May 2017, 12:35:09 UTC - in response to Message 47313.  

Thank you for posting the details, but I'm none the wiser.

The GPUGrid project was in resource backoff (= it won't ask for work), but at 10:28:00 that backoff ended:

Di 23 Mai 2017 10:27:45 CEST | | Contacting account manager at https://bam.boincstats.com/
Di 23 Mai 2017 10:27:47 CEST | | Account manager: BAM! User: 3739, Michael H.W. Weber
Di 23 Mai 2017 10:27:47 CEST | | Account manager: BAM! Host: 653689
Di 23 Mai 2017 10:27:47 CEST | | Account manager: Number of BAM! connections for this host: 7467
Di 23 Mai 2017 10:27:47 CEST | | Account manager contact succeeded
Di 23 Mai 2017 10:28:00 CEST | | [work_fetch] Request work fetch: Backoff ended for GPUGRID
Di 23 Mai 2017 10:28:04 CEST | | [work_fetch] ------- start work fetch state -------
Di 23 Mai 2017 10:28:04 CEST | | [work_fetch] target work buffer: 86400.00 + 0.00 sec
Di 23 Mai 2017 10:28:04 CEST | | [work_fetch] --- project states ---
Di 23 Mai 2017 10:28:04 CEST | GPUGRID | [work_fetch] REC 6457.612 prio -1.000 can request work
Di 23 Mai 2017 10:28:04 CEST | | [work_fetch] --- state for CPU ---
Di 23 Mai 2017 10:28:04 CEST | | [work_fetch] shortfall 299801.40 nidle 0.00 saturated 23987.94 busy 0.00
Di 23 Mai 2017 10:28:04 CEST | GPUGRID | [work_fetch] share 0.000 account manager prefs

Di 23 Mai 2017 10:28:04 CEST | | [work_fetch] --- state for NVIDIA GPU ---
Di 23 Mai 2017 10:28:04 CEST | | [work_fetch] shortfall 84953.52 nidle 0.00 saturated 1446.48 busy 0.00
Di 23 Mai 2017 10:28:04 CEST | GPUGRID | [work_fetch] share 1.000

Di 23 Mai 2017 10:28:04 CEST | | [work_fetch] ------- end work fetch state -------
Di 23 Mai 2017 10:28:04 CEST | GPUGRID | [work_fetch] set_request() for NVIDIA GPU: ninst 1 nused_total 0.00 nidle_now 0.00 fetch share 1.00 req_inst 1.00 req_secs 84953.52
Di 23 Mai 2017 10:28:04 CEST | GPUGRID | [work_fetch] request: CPU (0.00 sec, 0.00 inst) NVIDIA GPU (84953.52 sec, 1.00 inst)
Di 23 Mai 2017 10:28:04 CEST | GPUGRID | Sending scheduler request: To fetch work.
Di 23 Mai 2017 10:28:04 CEST | GPUGRID | Requesting new tasks for NVIDIA GPU
Di 23 Mai 2017 10:28:05 CEST | GPUGRID | Scheduler request completed: got 0 new tasks
Di 23 Mai 2017 10:28:05 CEST | GPUGRID | No tasks sent
Di 23 Mai 2017 10:28:05 CEST | GPUGRID | [work_fetch] backing off NVIDIA GPU 740 sec


The BOINC manager asked for 84953.52 seconds of work for 1 NVIDIA GPU instance, but the project did not send tasks (while there was work in the queue).

Another project is using your GPU, so GPUGrid should be able to as well.

Remember: All GPUGRID projects are chosen for this machine.

That's OK.
Do you have "Use NVidia GPU" and "Use Graphics Processing Unit (GPU) if available" selected in the GPUGrid preferences?
Do you have at least 8 GB of free disk space on the partition where the BOINC data directory resides?
How many other GPU projects is this host attached to?

You could try increasing the work buffer (it is set to 1 day now) for testing.
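One way to change the buffer locally (instead of via the web preferences) is a global_prefs_override.xml file in the BOINC data directory; a sketch assuming the standard override mechanism, with example values:

```xml
<global_preferences>
   <work_buf_min_days>2</work_buf_min_days>
   <work_buf_additional_days>1</work_buf_additional_days>
</global_preferences>
```

After saving it, re-read the config files (or restart the client) so the new buffer takes effect.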

If nothing works, try the following:
1. Detach this host from all projects.
2. Uninstall the BOINC manager.
3. Restart the host.
4. Install the BOINC manager.
5. Attach this host only to GPUGrid, and test it.
6. Attach this host to other projects one by one, only one per day.

©2025 Universitat Pompeu Fabra