no cuda work requested

Message boards : Graphics cards (GPUs) : no cuda work requested
HPew

Joined: 7 Apr 09
Posts: 10
Credit: 534,714
RAC: 0
Message 8686 - Posted: 21 Apr 2009, 19:41:12 UTC

I've just installed BOINC 6.6.20 on WinXP. Whenever BOINC asks for new work, the server returns these three messages:

No work sent
Full-atom molecular dynamics on Cell processor is not available for your type of computer.
cuda app exists for Full-atom molecular dynamics but no cuda work requested.


Why would an absolutely default set-up not work properly?
ID: 8686
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester

Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 8694 - Posted: 21 Apr 2009, 21:56:36 UTC - in response to Message 8686.  

Your driver may be too old or, more likely, your GPU is not supported. Can't say for sure, though, since your computers are hidden.

MrS
Scanning for our furry friends since Jan 2002
ID: 8694
HPew
Message 8696 - Posted: 21 Apr 2009, 22:01:07 UTC

I. Am. Not. An. Idiot.

The card is a G92, the driver is the latest from nvidia as of yesterday--182.50. The system has crunched two WUs but is being refused further units.
ID: 8696
Paul D. Buck

Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 8699 - Posted: 21 Apr 2009, 22:06:56 UTC - in response to Message 8696.  

I. Am. Not. An. Idiot.

The card is a G92, the driver is the latest from nvidia as of yesterday--182.50. The system has crunched two WUs but is being refused further units.

No one said you were.

But when you hide your computers, ETA can't answer some of those questions with a quick peek at them.

Since these are the most common problems, they are also the most frequently offered solutions.

There are issues with the 6.6.20 and 6.6.23 versions that affect some and not others. Next suggestion is to do a project reset on GPU Grid. If that does not work, reset all debt ...
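(For reference: one way to reset all debt, assuming your 6.x client supports it, is the `<zero_debts>` option in `cc_config.xml`; the client zeroes its project debts when it reads the file. The exact option set varies by client version, so treat this as a sketch rather than gospel.)

```xml
<!-- cc_config.xml, placed in the BOINC data directory -->
<cc_config>
  <options>
    <!-- zero all project debts on startup / config re-read -->
    <zero_debts>1</zero_debts>
  </options>
</cc_config>
```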
ID: 8699
ExtraTerrestrial Apes
Message 8700 - Posted: 21 Apr 2009, 22:07:13 UTC - in response to Message 8696.  

I. Am. Not. An. Idiot.


Well then, sorry... but from your post there was no way to tell.
Do you still have some WUs running, or are you dry?

MrS
Scanning for our furry friends since Jan 2002
ID: 8700
Alain Maes

Joined: 8 Sep 08
Posts: 63
Credit: 1,696,957,181
RAC: 0
Message 8704 - Posted: 21 Apr 2009, 22:18:55 UTC - in response to Message 8696.  

I. Am. Not. An. Idiot.



Of course you are not, since you are capable of asking a perfectly acceptable question. But please also accept that ETA is one of the most respected people here, as everyone else is respected by definition until proven otherwise; he and all the others are just trying to help within their means and capabilities, no offence intended.
His first reaction is also pretty standard for those who follow this forum, since the possible reasons for failure he mentioned are pretty common even for "not idiots".
So if you really want serious help, please describe your system and problem in more detail. Unhiding your computers will help a lot here, since it will allow us to see the results of the failing WUs, including any error messages.
Hope we will be able to help you to help science and humanity.

kind regards.

Alain
ID: 8704
HPew
Message 8705 - Posted: 21 Apr 2009, 22:27:13 UTC - in response to Message 8700.  
Last modified: 21 Apr 2009, 22:29:11 UTC

The afflicted PC will run out of WUs around 4 AM. The message tab is filled with red.

At some more reasonable hour I'll detach gpugrid and re-attach to see if that fixes it.

Maes: The WUs are not failing, the server is refusing to give me more.

Apology to ETA: Sorry for my abruptness.
ID: 8705
Phil Klassen

Joined: 6 Sep 07
Posts: 18
Credit: 14,764,147
RAC: 0
Message 8707 - Posted: 22 Apr 2009, 1:57:03 UTC - in response to Message 8705.  

I had the same message a few hours ago on my i7 with 3 GTX cards. I just played a game, rebooted, and then it downloaded some work. I'm not sure exactly what happened, because I have PS3s and GPUs running; the message came up on my i7 and then it fixed itself. Maybe the reboot had something to do with it.
ID: 8707
(_KoDAk_)

Joined: 18 Oct 08
Posts: 43
Credit: 6,924,807
RAC: 0
Message 8712 - Posted: 22 Apr 2009, 9:07:45 UTC

GPU results ready to send: 0
ID: 8712
HPew
Message 8723 - Posted: 22 Apr 2009, 14:45:51 UTC

*Sigh* The message tab shows an 'ask & refusal' every hour or so, but when this machine had completely run out of work and I manually updated it was given a single new WU amid all the refusals.
ID: 8723
ExtraTerrestrial Apes
Message 8739 - Posted: 22 Apr 2009, 20:17:50 UTC - in response to Message 8705.  

Apology to ETA: Sorry for my abruptness.


You're welcome :)

Let's try to solve your problem then. Today I remembered that I also got the message "no cuda work requested" when I tried 6.6.20. I quickly reverted to 6.5.0 and the box has been running fine since then. You could also try 6.6.23, which supposedly fixed some of the issues in 6.6.20.

Until now I haven't seen anyone else reporting such behaviour with 6.6.20, so it seems to be a rare case.

MrS
Scanning for our furry friends since Jan 2002
ID: 8739
Paul D. Buck
Message 8758 - Posted: 23 Apr 2009, 5:11:53 UTC - in response to Message 8739.  
Last modified: 23 Apr 2009, 5:13:58 UTC

Apology to ETA: Sorry for my abruptness.


You're welcome :)

Let's try to solve your problem then. Today I remembered that I also got the message "no cuda work requested" when I tried 6.6.20. I quickly reverted to 6.5.0 and the box has been running fine since then. You could also try 6.6.23, which supposedly fixed some of the issues in 6.6.20.

Until now I haven't seen anyone else reporting such behaviour with 6.6.20, so it seems to be a rare case.

Um, no...

I am becoming less and less convinced that it is isolated. Sorry ... I thought I was being clear.

It looks like both 6.6.20 and 6.6.23 have a problem with debt accumulating in one direction and never being properly updated. The eventual result on GPU Grid is that you get fewer and fewer tasks pending until you start to run dry. Version 6.6.20 had some other problems with suspending tasks, and something else that could really mess things up, which I think was the source of the tasks that took exceptionally long times to run; 6.6.23 seems to have fixed that. The 6.6.20 problem may mostly affect people running multi-GPU setups.

But that means that 6.6.20 and 6.6.23 are not, in my opinion, ready for prime time.

I *DO* like the new time accounting, which lets you see more accurately what is happening with the GPU Grid tasks, so for the moment I am personally sticking with 6.6.20 on one system and 6.6.23 on my main one. That is also because I am trying to call attention to these issues, and the only way to collect the logs is to run the application. Sadly, as usual, the developers don't seem to be that responsive to feedback ...

To put it another way, they are very good at ignoring answers to questions they don't want asked.

{edit}

For those having any kind of problem with 6.6.x, try 6.5.0 and if the problem goes away, stay there. Sadly, I will stay on point and will be sending reports from the front as I get them. Failing that, you can always ask directly ...
ID: 8758
ExtraTerrestrial Apes
Message 8797 - Posted: 23 Apr 2009, 19:27:15 UTC - in response to Message 8758.  

No work sent
Full-atom molecular dynamics on Cell processor is not available for your type of computer.
cuda app exists for Full-atom molecular dynamics but no cuda work requested.


I understand this message to mean that BOINC does request work from GPU-Grid, but does not request CUDA work (which would be extremely strange / stupid), and hence the server is not sending CUDA work.
Am I totally wrong here?

MrS
Scanning for our furry friends since Jan 2002
ID: 8797
Stefan Ledwina

Joined: 16 Jul 07
Posts: 464
Credit: 298,573,998
RAC: 0
Message 8799 - Posted: 23 Apr 2009, 19:36:55 UTC - in response to Message 8797.  

I also understand it that way...

pixelicious.at - my little photoblog
ID: 8799
Bymark

Joined: 23 Feb 09
Posts: 30
Credit: 5,897,921
RAC: 0
Message 8801 - Posted: 23 Apr 2009, 19:57:32 UTC - in response to Message 8799.  
Last modified: 23 Apr 2009, 20:52:35 UTC

I think this is normal if you run with a cache of 0.
My one computer with a 250:

Network usage: "Computer is connected to the Internet about every" is set to 0 days (leave blank or 0 if always connected; BOINC will try to maintain at least this much work), and "Maintain enough work for an additional" (enforced by version 5.10+) is set to 0 days.
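(Those two cache settings correspond to the work-buffer fields in `global_prefs.xml`; a minimal sketch, assuming the 6.x field names `work_buf_min_days` and `work_buf_additional_days`, so check your client's file before editing.)

```xml
<global_preferences>
  <!-- "connect about every": minimum work buffer, in days -->
  <work_buf_min_days>0.0</work_buf_min_days>
  <!-- "maintain enough work for an additional": extra buffer, in days -->
  <work_buf_additional_days>0.0</work_buf_additional_days>
</global_preferences>
```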

Seti is more polite:
I am running GPUGRID and Seti at 20 / 1 resource shares, and I always have GPU work on both.


23.4.2009 03:11:27 GPUGRID Sending scheduler request: To fetch work.
23.4.2009 03:11:27 GPUGRID Requesting new tasks
23.4.2009 03:11:32 GPUGRID Scheduler request completed: got 0 new tasks
23.4.2009 03:11:32 GPUGRID Message from server: No work sent
23.4.2009 03:11:32 GPUGRID Message from server: Full-atom molecular dynamics on Cell processor is not available for your type of computer.
23.4.2009 03:11:32 GPUGRID Message from server: CUDA app exists for Full-atom molecular dynamics but no CUDA work requested
23.4.2009 03:44:08 malariacontrol.net Sending scheduler request: To fetch work.
23.4.2009 03:44:08 malariacontrol.net Requesting new tasks
23.4.2009 03:44:13 malariacontrol.net Scheduler request completed: got 0 new tasks
23.4.2009 05:30:30 malariacontrol.net Sending scheduler request: To fetch work.
23.4.2009 05:30:30 malariacontrol.net Requesting new tasks
23.4.2009 05:30:35 malariacontrol.net Scheduler request completed: got 0 new tasks
23.4.2009 05:30:35 malariacontrol.net Message from server: No work sent
23.4.2009 05:30:35 malariacontrol.net Message from server: No work is available for malariacontrol.net
23.4.2009 05:30:35 malariacontrol.net Message from server: No work is available for Prediction of Malaria Prevalence
23.4.2009 06:07:50 SETI@home Sending scheduler request: To fetch work.
23.4.2009 06:07:50 SETI@home Requesting new tasks
23.4.2009 06:07:55 SETI@home Scheduler request completed: got 0 new tasks
23.4.2009 06:07:55 SETI@home Message from server: No work sent
23.4.2009 06:07:55 SETI@home Message from server: No work available for the applications you have selected. Please check your settings on the web site.
23.4.2009 06:07:55 SETI@home Message from server: CPU jobs are available, but your preferences are set to not accept them
"Silakka"
Hello from Turku > Åbo.
ID: 8801
Paul D. Buck
Message 8803 - Posted: 23 Apr 2009, 21:16:20 UTC - in response to Message 8797.  

No work sent
Full-atom molecular dynamics on Cell processor is not available for your type of computer.
cuda app exists for Full-atom molecular dynamics but no cuda work requested.


I understand this message to mean that BOINC does request work from GPU-Grid, but does not request CUDA work (which would be extremely strange / stupid), and hence the server is not sending CUDA work.
Am I totally wrong here?

No, but the BOINC client is.

We may be chasing two bugs here. I am seeing unconstrained growth of GPU debt, which essentially causes BOINC to stop asking for work from GPU Grid (another guy on Rosetta has it stop asking for work from Rosetta, so it is not simply a GPU-side issue) ... Richard Haselgrove has been demonstrating that the client may be dry of GPU work but insists on asking for CPU work, the inverse of what it is supposed to be doing.

I am running 6.6.23, where the problem seems to be more acute than in 6.6.20, which I am running on my Q9300, where I don't seem to be seeing the same issue yet.

Sorry ETA I am not doing well and the brain is slightly mushy so I may not be as clear as usual... I keep thinking I have explained this ...

I am going to PM you my e-mail address so you can send me wake-up idiot calls (we can also Skype if you like ... hey that rhymes) ...

My bigger point is that AT THE MOMENT ... I cannot recommend either 6.6.20 or 6.6.23 wholeheartedly. 6.6.20, I am pretty sure, has a bug that really causes issues on multi-GPU systems and may cause improper suspensions and long-running tasks (though it does not seem to be doing that on the Q9300 at the moment, a single-GPU system). 6.6.23 has fixes for a couple of GPU things but seems to have a broken debt issue (which MAY also exist in 6.6.20; perhaps the bug fix for one thing exposed the bug ... or the bug fix is buggy ... or the bug fix broke something else ... you get the idea).

Which is why I suggest that anyone having work-fetch issues fall back to 6.5.0; if the issues go away, then stay ... or get used to resetting the debts every day or so ... (which causes other problems) ...
ID: 8803
jrobbio

Joined: 13 Mar 09
Posts: 59
Credit: 324,366
RAC: 0
Message 8809 - Posted: 23 Apr 2009, 23:44:33 UTC - in response to Message 8803.  


We may be chasing two bugs here. I am seeing unconstrained growth of GPU debt, which essentially causes BOINC to stop asking for work from GPU Grid (another guy on Rosetta has it stop asking for work from Rosetta, so it is not simply a GPU-side issue) ... Richard Haselgrove has been demonstrating that the client may be dry of GPU work but insists on asking for CPU work, the inverse of what it is supposed to be doing.

My bigger point is that AT THE MOMENT ... I cannot recommend either 6.6.20 or 6.6.23 wholeheartedly. 6.6.20, I am pretty sure, has a bug that really causes issues on multi-GPU systems and may cause improper suspensions and long-running tasks (though it does not seem to be doing that on the Q9300 at the moment, a single-GPU system). 6.6.23 has fixes for a couple of GPU things but seems to have a broken debt issue (which MAY also exist in 6.6.20; perhaps the bug fix for one thing exposed the bug ... or the bug fix is buggy ... or the bug fix broke something else ... you get the idea).

Which is why I suggest that anyone having work-fetch issues fall back to 6.5.0; if the issues go away, then stay ... or get used to resetting the debts every day or so ... (which causes other problems) ...


Have you read this about GPU work fetch in 6.6.*, and also GpuSched from 6.3.*?

On the face of it, it looks to me like this design harms those that dedicate 100% effort to an individual project, as the LTD will eventually become too small.

If something changed between 6.6.20 and 6.6.23, it's probably worth looking at the changesets from 17770 to 17812. I didn't see anything that struck me as obvious.

An earlier commit 17544 looks potentially interesting, which came out in 6.6.14.

Rob
ID: 8809
JAMC

Joined: 16 Nov 08
Posts: 28
Credit: 12,688,454
RAC: 0
Message 8813 - Posted: 24 Apr 2009, 1:58:21 UTC
Last modified: 24 Apr 2009, 2:49:23 UTC

One of my quads, running XP Home and 6.5.0 with (2) GTX 260s, has stopped requesting new work and is also spitting out these messages ... manual updates with other projects suspended still request 0 new tasks ... this rig has been running for many weeks without problems, and it's down to 1 task running. What's up??


4/23/2009 8:50:44 PM|GPUGRID|Sending scheduler request: Requested by user. Requesting 0 seconds of work, reporting 0 completed tasks
4/23/2009 8:50:49 PM|GPUGRID|Scheduler request completed: got 0 new tasks

4/23/2009 8:51:41 PM|GPUGRID|Started download of m110000-GIANNI_pYIpYV1604-7-m110000-GIANNI_pYIpYV1604-6-10-RND_3
4/23/2009 8:51:42 PM|GPUGRID|Temporarily failed download of m110000-GIANNI_pYIpYV1604-7-m110000-GIANNI_pYIpYV1604-6-10-RND_3: HTTP error
4/23/2009 8:51:42 PM|GPUGRID|Backing off 42 min 59 sec on download of m110000-GIANNI_pYIpYV1604-7-m110000-GIANNI_pYIpYV1604-6-10-RND_3

4/23/2009 8:52:07 PM|GPUGRID|Started download of m110000-GIANNI_pYIpYV1604-7-m110000-GIANNI_pYIpYV1604-6-10-RND_2
4/23/2009 8:52:08 PM|GPUGRID|Temporarily failed download of m110000-GIANNI_pYIpYV1604-7-m110000-GIANNI_pYIpYV1604-6-10-RND_2: HTTP error
4/23/2009 8:52:08 PM|GPUGRID|Backing off 3 hr 42 min 3 sec on download of m110000-GIANNI_pYIpYV1604-7-m110000-GIANNI_pYIpYV1604-6-10-RND_2
4/23/2009 8:52:09 PM|GPUGRID|Started download of m110000-GIANNI_pYIpYV1604-7-m110000-GIANNI_pYIpYV1604-6-10-RND_1

4/23/2009 8:52:10 PM|GPUGRID|Temporarily failed download of m110000-GIANNI_pYIpYV1604-7-m110000-GIANNI_pYIpYV1604-6-10-RND_1: HTTP error
4/23/2009 8:52:10 PM|GPUGRID|Backing off 3 hr 35 min 18 sec on download of m110000-GIANNI_pYIpYV1604-7-m110000-GIANNI_pYIpYV1604-6-10-RND_1

4/23/2009 8:52:14 PM|GPUGRID|Started download of m110000-GIANNI_pYIpYV1604-7-m110000-GIANNI_pYIpYV1604-6-10-RND_3
4/23/2009 8:52:15 PM|GPUGRID|Temporarily failed download of m110000-GIANNI_pYIpYV1604-7-m110000-GIANNI_pYIpYV1604-6-10-RND_3: HTTP error
4/23/2009 8:52:15 PM|GPUGRID|Backing off 2 hr 47 min 53 sec on download of m110000-GIANNI_pYIpYV1604-7-m110000-GIANNI_pYIpYV1604-6-10-RND_3

Well, I guess it is not requesting any new work because there are 3 tasks stuck in Transfers, repeatedly trying to download: 'HTTP error'??

Never mind... aborted all 3 downloads and got 3 new ones...
ID: 8813
Paul D. Buck
Message 8831 - Posted: 24 Apr 2009, 12:51:41 UTC

Posted this this morning:

Ok, I have a glimmer, not sure if I got it right ... but let me try to put my limited understanding down on paper and see if one of you chrome domes can straighten me out.

In the design intent (GpuWorkFetch) we have the following:

A project is "debt eligible" for a resource R if:

• P is not backed off for R, and the backoff interval is not at the max.
• P is not suspended via GUI, and "no more tasks" is not set
Debt is adjusted as follows:

• For each debt-eligible project P, the debt is increased by the amount it's owed (delta T times its resource share relative to other debt-eligible projects) minus the amount it got (the number of instance-seconds).
• An offset is added to debt-eligible projects so that the net change is zero. This prevents debt-eligible projects from drifting away from other projects.
• An offset is added so that the maximum debt across all projects is zero (this ensures that when a new project is attached, it starts out debt-free).

What I am seeing, and my friend on GPU Grid/Rosetta is seeing, is a slow but inexorable growth of debt that eventually "chokes off" one project or another. I THINK I can explain why we are seeing different effects. His is easier.

He is dual project, Rosetta and GPU Grid. His ability to get Rosetta work is choking off.

The problem is that his debt is growing on Rosetta because of GPU Grid's lack of CPU work. So BOINC "thinks" that GPU Grid is "owed" CPU time and vainly tries to get work from that project. Eventually, because resource share is now biased by compute capability, the multiplier drives his debt into the dirt pretty fast, and soon he has trouble keeping a queue of CPU work from Rosetta, because the client wants to get CPU work from GPU Grid to restore "balance".

I have the opposite problem for the same reason. But mine is because I have 4 GPUs in an 8-core system, so my bias is in the other direction ... eventually driving my GPU debt out of balance, because I am accumulating GPU debt against all 30 other projects ...

My Q9300 sees less of this, because the quad core is likely fairly balanced against the GTX 280 card, so the debt driver is acting more slowly (best guess). To put it another way, the 30 projects are building up GPU debt at about the same rate that GPU Grid is running up CPU debt in the other direction ... sooner or later, though, I do hit walls there and have had to hit debt reset to get back into balance.

This may ALSO partly explain Richard's observation of nil calls to projects (which I also see), where the system is trying manfully to get work from a project that cannot supply it. In my case it is often a call to GPU Grid to get CPU work. Not going to happen.

Not sure how to cure this, in that I think there are at LEAST two problems buried in there, if not three.

In effect, we really, really need to track which projects supply CPU work, which supply GPU work, and which supply both ... and by that I mean the ones that the participant has allowed. So for me, GPU debt should only reflect activity on GPU Grid, my sole attached project with GPU work, and GPU Grid should never accumulate CPU debt.
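The debt-update rule quoted from the GpuWorkFetch design notes above can be sketched in a few lines. This is an illustrative model only; the names and data layout are mine, not the actual BOINC client code.

```python
def update_debts(projects, dt):
    """Apply one debt-update step for a single resource.

    projects: list of dicts with keys 'debt' (seconds), 'share'
    (resource share), 'used' (instance-seconds consumed during dt),
    and 'eligible' (debt-eligible for this resource).
    """
    eligible = [p for p in projects if p['eligible']]
    if not eligible:
        return
    total_share = sum(p['share'] for p in eligible)
    # 1. Credit each eligible project what it is owed (its share of dt)
    #    minus what it actually got (instance-seconds).
    deltas = []
    for p in eligible:
        delta = dt * p['share'] / total_share - p['used']
        p['debt'] += delta
        deltas.append(delta)
    # 2. Offset eligible projects so their net change is zero, so they
    #    don't drift away from ineligible projects.
    mean_delta = sum(deltas) / len(eligible)
    for p in eligible:
        p['debt'] -= mean_delta
    # 3. Shift all projects so the maximum debt is zero; a newly
    #    attached project then starts out debt-free.
    top = max(p['debt'] for p in projects)
    for p in projects:
        p['debt'] -= top
```

Running this repeatedly with one project consuming all the instance-seconds shows the dynamic described in this thread: the consumer's debt sinks ever lower while the starved project sits at zero, and the client keeps trying to "repay" the starved project with work requests it cannot satisfy.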
ID: 8831
Sleepy_63

Joined: 27 Mar 09
Posts: 1
Credit: 5,292,694
RAC: 0
Message 8841 - Posted: 24 Apr 2009, 14:50:33 UTC
Last modified: 24 Apr 2009, 15:01:08 UTC

I've been getting the 'No cuda work requested' messages too. Since it had been days since I got a GPUGrid WU but SETI-cuda was running fine, I knew my hardware and drivers were okay.

I reset the GPUGrid project and immediately got workunits.

FWIW.

Edit: but all is not well. With a quad-core CPU and a dual-GPU card (Nvidia 9800GX2), I should be running 6 tasks: 4 CPU and 2 CUDA. It just paused a SETI-cuda task to run the GPUgrid one, leaving only 5 tasks active... <sigh>
ID: 8841

©2025 Universitat Pompeu Fabra