Message boards : Number crunching : What a silly idea
**Betting Slip** (Joined: 5 Jan 09, Posts: 670, Credit: 2,498,095,550, RAC: 0)

I attached my 8600 GT 512 MB to see how long it would take to complete a unit. I already knew it would finish within the 5-day deadline, so there was no danger of losing the unit or wasting time. However, it's running in a machine with a Q9300, so four CPU cores. What's silly is that instead of giving me one WU for my one GPU, the project gave me one, then another, then another, and it would have given me a fourth; realising it was going to send a WU for every CPU core, I stopped it from getting new work.

Having blocked the fourth, I aborted the third, because there was no way the card was going to finish three WUs in five days. That unit would have sat on my HDD for five days and then had to be resent, so by aborting I saved the project time, and it has already been resent. I may still have to abort the second unit if the first takes more than 2.5 days, because it will not finish in time.

That's what makes it silly: the project is delaying its own results by sending a WU for every CPU core when it's a GPU project. My conscience is clear that my card can easily complete one WU in at most 5, and probably 3, days. So why can't the administrators of this project change the preferences to one unit per GPU at a time by default, and then give users the option to increase it when they have fast cards? You know it makes sense! It saves project and user time.

EDIT to add: may I also have a response to this post, please; I need to hear the words.

Radio Caroline, the world's most famous offshore pirate radio station. Great music since April 1964. Support Radio Caroline Team - Radio Caroline
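The complaint comes down to simple deadline arithmetic. A minimal sketch, assuming the ~2.5 days per task and 5-day deadline quoted in the post (the function name is invented for illustration):

```python
# Rough feasibility check for a queue of GPU tasks on a single card.
# Assumes tasks run one at a time and each takes ~2.5 days (figure from the post above).
DAYS_PER_TASK = 2.5   # estimated runtime of one WU on the 8600 GT
DEADLINE_DAYS = 5.0   # GPUGRID deadline quoted in the post

def tasks_that_fit(days_per_task: float, deadline_days: float) -> int:
    """Number of queued tasks that can still meet the deadline if started now."""
    return int(deadline_days // days_per_task)

print(tasks_that_fit(DAYS_PER_TASK, DEADLINE_DAYS))  # -> 2: a third (or fourth) WU is wasted
```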
Joined: 11 Jul 09 | Posts: 1639 | Credit: 10,159,968,649 | RAC: 351

I support Betting Slip wholeheartedly. This is long-standing, idiotic behaviour, and he has accurately attributed it to 'one work request per CPU core'. But I don't think it's something that GPUGrid can cure by reconfiguration. GPUGrid need to report it to BOINC as a bug in the infrastructure, and get it fixed that way for the benefit of all projects.
Joined: 10 Apr 08 | Posts: 254 | Credit: 16,836,000 | RAC: 0

> GPUGrid need to report it to BOINC as a bug in the infrastructure, and get it fixed that way for the benefit of all projects.

It is a well-known issue. They are already working on it.
Joined: 11 Jul 09 | Posts: 1639 | Credit: 10,159,968,649 | RAC: 351

> GPUGrid need to report it to BOINC as a bug in the infrastructure, and get it fixed that way for the benefit of all projects.

Sorry, that was a bit blunt for a first post on a new project! Yes, I'm aware that CUDA support in BOINC is very much a 'work in progress'. I'm active on both the alpha and dev mailing lists, and putting in bug reports and suggestions for improvement where I can. But you're probably aware that the BOINC developers took a bit of a time-out to concentrate on Facebook/GridRepublic development. During that period, I think it would be fairer to say that this issue was on the "to do" list rather than in active development. Since the over-fetching that Betting Slip reports will hurt this project (by delaying the return of the supernumerary results), I thought you might be in a good position to give them a bit of a nudge.

While we're here, may I point out a related issue? My first task on my Q9300/9800GTX+ combo, p1030000-IBUCH_3_pYEEI_com_0907-1-3-RND5364, reported:

```
<rsc_fpops_est>250000000000000.000000</rsc_fpops_est>
<fpops_cumulative>3436310000000000.000000</fpops_cumulative>
```

Those figures are out of kilter by a factor of roughly 14 to 1. The 71-GIANNI_BINDX I reported yesterday is even worse. Since BOINC uses `rsc_fpops_est` plus DCF to estimate runtimes, new users (DCF = 1) are in danger of severe over-allocation until the completion of their first task, at which point their DCF will jump to somewhere near my current 16.7332 and the estimates will be corrected accordingly. You may be seeing a large number of aborts/late reports from newly attached hosts with the current settings.
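To make the over-allocation risk concrete, here is a minimal sketch of how the client-side estimate scales with DCF, using the standard BOINC relation runtime ≈ rsc_fpops_est × DCF / host speed. The fpops figures are the ones quoted above; the host speed is an invented placeholder, not a number from the thread:

```python
# Sketch of BOINC's client-side runtime estimate: runtime ~= rsc_fpops_est * DCF / host_flops.
RSC_FPOPS_EST = 250_000_000_000_000.0       # <rsc_fpops_est> from the task above
FPOPS_CUMULATIVE = 3_436_310_000_000_000.0  # <fpops_cumulative> actually reported
HOST_FLOPS = 50e9                           # hypothetical effective host speed (placeholder)

def estimated_runtime_s(rsc_fpops_est: float, dcf: float, host_flops: float) -> float:
    """Seconds the client expects the task to take."""
    return rsc_fpops_est * dcf / host_flops

new_host = estimated_runtime_s(RSC_FPOPS_EST, dcf=1.0, host_flops=HOST_FLOPS)
corrected = estimated_runtime_s(RSC_FPOPS_EST, dcf=16.7332, host_flops=HOST_FLOPS)
print(f"new host estimate: {new_host/3600:.1f} h, after first task: {corrected/3600:.1f} h")
print(f"fpops mismatch factor: {FPOPS_CUMULATIVE / RSC_FPOPS_EST:.1f}")  # ~13.7
```

Until DCF catches up, a new host thinks each task is an order of magnitude shorter than it really is, so it happily queues far more work than it can return on time.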
**GDF** (Joined: 14 Mar 07, Posts: 1958, Credit: 629,356, RAC: 0)

We tried and tried to tie the number of WUs to GPUs only, but the current BOINC code is still quite buggy in this respect. Until BOINC handles this issue properly, there is nothing more we can do than keep giving feedback to the developers. As they will be coming to Barcelona in September, I think just a little more patience will be required.

gdf
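For what it's worth, the behaviour everyone is asking for is simply a cap on in-progress work per GPU rather than per CPU core. A minimal illustration of that scheduling rule follows; this is not BOINC's actual server code, and the function name and per-GPU limit are assumptions made for the sketch:

```python
# Illustration only: cap in-progress tasks by GPU count instead of CPU-core count.
# Not BOINC's real scheduler; names and the limit constant are invented for this sketch.
MAX_WUS_PER_GPU = 1  # the one-unit-per-GPU default Betting Slip asks for above

def allowed_new_tasks(n_gpus: int, tasks_in_progress: int,
                      max_per_gpu: int = MAX_WUS_PER_GPU) -> int:
    """How many more tasks the scheduler should hand to this host."""
    return max(0, n_gpus * max_per_gpu - tasks_in_progress)

# A quad-core host with one GPU and one task already running gets nothing more:
print(allowed_new_tasks(n_gpus=1, tasks_in_progress=1))  # -> 0
```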
**Edboard** (Joined: 24 Sep 08, Posts: 72, Credit: 12,410,275, RAC: 0)

Since yesterday I have noticed that there is a new limit on WUs, something like `number_cores + 2 * number_GPUs`:

- On a PC with a quad and 2 GPUs I got 8 WUs yesterday (it has fewer now because I stopped downloading).
- On a PC with a Core 2 Duo (2 cores) and one GPU I'm getting 4 WUs.
- On a PC with a Core 2 Duo (2 cores) and two GPUs I'm getting 6 WUs.

All of these can be seen in my account.
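A quick check that the conjectured formula matches all three observations; the formula itself is Edboard's guess, not a documented limit:

```python
# Check Edboard's conjectured per-host limit against the three machines he lists.
def conjectured_limit(n_cores: int, n_gpus: int) -> int:
    return n_cores + 2 * n_gpus

observations = [((4, 2), 8), ((2, 1), 4), ((2, 2), 6)]  # ((cores, gpus), WUs received)
for (cores, gpus), seen in observations:
    print(cores, gpus, conjectured_limit(cores, gpus) == seen)  # all True
```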
**Edboard** (Joined: 24 Sep 08, Posts: 72, Credit: 12,410,275, RAC: 0)

I don't know whether there is any relation, but it happened after I tried the "Collatz Conjecture" CUDA project on all three PCs.
Joined: 21 Mar 09 | Posts: 35 | Credit: 591,434,551 | RAC: 0

I am seeing the same thing: a C2D that used to download 2 WUs (1 running, 1 waiting) now downloads 4 (1 running, 3 waiting; I assume 2 WUs per CPU core). This will result in all WUs now missing the 2-day 'bonus' window. Nothing has changed on this machine. It is running BOINC 6.6.28 and NVIDIA 185.85. This is the computer: 29936
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0

> This will result in all WUs now missing the 2-day 'bonus' window.

Lower your cache and/or your GPU-Grid resource share. It's somewhat of a pain, as these settings are not yet really separated for CPUs and co-processors, but it can be done. With 6.5.0 and a quad core I have GPU-Grid at ~25% resource share and a cache of 0.2 days. That's enough that BOINC only fetches about one day's worth of GPU-Grid tasks, and I can easily get the bonus.

MrS

Scanning for our furry friends since Jan 2002
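Roughly speaking, the two knobs MrS mentions act multiplicatively on how much GPU-Grid work a host ends up holding. The sketch below only illustrates that proportionality; it is not the actual BOINC 6.5 work-fetch formula, and the "before" settings are assumed for comparison, not taken from the thread:

```python
# Illustration only: cache size and resource share both scale the queued work.
# This is NOT BOINC's real work-fetch calculation, just the proportionality
# that MrS's advice relies on under the per-core fetch behaviour described above.
def queued_work_days(cache_days: float, share_fraction: float, per_core_requests: int) -> float:
    """Very rough proxy for the amount of GPU-Grid work a quad-core host holds."""
    return cache_days * share_fraction * per_core_requests

before = queued_work_days(cache_days=1.0, share_fraction=1.0, per_core_requests=4)
after = queued_work_days(cache_days=0.2, share_fraction=0.25, per_core_requests=4)
print(before, after)  # 4.0 vs 0.2: the queue shrinks by a factor of 20 with MrS's settings
```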