Message boards : Number crunching : Credits calculations
Joined: 21 Oct 08 · Posts: 144 · Credit: 2,973,555 · RAC: 0
Many workunits can run on a 9600GT in under 24 hours, but the latest larger units (the 42xx-credit ones) really push the limit. I have seen one OC 9600GT (1800 shader clock) take about 25 hours to finish one of these. Your OC of 1850 might get you under 24 hours, but it will be very close. So, for most who are not willing to push their 9600 to the edge with overclocking, the larger units will edge just over the 24-hour mark. Indeed, even a 96-shader card needs to be pushed up to the 1650-1700 shader clock range to beat the 24-hour deadline.
GDF · Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0
We use reliable hosts (those that return in under 2 days) to speed up the slower batch of WUs. Still, we need all the cards, even the ones returning after 3 days, to do all the work. We will probably extend the 1-day limit to 1.5 days to give more people, not just the top cards, at least the possibility of finishing.

gdf
Joined: 18 Sep 08 · Posts: 65 · Credit: 3,037,414 · RAC: 0
> Many workunits can run on a 9600GT in under 24 hours, but the latest larger units (the 42xx-credit ones) really push the limit. [...]

Of course - then it will be up to the project to keep them in the running and distribute the shorter WUs to the slower hosts. The information is available, and the scheduler can make use of it.
Paul D. Buck · Joined: 9 Jun 08 · Posts: 1050 · Credit: 37,321,185 · RAC: 0
Thus my question about simply shortening the deadline ... The screams of anguish if the total deadline were shortened ... :) Even with the 4-day deadline there are folks complaining that they would like it extended ... To my mind this is better in that it is gentle suasion to move people to faster GPUs ...

The other good news on the horizon is that we are seeing more activity on the GPU front, with The Lattice Project trying their application on an unsuspecting world, and Milky Way looks to be making additional moves too ... heck, MW may be the first project that has a GPU application for the Mac ... of course, with my luck I won't be able to use it ...

{edit} fixed the quote to the correct one ... {/edit}
Michael Goetz · Joined: 2 Mar 09 · Posts: 124 · Credit: 124,873,744 · RAC: 0
> ...instead of buying a new $500 (US) GPU, why not spend much less than that on an older single-core machine with PCIe slots?

If all you want is for GPUGRID not to download so large a WU queue, so that it can return WUs in under 24 hours, there are several zero-cost options available that don't require you to swap out hardware. Especially if you use the computer for 'normal' purposes, you certainly don't want to swap out your quad-core for a single-core CPU. Note that I've never tried any of these options, so it's possible they might not work as expected. Some options are:

1) Change your BOINC configuration to use only 1 CPU core. There are two ways of doing this:

1a) Use the BOINC Manager option to limit the number of CPU cores to 25% (quad-core), 33% (triple-core), or 50% (dual-core).

1b) Use the config file option to instruct BOINC to pretend it's a single-core system. (I think this is the NCPUS flag, but I could be mistaken.)

2) Lower the work queue size to 0.1 days or similar so that BOINC never requests more than one WU.

3) Wait until a release of BOINC and/or Lattice comes out that assigns WUs based on the number of GPUs instead of CPUs.

Note that option 1 (a or b) will reduce (or possibly eliminate) any other CPU BOINC work being done on the computer. This also applies to actually reducing the number of CPU cores with new hardware.

Mike
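Option 1b can be sketched concretely. Assuming the flag the poster half-remembers is BOINC's `<ncpus>` option in cc_config.xml (worth verifying against your client version's documentation), the file would look something like:

```xml
<!-- cc_config.xml, placed in the BOINC data directory -->
<cc_config>
  <options>
    <!-- tell the client to act as if the machine has only 1 CPU core -->
    <ncpus>1</ncpus>
  </options>
</cc_config>
```

The client reads this file at startup, so restart BOINC (or use the Manager's read-config command) after creating it.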
Michael Goetz · Joined: 2 Mar 09 · Posts: 124 · Credit: 124,873,744 · RAC: 0
Unless I misunderstand what you're saying, this seems to me like an ill-advised policy. It penalizes people using multi-core CPUs and encourages users to abort WUs.

Mike
Paul D. Buck · Joined: 9 Jun 08 · Posts: 1050 · Credit: 37,321,185 · RAC: 0
> Unless I misunderstand what you're saying, this seems to me like an ill-advised policy. It penalizes people using multi-core CPUs and encourages users to abort WUs.

Depends on how many GPU cores you have to match ... :) And aborting tasks may not be all that bad a deal if they can get issued and returned faster that way. We just THINK it is a waste to do so ...

The only other thing I would say is that I have not noticed a change in the awards yet ... unless I am missing something ...
GDF · Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0
Remember that reliable hosts are the ones which return results within two days with a 95% success rate. These get priority. We will try to extend the 1-day deadline, as already said, and to reduce the queue by matching the number of GPUs.

gdf
Paul D. Buck · Joined: 9 Jun 08 · Posts: 1050 · Credit: 37,321,185 · RAC: 0
> Remember that reliable hosts are the ones which return results within two days with a 95% success rate. These get priority.

Have you considered exposing that indicator on the computer information page? It would sure be nice to know for sure whether a system qualifies or not ...
Joined: 18 Sep 08 · Posts: 368 · Credit: 4,174,624,885 · RAC: 0
> Remember that reliable hosts are the ones which return results within two days with a 95% success rate. These get priority.

I've already lost the extra 50% credit on a couple of WUs; the WU was done in time (the 24-hour period) but didn't report back in time, because the Manager didn't send it back promptly after it finished. Could we please have the deadline extended to at least 36 hours? All my cards are capable of running and returning the WUs within the 24-hour period, but if the Manager won't send them back in time, it doesn't do any good ...
Joined: 18 Sep 08 · Posts: 368 · Credit: 4,174,624,885 · RAC: 0
LOL ... Just got this WU in with 3 minutes to spare, or the 50% extra credit would have been lost. It had been finished 2 hours earlier but sat there until I manually sent it in. I've set the project to NNW (no new work) and will have to babysit the WUs, only getting new ones as needed ...
Edboard · Joined: 24 Sep 08 · Posts: 72 · Credit: 12,410,275 · RAC: 0
> All my cards are capable of running and returning the WUs within the 24-hour period, but if the Manager won't send them back in time, it doesn't do any good ...

You can activate the <report_results_immediately> option in your cc_config.xml file. If you do so, the WUs are sent in as soon as they are done.
GDF · Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0
To ease the effort, we are now giving:

60%+ for WUs returned within 1.5 days
20%+ for WUs returned within 2.0 days

We will probably also extend the deadline to 5 days for all the others.

gdf
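Stated as a rule, the new schedule is a step function of return time. A small sketch of the rule as posted (illustrative only; the function name and the base-credit figure are made up, not project code):

```python
def awarded_credit(base_credit, return_days):
    """Apply the posted bonus schedule: +60% within 1.5 days, +20% within 2.0 days."""
    if return_days <= 1.5:
        return base_credit * 1.60
    if return_days <= 2.0:
        return base_credit * 1.20
    return base_credit  # no bonus past 2 days

# A 4200-credit WU returned in one day would earn 4200 * 1.60 = 6720 credits.
print(awarded_credit(4200, 1.0))
```

So the fast-return bonus roughly doubles the gap between a card that returns in a day and one that takes three.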
Paul D. Buck · Joined: 9 Jun 08 · Posts: 1050 · Credit: 37,321,185 · RAC: 0
> To ease the effort, we are now giving: [...]

Thank you ... Can you give us a chart of what we should expect? Looking at my daily totals it seems that I am getting higher returns, but when I look at the individual tasks they seem to have the same numbers as in the past. Thanks ...

And for those asking for longer deadlines ... see ... sometimes you get your wish ... :)
Joined: 18 Sep 08 · Posts: 368 · Credit: 4,174,624,885 · RAC: 0
> To ease the effort, we are now giving: [...]

Thanks a bunch GDF. My cards only take 5-6 hours to run the WUs, but with a cache of 4 WUs I'm always on the edge, because the 4 take 20-23 hours to do, so if they don't report right away they go over a 24-hour deadline. But with a 1.5-day deadline I'll have no problems getting them reported now ... :)
Joined: 18 Sep 08 · Posts: 368 · Credit: 4,174,624,885 · RAC: 0
> You can activate the <report_results_immediately> option in your cc_config.xml file.

That works for some of the clients, and for some it doesn't. I had thought of that but hadn't gotten around to it yet with so many other things going on.
K1atOdessa · Joined: 25 Feb 08 · Posts: 249 · Credit: 444,646,963 · RAC: 0
> All my cards are capable of running and returning the WUs within the 24-hour period, but if the Manager won't send them back in time, it doesn't do any good ...

If for some reason there is a back-off from contacting the server, report_results_immediately does not override it. However, with a manual update, or when the countdown to contact reaches 0, it will report immediately. I've had this set for a while because I have 3 GPUs and *only* a quad, which often leaves one GPU idle (if 2 WUs are finished and pending sending to the server, 1 of the 3 GPUs will not be running until another WU is downloaded).
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
An interesting change. It's actually in the folding@home spirit, where their points don't necessarily represent FLOPS, but rather the value of the calculations. E.g. a CPU-FLOP is worth more than a GPU-FLOP, as it's more flexible. And in the case of GPU-Grid a "fast FLOP" is worth more, as it allows the project to progress faster.

MrS

Scanning for our furry friends since Jan 2002
KWSN imcrazynow · Joined: 27 Jan 09 · Posts: 26 · Credit: 3,572,637 · RAC: 0
I don't seem to be able to locate the cc_config.xml file in my BOINC folder. Is it someplace else, or can one be created and put in the folder? If so, how do I do it?
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
In your "BOINC/user data" folder create a text file and name it cc_config.xml. The contents can be:
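The XML in the original post did not survive; a minimal file enabling the <report_results_immediately> option discussed earlier in the thread would be (a sketch; check the option list for your client version):

```xml
<!-- cc_config.xml in the BOINC data directory -->
<cc_config>
  <options>
    <!-- upload and report each result as soon as it finishes -->
    <report_results_immediately>1</report_results_immediately>
  </options>
</cc_config>
```

Restart the client, or tell the Manager to re-read the config file, for the change to take effect.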
MrS

Scanning for our furry friends since Jan 2002
©2026 Universitat Pompeu Fabra