Credits calculations

Message boards : Number crunching : Credits calculations

Previous · 1 · 2 · 3 · 4 · 5 · Next

fractal
Joined: 16 Aug 08 · Posts: 87 · Credit: 1,248,879,715 · RAC: 0
Message 8075 - Posted: 2 Apr 2009, 18:06:04 UTC - in response to Message 7862.  
Last modified: 2 Apr 2009, 18:33:51 UTC


1) Change your BOINC configuration to only use 1 CPU core. There's two ways of doing this:
1a) Use the BOINC Manager option to limit # of CPU cores to 25%, (quad-core), 33% (triple core), or 50% (dual core).
1b) Use the config file option to instruct BOINC to pretend it's a single core system. (I think this is the NCPUS flag, but I could be mistaken.)

2) Lower the work queue size to 0.1 days or similar so that BOINC never requests more than one WU.
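For reference, option 1b would go in cc_config.xml in the BOINC data directory. I believe the flag is <ncpus>, but treat this as a sketch and double-check against your client's documentation:

```xml
<!-- cc_config.xml (BOINC data directory); restart BOINC after editing.
     <ncpus>1</ncpus> makes the client behave as a single-core system. -->
<cc_config>
  <options>
    <ncpus>1</ncpus>
  </options>
</cc_config>
```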


Neither of these helped.

I have a Q6600 and a single 9600GSO. I have it set to use at most 75% of the CPUs, and the message window says "4/2/2009 10:59:40 AM||Preferences limit # CPUs to 3" when it comes up. This lets BOINC use 3 cores to run CPU applications and leaves one to feed the GPU and for whatever I am doing on the box.

My max work queue size is 0.1 days.

I aborted all the work in the "ready to run" state and it downloaded three more units leaving me with one running and three "ready to run".

I am finishing work in "Approximate elapsed time for entire WU: 79329.953 s", which is about 22 hours. I have set BOINC to return results immediately. But none of this will help if BOINC is going to keep three days' work on my machine.

I suggest you reconsider your decision to penalize those of us who have more CPU cores than GPUs.
Scott Brown
Joined: 21 Oct 08 · Posts: 144 · Credit: 2,973,555 · RAC: 0
Message 8093 - Posted: 2 Apr 2009, 23:02:15 UTC - in response to Message 8075.  


I have a q6600 and a single 9600gso...
I am finishing work in "Approximate elapsed time for entire WU: 79329.953 s" which is about 22 hrs.


You should really consider overclocking the shader and core clocks. The 96-shader 9600GSO (and the 8800GS, which is the same card rebranded) tolerates overclocking very well if you have good heat management in your system. I have a factory-overclocked 9600GSO and have also OC'ed my 8800GS to around a 1700 MHz shader clock: a modest heat increase, no errors, and faster crunching. That should improve your luck on the quad.

As for fixes to this overall issue, a very easy solution would be to define workunit types by their preset credit totals (which are based on flops counting and equate quite nicely to run times) so that users with slower cards could opt out of the longer work. In other words, create generalized classes of work, say four types: 1) less than 2,400 base credit, 2) 2,400-3,000 base credit, 3) 3,000-3,600 base credit, and 4) more than 3,600 base credit (the number and values of these thresholds are of course just examples). Add workunit-type checkboxes to the GPUGRID preferences section of the user account (similar to what is done at PrimeGrid) and let users select as needed.
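To make the proposal concrete, here is a purely illustrative sketch (my own, not real GPUGRID scheduler code) of how work could be bucketed by base credit and matched against a user's opt-in checkboxes, using the example thresholds above:

```python
def credit_class(base_credit):
    """Return a class label 1-4 for a workunit's preset base credit.

    Thresholds are the example values from the post, not real
    GPUGRID settings.
    """
    if base_credit < 2400:
        return 1
    elif base_credit <= 3000:
        return 2
    elif base_credit <= 3600:
        return 3
    else:
        return 4


def eligible(base_credit, user_opt_in):
    """Only send a workunit if the user opted in to its class."""
    return credit_class(base_credit) in user_opt_in
```

A scheduler would then filter candidate workunits with something like `eligible(wu_credit, {1, 2})` for a host that only wants shorter work.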

Paul D. Buck
Joined: 9 Jun 08 · Posts: 1050 · Credit: 37,321,185 · RAC: 0
Message 8104 - Posted: 3 Apr 2009, 5:41:25 UTC - in response to Message 8081.  

IMO this is a terrible idea. The only way a lot of us can meet this standard is to keep aborting queued WUs constantly since you insist on sending us 1 WU for every CPU instead of GPU. You're giving the fastest users a big bonus and penalizing the rest of us by burying us even further down the stats. It's already starting to cause hard feelings and negative PR is hard to overcome. You've created a very nice project, but this is a bad move (at least until you can actually provide us with a way to make it work).

The problem is not really the project's fault. Until we have proper GPU support in BOINC these issues are going to be there. 6.6.20, or whatever number they give to the release version, will be the first that will start to address these kinds of problems. Note I say START to address these problems. There is a lot more difficulty in Disneyland than this ...

Still, when you think of it, CUDA in BOINC is less than a year old and we have come a long way ... but there is still a long way to go ...
Beyond
Joined: 23 Nov 08 · Posts: 1112 · Credit: 6,162,416,256 · RAC: 0
Message 8128 - Posted: 3 Apr 2009, 16:13:42 UTC - in response to Message 8104.  

The only way a lot of us can meet this standard is to keep aborting queued WUs constantly since you insist on sending us 1 WU for every CPU instead of GPU.

The problem is not really the project's fault. Until we have proper GPU support in BOINC these issues are going to be there. 6.6.20, or whatever number they give to the release version, will be the first that will start to address these kinds of problems. Note I say START to address these problems. There is a lot more difficulty in Disneyland than this ...

Still, when you think of it, CUDA in BOINC is less than a year old and we have come a long way ... but there is still a long way to go ...

I agree. When I wrote the above message I was pretty irritated after spending quite a while trying to figure out why some were getting those 4900 point WUs and some weren't. Looked in the wrong forum. If I could delete the above, I would. There are ways to avoid the problem with a bit of effort.


fractal
Joined: 16 Aug 08 · Posts: 87 · Credit: 1,248,879,715 · RAC: 0
Message 8130 - Posted: 3 Apr 2009, 18:15:45 UTC - in response to Message 8128.  

I'll try the method you posted on the ars forum once I clear the backlog on this machine. Until then, I just abort any "Ready to Start" GPUGRID wu's I see on the machine. It's a pain, but it works.
Paul D. Buck
Joined: 9 Jun 08 · Posts: 1050 · Credit: 37,321,185 · RAC: 0
Message 8133 - Posted: 3 Apr 2009, 18:28:45 UTC - in response to Message 8128.  

I agree. When I wrote the above message I was pretty irritated after spending quite a while trying to figure out why some were getting those 4900 point WUs and some weren't. Looked in the wrong forum. If I could delete the above, I would. There are ways to avoid the problem with a bit of effort.

Well, the good news is that I did not notice that you were irritated ... :)

Most of us that tender advice here are pretty laid back and tend to not get excited easily so it kinda rolls off and no need to sweat it ... heck all of us at one time or another have said (typed?) things that we wish we could unsay ...

As to the "sizing" issue, well, we have not seen the end of the problems there yet. And sadly the developers, seemingly by design, are ignoring the issues that GPUs raise rather than starting to be proactive about them. I mean they are not thinking about how to solve the issue that the pool of GPU resources is not guaranteed to be symmetric in capabilities (orthogonal is another way to look at it).

And sadly, model numbers may not be the best way to detect this either ... At one point I had a GTX 295, GTX 280, and 9800GT in one system ... which should the long-running tasks be scheduled on?

Anyway, I am going to try to address this subject again, because it is not going to go away if ignored...
GDF
Volunteer moderator · Project administrator · Project developer · Project tester · Volunteer developer · Volunteer tester · Project scientist
Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0
Message 8140 - Posted: 3 Apr 2009, 20:26:58 UTC - in response to Message 8133.  
Last modified: 3 Apr 2009, 20:27:10 UTC

Next week,
we will do an application update, raising the credit multiplier from 1.5 to 2.0 (closer to SETI's 2.4x flops), and still adding 20-30% more for those who return results within two days. This should simplify your life, but keep a positive incentive for quick returns.
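As a sketch of the arithmetic being announced (the exact bonus fraction within the stated 20-30% range is an assumption on my part, and the project may well compute it differently):

```python
def granted_credit(base_credit, return_time_days, bonus=0.25):
    """Flops-counted base credit times the new 2.0 multiplier,
    plus a quick-return bonus for results back within two days.

    The 0.25 bonus fraction is an assumed midpoint of the
    announced 20-30% range.
    """
    credit = base_credit * 2.0      # multiplier raised from 1.5 to 2.0
    if return_time_days <= 2.0:
        credit *= 1.0 + bonus       # 20-30% extra for two-day returns
    return credit
```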

gdf
ExtraTerrestrial Apes
Volunteer moderator · Volunteer tester
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Message 8149 - Posted: 3 Apr 2009, 22:00:05 UTC
Last modified: 3 Apr 2009, 22:00:55 UTC

That sounds nice.

I'd like to know, though: can any of you actually manage to reduce the queue size for GPU-Grid? What I got so far:

- BOINC 6.4.5, cache size 0.3 days -> 4 WUs (~12h each)
- BOINC 6.6.20: wanted to fetch cpu-work from GPU-Grid
- BOINC 6.5.0: cache size 0.1 day -> between 2 and 3 WUs

Resource share for GPU-Grid is about 25% on a quad core with 1 GPU. That doesn't exactly make it easy to return results quickly ...
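For anyone else fighting the cache, the work buffer can also be forced locally with global_prefs_override.xml. I believe these are the right tag names for the 6.x clients, but take this as a sketch and verify against your client:

```xml
<!-- global_prefs_override.xml (BOINC data directory);
     overrides the web preferences on this host only. -->
<global_preferences>
  <work_buf_min_days>0.1</work_buf_min_days>
  <work_buf_additional_days>0.0</work_buf_additional_days>
</global_preferences>
```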

MrS
Scanning for our furry friends since Jan 2002
Paul D. Buck
Joined: 9 Jun 08 · Posts: 1050 · Credit: 37,321,185 · RAC: 0
Message 8157 - Posted: 4 Apr 2009, 9:30:23 UTC - in response to Message 8149.  

That sounds nice.

I'd like to know, though: can any of you actually manage to reduce the queue size for GPU-Grid? What I got so far:

- BOINC 6.4.5, cache size 0.3 days -> 4 WUs (~12h each)
- BOINC 6.6.20: wanted to fetch cpu-work from GPU-Grid
- BOINC 6.5.0: cache size 0.1 day -> between 2 and 3 WUs

Resource share for GPU-Grid is about 25% on a quad core with 1 GPU. That doesn't exactly make it easy to return results quickly ...

MrS

My quad-core (GTX 280) and 9800 systems only seem to stock 1 or 2 spares with a 0.5-day cache. Both are running 6.5.0 at the moment.
Venturini Dario [VENETO]
Joined: 26 Jul 08 · Posts: 44 · Credit: 4,832,360 · RAC: 0
Message 8171 - Posted: 4 Apr 2009, 14:05:13 UTC

I'm not really into this "let's award more to those that compute sooner"...

The idea behind BOINC is completely different. If you want fast computation, go get a supercomputer. If you want to use WASTED cycles, well man, you're going the wrong way.

Dario
ExtraTerrestrial Apes
Volunteer moderator · Volunteer tester
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Message 8184 - Posted: 4 Apr 2009, 20:45:47 UTC - in response to Message 8171.  

The idea if BOINC is to help groups who can't afford a supercomputer. So saying "go get a supercomputer" just doesn't cut it.

And the reason for these recent adjustments is that, unlike at SETI or Einstein or many others, for GPU-Grid it is important to get results back quickly. Otherwise the calculations can't continue.

And adding a relatively modest bonus of 20-30% for returns within 2 days sounds very reasonable to me. Even a 9600GT can easily do this, if BOINC can be convinced not to cache the maximum amount of work. Oh, and an update on my situation: it seems like my 6.5.0 slowly starts to obey the smaller cache setting.

MrS
Scanning for our furry friends since Jan 2002
mike047
Joined: 21 Dec 08 · Posts: 47 · Credit: 7,330,049 · RAC: 0
Message 8187 - Posted: 4 Apr 2009, 22:20:37 UTC - in response to Message 8184.  

The idea if BOINC is to help groups who can't afford a supercomputer. So saying "go get a supercomputer" just doesn't cut it.

And the reason for these recent adjustments is that, unlike at SETI or Einstein or many others, for GPU-Grid it is important to get results back quickly. Otherwise the calculations can't continue.

And adding a relatively modest bonus of 20-30% for returns within 2 days sounds very reasonable to me. Even a 9600GT can easily do this, if BOINC can be convinced not to cache the maximum amount of work. Oh, and an update on my situation: it seems like my 6.5.0 slowly starts to obey the smaller cache setting.

MrS


That is the way I have seen it from my start of boinc participation.

First time that I have seen it voiced, though.
mike
Paul D. Buck
Joined: 9 Jun 08 · Posts: 1050 · Credit: 37,321,185 · RAC: 0
Message 8198 - Posted: 5 Apr 2009, 6:58:07 UTC - in response to Message 8184.  

The idea if BOINC is to help groups who can't afford a supercomputer. So saying "go get a supercomputer" just doesn't cut it.

*IF* BOINC? :)

I know, "of BOINC" ... :)

Actually, the point of BOINC is to allow groups to use their funds more effectively than spending them on supercomputer time or on the purchase of a supercomputer ...

This allows leveraging of the science funding, which is always parsimoniously granted by governments and corporations to look into the most important questions of our time.

In effect, for the cost of a couple of low-end to mid-range servers, a group can investigate questions that, in the past, could not be looked at because there simply was not the funding for the research, or could not be pursued long enough to establish a solid answer.
GDF
Volunteer moderator · Project administrator · Project developer · Project tester · Volunteer developer · Volunteer tester · Project scientist
Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0
Message 8203 - Posted: 5 Apr 2009, 9:33:24 UTC - in response to Message 8198.  

This is exactly the point. We are the first project that uses BOINC as if it were a supercomputer, in a low-latency mode rather than just high-throughput.
We currently have jobs queued for something like 40,000 CPUs. It's impossible to get access to a supercomputer for that amount of time.

We submit jobs, analyze the results, improve the computational protocol, and re-submit, as if it were an experiment in a wet lab. So we need the results fast. A paper on this usage will be out soon, where you can read more about our protocols.

Best, gdf.
Beyond
Joined: 23 Nov 08 · Posts: 1112 · Credit: 6,162,416,256 · RAC: 0
Message 8236 - Posted: 6 Apr 2009, 2:10:16 UTC - in response to Message 8203.  

This is exactly the point. We are the first project that uses BOINC as if it were a supercomputer, in a low-latency mode rather than just high-throughput.
We currently have jobs queued for something like 40,000 CPUs. It's impossible to get access to a supercomputer for that amount of time.

We submit jobs, analyze the results, improve the computational protocol, and re-submit, as if it were an experiment in a wet lab. So we need the results fast. A paper on this usage will be out soon, where you can read more about our protocols.

Best, gdf.

Now this explanation makes sense of it all. If getting the results back quickly substantially helps the science, go for it. I think there'd be a lot less uproar if these kinds of changes were posted along with the rationale. While we like stats, we're really here to help advance human knowledge. Thanks for the info.

Venturini Dario [VENETO]
Joined: 26 Jul 08 · Posts: 44 · Credit: 4,832,360 · RAC: 0
Message 8278 - Posted: 7 Apr 2009, 16:49:35 UTC - in response to Message 8236.  


Now this explanation makes sense of it all. If getting the results back quickly substantially helps the science, go for it. I think there'd be a lot less uproar if these kinds of changes were posted along with the rationale. While we like stats, we're really here to help advance human knowledge. Thanks for the info.


I totally agree.
Daniel Neely
Joined: 21 Feb 09 · Posts: 5 · Credit: 36,705,213 · RAC: 0
Message 8292 - Posted: 8 Apr 2009, 0:26:18 UTC

Would reducing my resource share for GPU grid help with keeping its queue short while maintaining a longer one for my CPU, or would I just end up with an idle GPU at times because I only had CPU WUs?
rebirther
Joined: 7 Jul 07 · Posts: 53 · Credit: 3,048,781 · RAC: 0
Message 8428 - Posted: 14 Apr 2009, 18:59:52 UTC

Credits back to earned = granted?
Snow Crash
Joined: 4 Apr 09 · Posts: 450 · Credit: 539,316,349 · RAC: 0
Message 8440 - Posted: 14 Apr 2009, 22:04:05 UTC - in response to Message 8428.  
Last modified: 14 Apr 2009, 22:04:34 UTC

No, that was just a side effect of the server being down ... no WUs were returned within the 24 hr. bonus period. I have completed and returned 4 new WUs today and have received the bonus :-)

Steve
dyeman
Joined: 21 Mar 09 · Posts: 35 · Credit: 591,434,551 · RAC: 0
Message 8448 - Posted: 15 Apr 2009, 3:25:44 UTC - in response to Message 8440.  

The first task (522250) I received after the crash was returned well within 24 hours but received no bonus. Maybe the time is calculated from when the task was created rather than when it was sent?

©2026 Universitat Pompeu Fabra