Message boards : Number crunching : Credits calculations
**GDF** (Joined: 14 Mar 07, Posts: 1958, Credit: 629,356, RAC: 0)
For transparency towards other projects, we have published in more detail how credits are computed. Have a look if you are interested. GDF
**Paul D. Buck** (Joined: 9 Jun 08, Posts: 1050, Credit: 37,321,185, RAC: 0)
Of course we are interested. I need to take some time to read them ... but thank you for these papers ...
(Joined: 13 Mar 09, Posts: 59, Credit: 324,366, RAC: 0)
Thanks, I was looking for an explanation of the calculations. I was trying to work out the efficiency of GPU computation compared to CPU computation based on the power consumption of the device. My first successful WU http://www.gpugrid.net/result.php?resultid=400387 earned me 4,214.28 credits. By my calculations from the equations you provided, that means: 14564551680 = MFLOP per WU + approx. MIPS per WU. How can I work out the MFLOP per WU, or the approximate MIPS per WU? Rob
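Working backwards from the two numbers in this post, one can at least recover the implied operations-per-credit rate. This is only a sketch of that arithmetic; the split between the MFLOP and MIPS parts cannot be recovered from the credit value alone.

```python
# Numbers taken from the post above
credit = 4214.28              # credit granted for the WU
total_mops = 14_564_551_680   # MFLOP per WU + approx. MIPS per WU

# Implied operations counted per credit (MFLOP -> FLOP via 1e6)
ops_per_credit = total_mops * 1e6 / credit
print(f"{ops_per_credit:.4g} ops per credit")  # ~3.456e12
```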
(Joined: 17 Mar 09, Posts: 12, Credit: 0, RAC: 0)
> For transparency towards other projects, we have published in more detail how credits are computed.

Nice. This enables an easy comparison to the other GPU-enabled projects. As described in your link, you use the result of the flops counting to calculate the credits for a WU according to the old formula for benchmark-based claimed credit. This of course leads to a severe "underpaying" of the GPU crunchers here compared to other projects.

Take SETI@home as an example. When they changed their credit system to a similar flops-counting scheme, they introduced an additional "efficiency factor" to give a continuous transition from the old benchmark-based scale to the new one. Brought down to the simplest possible representation, the credit calculations look like this:

SETI: credit = 2.72 * TFlop/WU (single precision)
GPUGrid: credit = 1.736 * TFlop/WU (single precision)

Considering that GPUGrid claims to additionally use a great deal of integer instructions (SETI does not, afaik), that only increases the difference between the credit calculations. I suggest you think about simply adopting the SETI scheme.

Generally, there should be some discussion between the BOINC projects, and also with David Anderson, about this credit issue. You can see what happens without some kind of consensus if you look at Milkyway@home. To a lot of people it looks like they are granting really excessive credits, and it would be nice if the projects had a common position. MW also uses flops counting, but with double precision calculations. I would think some premium for this is justified, especially as GPUs have between 5x and 12x the single precision performance compared to double precision. But looking to the future, I would think a weight factor of two or so would be okay for double precision operations (which would also be right for CPUs).

Using SETI as the base again, that would mean MW should grant credit = 5.44 * TFlop/WU (double precision). But at the moment they are granting ~37% more (7.5 * TFlop). If you calculate the ratios of the equivalent credit multipliers between GPUGrid : SETI : MW (SETI as base), you get 0.64 : 1 : 1.37. So GPUGrid is about the same amount below SETI as MW is above it (and GPUGrid is awarding less than half as much as MW).

I have already suggested the same to the MW administrator (who appears to be sick and unavailable these last days). MW reducing their credits and GPUGrid raising theirs so that both match the SETI credit multiplier would establish some balance between the three GPU projects as a first measure. It would also reduce the quite extreme difference one currently sees between GPUGrid and MW. But as I said, as more and more projects develop GPU applications, a more fundamental solution is desirable. This should only be the first step towards consensus between the projects, or perhaps towards more sophisticated criteria for the granted credit. What do you think about it?
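The ratio arithmetic in this post can be checked in a few lines. This is only a sketch using the multipliers quoted above; the 2x double-precision weight is the suggestion made here, not an agreed standard, and the post truncates 1.379 to 1.37.

```python
# Credits per TFlop, as quoted in the post
seti_sp    = 2.72    # SETI@home, single precision
gpugrid_sp = 1.736   # GPUGRID, single precision
mw_dp      = 7.5     # Milkyway@home, double precision

dp_weight = 2.0                      # suggested premium for double precision
seti_dp_equiv = seti_sp * dp_weight  # 5.44 credits per DP TFlop

# GPUGrid : SETI : MW, with SETI as the base
ratios = (gpugrid_sp / seti_sp, 1.0, mw_dp / seti_dp_equiv)
print([round(r, 2) for r in ratios])  # [0.64, 1.0, 1.38]
```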
**GDF** (Joined: 14 Mar 07, Posts: 1958, Credit: 629,356, RAC: 0)
Sorry, I missed your answer. I think that it is quite simple. Credits, or cobblestones, are assigned for the average between integer and floating point operations. This is a good, well-chosen quantity, I think. Every application doing floating point is also doing integer operations (accessing an array involves an integer operation). Other applications might be doing just integer operations. How many is the question. We multiply the floats by 1.5x to account for the integer operations. It could be decided to multiply by 2x, as this is still consistent with the benchmarks (see the credit thread in the FAQ). Nothing else. Projects could just keep to that. So, I would suggest:

SETI: C x flops
GPUGRID: C x flops
MW: C x flops

with C = 2, but if they decide something else that's fine too. All the projects should have the same metric, but not the same credits/hour, otherwise the incentive to produce efficient applications is nil. Double and single precision floats should be valued the same, otherwise there are inconsistencies. GDF
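Assuming the standard BOINC cobblestone definition (100 credits per day for a machine sustaining 1 GFLOPS), this scheme can be sketched as follows. Note how the 1.736 credits/TFlop figure quoted earlier in the thread falls out of the 1.5x integer factor.

```python
# One cobblestone = 1/100 of a day on a 1 GFLOPS machine
OPS_PER_CREDIT = 86_400 * 1e9 / 100   # 8.64e11 operations per credit

def claimed_credit(flops, int_factor=1.5):
    """Credit for a WU whose counted single-precision flops are known.

    int_factor scales the flop count to account for integer operations
    (1.5x here; 2x would also be consistent with the benchmarks).
    """
    return flops * int_factor / OPS_PER_CREDIT

print(claimed_credit(1e12))       # 1 TFlop -> ~1.736 credits
print(claimed_credit(1e12, 2.0))  # with a 2x factor -> ~2.315 credits
```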
**GDF** (Joined: 14 Mar 07, Posts: 1958, Credit: 629,356, RAC: 0)
> What do you think about it?

In practice, I agree. There should be a fixed constant to multiply the flops by. This constant should be uniform across projects, while the flops of the application can vary. gdf
**[AF>Libristes] Dudumomo** (Joined: 30 Jan 09, Posts: 45, Credit: 425,620,748, RAC: 0)
Several users, like http://www.gpugrid.net/results.php?hostid=22576, have seen an increase in the points attributed to their WUs... Is this normal? Is it to compensate for a decrease (if there is one) in users crunching GPUGRID in favour of Milkyway? Thank you!
**GDF** (Joined: 14 Mar 07, Posts: 1958, Credit: 629,356, RAC: 0)
No, credits are the same as before. The amount of credit is due to the size of the WU. gdf
**GDF** (Joined: 14 Mar 07, Posts: 1958, Credit: 629,356, RAC: 0)
We have been discussing the multiplier with people at SETI and with David Anderson. It is likely that we will adopt the SETI multiplier in the next application update. It seems we were too conservative. gdf
(Joined: 26 Nov 08, Posts: 5, Credit: 50,514,446, RAC: 0)
That sounds good... It would be nice to finally see a standard used in all three projects. And this will also be a big help in the future when more GPU projects come online.
**GDF** (Joined: 14 Mar 07, Posts: 1958, Credit: 629,356, RAC: 0)
So, we have to redistribute quite a bit of credit to the users in order to align with the SETI factor, as we were under-crediting. The first change is already in place: people who return results within 24 hours will receive 50% more credit. We might adjust these values in the future. gdf
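The announced bonus amounts to a one-liner. This is just a sketch; the 24-hour cutoff and the 50% figure are the only parameters given in the post.

```python
def granted_credit(claimed, turnaround_hours):
    """Apply the fast-return bonus: +50% for results back within 24 hours."""
    return claimed * 1.5 if turnaround_hours < 24 else claimed

print(granted_credit(100.0, 20))  # 150.0
print(granted_credit(100.0, 30))  # 100.0
```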
**Stefan Ledwina** (Joined: 16 Jul 07, Posts: 464, Credit: 298,573,998, RAC: 0)
Sounds great! :D Now if the server would send WUs per GPU and not per CPU, it would make it a lot easier for me to send them back within 24 hours... ;-)

pixelicious.at - my little photoblog
(Joined: 21 Oct 08, Posts: 144, Credit: 2,973,555, RAC: 0)
Well, I have to admit to being a bit baffled by the choice to award more for 24-hour returns. If you really needed work done within 24 hours, then why not just set the deadline as such? If the idea behind this new credit policy is to drive away people with slower cards, then I think you will be successful. I think you should also modify the compatible-cards FAQ to note this policy (especially since the minimum recommended cards, at 64 shaders, will be hard pressed to complete much work in under 24 hours). Wouldn't a simple adjustment to the flops multiplier be an easier way to come into credit alignment with other projects?
**Lazarus-uk** (Joined: 16 Nov 08, Posts: 29, Credit: 122,821,515, RAC: 0)
So, I have just finished one task. I have another two WUs in the queue that I got this morning and know I cannot complete within 24 hours (i.e. by tomorrow morning). What I should do is abort the waiting task and the one just started, and get a fresh WU that I know I can complete in 24 hours for 50% more credit. I mean, why should I crunch for less credit than I have to? I think this may turn out to be counter-productive.
(Joined: 18 Sep 08, Posts: 368, Credit: 4,174,624,885, RAC: 0)
Yes, I don't fully understand what the new credit scheme is going to be, or what the 24-hour deal is about either ...
**Bymark** (Joined: 23 Feb 09, Posts: 30, Credit: 5,897,921, RAC: 0)
This is the first time in BOINC history that I have had to buy a new, slower CPU to match my ATI 260 GPU (currently an AMD 8650 Triple-Core), probably an AMD 5600. Very nice anyway! Long life to GPUGRID! Regards, Thomas
(Joined: 7 Sep 08, Posts: 1, Credit: 204,490,242, RAC: 0)
The best would be to limit the WUs per host, not per core. Or is there another way to limit the WUs on a multicore system? cu JagDoc
**Paul D. Buck** (Joined: 9 Jun 08, Posts: 1050, Credit: 37,321,185, RAC: 0)
One of the problems that this project has, much like Milkyway, is that task x+1 relies on task x being returned. What this means is that the tasks are more of a task *STREAM* ... With that in mind, those that can return tasks faster allow the project to get to the "end" of the stream faster than if it has to wait for the deadline, or longer.

Just as I, and others, have suggested that projects in testing (particularly alpha testing) should award at higher rates, those who enable the project to reach its goals more quickly should also be rewarded. I have also noted and supported those projects where you are rewarded even when tasks crash through no fault of your own (CPDN comes to mind) ...

Yes, if you have a slower card you will not get the higher rate of award. But should this not encourage you to get and apply a faster card? I know it may move up my schedule to replace the 9800GT I have ...

I suppose that I may be a little on the project side here, because I will certainly be earning the higher rate on much of the work I do, as the i7 has two GTX 295 cards, and the speed at which they do the work means I will likely see a lot of higher pay ... the GTX 280 and the 9800, on the other hand, will be catch-me-if-you-can ...

GDF, you may need to put in a watchdog to see if participants are killing tasks at a higher than expected rate to clear their queues ... and on the other side, it would be nice if there were a project-side way to limit the queue ... for example, maybe it would be in my interest to allow the machine with the 9800 to queue only one task at a time, maybe even none ... the same with the GTX 280, where I know I have had as many as 3 spares waiting ... cutting that back to two would serve BOTH of us better ... AND, if you do get that coded and working, CPDN is interested in that kind of throttling too (well, at least one project person was) ...

Last note: it might be nice if you posted the payment rates in a sticky in the web and server forum along with this new information.
(Joined: 18 Sep 08, Posts: 65, Credit: 3,037,414, RAC: 0)
> I suppose that I may be a little on the project side here because I will certainly be earning the higher rate on much of the work I do as the i7 has two GTX 295 cards ... the GTX280 and the 9800 on the other hand will be catch me if you can ...

hmm - only a matter of local cache size. Even my old 9600GT runs them under 24 hours. So if I run @ 0.01 cache... ...and donate a candle to santo improvisario each and every day...
(Joined: 21 Oct 08, Posts: 144, Credit: 2,973,555, RAC: 0)
> One of the problems that this project has, much like Milky Way is that task x+1 relies on task x being returned. What this means is that the tasks are more of a task *STREAM* ...

Thus my question about simply shortening the deadline...

A 9800GT would return all work within 24 hours on a single-core machine... Instead of buying a new $500 (US) GPU, why not spend much less than that on an older single-core machine with PCIe slots? Or, to be more direct, one could get more "bang for the buck" with a mid-range card in a single-core machine than with a relatively fast card (say a GTX 260) in a HT i7; there is no way the 260 could return all eight downloaded workunits under 24 hours (i.e., such a credit bonus is not so straightforward in motivating one to purchase the latest and fastest equipment).
©2025 Universitat Pompeu Fabra