New nvidia beta application

Message boards : Graphics cards (GPUs) : New nvidia beta application

Previous · 1 · 2 · 3 · 4 · 5 · 6 · 7 . . . 11 · Next

Snow Crash

Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Message 14861 - Posted: 31 Jan 2010, 13:16:18 UTC - in response to Message 14860.  

GDF ... you and your team are the best !!!
Thanks - Steve
Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist

Joined: 14 Mar 07
Posts: 1958
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 14865 - Posted: 31 Jan 2010, 19:29:31 UTC - in response to Message 14861.  
Last modified: 31 Jan 2010, 19:29:44 UTC

Could you guys check whether 6.06 is slower? For some reason the approximate elapsed time does not get printed.

gdf
Snow Crash

Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Message 14866 - Posted: 31 Jan 2010, 20:05:26 UTC - in response to Message 14865.  
Last modified: 31 Jan 2010, 20:10:06 UTC

OK, I had this all written up nicely but then lost it all because my session timed out :-(

While my GTX285 provides only a small sample, the differences between runs of the same version were very minor, so I think the following may be helpful.

This GTX285 normally processes a GIANNI_BIND WU in 21600 seconds.

Version 6.05
(averaged over three WUs, with less than 0.05% variation between them)
runtime was 13172 seconds.

Version 6.06
(averaged over two WUs, with less than 0.07% variation between them)
runtime was 14006 seconds.

So 6.06 is about 6% slower than 6.05, which really is quite minor compared with the roughly 60% reduction you originally estimated.

To round out the numbers, the average CPU time on 6.06 was 680 seconds, which is about half of what the old v6.71 uses for a GIANNI_BIND. The only 6.71 task with similar CPU usage is KASHIF_HIVPR, which on average comes in at about 550 seconds of CPU time.
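The arithmetic behind those percentages, sketched in Python for anyone who wants to re-check it (the runtimes are the ones quoted above; the script itself is just an illustration):

```python
# Runtimes quoted above (seconds)
baseline_671 = 21600.0   # typical GIANNI_BIND on the old v6.71 app
v605 = 13172.0           # average over three WUs on 6.05
v606 = 14006.0           # average over two WUs on 6.06

# How much slower 6.06 is relative to 6.05
slowdown = (v606 - v605) / v605 * 100

# Reduction versus the old application, for each beta
reduction_605 = (1 - v605 / baseline_671) * 100
reduction_606 = (1 - v606 / baseline_671) * 100

print(f"6.06 vs 6.05: {slowdown:+.1f}%")             # about +6.3%
print(f"6.05 vs 6.71: {reduction_605:.0f}% faster")  # about 39%
print(f"6.06 vs 6.71: {reduction_606:.0f}% faster")  # about 35%
```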
Thanks - Steve
Profile Zydor

Joined: 8 Feb 09
Posts: 252
Credit: 1,309,451
RAC: 0
Message 14867 - Posted: 31 Jan 2010, 20:08:42 UTC - in response to Message 14865.  

Any guesstimates available for how long an L42-TONI_TEST2901-3-10-RND1170_0 (acemdbeta version 6.06) would run on a 9800GTX? It's been going for 19 hrs 26 mins so far.

Regards
Zy
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester

Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 14868 - Posted: 31 Jan 2010, 20:13:28 UTC - in response to Message 14852.  

... and you guys are bi^&chin about using 1 CPU core to get the additional 12236???


Not me - I'd take the increase in GPU speed any time. But we had this discussion before, back in the pioneer days of GPU-Grid, when they still needed to figure out how to use less than a full core. From that I know that CPU power is important to many participants.

If this proves difficult to fix: what if there were a user preference for

- maximum GPU performance, using one full CPU core
- minimal CPU usage, crunching what you can on the GPU

MrS
Scanning for our furry friends since Jan 2002
Snow Crash

Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Message 14869 - Posted: 31 Jan 2010, 20:16:34 UTC - in response to Message 14868.  

ETA ... read up a few posts ... GDF fixed it !!!
CPU usage is back down very low again. While at the moment it appears to come at the expense of a few percentage points of the overall improvement (so we might be looking at a 55% reduction instead of 60%), I think this is excellent news and I look forward to seeing 6.06 in production.
Thanks - Steve
Snow Crash

Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Message 14870 - Posted: 31 Jan 2010, 20:21:42 UTC - in response to Message 14867.  

Zydor ... I find that the beta takes about 65% of the time it takes me to process a GIANNI_BIND. Not sure what a 9800 takes, but there have not been reports of these betas crashing (in fact they appear remarkably stable). If it is version 6.05, I would suggest you abort it, because we have already moved on to 6.06, which uses much less CPU time. If you want the points, just let it run.
Thanks - Steve
Profile Beyond

Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 14871 - Posted: 31 Jan 2010, 20:32:39 UTC - in response to Message 14870.  

This one ran for 28 hours on an 8800GT with v6.05, went to 100% and then was back at 17% when I aborted it. CPU time was also > 26 hours:

http://www.gpugrid.net/workunit.php?wuid=1129962

So far I haven't had any long v6.06 WUs. As you can see, this host was very reliable with the v6.71 client:

http://www.gpugrid.net/results.php?hostid=55324

Siegfried Niklas

Joined: 23 Feb 09
Posts: 39
Credit: 144,654,294
RAC: 0
Message 14872 - Posted: 31 Jan 2010, 20:33:06 UTC - in response to Message 14865.  

Could you guys check whether 6.06 is slower? For some reason the approximate elapsed time does not get printed.

gdf



One is running on one of my 9800GTs.
There is no progress shown, so I have to wait for completion.

But the GPU load looks very good (and the CPU load is low).

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist

Joined: 14 Mar 07
Posts: 1958
Credit: 629,356
RAC: 0
Message 14873 - Posted: 31 Jan 2010, 20:50:48 UTC - in response to Message 14872.  
Last modified: 31 Jan 2010, 21:02:32 UTC

You have to compare speed between 6.05 and 6.06, which are all run on TONI- workunits (there are two types, one short and one long). What I would like to know is the speed on a fully CPU-loaded host, with all cores busy.

As far as I know the progress bar should work. Does it?

gdf
Profile leprechaun

Joined: 22 Jun 09
Posts: 8
Credit: 45,224,378
RAC: 0
Message 14874 - Posted: 31 Jan 2010, 20:58:03 UTC
Last modified: 31 Jan 2010, 20:58:37 UTC

I have an L7-TONI_TEST WU here (6.06).
After nearly 7 hours it still shows no progress bar. Cancel?

GPU: GTX260
Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist

Joined: 14 Mar 07
Posts: 1958
Credit: 629,356
RAC: 0
Message 14875 - Posted: 31 Jan 2010, 21:18:06 UTC - in response to Message 14874.  

I think so.

gdf
Profile Zydor

Joined: 8 Feb 09
Posts: 252
Credit: 1,309,451
RAC: 0
Message 14876 - Posted: 31 Jan 2010, 21:21:39 UTC - in response to Message 14873.  
Last modified: 31 Jan 2010, 21:50:40 UTC

Running L42-TONI_TEST2901-3-10-RND1170_0, acemdbeta version 6.06, on a 9800GTX - no progress bar or % counter. It's up to 19 hrs 55 mins run time at present.

Four cores loaded with Aqua, and 25 Freehal WUs also running concurrently.

EDIT:
With the speed improvements, this should have completed by now - aborted at 20 hrs 17 mins.

One odd thing I have noted is that all WUs - irrespective of type, length, beta or current production - are coming in at the same predicted time of 03:54:45. The predicted time will vary machine to machine of course - vagaries of BOINC - but it's strange that all GPUGRID WUs, irrespective of type, have the same initial "time to completion". That time to completion counts down to zero, then shows as "---" with the betas for the duration of the remaining processing time.

Regards
Zy
Snow Crash

Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Message 14877 - Posted: 31 Jan 2010, 21:46:14 UTC - in response to Message 14873.  

You have to compare speed between 6.05 and 6.06, which are all run on TONI- workunits (there are two types, one short and one long). What I would like to know is the speed on a fully CPU-loaded host, with all cores busy.

As far as I know the progress bar should work. Does it?

gdf


The results I posted above were run on an i7-920, HT on, fully loaded with 8 threads of WCG's HFCC subproject.

I am running that unit headless, so I don't know if the progress bar was displayed properly.
Thanks - Steve
Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist

Joined: 14 Mar 07
Posts: 1958
Credit: 629,356
RAC: 0
Message 14879 - Posted: 31 Jan 2010, 21:57:23 UTC - in response to Message 14875.  
Last modified: 31 Jan 2010, 21:58:00 UTC

Our tests show that under full load (all CPUs busy) the application is very slow. This is probably true for the standard application as well, a bit less so because its kernels are longer.

If there is no load on the CPU, or you have a free core, then the cost is minimal - but then you might just as well use it.
Once this is confirmed, we will default to 1 CPU + 1 GPU usage.

gdf
Tom Philippart

Joined: 12 Feb 09
Posts: 57
Credit: 23,376,686
RAC: 0
Message 14880 - Posted: 31 Jan 2010, 22:03:40 UTC - in response to Message 14879.  


This being confirmed, we will default to 1CPU+1GPU usage.

gdf


Sorry, but this would be a very bad move, IMO...
Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist

Joined: 14 Mar 07
Posts: 1958
Credit: 629,356
RAC: 0
Message 14881 - Posted: 31 Jan 2010, 22:23:49 UTC - in response to Message 14880.  

If the hit on the GPU app is so high, then it does not make any sense to have less than 0.1 CPU driving the GPU. You might lose up to 100% of the performance of 256 GPU cores to save one CPU core!
We might go to 0.6 CPU so that two GPUs can share the same CPU. This should be enough, and closer to how the application is actually used. As a matter of fact, many of our calculations use some CPU anyway.
Comments?

gdf
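To make the trade-off concrete, here is a small hypothetical sketch of what reserving a fraction of a CPU per GPU task could mean for the cores left over for CPU projects. The function and the rounding are illustrative assumptions, not BOINC's actual scheduler logic:

```python
import math

def free_cpu_cores(n_cores, n_gpu_tasks, cpu_per_gpu_task):
    """Cores left for CPU projects once each GPU task reserves a
    fractional CPU. Hypothetical budgeting illustration only: the
    real BOINC scheduler may round and schedule differently."""
    reserved = n_gpu_tasks * cpu_per_gpu_task
    return max(0, n_cores - math.ceil(reserved))

print(free_cpu_cores(8, 1, 1.0))  # one full core per GPU task: 7 free
print(free_cpu_cores(8, 2, 0.6))  # two GPUs at 0.6 CPU each: 6 free
```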
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester

Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 14882 - Posted: 31 Jan 2010, 22:49:01 UTC - in response to Message 14881.  

Previously it worked with comparatively little CPU support. Does that method no longer work because of the algorithm changes?

MrS
Scanning for our furry friends since Jan 2002
Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist

Joined: 14 Mar 07
Posts: 1958
Credit: 629,356
RAC: 0
Message 14883 - Posted: 31 Jan 2010, 23:17:11 UTC - in response to Message 14882.  

I think we have a big hit on the previous application as well.
We just did not realize it, because on Linux it is fine, and on Windows it's fine if the processor is not fully subscribed.

gdf
Profile Zydor

Joined: 8 Feb 09
Posts: 252
Credit: 1,309,451
RAC: 0
Message 14884 - Posted: 31 Jan 2010, 23:53:44 UTC - in response to Message 14881.  

If the hit on the GPU app is so high, then it does not make any sense to have less than 0.1 CPU driving the GPU. You might lose up to 100% of the performance of 256 GPU cores to save one CPU core!
We might go to 0.6 CPU so that two GPUs can share the same CPU. This should be enough, and closer to how the application is actually used. As a matter of fact, many of our calculations use some CPU anyway.
Comments?

gdf


Makes sense from a performance perspective.

0.6 or thereabouts would be fine: by implication dual GPUs are involved, which by and large means a more technically aware individual who will see the sense in the trade-off. 0.6 should also leave single-GPU owners access to that CPU for other apps, and that will help fend off yelling about credits and interference with other projects.

It's a very obvious statement to say that the priority is the needs of the app - but I will repeat it to head off any pointless drift into credit rhetoric here. This is about how to launch this one and the real issues it will face in BOINC Land, not credit rhetoric.

Utilising the CPU and GPU in this way is relatively new in BOINC. SETI does a version of it in one of their apps, but the credit situation there is another world, as they are relatively low on credits anyway, and this GPUGRID app is engineered from the ground up - a different ball game. The fact remains that, whatever the rhetoric, a configuration of this nature will open doors to the chattering classes, and if it is not approached and "marketed" correctly, the droves of BOINC crunchers who love to repeat dramatic rumour will feed momentum of bad news at its launch - that's the last thing that's needed when it's all there for the right reasons.

Therefore I would strongly suggest a brief "time out" at some point to put together the case for why the app has ended up the way it has, and why the credits are the way they are, posting it prominently as a sticky. GPUGRID members can then refer/link to it in their travels around BOINC Land and nip rumours in the bud.

We can all make a grand case in theory that such a precaution is not required, as all true and righteous crunchers naturally march forth into the golden light, giving selfless assistance for the good of mankind. However, we all know life in BOINC Land is not so simple - unfortunately :)

It's not going to take much effort, and it will go a long way toward deflecting critics by being open and transparent. There will still be some moaning no doubt, but hopefully most of it will be nipped in the bud by such a move.

Regards
Zy


©2026 Universitat Pompeu Fabra