New nvidia beta application

Message boards : Graphics cards (GPUs) : New nvidia beta application
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 14885 - Posted: 31 Jan 2010, 23:59:21 UTC - in response to Message 14883.  

Please, please, please go to 1 GPU + 1 CPU!

It is such a pain to try to use 3 cores of a quad in order to leave one free for GPUGrid.
On an i7, what does it cost to use only 7 of the 8 threads? Virtually nothing. If you were running GPUGrid you were already using part of a core anyway, so you actually only lose about 0.8 of a thread (10%), and at the same time GPUGrid gains a huge amount. I think a 60% increase in GPUGrid performance seriously outweighs the loss of one thread.
To put it another way: on an i7 running WCG or some other BOINC project you lose about 500 points for that one thread (running 24/7), while your GTX 260 (for example) will gain about 9,000 points. Net gain: roughly 8,500 points.

Within two months, 6-core systems with 12 threads will be available. You will even be able to get dual-socket versions.
Look to the future: dense core counts with multithreading as the norm.
ID: 14885
Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 14886 - Posted: 1 Feb 2010, 0:28:43 UTC - in response to Message 14883.  
Last modified: 1 Feb 2010, 0:38:29 UTC

I think that we have a big hit on the previous application as well.
We just did not realize it, because on Linux it is fine, and it's fine on Windows if the processor is not fully subscribed.

gdf

On my systems, running v6.71 with a CPU core left idle did not speed up the application. v6.71 played very well with every other project, and I currently run more than a dozen on various machines.

I wish people would quit talking about points gained or lost. For many of us it's not about points. It's about being able to run work for other projects too. If ANY project makes that difficult it becomes expendable. I really like GPUGRID and want to keep running it, so if this app makes running other projects more difficult please give us the option of running the old app (via app_info.xml if necessary).
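For anyone who does go down that route, here is a rough sketch of what an anonymous-platform app_info.xml might look like. It is only an illustration: the app name and executable file name below are placeholders that would have to match whatever files are actually in the GPUGrid project directory, and the tags should be checked against the BOINC anonymous-platform documentation before use.

    <app_info>
        <app>
            <name>acemd</name>                                    <!-- placeholder app name -->
        </app>
        <file_info>
            <name>acemd_6.71_windows_intelx86__cuda.exe</name>    <!-- placeholder executable name -->
            <executable/>
        </file_info>
        <app_version>
            <app_name>acemd</app_name>
            <version_num>671</version_num>
            <avg_ncpus>0.64</avg_ncpus>                           <!-- fraction of a CPU the scheduler budgets -->
            <max_ncpus>1</max_ncpus>
            <coproc>
                <type>CUDA</type>                                 <!-- reserve one NVIDIA GPU -->
                <count>1</count>
            </coproc>
            <file_ref>
                <file_name>acemd_6.71_windows_intelx86__cuda.exe</file_name>
                <main_program/>
            </file_ref>
        </app_version>
    </app_info>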
ID: 14886
Profile Zydor

Joined: 8 Feb 09
Posts: 252
Credit: 1,309,451
RAC: 0
Message 14887 - Posted: 1 Feb 2010, 0:36:42 UTC - in response to Message 14885.  
Last modified: 1 Feb 2010, 0:46:42 UTC

As a long-term prognosis I would agree re 6 cores etc. - however, the transition to multi-core nirvana has a bear trap.

There are still vast numbers of single-core crunchers. They are diminishing quickly of course as machines are replaced, and they can't run the newer cards anyway. However, they are out there in large numbers in BOINC Land; it would be a bad move to cut them off totally - much better, in a reputation sense, to let them wither on the vine.

Dual core will be with us in large numbers for years; taking one of the two cores away from those users would be a bad move.

AQUA debated this recently and decided that the heavyweight apps (and there are some really heavyweight ones there, due to the gargantuan amount of data being crunched simulating parts of quantum computers) would be restricted to quad cores only. In fact, it was only when they recently introduced a lightweight app supporting the main one that they opened the smaller apps to single-core machines.

It was done via preferences, and therein may lie a way forward here. The lower GPU usage (say 0.6 of a CPU) could be one app, with an attendant estimate of the time to crunch, and a second app (or more) could use one CPU and one GPU. Put a check on the faster app to allow access from dual- and quad-core machines only, denying the download to single cores. That way the dual-core user decides what's right for them, and the door is still left open for single core / single GPU without arguments spinning round for years.

That will all depend on how the app is put together and how CPU access is granted in the code. Obviously I don't know how easy or hard it will be to make such changes without complete rewrites, but I would have thought it would not be too hard to produce two versions around such a premise.

Regards
Zy
ID: 14887
Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 14889 - Posted: 1 Feb 2010, 1:21:41 UTC

There's lots of speculation and AFAIK few machines that have run many v6.05 & v6.06 WUs. Well here's one:

http://www.gpugrid.net/results.php?hostid=56900&offset=0&show_names=1&state=0

A recap:

v6.05 (GPU then CPU time):

13,190.28 --- 13,184.89
13,136.48 --- 13,131.83


v6.06 (GPU then CPU time):

13,962.64 --- 670.80
14,049.59 --- 689.72

So for an increase of around 15 minutes of GPU time, v6.06 saves over 3.5 hours of CPU time.
This is for a GTX 285. The difference is even more extreme for slower cards. Here's a GT 240:

v6.05 (GPU then CPU time):

41,658.85 --- 40,742.34


v6.06 (GPU then CPU time):

45,470.80 --- 4,785.85

In this case we see an increase of about 1 hour 4 minutes of GPU time and a whopping saving of about 10 hours of CPU time with v6.06.

I certainly prefer v6.06. I can help a lot of projects with that CPU time and will have a more responsive system to work with (especially dual core or slower systems).

ID: 14889
Snow Crash

Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Message 14890 - Posted: 1 Feb 2010, 1:32:26 UTC - in response to Message 14889.  

Our tests show that under full load (all CPUs used), the application is very slow.

When you say under full load what kind of load are you talking about?
My load, and likely that of other people who are concerned about dedicating a full CPU to each GPU WU, is from other BOINC projects, but they all share pretty well and I got very good results with the 6.06 version. If you would like more detailed testing, I have a GTX 295 and a GTX 285 that I can configure, set up and run however you would like, to help make the best decision for the project.
Thanks - Steve
ID: 14890
Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 14891 - Posted: 1 Feb 2010, 2:04:53 UTC - in response to Message 14890.  

Snow Crash, that was your GTX 285 I used for the comparison above. Hope that's OK.
It looks like v6.06 was running well, even better than the current v6.71 app.
ID: 14891
Ross*

Joined: 6 May 09
Posts: 34
Credit: 443,507,669
RAC: 0
Message 14892 - Posted: 1 Feb 2010, 5:51:14 UTC - in response to Message 14885.  

[Within 2 months 6core systems with 12 threads will be available. You will even be able to get dual socket versions.
Look to the future, it has dense core populations with multithreading being the norm.]
Hi,
To add to this debate: I have two i7 920 boxes, each with an ATI 5970, and each also running WCG on 8 threads. Whatever is planned, my view is that the app should be able to run the GPU tasks alongside a CPU app without any micromanagement.
I will be running 6-core CPUs in a couple of months, hopefully with a new NVIDIA GPU card.
I agree with the use of preferences to fine-tune how the box manages its resources.
My future is two Gulftowns per motherboard and two dual-GPU cards per box.
Cheers
Ross, New Zealand
ID: 14892
Snow Crash

Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Message 14893 - Posted: 1 Feb 2010, 7:55:58 UTC - in response to Message 14891.  

Snow Crash, that was your GTX 285 I used for the comparison above. Hope that's OK.
I noticed that :-) and that's fine by me. If I didn't want anyone to see them I would make them hidden.
It looks like v6.06 was running well, even better than the current v6.71 app.
Definitely much better ... could I have more, please?

Thanks - Steve
ID: 14893
Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist

Joined: 14 Mar 07
Posts: 1958
Credit: 629,356
RAC: 0
Message 14895 - Posted: 1 Feb 2010, 8:58:29 UTC - in response to Message 14889.  

Beyond,
if this were the situation, it would be fine.
In our tests, loading the CPU with some work doubled the length of the GPU simulation.
So what were you running while you had these sims in progress?

gdf

There's lots of speculation and AFAIK few machines that have run many v6.05 & v6.06 WUs. Well here's one:

http://www.gpugrid.net/results.php?hostid=56900&offset=0&show_names=1&state=0

A recap:

v6.05 (GPU then CPU time):

13,190.28 --- 13,184.89
13,136.48 --- 13,131.83


v6.06 (GPU then CPU time):

13,962.64 --- 670.80
14,049.59 --- 689.72

So for around an increase of 15 minutes more GPU time, v6.06 saves over 3.5 hours of CPU time.
This is for a GTX 285. The difference is even more extreme for slower cards. Here's a GT 240:

v6.05 (GPU then CPU time):

41,658.85 --- 40,742.34


v6.06 (GPU then CPU time):

45,470.80 --- 4,785.85

In this case we see an increase of 1:04 of GPU time and a whopping saving of 10:00 hours of CPU time with v6.06.

I certainly prefer v6.06. I can help a lot of projects with that CPU time and will have a more responsive system to work with (especially dual core or slower systems).


ID: 14895
Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist

Joined: 14 Mar 07
Posts: 1958
Credit: 629,356
RAC: 0
Message 14896 - Posted: 1 Feb 2010, 9:02:26 UTC - in response to Message 14893.  

Snow Crash, that was your GTX 285 I used for the comparison above. Hope that's OK.
I noticed that :-) and that's fine by me. If I didn't want anyone to see them I would make them hidden.
It looks like v6.06 was running well, even better than the current v6.71 app.
Definitely much better ... could I have more, please?

Hi,
run the 6.06 application together with, say, SETI or AQUA on all your CPU cores, and let's see what the performance is.

GDF
ID: 14896
Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 14897 - Posted: 1 Feb 2010, 9:56:44 UTC - in response to Message 14895.  

Hi GDF,

For the examples below: the GTX 285 belongs to Snow Crash and the GT 240 belongs to SKGiven. In my brief search, those were the only machines I came up with that had run long WUs on both v6.05 and v6.06. Maybe they can supply more details about what else was running at the time.

As for my 5 GPUGRID boxes: only 1 of the 5 received any long beta WUs, both of which were of the v6.05 variety (even though all are set up to receive test apps). Of those 2 v6.05 WUs, 1 ran for an enormous amount of time, went to 100% and then back to 17% before I aborted it. The other refused to start, something I have never seen before.

Also, as I stated somewhere above, the dual-core machine that got the v6.05 WUs runs the non-BOINC Wieferich project on both cores. With the normal 2 Wieferich CPU WUs running, the v6.05 task stalled and the machine became 95% unresponsive. When I killed one instance of Wieferich and let v6.05 have a whole core, the machine became usable, but of course I lost half of the Wieferich production. No other GPU app causes this problem: not v6.71, not Collatz, not MilkyWay, not PrimeGrid, not RC5-72, not SETI. You see the problem some of us have with v6.05...

Regards/Beyond


Beyond,
if this was the situation, it would be fine.
In our tests, loading the CPU with some work, it double the length of the GPU simulation.
So what where you running while you have these sims in progress?

gdf

ID: 14897
Snow Crash

Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Message 14898 - Posted: 1 Feb 2010, 10:48:46 UTC - in response to Message 14897.  

I own the GTX285 that I posted results for.

It is an i7-920 with hyperthreading turned on, running Windows XP Pro 32 bit with BOINCManager version 6.2.28 (WCG Edition)

I was executing 8 World Community Grid WUs, specifically their Help Fight Childhood Cancer subproject, the entire time that the 6.05 and 6.06 GPUGrid betas were executing.
http://www.gpugrid.net/show_host_detail.php?hostid=56900
http://www.gpugrid.net/results.php?hostid=56900


I also have another i7-920 which has hyperthreading turned on, running Windows Vista Ultimate 64 bit with BOINCManager 64 bit 6.10.29. This machine has a GTX295.
http://www.gpugrid.net/show_host_detail.php?hostid=31780
http://www.gpugrid.net/results.php?hostid=31780

I only have valid results for two of the small betas on v6.06, which do show higher CPU usage than the results on my GTX 285:
GTX 285: CPU usage is about 6%
GTX 295: CPU usage is about 16%
I was running a combination of Einstein and ClimatePrediction on the machine with the GTX 295 while the short betas were executing.

I have attached this second PC to SETI and am running 8 of their WUs along with two of GPUGrid's 6.06 betas.
Yes, I aborted a couple of GPUGrid 6.71 WUs so we could get this test started.

I am seeing low CPU usage for the GPUGrid 6.06 betas, between 2 and 3 percent each as viewed in Task Manager.

The "Status" column reports the WU is running and using 0.28 CPU + 1.00 NVIDIA GPU

The progress bar / percent-complete indicator does not appear to be updating at all.

The Time to Complete is updating.

I have been running for 30 minutes and if the time to complete is accurate then I am not seeing any improvement on runtime at all. I will let this continue and we can follow up with final results when it finishes.

Let's start listing things to test ...
At first blush it looks like it has something to do with the OS?
Perhaps we could test different versions / bits of BOINCManager?

Thanks - Steve
ID: 14898
Tom Philippart

Joined: 12 Feb 09
Posts: 57
Credit: 23,376,686
RAC: 0
Message 14899 - Posted: 1 Feb 2010, 14:22:40 UTC - in response to Message 14881.  

If the hit on the GPU app is so high, then it does not make any sense to have less than 0.1 CPU driving the GPU. You might lose up to 100% of the performance of 256 GPU cores to save 1 CPU core!
We might go to 0.6 CPU so that two GPUs can share the same CPU. This should be enough and more similar to the reality of the usage. As a matter of fact, many of our calculations do use some CPU anyway for calculations.
Comments?

gdf


Folding@Home also used 1 CPU core for their ATI client. They worked around it using flush intervals, buffering and dual buffering set via environment variables. I don't know if this could be applied to CUDA...

I get your point with the performance, but as a whole I think this shouldn't stay this way.

I think at the moment virtually no one runs GPUGrid exclusively. Say I have a quad core with 1 GPU; my objective is to maximize my multi-project participation. To illustrate, let's say I run WCG on the CPU and GPUGrid on the GPU. I'm currently able to contribute 4 cores to WCG and "1 core" (the GPU) to GPUGrid.
This gives me "5 cores" in my quad-core computer. I was able to run 4 WCG WUs at a time, and when I found out about GPUGrid I was able to contribute to this project on top of my former regular contribution to WCG.

Now it would mean that I'm still contributing to the 2 projects, but my WCG production would degrade. I'd end up with "4 cores" (4 WUs running on a quad + GPU).

I'm not talking about credits - from that perspective the new solution would outweigh the old - but about output, compared to the old situation and to other GPU projects.

GPU crunching is still at an early stage, but until now it has been seen as an additional source of production: it works on top of what was possible before, as a "5th core".

As somebody before me said, the other projects all use almost no CPU power.
If you have a computer dedicated to GPUGrid, the new solution is of course the best, as you're not viewing it from a multi-project point of view.
ID: 14899
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 14901 - Posted: 1 Feb 2010, 15:38:31 UTC - in response to Message 14899.  

When running a 6.71 application, BOINC says 0.64 CPUs + 1 GPU.
Using BOINC and Task Manager you can see that BOINC is running 4 WCG CPU tasks and 1 GPU task (100% CPU usage). If you set BOINC to use 3 of the 4 CPUs, this actually applies to GPUGrid tasks as well: BOINC will try to run three WCG CPU tasks and use one of those CPUs for the GPU task, leaving one CPU free (useless).
Unless you can set things up so that BOINC thinks the task takes 1 full CPU + 1 GPU, the potential GPU efficiencies will not be seen!

Although there are workarounds to get as much GPU task efficiency as possible, none are particularly handy.

One method is to install 2 copies of BOINC (a 64-bit and a 32-bit), using the 32-bit client to run GPUGrid tasks with its CPU usage set to 25%, and setting the 64-bit client to 75%. Of course you need a 64-bit OS to do this.
A second method is similar, but involves using the bespoke WCG client rather than BOINC for WCG tasks and setting it to use 3 CPUs. This is obviously no use to anyone running other BOINC projects.
A third method requires setting up virtual PCs, e.g. running XP from within Vista.
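One more possibility, untested here and only a sketch: the anonymous-platform route mentioned earlier in the thread. Using the same placeholder app and file names as the app_info.xml sketch above, raising <avg_ncpus> to 1.0 should make BOINC budget a whole core for each GPU task, so on a quad it would start only three WCG tasks alongside it.

    <app_version>
        <app_name>acemd</app_name>              <!-- placeholder, as in the earlier sketch -->
        <version_num>671</version_num>
        <avg_ncpus>1.0</avg_ncpus>              <!-- tell the scheduler the task needs a full core -->
        <max_ncpus>1.0</max_ncpus>
        <coproc>
            <type>CUDA</type>
            <count>1</count>
        </coproc>
        <file_ref>
            <file_name>acemd_6.71_windows_intelx86__cuda.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>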
ID: 14901
Jeremy

Joined: 15 Feb 09
Posts: 55
Credit: 3,542,733
RAC: 0
Message 14902 - Posted: 1 Feb 2010, 16:03:02 UTC - in response to Message 14901.  

If you set Boinc to use 3 of the 4 CPUs, this actually applies to GPUGrid tasks as well. So Boinc will try to run three WCG CPU tasks, and use one of these CPUs for the GPU tasks, leaving one CPU free (useless).
Unless you can set things up so that Boinc thinks it takes 1 full CPU + 1 GPU then the potential GPU efficiencies will not be seen!


This is not the case based on my most recent experience. Last night I set BOINC to use 75% of the available cores. I turned it loose and got a 100% load on 3 out of 4 cores and an average of about 30% on the fourth, with no background tasks active. If I suspend activity on all projects except GPUGrid I get the same result: one core with a 30% load, the rest with nothing.

That's with two 6.71 GPUgrid apps running. BOINC 6.10.18, Win7 Ult64, C2Q @ 3.83 GHz, and two GTX 260-216.

I'm waiting for a few more WUs to finish before I check to see if there's a performance advantage to running this way.
ID: 14902
Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 14903 - Posted: 1 Feb 2010, 17:07:02 UTC

I've been aborting v6.71 tasks on my GTX 260 trying to get a v6.06 WU to test, no luck.
It's hard to test the beta if we can't get WUs. Did manage to get one on the GT 240 however...
ID: 14903
Siegfried Niklas
Joined: 23 Feb 09
Posts: 39
Credit: 144,654,294
RAC: 0
Message 14905 - Posted: 1 Feb 2010, 17:39:07 UTC

GTX 295 (701 MHz / 1509 MHz / 1086 MHz, 896 MB), driver 190.62, i7-860 @ 3.8 GHz, Vista 64

ACEMD beta version v6.05 + 8 CPU-cores(HT)- Spinhenge@home:

Run time: 15771.8694
CPU time: 15753.59


ACEMD beta version v6.06 + 8 CPU-cores(HT)- Spinhenge@home:

Run time: 17494.2924
CPU time: 2444.832
ID: 14905
Profile Michael Goetz
Joined: 2 Mar 09
Posts: 124
Credit: 124,873,744
RAC: 0
Message 14906 - Posted: 1 Feb 2010, 17:47:59 UTC
Last modified: 1 Feb 2010, 17:52:22 UTC

I just ran this beta test. It errored out instantly, as it did with my wingmen.

GTX 280, Q6600, all 4 cores running tasks from PrimeGrid.

Edit: Didn't realize, but two WUs had run. The ID above is corrected.

I also ran this test, which completed normally with 203 seconds of run time and 33 CPU seconds. I'm not sure what else was running at that time.
Want to find one of the largest known primes? Try PrimeGrid. Or help cure disease at WCG.

ID: 14906
Profile K1atOdessa

Joined: 25 Feb 08
Posts: 249
Credit: 444,646,963
RAC: 0
Message 14908 - Posted: 1 Feb 2010, 19:23:16 UTC

My first test 6.06 WU choked after running for about 1.5 hours. I saw the issue where the progress bar never moved, and got the message below after it errored out.

"SWAN : FATAL : Failure executing kernel sync [frc_sum_kernel] [999]
Assertion failed: 0, file ../swan/swanlib_nv.cpp, line 203

This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information."
ID: 14908
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 14909 - Posted: 1 Feb 2010, 19:36:41 UTC

Jeremy wrote:
If you set Boinc to use 3 of the 4 CPUs, this actually applies to GPUGrid tasks as well. So Boinc will try to run three WCG CPU tasks, and use one of these CPUs for the GPU tasks, leaving one CPU free (useless).

This is not the case based on my most recent experience.

I second that. I'm running MW on an HD 4870 and a C2Q. If I set BOINC 6.10.29 to use 75% of the CPUs, it launches 3 CPU tasks and one MW@ATI task. This way performance is much better than at 4 CPU + 1 MW, even though MW itself uses little CPU. The catch is that it needs CPU support often and at precise intervals, so effectively you have to dedicate one core here as well... or live with a slower GPU.


Tom Philippart wrote:
I'm not talking about credits, in that perspective the new solution would outweight the old, but I'm talking in terms of output.

You're measuring with two different gauges here. A reduction of your WCG output by a factor of 1.33 does count, but a GPU-Grid output increase by a factor of 1.66 does not count? You must not count it purely in terms of "cores", as that can be quite misleading. Or is one core of a Celeron 266 MHz worth as much as one of an i7? Or a GTX260 or a GTX380?


Zydor wrote:
... and second or more app(s) could be there using one cpu and one gpu. Place a test on the faster app to allow access to Dual and Quad core only, and denying download to single core.

Don't deny it, just don't make it the default. Otherwise dedicated cruncher boxes will hate you ;)

MrS
Scanning for our furry friends since Jan 2002
ID: 14909