Desktop freezes

Message boards : Graphics cards (GPUs) : Desktop freezes
Paul D. Buck
Message 12916 - Posted: 29 Sep 2009, 18:21:01 UTC - in response to Message 12894.  

I installed 185.36.128 and CUDA Toolkit 2.2, but the freezes are still there.

Then I turned off desktop effects. So far no more freezes, but I still think it's like burying my head in the sand.

Yeah, get a whiz-bang system and then you have to shut everything off to make BOINC work... BOINC is supposed to be working in the background, as idle, and not interfering with anything... then again, what do I know... :)


Not a lot about CUDA apps? ;)

Yep, I never said I did... but I do know a lot about the conceptual idea of how BOINC should operate... has operated, and does operate...

The CPU component of the GPU task may be idling, but the GPU is not idling when a CUDA kernel is executing on it. Desktop effects (compositing) also take resources. (Far more than they should for the sake of eye candy.) Hit the GPU with 'texture from pixmap' while executing CUDA kernels and expect the desktop to stutter.

This I know

As an aside, if you read the CUDA release notes, they tell you that individual CUDA kernel launches are limited to a 5-second run time when a display is attached to the GPU. For this reason it is recommended that CUDA be run on a GPU that is NOT attached to an X display. If you choose to ignore that recommendation, I'd suggest doing everything possible not to add extra load to a GPU while it's running CUDA and connected to a display, such as turning off desktop effects.
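The watchdog behaviour described above is effectively a hard per-launch timeout enforced by the driver. As a rough conceptual illustration only (a Python stand-in, not the actual driver mechanism; `long_kernel` here is just a sleep standing in for a kernel launch):

```python
import multiprocessing
import time

WATCHDOG_SECONDS = 5.0  # the per-launch limit the CUDA release notes describe

def long_kernel(duration):
    """Stand-in for a long-running CUDA kernel launch."""
    time.sleep(duration)

def launch_with_watchdog(duration, watchdog=WATCHDOG_SECONDS):
    """Run the 'kernel', killing it if it exceeds the watchdog limit.

    Returns True if it finished in time, False if the watchdog fired.
    """
    p = multiprocessing.Process(target=long_kernel, args=(duration,))
    p.start()
    p.join(watchdog)
    if p.is_alive():       # still running past the limit
        p.terminate()      # on a display GPU, the driver aborts the kernel here
        p.join()
        return False
    return True
```

On a real display-attached GPU the effect is the same in spirit: a launch that overruns the limit is killed by the driver, which the application sees as a failed kernel rather than a polite error.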

On the other hand, though some will say it is not BOINC's fault but the project's... there is wide variance in the way BOINC operates across the various projects: with most I have no issues at all, and I see significant effects with one, maybe two... my point being that, as usual, the UCB team is abdicating its responsibility to help the projects, on the notion that this kind of thing is a project responsibility...

Maybe so, but that only means that we now have 50 teams that have to figure this stuff out on their own instead of one ...
JackOfAll
Message 12921 - Posted: 29 Sep 2009, 19:23:28 UTC - in response to Message 12916.  

As an aside, if you read the CUDA release notes, they tell you that individual CUDA kernel launches are limited to a 5-second run time when a display is attached to the GPU. For this reason it is recommended that CUDA be run on a GPU that is NOT attached to an X display. If you choose to ignore that recommendation, I'd suggest doing everything possible not to add extra load to a GPU while it's running CUDA and connected to a display, such as turning off desktop effects.


On the other hand, though some will say it is not BOINC's fault but the project's... there is wide variance in the way BOINC operates across the various projects: with most I have no issues at all, and I see significant effects with one, maybe two... my point being that, as usual, the UCB team is abdicating its responsibility to help the projects, on the notion that this kind of thing is a project responsibility...

Maybe so, but that only means that we now have 50 teams that have to figure this stuff out on their own instead of one ...


I understand what you are saying, but at the end of the day the BOINC core is a glorified launcher (the middleware, if you like), and the projects are responsible for the diverse clients. The UCB team can't really be responsible for the projects and their client software.

The one-size-fits-all approach does not work so well (e.g. FIFO GPU scheduling), and to be frank, individual projects are going to want optimizations that suit their own purposes rather than generalizations. Sounds like a no-win situation to me.

CUDA (and GPGPU in general) is such a young technology that how many people know it inside out and backwards? I mean, at least the BOINC core can set the process priorities for the CPU task clients. That's pretty tricky to do with the GPU side of the GPU tasks. ;) They're either on or off. Unlike the multitude of options built into, e.g., the Linux kernel CPU scheduler, you just don't have that functionality available on the GPU. (And the jury is still out on whether the 'GPU in use' config property and its underlying code actually do what the developer expects. I'm not 100% convinced they do, but I was too busy today to spend more time testing it.)
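For the CPU side, the priority control mentioned above is just the standard Unix nice mechanism. A minimal, hypothetical sketch of how a BOINC-style launcher might start a science app at low CPU priority (`spawn_low_priority` is my name for illustration, not a BOINC API):

```python
import os
import subprocess
import sys

def spawn_low_priority(cmd, niceness=19, **popen_kwargs):
    """Launch a worker at reduced CPU priority, as a BOINC-style launcher
    does for its CPU science apps (19 = lowest priority on Linux)."""
    return subprocess.Popen(
        cmd,
        preexec_fn=lambda: os.nice(niceness),  # raise niceness in the child
        **popen_kwargs,
    )

# Example: a child process that reports its own niceness.
# spawn_low_priority([sys.executable, "-c", "import os; print(os.nice(0))"])
```

There is no analogous per-task knob for work already dispatched to the GPU, which is the asymmetry the post is pointing at.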

Anyway, I hope I made the point I was trying to make. If people would change their expectations and think of running CUDA on a GPU that's doing something else (like driving a display via X) as a less-than-optimal way of doing things, that would go some way towards it. (If you want optimal on a consumer-grade card, forget about whether desktop effects are switched on or off and don't use it to drive a display at all!) Giving it a chance, by not using other hardware-acceleration functionality (desktop effects) at the same time as the CUDA computing capability, seems obvious to me. Lobbing bricks in the general direction of the nVidia drivers and BOINC because the desktop stutters, with no understanding of how the hardware actually works or of what a reasonable expectation is, just shows a lack of education. (I expect to get bashed for that last sentence, and I'm not trying to be insulting, but it does seem that some people's expectations are set way beyond what their hardware is actually capable of. YMMV.)

Crunching on Linux: Fedora 11 x86_64 / nVidia 185.18.36 driver / CUDA 2.2
zpm
Message 12922 - Posted: 29 Sep 2009, 19:52:15 UTC - in response to Message 12921.  
Last modified: 29 Sep 2009, 19:53:00 UTC

On my system I have noticed that when running 1 or 2 apps at high priority in Windows Task Manager, I get a freeze-up for 1-2 seconds every so often; but I live with it... crunch time is decreased by 5-10% depending on the app and WU.
CTAPbIi
Message 12927 - Posted: 29 Sep 2009, 23:45:28 UTC - in response to Message 12921.  

OK, so it's clear that there will be no quick fix for "use GPU..." in the near future, right?

In fact, this is the second day I've survived without desktop effects, and you know what? I'm still alive :-) Sure, it's less functional, but there are NO freezes, which were making me so pissed off. So thanks, JackOfAll and Paul, for your help.

May I ask a couple of questions while such people are around? In Q4 this year nVidia will present the GT300 cards, so here are a few questions:

- have you heard anything about the architecture of the future cards? I mean, is there any sense in upgrading from a GTX275 to the new ones?

- will the BOINC app for GPUGRID work with SLI? I'd like to get 2 cards and I'm not sure whether I should use the SLI bridge or not (in Folding I must NOT connect the cards with the bridge);

- I'd like to build a new rig based on socket 1366 in the next couple of weeks. Right now I've got an E6300 (Wolfdale-2M, 2800 stock, OC'ed to 4000). Will this upgrade gain me anything in terms of GPUGRID? I mean, does GPUGRID currently suffer from lack of CPU? Right now I'm also crunching for Rosetta, with nice set to 10 for GPUGRID and 19 for Rosetta.

BTW, Paul, regarding that proverb: I've got a friend from Perth and it's one of his favorites. I'm not sure if it's widely used all over Australia, but that's the story behind it :-)
Paul D. Buck
Message 12930 - Posted: 30 Sep 2009, 1:27:59 UTC

@Jack, I sure wish I had enough left to answer you... I understand your point, but as middleware, BOINC has more responsibilities when more than one project is affected or needs a feature. That is exactly where middleware is supposed to step up, to avoid reinventing the wheel...

@CTA

Doing anything with the CPU does nothing, really, for GPUGRID, but it would help Rosetta, which is not all bad. Getting faster GPUs, or more of them, is the key to doing more here. The strategy with new generations is the question of when to adopt. So the GTX300 comes out... that drives down the cost of GTX2xy cards, which means I can buy a couple more for the same cost as one 300-series card. Since that adds to my total capacity, I can do more.

Six months to a year from now, if and when a card dies, the 300 will have been supplemented with a 325 or whatever... so it will be lower in cost (in theory), and I can get one for less and then get that significant boost. If you do have the cash to invest, is it better to buy at the top, or twice as many in the middle? There are arguments both ways... and I have done both... :)

If the 300 is twice as fast for the same power footprint, that argues for the extra up-front expense, since the power cost to run it is lower... it is all very dependent on the exact details...
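The one-fast-card-vs-two-mid-cards trade-off above is easy to put numbers on. Every figure in this sketch is a made-up placeholder, not a real GTX price or power rating:

```python
def cost_per_throughput(price, relative_speed, watts, hours, dollars_per_kwh):
    """Total cost of ownership per unit of work delivered by a card setup.

    All inputs are hypothetical; plug in real prices and power figures.
    """
    energy_cost = watts / 1000.0 * hours * dollars_per_kwh
    return (price + energy_cost) / relative_speed

# One hypothetical next-gen card: 2x speed, same power draw as one mid card.
top = cost_per_throughput(price=500, relative_speed=2.0, watts=200,
                          hours=8760, dollars_per_kwh=0.15)
# Two hypothetical mid cards: same 2x total speed, but twice the power draw.
mid = cost_per_throughput(price=2 * 220, relative_speed=2.0, watts=400,
                          hours=8760, dollars_per_kwh=0.15)
```

With these placeholder numbers the single faster card wins once a year of electricity is counted, which is exactly the "same power footprint" argument: at equal throughput, the power bill tips the balance.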

As to SLI, with the later drivers you are supposed to be able to use SLI with BOINC... I am not interested in the extra gaming power, so I have not bothered to try it; it's not broken, so I am not trying to fix it... :)
JackOfAll
Message 12935 - Posted: 30 Sep 2009, 11:36:28 UTC - in response to Message 12927.  

OK, so it's clear that there will be no quick fix for "use GPU..." in the near future, right?


I'm kind of limited in what I can and cannot say, having signed a rather draconian NDA. We use CUDA commercially in a software product. We actually outsource our software development now, so I've asked someone I consider a CUDA expert to take a look at what that code currently does and whether there is a better way of achieving the objective. When I get a response I'll pass it on to UCB.

In fact, this is the second day I've survived without desktop effects, and you know what? I'm still alive :-) Sure, it's less functional, but there are NO freezes, which were making me so pissed off. So thanks, JackOfAll and Paul, for your help.


Glad you can live without the 'bling' for the moment.

May I ask a couple of questions while such people are around? In Q4 this year nVidia will present the GT300 cards, so here are a few questions:

- have you heard anything about the architecture of the future cards? I mean, is there any sense in upgrading from a GTX275 to the new ones?


Details are still a little thin on the ground, and depending on who you believe, we might not even see the new-architecture cards until next year. Right now, IMHO, is a bad time to be buying new nVidia cards. I'd advocate holding off for a couple of months. (Especially with the high-end cards, > GTX275.)

- will the BOINC app for GPUGRID work with SLI? I'd like to get 2 cards and I'm not sure whether I should use the SLI bridge or not (in Folding I must NOT connect the cards with the bridge);


Paul answered this above. The 190.xx driver series and CUDA 2.3 allow you to access individual GPUs (for CUDA purposes) whilst the cards are in SLI mode. I've not tried it personally, but I know it does work.


Crunching on Linux: Fedora 11 x86_64 / nVidia 185.18.36 driver / CUDA 2.2
robertmiles
Message 13035 - Posted: 6 Oct 2009, 4:42:58 UTC
Last modified: 6 Oct 2009, 5:21:27 UTC

I've seen something similar to the display-freeze problem under the 64-bit Windows versions of BOINC (at least 6.10.3), but the freeze is permanent enough that I've been unable to check whether all the other software freezes as well. It seems to occur only when running both a GPUGRID workunit and a CPU workunit from some other project, and only if the CPU workunit has graphics big enough to fill the screen. I'm not familiar with the terms for disabling the screensaver that comes with recent BOINC versions under Linux, but I'd suggest trying that if you know how.

I've also seen something similar to the FIFO problem on the same machine, but only when it downloads two CPU workunits from the same other project and the combined estimated time for the two is greater than that project's delay to the deadline, so they can't run on the same CPU core and still finish on time. 6.10.3 seems to think that workunits that have gone into high-priority mode do not need to obey the limits on how much memory BOINC is allowed to use.

Also, simply installing 6.10.3 seems to increase the estimated run times for many types of workunits, especially those from The Lattice Project. There's little sign that those workunits even need as much time as the initial estimate, so 6.10.3 may eventually adjust its database to give more accurate estimates.

Another of my machines, slower, without a GPU capable of running GPUGRID workunits, and still running BOINC 6.6.36, had two similar The Lattice Project workunits arrive with initial runtime estimates low enough to allow running them both on the same CPU core by one day before the deadline, so 64-bit Windows BOINC 6.6.36 saw no reason to require running them on separate CPU cores. However, it hasn't found any reason to download any CPU workunits from any of the BOINC projects it's connected to since then, even though one of its CPU cores is now idle.

So far, both of these problems appear to occur when using a 190.* driver, with no clear evidence on whether they also occur with a 185.* driver, so I'd suggest looking for posts on whether there's a 185.* driver that works with recent GPUGRID workunits under Linux.

Also, both problems appear to occur with a 9800 GPU card but likely not with a GTX 260 or higher, so I'd suggest mentioning which GPU cards are involved in any further discussion of these problems. I'd also suggest mentioning whether your machine has enough GPUs to make SLI worthwhile - none of mine have more than one, and therefore don't seem to be able to use SLI at all.

In case it makes a difference, none of my machines have a second graphics card to move its monitor to.

©2026 Universitat Pompeu Fabra