Had to stop crunching on GT240

Message boards : Graphics cards (GPUs) : Had to stop crunching on GT240
Betting Slip

Joined: 5 Jan 09
Posts: 670
Credit: 2,498,095,550
RAC: 0
Message 19338 - Posted: 7 Nov 2010, 20:44:58 UTC

I've had to stop crunching for GPUGrid on the GT240s I've got because, since the new app, my dual-card machine keeps freezing even with the cards at stock, and I get stuttery video on another computer. Before anyone suggests it isn't this project's apps, I have tested it with and without them. One machine will still run for a while because it's remote, but it too will be detached.
It's all well and good squeezing every ounce out of these cards, but please remember that they also have other tasks to perform, which I think is sometimes forgotten, especially as they don't have the low-priority capability of the CPU.

SORRY!
Radio Caroline, the world's most famous offshore pirate radio station.
Great music since April 1964. Support Radio Caroline Team -
Radio Caroline
ID: 19338
David @ TPS

Joined: 8 May 09
Posts: 1
Credit: 1,596,130
RAC: 0
Message 19798 - Posted: 7 Dec 2010, 17:54:46 UTC - in response to Message 19338.  

I also had to pull my GT240, since it did not like the new WUs at all: it went from about 20 hours to several days per task. I don't think it has the power to run GPUGrid any more.
ID: 19798
Saenger

Joined: 20 Jul 08
Posts: 134
Credit: 23,657,183
RAC: 0
Message 19799 - Posted: 7 Dec 2010, 19:56:26 UTC

Yes, it's a shame that such good, not really old, cards get abandoned by the project in its addiction to only the latest, most expensive cards running 24/7.

They don't care about ordinary crunchers; they only want people with a lot of money to spend.
Greetings from Saenger

For questions about Boinc look in the BOINC-Wiki
ID: 19799
MarkJ
Volunteer moderator
Volunteer tester

Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
RAC: 0
Message 19801 - Posted: 7 Dec 2010, 20:13:37 UTC

As skgiven suggested to me, you could roll back to the 197.45 drivers so that you get the older 6.12 app running under CUDA 3.0. I have done this on one machine and it seems to have resolved the reliability issues I was having.

CUDA 3.1 + the 6.13 app are not stable together (under Windows).

I understand that you may need a later CUDA version for some other project, so this may not be the answer for you.

Cheers
BOINC blog
ID: 19801
Jeff Gu

Joined: 12 Apr 08
Posts: 1
Credit: 8,249,452
RAC: 0
Message 19815 - Posted: 8 Dec 2010, 21:23:45 UTC

I'm going to have to drop this one, too. My NVIDIA cards are all 9600/9800/8800 cards, and GPUGrid WUs either error out, take days to finish with no credit given, or the machine simply won't get any.

I also think it's a shame that my mostly recycled crunchers, machines I brought back to life to put to work for science, aren't desirable. I'm not about to put together $2,000 machines to satisfy the requirements of a couple of projects that seem more interested in playing with bleeding-edge equipment than in doing anything resembling real science.
Jeff Gu
ID: 19815
Richard Haselgrove

Joined: 11 Jul 09
Posts: 1639
Credit: 10,159,968,649
RAC: 261
Message 19817 - Posted: 8 Dec 2010, 21:47:39 UTC - in response to Message 19815.  

Sometimes real (bleeding edge) science requires bleeding edge equipment to get answers while the science is still - well, bleeding edge.

Having said that, my 9800GT and 9800GTX+ cards are (mostly) finishing tasks - except the new 'Fatty' WUs - within 24 hours, without error, provided I abort the *-KASHIF_HIVPR_n1_(un)bound_* sub-class of tasks: that's the only group I'm still having trouble with.
ID: 19817
Betting Slip

Joined: 5 Jan 09
Posts: 670
Credit: 2,498,095,550
RAC: 0
Message 19820 - Posted: 8 Dec 2010, 23:06:10 UTC - in response to Message 19801.  

As skgiven suggested to me, you could roll back to the 197.45 drivers so that you get the older 6.12 app running under CUDA 3.0. I have done this on one machine and it seems to have resolved the reliability issues I was having.

CUDA 3.1 + the 6.13 app are not stable together (under Windows).

I understand that you may need a later CUDA version for some other project, so this may not be the answer for you.

Cheers

Would it not be better to have the server only give the 6.12 app to 200-series cards, rather than the user having to use old drivers, which will affect things like gaming?

If the project wants only cutting-edge cards, then why doesn't it say so?


Radio Caroline, the world's most famous offshore pirate radio station.
Great music since April 1964. Support Radio Caroline Team -
Radio Caroline
ID: 19820
skgiven
Volunteer moderator
Volunteer tester

Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 19834 - Posted: 9 Dec 2010, 15:57:46 UTC - in response to Message 19820.  

Would it not be better to have the server only give the 6.12 app to 200-series cards, rather than the user having to use old drivers, which will affect things like gaming?

I know that it would have been better for some of us, including myself on one or two systems, but it was probably too much work for the scientists (assuming it is actually possible), and it would not have solved the problem where people couldn't use a Fermi and a GT200 card in the same system - they now can.
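For readers wondering what that kind of per-card dispatch would involve: the sketch below is purely illustrative and is not GPUGrid's scheduler code (the real decision is made server-side, via the BOINC plan-class mechanism reflected in app names like "6.13 (cuda31)"); it only restates the idea of routing Fermi cards to the cuda31 build and pre-Fermi cards to the cuda30 build.

# Purely illustrative sketch of per-card app selection, not GPUGrid's scheduler code.
def pick_app_version(compute_capability):
    # Fermi cards (CC >= 2.0) get the newer cuda31 build; pre-Fermi cards
    # such as the GT200 series and the GT240 keep the older cuda30 build.
    if compute_capability >= 2.0:
        return "ACEMD2 6.13 (cuda31)"
    return "ACEMD2 6.12 (cuda30)"

for card, cc in [("GTX 480", 2.0), ("GT 240", 1.2), ("GTX 260", 1.3)]:
    print(card, "->", pick_app_version(cc))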

If the project wants only cutting-edge cards, then why doesn't it say so?

Just to clear this up: the project does want sharp (cutting-edge) cards, and these are presently limited to the GTX580, GTX570, GTX480, GTX470 and GTX465. However, the project also wants the lesser (mid-to-high-range) Fermi cards such as the GTX460 and GTS450, and the lower-end cards such as the GT440.
Nor does this mean the project does not want contributions from people with older cards, and that is not limited to the top-end GT200 cards (GTX260-216, GTX275, GTX280, GTX285, GTX295) either; it extends to the GT240 and any of the mid-to-high-end CC1.1 cards that are recommended and work on GPUGrid.

There have always been problems with the CC1.1 cards: overheating, bad design, bad drivers, CUDA bugs, and performance that tended to vary with the different research applications and tasks. To a lesser extent this was, and is, also the case with CC1.3 and CC1.2 cards.

Overall, the present situation is not any worse for CC1.1 card users; it is just much better for Fermi users, because there are fewer critical issues with Fermi cards. This is inherent in the cards' design: they are newer, better cards with improved driver support. Most of this is down to NVIDIA's designs and software support (development of drivers and CUDA), which have improved substantially since the CC1.1 generation.

As there are no new driver or CUDA improvements for CC1.1, CC1.2 or CC1.3 cards from NVIDIA, the research is static on this front: the research team cannot eke out any further scientific app improvements for these cards. Indeed, spending time trying to refine such mature apps further would be a waste of time; instead they made compatible apps for CC1.3 and Fermi cards. Any app advancements that do come will primarily be for Fermi cards, but the scientific research still runs on the older cards.
ID: 19834
BarryAZ

Joined: 16 Apr 09
Posts: 163
Credit: 921,733,849
RAC: 0
Message 19836 - Posted: 9 Dec 2010, 16:58:07 UTC - in response to Message 19817.  

I periodically check back on this project to see if there has been any progress resolving work-unit failure issues on mid-range cards (I still have a few 9800GTs running). The answer, sadly, is that things are worse. I just had a batch error out after running anywhere from 3 to 14 hours -- the deadly "error while computing" result.

GPUGrid (for me) has a number of problems.

1) Relatively low credits for GPU cycles (compared to nearly any other CUDA-supporting project).

2) VERY long run times - which increase the possibility of 'error while computing'.

3) Bad citizenship regarding erroring work units: unlike other projects, where the work units either error out early or 'pre-abort', GPUGrid work units are perfectly content to suck up GPU cycles for hours before failing, thus wasting those cycles.

4) Over time, the GPU requirements have grown, making anything but quite expensive, quite high-power CUDA cards increasingly marginal. Further, the long-run/fail scenario has gotten worse rather than better over time.

5) The continued non-support for ATI GPU cards. ATI cards have been getting better and more efficient - as a double-precision, lower-cost and relatively low-power card, the 4850 is quite attractive - and the lack of support from GPUGrid increasingly marginalizes the project.

Given all this, I've elected to detach my 9800-based workstations from GPUGrid; instead they will focus on Collatz, DNetc and, to a lesser degree, Einstein and SETI, as NONE of those projects are plagued by the long-run, 'error while computing' problems seen so frequently here.




Sometimes real (bleeding edge) science requires bleeding edge equipment to get answers while the science is still - well, bleeding edge.

Having said that, my 9800GT and 9800GTX+ cards are (mostly) finishing tasks - except the new 'Fatty' WUs - within 24 hours, without error, provided I abort the *-KASHIF_HIVPR_n1_(un)bound_* sub-class of tasks: that's the only group I'm still having trouble with.

ID: 19836
Greg Beach

Joined: 5 Jul 10
Posts: 21
Credit: 50,844,220
RAC: 0
Message 19994 - Posted: 18 Dec 2010, 22:06:04 UTC

For Linux users: you can use the GT 240 with the latest NVIDIA drivers. There was a problem with the Cairo 2D libraries that caused X.Org to chew a lot of CPU, but that's resolved in the latest version.

Run times for the 6.13 app increased about 50-75% over 6.12, so I don't get the 24-hour bonus, but I do make it within the 48-hour window. At least the work units don't fail like they did before.

For those interested, my configuration is:

Fedora 14 x86_64 (all current updates)
NVIDIA Driver 260.19.29
CUDA 3.2 Toolkit
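To put the 50-75% slowdown Greg mentions into perspective against the 24-hour bonus and the 48-hour window, here is a rough back-of-the-envelope check; the 17-hour 6.12 baseline is an assumed example, not a figure reported in this thread.

# Rough check of the 24 h / 48 h credit windows after a 50-75% slowdown.
# The 17 h baseline under the 6.12 app is an assumed example value.
baseline_hours = 17.0
for slowdown in (1.50, 1.75):
    new_hours = baseline_hours * slowdown
    if new_hours <= 24:
        window = "24 h bonus"
    elif new_hours <= 48:
        window = "48 h window"
    else:
        window = "deadline at risk"
    print(f"x{slowdown:.2f}: {new_hours:.1f} h -> {window}")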
ID: 19994
Beyond

Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 20022 - Posted: 24 Dec 2010, 16:13:31 UTC - in response to Message 19836.  

I periodically check back on this project to see if there has been any progress resolving work-unit failure issues on mid-range cards (I still have a few 9800GTs running). The answer, sadly, is that things are worse. I just had a batch error out after running anywhere from 3 to 14 hours -- the deadly "error while computing" result.

GPUGrid (for me) has a number of problems.

1) Relatively low credits for GPU cycles (compared to nearly any other CUDA-supporting project).

2) VERY long run times - which increase the possibility of 'error while computing'.

3) Bad citizenship regarding erroring work units: unlike other projects, where the work units either error out early or 'pre-abort', GPUGrid work units are perfectly content to suck up GPU cycles for hours before failing, thus wasting those cycles.

4) Over time, the GPU requirements have grown, making anything but quite expensive, quite high-power CUDA cards increasingly marginal. Further, the long-run/fail scenario has gotten worse rather than better over time.

5) The continued non-support for ATI GPU cards. ATI cards have been getting better and more efficient - as a double-precision, lower-cost and relatively low-power card, the 4850 is quite attractive - and the lack of support from GPUGrid increasingly marginalizes the project.

Ditto. I check back from time to time and would like to run the project again if things improve. Personally, I think some help is needed in the programming department.

ID: 20022
skgiven
Volunteer moderator
Volunteer tester

Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 20024 - Posted: 24 Dec 2010, 18:24:59 UTC - in response to Message 20022.  

These guys program in CUDA; this is their expertise.

Things are good and will get better.
ID: 20024
Saenger

Joined: 20 Jul 08
Posts: 134
Credit: 23,657,183
RAC: 0
Message 20045 - Posted: 27 Dec 2010, 9:32:05 UTC - in response to Message 20024.  

These guys program in CUDA; this is their expertise.

Things are good and will get better.

Things were good, got bad, and will perhaps get good again.

For the GT240, things are very, very bad now, and they ran without problems before this fatal change of application.

If the credits for work done haven't changed - and since they are determined by the project alone, I assume that's the case - my computer does about half the work it did before, and only if I heavily babysit each and every single WU. If I don't, as is currently the case because I'm away from the computer and the WUs run the way the project officially wants them to, it will probably not even make the for-credit-only deadline of 4 days, not to speak of the real one of two days.

A re-installation of the old app would have made my computer twice as effective, but the project decided to ditch it rather than cater to recent, just not very expensive, cards.
Greetings from Saenger

For questions about Boinc look in the BOINC-Wiki
ID: 20045
skgiven
Volunteer moderator
Volunteer tester

Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 20048 - Posted: 27 Dec 2010, 13:00:58 UTC - in response to Message 20045.  

Saenger, you present a skewed view that lacks objectivity about the project. The project does cater for GT240s:
Running 24/7, a stock GT240 would get between 16K and 19K credits per day. These cards each get around 14K per day despite being on Vista and not running for about 20 hours per week.

The widely accepted advice is to use a driver that lets GT240 crunchers run the 6.12 app rather than the 6.13 app (which was designed for Fermi cards). More recent drivers are slower for the GT240, and there is a fault in them that causes some cards (especially GT240s) to reduce their clock rates; this seems to happen more often on XP and Linux. GPUGrid does not write the drivers and has no influence over their design. It is also advised to free up one CPU core per GPU card and to use swan_sync=0.
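For anyone unsure how to apply that last piece of advice, here is a minimal sketch for a Linux host where the BOINC client is started by hand; the data directory path is only an example, and on a packaged install you would normally export SWAN_SYNC in the client's startup script instead.

# Minimal sketch: start the BOINC client with SWAN_SYNC=0 in its environment,
# which lets the GPUGrid (ACEMD) app dedicate a CPU core to feeding the GPU.
# Assumes Linux and a manually started client; the path below is an example.
import os
import subprocess

env = dict(os.environ, SWAN_SYNC="0")
subprocess.Popen(["boinc", "--dir", "/var/lib/boinc-client"], env=env)

# "Free up one CPU core per GPU" is set in BOINC's computing preferences
# ("On multiprocessors, use at most N% of the processors"), not in code.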
As for Linux variations, it appears that the most readily available driver for Ubuntu 10.10 is the 260 driver. While it is possible to use a different driver, it's not easy under Ubuntu 10.10, and many previous builds simply do not install. Again, these troubles are down to Linux and NVIDIA, not GPUGrid. Note that Ubuntu 10.10 was released after the 6.12 and 6.13 apps.
Your choice of operating system, driver and configuration determines your ability to crunch the various BOINC projects. It has been explained many times that the new system facilitates more cards. Your use of non-recommended configurations is your choice. I would encourage you to stop complaining about your setup's shortcomings; it does not help anyone. Please either change your setup or accept the situation as it is for now.
ID: 20048
Stoneageman

Joined: 25 May 09
Posts: 224
Credit: 34,057,374,498
RAC: 0
Message 20050 - Posted: 27 Dec 2010, 18:43:31 UTC

Hello Saenger,
Check out Bigtuna.
He's running Linux with a 240 and averaging less than a day per task at stock, with no SWAN_SYNC. It might be worth a PM to find out what his setup is.
ID: 20050
Saenger

Joined: 20 Jul 08
Posts: 134
Credit: 23,657,183
RAC: 0
Message 20081 - Posted: 1 Jan 2011, 14:52:44 UTC

My machine finished its first WU under non-babysitting conditions, i.e. run the way the project has to expect its users to set up their machines, except that it is running an unusual 24/7:
Sent	23 Dec 2010 21:07:21 UTC
Received	30 Dec 2010 18:27:37 UTC
Server state	Over
Outcome	Success
Client state	None
Exit status	0 (0x0)
Computer ID	66676
Report deadline	28 Dec 2010 21:07:21 UTC
Run time	588547.353027
CPU time	2359.39
stderr out	

<core_client_version>6.10.17</core_client_version>
<![CDATA[
<stderr_txt>
# Using device 0
# There is 1 device supporting CUDA
# Device 0: "GeForce GT 240"
# Clock rate: 1.34 GHz
# Total amount of global memory:                 536150016 bytes
# Number of multiprocessors:                     12
# Number of cores:                               96
MDIO ERROR: cannot open file "restart.coor"
# Using device 0
# There is 1 device supporting CUDA
# Device 0: "GeForce GT 240"
# Clock rate: 1.34 GHz
# Total amount of global memory:                 536150016 bytes
# Number of multiprocessors:                     12
# Number of cores:                               96
# Using device 0
# There is 1 device supporting CUDA
# Device 0: "GeForce GT 240"
# Clock rate: 1.34 GHz
# Total amount of global memory:                 536150016 bytes
# Number of multiprocessors:                     12
# Number of cores:                               96
# Time per step (avg over 175000 steps): 	466.165 ms
# Approximate elapsed time for entire WU:  	582706.597 s
19:20:35 (5944): called boinc_finish

</stderr_txt>
]]>

Validate state	Task was reported too late to validate
Claimed credit	8011.53935185185
Granted credit	0
application version	ACEMD2: GPU molecular dynamics v6.13 (cuda31)
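A quick worked check of the figures reported above shows why the task could not validate; the total step count is inferred from the two reported times, so treat it as an estimate.

# Quick check using only the numbers from the stderr output above.
ms_per_step = 466.165          # average over 175,000 steps
total_elapsed_s = 582706.597   # the app's own estimate for the whole WU
deadline_s = 5 * 24 * 3600     # sent 23 Dec 21:07 UTC, deadline 28 Dec 21:07 UTC

steps = total_elapsed_s / (ms_per_step / 1000.0)   # roughly 1.25 million steps (inferred)
print(f"~{steps:,.0f} steps, {total_elapsed_s / 86400:.1f} days of GPU time")
print("finished inside the report deadline:", total_elapsed_s <= deadline_s)   # False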

It even had swan_sync set to 0.
So much for catering for the GT240 - definitely not so. If it isn't running on that machine, you are not interested in normal crunchers with very recent cards like mine.

Everything was running as you have to expect it to. You have to deal with this environment, not us poor crunchers with your absurd demands for babysitting and fiddling.
Greetings from Saenger

For questions about Boinc look in the BOINC-Wiki
ID: 20081
skgiven
Volunteer moderator
Volunteer tester

Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 20082 - Posted: 2 Jan 2011, 15:27:07 UTC - in response to Message 20081.  

You persistently refuse to use recommended configurations, and then persistently complain about the project not supporting your non-recommended setup.
While I understand that setup and configuration are your choice, and that you are disgruntled that the project does not run well under your setup, you should accept that the project cannot afford to go out of its way to facilitate such setups:
Although the project has thousands of crunchers attached, the top 5 crunchers do 10% of the work, the top 20 do 22%, and the top 100 do 45%. If the project went back to the old apps, then no one with a GTX460 could crunch on Linux and no one with a GTS450 could crunch at all. Many of these cards are in use here.
Your setup appears to facilitate other projects rather than this one.
It's down to the application developers to support new cards, and down to crunchers to support the project, not the other way round.

ID: 20082
Saenger

Joined: 20 Jul 08
Posts: 134
Credit: 23,657,183
RAC: 0
Message 20083 - Posted: 2 Jan 2011, 16:42:15 UTC

As I have no way to choose an application, it is the project people's choice alone to send me Fermi apps for my non-Fermi card. I can't do anything to choose 6.12; I can only install the newest drivers.

The project knows my setup perfectly well (not because I posted it here, but because my BOINC manager reports it), yet it persistently refuses to send me a matching app. If I could set anything in my account, I would do so. I even asked about this possibility quite some time ago; not implementing it was another active choice by the project.

I can only be passive, as I have no possibility to choose.
The project actively sends the applications and the WUs, and it alone decides what it deems suitable for my machine.
If they wanted to, they have every possibility to send only matching WUs and apps; they simply don't want to.
They want my machine not to use the CPU.
They want my machine to run the app at low priority.
They want my machine to crunch even extremely long WUs.
They want to send Fermi apps to my non-Fermi machine.
They want to waste my GPU power.
Greetings from Saenger

For questions about Boinc look in the BOINC-Wiki
ID: 20083
ftpd

Joined: 6 Jun 08
Posts: 152
Credit: 328,250,382
RAC: 0
Message 20084 - Posted: 2 Jan 2011, 17:03:47 UTC - in response to Message 20083.  
Last modified: 2 Jan 2011, 17:21:33 UTC

Saenger,

The final choice is yours.

You can crunch GPUgrid or not!

Please stop this wasteful discussion on the forum now.

Happy new year, and go crunch RNA-World!

Use your card for PrimeGrid or SETI@home!
Ton (ftpd) Netherlands
ID: 20084
crazyrabbit1

Joined: 13 Jun 10
Posts: 1
Credit: 641,860
RAC: 0
Message 20085 - Posted: 2 Jan 2011, 21:21:49 UTC - in response to Message 20084.  

I also have a GT240 and I downgraded my driver version to crunch some WUs in a normal time, because I thought the project was worth it as a backup. Normally I do not spend time on a beta project, but after reading this thread I think the project will never be out of beta.
The project is hunting for an application for the newest and fastest GPUs available, and not for as many crunchers as possible. That is not science - just my two cents.
Even if I buy the fastest GPU I can get today, I cannot be sure I will be able to use it on GPUGrid in 12 months. I think the project should think about this.

Only my opinion.
I will upgrade my driver so that I can crunch Einstein.
ID: 20085