Message boards : Number crunching : NOELIAs are back!
**skgiven** · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

Is "Prefer Maximum Performance" presently selected (as in, did you set it since you last rebooted)? PS: Finger pointed at CPDN (no system or WU failures since I stopped crunching climate models)!
**Beyond** · Joined: 23 Nov 08 · Posts: 1112 · Credit: 6,162,416,256 · RAC: 0

> PS: Finger pointed at CPDN (no system or WU failures since I stopped crunching climate models)!

I suspected as much. Maybe a conflict between the apps?
**skgiven** · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

Possibly, or with BOINC, but on at least two occasions it (or something) has caused a blue screen and restart, which would have prevented BOINC and the running apps from closing down properly. Most likely it was the CPDN app/WUs that caused Windows to fail, and the GPUGrid WU failures were just coincidental. I had presumed it was the GPUGrid app that had failed, triggering everything else to fail, because of the startup error messages and logs. I didn't think it was the CPDN apps, as several models had been running for several days. Since I stopped running the CPDN WUs, I've had no problems...
Joined: 18 Jun 12 · Posts: 297 · Credit: 3,572,627,986 · RAC: 0

I run CPDN too and had BSoDs a while back; I traced my problem to USB 3.0. I uninstalled the drivers, rebooted, went into the BIOS and turned USB 3.0 off. That was 4 to 5 months ago and it seems to have fixed it; maybe CPDN and the new USB controller don't get along. I figured the drivers weren't mature enough. I don't need USB 3.0, so it's no big deal for me (knocking on wood).
Joined: 5 May 13 · Posts: 187 · Credit: 349,254,454 · RAC: 0

> Is "Prefer Maximum Performance" presently selected (as in, did you set it since you last rebooted)?

I don't use the Nvidia tool (nvidia-settings in Linux), as this is a headless machine, so effectively everything is "stock". I aborted the NOELIA, as it was going to take waaaay too long (something like 4 days!) and I didn't want to risk a failure midway through, or worse. I am crunching a NATHAN_KID right now at full speed, with the GPU at 60-something degrees and a whole CPU core consumed; total estimated runtime is ~22 h. Bottom line: it has to be something with the NOELIAs, at least some of them. Here is the WU discussed.
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0

There's a discussion about 319.17 performing poorly for someone else running Ubuntu Server 12.04 x86_64; going back to 310.44 fixed it for him. BTW: I had all my Einstein WUs crash upon a driver reset triggered by suspending NOELIAs, so it's not purely CPDN-related. No bluescreen, though. MrS
**skgiven** · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

I also saw a driver restart with Einstein (a couple of weeks ago) and with a POEM WU yesterday (Vista rig), so it's common to many NVidia apps and WDDM operating systems. The reg fix I posted has so far prevented the driver restarts, but not the BSOD/restarts. So, two different issues.
Joined: 26 Jun 09 · Posts: 815 · Credit: 1,470,385,294 · RAC: 0

My last task on the GTX285 was a NATHAN and took 196,443.98 seconds. That is about 10,000 seconds more than a NOELIA took on the same card, so these new NOELIAs are not all bad. Greetings from TJ
**skgiven** · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

My GTX650TiBoost finished a NOELIA_klebe in 13 h 51 min (49,849 sec) on Ubuntu 13.04, NVidia 304.88, BOINC 7.0.27. Despite restarting several times while I was configuring things, it was still 5.5% faster than on Server 2008 (which is a bit faster than Windows 7) on the same rig:

6890757 · 4473004 · 24 May 2013 21:20:02 UTC · 25 May 2013 12:13:07 UTC · Completed and validated · run time 49,848.80 s · CPU time 21,179.21 s · credit 127,800.00 · Long runs (8-12 hours on fastest card) v6.18 (cuda42)

TJ, you really should sell that GTX285 heater and get something new (cheap to buy, much faster, less expensive to run). A GTX650Ti would more than triple the performance, a GTX650TiBoost would almost quadruple it, and a GTX660 would be around 4.5 times as fast.
Joined: 26 Jun 09 · Posts: 815 · Credit: 1,470,385,294 · RAC: 0

> TJ, you really should sell that GTX285 heater and get something new (cheap to buy, much faster, less expensive to run). A GTX650Ti would more than triple the performance, a GTX650TiBoost would almost quadruple it, and a GTX660 would be around 4.5 times as fast.

I did. That is why I wrote "last task": it's out of the rig and the new GTX660 is in. I'm running MilkyWay now for testing, but it seems slower to me, as a WU takes longer to finish (around 8 minutes, versus 6 minutes on the old GTX285), though the WUs are not the same. Everything seems slow, even browsing; I posted about this in the cards thread. It was indeed a heater, though; the GTX660 runs cooler, now at 54°C. Greetings from TJ
**skgiven** · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

MilkyWay requires FP64 (double precision), so it's a bad project for GK104 cards. The GTX285 was better there because it has better double precision than a GTX660. For single precision and CUDA 4.2, the GTX660 is much better than a GTX285.
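To see why the older card can still win at FP64, here is a minimal sketch under stated assumptions: the peak FP32 figures and the FP64:FP32 ratios (roughly 1/8 for the GT200-based GTX285, roughly 1/24 for consumer Kepler cards) are approximate values supplied for illustration, not numbers taken from this thread.

```python
# Rough peak-throughput comparison, illustrating why a GTX285 can beat a
# GTX660 at an FP64-heavy project like MilkyWay while losing badly in FP32.
# The FP32 figures and FP64:FP32 ratios below are approximate assumptions.
cards = {
    "GTX 285 (GT200)":  {"fp32_gflops": 708.0,  "fp64_ratio": 1 / 8},   # assumed
    "GTX 660 (Kepler)": {"fp32_gflops": 1881.6, "fp64_ratio": 1 / 24},  # assumed
}

for name, spec in cards.items():
    fp64 = spec["fp32_gflops"] * spec["fp64_ratio"]
    print(f"{name}: ~{spec['fp32_gflops']:.0f} GFLOPS FP32, ~{fp64:.0f} GFLOPS FP64")
```

Under these assumptions the GTX660 has well over twice the single-precision throughput, yet its double-precision peak comes out slightly below the GTX285's, which matches the behaviour described above.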
Joined: 23 Dec 09 · Posts: 189 · Credit: 4,798,881,008 · RAC: 0

Just a quick question: I run an 8600 GT, a 9800 GT and some 8400s (never dump old gear), not on GPUGRID but on PrimeGrid. I am still thinking of buying something from a GTX 660 up to a GTX 770 in the near future, but looking at my electric bill from last month (roughly USD 350.00) I was discouraged from investing in an additional card. Since you have been discussing the GTX 285 just now: do you think it would pay off to buy a GTX 650 Ti and dump the 9800 GT and the 8600 GT? The latter is causing trouble and brings in about 10,000 credits each day. The 8400s had just been lying around, so I thought why not put them in some computers.
**skgiven** · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

After a quick look I think PG's credits are roughly comparable to here, and a 650Ti is a decent enough GPU for here. The 8600GT, 9800GT and 8400s are of no use here and are SP only, so they're no use at MW either. While they might work at PG and several other projects, their usefulness also depends on project up-time; POEM might work on an 8400GS, but there aren't enough WUs around to justify keeping the card for that project. Individually these cards are not very expensive to run, but their performance is relatively poor and project compatibility is limited. They might still be of some use as entry-level gaming cards, so you might get something for them, but not much.

If the $350/month only came down by ~$50, a GTX650Ti would pay for itself in a couple of months, and a 660 would probably pay for itself within 3 or 4 months, just by getting rid of the old cards' running cost.

An 8400GS has a TDP of between 25 W and 38 W, so there isn't a lot of electricity being used per GPU. The 8600GT only has a TDP of 43 W, so again there isn't much of a saving from one, but neither of these cards can do much crunching anyway. The 9800GT, however, varies between 59 W and 125 W, depending on the model (?).

I see you have two 8400GS GPUs running at PG, along with an 8600GT and a 9800GT. These are bringing you a RAC of <35K for ~(40 or 65 + 35 + 60 to 110) W = 125 W to 210 W, depending on the models. Obviously it's your choice what you crunch and who you crunch for, but if you pulled those GPUs and ran a GTX650Ti here you could get a RAC of ~190,000 for ~100 W. So you would be saving some electricity (between 35 W and 120 W) and earning more than five times the credits. Presuming your 9800GT has either a 105 W or a 125 W TDP (it's not a GE version), the electricity saving would be at least 50 W.

If you want to save further on the electricity front, get rid of your E6750 system, and possibly your E8500. Even if you just stopped using those CPUs to crunch, you would save as much as you would from getting rid of an old GPU. You have three good rigs at PG, but two old beasts that don't really do much other than eat electricity.
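As a rough check on the payback arithmetic in the post above, here is a minimal sketch. The electricity price (USD 0.25/kWh) and the card price (USD 130) are placeholder assumptions, not figures from the thread; the ~$50/month saving and the ~125 W of old-card draw are the numbers used in the post.

```python
# Back-of-the-envelope payback estimate for swapping the old GPUs for one newer card.
# usd_per_kwh and card_price_usd are placeholder assumptions, not thread figures.
def monthly_cost(watts, usd_per_kwh=0.25):
    """Cost of running a load 24/7 for 30 days at the assumed electricity price."""
    return watts * 24 * 30 / 1000 * usd_per_kwh

def payback_months(card_price_usd, monthly_saving_usd):
    """How long the monthly saving takes to cover the purchase price."""
    return card_price_usd / monthly_saving_usd

print(monthly_cost(125))          # old cards at ~125 W: ~$22.50/month
print(payback_months(130, 50))    # assumed $130 card, $50/month saved: 2.6 months
```

With a saving closer to the ~$22/month that 125 W alone accounts for at the assumed rate, the payback stretches to roughly six months rather than two, which is still well within the card's useful life.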
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0

I second SK's suggestion. And you wouldn't have to throw those Core 2 Duos away: they're still decent surf stations / office boxes, especially if equipped with 4 GB RAM and an SSD. MrS
Joined: 23 Dec 09 · Posts: 189 · Credit: 4,798,881,008 · RAC: 0

> If the $350/month only came down by ~$50, a GTX650Ti would pay for itself in a couple of months, and a 660 would probably pay for itself within 3 or 4 months, just by getting rid of the old cards' running cost.

That's what I was thinking: dismiss all the old cards, replace them with a GTX 650 Ti and save enough on the electric bill to justify the investment.

> An 8400GS has a TDP of between 25 W and 38 W, so there isn't a lot of electricity being used per GPU.

It does about 3,371 credits a day at PG = 88 credits per watt.

> The 8600GT only has a TDP of 43 W, so again there isn't much of a saving from one, but neither of these cards can do much crunching anyway.

Around 10,000 credits a day at PG = 232 credits per watt.

> The 9800GT, however, varies between 59 W and 125 W, depending on the model (?).

Around 25,000 credits a day at PG = 200 credits per watt.

> I see you have two 8400GS GPUs running at PG, along with an 8600GT and a 9800GT. These are bringing you a RAC of <35K for ~(40 or 65 + 35 + 60 to 110) W = 125 W to 210 W, depending on the models. Obviously it's your choice what you crunch and who you crunch for, but if you pulled those GPUs and ran a GTX650Ti here you could get a RAC of ~190,000 for ~100 W. So you would be saving some electricity (between 35 W and 120 W) and earning more than five times the credits. Presuming your 9800GT has either a 105 W or a 125 W TDP (it's not a GE version), the electricity saving would be at least 50 W.

That just made me think: my GTX 570 (factory overclocked) has a TDP of 218 W and gives around 250,000 credits a day... OK, you can't throw it away, because of the grey energy after 2 years of operation... So it should be a GTX 650 Ti 2 GB and not a Boost, because of the TDP?

> If you want to save further on the electricity front, get rid of your E6750 system, and possibly your E8500. Even if you just stopped using those CPUs to crunch, you would save as much as you would from getting rid of an old GPU.

> I second SK's suggestion. And you wouldn't have to throw those Core 2 Duos away: they're still decent surf stations / office boxes, especially if equipped with 4 GB RAM and an SSD.

PG, Einstein and Collatz are just a side show; my real interest is in climateprediction.net, GPUGRID (skgiven, there we have the same interests) and, to some extent, MalariaControl. So I thought I would use all the gear I have for BOINC, but the electric bill made me think... The two Core 2 Duos will only come in if there is climateprediction work around in the future. MrS, you are right, the two Core 2 Duos are just reserve computers; if I have some interns or other helpers who need a computer, they work well for them.

Finally, on the bluescreens topic: I still think climateprediction.net goes very well along with GPUGRID.
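The credits-per-watt comparison in this post is easy to reproduce; here is a minimal sketch using the daily-credit and wattage figures quoted in the thread (the GTX 650 Ti line reuses skgiven's ~190,000 RAC at ~100 W estimate rather than a measured value).

```python
# Credits-per-watt comparison using the figures quoted in this thread.
# Daily credit and wattage values are the ones the posters used, not measurements.
cards = [
    ("8400 GS",        3_371,   38),
    ("8600 GT",       10_000,   43),
    ("9800 GT",       25_000,  125),
    ("GTX 570 (OC)", 250_000,  218),
    ("GTX 650 Ti",   190_000,  100),   # skgiven's GPUGrid estimate
]

for name, credits_per_day, watts in cards:
    print(f"{name}: {credits_per_day / watts:.0f} credits/day per watt")
```

On these numbers the old cards sit in the 90-230 credits/day per watt range, while the GTX 570 and a GTX 650 Ti land at roughly 1,100 and 1,900, which is what drives the "replace the old cards" advice.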