Message boards : News : Old Noelia WUs
Author | Message |
---|---|
We have checked the error statistics and they are too high to be normal, so we are going to abort them. | |
ID: 29045 | Rating: 0 | rate: / | |
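(Editorial sketch of the kind of check described above. All numbers and names below are invented for illustration; GPUGRID's actual error statistics and abort criteria are not published in this thread.)

```python
# Minimal sketch of a batch error-rate check. All counts and the threshold
# are made-up assumptions, not GPUGRID's real numbers or code.
def error_rate(errors: int, results: int) -> float:
    """Fraction of returned results that ended in error."""
    return errors / results if results else 0.0

NORMAL = 0.05  # assumed typical error rate for a healthy batch

batches = {"NOELIA_old": (412, 1500), "NATHAN_new": (30, 1500)}
for name, (errors, results) in batches.items():
    rate = error_rate(errors, results)
    verdict = "abort batch" if rate > 3 * NORMAL else "keep running"
    print(f"{name}: {rate:.1%} errors -> {verdict}")
```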
I have put a whole bunch of simulations on the long queue to replace the cancelled ones. It is a system for which I have been meaning to run more simulations, to get more statistics. They should pose no problems, but I'll be keeping an eye on them. As always, please let us know if the case is otherwise. | |
ID: 29046 | Rating: 0 | rate: / | |
Thank you guys for the very fast intervention. I have 13 of the new NATHAN units processed halfway so far, and they are all running just perfectly. Will let you know in exactly 2 hours if they all end successfully in here. | |
ID: 29048 | Rating: 0 | rate: / | |
I am very disappointed with this project lately. Yesterday two NOELIA units, with 8 hours of processing each, failed with execution errors; today another two NOELIA units, with 10 hours of processing each, had to be aborted by the user. This is unacceptable and not serious at all. | |
ID: 29049 | Rating: 0 | rate: / | |
I am very disappointed with this project lately. Yesterday two NOELIA units, with 8 hours of processing each, failed with execution errors; today another two NOELIA units, with 10 hours of processing each, had to be aborted by the user. This is unacceptable and not serious at all. I agree with you. Lots of WU problems recently, even on the short queue. I don't know if I'll continue to support this project if these problems keep up. ____________ Team Belgium | |
ID: 29050 | Rating: 0 | rate: / | |
The new NATHAN units are processing just fine. A small 23.88 MB result, 70,800 credits, very good ones. | |
ID: 29052 | Rating: 0 | rate: / | |
Yes, I've just reported my first completed one - task 6588199 - on the same host, same settings, same session (no reboot) as the one which failed a Noelia this morning. | |
ID: 29053 | Rating: 0 | rate: / | |
All my first 13 units were processed without any issue. The second batch is processing. I say we are back in business, I'm glad. | |
ID: 29054 | Rating: 0 | rate: / | |
I have put a whole bunch of simulations on the long queue to replace the cancelled ones. It is a system for which I have been meaning to run more simulations, to get more statistics. They should pose no problems, but I'll be keeping an eye on them. As always, please let us know if the case is otherwise. Thanks Nate, want to mention that these NATHAN WUs are running great on my 4 GTX 460 768MB cards too, so I think you've done some honing. Thanks again, appreciate it. | |
ID: 29061 | Rating: 0 | rate: / | |
Been seeing multiple cases where new WUs are not checkpointing | |
ID: 29065 | Rating: 0 | rate: / | |
Yummy, there are some TONYs in the pipe as well... tasty WUs, gimme gimme.... crunch them all and gladly pay the energy bills when all works like a charm.... | |
ID: 29066 | Rating: 0 | rate: / | |
Been seeing multiple cases where new WUs are not checkpointing Yes, I am still having problems in Linux also. I thought it was just the NOELIAs, but the new NATHANs are doing it also. The tasks will lock up or remain at 0%, and the system has to be rebooted for the GPU to work again on any project. Might be the new app; my Linux systems had not needed a reboot in months before this. | |
ID: 29070 | Rating: 0 | rate: / | |
I switched to short ones only and recently got a short NOELIA task which after 24 hours was stuck at 0%. Too bad I was away from the machine and realized it too late. | |
ID: 29072 | Rating: 0 | rate: / | |
The short Noelias run fine on my 550Ti. They take a little longer than the previous 4.2 ones (100-200 sec. more), use a little more CPU, and grant less credit: 8,700 versus 10,500 previously. | |
ID: 29077 | Rating: 0 | rate: / | |
The short Noelias run fine on my 550Ti. They take a little longer than the previous 4.2 ones (100-200 sec. more), use a little more CPU, and grant less credit: 8,700 versus 10,500 previously. Yup, they crunch "fine" on my GTX 560. I noticed though that they often crash the NV driver, and they also show CUDA errors when looking at the task details, but they complete fine here and I get valid results ____________ Team Belgium | |
ID: 29079 | Rating: 0 | rate: / | |
They did run well for a few days, but this one: | |
ID: 29080 | Rating: 0 | rate: / | |
The short Noelias run fine on my 550Ti. They take a little longer than the previous 4.2 ones (100-200 sec. more), use a little more CPU, and grant less credit: 8,700 versus 10,500 previously. Long queue Noelias run fine on my GTX 460 and my GTX 580, both running the 310.70 driver. ____________ | |
ID: 29088 | Rating: 0 | rate: / | |
The short Noelias run fine on my 550Ti. They take a little longer than the previous 4.2 ones (100-200 sec. more), use a little more CPU, and grant less credit: 8,700 versus 10,500 previously. Well, I've disabled long ones for the time being, as the last 2 long WUs I crunched errored out, so I'm crunching only short ones at the moment. I mostly get short NOELIAs and so far, so good. I'm able to report valids ____________ Team Belgium | |
ID: 29094 | Rating: 0 | rate: / | |
All short NOELIA runs from me with my two GTX 650 Ti GPUs, too: no problems. | |
ID: 29096 | Rating: 0 | rate: / | |
I have put a whole bunch of simulations on the long queue to replace the cancelled ones. It is a system for which I have been meaning to run more simulations, to get more statistics. They should pose no problems, but I'll be keeping an eye on them. As always, please let us know if the case is otherwise. These ran smoothly until yesterday evening, when one failed: http://www.gpugrid.net/result.php?resultid=6595932 It was the same thing that was happening with the last bunch of TONI units. Today, I had an adventure with this unit: http://www.gpugrid.net/result.php?resultid=6600488 It finished successfully, but barely. When it was about 25% done, I got an error message saying that acemd.2865.exe had failed and the unit wasn't crunching, so I suspended it before I got a computation error in BOINC Manager. The video card's speed and settings were reset (to a slower speed), so I rebooted the computer, resumed the unit, and it continued to crunch. At 90%+ completion, the computer froze, so I had to unplug it and restart. It finished successfully! But the subsequent unit refused to start crunching, and the video card speed and settings were again reset to a slower speed. I had to suspend that unit and reboot. It is running okay right now, and hopefully it won't crash. | |
ID: 29100 | Rating: 0 | rate: / | |
True. Not 100%, but doable. | |
ID: 29108 | Rating: 0 | rate: / | |
If I babysit the machines, I mean.... I will be traveling in two days, and then the worst is expected. | |
ID: 29109 | Rating: 0 | rate: / | |
If you travel, I would recommend getting an app on a mobile device to bring with you that will allow you to remote into the computers. An example would be TeamViewer, which is free. | |
ID: 29110 | Rating: 0 | rate: / | |
If you travel, I would recommend getting an app on a mobile device to bring with you that will allow you to remote into the computers. An example would be TeamViewer, which is free. Exactly what I do on my tablet. Problem is, when the big rig starts to reboot, I can't access it. Hope it won't happen. | |
ID: 29111 | Rating: 0 | rate: / | |
Now I got more problems even with short Noelia tasks. They were stuck, caused errors, or crashed the app. A reboot was needed to start a new GPU task. | |
ID: 29113 | Rating: 0 | rate: / | |
Same here. Got 5 boxes running the shorter ones, and I think all 5 have hung WUs right now, one at 37 hours... | |
ID: 29114 | Rating: 0 | rate: / | |
No problems with short NOELIA tasks. I have not attempted any long NOELIAs for about a week. | |
ID: 29115 | Rating: 0 | rate: / | |
Short Noelias were going fine, until I had to abort this one, which was restarting repeatedly with error: SWAN : FATAL : Cuda driver error 702 in file 'swanlibnv2.cpp' in line 1841. | |
ID: 29116 | Rating: 0 | rate: / | |
226 (0xffffffffffffff1e) ERR_TOO_MANY_EXITS error on the latest beta units. This is a new one! | |
ID: 29122 | Rating: 0 | rate: / | |
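(Editorial note on the two forms of that exit status: the hex value is simply -226 viewed as an unsigned 64-bit two's-complement pattern; ERR_TOO_MANY_EXITS is the name quoted in the post, and the conversion itself is plain arithmetic.)

```python
# BOINC logs the task exit status both as a decimal code and as an unsigned
# 64-bit hex pattern; the two forms below describe the same value.
raw = 0xFFFFFFFFFFFFFF1E

# Reinterpret the unsigned 64-bit pattern as signed two's complement.
signed = raw - (1 << 64) if raw >= (1 << 63) else raw

print(hex(raw), "->", signed)   # 0xffffffffffffff1e -> -226
assert abs(signed) == 226       # matches ERR_TOO_MANY_EXITS (226)
```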
SWAN : FATAL : Cuda driver error 702 in file 'swanlibnv2.cpp' in line 1841. It looks like most of the major errors are gone (severe error % is good), but this one does seem to be occurring more frequently than we would like. We'll see if we can find a cause. | |
ID: 29123 | Rating: 0 | rate: / | |
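(Editorial note for context on the code itself: in the CUDA driver API, 702 is CUDA_ERROR_LAUNCH_TIMEOUT, raised when a kernel runs past the display watchdog limit. The snippet below is only an informal decoder for codes seen in this thread, not GPUGRID's code; the notes in it are an editor's summary of the CUDA documentation.)

```python
# Informal decoder for CUresult codes quoted in this thread. The numeric
# values are from the CUDA driver API; the notes are an editor's summary.
CU_ERRORS = {
    701: "CUDA_ERROR_LAUNCH_OUT_OF_RESOURCES (kernel requested too much)",
    702: "CUDA_ERROR_LAUNCH_TIMEOUT (kernel exceeded the display watchdog)",
}

def describe(code: int) -> str:
    return CU_ERRORS.get(code, f"unrecognized CUresult {code}")

# The SWAN error line quoted above carries such a code:
print("swanlibnv2.cpp, line 1841:", describe(702))
```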
dmesg from the beta WU's | |
ID: 29124 | Rating: 0 | rate: / | |
226 (0xffffffffffffff1e) ERR_TOO_MANY_EXITS error on the latest beta units. This is a new one! Is it my imagination, or did you change the error message for these units? | |
ID: 29125 | Rating: 0 | rate: / | |
Verified the beta WUs hang the GPU in some manner. rmmod-ing the nvidia module and modprobing it again does not resolve it. The system must be rebooted to recover from whatever the WU is causing. On NVIDIA 313.26. | |
ID: 29126 | Rating: 0 | rate: / | |
I wanted to chime in to say I just had 12 NOELIA tasks fail hard on the "ACEMD beta version v6.49 (cuda42)" app, using Windows 8 Pro x64, BOINC v7.0.55 x64 beta, nVidia 314.14 beta drivers, GTX 660 Ti (which usually works on GPUGRID) and GTX 460 (which usually works on World Community Grid) | |
ID: 29127 | Rating: 0 | rate: / | |
These NOELIA acemdbeta WUs are all hanging for me. They get stuck at a "Current CPU Time" of between 1 and 5 seconds. I had to abort them. | |
ID: 29128 | Rating: 0 | rate: / | |
On my system (Vista 32-bit, BOINC 6.10.58, NVIDIA 314.7) the latest Noelia beta errored out after more than 11 hours. It is this one: | |
ID: 29135 | Rating: 0 | rate: / | |
I know this is the beta app, but... They would need 10 to 15 computers (dual-booting or virtual PCs) with every operating system on them, plus all the different versions of BOINC everyone's running, not to mention the different video cards. They'll never be able to please everyone. I always suspend other jobs or clear them out if I know I'm going to beta test, but that's just me, not 20/20 hindsight. What I'm trying to say is that if they did do some limited testing, who's to say which OS they'd choose? It certainly wouldn't be Windows 8; it's turning out to be a flop and a real disappointment for Microsoft and their vendors. I don't want to sound too harsh (if I do, I apologize), but that's what beta testing is all about, right? | |
ID: 29137 | Rating: 0 | rate: / | |
I agree with you flashawk. We crunchers need to do the testing with all the different set-ups and platforms. Win8 is a pain indeed. ____________ Greetings from TJ | |
ID: 29138 | Rating: 0 | rate: / | |
Devs, do you run some of these tasks before issuing them to us? If not, you should, because when the bugged tasks get to us, the failures waste many more resources than they would if you tested them locally first. We do test them locally, to the extent we can. Part of the issue is that running locally for us vs. running on BOINC are not comparable. We do have an in-house fake BOINC project, but even that isn't exactly comparable to sending to you users. Additionally, we have very limited ability to test on Windows. In the future we will improve there, but we have limited resources right now. What we are thinking is that this might be related to the Windows application. Has anyone who experiences these problems seen them on a linux box? Is it only Windows? The more we know, the more quickly we can improve. The last thing we want is to crash your machines. A failed WU is one thing. Locking up cruncher machines is much, much worse. Please let us know so we can fix it. | |
ID: 29139 | Rating: 0 | rate: / | |
I'm running these on Win7 with a GTX670 and often get a Windows message that the NVIDIA driver stopped working | |
ID: 29140 | Rating: 0 | rate: / | |
The previous bunch of Noelia betas did well on my WinVista 32-bit PC with driver 314.7 and BOINC 6.10.58. The batch from the last days errors out after hours, with the message that the acemd driver stopped and has recovered from an unexpected error. I am now trying the long runs from Nathan on my GTX550Ti. | |
ID: 29141 | Rating: 0 | rate: / | |
Hi there, I'm having problems on my Linux box; I haven't been able to run any work at all for 3-4 days.. | |
ID: 29142 | Rating: 0 | rate: / | |
Thanks for the reply, Nate. I'm glad to hear that you guys are looking to improve the testability for Windows, even before issuing tasks on the Beta application to us Beta users. | |
ID: 29143 | Rating: 0 | rate: / | |
What we are thinking is that this might be related to the Windows application. Has anyone who experiences these problems seen them on a linux box? Is it only Windows? The more we know, the more quickly we can improve. The last thing we want is to crash your machines. A failed WU is one thing. Locking up cruncher machines is much, much worse. Please let us know so we can fix it. I've just aborted one of your long run tasks which looked as if it was going bad - http://www.gpugrid.net/workunit.php?wuid=4246107 (replication _6 is always a bad sign). The first cruncher to try it was running Linux. | |
ID: 29144 | Rating: 0 | rate: / | |
All NOELIA tasks at the moment freeze my Linux box so completely that I have to restart the computer. What's worse, I de-selected beta tasks, but after the reboot BOINC downloads more of those tasks from the ACEMDBETA queue and I'm back in the reboot cycle. | |
ID: 29146 | Rating: 0 | rate: / | |
All NOELIA tasks at the moment freeze my Linux box so completely that I have to restart the computer. What's worse, I de-selected beta tasks, but after the reboot BOINC downloads more of those tasks from the ACEMDBETA queue and I'm back in the reboot cycle. Deselect Run test applications? as well. | |
ID: 29147 | Rating: 0 | rate: / | |
Ok, I still had test applications selected; after deselecting that and resetting the project I got a NATHAN long run task, which is also pretty odd, because I have only short runs enabled at the moment. | |
ID: 29150 | Rating: 0 | rate: / | |
Ok, I still had test applications selected; after deselecting that and resetting the project I got a NATHAN long run task, which is also pretty odd, because I have only short runs enabled at the moment. There aren't any short run tasks available today. Might you have had If no work for selected applications is available, accept work from other applications? selected as well? | |
ID: 29151 | Rating: 0 | rate: / | |
Task 109nx33-NOELIA_109n_equ-1-2-RND6949_0 | work unit 4248581 | computer 139265 | sent 12 Mar 2013 8:04:33 UTC | reported 13 Mar 2013 14:48:02 UTC | Error while computing | run time 58,742.39 s | CPU time 1.73 s | credit --- | ACEMD beta version v6.49 (cuda42) | |
ID: 29154 | Rating: 0 | rate: / | |
All of these betas failed on my machine. Moreover, I opted out from beta, updated, but am still receiving them (and only them). | |
ID: 29156 | Rating: 0 | rate: / | |
Martin, did you follow this thread on how to completely opt out of beta tasks? | |
ID: 29158 | Rating: 0 | rate: / | |
Seven in a row of the 6.49 ACEMD beta NOELIAs failed for me also, all in 8 seconds or less, so I am giving it a rest for now. That was on a Kepler GTX 650 Ti card, and I will try a Fermi GTX 560 tomorrow to see if that does any better. This is on Win7 64-bit, and BOINC 7.0.56 x64. Those cards have been basically error free for the last several days, since the last Noelia errors. | |
ID: 29159 | Rating: 0 | rate: / | |
A few NOELIA WUs failed recently on my system too. | |
ID: 29164 | Rating: 0 | rate: / | |
Tsukiouji, | |
ID: 29165 | Rating: 0 | rate: / | |
The problem is with the link. The filter "http://www.gpugrid.net/results.php?userid=94436" can be set, but those results can be seen by the owner only. There is no problem with "host" filters, e.g. http://www.gpugrid.net/results.php?hostid=144019. | |
ID: 29166 | Rating: 0 | rate: / | |
Thanks for the explanation -- I was able to find the user's tasks by clicking on their name, and looking at the tasks for the only computer. Link: http://www.gpugrid.net/results.php?hostid=144019 | |
ID: 29170 | Rating: 0 | rate: / | |
Martin, did you follow this thread on how to completely opt out of beta tasks? No, but I'm in all other queues, so there is (plenty of) other work. But the problem is solved now. Admins cancelled existing beta tasks and no others are waiting. I'll opt in to beta again to help test on Win platform. | |
ID: 29174 | Rating: 0 | rate: / | |
Now I am not sure if this is an error with my machine (it has been offline for a few weeks), or if it is due to a bug in the Noelia tasks it got earlier today. <message> The other had a much shorter but similar output: <message> ____________ | |
ID: 29185 | Rating: 0 | rate: / | |
I'm still having the BSOD/reboot thing on my triple 690 rig, every two days, even with the NATHAN long units. Only a full cache abort and clean units will solve it, but then in two days another one will come. | |
ID: 29189 | Rating: 0 | rate: / | |
After working for a few days the Nathan packages are also crashing the application and my driver. | |
ID: 29281 | Rating: 0 | rate: / | |
I'm still having the BSOD/reboot thing on my triple 690 rig, every two days, even with the NATHAN long units. Only a full cache abort and clean units will solve it, but then in two days another one will come. On my end, I'm getting suspicious that one of the 690s is not that strong; removing its OC seems to improve the machine's stability. This issue should be a machine fault, because none of my other machines does it, plus no one else seems to have the same BSOD problem with the current units, so the problem is here. Just wanted to share it, because that's not a project fault. BTW, I would like to have more news from the results front, so I can proudly share it with my family and friends, and maybe find some more volunteers for the cause. Typo edited* | |
ID: 29287 | Rating: 0 | rate: / | |
I'm still having the BSOD/reboot thing on my triple 690 rig, every two days, even with the NATHAN long units. Only a full cache abort and clean units will solve it, but then in two days another one will come. BSODs Strike Back! I don't have my 690's OC'ed, and my system crashed today with NATHAN units, e.g. http://www.gpugrid.net/workunit.php?wuid=4313870 (I deactivated the project before error reports from this unit could be assembled, as the system BSODs before BOINC notices it). I had been working through them for a month or so without a BSOD, after experiencing the same crash reports seen elsewhere around here (e.g. http://www.gpugrid.net/forum_thread.php?id=3308&nowrap=true#29090). I will be crunching my backup project until this is fixed. | |
ID: 29303 | Rating: 0 | rate: / | |
I just noticed I have two Noelia WUs on my Linux boxes for the first time in a few weeks. They were both stuck at 0% and the boxes had to be rebooted to get the GPU running again. | |
ID: 29330 | Rating: 0 | rate: / | |
I just noticed I have two Noelia WUs on my Linux boxes for the first time in a few weeks. They were both stuck at 0% and the boxes had to be rebooted to get the GPU running again. Exact same thing here, Windows XP Pro 64-bit. I had 3 NOELIAs come through; I caught one at 0% after 5 1/2 hours of crunching on a GTX680: the GPU was at 99%, the memory controller was at 0%, along with the CPU usage for that GPU. The other 2 caused a 2685 error, and one NOELIA hosed a CPDN work unit that I had over 250 hours on. I am not signed on to do beta testing; these came through the regular server (I also did a TONI without issue). Interesting that they slipped them through like this, makes me feel like they don't trust us. | |
ID: 29335 | Rating: 0 | rate: / | |
Interesting that they slipped them through like this, makes me feel like they don't trust us. No, the way I understand it is that Noelia is testing new functionality, which had been added in the recent app update but wasn't used in previous WUs (except the infamous Noelias). To me it looks like there's more alpha and beta testing needed here. And serious debugging. MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 29338 | Rating: 0 | rate: / | |
Same here: this morning the machine (Ubuntu 64, 2x 660Tis) was hung. I rebooted to see that there was a Noelia stuck at 0%, waited to see if it would progress... no way... a couple more reboots to finally abort and get back to normality. | |
ID: 29339 | Rating: 0 | rate: / | |
Well, I guess you're getting information through the moderators' lounge; I seriously didn't see any post about those work units coming through, or I would have been on the lookout. | |
ID: 29340 | Rating: 0 | rate: / | |
On 30th March I had a Short task sit for 18h before I spotted it doing nothing, 47x2-NOELIA_TRYP_0-2-3-RND8854_6 (6.52app). Since then I've had three Nathan tasks fail and one Noelia 148nx9xBIS-NOELIA_148n-1-2-RND8819_1 (all 6.18apps). | |
ID: 29341 | Rating: 0 | rate: / | |
Nothing has changed with the NATHAN tasks. They have been running for weeks with historically low error rates, so they really shouldn't be a problem, as far as I can imagine. I know almost nothing at this point about the new NOELIA WUs, but I have suspended them for now considering the complaints. | |
ID: 29360 | Rating: 0 | rate: / | |
Nothing has changed with the NATHAN tasks. They have been running for weeks with historically low error rates, so they really shouldn't be a problem, as far as I can imagine. I know almost nothing at this point about the new NOELIA WUs, but I have suspended them for now considering the complaints. Ya buddy, you got the touch. Maybe you can work your magic on rebuilding the NOELIAs; you seem to have the "Right Stuff". I admit I have no idea what goes into writing these WUs; Noelia must be doing something fundamentally different from the rest of the scientists at GPUGRID. I'm hoping she'll get it right soon and this will all have been worth it. | |
ID: 29362 | Rating: 0 | rate: / | |
Please NO MORE NEW LONG NOELIA tasks until they are really tested. | |
ID: 29363 | Rating: 0 | rate: / | |
There have been some really odd errors in the last couple of months, | |
ID: 29364 | Rating: 0 | rate: / | |
Nothing has changed with the NATHAN tasks. They have been running for weeks with historically low error rates, so they really shouldn't be a problem, as far as I can imagine. I know almost nothing at this point about the new NOELIA WUs, but I have suspended them for now considering the complaints. Thank you Nate for suspending them. I really hope you guys can figure out the problems in your staging environment, before even sending them through the beta app. If there's anything I can do to help (like some sort of pre-Beta test, if possible), you can PM me. I really enjoy testing, especially when I know it might fail, but I expect the production apps to be near-error-free. Regards, Jacob | |
ID: 29395 | Rating: 0 | rate: / | |
I just got another NOELIA long wu and it gave me an error message after 30 seconds of run time, I had to reboot to get the GPU back working. | |
ID: 29406 | Rating: 0 | rate: / | |
Had a NOELIA beta fail this morning, 291px1x1BIS-NOELIA_291p_beta-1-2-RND9212 | |
ID: 29407 | Rating: 0 | rate: / | |
063ppx1xBIS-NOELIA_063pp_beta-0-2-RND4224_2 | |
ID: 29409 | Rating: 0 | rate: / | |
If you travel, I would recommend getting an app on a mobile device to bring with you that will allow you to remote into the computers. An example would be TeamViewer, which is free. You can set TeamViewer to start with Windows and auto-login, so if the computer at home is set up this way and it reboots, you will still have access to it. | |
ID: 29411 | Rating: 0 | rate: / | |
Further to http://www.gpugrid.net/forum_thread.php?id=3318&nowrap=true#29409 | |
ID: 29412 | Rating: 0 | rate: / | |
I aborted 063px1x1BIS-NOELIA_063p_beta-1-2-RND8034_1 after it had given the "acemd.2865P.exe has encountered a problem ..." popup error three times in succession. | |
ID: 29413 | Rating: 0 | rate: / | |
I guess I should have clarified, the NOELIA that crashed on me came through the regular server. Richard, I always get the 2865P error, I thought it was a Windows XP thing. | |
ID: 29418 | Rating: 0 | rate: / | |
I had another NOELIA sneak through on the non-beta long run server. I didn't get the error message this time; it ran for 59 minutes and remained at 0%. The CPU usage was at 0%, the GPU usage was at 0% and the memory controller was at 0%, so I aborted it and had to reboot my computer to get my GTX680 working again. | |
ID: 29421 | Rating: 0 | rate: / | |
Also just noticed on one of my Linux machines a NOELIA beta task must have been sent using the non-beta long run server & had stalled 24hrs ago. | |
ID: 29422 | Rating: 0 | rate: / | |
I got one Noelia on a Vista Ultimate x86 system with a GTX550Ti. It took 93,686.14 seconds to complete, but it did, with almost 95,000 credits. | |
ID: 29562 | Rating: 0 | rate: / | |
Just had two Noelia tasks fail: | |
ID: 29830 | Rating: 0 | rate: / | |
Just had two Noelia tasks fail: They both completed successfully on other machines after you posted, but I don't see any rhyme or reason for it. The machines that failed all did so quickly (in a few seconds). But they have a variety of GPU cards and operating systems, and I doubt they were all overclocked so much that they failed right away (though that is a possibility that should be checked), and they wouldn't have had time to get too hot either. I noticed though that my GTX 650 Ti would sometimes fail after only a few seconds, which I haven't yet seen on my GTX 660s (except those bad work units that everyone failed on). That suggests to me that some work units just won't run on some types of cards. I know that on Folding it was found out a couple of years ago that some of the more complex work units would fail on cards with only 96 shaders, but would run fine with 192 shaders or more. I don't see that pattern here yet, but something else might become apparent. | |
ID: 29898 | Rating: 0 | rate: / | |
Jim1348: Not had any problems with Noelias on either my 650's or my 670. I am running XP SP3 with the beta 320 drivers, which have been completely stable for me. Actually I even noticed a small perf improvement over the 314's. Might be worth a try on one of your problematic machines? | |
ID: 29900 | Rating: 0 | rate: / | |
Jim1348: Not had any problems with Noelias on either my 650's or my 670. I am running XP SP3 with the beta 320 drivers, which have been completely stable for me. Actually I even noticed a small perf improvement over the 314's. Might be worth a try on one of your problematic machines? Not problematic; only an occasional failure at the outset on the GTX 650 Ti. But it was a factory-overclocked card, and I have now reduced the clock (and increased the core voltage) to the point where I don't think it gets even the occasional failure anymore. But many of the cards are factory-overclocked now. That is the same, insofar as errors are concerned, as if you had used software to overclock the card; it is the chip specs from Nvidia that determine the default clock rate. If the work units fail quickly, it is not much of a problem, and you will gain points overall with the faster clocks. The real problem comes when they fail after a couple of hours; then you should get out MSI Afterburner and start reducing the clocks, or check the cooling. You will be points ahead in the end. Also, the work units change difficulty; what starts out as a stable card can easily start failing later when (not if) the harder ones come along. So I just don't overclock, which saves a lot of troubleshooting later. | |
ID: 29901 | Rating: 0 | rate: / | |
I had another NOELIA sneak through on the non-beta long run server. I didn't get the error message this time; it ran for 59 minutes and remained at 0%. The CPU usage was at 0%, the GPU usage was at 0% and the memory controller was at 0%, so I aborted it and had to reboot my computer to get my GTX680 working again. Same problem here. A Noelia task has been running for 10 hours and is only 3% done! No CPU load, no GPU load. Linux Arch - GTX680 - Driver 319.17 | |
ID: 29902 | Rating: 0 | rate: / | |
Another Noelia failed after 8:45 hours. | |
ID: 29990 | Rating: 0 | rate: / | |
Mumak: reading all the negative comments, yet I have not had any problems with Noelias. I see you have two machines, one with a 650 Ti which appears stable, the other with a 660 Ti which is causing you to lose your hair. | |
ID: 30019 | Rating: 0 | rate: / | |
I had no issues with other tasks, but Noelias failed on the 650Ti in the past too. It's not all of them that fail; I currently got another one, so we'll see how that goes... | |
ID: 30023 | Rating: 0 | rate: / | |
I only want to ask how many of you have tried raising the GPU voltage by about 25 mV, as described in some forum threads. It is still needed on some cards on GPUGRID with some types of workunits. Perhaps it helps some of you. | |
ID: 30028 | Rating: 0 | rate: / | |
Just wanted to add that I too had a Noelia WU that ran for almost 7 hrs and was only 5% complete on GTX660Ti. I had to abort it => 291px6x2-NOELIA_klebe_run2-0-3-RND9489 http://www.gpugrid.net/workunit.php?wuid=4459890 | |
ID: 30052 | Rating: 0 | rate: / | |
I had another one which, if I had let it run, would have taken 100 hours to complete: 041px44x4-NOELIA_klebe_run2-1-3-RND4186 ran ~6 hrs for ~6%, so I aborted again. I haven't seen this happening on any of my other machines; they are all Win7, and this system is Linux, so maybe there's something wrong specific to the Linux platform? I'll have to investigate further when I get a chance, too many other things going on right now. http://www.gpugrid.net/workunit.php?wuid=4464500 | |
ID: 30063 | Rating: 0 | rate: / | |
This might just be an issue with these specific WU's, they don't run well on Linux. | |
ID: 30064 | Rating: 0 | rate: / | |
Has anyone had success in running the current Noelias under Linux? So far I've read a few posts saying it wouldn't work at all. | |
ID: 30067 | Rating: 0 | rate: / | |
People tend to complain when things aren't working, rather than when things are working. | |
ID: 30068 | Rating: 0 | rate: / | |
People tend to complain when things aren't working, rather than when things are working. Sad but true, almost over the entire planet. ____________ Greetings from TJ | |
ID: 30077 | Rating: 0 | rate: / | |
People tend to complain when things aren't working, rather than when things are working. Why would people complain when things are working? | |
ID: 30080 | Rating: 0 | rate: / | |
I have successfully finished over 25 new NOELIAs in Linux on my PC with two GTX 660Ti cards, but this morning I found one at 0% after six hours of processing. I stopped it and restarted, but there was still no progress; after a reboot and starting boincmanager the machine quickly became unusable, and I had to restart again and abort the unit. These messages were in the log: | |
ID: 30082 | Rating: 0 | rate: / | |
Trotador, I would suggest you abort it, if you haven't already. | |
ID: 30083 | Rating: 0 | rate: / | |
Why would people complain when things are working? That was worth getting up early to read. | |
ID: 30084 | Rating: 0 | rate: / | |
Beyond, I have no idea why 51% of people do anything they do - I don't even ask anymore. Like electing (sort of) gwb twice? I've had a couple of WUs seem to stall lately, and when I VNC to the machine there's an error message saying the acemd app has had an error. If I close that box the WU restarts from zero, but if I shut down BOINC, then hit the X on the box, and then either restart BOINC or reboot the PC, the WU progresses normally. It seems better to reboot, because restarting BOINC sometimes causes the WU to progress at about 1/2 speed; a reboot gets the GPU running normally again. BTW, the order of the steps above is important: 1) Shut down BOINC. 2) Hit the X on the error message. 3) Restart BOINC or (preferably) reboot. BTW, all these boxes are Win7-64. | |
ID: 30085 | Rating: 0 | rate: / | |
I'm sorry if you think that I was complaining, I was under the impression that maybe I'd get some help here. I've had 3 successful SDOERR tasks complete normally in the expected amount of time, and 3 NOELIA_klebe tasks that run painfully slow, most likely to end in error. I have a 4th NOELIA_klebe at 11% that's been running for 13hrs 15mins that I'm about to abort. I am in no way saying that they can't be successfully run on the Linux platform, just trying to find out what's going on so that I can get it corrected. | |
ID: 30118 | Rating: 0 | rate: / | |
Also, I did enable coolbits (GPU temps are around 41 degrees C) and set PowerMizer to prefer maximum performance. Also, I decided against aborting the current NOELIA_klebe task in hopes of using it for troubleshooting the problem. I've tried shutting down BOINC and rebooting, nothing's changed & still running slow. Hi Steve, the temp suggests to me that the WU has stopped. That happens now and then on windows too. See my post just above and see if that gets the WU moving again (with or without the error message). I'd try the reboot option as the GPU may have gone into an idle state (slow but still slightly processing). Shut down BOINC first and THEN reboot. Hope it works for you. | |
ID: 30123 | Rating: 0 | rate: / | |
The latest Noelias seem to take more time to finish than the earlier units. | |
ID: 30134 | Rating: 0 | rate: / | |
The latest Noelias seem to take more time to finish than the earlier units. On my GTX285 and GTX550Ti they take between 41-42 hours; the previous ones took around 30 hours. With no errors, though my systems crunch only a few. The ones from Stephen (SDOERR) take roughly 28 hours on my rigs, as yet without error as well. ____________ Greetings from TJ | |
ID: 30143 | Rating: 0 | rate: / | |
Bedrich, your runtimes for the current Noelias vary from 33 ks to 43 ks on the one host I looked at. That's a lot. I'd look at GPU utilization fluctuation, try to free some more CPU cores (if they're busy with other tasks), and see if GPU utilization stabilizes. I don't think this strong variation is inherent to the WUs. | |
ID: 30151 | Rating: 0 | rate: / | |
The latest Noelias seem to take more time to finish than the earlier units. I haven't noticed them getting longer lately, but they're definitely longer than is comfortable for my GPUs. In fact I move 4 of my cards to different projects when NOELIAS are the only WUs available. Not sure if the length is necessary or just an arbitrary choice; I strongly wish they were shorter though. My observations on NOELIA WUs on my GPUs: 1) They're the longest running WUs I've seen at GPUGrid. 2) They're the most troublesome WUs I've seen at GPUGrid. 3) They have the lowest credits/hour of any long WUs. Something does not compute (pun intended). | |
ID: 30156 | Rating: 0 | rate: / | |
My observations on NOELIA WUs on my GPUs: Too late to edit my above post. New NATHANs just hit and they're SLOWER than anything I've seen yet at least on my 4 GTX 460/768MB cards. Not sure how the credits/hour will play out but since they won't make 24 hours it won't be pretty (at least on the 460s). Haven't hit the 650 Ti GPUs yet. But really, I'll ask again: Is there a good reason that the WUs have to be this long or is it just an arbitrary setting? | |
ID: 30169 | Rating: 0 | rate: / | |
Basically, the amount of information included in a model determines the runtime. The more info you put in, the longer it will take, but the more accurate and meaningful the results can be. | |
ID: 30171 | Rating: 0 | rate: / | |
Bedrich, your runtimes for the current Noelias vary from 33 ks to 43 ks on the one host I looked at. That's a lot. I'd look at GPU utilization fluctuation, try to free some more CPU cores (if they're busy with other tasks), and see if GPU utilization stabilizes. I don't think this strong variation is inherent to the WUs. I have 1 CPU core dedicated to each GPU, and I haven't changed that. | |
ID: 30176 | Rating: 0 | rate: / | |
The way I understand it, the complexity of the WU determines the time for each time step (in the range of ms). The number of time steps should be rather arbitrary, chosen so that the server is not overloaded. | |
ID: 30223 | Rating: 0 | rate: / | |
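(Editorial back-of-the-envelope version of the relationship described in the two posts above. The per-step cost and step count below are invented for illustration, not real WU parameters.)

```python
# WU runtime = cost per MD time step (set by model complexity) times the
# number of steps (chosen by the scientist). All numbers are illustrative.
ms_per_step = 7.0        # assumed per-step cost on a mid-range GPU
steps = 5_000_000        # assumed step count for one WU

runtime_s = ms_per_step * steps / 1000.0
print(f"{runtime_s:,.0f} s = {runtime_s / 3600:.1f} h")  # 35,000 s = 9.7 h

# Doubling the model complexity doubles the per-step cost and hence the
# runtime; halving the step count halves the runtime but also halves how
# much trajectory each WU covers.
```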
This particular unit was a nasty one for me. | |
ID: 30232 | Rating: 0 | rate: / | |
My observations on NOELIA WUs on my GPUs: I've done 7 total of the new Nathan's & they're giving credit of 167,550. 6 of the tasks ran 47K seconds on GTX660Ti's, and 75K seconds on a GTX560Ti. | |
ID: 30234 | Rating: 0 | rate: / | |
Looks like, I pulled this one out of the fire: | |
ID: 30317 | Rating: 0 | rate: / | |
My 680 is crunching Noelia tasks in about 300,000 s. Other tasks (Sdoerr and Nathan) are OK. I have updated the driver to the latest, 319.23. | |
ID: 30366 | Rating: 0 | rate: / | |
My 680's are taking right around 29,000 s; that's at 1175 MHz, Windows XP x64. | |
ID: 30368 | Rating: 0 | rate: / | |
My 680's are taking right around 29,000 s; that's at 1175 MHz, Windows XP x64. I had times around 29,000 a week ago. Maybe something with the driver. | |
ID: 30369 | Rating: 0 | rate: / | |
What clock speed is your 680 running at? To be honest, I think you're in the pipe, 5x5 (just right). | |
ID: 30370 | Rating: 0 | rate: / | |
Zdenek, it might be a driver issue. Others have reported similar problems with 319.x on Linux. | |
ID: 30372 | Rating: 0 | rate: / | |
Zdenek, it might be a driver issue. Others have reported similar problems with 319.x on Linux. Yes, you are right. Moved back to 310 and all is OK. I have a problem with my own distrrtgen app also; IMHO it gets stuck on synchronizing between CPU and GPU in cudaDeviceSynchronize(). | |
ID: 30384 | Rating: 0 | rate: / | |
I think there have generally been issues with this since about CUDA 4.2 dev. | |
ID: 30387 | Rating: 0 | rate: / | |
I have found that 6xx and Titan have problems. 5xx looks ok. | |
ID: 30397 | Rating: 0 | rate: / | |
Two more NOELIA failures yesterday: one after 4 seconds, the other after 20,927 seconds. I will continue with 'short' tasks. | |
ID: 30408 | Rating: 0 | rate: / | |
Hi | |
ID: 31087 | Rating: 0 | rate: / | |
Just completed what looks like a brand new Noelia. I got one of these too. The first guy aborted it, and it took my OCed 650 Ti well over 24 hours (92,919.91 seconds) to run it in Win7-64 (vs. yours in XP). GPU utilization was OK, but these are TOO LONG, and to add insult to injury they only give out about 1/2 the credits they should. http://www.gpugrid.net/workunit.php?wuid=4527468 | |
ID: 31091 | Rating: 0 | rate: / | |
I had one, too - http://www.gpugrid.net/result.php?resultid=6992593 | |
ID: 31092 | Rating: 0 | rate: / | |
Hmm, does not sound promising - I don't suppose anyone has noticed what the GPU memory utilisation is? | |
ID: 31093 | Rating: 0 | rate: / | |
There's one thing I'd like to say though: Nathan sure did a bang-up job on those NATHAN_KIDKIXc22's; I'm getting 98% GPU load and 35-38% memory controller utilization on my GTX680's. This Bud's for you, Nathan! You should have named them KIDKIX_BUTTc22. You really should give a clinic for the fellow researchers (I'm sure they'll get it sorted, not complaining). | |
ID: 31095 | Rating: 0 | rate: / | |
Hmm, does not sound promising - I don't suppose anyone has noticed what the GPU memory utilisation is? Mine was around 1045 MB. | |
ID: 31097 | Rating: 0 | rate: / | |
Wow, does that mean these units don't run on 1 GB VRAM hardware? (didn't try) | |
ID: 31099 | Rating: 0 | rate: / | |
Thanks, at least we have a plausible explanation; not so sure ruling out the mainstream will be good for GPUGRID, though. Pity we can't isolate WUs, as I can think of an addition to Flashawk's naming convention - but let's not go there. | |
ID: 31100 | Rating: 0 | rate: / | |
Are you asking about GPU utilization or the size of the work units? I take it that these WUs are from the short queue, and if the work unit size is larger than the amount of GDDR memory on the video card, that would not only cause a massive slowdown in crunching times, it would also make your computer almost unresponsive (mouse, keyboard and such). | |
ID: 31101 | Rating: 0 | rate: / | |
That's what I understood too. All scientists are working on different projects/amino acids and use different algorithms. Thus WU's differ. The latest one from Nathan seems almost optimal as far we can see in error-free and rather fast cycles on the fastest cards. | |
ID: 31103 | Rating: 0 | rate: / | |
Flashhawk, I was referring to the "Memory Used" figure as reported by GPU-Z. | |
ID: 31104 | Rating: 0 | rate: / | |
Are you asking about GPU utilization or the size of the work units? I take it that these WUs are from the short queue, and if the work unit size is larger than the amount of GDDR memory on the video card, that would not only cause a massive slowdown in crunching times, it would also make your computer almost unresponsive (mouse, keyboard and such). They're long queue WUs, yet they credit like the short queue. If I see any more I'll make like a Dalek: EXTERMINATE, EXTERMINATE!!! BTW, like you mentioned: kudos to Nathan on the new KIX WUs. Nathan, give the other WU generators a class in WU design. Please? | |
ID: 31106 | Rating: 0 | rate: / | |
NOELIA_Mg WUs are long runs. Most of Noelia's work has used >1 GB GDDR and taken longer than other work. | |
ID: 31107 | Rating: 0 | rate: / | |
NOELIA_Mg WUs are long runs. Most of Noelia's work has used >1 GB GDDR and taken longer than other work. The ones listed above scored just 69,875, including only a 25% bonus though, since they're SO LONG :-( | |
ID: 31108 | Rating: 0 | rate: / | |
Flashhawk, I was referring to the "Memory Used" figure as reported by GPU-Z. No petebe, I wasn't confused by anything on your part; I was confused because more people aren't complaining about unresponsive computers. If someone is using an older card with only 1GB of onboard GDDR, then the system RAM or swap file would be used, slowing the computer to a crawl. No, you're fine buddy; sorry for the confusion, I should have been a little clearer. Frankly, I'm shocked I haven't seen more of this in the forum; that's a huge WU for not much credit. I guess I'll have to turn on the short queue and check them out; it is odd they aren't in the long queue. Edit: I understand now (I'm pretty slow sometimes), they're coming through the long queue, I haven't seen one yet. | |
ID: 31110 | Rating: 0 | rate: / | |
GPU-Z "only" reports the overall memory used, which includes the GPU-Grid WU and anything else running. If a card with 1024 MB shows 1045 MB used, that won't slow the computer to a crawl. Everything except a whopping 21 MB still fits into the GPU memory. How often can this amount be transferred back and forth between system RAM and GPU at PCIe speeds? (rough answer: a damn lot) | |
ID: 31114 | Rating: 0 | rate: / | |
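(Editorial note putting rough numbers on "a damn lot". The bandwidth figure is an assumed effective rate for a PCIe 2.0 x16 link, not a measurement.)

```python
# How quickly can the ~21 MB overflow move between system RAM and the GPU?
overflow_mb = 21.0
pcie_mb_per_s = 6000.0  # assumed effective host<->GPU bandwidth (PCIe 2.0 x16)

t = overflow_mb / pcie_mb_per_s
print(f"{t * 1000:.1f} ms per 21 MB transfer")              # ~3.5 ms
print(f"~{1 / t:,.0f} such transfers possible per second")  # ~286/s
```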
GPU-Z "only" reports the overall memory used, which includes the GPU-Grid WU and anything else running. If a card with 1024 MB shows 1045 MB used, that won't slow the computer to a crawl. Everything except a whopping 21 MB still fits into the GPU memory. How often can this amount be transferred back and forth between system RAM and GPU at PCIe speeds? (rough answer: a damn lot) It could also be other things. As some of you remember from another thread, I was having problems with my new GTX660 on an XFX MOBO, with a laggy system at times, 2GB RAM on the GPU and 12GB on the MOBO. Or the driver, or the driver in combination with another piece of software. I have put the GTX660 in another system and it works like a train (as we say in Dutch). But how about the question from dskagcommunity about VRAM? His exact question: Wow, does that mean these units don't run on 1 GB VRAM hardware? (didn't try) If it can be swapped back and forth between system RAM and the GPU, then the message that these WUs won't run on 1 GB VRAM cards would make no sense, or am I missing something important? I just want to learn here. ____________ Greetings from TJ | |
ID: 31117 | Rating: 0 | rate: / | |
This has been discussed before, and at some length: these WUs are only going to be slow on 768MB cards and 512MB cards; a few GTX460's, GTX450's and GT440's. Generally speaking, the relative performance of Noelia's WUs on mid-range cards should be better, as they won't be burdened with low bus widths/bandwidths. More the pity they don't have better credit and can't finish inside 24h... | |
ID: 31120 | Rating: 0 | rate: / | |
You're right TJ, these WUs should run on cards with 1 GB VRAM. However, I think the signs are clear: don't anyone buy or recommend a card with 1 GB for GPU-Grid any more. | |
ID: 31124 | Rating: 0 | rate: / | |
Ah, good to read. Hope the 1.28 GB on my 24h cruncher machines will keep working for a good while longer without swapping; I bought them only a few months ago ^^ | |
ID: 31125 | Rating: 0 | rate: / | |
This has been discussed before, and at some length: these WUs are only going to be slow on 768MB cards and 512MB cards; a few GTX460's, GTX450's and GT440's. Generally speaking, the relative performance of Noelia's WUs on mid-range cards should be better, as they won't be burdened with low bus widths/bandwidths. More the pity they don't have better credit and can't finish inside 24h... EXCEPT that no one has been talking about these WUs on sub-1GB cards. The first reports were referring to 650 Ti GPUs, and so far the reports have been that they're running even worse on the 2GB 660 and 660 Ti cards. BTW, you can add a 1280MB 570 to the list of GPUs that don't like these new NOELIAS... | |
ID: 31126 | Rating: 0 | rate: / | |
WELL..... new Noelias are filling the cache.... let's see how these ones go and hope for the best. | |
ID: 31355 | Rating: 0 | rate: / | |
WELL..... new Noelias are filling the cache.... let's see how these ones go and hope for the best. Well, my 650Ti doesn't seem to like them AT ALL! At least this one. Slot: 0 Task: 063ppx8x1-NOELIA_klebe_run4-0-3-RND9577_0 Elapsed: 04:29 CPU time: 00:17 Percent done: 03.76 Estimated: 119:17 Remaining: 114:36 So it will take something like 5 days to finish on my 650Ti! I wonder if there's a card out there that can finish these in the 24h window... ____________ | |
ID: 31361 | Rating: 0 | rate: / | |
In Linux I can't check GPU utilization, so can't tell how well this NOELIA is using my card. Judging by the temperature of the card though (52C), utilization must be pretty low, as it normally goes up to 64-67C with NATHANs. | |
ID: 31362 | Rating: 0 | rate: / | |
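(Editorial note for Linux crunchers in this situation: depending on the card and driver generation, nvidia-smi can report utilization, temperature and memory use from the command line. A minimal wrapper follows; the query flags are standard nvidia-smi options, but consumer GeForce cards on some drivers return "N/A" for utilization, so treat missing values as expected.)

```python
# Minimal Linux GPU monitor built on nvidia-smi's query mode. Some GeForce
# cards/drivers report "N/A" for utilization; that is a driver limit, not a bug.
import subprocess

def gpu_status() -> None:
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=index,utilization.gpu,temperature.gpu,memory.used",
         "--format=csv,noheader"],
        text=True,
    )
    for line in out.strip().splitlines():
        idx, util, temp, mem = (field.strip() for field in line.split(","))
        print(f"GPU {idx}: util={util}, temp={temp} C, mem used={mem}")

if __name__ == "__main__":
    gpu_status()
```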
Zdenek, it might be a driver issue. Others have reported similar problems with 319.x on Linux. All drivers above 319 (incl. the 325 beta) under Linux still have problems with Noelia tasks on 6xx GPUs: very slow, low CPU usage. I recommend using 310 under Linux. | |
ID: 31363 | Rating: 0 | rate: / | |
You're so right! I removed 319 and installed 310.44, and immediately CPU usage went up (to 40-45% from 15-20%) and the GPU temp is at the usual 65C! Also, the estimated total time is dropping rapidly. | |
ID: 31364 | Rating: 0 | rate: / | |
WELL..... new Noelias are filling the cache.... let's see how these ones go and hope for the best. They are OK here, finishing without errors in the usual 8:30/9:00 hrs on my 690s and 770s. Driver 320.49 | |
ID: 31365 | Rating: 0 | rate: / | |
Yeah, it seems it was the driver (319.17). I downgraded to 310.44 and it's working much better now. If my calculations are not off, it should take ~18.5h for my 650Ti to complete such a WU, which is very similar to NATHAN_KIDs. This particular NOELIA I'm crunching right now will of course take longer, as I did the first 6% very slowly. | |
ID: 31366 | Rating: 0 | rate: / | |
They are OK here, finishing without errors in the usual 8:30/9:00 hrs on my 690s and 770s. Driver 320.49 It is a Linux plus Noelia tasks problem only; Windows is OK. Nathan tasks on Linux are OK as well. | |
ID: 31369 | Rating: 0 | rate: / | |
This Noelia task http://www.gpugrid.net/result.php?resultid=7034770 appeared to lock up my Linux box so I aborted it (after two reboots). It had previously failed on two Windows boxes. | |
ID: 31370 | Rating: 0 | rate: / | |
On all 3 of my GTX 460/768 GPUs: | |
ID: 31371 | Rating: 0 | rate: / | |
They have not failed on my hosts so far. | |
ID: 31373 | Rating: 0 | rate: / | |
I thought you were going to abort the current Noelia runs...? | |
ID: 31376 | Rating: 0 | rate: / | |
I have 1 box that doesn't like these NOELIAs, so I'm going to swap in my 2 GTX670 backup cards and see if that works. I should just switch to Linux Debian now; I've been getting ready for some time. Microsoft is going to stop supporting Windows XP 32-bit in April and XP x64 in September 2014, even though XP is still running on 38% of the world's computers (Windows 7 is 44%). | |
ID: 31377 | Rating: 0 | rate: / | |
So far no issues with current Noelia tasks on Linux but this beta had run for 11+ hrs before I noticed it & required a reboot to get the 660 working again. | |
ID: 31378 | Rating: 0 | rate: / | |
WELL..... new Noelias are filling the cache.... let's see how these ones go and hope for the best. Reporting back on this. It turned out (with the help of HA-SOFT, s.r.o., thanks!) that NOELIAs have some trouble with driver 319 under Linux. I downgraded to 310.44 and the NOELIA I was currently crunching started progressing at a much faster rate. It finished in ~25h (previously estimated at 119h!) and, of course, I missed the 24h bonus, but only because I had lost ~7 hours with the newer driver. The new NOELIA_klebe_run I got has an estimated 18:09, which is about the same as the NATHANs on my GTX 650Ti. What's sweet with these NOELIAs is the CPU usage, about 40-45% of my i7 870. ____________ | |
ID: 31380 | Rating: 0 | rate: / | |
The 304.88 repository driver works just fine. In my opinion there are too many issues with the ~320 drivers for both Windows and Linux. | |
ID: 31381 | Rating: 0 | rate: / | |
It turned out (with the help of HA-SOFT, s.r.o., thanks!) that NOELIAs have some trouble with driver 319 under Linux. I downgraded to 310.44 and the NOELIA I was currently crunching started progressing at a much faster rate. Don't know what's going on with NV drivers lately. Had to switch 3 of my GPUs to other projects because of the NOELIAs and found that while 2 ran fine at SETI, the 3rd did not. Looked at them and sure enough the 3rd had a newer driver (all are Win7-64). Reverted to 310.90 and SETI ran like a charm. So it's not just Linux, it's Windows too with NVidia driver problems. | |
ID: 31384 | Rating: 0 | rate: / | |
That's why the admins on several projects often say the latest drivers are perhaps good for gaming but not always for crunching ;) I think the latest really stable crunch-proof drivers are the 310.xx. I'm very careful with driver updates, because there have been too many problems often enough. But that's not only an NVIDIA thing; you can always hit the ground hard with current ATI/AMD drivers like 13.x too ;) | |
ID: 31386 | Rating: 0 | rate: / | |
My system builder has threatened me with at least death if I ever update an NVIDIA driver. He carefully selects the driver as he builds the machine and leaves it in place.... | |
ID: 31387 | Rating: 0 | rate: / | |
John, | |
ID: 31388 | Rating: 0 | rate: / | |
Argh! These NOELIA_xMG_RUN WUs are taking too long on my 650Ti, around 44h!! I aborted two of them, hoping for a NOELIA_klebe or NATHAN, but nope, it was one of these beasts or nothing.. | |
ID: 31392 | Rating: 0 | rate: / | |
Just completed 35x5-NOELIA_7MG_RUN-0-2-RND3709 - only a minor increase in runtime (less than 10%) compared to other recent tasks for host 132158. | |
ID: 31393 | Rating: 0 | rate: / | |
I have 3 Noelias running; the estimate is 12-13 hours, and definitely one will finish in that time. All run on Windows (Vista and 7) with the 320.18 drivers. I had some Noelia SRs in the previous days and they all finished okay. | |
ID: 31394 | Rating: 0 | rate: / | |
Thanks for the responses guys! | |
ID: 31395 | Rating: 0 | rate: / | |
Thanks for the responses guys! Hello Vagelis, Yes indeed, the 12-13 hours is for the 660. I also have a 550Ti doing a Noelia, and that will take about 46 hours! Already 36.5 hours done. I do the estimates myself: if I see what % has been done in what time, I extrapolate that to 100%. So: 100 divided by the percentage done, times the time it took to do that percentage. You have to see if the driver works by letting it do a few WUs. I don't switch drivers too often. ____________ Greetings from TJ | |
ID: 31397 | Rating: 0 | rate: / | |
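(Editorial note: TJ's rule of thumb as a function. The sample figures are roughly the 550Ti numbers from his posts; this is just the extrapolation he describes, not anything BOINC itself computes.)

```python
# Extrapolate total runtime from the fraction completed so far,
# exactly as described above: 100 / percent_done * elapsed.
def estimate_total_hours(hours_elapsed: float, percent_done: float) -> float:
    return 100.0 / percent_done * hours_elapsed

elapsed, done = 36.5, 79.0   # approximate 550Ti figures from the posts
total = estimate_total_hours(elapsed, done)
print(f"estimated total: {total:.1f} h, remaining: {total - elapsed:.1f} h")
# estimated total: 46.2 h, remaining: 9.7 h
```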
I have been running GPUGRID for weeks without trouble; suddenly, a few days ago, all the work units I try to run stopped utilizing the GPU as before. Initially they are estimated to run for about 13 hours, but after the 13 hours they have only reached about 15% and the time to completion starts rising. At this point I abort them, if not before, as they don't seem to utilize more than a small part of the GPU. | |
ID: 31398 | Rating: 0 | rate: / | |
Argh! These NOELIA_xMG_RUN WUs are taking too long on my 650Ti, around 44h!! I aborted two of them, hoping for a NOELIA_klebe or NATHAN, but nope, it was one of these beasts or nothing.. A little under 19 hours on my 650 Ti (980 MHz), using Win7 64-bit. Two have completed successfully, and one is in progress. (The only crash was when I changed a cc_config file on a work in progress; I think it would have completed normally otherwise.) Did you leave a CPU core free to support the GPU? | |
ID: 31399 | Rating: 0 | rate: / | |
I have been running GPUGRID for weeks without trouble; suddenly, a few days ago, all the work units I try to run stopped utilizing the GPU as before. Initially they are estimated to run for about 13 hours, but after the 13 hours they have only reached about 15% and the time to completion starts rising. At this point I abort them, if not before, as they don't seem to utilize more than a small part of the GPU. Well, it's hard to say, I guess; they are all different Noelia WUs. You can also read in this thread that the driver you are using could be the issue. The klebe run seems to be in line with the long runs, but on my 550Ti it has already been working 36.6 hours for 79%. That 550Ti uses driver 320.18, but it could be an issue with this particular WU. I see regularly that a SR takes about 3 times as long on the 550Ti as on the 660, and a LR twice as long. Now the Noelia klebe is going to take 4 times as long. I'd like to see some klebe runs on my 660 first to decide what to do with the driver. ____________ Greetings from TJ | |
ID: 31400 | Rating: 0 | rate: / | |
The NOELIA_xMG_RUN WUs have a very large output file, approximately 147 MB. Are you using the previous application version again? The units are otherwise running fine, taking me between 10.5 and 11.5 hours to complete on my Windows 7 computer, so please don't cancel them like you did last time. | |
ID: 31401 | Rating: 0 | rate: / | |
The units are otherwise running fine, taking me between 10.5 and 11.5 hours to complete on my Windows 7 computer, so please don't cancel them like you did last time. I agree. They are running fine on both of my 660s and my 650 Ti. It is too early to see what the error rate is; it may be a little more than the Nathans, but not by much thus far. | |
ID: 31402 | Rating: 0 | rate: / | |
How long should I let them run, then? If they were using the GPU 100% I'd have let them run, but they don't. That's why I cancelled them, fearing they would take days to run or eventually error out. | |
ID: 31403 | Rating: 0 | rate: / | |
Hi, Jim: John, | |
ID: 31404 | Rating: 0 | rate: / | |
Hi, Jim: Thanks. I was using 320.49 with no problems on my 650 Ti, but thought I would go back to 310.90 as a test. But in general (unlike AMD drivers), the Nvidia ones all work the same for me. | |
ID: 31405 | Rating: 0 | rate: / | |
I don't see a difference in temperature, still around 66°C, just like Nathan's LRs. | |
ID: 31406 | Rating: 0 | rate: / | |
Argh! These NOELIA_xMG_RUN WUs are taking too long on my 650Ti, around 44h!! I aborted two of them, hoping for a NOELIA_klebe or NATHAN, but nope, it was one of these beasts or nothing.. Same here. I think Noelia has thrown us another curve without notice. Just as the NOELIA_klebe will not run on cards with less than 1GB, these NOELIA_xMG_RUN WUs look as if they run OK on 2GB cards but extremely slowly on 1GB. Some of the earlier NATHAN WUs had similar behavior on <1GB GPUs and ran at 1/2 speed. He fixed them, and the later NATHANs then ran fine on sub-1GB cards. I think maybe NOELIA has just knocked all 1GB GPUs off GPUGrid. Very sad indeed. | |
ID: 31407 | Rating: 0 | rate: / | |
Same here. I think Noelia has thrown us another curve without notice. Just as the NOELIA_klebe will not run on cards with less than 1GB, these NOELIA_xMG_RUN WUs look as if they run OK on 2GB cards but extremely slowly on 1GB. Some of the earlier NATHAN WUs had similar behavior on <1GB GPUs and ran at 1/2 speed. He fixed them, and the later NATHANs then ran fine on sub-1GB cards. I think maybe NOELIA has just knocked all 1GB GPUs off GPUGrid. Very sad indeed. My experiences above were only on the NOELIA_klebe, so I don't know what problems will occur on the NOELIA_xMG_RUN. But my 660s have 2GB, and my 650 Ti has 1GB, so I guess I will find out. Maybe they should have an opt-in for these larger sizes? I am sure there are plenty of cards around that can do them; it is just a question of getting the right work unit onto the right card. | |
ID: 31408 | Rating: 0 | rate: / | |
Same here. I think Noelia has thrown us another curve without notice. Just as the NOELIA_klebe will not run on cards with less than 1GB, these NOELIA_xMG_RUN WUs look as if they run OK on 2GB cards but extremely slow on 1GB. Some of the earlier NATHAN WUs had a similar behavior on < 1GB GPUs and ran at 1/2 speed. He fixed them and the later NATHANs then ran fine on sub 1GB cards. I think maybe NOELIA has just knocked all 1GB GPUs off GPUGrid. Very sad indeed. We've asked and asked, and it should be simple to do. Maybe they don't know how, or don't care? Who knows. | |
ID: 31409 | Rating: 0 | rate: / | |
These MG units are the first ones that run a bit differently from all the others before here ^^ The single 560Ti (448 cores) in the Pentium 4 system runs MG units a bit faster than one of the two 570 cards in a Core2Duo system. I would suggest it is the card in the x4 slot. But I never saw the 570 with a higher runtime than the 560 before. It seems this is the first time that a bit of PCIe bandwidth is needed. | |
ID: 31410 | Rating: 0 | rate: / | |
These MG units are the first ones that run a bit differently from all the others before here ^^ The single 560Ti (448 cores) in the Pentium 4 system runs MG units a bit faster than one of the two 570 cards in a Core2Duo system. I would suggest it is the card in the x4 slot. But I never saw the 570 with a higher runtime than the 560 before. It seems this is the first time that a bit of PCIe bandwidth is needed. It also looks like they run OK in 1279MB, so it seems they need more than 1024MB but somewhat less than 1279MB to run at an acceptable speed. Unfortunately that's too bad for most of us. | |
ID: 31411 | Rating: 0 | rate: / | |
I have a 7MG work unit running on 650 Ti on windows 7, GPU usage is 98% and it is 50% after 20 hours. Similar 7MG units are running on my 470 with about 85% done after 14 hours, and 660 Ti about 55% done after 5.5 hours, so I guess it is the GPU memory that is the problem. 650 Ti has 1GB, 470 has 1280MB, and 660 Ti has 2GB. | |
ID: 31412 | Rating: 0 | rate: / | |
It just took 8 hours 27 minutes for my 670 to finish a 7MG_RUN with a 151MB upload. I noticed that the NATHANs used over 95% of the CPU while the NOELIAs use less than 50% of the CPU. My 680s and 770 take about 7 hours 40 minutes. I have no choice but to use the 320.xx series drivers, otherwise the 770 won't work; 320.49 seems to be fine but the other 320s are buggy (it's all over the internet). | |
ID: 31413 | Rating: 0 | rate: / | |
Just for your information, my graphics card is a GTX660 with 2GB of memory, and I'm still unable to run these WUs well. Maybe it's different on Linux than on Windows? | |
ID: 31414 | Rating: 0 | rate: / | |
Kenneth.. sorry there was no clear response before: nVidia driver 319 has been shown by at least 2 others to cause the issue you're describing. Downgrading to 310 has fixed it in both cases, so give it a try. | |
ID: 31415 | Rating: 0 | rate: / | |
A Noelia 7MG is using 1329MB RAM on my GTX 480 (Win7 x64); another one (on a GTX 670, WinXP x64) started at 1188MB memory usage, and it's slowly rising. So these workunits won't fit in 1GB RAM. I had a stuck workunit that made no progress after 6 hours, so I aborted it; however, its page shows 0 sec runtime. The subsequent workunit was also stuck at 0% progress, but a system restart fixed this situation. | |
ID: 31418 | Rating: 0 | rate: / | |
Just for your information, my graphics card is a GTX660 with 2GB of memory, and I'm still unable to run these WUs well. Maybe it's different on Linux than on Windows? One of the problems with Linux is the lack of good monitoring and GPU clock adjusting software. In Windows, when one WU finished and another started, especially when going from a NATHAN to a NOELIA, my GPU clock would change. Sometimes it would boost too high and cause errors; I am able to create profiles in PrecisionX and reset everything with one click. I know there aren't very good apps for Linux (at least that I'm aware of) for doing this; one would certainly help. I wish someone would write a good one soon, because I'll be switching to Linux when NVidia stops making drivers for XP x64 and Microsoft stops supporting it next year. | |
ID: 31420 | Rating: 0 | rate: / | |
I think maybe NOELIA has just knocked all 1GB GPUs off GPUGrid. Very sad indeed. Since all 9 of my NVidia GPUs are 1GB or less, and I can't get anything but these #*&$% NOELIA_1MG WUs, I'm off the project till something changes here. Think I'll have a lot of company... Sad :-( | |
ID: 31422 | Rating: 0 | rate: / | |
This batch could well have a GDDR capacity issue for anything other than cards with 2GB GDDR (which suggests CUDA routine selection isn't working/doesn't exist in this run), and possibly a separate issue with Linux... I will plug in a rig tomorrow with a GTX650Ti and 304.88 to confirm this, but it's obviously going to take a while! | |
ID: 31423 | Rating: 0 | rate: / | |
Since all 9 of my NVidia GPUs are 1GB or less, and I can't get anything but these #*&$% NOELIA_1MG WUs, I'm off the project till something changes here. Think I'll have a lot of company,,, Sad :-( A NOELIA_1MG just crashed on my GTX 660 with 2 GB memory after 7 hours run time, so there is no guarantee that even more memory will fix it (Win7 64-bit, 314.22 drivers, supported by a virtual core of an i7-3770). The other NOELIAs that I have received have been fine, though that is not the entire set. There are some good ones and some not-so-good ones. | |
ID: 31424 | Rating: 0 | rate: / | |
This 1MG crashed on several machines too, including my super-stable one: http://www.gpugrid.net/workunit.php?wuid=4583900 | |
ID: 31425 | Rating: 0 | rate: / | |
I think maybe NOELIA has just knocked all 1GB GPUs off GPUGrid. Very sad indeed. I agree 100% and have already moved my 650Ti to Einstein. Alas, Einstein's credit is SO lame! ____________ | |
ID: 31426 | Rating: 0 | rate: / | |
This is just weird. | |
ID: 31428 | Rating: 0 | rate: / | |
I didn't want to complain earlier, but with more than ten WUs failed since yesterday (no different from the other days) I feel compelled to do it. The Noelias, new and not so new ones, are failing on all my rigs and cards (690s and 770s). | |
ID: 31430 | Rating: 0 | rate: / | |
On my GTX550Ti with 1GB RAM a Noelia (klebe run) took 46h22m to finish without error. On a WinVista x86 rig with 2 CPU cores doing Einstein@home and nVidia driver 320.49. | |
ID: 31432 | Rating: 0 | rate: / | |
This isn't funny --- ok, so it is funny, but only because nothing burned-up: | |
ID: 31434 | Rating: 0 | rate: / | |
This isn't funny --- ok, so it is funny, but only because nothing burned-up: Yeah. That happens when a WU fails on my machines too. But since it didn't automatically start a new WU, nothing burns. But it will lose all your Precision X presets and waste the processing done so far. Upsetting mode on. Edit: I will say it again: can we (ok, you, the project guys) change the WUs? They aren't good and are upsetting users. | |
ID: 31435 | Rating: 0 | rate: / | |
I can confirm the following: | |
ID: 31436 | Rating: 0 | rate: / | |
HELP! Nathan, where are you? | |
ID: 31446 | Rating: 0 | rate: / | |
These newer NOELIA klebe tasks seem to be taking longer and longer to finish. The old NOELIAs were 9-10 hours. Then it went to 12-13 hours. This latest one is going to be in the 15-16 hour range, using 750MB of memory on an MSI 660Ti PE. | |
ID: 31447 | Rating: 0 | rate: / | |
HELP! Nathan, where are you? He's hiding in the Short queue :) These newer NOELIA klebe tasks seem to be taking longer and longer to finish. The old NOELIAs were 9-10hours. Then it went to 12-13 hours. This latest one is going to be in the 15-16 hour range using 750MB of memory on an MSI660TI PE. Don't take it personally, the present Looooong NOELIA WU's don't like anyone. ____________ FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help | |
ID: 31451 | Rating: 0 | rate: / | |
HELP! Nathan, where are you? He's on vacation. I see that downclocking my cards a little has helped reduce my error rate. There's only a finite amount of WUs here; we've got to bite the bullet and chug through the weekend. I think Nathan will be back on Monday. He should be able to sort things out. | |
ID: 31452 | Rating: 0 | rate: / | |
HELP! Nathan, where are you? If I have read previous posts from Nathan correctly, every scientist does her or his own WUs with different functionality, thus Nathan would not interfere (at least not much). I still have my clocks high, and the Noelias that do not error within the first minutes will finish, but take (a lot) more time. I call them ELRs, exceptional long runs. Even on fast cards (770/690) they took long. I don't mind running them. ____________ Greetings from TJ | |
ID: 31455 | Rating: 0 | rate: / | |
Nathan helps Noelia all the time; I guess you weren't following about a year ago. On my 680s they take 7.5 to 8 hours, which is not long to me. I didn't know you had a GTX770 or 690; how long have you had those? My 770 is identical to my 680s time-wise; memory speed doesn't seem to make that much difference. | |
ID: 31456 | Rating: 0 | rate: / | |
Guys, not all of us have high-end to super-high-end cards crunching! My 650Ti was able to chew all long WUs within 24h up until these latest NOELIAs (xMG_RUN) appeared, which take me more than 40h!! Not only is the credit low, the risk of losing too much work becomes greater! | |
ID: 31458 | Rating: 0 | rate: / | |
Nathan helps Noelia all the time; I guess you weren't following about a year ago. On my 680s they take 7.5 to 8 hours, which is not long to me. I didn't know you had a GTX770 or 690; how long have you had those? My 770 is identical to my 680s time-wise; memory speed doesn't seem to make that much difference. I haven't got them yet, but a 770 is on the way. One can, however, look at rigs that have them and compare those with Nathan's; that's what I did, and I saw they took longer. ____________ Greetings from TJ | |
ID: 31460 | Rating: 0 | rate: / | |
I must admit that I think Vagelis has a point. Having been around this project for a while, I see that only the best (and thus most expensive) hardware can cope with the WUs lately. My GTX285 (a former workhorse) can no longer be used, as it would need 2.5-3 days to finish. The 550Ti is taking almost two days, so it is waiting for retirement as well. | |
ID: 31461 | Rating: 0 | rate: / | |
It hasn't just been Noelia's WUs, | |
ID: 31462 | Rating: 0 | rate: / | |
The best "bang for buck" cards were the mid-range cards, at least until the arrival of the GTX770, when the prices started to fall across the range. The GTX 670 might now be the best "bang for buck" card, or not far off (but it depends on the price and they change regularly even in the same country). | |
ID: 31464 | Rating: 0 | rate: / | |
My experience with the last Noelias on my Linux Ubuntu 12.04 system, with Nvidia driver 304.43 and two EVGA SC 660Ti cards, is satisfactory. | |
ID: 31467 | Rating: 0 | rate: / | |
I just have one crunching system with a single mid-range GPU, I'm no mega-cruncher like some of you guys. I may as well work only on the short queue, until these NOELIAs disappear. | |
ID: 31468 | Rating: 0 | rate: / | |
You think I don't mind when a Noelia crashes and takes out a CPDN model that had over 300 hours crunching? Interesting. My CPDN work is done on a different PC than the ones that do GPUGrid, and it looks like I will be keeping it that way. But I haven't really noticed that a Noelia crash takes out anything else (yet) on Win7 64-bit. | |
ID: 31469 | Rating: 0 | rate: / | |
IMHO, the researchers must take into account not only their research goals, but also the average (not the high-end) cruncher's crunching power. They do. That's why there are two queues here at GPUGrid. | |
ID: 31470 | Rating: 0 | rate: / | |
IMHO, the researchers must take into account not only their research goals, but also the average (not the high-end) cruncher's crunching power. I don't want work units to crash, but what I really want is for my cards to be used efficiently. Some projects work so hard to be backward-compatible with older cards that you don't get the full value of your investment in a new card. At that point, I start looking for other projects. | |
ID: 31472 | Rating: 0 | rate: / | |
You think I don't mind when a Noelia crashes and takes out a CPDN model that had over 300 hours crunching? I lost 4 models one day. It was the dreaded "ACEMD.2865P.exe*32 encountered an error and needs to close"; the CPDN models were at 328 hours, 256 hours, 198 hours and 73 hours (I wrote them down). It's only happened twice; the other time it only got 1 model. | |
ID: 31473 | Rating: 0 | rate: / | |
Unfortunately I can empathize with you all too well. | |
ID: 31474 | Rating: 0 | rate: / | |
I may have identified the source of some problems with the present Noelia WU's. When I checked the Memory Controller Load it was 1% for a GTX 660Ti. The last time I looked it was around 40%. The GPU load was 98% and clocks were normal (high). | |
ID: 31475 | Rating: 0 | rate: / | |
I may have identified the source of some problems with the present Noelia WU's. When I checked the Memory Controller Load it was 1% for a GTX 660Ti. The last time I looked it was around 40%. The GPU load was 98% and clocks were normal (high). I saw that a couple of days ago with one of my cards, I think a 660. I exited BOINC as normal, and when it restarted, the Noelia errored out. But that means the work unit could hang that way for a long time unless you manually intervene; not a fun thought. | |
ID: 31476 | Rating: 0 | rate: / | |
I may have identified the source of some problems with the present Noelia WU's. When I checked the Memory Controller Load it was 1% for a GTX 660Ti. The last time I looked it was around 40%. The GPU load was 98% and clocks were normal (high). I have that too on my quad with the 660 still in it. I made some alterations with Precision X, and did a reboot, but the MCU stays at 1% and the GPU power sits around 62%. It has done 34% in 17 hours. The other 660 in the T7400 does great. How did you fix this 1% MCU load problem, skgiven? ____________ Greetings from TJ | |
ID: 31477 | Rating: 0 | rate: / | |
Hello: It seems that I have a problem on Linux / Ubuntu with the GTX 770 and Noelia tasks; performance is pitiful, with low output from the GPU and no CPU usage. | |
ID: 31478 | Rating: 0 | rate: / | |
I've restarted BOINC and the system, and suspended and resumed tasks to make them swap GPUs, and now both Noelia WUs are showing 0 or 1% Memory Controller Load. The worrying thing is that one WU is at 52% after 24h (mostly on a GTX660Ti) and the other is at 39% after 5h40min, but it will no doubt take days since the memory controller load is banjaxed. | |
ID: 31479 | Rating: 0 | rate: / | |
Could it be hardware/software related? Your 660Ti isn't worse than my 660. Both 660s are exactly the same, both EVGA, not OC'd. The one in the T7400 with PCIe 2.0 is doing well with 93% GPU load, 65°C, 35% MCU load and 96% GPU power. It does a Noelia in about 14 hours. | |
ID: 31481 | Rating: 0 | rate: / | |
I'm going to dispose of the 314.22 drivers and try 306.97, but since I have not experienced the memory controller issues with other WU's I would say it's task related. I'm also seeing wonky driver restarts, but I've seen that before with Noelia WU's. What kind of OS is running on this host? | |
ID: 31483 | Rating: 0 | rate: / | |
W7x64, but went to 310.90. | |
ID: 31485 | Rating: 0 | rate: / | |
I have noticed recently that when exiting BOINC (7.0.64 x64) I have been getting crashes of the Nvidia drivers. But I have just upgraded to BOINC 7.2.4, and don't see this. Whether that has anything to do with the present Noelia problems is another matter, but it is worth watching. | |
ID: 31486 | Rating: 0 | rate: / | |
W7x64, but went to 310.90. I have the feeling that Win7x64 is more prone to cause workunit errors (especially Noelia's) than WinXPx64. | |
ID: 31487 | Rating: 0 | rate: / | |
I still think this might be a WU issue, but I've suspected for some time that hidden WDDM bugs could occasionally cause issues. | |
ID: 31489 | Rating: 0 | rate: / | |
I have noticed recently that when exiting BOINC (7.0.64 x64) I have been getting crashes of the Nvidia drivers. But I have just upgraded to BOINC 7.2.4, and don't see this. Whether that has anything to do with the present Noelia problems is another matter, but it is worth watching. I think this is a bug with NOELIA's long WUs. I switched over to short WUs only, to avoid this NVIDIA driver crash every time I suspend or exit BOINC, which sometimes crashes my whole system and forces me to hard reboot. | |
ID: 31491 | Rating: 0 | rate: / | |
I've only had two or three that errored out. All were almost immediate, with one unit erroring for everyone. And the one that I just crashed on went to SAM, and he finished it. | |
ID: 31493 | Rating: 0 | rate: / | |
These latest Noelia WU's use the older v6.18 Application. Previous Noelia WU's used v6.49. So that may explain behavior differences. Previous Noelia WU's did not use a full CPU core/thread (the only type of work that doesn't). | |
ID: 31495 | Rating: 0 | rate: / | |
If I understand correctly, the 1% MCU load is a result of the WU and we cannot do anything about it? | |
ID: 31496 | Rating: 0 | rate: / | |
It will be interesting to see how that turns out. | |
ID: 31497 | Rating: 0 | rate: / | |
Since you mentioned it, I have kept a close eye on it. I watched at least two WUs from Noelia from the start, and the MCU was at 1% from the beginning onwards. | |
ID: 31501 | Rating: 0 | rate: / | |
Zoltan wrote: I have the feeling that Win7x64 is more prone to cause workunit errors (especially Noelia's) than WinXPx64. This could well be. XP uses the old driver architecture versus WDDM on Vista/7/8, so they're actually on different branches now. Generally they should be similar, but especially corner cases like bugs being triggered would be expected to differ between them. Carlesa25 wrote: It seems that I have a problem on Linux / Ubuntu with the GTX 770 and Noelia tasks; performance is pitiful, with low output from the GPU and no CPU usage. Well, it's obviously a driver issue, since it works with the older versions. I can't see anything BOINC or GPU-Grid could do about this other than to inform nVidia and hope they'll fix it at some point. If the most recent beta drivers are still not working, chances are that nVidia doesn't yet know about this problem. As a workaround you could switch the GTX770 to a Windows box, if you've got one. And.. the issue applies to other WUs as well, doesn't it? Otherwise you could go for the short queue. @1% MCU load: so far the only reports of this happening have been from SK and TJ. Are you guys just watching more closely than others.. or is the error only happening on your systems? In the latter case it could be the disabled driver watchdog (did you apply this registry change as well, TJ?). If something goes wrong in the GPU and normally the watchdog would reset the driver & GPU (with task failure or not, whatever)... and you disable the watchdog, then your GPU may just continue to do something in this strange state. MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 31505 | Rating: 0 | rate: / | |
@1% MCU load: so far the only reports of this happening have been from SK and TJ. Are you guys just watching more closely than others.. or is the error only happening on your systems? In the latter case it could be the disabled driver watchdog (did you apply this registry change as well, TJ?). If something goes wrong in the GPU and normally the watchdog would reset the driver & GPU (with task failure or not, whatever)... and you disable the watchdog, then your GPU may just continue to do something in this strange state. No, I did not change this in the registry. I looked for it but didn't find it, so to not mess things up I left it. Yes, I am looking closely at these WUs at the moment, and I guess skgiven does too. skgiven said: To be fair I've had 13 Noelia WU's finish and only 2 fail (both within a few minutes, which is a lot better than after 10h). That said I did edit the registry to try to prevent failures. Perhaps skgiven can give a hint about what needs to be changed. I suppose it is in Software, under the nVidia driver or the card manufacturer's entry? ____________ Greetings from TJ | |
ID: 31507 | Rating: 0 | rate: / | |
skgiven, do you mean the registry change that stops Windows error messages popping up and blocking the GPU/BOINC slot from working on? I added those on my systems too. Or do you mean another regedit? | |
ID: 31511 | Rating: 0 | rate: / | |
It's disabling the driver watchdog completely. It's supposed to stop errors from happening.. but if there's a real error you'll have to go for the hard reboot. | |
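For reference, the thread never spells out the exact key, but the watchdog being discussed is normally Windows' TDR (Timeout Detection and Recovery) setting. So, as an assumption rather than a confirmed quote of skgiven's edit, the change would look roughly like this minimal Python sketch (run as Administrator and reboot afterwards):

    # Hedged sketch: assumes the edit being discussed is the standard WDDM
    # TDR registry value. TdrLevel = 0 turns timeout detection off entirely,
    # so a genuinely hung GPU will then require a hard reboot, as noted above.
    import winreg

    key = winreg.CreateKeyEx(
        winreg.HKEY_LOCAL_MACHINE,
        r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers",
        0,
        winreg.KEY_SET_VALUE,
    )
    winreg.SetValueEx(key, "TdrLevel", 0, winreg.REG_DWORD, 0)  # 0 = watchdog off
    winreg.CloseKey(key)
| |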
ID: 31513 | Rating: 0 | rate: / | |
I suggest a different workaround: As the GTX 770 is basically a GTX 680 with higher clocks, the GTX 770 should work with the previous drivers, if you include the appropriate line in the nv4_dispi.inf file (this method is for Windows only, so a Linux guru should tell us how to do it under Linux).
You should look for:
    %NVIDIA_DEV.1180% = Section021, PCI\VEN_10DE&DEV_1180
You should copy the whole line below the original (the section number may be different), and then change both 1180 to 1184:
    %NVIDIA_DEV.1184% = Section021, PCI\VEN_10DE&DEV_1184
and then you should look for:
    NVIDIA_DEV.1180 = "NVIDIA GeForce GTX 680"
copy the whole line below the original and then change 1180 to 1184, and 680 to 770:
    NVIDIA_DEV.1184 = "NVIDIA GeForce GTX 770"
After these modifications the old driver should recognize the new card. | |
ID: 31514 | Rating: 0 | rate: / | |
so far the only reports of this happening have been from SK and TJ. Are you guys just watching more closely than others.. or is the error only happening on your systems? Yesterday, I caught one that had been running for 10.5 hours and was at 0.00%, GPU at 99%, memory controller at 0%; I aborted it, and it was a first run (ended in 0). | |
ID: 31515 | Rating: 0 | rate: / | |
Hello: What I do not understand is that short tasks work perfectly on my GTX770 (Ubuntu 13.04) while the long NOELIAs fail; so far I have not tried a different kind of long task. | |
ID: 31516 | Rating: 0 | rate: / | |
Einstein tasks are entirely different, and use an older version of CUDA. | |
ID: 31517 | Rating: 0 | rate: / | |
Einstein tasks are entirely different, and use an older version of CUDA. That's true, but Einstein, Albert and Milkyway are nice projects to test your setup and drivers. Most WUs run fast, so you can see what happens. I agree with Carlesa25 and others that this set of Noelias shows strange behavior. I use drivers 320.49 and 320.18 for the Noelias and most finish, but take longer, or error almost immediately. The error rate is higher. I don't think a driver change will resolve all problems, and Linux is more efficient than Windows when crunching. ____________ Greetings from TJ | |
ID: 31518 | Rating: 0 | rate: / | |
Such a pity about the NOELIA tasks: wasting energy and computer resources..... | |
ID: 31519 | Rating: 0 | rate: / | |
Hi. Noelia tasks in Windows 8 64-bit are working well so far on the GTX770: GPU load 86%, plus 22% CPU. | |
ID: 31520 | Rating: 0 | rate: / | |
Three people reported the 1% MCU load, and I changed drivers just in case they were the issue. You have to be using GPU-Z to see the MCU load; it's not listed in Precision/Afterburner. I suggest anyone seeing a task run for ages check this in GPU-Z. | |
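For crunchers without GPU-Z, Linux users in particular, since the lack of monitoring tools there came up earlier, the same two counters can be read through NVidia's NVML library. Here is a minimal watcher sketch, assuming the nvidia-ml-py Python bindings are installed; the 90%/1% thresholds are just illustrative, not project guidance:

    # Polls the first GPU once a minute and flags the "GPU busy but memory
    # controller idle" pattern reported in this thread.
    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
    try:
        while True:
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)
            # util.gpu    ~ "GPU Load" in GPU-Z
            # util.memory ~ "Memory Controller Load" in GPU-Z
            if util.gpu > 90 and util.memory <= 1:
                print("GPU busy but MCU idle - the task may be stuck")
            time.sleep(60)
    finally:
        pynvml.nvmlShutdown()
| |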
ID: 31523 | Rating: 0 | rate: / | |
Hello: I'm using EVGA Precision and it gives the same readings on all parameters as GPU-Z. Driver 320.49. | |
ID: 31524 | Rating: 0 | rate: / | |
I don't think EVGA Precision gives a Memory Controller load reading, so you have to use GPU-Z to get that. | |
ID: 31525 | Rating: 0 | rate: / | |
I don't think EVGA Precision gives a Memory Controller load reading, so you have to use GPU-Z to get that. SK is right; you can't read the memory controller load in PrecisionX, you need GPU-Z or GPU Shark to see the controller. | |
ID: 31528 | Rating: 0 | rate: / | |
I don't think EVGA Precision gives a Memory Controller load reading, so you have to use GPU-Z to get that. Hello: I have installed and use GPU-Z 0.72 and EVGA PrecisionX 4.2.0.2143, and I see the same sensor readings in both, so I do not really understand what you are saying. | |
ID: 31529 | Rating: 0 | rate: / | |
GPUZ | |
ID: 31530 | Rating: 0 | rate: / | |
That's correct, Precision X does not show MCU load or GPU power, but EVGA NV-Z does. | |
ID: 31535 | Rating: 0 | rate: / | |
Hi. The Noelia task in Windows 8 completed smoothly. | |
ID: 31537 | Rating: 0 | rate: / | |
I was going to keep my GTX 650 Ti with 1GB memory on Longs even with the occasional crash, but it has occurred to me that this might give the wrong feedback to GPUGrid. That is, if they get 5 errors (or whatever their limit is), they might conclude that it is a bad work unit and discard it, when in fact it is merely due to 5 cards with not enough memory. Therefore, I am going to Shorts, but also enabling Beta testing, so they can try out their stuff (if they choose to) before releasing it. (I make sure my card is running at the default Nvidia chip speed rather than the card factory overclock, since a test of how unstable the card is does not do them any good in evaluating their work units). | |
ID: 31539 | Rating: 0 | rate: / | |
Ah, Carlesa. You leave K-Boost off, I see. That's why it shows your FB %. Then yes, they do show the same data. | |
ID: 31540 | Rating: 0 | rate: / | |
I've tried the latest (320.49) driver on my least reliable host (it has WinXP x64), and it became completely unreliable :) Every (Noelia) task is stuck at 0% with 0% GPU usage using the 320.49 driver (btw it's CUDA 5.5). | |
ID: 31543 | Rating: 0 | rate: / | |
I've tried the latest (320.49) driver on my least reliable host (it has WinXP x64), and it became completely unreliable :) Every (Noelia) task is stuck at 0% with 0% GPU usage using the 320.49 driver (btw it's CUDA 5.5). Hello: In Windows 8 64-bit I am using the 320.49 without problems, with both short tasks and Noelias. | |
ID: 31544 | Rating: 0 | rate: / | |
One long WU is probably not enough to know it's running without problems, but you never can tell. | |
ID: 31545 | Rating: 0 | rate: / | |
I just got one of the famous xMG WUs on my EVGA GTX 650 Ti with 2GB: | |
ID: 31546 | Rating: 0 | rate: / | |
What I do not understand is that short tasks work perfectly on my GTX770 (Ubuntu 13.04) while the long NOELIAs fail; so far I have not tried a different kind of long task. The NOELIA tasks make the error in the driver appear, but they are not causing it. Noelia is testing new functionality; that's why the error doesn't appear with short queue tasks or other long-runs. Again, the exact problem you're describing has been seen by at least 2 others and has been solved by downgrading the driver. That's the best we can offer, including the .inf mod proposed by Zoltan. Or keep it running under Win, of course. John C MacAlister wrote: Such a pity about the NOELIA tasks: wasting energy and computer resources..... Noelia is trying to make new functionality work, and GDF said they definitely need them. So while the execution of these tests may be lacking, the results are not worthless. I wonder if they'd be better off in the beta queue, though. MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 31547 | Rating: 0 | rate: / | |
The NOELIA tasks make the error in the driver appear, but they are not causing it. Noelia is testing new functionality; that's why the error doesn't appear with short queue tasks or other long-runs. Hello: Thanks for the comment. I'll wait for the final release of the Nvidia 325.08 driver for Linux (currently in beta, so I do not think that will take long) and see if it solves the problem; if not, I can always move to Windows without a problem. | |
ID: 31548 | Rating: 0 | rate: / | |
Many thanks: | |
ID: 31550 | Rating: 0 | rate: / | |
I just got one of the famous xMG WUs on my EVGA GTX 650 Ti with 2GB: I, too, just got my first xMG (44x5-NOELIA_7MG_RUN-0-2-RND1940) for my GTX 570. Holding my breath... ____________ | |
ID: 31551 | Rating: 0 | rate: / | |
I see Nathan WU's back in the Long queue. | |
ID: 31553 | Rating: 0 | rate: / | |
I've tried the latest (320.49) driver on my least reliable host (it has WinXP x64), and it became completely unreliable :) Every (Noelia) task is stuck at 0% with 0% GPU usage using the 320.49 driver (btw it's CUDA 5.5). I have mentioned it before: my quad core with Vista x86 is using driver 320.49 with a GTX550Ti and has relatively the fewest errors of my rigs. It does the Noelias, but very slowly; that's the card, though. It seems a bit of "bad luck". I have checked a lot of my WUs and the wingmen (in case of error), and I saw a lot of "error while downloading", thus even before the Noelia WUs. ____________ Greetings from TJ | |
ID: 31560 | Rating: 0 | rate: / | |
I, too, just got my first xMG (44x5-NOELIA_7MG_RUN-0-2-RND1940) for my GTX 570. Holding my breath... Okay, so that wasn't so bad:
Run time: 65,793.69 s
CPU time: 3,755.88 s
Validate state: Valid
Credit: 150,000.00
Per GPU-Z, Avg GPU Load = 85%, Max GPU Memory = 1050MB ____________ | |
ID: 31562 | Rating: 0 | rate: / | |
No, the Noelias weren't too bad. I have Nathans again, and I see a GPU load between 82-88% and an MCU load of 28-30%, with GPU time and CPU time almost the same. | |
ID: 31563 | Rating: 0 | rate: / | |
The Noelia _7MG_ on the GTX660 with 1% MCU load finished without error! | |
ID: 31565 | Rating: 0 | rate: / | |
Sadly, I wish I could agree - the NOELIA_klebes have been running ok (just about); however, the *MGs have been a nightmare. I noticed GPUGrid's total performance has dropped by some 15% in the last 3 months - so I guess the jury is out on that. | |
ID: 31566 | Rating: 0 | rate: / | |
And another one bites the dust... 4x2-NOELIA_1MG_RUN1-0-2-RND8035_3. I guess I shouldn't pile on. | |
ID: 31568 | Rating: 0 | rate: / | |
I just got one of the famous xMG WUs on my EVGA GTX 650 Ti with 2GB: The above WU finished successfully! I checked memory load several times and it was always about the same: 1500 MB. GPU time: 73,064.51 s, CPU time: 33,299.85 s. The low credit is due to my low upload speed - missed the 24h deadline again… So I am pretty satisfied with my GTX 650 Ti with 2 GB memory. The only sad thing is that I just purchased one of the two cards on Amazon "used – like new" at a price well below the 1 GB versions (new and used). Missed an opportunity… | |
ID: 31571 | Rating: 0 | rate: / | |
I'm currently crunching one of these dreaded NOELIAs on my 1GB 650Ti: Slot: 0 It's consuming 622MB on the card, CPU usage is ~45%, so it appears to be progressing normally - sorry, no GPU load info, I'm on Linux. Let's see how it goes! ____________ | |
ID: 31574 | Rating: 0 | rate: / | |
Another 'wonderful' unannounced WU: NOELIA_2HRUN - 25% in 7.5hrs (30hrs estimated) on a 1gb 650ti. | |
ID: 31575 | Rating: 0 | rate: / | |
re: NOELIA_2HRUN - at least it appears to only be using 714MB GPU mem (of 1024MB); memory controller usage is low at 21% (normally 41% for non-NOELIA); GPU usage 99%, CPU 30%. | |
ID: 31577 | Rating: 0 | rate: / | |
It is a good thing that I retired my 1GB GTX 650 Ti, since my 2GB GTX 660s are now getting only the larger work units. | |
ID: 31578 | Rating: 0 | rate: / | |
Maybe a memory allocation issue has been fixed. I noticed in the task properties that the NOELIA estimated app speed is 96.06 GFLOPs/sec (650Ti) whereas a SANTI_bax is 516 GFLOPs/sec (670 @ 72% power). Normally my 650s are about 70% of the performance of my 670 - wondering if double precision is being more heavily used? Then again, it is probably more related to cache size (256k v 512k). | |
ID: 31579 | Rating: 0 | rate: / | |
MW is the only project that needs DP. | |
ID: 31583 | Rating: 0 | rate: / | |
If you have a look at the Acellera website, which I understand is the commercial spin-off of GPUGrid, they state otherwise: | |
ID: 31584 | Rating: 0 | rate: / | |
If you have a look at the acellera website which I understand was the commercial spin off of GpuGrid they state otherwise: That makes sense. I have noticed that of the three Noelia failures on my GTX 660s that have been successfully completed by others, the ones that completed successfully were all on higher-level cards (GTX 670, GTX 680 and a GTX 690). Those very likely have higher-performance floating point units than the GTX 660. I think the light bulb is beginning to go on. | |
ID: 31585 | Rating: 0 | rate: / | |
Huh, ok, thanks; that must be very new info. It's not long ago that there was no use of DP here O.o | |
ID: 31586 | Rating: 0 | rate: / | |
Just taking a wild guess: the DP work, if any, could be sent to the CPU. | |
ID: 31588 | Rating: 0 | rate: / | |
If DP is a bottleneck for Noelias then I would have expected the Fermi-based cards to equal or outperform their Kepler equivalents. I have noticed a 650Ti is typically equal in performance to a 560Ti with non-Noelias, but I'm not sure about Noelias. | |
ID: 31592 | Rating: 0 | rate: / | |
Still, I am not convinced DP is at fault here, as we know the Noelias are working on very large molecules - logically increasing the likelihood of execution stalls, and hence why I suspect cache size may be so important. We users need a "super-size" category to know what to put on the project. Otherwise, the stalls will cause bonus points (and more importantly the work itself) to go out the window. | |
ID: 31594 | Rating: 0 | rate: / | |
As far as I know, GPUGrid has not and does not use fp64 (double precision). What the ACEMD application can do is a different question. Research groups that need CUDA and double precision could use ACEMD along with GK110 Titans or 780s (at least in theory), as they have superior double precision compared to the GK10x cards. | |
ID: 31595 | Rating: 0 | rate: / | |
Hello: Over an hour ago I started a new long NATHAN task in Ubuntu 13.04, and it is working perfectly on the GTX770, with 89% CPU usage. | |
ID: 31596 | Rating: 0 | rate: / | |
As repeatedly said, it is clear that the NOELIAs (beyond the Nvidia driver etc ...) have a problem that requires adequate analysis if the project is interested in maintaining stable work; otherwise we lose time and interest in the project. Quite so. It is not clear whether this is a random problem with this batch of work, or whether a lesson has been learned that will prevent it from happening again. In fact, it is not even clear whether GPUGrid considers it to be a problem at all, or merely an acceptable cost of doing business. But for those of us who do not want to baby-sit our rigs, it would be of interest to know the answers. | |
ID: 31597 | Rating: 0 | rate: / | |
SK wrote: As far as I know, GPUGrid has not and does not use fp64 (double precision). What the ACEMD application can do is a different question. I agree. And as long as GT240 and similar cards can still run the code (although too slowly nowadays) we can be sure that not a single DP instruction is needed from the hardware (those chips don't have any such units). Jim1348 wrote: Those (GTX 670, GTX 680 and a GTX 690) very likely have higher-performance floating point units than the GTX 660. No, they don't. They're using the exact same SMXs as building blocks, down to the smallest Kepler. What differs is just the number and clock speed of these units. The exception is GK110 (Titan and GTX780), which did indeed get more DP units (but again of the same type). MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 31598 | Rating: 0 | rate: / | |
And as long as GT240 and similar cards can still run the code (although too slowly nowadays) we can be sure that not a single DP instruction is needed from the hardware (those chips don't have any such units). We haven't checked the latest batch; the Keplers can't run them, so there is no guarantee about the GT 240. But the cause of the difference is not so important as the fact that it is there, in beta-test form, or maybe even alpha. | |
ID: 31599 | Rating: 0 | rate: / | |
My guess is that Noelia went back to a previous app, for scientific/testing/reassessment reasons. | |
ID: 31600 | Rating: 0 | rate: / | |
Nice to see the interaction between project managers and the contributors that make that project possible. | |
ID: 31601 | Rating: 0 | rate: / | |
And as long as GT240 and similar cards can still run the code (although too slowly nowadays) we can be sure that not a single DP instruction is needed from the hardware (those chips don't have any such units). My 680s ran all the Noelias, save several that failed within 10-30s. However, I've checked the ones that failed: I've successfully completed the same WU types, just different tasks, as there were multiple Noelia WUs out and about. | |
ID: 31607 | Rating: 0 | rate: / | |
My 680s ran all the Noelia's, save several that failed within 10-30s. However, I've checked the ones that failed I've successfully completed, just different tasks, but same WU type. As there were multiple Noelia WUs out and about. I just completed a 2-NOELIA_2HRUN (a new type for me) in 25 hours 31 minutes. There is nothing necessarily wrong with that; you don't get so many bonus points, but so what. I just think they need to alert the users about the different requirements, so that you can base your GPU purchasing decisions accordingly. It seems to be a new era; maybe when the dust settles, they will offer some guidance. Otherwise, it is just hit-or-miss as to what cards will work on what work units. | |
ID: 31609 | Rating: 0 | rate: / | |
Seems to be a matter of pure luck :-) I just got a Santi SR error after 5000 seconds on a GTX660. So it's not only Noelia. | |
ID: 31610 | Rating: 0 | rate: / | |
Nice to see the interaction between project managers and the contributors that make that project possible. Communications are limited to say the least, but at least 2 of the researchers are on leave and that means the others have to keep the project running, which is a challenge in itself. I just think they need to alert the users about the different requirements, so that you can base your GPU purchasing decisions accordingly. It seems to be a new era; maybe when the dust settles, they will offer some guidance. Otherwise, it is just hit-or-miss as to what cards will work on what work units. It's always been the case that the more expensive cards are faster and usually more reliable, however last time I looked they were not the best bang for buck. I don't know how the 670's and above are performing relative to the more mid-range cards such as the 650Ti and 660, but I'm seeing similar issues on my 660 and my 660Ti (mostly on Windows). On my Linux rig (650TiBoost) I've had no issues (using 304.88 drivers) but I have not run enough NOELIA WU's to say for sure... The one thing we do know is that 1GB GPU's struggle with WU's that require over 1GB GDDR. That's been noted and is stipulated in the Recommended GPU list, which does get updated when new GPU's arrive, and when we learn the hard way about task requirements. ____________ FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help | |
ID: 31614 | Rating: 0 | rate: / | |
re: NOELIA_2HRUN some positive news: | |
ID: 31616 | Rating: 0 | rate: / | |
It's always been the case that the more expensive cards are faster and usually more reliable, however last time I looked they were not the best bang for buck. I don't know how the 670's and above are performing relative to the more mid-range cards such as the 650Ti and 660, but I'm seeing similar issues on my 660 and my 660Ti (mostly on Windows). On my Linux rig (650TiBoost) I've had no issues (using 304.88 drivers) but I have not run enough NOELIA WU's to say for sure... The one thing we do know is that 1GB GPU's struggle with WU's that require over 1GB GDDR. That's been noted and is stipulated in the Recommended GPU list, which does get updated when new GPU's arrive, and when we learn the hard way about task requirements. My conclusion is that it now takes at least a GTX 670 on the Longs in order to avoid the great slowdown we see. Even the 660s with 2 GB memory are not enough; I have learned that the hard way, and am going to put mine on Shorts. There is no point spending 20 hours or more grinding away when they could be doing more productive work. Even the higher-level cards may still have problems, but I think those will be worked out over time. It is all in the way of scientific progress, which is fine with me, but they could have mentioned it to us. | |
ID: 31623 | Rating: 0 | rate: / | |
Hi, Jim: It's always been the case that the more expensive cards are faster and usually more reliable, however last time I looked they were not the best bang for buck. I don't know how the 670's and above are performing relative to the more mid-range cards such as the 650Ti and 660, but I'm seeing similar issues on my 660 and my 660Ti (mostly on Windows). On my Linux rig (650TiBoost) I've had no issues (using 304.88 drivers) but I have not run enough NOELIA WU's to say for sure... The one thing we do know is that 1GB GPU's struggle with WU's that require over 1GB GDDR. That's been noted and is stipulated in the Recommended GPU list, which does get updated when new GPU's arrive, and when we learn the hard way about task requirements. | |
ID: 31625 | Rating: 0 | rate: / | |
Earlier in this thread Zoltan said he had problems on one of his systems (GTX670's I think). Moving from 320 to 307.9 (on XP) seems to have resolved the issues. | |
ID: 31626 | Rating: 0 | rate: / | |
I've now spent about a month experimenting with my new GTX660, and to me it also depends on the setup of the system. In some systems (could be a weak PSU, the wrong MOBO or wrong MOBO settings) it did not do great, and in others it does. | |
ID: 31627 | Rating: 0 | rate: / | |
Looked at from a purely PPD point of view on my GTX 660s, I just got 20,500 points from an I654-SANTI_baxbim1 in the Short queue, which took 3 hours 33 minutes, or about 139k PPD. In contrast, the last Noelia Long to complete was a 2-NOELIA_2HRUN which yielded 112,500.00 points in just over 24 hours. So with this card I may be slightly better off in the Shorts, though there will be some Nathans in the Long queue that would probably complete without incident in the usual times. | |
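(Worked out from the numbers above: 20,500 credits × 24 h / 3.55 h ≈ 139,000 PPD for the short unit, versus 112,500 credits × 24 h / 24.5 h ≈ 110,000 PPD for the NOELIA_2HRUN, taking "just over 24 hours" as roughly 24.5 h. So the short queue comes out about 20-25% ahead on this card.) | |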
ID: 31629 | Rating: 0 | rate: / | |
Earlier in this thread Zoltan said he had problems on one of his systems (GTX670's I think). Moving from 320 to 307.9 (on XP) seems to have resolved the issues. It did. I'm crunching NOELIA tasks error free. I have 4 active hosts at the moment; every one has different drivers and OS.
1. WinXPx64, v310.33, 2xGTX680: 2 errors (in short time)
2. WinXPx64, v307.90, 2xGTX670: 11 errors due to the previous driver, and the experiments with the v320.49
3. WinXPx64, v314.22, GTX680+GTX670: 1 error: NATHAN_KIDKIXc22 :))
4. WinXPx86, v314.07, GTX680: 0 errors
Not active host:
5. Win7x64, v311.06, GTX480: 6 errors due to low GPU voltage (1000mV)
BTW: it seems that the long queue has nearly run out of NOELIA workunits, as my hosts have 8 NATHANs and 5 NOELIAs in their queue. | |
ID: 31630 | Rating: 0 | rate: / | |
Reporting back on the NOELIA_xMG_RUN I was crunching on my 1GB 650Ti: it took a long time, but went fine! Here is the WU info. | |
ID: 31631 | Rating: 0 | rate: / | |
I even stopped / started BOINC and suspended / resumed the WU a couple of times without issue :O Well, the joys of Linux, I guess! No, no, I did that too in Vista x86 and it worked as well, without failing the WU. Even rebooting the system. I tried everything I knew to get more than the 1% MCU load, but nothing helped. I guess you didn't have the 1% load, because when I had it, the estimated time to finish was wrong every time; it was updated several times by BOINC, though still wrong. Happy crunching! ____________ Greetings from TJ | |
ID: 31632 | Rating: 0 | rate: / | |
Thanks Zoltan, that helps paint a picture. | |
ID: 31641 | Rating: 0 | rate: / | |
I had 4 of these units fail simultaneously on my Windows 7 computer. I did a routine reboot without first suspending the units, and they all crashed. | |
ID: 31643 | Rating: 0 | rate: / | |
alax117-NOELIA_UBQ1-0-1-RND6675_0: 16.3h on GTX 650Ti / Linux, 142.5K credit! Mmm, yummy!! :D | |
ID: 31648 | Rating: 0 | rate: / | |
I have got a new type of Noelia: leux12-NOELIA_UBQ1-0-1-RND9216; it took almost 83,000 seconds to complete. I saw its MCU load was only 15%. | |
ID: 31690 | Rating: 0 | rate: / | |
You have got to be kidding me... | |
ID: 31701 | Rating: 0 | rate: / | |
Well thanks, then it's not my hardware :) | |
ID: 31703 | Rating: 0 | rate: / | |
24.11h on a GTX570 does sound bad. | |
ID: 31705 | Rating: 0 | rate: / | |
Did the 2HRUN units have the same issue with GPU RAM? His Win7 desktop needs more of that memory, while my XP with 570s (or perhaps the desktop-less, VRAM-empty second card) needed "only" between 60-62k secs. | |
ID: 31706 | Rating: 0 | rate: / | |
24.11h on a GTX570 does sound bad. Right now, I'm running CPDN, POGS, WCG, Einstein CPU tasks, nothing unusual that I haven't run before alongside GPUGrid. I didn't note any MCU reading. I did notice a little while ago that the NATHAN_KIDKIX that's been running since the lengthy NOELIA_2HRUN completed was taking unusually long and the GPU load was maybe a little low. I rebooted and it seems to be back on track to finish in 15 hours. Also, I see that I completed another NOELIA_2HRUN several days ago in a time that I would have expected (61.7 Ksec). So, it seems I probably was a victim of downclocking. Must remember to check that! :D Thanks! ____________ | |
ID: 31708 | Rating: 0 | rate: / | |
If BOINC suspends the Noelia (for whatever reason) you may get a driver crash, which may leave the card in some strange state. I've seen memory downclocking (which actually increases memory controller load, so quite the opposite of what SK and others are seeing) and just downclocking of the chip. In the 1st case it's enough to set proper clocks again; in the 2nd only a reboot helps. | |
ID: 31712 | Rating: 0 | rate: / | |
I just saw that the clock was down to 50%, with an MCU load of 17%, while doing a Santi LR. | |
ID: 31716 | Rating: 0 | rate: / | |
The reason was likely a driver reset triggered by some error happening in the GPU. It should be quite hot in your attic now, shouldn't it? Maybe GPU clocks of -13 MHz are in order. | |
ID: 31758 | Rating: 0 | rate: / | |
Yeah, way too hot: 33.7°C. I have taken one PC downstairs, but my girlfriend is not enjoying the noise. So I guess a bit less crunching in the next days. | |
ID: 31766 | Rating: 0 | rate: / | |
Happy you :P we have 38 degrees here ^^ tomorrow it should get to 39 or above :( :( :( | |
ID: 31771 | Rating: 0 | rate: / | |
Happy you :P we have 38 degrees here. 104°F in Austria? Is that a record? | |
ID: 31772 | Rating: 0 | rate: / | |
Happy you :P we have 38 degrees here ^^ tomorrow it should get to 39 or above :( :( :( We're 1 day behind you here in Budapest. According to the local weather forecast we'll have 37°C tomorrow, and 39°C on Monday, so I've set my hosts not to request new tasks. Maybe I'll crunch only 1 workunit per GPU during the night until this heatwave is gone. | |
ID: 31773 | Rating: 0 | rate: / | |
Happy you :P we have 38 degrees here. No, the all-time hottest day was 39.7. So we're getting close tomorrow ^^ or even beating it :( ____________ DSKAG Austria Research Team: http://www.research.dskag.at | |
ID: 31774 | Rating: 0 | rate: / | |
Wow, that's crazy for Central and even Eastern Europe; I never realized it got that hot there. Those temperatures are normal for the Northern San Joaquin Valley here in Northern California. I live up in the Sierras by Yosemite, where it can get as high as 32°C, and I thought I was suffering with my water cooling. | |
ID: 31775 | Rating: 0 | rate: / | |
Edit: by the way ETA, I see your RAC is very low for you, also under pressure by the heat wave? My main GPU is actually suffering from a healthy supply of POEM WUs; GPU-Grid is "only" a backup ;) And I switched to the short queue due to the recent Noelia problems.. but since I usually wasn't getting any credit bonus on the long runs any more, this should be fine. It's "only" ~30°C over here.. but was forecast to reach 38°C as well. Can't get the temperatures inside down any more without inviting in all the spidies and mosquitos! MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 31776 | Rating: 0 | rate: / | |
Only high 20s here in Canada and Sevilla, Spain. Last year in Sevilla during my visit: 42C. | |
ID: 31777 | Rating: 0 | rate: / | |
And I switched to the short queue due to the recent Noelia problems.. but since I usually wasn't getting any credit bonus on the long runs any more this should be fine. I haven't gotten any errors on my GTX 660s on the longs for the last 10 days. I think it is safe to come home. | |
ID: 31778 | Rating: 0 | rate: / | |
Happy you :P we have 38 degrees here ^^ tomorrow it should get to 39 or above :( :( :( No, no, not happy me :( that temperature is in my attic (34.1°C at the moment). Outside it is okay with 24°C now, but it was 32 for a couple of days in a row. But you have a nice summer then in Austria, though not nice for computers that have to work hard. ____________ Greetings from TJ | |
ID: 31779 | Rating: 0 | rate: / | |
And I switched to the short queue due to the recent Noelia problems.. but since I usually wasn't getting any credit bonus on the long runs any more, this should be fine. I wish I could say that as well. It has been a long time since my 660 finished a LR; in the last few days all Santi LRs crashed after running for a long time, near the finish. I like the Noelias: when they error, they do it very quickly. I even downgraded the drivers again on several people's advice. Nathans still seem the best... for crunchers. ____________ Greetings from TJ | |
ID: 31780 | Rating: 0 | rate: / | |
Guys, I think it would be better to move this discussion to other subforums to keep this subforum for actual news. This thread in particular has 328 posts, and I think it has served its purpose for a while now, so I will lock it to let it rest in peace. | |
ID: 31786 | Rating: 0 | rate: / | |