Message boards :
News :
WU: NOELIA_INS1P
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
I wonder if it will behave itself for dskagcommunity? He's running it with v8.14 (cuda42). If my WU completes (<5 h from now), limited credit might go to both of us, or just not to me. Something else that's still broken!

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
dskagcommunity | Joined: 28 Apr 11 | Posts: 463 | Credit: 958,266,958 | RAC: 34
Oh. I looked at the machine, but the WU seems to be running normally. I think it will need a little more than your 5 h :( But I will survive one WU with reduced credit.

DSKAG Austria Research Team: http://www.research.dskag.at
Retvari Zoltan | Joined: 20 Jan 09 | Posts: 2380 | Credit: 16,897,957,044 | RAC: 0
Oh. While I was reading your words, this video just popped into my mind. Sorry for being off topic.
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Yeah, I can see/hear why that popped into your head! This is how I felt.
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
The WU completed on both systems, and both systems got partial credit. Workunit pnitrox120-NOELIA_INS1P-3-12-RND7171:

| Task | Host | Sent | Reported | Status | Run time (s) | CPU time (s) | Credit | Application |
|---|---|---|---|---|---|---|---|---|
| 7273360 | 154384 | 13 Sep 2013, 3:43:04 UTC | 18 Sep 2013, 21:17:36 UTC | Completed and validated | 42,071.90 | 41,448.82 | 101,000.00 | Long runs (8-12 hours on fastest card) v8.03 (cuda55) |
| 7289613 | 117426 | 18 Sep 2013, 7:52:19 UTC | 18 Sep 2013, 23:23:28 UTC | Completed and validated | 46,687.89 | 2,714.13 | 101,000.00 | Long runs (8-12 hours on fastest card) v8.14 (cuda42) |

The 8.14 app produced a more informative stderr output:

```
<core_client_version>7.0.28</core_client_version>
<![CDATA[
<stderr_txt>
# GPU [GeForce GTX 560 Ti] Platform [Windows] Rev [3203] VERSION [42]
# SWAN Device 0 :
# Name : GeForce GTX 560 Ti
# ECC : Disabled
# Global mem : 1279MB
# Capability : 2.0
# PCI ID : 0000:04:00.0
# Device clock : 1520MHz
# Memory clock : 1700MHz
# Memory width : 320bit
# Driver version : r301_07 : 30142
# GPU 0 : 60C
# GPU 0 : 62C
# GPU 0 : 63C
# GPU 0 : 64C
# GPU 0 : 65C
# GPU 0 : 66C
# GPU 0 : 67C
# GPU 0 : 68C
# GPU 0 : 69C
# GPU 0 : 70C
# Time per step (avg over 4200000 steps): 11.114 ms
# Approximate elapsed time for entire WU: 46680.219 s
01:15:25 (2124): called boinc_finish
</stderr_txt>
]]>
```

Wouldn't it be better to include a timestamp with the GPU temperature changes? BTW, the "# GPU 0" prefix isn't needed every time a temperature change is reported: if there is only one GPU, reporting the card's name once is sufficient; if there is more than one, report which device the task runs on, and then again only if that changes.
Joined: 8 Mar 12 | Posts: 411 | Credit: 2,083,882,218 | RAC: 0
Just wanted to say Noelia's recent WUs are beating Nathan's by a long shot on the 780s. Avg GPU usage and mem usage is 80%/20% for Nathan vs 90%/30% for Noelia. Her tasks also get a lot fewer access violations. Nicely done.
Joined: 26 Jun 09 | Posts: 815 | Credit: 1,470,385,294 | RAC: 0
> Just wanted to say noelia recent wu are beating nathan by a long shot on the 780s. Avg gpu usage and mem usage is 80%/20% nathan v 90%/30% noelia. Her tasks also get a lot less access violations.

Yes, I see the same on my 770 as well. I even got a Noelia beta on my 660, and it performed better than the other betas, Santi's I think they were.

Greetings from TJ
Beyond | Joined: 23 Nov 08 | Posts: 1112 | Credit: 6,162,416,256 | RAC: 0
> Just wanted to say noelia recent wu are beating nathan by a long shot on the 780s. Avg gpu usage and mem usage is 80%/20% nathan v 90%/30% noelia.

The other side of the coin is that Noelia WUs cause CPU processes to slow somewhat and can bring WUs on the AMD GPUs to their knees (my systems all have 1 NV and 1 AMD each). These problems are not seen with either Nathan or Santi WUs.
Joined: 8 Mar 12 | Posts: 411 | Credit: 2,083,882,218 | RAC: 0
Personally, I have no issues with GPUs stealing CPU resources if needed. I'd rather feed the roaring lion than the grasshopper.
ritterm | Joined: 31 Jul 09 | Posts: 88 | Credit: 244,413,897 | RAC: 0
Arrgh... potx21-NOELIA_INS1P-1-14-RND1061_1. Another one with this in the stderr output:

```
The simulation has become unstable. Terminating to avoid lock-up
```

No other failures for this WU. Let's see how the next guy does on it.
ritterm | Joined: 31 Jul 09 | Posts: 88 | Credit: 244,413,897 | RAC: 0
> Let's see how the next guy does on it.

Just fine, I see... Never mind. Move along, nothing to see here.
Betting Slip | Joined: 5 Jan 09 | Posts: 670 | Credit: 2,498,095,550 | RAC: 0
> Arrgh... potx21-NOELIA_INS1P-1-14-RND1061_1. Another one with this in the stderr output:

A large part could be due to the fact that these WUs consume a gig of vRAM, plus any overclocking.
Joined: 25 Mar 12 | Posts: 103 | Credit: 14,948,929,771 | RAC: 14
My failure rate on these units is getting quite bad: three of them in the last couple of days, and my wingmen are completing them fine. No problems with any other type of WU. Any advice?

2x GTX 660 Ti in Linux, driver 304.88, which has been rock solid so far. Not much info in the stderr, at least to my knowledge:

```
<core_client_version>6.10.58</core_client_version>
<![CDATA[
<message>
process exited with code 255 (0xff, -1)
</message>
<stderr_txt>

</stderr_txt>
]]>
```
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
4 errors from 67 WUs isn't very bad, but they are all NOELIA WUs, so there is a trend. You are completing some, though. I had one fail on a Linux system with 304.88 drivers, but my system is prone to failures due to the GTX 650 Ti Boost, which has been somewhat troublesome in every system and setup I've used (the card operates too close to the edge). I also have two GPUs in my system, and I got the same output.
Joined: 11 Oct 08 | Posts: 1127 | Credit: 1,901,927,545 | RAC: 0
I have had a long run of success (61 straight valid GPUGrid tasks over the past 2 weeks!), including 4 successful NOELIA_INS1P tasks, on my multi-GPU Windows 8.1 x64 machine.
Joined: 25 Mar 12 | Posts: 103 | Credit: 14,948,929,771 | RAC: 14
> 4 errors from 67 WU's isn't very bad, but they are all NOELIA WU's, so there is a trend. You are completing some though.

4 out of 67 is OK, I agree, but it's around 50% for these NOELIA_INS1P, so the trend is there, as you say. No other type has failed in the last months, including other NOELIA types. The two units of the same type following the last failure have completed fine... maybe the Moon's influence :)
ritterm | Joined: 31 Jul 09 | Posts: 88 | Credit: 244,413,897 | RAC: 0
Betting Slip wrote:
> A large part could be due to the fact that these WU's consume a GIG of vRam and OC

Yep, good call. I've got another one of these running. Afterburner shows memory usage at slightly more than 1.1 GB... I suppose that's stressing my GTX 570 (1280 MB), isn't it?
Betting Slip | Joined: 5 Jan 09 | Posts: 670 | Credit: 2,498,095,550 | RAC: 0
Betting Slip wrote:
> A large part could be due to the fact that these WU's consume a GIG of vRam and OC

I had one fail for becoming unstable on a GTX 560 Ti with the same amount of memory. They should be OK with that amount, but they're still failing. It's this sort of problem that scares away contributors and annoys the hell out of me.
Joined: 5 Mar 13 | Posts: 348 | Credit: 0 | RAC: 0
Apparently they have quite a small error rate (<10%), so nothing systematic to worry about. I guess it's the same memory problem (WUs being too large) that has been troubling Noelia's WUs lately. Apparently these large ones should be finishing soon, so it's going to get better. As for why they cause problems for some, I don't know :/
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
I've only had 12 failures this month (that are still in the database), but 3 of the last 4 were NOELIA_INS1P tasks. If I include 2 recent NOELIA_FXArep failures, that's 5 of my last 6 failures. Of course, I've been running more of Noelia's work recently, as there have been more tasks available. I've also been running some short SANTI_MAR tasks; these shorter runs have less chance of failing, so there would be little chance of seeing a trend in their failures. I suspect most of my failures occur on the mid-range cards (GTX 650 Ti Boost and GTX 660) rather than the slightly bigger cards. Again, these mid-range cards tend to run closer to their power targets, so there is more chance of failure.
©2025 Universitat Pompeu Fabra