
Message boards : Number crunching : NATHAN_FAX3 and FAX4 discussion

Profile Retvari Zoltan
Message 23855 - Posted: 10 Mar 2012 | 10:22:52 UTC
Last modified: 10 Mar 2012 | 10:24:10 UTC

These NATHAN_FAX3 workunits take a really long time to finish.
For example:
49 seconds short of 13 hours on Stoneageman's overclocked (820MHz) GTX 580
13 hours 17 minutes on Venec's factory-clocked GTX 580.
Don't be surprised if a NATHAN_FAX3 takes more than 30 hours to finish on a lesser card (GTX 460, GTX 560).

Profile Retvari Zoltan
Message 23858 - Posted: 10 Mar 2012 | 11:57:08 UTC - in response to Message 23855.
Last modified: 10 Mar 2012 | 11:58:07 UTC

My overclocked (850MHz) GTX 580 (supported by a Core i7 980X running at 32*134MHz (4.288GHz)) finished its first NATHAN_FAX3 in 12 hours 55 minutes and 20 seconds.
Its upload size is also huge: about 116MB.

Profile ritterm
Message 23859 - Posted: 10 Mar 2012 | 12:01:26 UTC - in response to Message 23855.

These NATHAN_FAX3 workunits take a really long time to finish... Don't be surprised if a NATHAN_FAX3 takes more than 30 hours to finish on a lesser card (GTX 460, GTX 560)

Yep...My stock GTX 570 is almost 13 hours into this bad boy and the BOINC manager shows 54% completion. Bye-bye time bonus?
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Message 23861 - Posted: 10 Mar 2012 | 12:51:07 UTC - in response to Message 23854.
Last modified: 10 Mar 2012 | 12:53:17 UTC

They look closer to 4 times as long. Perhaps slightly protracted, but better than 4h!

I4R77-NATHAN_FAX3-0-62-RND0299_0 3250554 9 Mar 2012 | 19:47:56 UTC 10 Mar 2012 | 12:29:21 UTC Completed and validated 47,832.74 47,832.74 114,150.00 Long runs (8-12 hours on fastest card) v6.16 (cuda31)
I4R65-NATHAN_FAX3-0-62-RND7183_0 3250541 9 Mar 2012 | 19:24:54 UTC 10 Mar 2012 | 12:15:48 UTC Completed and validated 47,790.18 47,790.18 114,150.00 Long runs (8-12 hours on fastest card) v6.16 (cuda31)
I4R24-NATHAN_FAX3-0-62-RND0334_0 3250495 9 Mar 2012 | 18:59:58 UTC 10 Mar 2012 | 11:03:26 UTC Completed and validated 47,437.34 47,437.34 114,150.00 Long runs (8-12 hours on fastest card) v6.16 (cuda31)
I2R19-NATHAN_FAX3-0-62-RND9800_0 3250258 9 Mar 2012 | 17:40:30 UTC 10 Mar 2012 | 9:29:08 UTC Completed and validated 47,802.01 47,802.01 114,150.00 Long runs (8-12 hours on fastest card) v6.16 (cuda31)

I1R68-NATHAN_FAX-7-50-RND4254_0 3250267 9 Mar 2012 | 17:14:17 UTC 9 Mar 2012 | 22:58:49 UTC Completed and validated 11,905.66 11,905.66 30,000.00 Long runs (8-12 hours on fastest card) v6.16 (cuda31)
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Retvari Zoltan
Message 23863 - Posted: 10 Mar 2012 | 13:12:14 UTC - in response to Message 23859.

Yep...My stock GTX 570 is almost 13 hours into this bad boy and the BOINC manager shows 54% completion. Bye-bye time bonus?

These NATHAN_FAX3 workunits could run much faster on your PC if you set the SWAN_SYNC=0 environment variable and free up one CPU core.

Profile Retvari Zoltan
Message 23867 - Posted: 10 Mar 2012 | 13:44:51 UTC - in response to Message 23861.

They look closer to 4 times as long. Perhaps slightly protracted, but better than 4h!

Yes, the NATHAN_FA series has 500,000 steps, while the NATHAN_FAX series has 2,000,000 steps. That's exactly 4 times as many. But that's not the end of the story, because the time per step of a NATHAN_FAX is also slightly higher (23.2ms) than a NATHAN_FA's (21.8ms).
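For what it's worth, those two numbers can be cross-checked with a little arithmetic (a quick Python sketch, using only the step counts and per-step times quoted above):

```python
# Cross-check of the figures quoted above.
fa_steps, fax_steps = 500_000, 2_000_000   # steps per workunit
fa_ms, fax_ms = 21.8, 23.2                 # measured time per step (ms)

fax_hours = fax_steps * fax_ms / 1000 / 3600       # FAX runtime in hours
ratio = (fax_steps * fax_ms) / (fa_steps * fa_ms)  # FAX vs FA total work

print(f"FAX runtime ~{fax_hours:.1f} h, {ratio:.2f}x a FA unit")
```

That gives roughly 12.9 hours and about 4.26x, which lines up with the ~13-hour GTX 580 reports above.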

Profile ritterm
Message 23869 - Posted: 10 Mar 2012 | 15:02:18 UTC - in response to Message 23863.
Last modified: 10 Mar 2012 | 15:03:55 UTC

These NATHAN_FAX3 workunits could run much faster on your PC if you set the SWAN_SYNC=0 environment variable and free up one CPU core.

Thanks, RZ. I'll free up one CPU and see how it goes.

Is SWAN_SYNC applicable in Windows hosts?
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Message 23871 - Posted: 10 Mar 2012 | 15:13:16 UTC - in response to Message 23869.

Is SWAN_SYNC applicable in Windows hosts?

Yes, it still seems to make some difference on Windows, but probably more for Linux (different apps). Remember to free a CPU core/thread (for each GPU), or it makes no difference.
____________

Profile Retvari Zoltan
Message 23872 - Posted: 10 Mar 2012 | 15:25:59 UTC - in response to Message 23869.
Last modified: 10 Mar 2012 | 15:28:19 UTC

Is SWAN_SYNC applicable in Windows hosts?

Sure.
Start button ->
type systempropertiesadvanced in the search box, press Enter ->
press the Environment Variables button near the bottom of the window ->
press the New button near the bottom (under "System variables") ->
type SWAN_SYNC in the upper box (name), and 0 (zero) in the lower box (value) ->
press OK three times.
After that, you need to restart the BOINC client (stopping the science applications on exit), or you can restart Windows.
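Once BOINC has been restarted, you can verify the variable is actually visible to a newly started process (a minimal Python sketch, just for checking; BOINC itself only needs the variable set, not this script):

```python
import os

def swan_sync_status():
    """Return the SWAN_SYNC value this process sees, or None if it is unset."""
    return os.environ.get("SWAN_SYNC")

if __name__ == "__main__":
    value = swan_sync_status()
    print("SWAN_SYNC =", value if value is not None else "not set")
```

If it still prints "not set" after a restart, the variable probably went into the wrong box (user variables instead of system variables).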

HA-SOFT, s.r.o.
Message 23873 - Posted: 10 Mar 2012 | 15:59:40 UTC
Last modified: 10 Mar 2012 | 16:44:17 UTC

Our times:

GTX 590 (1 core) (607 MHz graphics clock, 1707 MHz memory clock, 1215 MHz processor clock)
time: 16 hours
credit: 114k

GTX 580 (832 MHz graphics clock, 2100 MHz memory clock, 1664 MHz processor clock)
time: 12-13 hours
credit: 114k

All cards run with swan_sync and one dedicated hyper-threading core.

Our 560 hasn't got a FAX3 task yet, but the times above are OK.

Profile ritterm
Message 23875 - Posted: 10 Mar 2012 | 16:31:13 UTC

Okay... I set swan_sync=0 and have freed up a CPU. The acemd process is now using a full core (25% on my C2Q) and about 230MB RAM. Is that what I should expect?
____________

HA-SOFT, s.r.o.
Message 23876 - Posted: 10 Mar 2012 | 16:39:31 UTC - in response to Message 23875.
Last modified: 10 Mar 2012 | 16:39:47 UTC

Okay... I set swan_sync=0 and have freed up a CPU. The acemd process is now using a full core (25% on my C2Q) and about 230MB RAM. Is that what I should expect?


Yes. One GPU task consumes about 97% of one CPU core here (i7-2600K).

Profile ritterm
Message 23885 - Posted: 10 Mar 2012 | 23:11:23 UTC
Last modified: 10 Mar 2012 | 23:37:18 UTC

On my GTX 570: 23-plus hours, 95K. It was reported 29.5 hours after it was delivered, so I guess I didn't get the time bonus... :P

Hard for me to tell how much better it did after I added swan_sync and dedicated a CPU. That would have been at about 19 hours in, but I don't remember what its progress was at that point.
____________

JLConawayII
Message 23890 - Posted: 11 Mar 2012 | 5:03:00 UTC

I just heard a tiny scream from my GTX 260.

Profile Mad Matt
Message 23893 - Posted: 11 Mar 2012 | 13:37:17 UTC

Also, GPU usage is below 90% again, while the old Nathans finally got up to 97% on a Fermi (GTX 570).

You guys just can't get enough? With WUs running that long on a Fermi, have you ever thought how long they would run on a CPU? And then paying these credits, or zero if they fail?

I think one reason this project has lost any sense of fair credits and useful WU sizes is the lack of a CPU app. Why isn't there one? Because it would demonstrate the monstrous demands you are placing on hardware and crunchers...
____________

RobertN
Message 23895 - Posted: 11 Mar 2012 | 14:51:06 UTC

First NATHAN_FAX3 unit completed here and it took almost 40 hours with a GTX 560 Ti (384 cores). Bit too much IMO. Don't want to free a CPU-core in combination with the swan setting since I run a CPU project as well.

Profile dskagcommunity
Message 23897 - Posted: 11 Mar 2012 | 16:06:33 UTC
Last modified: 11 Mar 2012 | 16:13:56 UTC

Wow, a 115MB upload? I think this will kill my mobile connection, where three BOINC machines share one bad link O.o

It's interesting that I got a FAX on my 285 and it only took ~12 hours (with only a P4 processor, no extra free core and no tuned BOINC, all standard). Are there different types of them?

I now have a new one; let's see how long this one takes ^^

PS: Oh, I see the first one was a FAX WU; the second one is a FAX3. Oh yeah, I think this will now take a little bit of time :D
Hmpf, I was happy to finally get the 24h bonus on all of my long-queue WUs with the new card in one of the 24h/7d machines, and now I don't get it again because of FAX3 ^^
____________
DSKAG Austria Research Team: http://www.research.dskag.at



Profile Damaraland
Message 23900 - Posted: 11 Mar 2012 | 18:03:47 UTC - in response to Message 23895.
Last modified: 11 Mar 2012 | 18:06:31 UTC

First NATHAN_FAX3 unit completed here and it took almost 40 hours with a GTX 560 Ti (384 cores). Bit too much IMO. Don't want to free a CPU-core in combination with the swan setting since I run a CPU project as well.

I run CPU projects too. Freeing cores doesn't mean CPU tasks are going to slow down. In fact, I've found they run faster. The reason is that a CPU with too much work gets stuck and slows down (slowing the GPU too).
Freeing cores doesn't mean they won't be used!! It means fewer tasks are executed at a time.
8 cores doesn't necessarily mean x8 throughput. You also have to consider that the GPU usually takes 0.2-0.4 of a CPU core, and while that CPU time is idle (because it is waiting for the GPU to return results) the other CPU tasks run faster.
This task took 31.64 hours on a GTX 560 Ti without SWAN_SYNC but with 2 cores freed. Now I have changed it to see the difference.
____________
HOW TO - Full installation Ubuntu 11.10

Profile SMTB1963
Message 23901 - Posted: 11 Mar 2012 | 18:19:19 UTC - in response to Message 23890.

I just heard a tiny scream from my GTX 260.

lol... maybe your 260 can get together with my 275s to form a PTSD support group...

RobertN
Message 23902 - Posted: 11 Mar 2012 | 19:13:52 UTC
Last modified: 11 Mar 2012 | 20:08:34 UTC

@Damaraland: I guess you have an Intel CPU with hyper-threading switched on. If you free up one HT core, the other process on the shared core will speed up. But at the moment I run 4 PSP-PRP tasks, and that project benefits too little from HT, so it is switched off.

I think one should get the bonus anyway whether you finish these behemoth tasks within 24 hours or not.

Edit: And perhaps the bonus should be bigger compared to the regular long units.

Profile ritterm
Message 23903 - Posted: 11 Mar 2012 | 20:22:47 UTC - in response to Message 23902.

I think one should get the bonus anyway whether you finish these behemoth tasks within 24 hours or not...

+1

____________

JLConawayII
Message 23906 - Posted: 11 Mar 2012 | 21:47:34 UTC - in response to Message 23902.
Last modified: 11 Mar 2012 | 22:02:58 UTC


I think one should get the bonus anyway whether you finish these behemoth tasks within 24 hours or not.



Well, the posted estimate for long runs is 8-12 hours for the fastest cards. This was a gross overestimate until now, as my 260 would complete most of them in 8 hours, and some of the biggest ones would take 12-15 hours. Now it seems we've gone in the other direction, taking well over half a day to complete on 5xx series cards. If the size of the tasks is putting such a demand even on the most powerful cards, it seems reasonable that the bonus cutoffs would be extended a bit, maybe to 36/72 hours for 50%/25%, or something along those lines. Obviously you don't want to go crazy with bonus points, but it's something to consider if we're going to be modeling larger molecular systems that take considerably longer to complete.

rreit
Message 23908 - Posted: 12 Mar 2012 | 1:53:18 UTC
Last modified: 12 Mar 2012 | 1:55:02 UTC

I've had 2 NATHAN_FAX3 tasks complete. Both tasks were run with a dedicated core. One task was run without SWAN_SYNC being set, and the other task was run with SWAN_SYNC=0 (restarted BOINC after the change). The results were the same though.

NVIDIA GeForce GTX 570 (stock clock)
Windows 7 x64
Nvidia driver: 295.73
Core i7 920 @ 3.8Ghz

Dedicated core, no SWAN_SYNC
21 hrs. 8 min.

Dedicated core, SWAN_SYNC=0
21 hrs. 7 min.

So SWAN_SYNC didn't help at all for my setup.

I don't see why the GTX 570 takes 21 hours+ if the GTX 580 takes 12-13 hours. That seems like too large of a difference between the GTX 570 and GTX 580.

alephnull
Message 23909 - Posted: 12 Mar 2012 | 3:45:18 UTC

There does seem to be some credit disparity for the FAX3 WUs in terms of time vs. credit rewarded, but I'm not sure. Fair enough, so long as the work keeps coming!

Anyway, I definitely turned off long work units for my machines running GTX 275 cards. It was hurting those poor, poor cards. The GTX 500 series cards seem to be OK with them, albeit they do take a while.

My question is this: I like running the older long WUs (i.e. non-FAX3 WUs) on my GTX 275s. They didn't have issues completing those in time. Since there's no separation of long WU types on the preferences screen, would it be possible to add something like that, so we can select to still run longs without FAX3? For example:

ACEMD standard
ACEMD beta
ACEMD for long runs (8-12 hours on fastest GPU)
ACEMD for long runs (8-12 hours on fastest GPU) & FAX3

or something to that effect? That way the slower cards can still get longs but not struggle with the FAX3s. Just thought I'd ask. Thanks for the consideration.

bob

MarkJ
Volunteer moderator
Volunteer tester
Message 23910 - Posted: 12 Mar 2012 | 7:29:58 UTC

Perhaps we need short, medium and long queues. All the current longs except FAX3 would go to the medium queue. We'd need an option to select which type of work to allow as well. Deselect the long queue by default and put a note next to the long-queue option suggesting a GTX 570 or better.

It may also be possible to limit the long queue to certain speed cards by checking the estimated FLOPS and memory values returned in the scheduler request (assuming the client does pass them across).
____________
BOINC blog

Profile Damaraland
Message 23912 - Posted: 12 Mar 2012 | 9:26:10 UTC - in response to Message 23910.

Perhaps we need short, medium and long queues. All the current longs except FAX3 would go to the medium queue. We'd need an option to select which type of work to allow as well. Deselect the long queue by default and put a note next to the long-queue option suggesting a GTX 570 or better.

I proposed something similar. The response: out of the question because of maintenance costs (it seems that with the team they have, they can only handle 3 queues).
It may also be possible to limit the long queue to certain speed cards by checking the estimated FLOPS and memory values returned in the scheduler request (assuming the client does pass them across).

I agree with this. I proposed it too. I think the program that distributes the tasks should be smarter. It wouldn't need to look at the FLOPS; I think it would be best if the server just looked at the computer's configuration and determined whether it's slow or fast.

____________

ich_eben
Message 23916 - Posted: 12 Mar 2012 | 11:27:20 UTC - in response to Message 23893.

Also, GPU usage is below 90% again, while the old Nathans finally got up to 97% on a Fermi (GTX 570).

You guys just can't get enough? With WUs running that long on a Fermi, have you ever thought how long they would run on a CPU? And then paying these credits, or zero if they fail?

I think one reason this project has lost any sense of fair credits and useful WU sizes is the lack of a CPU app. Why isn't there one? Because it would demonstrate the monstrous demands you are placing on hardware and crunchers...


Look over at PrimeGrid and the Genefer World Record tasks.
I finished two tasks on my GTX 580 and both of them easily took over 75 hours.
That's long, and part of why I switched back to GPUGRID for a bit. (And to get the next batch here ;-))
____________

JLConawayII
Message 23919 - Posted: 12 Mar 2012 | 18:48:22 UTC

Target acquired. FAX3!!

At the current rate of progress, it SHOULD take 37 hours on my GTX 260. We'll see how this estimate evolves as the WU progresses.

MD
Message 23920 - Posted: 12 Mar 2012 | 21:22:53 UTC

41 hrs in and still 21 to go?!?!?!?! GTS 450 (core 980, shader 1960, RAM 2050) hooked up to a 9950BE with a slight overclock. I had just switched over to long WUs; regular long WUs ran 1-3 hrs longer than short ones until I got this FAX3. Miffed at the idea that I'm losing the 24hr bonus, plus losing credits from the WUs I could have run while waiting for this to finish.

[AF>Belgique] bill1170
Message 23924 - Posted: 13 Mar 2012 | 8:38:04 UTC - in response to Message 23920.

34 hours with my GTX 275 @ 633/1134/1521 (shaders slightly overclocked).

A typical NATHAN was crunched in ~8 hours. Other tasks took no more than 12 hours.

The GTX 275 was qualified for long-queue work units, but is not any more. I'm downgrading to the short queue. Unfortunately cobblestones will probably drop from 100K/day to 30K/day.

The drop in cobblestones is a little bit disappointing, but on the other hand the newly established link between our work and the scientists' publications is exciting enough to compensate ;-)

It would be nice to build a table of the GPUs that are qualified for the long queue in its new configuration.


____________

Profile Retvari Zoltan
Message 23926 - Posted: 13 Mar 2012 | 9:03:46 UTC - in response to Message 23924.

It would be nice to build a table of the GPUs that are qualified for the long queue in its new configuration.

We don't know their exact model numbers yet :)

Profile Damaraland
Message 23927 - Posted: 13 Mar 2012 | 10:35:34 UTC - in response to Message 23926.
Last modified: 13 Mar 2012 | 10:41:10 UTC

SWAN_SYNC doesn't have any effect. In both cases I freed 2 cores out of 8. Linux 3.0.0-16-generic, i7-2600K CPU @ 3.40GHz.

SWAN_SYNC=0
31.70 h GTX 560 Ti @ 1.76 GHz
36.35 h GTX 260 @ 1.41 GHz
Without SWAN_SYNC
31.64 h GTX 560 Ti @ 1.76 GHz
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Message 23928 - Posted: 13 Mar 2012 | 10:47:09 UTC - in response to Message 23927.
Last modified: 14 Mar 2012 | 10:56:59 UTC

For the new apps I have not tested the benefit of using SWAN_SYNC. I will start now for Windows, but it could do with being tested for Linux as well (especially). Performance differences may fluctuate by task type, so several task types would need to be looked at. Remember to restart for the changes to be applied, and to use capitals for Linux.

Both your comparison tasks ran without SWAN_SYNC=0.

If SWAN_SYNC was in use, then the stderr output would include, for example, "SWAN: Using synchronization method 0".

BTW, why do you have your GTX 260 in PCIE slot 0 and your GTX 560 Ti in PCIE slot 1?

- On Win x64 it looks like SWAN_SYNC is only increasing performance by around 3.5% (though I've only run one CB1 task without SWAN_SYNC on).
____________

Profile dskagcommunity
Message 23929 - Posted: 13 Mar 2012 | 12:17:48 UTC
Last modified: 13 Mar 2012 | 12:24:44 UTC

Pff O.o Over 5(!) times more computing time but only ~2.8 times more credit O.o Kick it over 100k and I'm happy again ^^ A little disappointing; I only had fun with the "new" 285 for 2 days :(

Unfortunately the mobile connection itself needed an additional 12 hours for the upload O.o Still within the 125% bonus time with it *lucky*


5094249 3254378 117426 11 Mar 2012 | 15:03:07 UTC 13 Mar 2012 | 8:21:35 UTC Completed and validated 136,916.69 29,691.78 95,125.00 Long runs (8-12 hours on fastest card) v6.16 (cuda31)

5092622 3255475 117426 11 Mar 2012 | 8:13:44 UTC 11 Mar 2012 | 19:21:52 UTC Completed and validated 24,709.95 1,202.67 35,811.00 Long runs (8-12 hours on fastest card) v6.16 (cuda31)
____________

alephnull
Message 23950 - Posted: 14 Mar 2012 | 4:28:22 UTC

Got a FAX4 WU just now. Is there any significant difference between this and a FAX3?

I'm interested to see how long this will take to complete. I'm still only getting about 75-80% GPU use (for longs) out of my cards with swan_sync set and 1 free hyper-threaded core per GPU on all machines with GTX 500 series cards. When those machines run the shorts, they seem to have higher GPU use, but I haven't observed carefully enough to say that with 100% conviction. Just informational really; it's been good enough up to this point.

The FAX series GPU use is about the same as described above for longs. I will be interested to see how long this FAX4 WU takes, if there is a difference from the FAX3s. Anything in particular to look out for?

bob

JLConawayII
Message 23967 - Posted: 15 Mar 2012 | 0:24:59 UTC

My GTX260 completed the FAX3 WU in 36.5 hours. The FAX4 it's working on now looks like it will finish in around 22.5 hours. I think upload is still going to put it over the 24h limit, but anyone with a newer card should probably be okay now.

Profile Damaraland
Message 23968 - Posted: 15 Mar 2012 | 7:40:16 UTC - in response to Message 23967.

I think upload is still going to put it over the 24h limit, but anyone with a newer card should probably be okay now.

Still figuring out what I did wrong with SWAN_SYNC (it's off now).

With the upload included, the GTX 260 didn't make 24h, but the GTX 560 did:
GTX 260: 22.7 h (FAX4)
GTX 560 Ti: 19.82 h (FAX4)




____________

francescocmazza
Message 23970 - Posted: 15 Mar 2012 | 11:59:16 UTC

Hi all,
I have a GTX 560 Ti as well.
Damn, these NATHAN 3s are really huge. I have been working on one for 22 hours and still have 13.5 to go. My card is neither overclocked nor underclocked. If you can, please try to reduce the size of the WUs by half; I am afraid the reduced NATHAN 4s are still going to take us very long. Also, the effective damage of one computational error becomes much greater with such huge WUs. Unfortunately there is no way for me to improve my stats, as my card has a bad manufacturer heatsink IMHO and runs at 80°C. Is that normal, btw?

Thank you and take care
Francesco

Profile skgiven
Volunteer moderator
Volunteer tester
Message 23971 - Posted: 15 Mar 2012 | 12:24:38 UTC - in response to Message 23970.

The researchers stated that they will actively review task sizes. If, for example, they see higher failure rates, they will most likely make changes to reduce runtime. In the meantime, if any crunchers don't like the duration or experience failures, crunch some of the normal tasks. While the credit will be lower, you will get badges for contributing to different research papers ;)
____________

DavidVR
Message 23972 - Posted: 15 Mar 2012 | 17:26:02 UTC

On my GTX 260 it's looking like it's going to take about 50 hours to complete a NATHAN_FAX WU. Typically long runs would take 20 - 25 hours.

pvh
Message 23975 - Posted: 16 Mar 2012 | 1:11:58 UTC

I get a FAX3 done in just under 45 hours on my GTX 460 and a FAX4 in 28 hours. A bit too big for my taste... No chance of a time bonus here. Until now I always got it...

Profile skgiven
Volunteer moderator
Volunteer tester
Message 23978 - Posted: 16 Mar 2012 | 2:39:38 UTC - in response to Message 23975.

There are two time bonuses; one for <24h (50%) and one for <48h (25%).
____________

Betting Slip
Message 23981 - Posted: 16 Mar 2012 | 9:14:39 UTC - in response to Message 23978.

There are two time bonuses; one for <24h (50%) and one for <48h (25%).


I'm not really bothered by all this, credit/time-wise. However, if this project doesn't want to lose other perfectly good crunching machines, I would raise the credits massively.

On my overclocked GTX 460s on long WUs I used to get around 1 cobblestone for 1 second of computing, and now with the NATHAN WUs on the same cards I am only getting 1 cobblestone for 2 seconds of computing.

It seems there is serious credit deflation on this project, and that isn't good for the amount of computing power you're going to get and keep.

There also seems to be serious discrimination against machines with cards that aren't top rank. I think attitudes need to be rebalanced with the requirements on this project.

None of this matters much to me, as I am more concerned that enough power is left over to run other things.


____________
Radio Caroline, the world's most famous offshore pirate radio station.
Great music since April 1964. Support Radio Caroline Team -
Radio Caroline

Profile nenym
Message 23987 - Posted: 16 Mar 2012 | 11:51:23 UTC - in response to Message 23981.

From my point of view, there is a much bigger difference between the longest and the shortest GPU tasks on SETI (running the KWSN package: 2x on a CC 1.1 GPU, 3x on a CC 1.3 GPU; and on a Fermi GPU it makes no sense to me to waste cycles and electricity on SETI) than there is in the long queue on GPUGRID. On the other side, you are right when comparing to DistrRTgen or PG PSA manual tpsieving, which give great credit (5x-8x more than the NATHAN_CB1 series).

TheFiend
Message 24005 - Posted: 17 Mar 2012 | 10:41:49 UTC

I run long tasks on a GTX 460 and have come to the conclusion that it is better for me to abort these FAX3 and FAX4 units....

wiyosaya
Send message
Joined: 22 Nov 09
Posts: 114
Credit: 589,114,683
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24054 - Posted: 20 Mar 2012 | 13:56:39 UTC

The method that I use to estimate completion times is this:

((total minutes so far) / %done)/(0.6)

This gives total processing hours. I have found it to be reasonably accurate, especially once the WU is about 10% done. It also seems to be a better estimate of the total time than the "time remaining" field plus the current run time, until, of course, the remaining time is small.

The FAX4 that I just finished projected to about 28 hours using that method, and that is what it took.
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24056 - Posted: 20 Mar 2012 | 14:30:00 UTC - in response to Message 24054.

For GPUGrid tasks, the time taken and the % completed can be used to estimate the total time accurately, so your formula is sound. If a task is 10% complete after 1h, it will take 10h to finish.

Don't go by the "Remaining (estimate)" time. That estimate is based on previously completed GPUGrid tasks, which include tasks of different lengths, so it's not going to be exact.

For some other projects, even the % complete is not accurate. Those projects have task types with varying run times, and their % complete should be treated as a very rough estimate. On such projects it's common to see the % complete freeze for extended periods, and often sit at 100% for a long time.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile nate
Send message
Joined: 6 Jun 11
Posts: 124
Credit: 2,928,865
RAC: 0
Level
Ala
Scientific publications
watwatwatwatwat
Message 24064 - Posted: 21 Mar 2012 | 14:37:43 UTC


The researchers stated that they will actively review task sizes. If, for example, they see higher failure rates, they will most likely make changes to reduce runtime. In the meantime, if any crunchers don't like the duration, or experience failures, crunch some of the normal tasks. While the credit will be lower, you will get badges for contributing to different research papers ;)


Adding to this comment from skgiven, I just want to reiterate that we are indeed evaluating many factors with respect to these long work units. Your comments/complaints are not falling on deaf ears, so keep them coming. Additionally, we are looking at ways to reduce the size of uploads for users, though it is not clear at this point whether we will be able to do that. Please be patient with the implementation of changes/improvements.

Betting slip, nenym: Interesting to hear your analysis about how we compare to other projects and long vs short. I don't think we're interested in getting into a credit war with anyone, but you're absolutely correct that it's important for us to remain relevant.

Greg Beach
Avatar
Send message
Joined: 5 Jul 10
Posts: 21
Credit: 50,844,220
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwat
Message 24067 - Posted: 21 Mar 2012 | 18:51:35 UTC - in response to Message 24064.

Additionally, we are looking at ways to reduce the size of uploads for users, though it is not clear at this point whether we will be able to do that. Please be patient with the implementation of changes/improvements.

I tried to compress one of the workunits and it compressed by a factor of ~5:1. Maybe the project is already set up to take advantage of BOINC's built-in compression (http://boinc.berkeley.edu/trac/wiki/FileCompression), but I can't tell from the logs on my end.

If you're not doing any compression then it looks like there's a considerable benefit to be had.
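The ~5:1 figure is easy to reproduce locally. A minimal sketch using Python's standard `gzip` module; the sample data here is synthetic, standing in for the kind of repetitive numeric text a workunit output contains:

```python
import gzip

def compression_ratio(data: bytes) -> float:
    """Ratio of original size to gzip-compressed size (higher is better)."""
    return len(data) / len(gzip.compress(data, compresslevel=6))

# Structured, repetitive text (like numeric trajectory dumps) usually
# compresses well; truly random data barely compresses at all.
sample = b"0.123456 0.234567 0.345678\n" * 10000
print(round(compression_ratio(sample), 1))
```

Running this against a real result file (instead of `sample`) shows what the upload savings would actually be on your machine.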

Profile Damaraland
Send message
Joined: 7 Nov 09
Posts: 152
Credit: 16,181,924
RAC: 0
Level
Pro
Scientific publications
watwatwatwatwatwatwatwatwat
Message 24073 - Posted: 22 Mar 2012 | 7:47:34 UTC - in response to Message 24064.
Last modified: 22 Mar 2012 | 7:48:51 UTC

Your comments/complaints are not falling on deaf ears, so keep them coming. Additionally, we are looking at ways to reduce the size of uploads for users, though it is not clear at this point whether we will be able to do that. Please be patient with the implementation of changes/improvements.

Here is my point of view:
1) The easy way: put up 4 queues of work:
long (>12 h), medium (8-12 h), short (<8 h), and beta.
Everybody would be happy, and the system would be prepared for the new Kepler cards.
In this scenario I might choose short & medium, but anyone could decide based on their % dedication to this project and their hardware.

2) The advanced way: make a decent job planner.
This might consume a lot of resources, but I see it as worthwhile.
http://boinc.berkeley.edu/trac/wiki/SchedMatch
I can't believe the BOINC guys didn't work on this before, given the huge differences in hardware. Maybe the system is too adapted to SETI?
____________
HOW TO - Full installation Ubuntu 11.10

wiyosaya
Send message
Joined: 22 Nov 09
Posts: 114
Credit: 589,114,683
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24123 - Posted: 24 Mar 2012 | 5:11:24 UTC - in response to Message 24067.
Last modified: 24 Mar 2012 | 5:24:06 UTC

Additionally, we are looking at ways to reduce the size of uploads for users, though it is not clear at this point whether we will be able to do that. Please be patient with the implementation of changes/improvements.

I tried to compress one of the workunits and it compressed by a factor of ~5:1. Maybe the project is already set up to take advantage of BOINC's built-in compression (http://boinc.berkeley.edu/trac/wiki/FileCompression), but I can't tell from the logs on my end.

If you're not doing any compression then it looks like there's a considerable benefit to be had.

This may also reduce or eliminate the problems I am having uploading finished WUs. If the completed-task upload files are not compressed, that could explain why my WUs only upload after multiple retries. (Please note that I am not the only GPUGRID cruncher experiencing this exact problem - please see this thread.) I think I can reasonably say that I have done everything possible from my end to resolve the problem, but without success.

I have prior experience uploading and downloading uncompressed files, and in my experience they are prone to transmission errors. TCP will retry transmission, and over a dubious connection like the one I seem to have between my ISP and the GPUGRID server, the retries may further exacerbate the problem.

In addition, compression would cut the size of the upload files considerably and lessen the time it takes to transmit them.

As such, I am strongly in favor of compressing the completed WU upload files.
____________

Rantanplan
Send message
Joined: 22 Jul 11
Posts: 166
Credit: 138,629,987
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24277 - Posted: 6 Apr 2012 | 14:12:19 UTC
Last modified: 6 Apr 2012 | 14:17:02 UTC

Hello, I pushed my GTX 460 to the limits: full voltage, 900 MHz core clock. Hopefully it won't burn up, and hopefully the task will fall inside the first 24-hour bonus window. It will be very, very close to 24 hours, as my upload tops out at 45 KB/s. If it makes it, I will get 75,000 points for the FAX4, won't I?

Profile dskagcommunity
Avatar
Send message
Joined: 28 Apr 11
Posts: 456
Credit: 817,865,789
RAC: 0
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24279 - Posted: 6 Apr 2012 | 17:12:39 UTC

Close; 71,400 is the exact value :)
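The 71,400 figure is consistent with the bonus scheme quoted earlier in the thread (+50% for <24h, +25% for <48h). Note the base award below is inferred by working backwards from the quoted numbers, not stated by the project:

```python
BASE_CREDIT = 47_600   # inferred: 71,400 / 1.5; not an official figure
BONUS_24H = 1.50       # +50% for returning within 24 hours
BONUS_48H = 1.25       # +25% for returning within 48 hours

print(int(BASE_CREDIT * BONUS_24H))  # 71400  (inside the 24h window)
print(int(BASE_CREDIT * BONUS_48H))  # 59500  (inside the 48h window)
```

So missing the 24-hour window on this WU would cost roughly 12,000 credits, which is why crunchers with slower cards watch the deadline so closely.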
____________
DSKAG Austria Research Team: http://www.research.dskag.at



Rantanplan
Send message
Joined: 22 Jul 11
Posts: 166
Credit: 138,629,987
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24287 - Posted: 6 Apr 2012 | 20:13:57 UTC - in response to Message 24279.

Too close, I think; the WU crashed. I have lowered the core clock by about 10 MHz and will see.
