Message boards : Graphics cards (GPUs) : GTX 460

GDF
Message 17951 - Posted: 14 Jul 2010 | 11:59:21 UTC

The new GTX 460 could be quite good in terms of performance for GPUGRID. With a little overclock it could possibly be faster than a GTX480.

http://www.anandtech.com/show/3809/nvidias-geforce-gtx-460-the-200-king

GDF

If you have one, let us know here so we can run some benchmarks with the beta tests.

skgiven
Message 17956 - Posted: 14 Jul 2010 | 16:34:02 UTC - in response to Message 17951.

I think you are getting carried away. I expect it will be faster than a GTX285 and a GTX465. It might even come close to a GTX470, especially if you get an 800MHz one, but I doubt it will do more than a GTX480. That said, two GTX460s should do more than one GTX480, and cost about the same.



Also,
http://images.anandtech.com/graphs/gtx460_071110174503/23755.png
http://images.anandtech.com/graphs/gtx460_071110174503/23756.png

Interestingly, in the above test there was no difference between the cheaper and more expensive versions of the card when it came to crunching at Folding. We will have to wait and see if that holds true here.

GDF, perhaps if you had a fixed test WU to crunch, people would be able to benchmark against it, and reviewers might add GPUGrid to their reviews!

GDF
Message 17957 - Posted: 14 Jul 2010 | 17:56:26 UTC - in response to Message 17956.

Maybe we will have to wait for the GTX475 then for faster performance.
The new design of the cores makes much more sense and is perfect for our application.
I am still curious to test it on GPUGRID. How does F@H hand out its benchmark?

gdf

Beyond
Message 17958 - Posted: 14 Jul 2010 | 19:24:23 UTC

I was just about to pull the trigger on one of the GTX 460 cards but at the last minute thought to check other hosts running the Fermis. Turns out the GTX 470 is only 10-35% faster (depending on OC) than my GTX 260 and the only GTX 465 I could find is actually slower. Think I'll wait till we see some real results. The price looks good if the performance lives up to the hype though.

skgiven
Message 17960 - Posted: 14 Jul 2010 | 19:47:28 UTC - in response to Message 17957.
Last modified: 14 Jul 2010 | 19:57:57 UTC

Hopefully the new design will actually work better for GPUGrid than for Folding@Home, but these cards seem to introduce more new technology rather than improve on existing performance. That said, I agree the architecture is much better, and the cards are more sellable! I'm slightly worried about the shaders, but we won't know for sure how they perform until someone actually buys one, attaches it to GPUGrid and successfully runs a work unit. I think they will be an excellent card for those just wanting one decent card in their system; they look like good value for money.

It's probably still best just to compare the GTX460 to the GTX465, GTX470 and GTX480, rather than the GT200s. All the Fermis are likely to benefit relatively evenly from future driver and CUDA improvements, and any such improvements would lift all the Fermi cards against the GT200 cards. So is the GTX460 better value for money than a GTX465, GTX470 or GTX480, and does it do more work per Watt? With respect to gaming the answer is yes and yes, but we have to wait and see about GPUGrid performance.

I think the Folding test just involves folding a known protein using different GPU cards, then comparing the time taken and the power used.
So, for example, if they folded a reference 236aa transcription regulator CL protein (Lambda Repressor) using one app, they could test multiple cards' relative performance. In six months or a year when a new card comes out, as long as they use the same app and other hardware, they don't have to test all the cards again, just the new one!

For the time being, Fermis still only work well on Linux and XP when crunching here - there is a 40% hit running on W7.

ExtraTerrestrial Apes
Message 17963 - Posted: 14 Jul 2010 | 21:16:39 UTC

That graph style is Anandtech's, but I can't find any GP-GPU stuff in the two launch-day reviews. Where is it from? I found it unsatisfying that they benched games only, as there's not much to see there anyway.

@GDF: when I ran F@H (a few years ago) it was basically a command-line app. For benchmarking purposes people mostly compared run times of the same project (= different clients / algorithms) and the same WU (= protein, I think; they'd just conquered nr. 2000 back then), which saw small variance.
However, for reliable benchmarking it may have been possible (and could still be) to copy a WU file into the same directory and launch the client with other (or no) parameters. It would then just process the data and write an output file, but not attempt to upload anything.

Actually.. the more I think about it, the less I trust my memory on this one. Could have mixed it up with some other project.

MrS
____________
Scanning for our furry friends since Jan 2002

Snow Crash
Message 17965 - Posted: 14 Jul 2010 | 22:50:04 UTC

Looks to be a couple of choices on the 460 ... I am open to all opinions on the following pseudo-poll ... I'll get one just because I want to see what it can really do, and GDF sounds much too enthused for me not to :-)

This will be a crunching card. No water cooling / air only.

The price difference between the options is negligible, so it is not an important factor, but if the different options won't make a difference I will go with the cheapest.

Mfg: EVGA is my personal preference, but if anyone has a compelling reason to go with a different Mfg I'll take it under consideration.
MEM: 1024 / 768 - anyone think this will make a difference?
Speed: Stock or any of the factory OC'd versions?
EVGA has the standard cooling, and then they have a pre-order for external exhaust. It will be installed in an open case, so I'm not concerned about the internal case temps, more what's going to keep the GPU itself coolest.

Like I said, all things being equal I will go with the least expensive: no external exhaust, stock clocks (I'll be OCing myself anyway) and the 768 MEM.

GDF ... which card would YOU most like to see?
____________
Thanks - Steve

Beyond
Message 17966 - Posted: 14 Jul 2010 | 23:25:54 UTC - in response to Message 17965.

Looks to be a couple of choices on the 460 ... I am open to all opinions on the following pseudo-poll ... I'll get one just because I want to see what it can really do, and GDF sounds much too enthused for me not to :-)

Contrary to my post above, just ordered the EVGA 768MB Superclocked (763 MHz factory OC). It was $20 more than the stock version but has a lifetime warranty instead of 2 years. Couldn't pass up the price though, under $167 after discounts and Bing cashback. Hope it's faster than my GTX 260 :-)

GDF
Message 17967 - Posted: 14 Jul 2010 | 23:42:15 UTC - in response to Message 17966.

We have to test on one to know the performance. I am just saying that it is better designed and should allow higher overclocks than the original Fermi.

gdf

skgiven
Message 17971 - Posted: 15 Jul 2010 | 0:35:56 UTC - in response to Message 17967.

There is no doubt it is more competitively designed – designed to sell, and they will, partially because they will overclock well; 800MHz for the GPU should be standard, but the problem might be the memory controller. I suspect this is not the case, and that Beyond made a good choice: a nice high factory clock, the less expensive 768MB card, and a lifetime warranty :) I'm guessing at least 30% faster than a GTX260. In the UK we can pick a GTX460 up for just over £150 - my GTX470 cost more than twice that!
Although I expect there will be little or no difference in performance for the cards with more RAM (1GB), we will have to wait and see. I hope these do not suffer from driver issues, as is the case with GT240s that only have 512MB (Vista/W7 only).
Also, when the CUDA 3.1 issue is resolved there could be significant improvements.

MrS, I remember something similar.

[TiDC] Revenarius
Message 17973 - Posted: 15 Jul 2010 | 7:38:54 UTC - in response to Message 17960.

Don't worry, I am waiting for two new 768MB GTX 460s. One arrives tomorrow, and the second next week. I will set them up in SLI but, for several days, I will work with one.
This weekend my first 460 will "say" what it can do. Next week, two in SLI (672 CUDA cores, cheaper than a single 480 with its 480 CUDA cores).

ExtraTerrestrial Apes
Message 17975 - Posted: 15 Jul 2010 | 7:47:49 UTC

@SnowCrash:
I'd also go for a factory-clocked one if the OC'ed ones cost more, and OC it myself. The factory OC'ed ones might be binned for better chips - but who knows. The difference is not going to be huge.

768 MB of RAM should be fine. The main difference is 33% more bandwidth and L2 cache for the 1 GB version (the 33% more ROPs don't matter for crunching, only for pushing many pixels). In the game benchmarks I've seen there was a surprisingly small difference between the two versions. On the other hand, we do know that GPU-Grid requires some memory bandwidth. Without actually testing, it's impossible to say what difference it's going to make, though.

MrS
____________
Scanning for our furry friends since Jan 2002

JG4KEZ(Koichi Soraku)
Message 17987 - Posted: 15 Jul 2010 | 16:54:27 UTC

I am using a GTX460-1GB now, but it does not work well.

http://www.gpugrid.net/result.php?resultid=2669990

<core_client_version>6.10.56</core_client_version>
<![CDATA[
<message>
- exit code -40 (0xffffffd8)
</message>
<stderr_txt>
# Using device 0
# There are 2 devices supporting CUDA
# Device 0: "GeForce GTX 460"
# Clock rate: 1.40 GHz
# Total amount of global memory: 1073283072 bytes
# Number of multiprocessors: 7
# Number of cores: 56
# Device 1: "GeForce GTX 460"
# Clock rate: 1.40 GHz
# Total amount of global memory: 1073414144 bytes
# Number of multiprocessors: 7
# Number of cores: 56
SWAN : Module load result [.fastfill.cu.] [200]
SWAN: FATAL : Module load failed


</stderr_txt>
]]>

http://www.gpugrid.net/result.php?resultid=2669959
http://www.gpugrid.net/result.php?resultid=2672409

I tried resetting the project, but it did not help.

Specifications
M/B: ASUS P5E
CPU: Intel Xeon X3360
RAM: PC2-6400 2GBx2
GPU: Kuroutoshikou GF-GTX460-E1GHD (made by Sparkle Computer) x2
Driver: FW258.96
OS: Windows XP Pro SP3 x86

Beyond
Message 17989 - Posted: 15 Jul 2010 | 19:17:23 UTC

Doesn't seem that any of the Fermi cards are being correctly identified:

GTX 480:

# Number of multiprocessors: 15
# Number of cores: 120


GTX 470:

# Number of multiprocessors: 14
# Number of cores: 112


GTX 460:

# Number of multiprocessors: 7
# Number of cores: 56


GDF
Message 17992 - Posted: 15 Jul 2010 | 21:43:30 UTC - in response to Message 17991.

The core is so different that it is likely CUDA 3.0 does not work on the GTX460.
Tomorrow I will upload the working cuda31 application.

gdf

JG4KEZ(Koichi Soraku)
Message 17998 - Posted: 16 Jul 2010 | 14:08:11 UTC

I tried the ACEMD beta version v6.32 (cuda31) on the GTX460, but it still did not work.

http://www.gpugrid.net/result.php?resultid=2600191
http://www.gpugrid.net/result.php?resultid=2599853

<core_client_version>6.11.1</core_client_version>
<![CDATA[
<message>
- exit code -40 (0xffffffd8)
</message>
<stderr_txt>
# Using device 1
# There are 2 devices supporting CUDA
# Device 0: "GeForce GTX 460"
# Clock rate: 1.40 GHz
# Total amount of global memory: 1073283072 bytes
# Number of multiprocessors: 7
# Number of cores: 56
# Device 1: "GeForce GTX 460"
# Clock rate: 1.40 GHz
# Total amount of global memory: 1073414144 bytes
# Number of multiprocessors: 7
# Number of cores: 56
SWAN: Using synchronization method 0
SWAN : Module load result [.fastfill.cu.] [200]
SWAN: FATAL : Module load failed


</stderr_txt>
]]>


I have updated the BOINC client to 6.11.1.

Is anyone else here using a GTX460? If so, does GPUGRID work well for you?

skgiven
Message 18000 - Posted: 16 Jul 2010 | 14:32:21 UTC - in response to Message 17998.

It might be an idea to close Boinc and then open it again (wait 10 sec before reopening it).
I did this before running a beta on my GTX470, and it finished OK; a different card I know, but it might be worth a try!

PS it might be better to post back here.

[TiDC] Revenarius
Message 18002 - Posted: 16 Jul 2010 | 18:07:08 UTC
Last modified: 16 Jul 2010 | 18:29:06 UTC

My new gtx460 doesn't work in GPUGRID!!!!

Every unit errors out in 1-2 seconds.

The other projects work fine; perhaps the new gtx 460 doesn't work with the new program.

I will not download more work for the gtx460 for several days. I hope this is fixed in a short time.

bigtuna
Message 18004 - Posted: 16 Jul 2010 | 20:40:25 UTC - in response to Message 18002.

The 460 came today.

I'll give it a go late tonight (after midnight) on both XP and Linux.

Beyond
Message 18005 - Posted: 16 Jul 2010 | 20:57:13 UTC - in response to Message 17966.

Contrary to my post above, just ordered the EVGA 768MB Superclocked (763 MHz factory OC). It was $20 more than the stock version but has a lifetime warranty instead of 2 years. Couldn't pass up the price though, under $167 after discounts and Bing cashback. Hope it's faster than my GTX 260 :-)

Received the GTX 460 today. Good & bad news so far, all results at stock factory OC.

The good news: the above GTX 460 runs Collatz very well for NVidia. WU time was 14:30, versus 30 minutes for the GTX 260 and 58 minutes for the GT 240, both of which have their shader clocks well over stock. Temps are 53C at full load. As a comparison with ATI, my HD 4770 cards average 15:45 per WU.

More good news: it runs DNETC well too; 23:52 versus 101 minutes for both my 9600GSO and GT 240, which have their shaders pushed as high as they will run reliably.

The bad news: so far it will run neither the GPUGRID Fermi WUs nor the latest beta WUs. Here's the error message for the beta, always the same:

<core_client_version>6.10.57</core_client_version>
<![CDATA[
<message>
- exit code -40 (0xffffffd8)
</message>
<stderr_txt>
# Using device 0
# There is 1 device supporting CUDA
# Device 0: "GeForce GTX 460"
# Clock rate: 0.81 GHz
# Total amount of global memory: 774307840 bytes
# Number of multiprocessors: 7
# Number of cores: 56
SWAN : Module load result [.fastfill.cu.] [200]
SWAN: FATAL : Module load failed

</stderr_txt>

skgiven
Message 18008 - Posted: 16 Jul 2010 | 21:56:38 UTC - in response to Message 18005.
Last modified: 16 Jul 2010 | 21:58:35 UTC

Although it does not work here yet, it still sounds like a very good card. The GF100's did not work straight out of the box either, so don't fret it.

Four times faster than a GT240, and using 160W! I have 4 GT240s in one system and they use 260W. The GT240 was as efficient as the bigger G200 cards, so these still look like a good buy.

What does Boinc report its performance as?

Beyond
Message 18010 - Posted: 16 Jul 2010 | 22:22:32 UTC - in response to Message 18008.

Although it does not work here yet, it still sounds like a very good card. The GF100's did not work straight out of the box either, so don't fret it.

Four times faster than a GT240, and using 160W! I have 4 GT240s in one system and they use 260W. The GT240 was as efficient as the bigger G200 cards, so these still look like a good buy.

What does Boinc report its performance as?


> NVIDIA GPU 0: GeForce GTX 460 (driver version 25856, CUDA version 3010, compute capability 2.1, 738MB, 363 GFLOPS peak)

Notice also it's listed as compute capability 2.1. I think the other Fermis were 2.0. What's the difference?

I just stuck the Kill-A-Watt on it and the total system draw is 227 watts running Collatz and 4 CPU projects at 100% with a Phenom 9600BE CPU.

M J Harvey
Message 18012 - Posted: 16 Jul 2010 | 22:54:51 UTC - in response to Message 18010.


Notice also it's listed as compute capability 2.1. I think the other Fermis were 2.0. What's the difference?


Ok, that explains why the current app isn't working on the 460s. We can get that fixed pretty quickly.

MJH

skgiven
Message 18013 - Posted: 16 Jul 2010 | 22:54:53 UTC - in response to Message 18010.
Last modified: 16 Jul 2010 | 23:15:17 UTC

Yeah, the GF100 Fermis are only CC2.0. I'm guessing the GF104 cards will better exploit CUDA 3.1 and 3.2.

That 363 GFLOPS figure is a joke though (a correction factor is called for there). It will do about 4 times what a GT240 does; those only have 96 shaders, whereas a GTX460 has 336 shaders and improved architecture!

- It should be 907, http://en.wikipedia.org/wiki/GeForce_400_Series

- 16/07/2010 23:50:01 NVIDIA GPU 0: GeForce GT 240 (driver version 19621, CUDA version 3000, compute capability 1.2, 512MB, 307 GFLOPS peak) - OC'd a bit but nothing special.
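To spell out the 363-vs-907 arithmetic (my own working from the figures above, so treat it as a sketch): BOINC multiplies its hard-coded 32 cores per multiprocessor by the reported clock and by 2 FLOPS per clock, while the GF104 actually has 48 cores per multiprocessor:

Reported: 7 MPs x 32 cores x 0.81 GHz x 2 FLOPS/clock = 363 GFLOPS
Correct: 7 MPs x 48 cores x 1.35 GHz (stock shader clock) x 2 FLOPS/clock = 907 GFLOPS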

A system draw of 227 Watts running Collatz and 4 CPU projects at 100% is good going!
My Phenom II 940 with the 4 GT240s pulls 355W at the wall. You are doing about the same work for 36% less power.
Even my i7-920 (with Turbo off) uses 300 to 320W with my GTX470 running GPUGrid.

This is one of the failed GTX460 WU's:
Name 347-KASHIF_HIVPR_auto_spawn_2_90_ba1-26-100-RND1903_1
Workunit 1704860
Created 16 Jul 2010 20:03:26 UTC
Sent 16 Jul 2010 20:06:38 UTC
Received 16 Jul 2010 20:13:36 UTC
Server state Over
Outcome Client error
Client state Compute error
Exit status -40 (0xffffffffffffffd8)
Computer ID 67635
Report deadline 21 Jul 2010 20:06:38 UTC
Run time 1.450003
CPU time 0.920406
stderr out

<core_client_version>6.10.57</core_client_version>
<![CDATA[
<message>
- exit code -40 (0xffffffd8)
</message>
<stderr_txt>
# Using device 0
# There is 1 device supporting CUDA
# Device 0: "GeForce GTX 460"
# Clock rate: 0.81 GHz
# Total amount of global memory: 774307840 bytes
# Number of multiprocessors: 7
# Number of cores: 56
SWAN : Module load result [.fastfill.cu.] [200]
SWAN: FATAL : Module load failed

The core-to-multiprocessor ratio is still out; it's reported as 8 to 1 and should be 48 to 1:
7 multiprocessors, each with a group of 48 shaders (3x16).

Beyond
Message 18014 - Posted: 16 Jul 2010 | 23:29:08 UTC - in response to Message 18012.


Notice also it's listed as compute capability 2.1. I think the other Fermis were 2.0. What's the difference?


Ok, that explains why the current app isn't working on the 460s. We can get that fixed pretty quickly.

MJH

I actually stumbled across something useful? What's the difference between 2.0 & 2.1?

M J Harvey
Message 18015 - Posted: 17 Jul 2010 | 1:03:53 UTC - in response to Message 18014.

I actually stumbled across something useful?


Yep, ta.

What's the difference between 2.0 & 2.1?


With CUDA 3.1 it looks like there's a gnat's crotchet of difference between code targeted for 2.0 and 2.1, but I expect that will change in future compiler releases. Although the ISA likely hasn't changed between the GF100 and GF104, with the latter being superscalar, instruction ordering is going to be much more important than on GF100 and will mean more optimisation work in the compilation.

The only reason the current app isn't working is because it doesn't know that the 2.0 Fermi kernels can be used on 2.1 devices.

MJH

PS Intriguingly, the compiler also accepts 2.2, 2.3 and 3.0 as valid compute capabilities. Make of that what you will.
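For illustration, a minimal sketch of that fix using the standard CUDA driver API (the file names and function are hypothetical; this is not ACEMD's actual source). The [200] in the logs above is CUDA_ERROR_INVALID_IMAGE from the driver API, i.e. no kernel image matched the device:

#include <cstdio>
#include <cuda.h>

/* Map a CC 2.1 device onto the CC 2.0 (Fermi) kernel build instead of
   matching 2.0 exactly; otherwise cuModuleLoad() fails with error 200
   (CUDA_ERROR_INVALID_IMAGE), the "[.fastfill.cu.] [200]" seen above. */
static CUresult load_fastfill(CUdevice dev, CUmodule *mod)
{
    int major = 0, minor = 0;
    cuDeviceComputeCapability(&major, &minor, dev);
    const char *image = (major >= 2) ? "fastfill_sm20.cubin"   /* GF100 and GF104 */
                                     : "fastfill_sm11.cubin";  /* pre-Fermi cards */
    CUresult rc = cuModuleLoad(mod, image);
    if (rc != CUDA_SUCCESS)
        fprintf(stderr, "SWAN : Module load result [fastfill] [%d]\n", (int)rc);
    return rc;
}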

MarkJ
Message 18017 - Posted: 17 Jul 2010 | 4:28:12 UTC - in response to Message 17991.

Anyone care to drop the BOINC alpha mailing list a note that the number of multiprocessors is correct, but they have to multiply it by 32 for GF100 and by 48 for GF104 to get the correct number of shaders?

MrS


Have just done so. Also ordered one card (a 768MB version, factory OC'ed).
____________
BOINC blog

GDF
Message 18018 - Posted: 17 Jul 2010 | 7:30:24 UTC - in response to Message 18017.

We will try to give a new application for the 460 on Monday.

gdf

bigtuna
Message 18019 - Posted: 17 Jul 2010 | 8:45:03 UTC

Awesome! Can't wait to try it out.

Not having any luck with the 460 on Folding. It is running 3DMark 06 right now, so the drivers must be working, correct?

ExtraTerrestrial Apes
Message 18021 - Posted: 17 Jul 2010 | 9:29:13 UTC

@MarkJ: sorry, this incorrect reporting is an issue of GPU-Grid, not BOINC! SK reported this to them yesterday or so.. don't be surprised if they're a little p***** off. Just tell them it was my fault ;)

@GDF: will the correct number of shaders be reported in the new monday app?

@GTX460 & Collatz: don't forget that CC is a comparably light workload, i.e. the cards draw considerably less power here than under GPU-Grid (or Milkyway for ATIs).

Anyway, 14:30 is a nice result! The 1 GB version may even improve on this a bit, as CC loves memory bandwidth. For comparison: my HD4870 takes about 13 mins at 800 / 950 MHz core / mem. That's with 1.28 TFlops and 122 GB/s bandwidth. The GTX460 768 MB weighs in at 0.9 TFlops and 86 GB/s. Apparently we're seeing slightly better utilization of the nVidia shaders here.

MrS
____________
Scanning for our furry friends since Jan 2002

Beyond
Message 18025 - Posted: 17 Jul 2010 | 11:41:41 UTC - in response to Message 18021.
Last modified: 17 Jul 2010 | 11:51:22 UTC

Not having any luck with the 460 on Folding. It is running 3DMark 06 right now, so the drivers must be working, correct?

The GTX 460 works very nicely in both Collatz & DNETC.

@GTX460 & Collatz: don't forget that CC is a comparably light workload, i.e. the cards draw considerably less power here than under GPU-Grid (or Milkyway for ATIs).

Just checked DNETC with the Kill-A-Watt. The GTX 460 is now running at 800 core & 1600 shaders, stock voltage (a bump from the 763/1526 factory OC); 239 watts total system draw at 97% GPU. The core/shader speeds seem to be locked together, at least in MSI Afterburner v1.61, and unlike earlier cards they cannot be pushed separately. Not sure if this is a Fermi requirement or a problem in Afterburner. Not a big deal though.

Anyway, 14:30 is a nice result! The 1 GB version may even improve on this a bit, as CC loves memory bandwidth. For comparison: my HD4870 takes about 13 mins at 800 / 950 MHz core / mem. That's with 1.28 TFlops and 122 GB/s bandwidth. The GTX460 768 MB weighs in at 0.9 TFlops and 86 GB/s. Apparently we're seeing slightly better utilization of the nVidia shaders here.

With half a day's results to average, with the card set to 800/1600, Collatz is now averaging 13:40/WU and DNETC 21:50/WU. Temps still running at 53C.

skgiven
Message 18026 - Posted: 17 Jul 2010 | 13:02:02 UTC - in response to Message 18025.

So Fermi cores and shaders are linked for GF100 and GF104 cards alike; we have to overclock both at the same time.

53C - I wish!

I'm running a GPUGrid WU that is only using 60% GPU (going by EVGA Precision) and my GTX470 is at 67degC. It would default to about 91degC if I did not up the fan speed.

By the way, anyone with a GF100 Fermi should increase the fan speed while crunching. There are 2 reasons:
1. The card stays cooler and should last longer.
2. When it is cooler it uses less energy (10 to 20W) - GF100s are leaky!
The power used by running the fan faster is more than offset by the savings from reduced GPU leakage. It will still leak, but not as much.
Think of it like an offshore oil well cap. Don't use one and it leaks everywhere. Only use one a bit and it still leaks, badly. Use a good one the correct way, and you have stemmed the flow as much as you can.

Beyond
Message 18035 - Posted: 18 Jul 2010 | 12:42:56 UTC - in response to Message 18025.
Last modified: 18 Jul 2010 | 12:53:06 UTC

Anyway, 14:30 is a nice result! The 1 GB version may even improve on this a bit, as CC loves memory bandwidth. For comparison: my HD4870 takes about 13 mins at 800 / 950 MHz core / mem. That's with 1.28 TFlops and 122 GB/s bandwidth. The GTX460 768 MB weighs in at 0.9 TFlops and 86 GB/s. Apparently we're seeing slightly better utilization of the nVidia shaders here.

With half a day's results to average, with the card set to 800/1600, Collatz is now averaging 13:40/WU and DNETC 21:50/WU. Temps still running at 53C.

Since I use the optimized Collatz app on my ATI cards, I decided to give it a try on the GTX 460. After finding the optimum settings it has now averaged just under 10:58 for the last 20 WUs (with a range of 10:38 - 11:20). GPU use has increased to 99% and temps to 55C - 57C depending on ambient (we're in the middle of a heat wave for us in Minnesota); the fan bumps up a bit to 44% at 57C. Core/shaders still at 800/1600. Memory is stock (for the Superclocked version) at 1900 according to Afterburner. Total system draw has increased to 247 watts with 4 CPU projects running at 100%. The PSU is an Antec EarthWatts 380 watt, which is considerably less than recommended.

skgiven
Message 18038 - Posted: 18 Jul 2010 | 18:01:10 UTC - in response to Message 18035.
Last modified: 18 Jul 2010 | 18:03:44 UTC

Keep the memory at stock. If you overclock the memory on the GTX460 it will end up slowing the card down due to increased errors (the controller is almost at its max as is, which is why they used RAM rated at 4000MHz max rather than 5000MHz)!

Your Antec EarthWatts 380 is a good PSU with enough amps. I have one supporting a Q6600 and an OC'd GTX260-216, which uses more power than your GTX460.

I ran a Folding@home task again today while running GPUGrid (on my GTX470), and the temps did not go up from 72degC (GPU fan at 83%, mind you). I think most apps are reporting GPUGrid's actual GPU usage incorrectly (or at least a skewed version of it). EVGA Precision reported GPUGrid as only using 60% GPU, but the temp was just the same, 72degC. When I suspend tasks it drops below 40degC. Running Folding it also (as with your 460 running Collatz) said 99% GPU usage, but the temp was just the same, 72degC.

ExtraTerrestrial Apes
Message 18041 - Posted: 18 Jul 2010 | 20:14:34 UTC - in response to Message 18026.

By the way, anyone with a GF100 Fermi should increase the fan speed while crunching. There are 2 reasons:
1. The card stays cooler and should last longer.
2. When it is cooler it uses less energy (10 to 20W) - GF100s are leaky!
The power used by running the fan faster is more than offset by the savings from reduced GPU leakage. It will still leak, but not as much.

Completely agreed!

Think of it like an offshore oil well cap. Don't use one and it leaks everywhere. Only use one a bit and it still leaks, badly. Use a good one the correct way, and you have stemmed the flow as much as you can.


I'd rather put it this way: temperature is equivalent to the movement of particles, including the atoms (which should have fixed positions in your chip) and the free electrons. If it's hotter, the latter are more often kicked around randomly, and thus they sometimes go where they shouldn't - and that's your leakage.

MrS
____________
Scanning for our furry friends since Jan 2002

Richard Haselgrove
Message 18042 - Posted: 19 Jul 2010 | 8:25:27 UTC - in response to Message 18021.

@MarkJ: sorry, this incorrect reporting is an issue of GPU-Grid, not BOINC! SK reported this to them yesterday or so.. don't be surprised if they're a little p***** off. Just tell them it was my fault ;)

Report-back from boinc_alpha mailing list:

Tell it to NVIDIA; they don't provide an API for getting the number of cores.

I've asked NVIDIA to confirm that all CC 2.1 chips have 48 cores/MP;
if this is the case I'll add that logic to the client.

Note: this matters only for providing an accurate peak FLOPS message
on startup. Everything related to scheduling and credit is determined by
the actual performance, not the peak FLOPS.

-- David

On 17-Jul-2010 2:11 AM, Richard Haselgrove wrote:
> The 32x part of that is already hard-coded into the BOINC client, as a fix
> for the original GF100-based Fermi cards (GTX 470 and GTX 480) - the
> original hard-coded value of 8 for compute capability 1.x cards produced
> under-reporting as well. (Changesets 21034, 21036)
>
> Cards based on the GF104 chip - the GTX 460 released five days ago, and the
> GTX 475 due later in Q3, have a shader count of 48 per multiprocessor, so
> need yet another hard-coded CC 2.1 test.
>
> Surely this is a prehistoric way of doing things? We shouldn't have to
> change the infrastructure framework every time a new chip is released.
> Shader count detection belongs in the NVidia driver and API, not at the
> application level.
>
> Is there any way BOINC itself, and the projects directly affected, can join
> together and make representations to NVidia to get their API extended?
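For reference, a sketch of the kind of hard-coded table being discussed (an illustration, not BOINC's actual source, though the changesets mentioned above do something along these lines), since the CUDA API only reports the multiprocessor count:

#include <cstdio>
#include <cuda_runtime.h>

/* Cores per multiprocessor by compute capability - CUDA does not expose
   this, so it must be hard-coded and extended for each new chip family. */
static int cores_per_mp(int major, int minor)
{
    if (major == 1) return 8;                 /* CC 1.x: G80/G200 */
    if (major == 2 && minor == 0) return 32;  /* CC 2.0: GF100 */
    if (major == 2 && minor == 1) return 48;  /* CC 2.1: GF104 */
    return 48;                                /* unknown: best guess */
}

int main(void)
{
    cudaDeviceProp p;
    cudaGetDeviceProperties(&p, 0);
    int cores = p.multiProcessorCount * cores_per_mp(p.major, p.minor);
    /* Peak single-precision FLOPS = cores x shader clock x 2 (FMA);
       clockRate is reported in kHz. */
    double gflops = cores * (p.clockRate / 1e6) * 2.0;
    printf("%s: %d MPs, %d cores, %.0f GFLOPS peak\n",
           p.name, p.multiProcessorCount, cores, gflops);
    return 0;
}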

trn-xs
Message 18048 - Posted: 19 Jul 2010 | 12:36:39 UTC
Last modified: 19 Jul 2010 | 12:37:20 UTC

SLI stock 460's; currently running DNETC while waiting for GPUGrid. CUDA 3.1 WUs complete in 13 min (Win7 x64).

GDF
Message 18053 - Posted: 19 Jul 2010 | 14:07:41 UTC - in response to Message 18048.

It now runs on the gtx460 with acemdbeta 6.36.

gdf

[TiDC] Revenarius
Message 18065 - Posted: 19 Jul 2010 | 19:41:36 UTC

First unit OK on a gtx460

http://www.gpugrid.net/result.php?resultid=2692716

Completed in less than 946 s.

The new version is running OK

bigtuna
Message 18066 - Posted: 19 Jul 2010 | 20:17:00 UTC - in response to Message 18053.

It now runs on the gtx460 with acemdbeta 6.36.

gdf

How does one run "acemdbeta 6.36"?

I downloaded some work units and they all failed.

They said something about CUDA 30? Does that mean I am running CUDA 3.0? How can you tell?

skgiven
Message 18068 - Posted: 19 Jul 2010 | 21:33:05 UTC - in response to Message 18066.

You are now picking them up. Just let them run. You have driver 25896, which is good. For some reason you aborted one 6.36 Beta? Anyway, I hope the next one works for you.
PS. It's CUDA 3.1 (3010), not to be confused with CC2.1.
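(Those raw numbers decode simply, by the way: the driver version is printed without the dot, so 25896 is 258.96, and the CUDA version is major*1000 + minor*10, so 3010 is 3.1.)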

Beyond
Message 18070 - Posted: 19 Jul 2010 | 21:39:50 UTC - in response to Message 18048.

SLI stock 460's; currently running DNETC while waiting for GPUGrid. CUDA 3.1 WUs complete in 13 min (Win7 x64).

That's running both GPUs in tandem on one WU? They're now running at around 21:50 on my single GTX 460.

trn-xs
Message 18074 - Posted: 20 Jul 2010 | 0:26:59 UTC - in response to Message 18070.

That's with both GPUs working on one WU, so your output from one card is better than my SLI setup. I'll test it on the grid and see if I should go back to using 2 cards for 2 WUs.

[TiDC] Revenarius
Message 18075 - Posted: 20 Jul 2010 | 5:01:20 UTC

A question: are the ACEMD beta v6.36 units a bit too small? My gtx460 does one in 15 min and I can't download more than 1 or 2. I need more units, or bigger units.

skgiven
Message 18084 - Posted: 20 Jul 2010 | 13:25:09 UTC - in response to Message 18075.

The full tasks are now available for your GTX460. They are called ACEMD2: GPU molecular dynamics v6.11 (cuda31).
The Betas were called ACEMD beta version v6.36 (cuda31).

The full tasks are likely to take several hours, perhaps about 4h (just a rough guess) on a GTX460.

trn-xs
Message 18087 - Posted: 20 Jul 2010 | 15:04:30 UTC - in response to Message 18084.

My 2 460's were crunching the new 3.1 app WUs when I last checked them. They were saying ~6 hours for each WU on Win 7. I haven't seen them complete by now, so I'm guessing my computer crashed :(

bigtuna
Message 18095 - Posted: 20 Jul 2010 | 18:38:18 UTC

Is there any way to keep from downloading 3.0 tasks (which won't run) and only download 3.1 tasks?

skgiven
Message 18097 - Posted: 20 Jul 2010 | 20:27:28 UTC - in response to Message 18095.
Last modified: 20 Jul 2010 | 20:49:33 UTC

Is there any way to keep from downloading 3.0 tasks (which won't run) and only download 3.1 tasks?

No. There used to be, but the techs had to change the setup when Fermi arrived.

trn-xs, the best thing you could do is use XP or Linux. If you can't do that, then make sure your system has at least one free CPU core and use the swan_sync=0 variable.
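(In case anyone is unsure how to set it - a general sketch, not official project instructions: on Windows XP, add a system environment variable named SWAN_SYNC with value 0 under My Computer > Properties > Advanced > Environment Variables, then restart BOINC; on Linux, export SWAN_SYNC=0 in the environment that launches the client.)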

trn-xs
Message 18100 - Posted: 21 Jul 2010 | 1:42:47 UTC - in response to Message 18097.

trn-xs, the best thing you could do is use XP or Linux. If you can't do that, then make sure your system has at least one free CPU core and use the swan_sync=0 variable.


Thanks, I plan to move these 460's to a dedicated crunching computer with Win XP x64 when I return home. I'm away from my crunchers now and all I have is a Win7 computer :( I've enabled swan_sync=0 and left 2 threads open.

Unfortunately, once I added my 460's my system became unstable. I'm not quite sure why; I've reinstalled the Nvidia reference drivers and upped my fan speed to 70% with no overclock. It's not just GPUgrid - I had crashes with DNETC also.

GDF
Message 18102 - Posted: 21 Jul 2010 | 7:10:13 UTC - in response to Message 18095.

Is there any way to keep from downloading 3.0 tasks (which won't run) and only download 3.1 tasks?


If you have a Fermi and a 3.1 driver you should be downloading only 3.1 tasks. Isn't that the case?

gdf

skgiven
Message 18106 - Posted: 21 Jul 2010 | 11:28:18 UTC - in response to Message 18102.

trn-xs now has driver 25896 on Win7 x64 and is using a 6-core i7.
Yesterday morning he was downloading 6.05 work units. They all failed (obviously). Since the evening he has only downloaded 6.11 and Beta tasks. He now has 3 tasks in progress, so I expect he removed one of his GTX460s, which would remove the possibility of power draw issues (not enough amps). All the Betas ran well, but the only 6.11 that was returned failed,

2696618 1716552 20 Jul 2010 9:03:03 UTC 21 Jul 2010 3:25:49 UTC Error while computing 9,380.86 8,030.28 4,428.01 --- ACEMD2: GPU molecular dynamics v6.11 (cuda31)

trn-xs, there seem to be lots of task restarts, so perhaps it is worth checking your Boinc settings - perhaps you selected not to use the card while the user is active? Such start/stop/start/stop running slows the tasks down, a lot, and increases the chance of failures.

trn-xs
Message 18112 - Posted: 21 Jul 2010 | 14:43:56 UTC - in response to Message 18106.
Last modified: 21 Jul 2010 | 14:53:07 UTC

trn-xs now has driver 25896 on Win7 x64 and is using a 6-core i7...


Excellent detective work SK! I was having lots of stability issues; I've pulled my 2nd GTX460 and so far so good. All the restarting was due to the crashing, and the cuda 3.1 non-beta WU that failed was due to crashing also.

Also, as soon as the Cuda 3.1 app went into production I never received another 3.0 WU.

I think you may be right about the amps on the PSU. I have a Corsair HX850 running a mildly overclocked 980X, and was trying to run 2 gtx460's, 4 HDs and 1 SSD. Wattage-wise that should draw ~550W under load. CPU usage didn't seem to affect the crashing; under full load or idle I would still lock up. I'm also running on 220V (if that matters for internal computer amperage?)

*Edit: the HX850 has 70A on the 12V rail; that ought to be about double what is needed to run 2 gtx460's.

skgiven
Message 18114 - Posted: 21 Jul 2010 | 16:22:17 UTC - in response to Message 18112.
Last modified: 21 Jul 2010 | 16:24:30 UTC

It's possible that one of the cards is error-prone, or is just running too hot, but it is more likely that the buildup of heat in the case is causing other components to overheat. Did you check the card, case and CPU temperatures?
When one of these cards is in a system it runs nice and cool, but when you pop two in they reportedly start to get very warm (over 90degC), unless you manually turn the fan speeds up. The card nearest the Northbridge tends to be hotter.
If your system was stable before you installed the cards and is stable now with one card, then the instability was probably down to using two cards; increased stress on system components causing crashes.

Still, it was a good idea to separate them. If you run one card for a day or two at stock, and then the other, there are two likely outcomes:
- One card causes errors, in which case return it.
- Both run fine, in which case your hardware configuration was not up to the task.

If both work separately, it's best to start by setting default clocks and improving the cooling; keep an eye on the temperatures and turn up the fan speed on the cards. You might well need another extractor fan, as these cards dissipate heat back into the case.

If that does not stabilize your system, look at other parts; check the PSU and RAM for read/write issues.

Even though 220V is at the low end, that is a high-end PSU designed to run GPUs, so 70A should be enough; at stock the cards will only draw about 27A between them running flat out (324W).
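(The arithmetic, roughly: two GTX460s at about 160W each running flat out is ~324W, and taking that all from the 12V rail gives 324W / 12V = 27A - well under the HX850's 70A.)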

bigtuna
Message 18119 - Posted: 21 Jul 2010 | 20:54:06 UTC - in response to Message 18102.
Last modified: 21 Jul 2010 | 20:55:14 UTC

Is there any way to keep from downloading 3.0 tasks (which won't run) and only download 3.1 tasks?


If you have a Fermi and a 3.1 driver you should be downloading only 3.1 tasks. Isn't that the case?

gdf


The tasks seem to be working correctly now.

When the small (187-point) beta work units first came out, the GTX-460 computer #75987 was downloading both the CUDA 3.1 beta tasks and CUDA 3.0 work units as well. The CUDA 3.0 tasks would fail straight away.

Take a look:

http://www.gpugrid.net/results.php?hostid=75987

Anyhow the tasks seem to be working fine now...

bigtuna
Message 18120 - Posted: 21 Jul 2010 | 21:00:15 UTC - in response to Message 17963.

That graph style is Anandtech's, but I can't find any GP-GPU stuff in the two launch-day reviews. Where is it from? I found it unsatisfying that they benched games only, as there's not much to see there anyway.

MrS


http://www.anandtech.com/show/3809/nvidias-geforce-gtx-460-the-200-king/16

Beyond
Message 18121 - Posted: 21 Jul 2010 | 21:36:59 UTC

Switched cards around, so now the GTX 460 is running in XP64. GPU utilization went up from 49% to 95%. Times look to be much faster than before, but still slower than my GTX 260. One would think that a more advanced 336-shader card should be faster than an old 216-shader card. The GTX 460 is also looking to be less than twice as fast as the 96-shader GT 240. I think there's still much work to do on the new app.

As a comparison with another client, Collatz is almost 3 times faster on the GTX 460 than on the GTX 260 and almost 5 times faster than on the GT 240. Of interest in terms of power draw, in the same machine the total draw is 175 watts with the GT 240 and 239 watts with the GTX 460, both running Collatz at 99% GPU.

GDF
Message 18122 - Posted: 21 Jul 2010 | 22:11:06 UTC - in response to Message 18121.
Last modified: 21 Jul 2010 | 22:22:14 UTC

Switched cards around, so now the GTX 460 is running in XP64. GPU utilization went up from 49% to 95%. Times look to be much faster than before, but still slower than my GTX 260. One would think that a more advanced 336-shader card should be faster than an old 216-shader card. The GTX 460 is also looking to be less than twice as fast as the 96-shader GT 240. I think there's still much work to do on the new app.

As a comparison with another client, Collatz is almost 3 times faster on the GTX 460 than on the GTX 260 and almost 5 times faster than on the GT 240. Of interest in terms of power draw, in the same machine the total draw is 175 watts with the GT 240 and 239 watts with the GTX 460, both running Collatz at 99% GPU.



This is interesting information. We suffer from the fact that the number of multiprocessors went down from G200 to GF100/104. It is probably the same problem that we have with the ATI cards: very fat multiprocessors.

gdf

liveonc
Message 18124 - Posted: 21 Jul 2010 | 22:39:53 UTC - in response to Message 18121.

Switched cards around, so now the GTX 460 is running in XP64. GPU utilization went up from 49% to 95%. Times look to be much faster than before, but still slower than my GTX 260. One would think that a more advanced 336-shader card should be faster than an old 216-shader card. The GTX 460 is also looking to be less than twice as fast as the 96-shader GT 240. I think there's still much work to do on the new app.

As a comparison with another client, Collatz is almost 3 times faster on the GTX 460 than on the GTX 260 and almost 5 times faster than on the GT 240. Of interest in terms of power draw, in the same machine the total draw is 175 watts with the GT 240 and 239 watts with the GTX 460, both running Collatz at 99% GPU.



What a leap in GPU utilization! Do you think the GTX470 might get the same kick in performance as the GTX460 when running XP 64-bit? Skgiven said he had an XP 64-bit somewhere & he has a GTX470, but he didn't think it was worth a try if he had to get more RAM & possibly a new license. How much RAM do you have & what CPU was used?


bigtuna
Message 18127 - Posted: 22 Jul 2010 | 0:26:15 UTC - in response to Message 18124.

What a leap in GPU utilization! Do you think the GTX470 might get the same kick in performance as the GTX460 when running XP 64-bit? Skgiven said he had an XP 64-bit somewhere & he has a GTX470, but he didn't think it was worth a try if he had to get more RAM & possibly a new license. How much RAM do you have & what CPU was used?

I'm running XP 32-bit and it seems to be working at about the same speed as XP 64-bit: 15,913.17 seconds for a GTX-460 to complete a 4,535.61 / 6,803.41 point work unit on an old AMD 4000+ system.

As I recall, XP 64-bit was a pain in the neck (at least it was when it first came out), YMMV.

skgiven
Message 18129 - Posted: 22 Jul 2010 | 1:00:12 UTC - in response to Message 18124.
Last modified: 22 Jul 2010 | 1:05:51 UTC

Bigtuna, your GTX 460 with 768MB, using the 25896 driver and XP x86, is doing quite well for some tasks:
This TONI_CAPBIND, for example, took 15906 sec (6,803.41). Faster than this task, 17150 sec.
The more recent task is only 5% slower than a similar task on my GTX260-sp216, 15095 sec.

My GTX260-sp216 is slightly factory-overclocked and I have my shaders at 1525MHz, about the same as yours. My card is also on XP, and is supported by a Q6600 CPU and 4GB RAM. I have 1 CPU core free and I am using swan_sync, so it is reasonably well optimised for GPU crunching.

Are you also using the swan_sync environment variable, and do you have a core free?

I noticed earlier that there was a 10% difference between the 1GB version and the 768MB version you have. So what I am saying is that on XP a reference 1GB version might be slightly faster than a reference GTX 260 crunching TONI_CAPBIND tasks (about 5%). I don't know about the other tasks yet. Also, your overclock from 1350MHz is not quite as much of a leap as mine, from 1242MHz.

It would be nice to see a 1GB GTX460 on XP at 1600MHz, to see what they can really do.

liveonc
Message 18130 - Posted: 22 Jul 2010 | 1:29:42 UTC - in response to Message 18127.

Hi Bigtuna,

I've checked your PC & compared it to the one Beyond uses. He has 2GB vs your 1GB RAM, but you've had lots of failed WUs - too few successes to make estimates (my personal opinion) - while Beyond has had lots of successful WUs.

But that's beside the point. What I wanted to know was whether the GTX470, which gpugrid has had more time to play with, also gets a nice boost running XP 64-bit compared to running XP 32-bit.

The only way I'd guess to make a comparison is if someone who ran XP 32-bit with a GTX470 installed XP 64-bit without modifying his PC & shared that knowledge.

bigtuna
Message 18131 - Posted: 22 Jul 2010 | 1:53:44 UTC - in response to Message 18130.

Hi Bigtuna,

I've checked your PC & compared it to the one Beyond uses. He has 2GB vs your 1GB RAM, but you've had lots of failed WUs - too few successes to make estimates (my personal opinion) - while Beyond has had lots of successful WUs.

But that's beside the point. What I wanted to know was whether the GTX470, which gpugrid has had more time to play with, also gets a nice boost running XP 64-bit compared to running XP 32-bit.

The only way I'd guess to make a comparison is if someone who ran XP 32-bit with a GTX470 installed XP 64-bit without modifying his PC & shared that knowledge.

Failed work units?

So far the only failed work units have been the incompatible CUDA 3.0 work units that got sent during the beta testing. Full-sized CUDA 3.1 tasks have been rock solid AFAIK. Of course they have only been running a day or two, so that isn't much of a test.

Are you also using the swan_sync environment variable, and do you have a core free?

That computer is running swan_sync=0 currently. I turned it "on" after the first couple of full-sized tasks. You can tell by the CPU time: with swan_sync "off" CPU time is minimal, and with swan_sync=0 CPU time is about the same as the GPU time. And yes, there is a free core available on that box.

Did we ever decide exactly what swan_sync was?

The card came with a factory OC to 763/1526.

So far I'm impressed and disappointed at the same time. The 460 runs cool and quiet, which is good, but the performance is only about 2.5 x GT-240 while the price is 4 x GT-240.

trn-xs
Message 18132 - Posted: 22 Jul 2010 | 3:14:11 UTC - in response to Message 18131.

The good news is I think I have figured out my GTX460 problems; the bad news is I have one defective card that is prone to crashing. I'm running them one at a time to troubleshoot the issue.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18135 - Posted: 22 Jul 2010 | 10:45:20 UTC - in response to Message 18132.
Last modified: 22 Jul 2010 | 10:48:14 UTC

Did we ever decide exactly what swan_sync was?

It is an environment (system) variable used to synchronise the GPUGrid application with the CPU. I originally thought it linked the app specifically to a core, but I now think it forces the operating system to continuously poll the app, allowing the app to have immediate CPU resources when required (removing a bottleneck). I expect the value zero causes continuous polling, whereas if a value of 5 were used, for example, it would only poll every 5 seconds.

So far I'm impressed and disappointed at the same time. The 460 runs cool and quiet which is good but the performance is only about 2.5 x GT-240 but the price is 4 x GT-240.

This is the first app that actually works for the new GTX460 cards. There has been no time to look at the GF104 architecture in greater detail and then refine the apps and tasks to suit the card, so it may still have significant unrealised potential. The latest application v6.11 also improved performance for the first Fermis (GF100), so hopefully an improved app will do the same for the GF104 cards in due course.

The 6.11 app & latest driver combo also improves the Vista/W7 performance dramatically. Vista and Win 7 still perform slower than XP and Linux, but now do so to a similar extent across all cards (they are equally slower, roughly speaking). Prior to the latest app the performance of Fermis on Win7 and Vista was almost half what it is on XP. It's now only about 10 to 15% slower, on properly configured systems.

As for price, that will drop over time in the same way the GF100 Fermis dropped in price: my GTX470 cost £320; now it can be picked up for £277, and there are cheaper cards. You are also comparing the new GTX460 (2 weeks old) to a GT240, a card that has dropped by some 40% since its release.

People should not be too surprised about the present performance given the shader layout; it's totally different from that of the GF100 cards. Basically it's performing as if it has two thirds of its shaders, due to the way these are accessed. I doubt that the present app can tap into that last third of the shaders efficiently, if at all. So perhaps there is good room for improvement.

trn-xs, take the faulty card back and get a refund or a replacement.

GPUGRID Role account
Send message
Joined: 15 Feb 07
Posts: 134
Credit: 1,349,535,983
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwat
Message 18136 - Posted: 22 Jul 2010 | 11:58:41 UTC - in response to Message 18135.
Last modified: 22 Jul 2010 | 12:02:50 UTC

Did we ever decide exactly what swan_sync was?


It controls the method that ACEMD tells the CUDA runtime to use for polling for GPU work completion. The default method is for the application to block until completion; this keeps CPU load to a minimum but introduces latency that slows the program down.

Setting SWAN_SYNC=0 will cause ACEMD to poll for kernel completion, which minimises latency at the cost of CPU.

0 is the only valid value for SWAN_SYNC - anything else will cause undefined behaviour, so don't do it!
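
For the curious, here is a minimal sketch of the two strategies using the public CUDA runtime API. This is not ACEMD's actual code - reading SWAN_SYNC here just mimics the idea - but the cudaDeviceSchedule* flags are the standard runtime ones:

// Sketch only: pick spin-polling vs blocking sync for GPU waits.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    // Spin: the host thread busy-waits on the GPU - lowest latency, one core burned.
    // BlockingSync: the host thread sleeps until the GPU signals completion -
    // near-zero CPU use, but every wait pays a wake-up delay.
    unsigned int flags = std::getenv("SWAN_SYNC") ? cudaDeviceScheduleSpin
                                                  : cudaDeviceScheduleBlockingSync;
    cudaSetDeviceFlags(flags);   // must be called before the CUDA context is created

    // ... kernel launches would go here ...
    cudaDeviceSynchronize();     // waits using whichever strategy was chosen
    std::printf("waiting via %s\n",
                flags == cudaDeviceScheduleSpin ? "spin-polling" : "blocking sync");
    return 0;
}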


The 460 runs cool and quiet which is good but the performance is only about 2.5 x GT-240 but the price is 4 x GT-240.


We will be turning our attention to improving the performance on GF104 cards after the summer vacations. We know what needs to be done.

MJH

MarkJ
Volunteer moderator
Volunteer tester
Send message
Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
RAC: 0
Level
Leu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18145 - Posted: 23 Jul 2010 | 11:07:54 UTC

Well mine arrived today. I have swapped out a GTX275 for it and it's off and running.

Interestingly GPU-Z 0.4.4 says it only has 224 shaders. Maybe they know something we don't :-)
____________
BOINC blog

MarkJ
Volunteer moderator
Volunteer tester
Send message
Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
RAC: 0
Level
Leu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18151 - Posted: 24 Jul 2010 | 1:11:50 UTC

Well initial performance is actually worse than the GTX275. As MJH said they need to work on the app. I'd say GPU-Z needs an update too.

That's the price for rushing out and buying the latest & greatest toy. We just need to wait a while for the s/w to catch up.
____________
BOINC blog

trn-xs
Send message
Joined: 12 Feb 10
Posts: 8
Credit: 17,551,984
RAC: 0
Level
Pro
Scientific publications
watwatwatwatwatwat
Message 18152 - Posted: 24 Jul 2010 | 6:18:36 UTC
Last modified: 24 Jul 2010 | 6:20:36 UTC

It's a bit early to tell clearly here, but it looks like my 460 is outperforming my 280 so far. It's not a fair comparison though, because my 280 is running old drivers and the 460 is OC'd to 750/1500.

The best comparison I can draw so far is:
gtx460 completes a 6,800 credit WU in 17,400 seconds
gtx280 completes a 6,800 credit WU in 18,800 seconds

Stock vs stock with both on Cuda 3.1 drivers might be a pretty even match. Both computers crunch wcg on all cores and i do not use swan_sync.

bigtuna
Volunteer moderator
Send message
Joined: 6 May 10
Posts: 80
Credit: 98,784,188
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 18154 - Posted: 24 Jul 2010 | 9:37:17 UTC
Last modified: 24 Jul 2010 | 9:37:44 UTC

Kill-a-Watt power info for the GTX-460. I've got the 768 MB version with a factory OC to 763/1526 MHz.

At idle the AMD 4000+ system draws 93 watts.

Running GPUGRID with Swan_Sync=0 the system draws between 195 and 210 watts.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18155 - Posted: 24 Jul 2010 | 11:15:45 UTC - in response to Message 18154.

I think it would be generally more accurate to compare the GTX460 768MB with the GTX260-192, and the GTX460 1GB with the GTX260-216:
The 192 version of the GTX260 was about 10% slower than the 216 version (at reference, for most applications and games that it worked on; not here).
The 768MB version of the GTX460 is about 10% slower than the 1GB version.

If the GTX460 cards are only using 224 of their 336 shaders then they square up fairly evenly in terms of shader performance to the GTX260, with the 1GB version perhaps being about 5% better than the GTX260-216 at reference (on XP or Linux). Obviously if you get a good overclocked 768MB version it could outperform a GTX280, but equally a good GTX260-216 with a high clock would outperform a reference 1GB version of the GTX460.
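
As a rough sanity check on that (my own arithmetic at reference clocks, not a measurement - it ignores per-shader efficiency differences between the two architectures):

// Crude shader-throughput ratio: usable shaders x reference shader clock.
#include <cstdio>

int main() {
    const double gtx460 = 224 * 1350.0;  // 224 usable of 336 shaders @ 1350 MHz
    const double gtx260 = 216 * 1242.0;  // GTX260-216 @ 1242 MHz
    std::printf("GTX460 / GTX260-216 = %.2f\n", gtx460 / gtx260);  // ~1.13
    return 0;
}

The raw ratio comes out nearer 13% than 5%, so if the real-world gap is only about 5% the GF104 shaders are presumably doing a little less useful work per clock here.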

The present GTX460 benefits for crunching here are that the cards are quieter, run cooler, use less electricity, and overclock better.
The problems are that they are still slightly too expensive, there are not enough 1GB cards available, and the drivers have only just got the card working and need to mature, as does the app (after the holidays).

Come the autumn (and the GTX475, with 8 cores and 384 shaders) there will likely be new drivers and a new app, which should improve performance. The CC1.3 cards are almost 2 years old, so their drivers and apps are well refined at this stage; the present 6.05 app is well over twice as fast as the original.

It is likely that the power usage will rise slightly once these cards are better utilized, and the performance will increase somewhat more, so it’s a bit too early to do an accurate crunching performance per watt comparison here. The good news is that the architecture is set to remain for the GTX475, so when that turns up, the scientists will be better prepared for it.

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18156 - Posted: 24 Jul 2010 | 12:50:06 UTC - in response to Message 18151.

Well initial performance is actually worse than the GTX275. As MJH said they need to work on the app. I'd say GPU-Z needs an update too.

Worse performance also than the GTX 260. I bought the GTX 460 specifically for GPUGRID but can't justify using it here until the performance improves. The good news is that it runs extremely well in Collatz and DNETC.

@bigtuna: What was the percentage of GPU usage for your Kill-A-Watt readings?

@sk: Wow, just when it seemed "silliest and most irrelevant post of the month" was wrapped up. It would be nice to stick to reality instead of speculation based on fantasy and misinformation. BTW, the 768MB and 1GB models of the GTX 460 perform the same in Folding, Collatz and DNETC. Collatz is an app that is very much affected by memory speed. I think we know the GTX 460 isn't being utilized well by GPUGRID. Hopefully that will improve.
_______________________

Tried running the GTX 460 in XP64 with swan_sync set and 1 core reserved for the GPU (GX780 quad core system). The result was no significant speed increase in GPUGRID (1 minute or so) and loss of 1 CPU core in another project. In addition the system became very slow to respond. Without swan_sync the system response was fine.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18157 - Posted: 24 Jul 2010 | 15:45:24 UTC - in response to Message 18156.

The shader speed is more important here than the RAM speed. Besides, the actual speed of the GDDR5 is not the issue with the 768MB card version; the issue is the memory interface.

"The more expensive GTX 460 1GB version is based on the full-fat GF104 GPU and so has the maximum 256-bit memory interface. This is because the 1GB version of the GTX 460 has more memory links, and therefore more memory bandwidth. While the GTX 460 768MB has only six links to memory, so has only six 128MB memory chips on the card (6 x 128MB = 768MB), the GTX 460 1GB GPU has the full eight connections (8 x 128MB = 1GB). As each memory connection is 32 bits wide, the GTX 460 768MB has a 192-bit memory interface (6 x 32 bits = 192 bits) while the GTX 460 1GB has a 256-bit bus". Bit-tech, by James Baker.
Many games also show this 10% reduction in performance between the 1GB and 768MB versions.
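
To put rough numbers on that quote (the 3600 MT/s effective GDDR5 rate is the published reference spec, my assumption rather than a figure from this thread):

// Memory bandwidth = (bus width in bytes) x (effective transfer rate).
#include <cstdio>

int main() {
    const double mtps = 3600.0;   // effective mega-transfers per second per pin
    const int bus1gb = 8 * 32;    // 8 memory links x 32 bits = 256-bit
    const int bus768 = 6 * 32;    // 6 memory links x 32 bits = 192-bit
    std::printf("1GB:   %d-bit, %.1f GB/s\n", bus1gb, bus1gb / 8.0 * mtps / 1000.0);  // 115.2
    std::printf("768MB: %d-bit, %.1f GB/s\n", bus768, bus768 / 8.0 * mtps / 1000.0);  // 86.4
    return 0;
}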

I see the GTX460 as performing about the same as a GTX260, with the 768MB version falling slightly behind the GTX260 and the 1GB version slightly outperforming a reference GTX260. There are plenty of results to go by.

bigtuna
Volunteer moderator
Send message
Joined: 6 May 10
Posts: 80
Credit: 98,784,188
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 18158 - Posted: 24 Jul 2010 | 19:17:32 UTC - in response to Message 18156.
Last modified: 24 Jul 2010 | 19:17:52 UTC


@bigtuna: What was the percentage of GPU usage for your Kill-A-Watt readings

GPUZ reports between 75 and 90 percent GPU load (it varies and so does the power).

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18159 - Posted: 24 Jul 2010 | 19:27:45 UTC - in response to Message 18157.

Many games also show this 10% reduction in performance between the 1GB and 768MB versions.

Exaggeration and far too many exclamation points don't provide credibility. Even your own cherry-picked article doesn't show anywhere near a 10% improvement in games. As you can see from the Anandtech test that you posted here, there is no difference between the 2 models at all in Folding:

http://www.gpugrid.net/forum_thread.php?id=2227&nowrap=true#17956

As I'm sure you know, the 1GB card also uses a higher stock voltage and thus uses more power, runs at higher temps and is generally less overclockable than the 768MB model, even with the latter's lower stock voltage (again, shown in the Anandtech article where you grabbed the Folding chart above).

The truth is results vary depending on the application. Most DC apps are not memory constrained. A few are. Exclaiming something else doesn't make it true.

bigtuna
Volunteer moderator
Send message
Joined: 6 May 10
Posts: 80
Credit: 98,784,188
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 18161 - Posted: 24 Jul 2010 | 22:39:31 UTC - in response to Message 18136.
Last modified: 3 Nov 2011 | 21:28:33 UTC

Did we ever decide exactly what swan_sync was?


It controls the method that ACEMD tells the CUDA runtime to use for polling for GPU work completion. The default method is for the application to block until completion; this keeps CPU load to a minimum but introduces latency that slows the program down.

Setting SWAN_SYNC=0 will cause ACEMD to poll for kernel completion, which minimises latency at the cost of CPU.

0 is the only valid value for SWAN_SYNC - anything else will cause undefined behaviour, so don't do it!

MJH

Awesome, thanks for the info.

SWAN_SYNC is working great for me:

The first work unit I ran was with SWAN_SYNC "off". It took 17,150 seconds for a 6,803 point task.

With SWAN_SYNC "on" (SWAN_SYNC=0) similar tasks are taking about 15,850 seconds, which is about an 8% difference.

The difference is significant enough that I'm leaving SWAN_SYNC "on".

Different operating systems and different hardware could react differently.

Running those 6803 point tasks (larger tasks have been a bit slower) the GTX-460 is good for about 37,083 points per day with SWAN_SYNC=0.

With SWAN_SYNC "off" the unit would pull down 34,272 points per day so the one sacrificed CPU core is good for about 2.8k points per day.

2.8k/day is more points than that one core would pull down running Rosetta.

NOTE: Edited to have SWAN_SYNC in caps which seems to be required for at least some Linux distros.
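
A quick back-of-the-envelope check of those figures (nothing new here, just the same numbers recomputed):

// Points per day from task runtime and per-task credit.
#include <cstdio>

int main() {
    const double credit = 6803.0;   // points per task
    const double off    = 17150.0;  // seconds per task, SWAN_SYNC off
    const double on     = 15850.0;  // seconds per task, SWAN_SYNC=0
    const double day    = 86400.0;

    std::printf("PPD off: %.0f\n", day / off * credit);         // ~34,270
    std::printf("PPD on:  %.0f\n", day / on  * credit);         // ~37,090
    std::printf("speedup: %.1f%%\n", (off / on - 1.0) * 100.0); // ~8.2%
    return 0;
}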

MarkJ
Volunteer moderator
Volunteer tester
Send message
Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
RAC: 0
Level
Leu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18162 - Posted: 25 Jul 2010 | 7:57:42 UTC

I don't think the 1GB cards are going to provide any better bang for the buck.

According to GPU-Z, the memory controller load when running the GPUgrid 6.11 app with Swan_Sync=0 is around 15 to 17% on my 768MB card. It's hardly under pressure.
____________
BOINC blog

michaelius
Send message
Joined: 13 Apr 10
Posts: 5
Credit: 2,204,945
RAC: 0
Level
Ala
Scientific publications
watwatwatwatwatwat
Message 18163 - Posted: 25 Jul 2010 | 14:32:33 UTC - in response to Message 18161.


2.8k/day is more points than that one core would pull down running Rosetta.


But it's less than 1 core can get in Aqua@home, which is why I keep my GTX 260 without swan_sync.

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18164 - Posted: 25 Jul 2010 | 18:37:10 UTC - in response to Message 18163.


2.8k/day is more points than that one core would pull down running Rosetta.

But it's less than 1 core can get in Aqua@home, which is why I keep my GTX 260 without swan_sync.

Same here, and CPU credits aren't exactly comparable to GPU credits. Some projects can only be run on the CPU and a core on those projects means a lot.

On the other hand, let's compare credits between the GTX 260 and GTX 460. My GTX 260 does around 40,000 credits/day in GPUGRID and about the same or even a bit less in Collatz, while my GTX 460 does around 35,000 credits/day in GPUGRID and about 103,000 credits/day in Collatz without sacrificing a CPU core. Quite a difference...

Profile liveonc
Avatar
Send message
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwat
Message 18165 - Posted: 25 Jul 2010 | 19:12:51 UTC - in response to Message 18164.
Last modified: 25 Jul 2010 | 19:20:39 UTC

IMO the only way not to discriminate against or favour particular projects is if BOINC projects can agree to give credits based on CPU/GPU performance per watt & the CPU/GPU time used on WUs x a deadline bonus. Is this even possible though?

If effective CPUs/GPUs were encouraged & ineffective ones discouraged, BOINC could send the message that wasting watts is a waste of time.
____________

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 18168 - Posted: 26 Jul 2010 | 15:06:11 UTC - in response to Message 18164.


2.8k/day is more points than that one core would pull down running Rosetta.

But it's less than 1 core can get in Aqua@home, which is why I keep my GTX 260 without swan_sync.

Same here, and CPU credits aren't exactly comparable to GPU credits. Some projects can only be run on the CPU and a core on those projects means a lot.

On the other hand, let's compare credits between the GTX 260 and GTX 460. My GTX 260 does around 40,000 credits/day in GPUGRID and about the same or even a bit less in Collatz, while my GTX 460 does around 35,000 credits/day in GPUGRID and about 103,000 credits/day in Collatz without sacrificing a CPU core. Quite a difference...


We are working on optimizing the application even further. There should be some news in September.

gdf

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 851
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18169 - Posted: 26 Jul 2010 | 16:06:03 UTC - in response to Message 18168.
Last modified: 26 Jul 2010 | 16:08:09 UTC


We are working on optimizing the application even further. There should be some news in September.

gdf


Will the other Fermi cards get a performance benefit from this optimization, or just the GF104-based ones?

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 18170 - Posted: 26 Jul 2010 | 16:34:02 UTC - in response to Message 18169.

If it works, I would expect that all of them will benefit, even G200 cards, but GF100 more and GF104 even more.

gdf

bigtuna
Volunteer moderator
Send message
Joined: 6 May 10
Posts: 80
Credit: 98,784,188
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 18172 - Posted: 26 Jul 2010 | 19:11:06 UTC - in response to Message 18165.

IMO the only way as not to discriminate or favoritise projects, is if BOINC Projects can agree to give credits based on CPU/GPU performance per watt & the CPU/GPU time used on WU's x deadline bonus. Is this even possible though???

If effective CPU/GPU's were encouraged & ineffective CPU/GPU's were discouraged. BOINC could send the message that wasting watts is a waste of time.

Good idea. The thing is, the more credit a project gives, the more likely crunchers concerned mostly about points are to crunch for said project. That gives projects a motive to hand out more points.

Personally I'm not concerned about cross-project points, only points within my favorite projects, and only because the points represent a relative level of contribution. More points equals a bigger contribution.

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18178 - Posted: 26 Jul 2010 | 22:33:24 UTC - in response to Message 18170.

If it works, I would expect that all of them will benefit, even G200 cards, but GF100 more and GF104 even more.

gdf

This sounds really promising. Looking forward to it and thanks for all the hard work.

CTAPbIi
Send message
Joined: 29 Aug 09
Posts: 175
Credit: 259,509,919
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 18190 - Posted: 28 Jul 2010 | 16:50:08 UTC
Last modified: 28 Jul 2010 | 17:09:24 UTC

I just read the topic quickly. Correct me if I'm wrong, but if I've got an OC'd GTX275 there is no sense in buying a GTX460?

"With Swan_Sync "on" (Syan_Sync=0) similar tasks are taking about 15,850 seconds"
right now it takes me 14,550 (http://www.gpugrid.net/workunit.php?wuid=1737805)

The only thing is - I'm using Linux.

So, guys, what's the answer - should I rush to the shop or not? Or maybe it's better to wait a bit for the GTX475 with 384 cores? What are the latest rumours - when can we expect it? Later in the summer?
____________

CTAPbIi
Send message
Joined: 29 Aug 09
Posts: 175
Credit: 259,509,919
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 18191 - Posted: 28 Jul 2010 | 16:59:36 UTC - in response to Message 18151.
Last modified: 28 Jul 2010 | 17:00:55 UTC

as MarkJ said:

Well initial performance is actually worse than the GTX275.


OK, it's clear to me - no rush at all :-)
____________

Profile liveonc
Avatar
Send message
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwat
Message 18192 - Posted: 28 Jul 2010 | 17:58:51 UTC - in response to Message 18190.
Last modified: 28 Jul 2010 | 18:17:27 UTC

It wouldn't hurt Nvidia if you bought a GTX460, & I don't think GPUGRID would complain either. Personally I'm happy when others buy things fast and expensive, & help to make them cheap & mature for me by the time I feel like going for it myself.

The question is, are you selfish or selfless, are you rich or poor, & are you impatient or patient?

A last question I could ask is: a GPU "should" last for at least 2-3 years even if you OC it & run it 24/7, but how long do you usually keep a GPU? Nobody wants Nvidia not to offer something better some time later, not even Nvidia.

I was late to buy even a GTX260(216) & I've only had them running 24/7 for under a year; that means I'm stuck with them for at least a year or two, while everyone else is busy trying to kill their new GTX4xx series. So even though I waited until the prices dropped, I waited too long. But the GTX2xx series is still being tweaked, so I might still get better results before I expect them to die in two years' time.
____________

CTAPbIi
Send message
Joined: 29 Aug 09
Posts: 175
Credit: 259,509,919
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 18193 - Posted: 28 Jul 2010 | 18:06:50 UTC
Last modified: 28 Jul 2010 | 18:08:24 UTC

I'm trying to be smart :-) If I see no reason to do something, I won't do it.

If the GTX460 gives fewer credits than my GTX275 on the same project, what's the sense in buying it? :-) My question was to clarify that.

I'm pretty patient, but I can afford to buy a small gift for myself :-)
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18195 - Posted: 28 Jul 2010 | 18:26:47 UTC - in response to Message 18191.

CTAPbIi, at present your overclocked GTX275 is supported by mature drivers and 2 years of application development. It outperforms most GTX460's by about 25%. The GTX460 has only been released and will take time to optimise, after the summer holidays.

I don’t see the GTX460 as an upgrade path for someone with a GTX275. It would be a step up to a new generation of card but a step down in terms of relative performance. Wait and see how the GTX475 turns out - it's due out in the autumn. It's not as if you are desperate to replace a failing, highly inefficient or obsolete card.

Even if the GTX460 prices fell or the GTX475 turned up early there would be little point replacing a faster card with a slower one. Enough people have a GTX460 for the scientists to try out new applications. After the GTX460 is better supported, it may or may not be faster than your card. It could take several application updates and driver releases before the GTX460 is a match. Your GTX275 is a good card and will be for some time yet. The shops can wait.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18196 - Posted: 28 Jul 2010 | 18:39:32 UTC

My general advice is: don't rush to buy anything; sit back and remember it's only a hobby after all. But if you see the opportune moment, go for it (e.g. some relative or friend needs a new card anyway). And if you buy, go for the best bang for the buck - but also remember that you only have so many PCIe slots and that running a PC to feed the GPU also costs power. I'd rather have 1 24/7 PC crunching with a fast GPU than 2 PCs with 2 slower GPUs.

MrS
____________
Scanning for our furry friends since Jan 2002

CTAPbIi
Send message
Joined: 29 Aug 09
Posts: 175
Credit: 259,509,919
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 18197 - Posted: 28 Jul 2010 | 18:54:12 UTC

Thx a lot, guys :-) That's what I wanted to hear - a clear answer, yes or no.

For sure I'll wait till fall and then we'll see.
____________

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18199 - Posted: 28 Jul 2010 | 21:02:32 UTC - in response to Message 18193.

If the GTX460 gives fewer credits than my GTX275 on the same project, what's the sense in buying it? :-) My question was to clarify that.

The real answer is: it depends. If the card is for running GPUGRID then your GTX 275 will do more work, but it will also use more electricity. If you're running Collatz the GTX 460 will be more than twice as fast as the GTX 275. The GTX 460 is also faster in DNETC and Folding...

bigtuna
Volunteer moderator
Send message
Joined: 6 May 10
Posts: 80
Credit: 98,784,188
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 18200 - Posted: 28 Jul 2010 | 22:04:50 UTC - in response to Message 18190.

I just read the topic quickly. Correct me if I'm wrong, but if I've got an OC'd GTX275 there is no sense in buying a GTX460?

"With SWAN_SYNC "on" (SWAN_SYNC=0) similar tasks are taking about 15,850 seconds"
right now it takes me 14,550 (http://www.gpugrid.net/workunit.php?wuid=1737805)

The only thing is - I'm using Linux.

So, guys, what's the answer - should I rush to the shop or not? Or maybe it's better to wait a bit for the GTX475 with 384 cores? What are the latest rumours - when can we expect it? Later in the summer?

I don't think a GTX-460 makes sense for you to run GPUGRID at this time.

Expect both drivers and apps for the GF104 to improve with time; we are expecting a boost sometime after the summer break.

As I understand it the GF104 is underutilized at this time, much like my ATI HD-5770 cards are when Folding. They work, but they don't work up to their full potential.

This is all quite normal, as software has traditionally lagged behind hardware.

That said I'm not sorry I purchased a GTX-460. The card is currently making decent credit and has the potential to do even better.

I know it can be frustrating to have shiny new hardware that is not fully utilized but you can be assured that "they" will eventually work these things out.

You might wait and see what becomes of the GTX-475, and also see what happens when the newer software becomes available.

It is a good time to just wait a bit IMHO.




CTAPbIi
Send message
Joined: 29 Aug 09
Posts: 175
Credit: 259,509,919
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 18203 - Posted: 29 Jul 2010 | 12:38:46 UTC

bigtuna,
yep, i'll wait for gtx475 and then we'll see.

Beyond,
talking about Collatz, IMO it makes way more sense to get an ATI ;-)
____________

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18204 - Posted: 29 Jul 2010 | 13:14:26 UTC - in response to Message 18203.

Beyond,
talking about Collatz, IMO it makes way more sense to get an ATI ;-)

I agree for sure if you're only running Collatz, DNETC & RC5-72. The HD 5770 is as fast as the GTX 460 at Collatz and faster at the other two. In addition the HD 5770 is quite a bit less expensive. The advantage for the GTX 460 is that it can currently run GPUGRID and will hopefully run it well in the not too distant future.

CTAPbIi
Send message
Joined: 29 Aug 09
Posts: 175
Credit: 259,509,919
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 18205 - Posted: 29 Jul 2010 | 14:50:34 UTC

Price-wise the GTX460 is pretty close to the 5850, which will easily outperform the GTX460 in Collatz, MW, etc. For MW I'm using a 4870 and a 4890, which are going to be replaced by a 6970 this fall, but for GPUGRID I'm not sure what I should do. Maybe the GTX475 is the answer...
____________

bigtuna
Volunteer moderator
Send message
Joined: 6 May 10
Posts: 80
Credit: 98,784,188
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 18209 - Posted: 31 Jul 2010 | 0:05:25 UTC - in response to Message 18204.

Beyond,
talking about Collatz, IMO it makes way more sense to get an ATI ;-)

I agree for sure if you're only running Collatz, DNETC & RC5-72. The HD 5770 is as fast as the GTX 460 at Collatz and faster at the other two. In addition the HD 5770 is quite a bit less expensive. The advantage for the GTX 460 is that it can currently run GPUGRID and will hopefully run it well in the not too distant future.


I've got twin HD-5770 cards in one system. Just for grins I ran the 5770 cards on Collatz for a day and they scream. They made more points in that one day than all my CPUs had made on Rosetta in months of crunching.

I think the GTX-460 will be the one to have for GPUGRID. They already work pretty well considering how new they are.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18213 - Posted: 31 Jul 2010 | 19:45:53 UTC - in response to Message 18209.

I came across a few GTX460 2GB cards (Palit/Gainward, ECS, and Sparkle). Going by the specs, some offer a small advantage in clock speeds, but all are more expensive than 1GB cards. While these are not quite gimmick cards (they should be better for video editing and some games, especially 3D games, and for supporting 3 monitors in 2-way SLI gaming), don't buy one just to crunch with here - you would be better off with a cheaper card and high clocks, such as this Galaxy.

TechPowerUp's comparison of an OC'd Zotac GTX460 to a GTX275 - just remember which is presently faster here (GTX275), and that a GTX475 is due out soon.

MarkJ
Volunteer moderator
Volunteer tester
Send message
Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
RAC: 0
Level
Leu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18215 - Posted: 1 Aug 2010 | 2:21:18 UTC
Last modified: 1 Aug 2010 | 2:22:48 UTC

I've gone through a bunch of results for my GTX460 and compared them to the GTX275 that was in the machine previously.

GTX275 average run time 19833 sec (sample of 9)
GTX460 average run time 21968 sec (sample of 9)

Remember that the science app needs to be tweaked for the GTX460. I suspect it's probably using 224 out of the 336 shaders, but we'll have to wait and see.

I've got some pics and stuff on my blog regarding the 460.
____________
BOINC blog

Andrew3000
Send message
Joined: 1 Feb 10
Posts: 24
Credit: 1,220,848
RAC: 0
Level
Ala
Scientific publications
watwatwatwatwatwat
Message 18216 - Posted: 1 Aug 2010 | 4:36:45 UTC

Should I buy a 5850 or a GTX460? What would work best with GPUGRID? (And if you're going to say that I shouldn't buy the GTX460, use arguments.)

Snow Crash
Send message
Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18217 - Posted: 1 Aug 2010 | 9:34:17 UTC - in response to Message 18216.

Should I buy a 5850 or a GTX460? What would work best with GPUGRID? (And if you're going to say that I shouldn't buy the GTX460, use arguments.)

GPUGrid does not have an app that will run on a 5850 so the 460 would be a better choice.
____________
Thanks - Steve

Profile liveonc
Avatar
Send message
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwat
Message 18218 - Posted: 1 Aug 2010 | 13:05:56 UTC - in response to Message 18216.

Should I buy a 5850 or a GTX460? What would work best with GPUGRID? (And if you're going to say that I shouldn't buy the GTX460, use arguments.)


If UR into "spacemen", math, or cryptography go with ATI; if UR into medical research go with Nvidia. If UR not interested in DC or GPGPU at all, go with ATI.

That's my POV...
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18219 - Posted: 1 Aug 2010 | 13:39:23 UTC - in response to Message 18218.
Last modified: 1 Aug 2010 | 13:53:33 UTC

If I were you I would sell that GTX 260-192, if you have not already done so.

You might also want to buy a cheap GTX460 now and then sell your 8800 GT,
or you might want to wait for a month or two, to see what else becomes available, whether performances improve and whether prices drop.

- The noteworthy cards that will become available are the GTX475 and the GTS450.
- I expect performances will improve for both versions of the GTX460 and the yet unreleased GTX475 (as it is architecturally the same; just one extra core and 48 more shaders). However performances for other cards may also improve.
- While GTX460 prices may drop a bit (10 or 20%) this might be at the expense of quality rather than through competition; the cards have an open design, so look for cards with a good warranty.

Andrew3000
Send message
Joined: 1 Feb 10
Posts: 24
Credit: 1,220,848
RAC: 0
Level
Ala
Scientific publications
watwatwatwatwatwat
Message 18221 - Posted: 1 Aug 2010 | 14:28:35 UTC

OK, thanks for the arguments everyone. So I'll go with the GTX460 then. Now the problem is: should I wait 2 months to buy it cheaper or not?

Snow Crash
Send message
Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18222 - Posted: 1 Aug 2010 | 18:08:23 UTC - in response to Message 18221.

If you decide to wait then I would ask what your price point is ... the amount they *might* drop is quite small overall ... if you really need to save $20 USD then I doubt you would be crunching to begin with ... start building up those GPUGrid points sooner rather than later :cheers:
____________
Thanks - Steve

Andrew3000
Send message
Joined: 1 Feb 10
Posts: 24
Credit: 1,220,848
RAC: 0
Level
Ala
Scientific publications
watwatwatwatwatwat
Message 18224 - Posted: 1 Aug 2010 | 18:49:08 UTC

I'm looking for a $25 drop in price because it costs a bit too much for me.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18225 - Posted: 1 Aug 2010 | 19:07:55 UTC - in response to Message 18224.
Last modified: 1 Aug 2010 | 19:30:30 UTC

Perhaps the sale of your other cards would sufficiently offset the purchase of a GTX460?
The later you leave it to sell them the less you will get, so sell that GTX260-192 now and start window shopping.

Werkstatt
Send message
Joined: 23 May 09
Posts: 121
Credit: 321,525,386
RAC: 22,170
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18413 - Posted: 27 Aug 2010 | 13:42:37 UTC - in response to Message 18219.


- While GTX460 prices may drop a bit (10 or 20%) this might be at the expense of quality rather than through competition; the cards have an open design, so look for cards with a good warranty.


Hi all,

the price is going down and the time to make a decision is here.

I've seen a lot of different cards and found info about overclocking and so on.
Speaking of the 1024MB versions, GTX460s are clocked from 675 up to 815 MHz. The price difference is not that much, so it makes sense to think about it.

Some people have posted that overclocked cards are not stable, so these cards may be good for gaming but not for crunching.
Could owners of GTX460 cards please post their experiences?

Kind regards,
Alexander

poppageek
Avatar
Send message
Joined: 4 Jul 09
Posts: 76
Credit: 114,610,402
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18431 - Posted: 28 Aug 2010 | 0:14:19 UTC
Last modified: 28 Aug 2010 | 0:16:25 UTC

I got the MSI N460GTX CYCLONE 1GD5/OC GeForce GTX 460 (Fermi) 1GB 256-bit GDDR5 PCI Express 2.0 x16 HDCP and am very happy with it. While on GPUGrid, RAC was about 30k PPD. It's on F@H now and doing very well; it looks like 9k PPD or a little better. I have it OC'ed to 800/1600/1800, and with a 24C ambient it runs at 50C with a 93% load on an F@H WU. The fan, set to auto, sits at 61%. I cannot hear it, and it is less than 3 feet from me in a case with an open mesh screen on the side.

I have not read of anyone not able to run these cards at 800/1600.

Cheers!

Concerning the PPD I mentioned, I forgot to say that I play an MMO most nights for 1-3 hours, so RAC would be a bit higher at 24/7.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18436 - Posted: 28 Aug 2010 | 8:09:46 UTC - in response to Message 18413.

Some people have posted that overclocked cards are not stable, so these cards may be good for gaming but not for crunching.


A factory OC does not increase the card's cost much but gets you a proportionally higher RAC - so it's probably worth it. If such a card is not stable as-is, you can send it back (faulty product), although that should not normally be the case. This would be expensive for the manufacturer, so they test the chips before they decide whether they should get normal or increased clocks. So you should actually get a slightly better chip with a factory OC'ed card.

A little more interesting is long term stability: all chips degrade after time, depending on (1) voltage (2) temperature and (3) frequency, in descending order of importance. The result is that after some running time the chip can only reach slightly lower clock speeds at the same voltage and temperature. And at some point this "clock speed potential" crosses the stock clock. That's where it gets unstable.
And since most chips of a batch are rather similar, they all have approximately the same clock speed potential, with a slight advantage for the better chips chosen for factory OC'ed cards. Therefore these chips may fail at their factory clock speed earlier than stock-clocked ones (e.g. a chip may be able to go 30 MHz higher but is clocked 50 MHz higher, so the "degradation margin" is reduced by 20 MHz). In this sense factory OC'ed cards may be less stable.
However, you could still downclock them after 2 or 3 years and in the end reach a longer lifespan due to the higher clock speed potential.

MrS
____________
Scanning for our furry friends since Jan 2002

Werkstatt
Send message
Joined: 23 May 09
Posts: 121
Credit: 321,525,386
RAC: 22,170
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18466 - Posted: 29 Aug 2010 | 18:45:17 UTC - in response to Message 18436.

THX, that helped!

Alexander

Profile liveonc
Avatar
Send message
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwat
Message 18467 - Posted: 29 Aug 2010 | 19:11:05 UTC - in response to Message 18436.
Last modified: 29 Aug 2010 | 19:33:57 UTC

Some people have posted that overclocked cards are not stable, so these cards may be good for gaming but not for crunching.


A factory OC does not increase the card's cost much but gets you a proportionally higher RAC - so it's probably worth it. If such a card is not stable as-is, you can send it back (faulty product), although that should not normally be the case. This would be expensive for the manufacturer, so they test the chips before they decide whether they should get normal or increased clocks. So you should actually get a slightly better chip with a factory OC'ed card.

A little more interesting is long term stability: all chips degrade after time, depending on (1) voltage (2) temperature and (3) frequency, in descending order of importance. The result is that after some running time the chip can only reach slightly lower clock speeds at the same voltage and temperature. And at some point this "clock speed potential" crosses the stock clock. That's where it gets unstable.
And since most chips of a batch are rather similar, they all have approximately the same clock speed potential, with a slight advantage for the better chips chosen for factory OC'ed cards. Therefore these chips may fail at their factory clock speed earlier than stock-clocked ones (e.g. a chip may be able to go 30 MHz higher but is clocked 50 MHz higher, so the "degradation margin" is reduced by 20 MHz). In this sense factory OC'ed cards may be less stable.
However, you could still downclock them after 2 or 3 years and in the end reach a longer lifespan due to the higher clock speed potential.

MrS


Might I add that the power consumption of the GTX460 is less than the GTX470's. Some may argue that this in no way means it's "green", but so long as the WUs don't have a high rate of failure & the science is valid, the work done even on the more power-hungry GTX470 is much, much more per watt consumed than "green" cards, CPUs, or anything else can manage.

That some "might" still use a GTX460 after 3 years is debatable. That some might complain about too much of the CPU being dedicated to the GPU is also IMO also debatable, since no matter if the WU takes 3-4 hours more or less, that having the PC running 24/7 with the GPU running all the time, sending WU's back 3-4 hours faster or slower still uses a heck of a lot of power.

I'm testing Fedora 13 LXDE 64-bit & it hogs the CPU much more than other Linux distros I've tried. It seems stable (I've only briefly had it running), so if it's good, I'm happy. The only thing I've noticed with Fedora 13 is that it's easier (for me) to manually upgrade/downgrade the Nvidia driver & BOINC client on Ubuntu/Mint than on Fedora, so I'm not sure if it's a good idea to use it with new GPUs, given the constant need for the newest driver & BOINC client.
____________

Profile leprechaun
Send message
Joined: 22 Jun 09
Posts: 8
Credit: 45,224,378
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwatwatwatwatwatwat
Message 18621 - Posted: 11 Sep 2010 | 9:01:13 UTC - in response to Message 18467.

I have seen in the task manager that the application for the GTX 460 takes up a complete CPU core. Is this normal now? Before it was only 2-6%.
GPU load 84%.
Boincmanager (6.10.58) 0.12 CPUs + 1.00 GPUs.
Win7 64bit
Phenom 940 BE

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18627 - Posted: 11 Sep 2010 | 10:50:01 UTC - in response to Message 18621.
Last modified: 11 Sep 2010 | 10:50:52 UTC

leprechaun, on Linux it will automatically use a full CPU core/thread but not on Win7. Not quite sure what you are asking, so a few general things that might cover your concerns:
If a CPU core is allocated to the Fermi GPU it significantly increases the GPU speed, especially if you use swan_sync=0. This is the recommended configuration for Fermi users, especially GF100 cards.
http://www.gpugrid.net/forum_thread.php?id=2123
Make sure you are using the latest driver with your GTX460.
There are at least two light Boinc applications that you can also run. These do not register as using a CPU core/thread.

If your system settings are hidden we cannot see them. What driver do you have?
Also, are you using Swan_Sync, and have you set Boinc to use all but one CPU core/thread?

Profile leprechaun
Send message
Joined: 22 Jun 09
Posts: 8
Credit: 45,224,378
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwatwatwatwatwatwat
Message 18629 - Posted: 11 Sep 2010 | 12:54:53 UTC - in response to Message 18627.

Thank you, I think it is down to the variable SWAN_SYNC=0.
Driver 258.96
Boinc Manager: processor usage 99%.
At the moment 3 x Simap + 1 GPUGrid are running.

Profile Fred J. Verster
Send message
Joined: 1 Apr 09
Posts: 58
Credit: 35,833,978
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18634 - Posted: 11 Sep 2010 | 22:16:14 UTC - in response to Message 18629.
Last modified: 11 Sep 2010 | 22:27:21 UTC

Just crunched a few GPUGrid tasks with a GTX480.
WU 1.
WU 2 with a 470.

I run 3 SETI CUDA tasks at a time and 4 SETI or Einstein on the CPU.
But many projects don't allow running more than 1 instance of the science app.
I think it only matters if faults are being produced.
The same goes for heavily OC'ed CPUs (which I think is pointless).
CUDA/OpenCL/BROOK have so much more computing power that OC'ing your CPU to its 'edge' doesn't contribute much more. IMO it's not worth it - a little, max 10% - but DRAM settings are more important: FSB, timings, speed.

Just my €0.02 ;^)
____________

Knight Who Says Ni N!

Snow Crash
Send message
Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18635 - Posted: 12 Sep 2010 | 10:08:11 UTC

Before you completely dismiss CPUs, please consider that if we could not crunch with CPUs there is a whole world of research that would never get done, because GPUs simply are not flexible enough. GPUs are great at parallel processing, but a look at the number of CPU projects vs GPU projects is an indicator of the current state of GPU crunching/folding capabilities and popularity. I consider GPUGrid the only GPU project worth any attention (well, maybe F@H is OK).

but DRAM settings are more important

Could not be further from the truth. DRAM timings, bandwidth, etc. make almost no difference in CPU/GPU crunching. The one project I know of where DRAM makes any difference at all is climateprediction. Other than that, for CPU crunching raw core GHz is king, and for GPUs it is all about the shaders.
____________
Thanks - Steve

Profile Fred J. Verster
Send message
Joined: 1 Apr 09
Posts: 58
Credit: 35,833,978
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18654 - Posted: 13 Sep 2010 | 13:15:40 UTC - in response to Message 18635.

Hi, you are absolutely right, I should have mentioned which projects benefit
from memory timings and high CPU speeds.
And running without a CPU isn't possible. GPUGrid has done a great job in using
GPUs, and it uses the GPU quite efficiently, more so compared to Einstein.
I have to admit I don't have much experience with using GPUs for purposes other than graphics (OpenGL; DirectX).
OpenCL, BROOK and CAL are all new to me and I find them hard to learn.
But they are the 'future' of parallel computing, so I have a lot of reading to do.
I'll keep my mouth shut ;-) as others have far more knowledge of GPU processing.


____________

Knight Who Says Ni N!

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 18657 - Posted: 13 Sep 2010 | 21:38:32 UTC

One can put it this way: if an app is very dependent on memory it's probably not programmed in a good way, as CPUs are built to avoid memory access. Such programs will likely benefit from basic code optimization, using different algorithms to solve the problem, etc. A program should normally mature past this point before it's deployed massively parallel via BOINC.
That's why most BOINC apps are not heavily memory bandwidth or latency dependent. However, in former times SETI reacted quite well to improvements in the memory subsystem. And for GPU-Grid you definitely don't want to underclock your GPU's memory ;)

MrS
____________
Scanning for our furry friends since Jan 2002

Mephist0
Send message
Joined: 15 Sep 09
Posts: 5
Credit: 1,466,872
RAC: 0
Level
Ala
Scientific publications
watwatwatwatwat
Message 19284 - Posted: 6 Nov 2010 | 0:12:55 UTC
Last modified: 6 Nov 2010 | 0:16:03 UTC

Hello!

I bought a cheap GTX 460 768MB yesterday..

I had a GTX 275 before..

First I was disappointed.. It seemed the 460 was slower than the 275...

04-Nov-2010 17:30:04 [---] NVIDIA GPU 0: GeForce GTX 275 (driver version 25896, CUDA version 3010, compute capability 1.3, 873MB, 701 GFLOPS peak)

05-Nov-2010 09:18:52 [---] NVIDIA GPU 0: GeForce GTX 460 (driver version 25896, CUDA version 3010, compute capability 2.1, 738MB, 363 GFLOPS peak)

EDIT:
I then also upgraded the drivers.. But still same performance..
05-Nov-2010 12:36:14 [---] NVIDIA GPU 0: GeForce GTX 460 (driver version 26099, CUDA version 3020, compute capability 2.1, 738MB, 363 GFLOPS peak)

I tested dnetc@home with the 460..
460:
first result around 23 min
second result around 50!!! min..

Then I gave up on dnetc

My 275 runs dnetc at 23min unclocked

Then I tested Collatz (CUDA 31)
460 13min (Cuda 31)
460 15min (Cuda 23)
275 runs at 25-27 min
This looks better! I have not received the credits yet so I don't know if it will give more credits...

GPUGRID
Don't know yet, still running the first WU...

What is your experience with GPUGRID, 460 vs 275? And also other projects..
I have read about your success with Collatz, and I've also seen that dnetc runs at 23 min, which seems sucky...

I don't know what's up with dnetc and the GTX 460 card???

MarkJ
Volunteer moderator
Volunteer tester
Send message
Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
RAC: 0
Level
Leu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 19287 - Posted: 6 Nov 2010 | 0:48:38 UTC
Last modified: 6 Nov 2010 | 0:50:17 UTC

GTX460: 35,820 secs (average) per WU (9.95 hours). My GTX295s are around 10 hours a WU.

Mephist0
Send message
Joined: 15 Sep 09
Posts: 5
Credit: 1,466,872
RAC: 0
Level
Ala
Scientific publications
watwatwatwatwat
Message 19290 - Posted: 6 Nov 2010 | 7:44:50 UTC - in response to Message 19287.

Hmm, OK.. now I only have one WU to compare with, and I had it stopped once..

It took 10h 41min on the 460..

My GTX 275 card did the last 2 WUs in
9h 16min and 6h (?)

So far the GTX 275 seems faster for this project too...

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 19296 - Posted: 6 Nov 2010 | 12:22:19 UTC - in response to Message 19290.

Lots of posts about this in the forums, and your observations tally well with others'.
The GTX275 has been faster than the GTX460 since the release of the GTX460. In fact a GTX260-216 was slightly faster, about 5% at reference clock speeds. Why? Poor support by NVidia; basically the card is not well utilized, and this comes down to the applications and drivers. On the one hand NVidia go on about their new design and all those CUDA cores; on the other they don't give you the drivers to use them. Personally I think this is a deliberate ploy to reduce RTMs. It's like getting a car with a 2-year warranty only to find it won't go over 45mph for 2 years.
Over the last week the GPUGrid techs started playing with different apps, so now if your GTX275 has a recent driver it will probably not be running the fast CUDA 2.2 app and will instead be using a CUDA 3.1 app, designed for Fermis and not G200-series cards. How individual cards perform on these is open to debate, but in particular the GT240s took a big hit in performance running the CUDA 3.1 app, and the latest drivers resulted in many cards dropping their clocks.
To get the older app (for non-Fermis) you need to use older drivers, which for some means their system will not detect multiple cards and they have to use cables and omni ports or dummy plugs. Even the Beta 25715 driver will get you CUDA 3.1 tasks, so you will need to use a driver from before that.
We are now in a position where many people are running the wrong app. By the end of the month we may be running CUDA 3.2 apps on Fermis, so there is not much point in trying to compare the cards right now. It was always expected that the CUDA 3.2 driver would bring a performance gain to the GTX460. Should no performance gain for the GTX460 be found, I would write that entire range of Fermis off as failures. With the present drivers and app, if a GTX475 turned up today it would not outperform a 20-month-old GTX275 at reference.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 19298 - Posted: 6 Nov 2010 | 13:48:37 UTC - in response to Message 19296.

Personally I think this is a deliberate ploy to reduce RTMs.


I think you're reading too much into this. It's probably much more a matter of "can't get it to work properly" than "don't want it to work properly". Last quarter AMD surpassed nVidia as the (non-Intel) GPU king, and they've taken a lot of flak for the Fermi design. Plus the issue of non-existent GT200 mainstream chips for a very long time, and the notebook chip recalls.

They just can't afford to make their cards perform worse deliberately.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 19300 - Posted: 6 Nov 2010 | 14:03:20 UTC - in response to Message 19296.

Over the last week the GPUGrid techs started playing with different apps, so now if your GTX275 has a recent driver it will probably not be running the fast CUDA 2.2 app and will instead be using a CUDA 3.1 app, designed for Fermis and not G200-series cards. How individual cards perform on these is open to debate, but in particular the GT240s took a big hit in performance running the CUDA 3.1 app, and the latest drivers resulted in many cards dropping their clocks.
To get the older app (for non-Fermis) you need to use older drivers, which for some means their system will not detect multiple cards and they have to use cables and omni ports or dummy plugs. Even the Beta 25715 driver will get you CUDA 3.1 tasks, so you will need to use a driver from before that.
We are now in a position where many people are running the wrong app. By the end of the month we may be running CUDA 3.2 apps on Fermis, so there is not much point in trying to compare the cards right now. It was always expected that the CUDA 3.2 driver would bring a performance gain to the GTX460. Should no performance gain for the GTX460 be found, I would write that entire range of Fermis off as failures. With the present drivers and app, if a GTX475 turned up today it would not outperform a 20-month-old GTX275 at reference.

This raises the question: why do we keep switching to apps that are slower and don't work well for many cards? I have now had enough WUs to make like-for-like comparisons between 6.05 and the new 6.12 on various cards. I'm seeing a 5% slowdown with 6.12, occasionally (rarely) as much as 20%. Is there any advantage to 6.12 that made them replace the faster 6.05? Why do we keep getting new apps that seem to be inadequately tested? Is there any reason not to return to 6.05?


Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 19301 - Posted: 6 Nov 2010 | 14:41:54 UTC - in response to Message 19300.

From a cruncher's point of view 6.05 was better.
There are probably server-side / ACEMD reasons for using 6.12 (maintaining queues, databases) but I am not fully aware of them. It might be a move to allow for CUDA 3.2 in the near future, and a licence issue, but then we don't know for sure that CUDA 3.2 will be a success. If, and I speculate, we move to CUDA 3.2 to allow for the next round of Fermi cards, then we might not be able to keep 3.1 and 3.0 and so on as well. I guess this might also be due to task creation, hard drive limitations on the server (they need more), and general project management, but I don't have many details.

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 19302 - Posted: 6 Nov 2010 | 15:33:12 UTC - in response to Message 19301.

From a cruncher's point of view 6.05 was better.
There are probably server-side / ACEMD reasons for using 6.12 (maintaining queues, databases) but I am not fully aware of them.

It would be a really good idea to communicate the reasons for changes, if there are reasons. From the posts it looks like the Linux crunchers are taking an even bigger hit than those of us running Windows. Lately we've had to revert to older drivers to avoid 6.11 and get back to 6.05, then as soon as we do that we get the slower-running 6.12. To get rid of that it sounds like we're going to have to use an app_info.xml. Is it OK to do that? Who knows? What was the problem, if any, with 6.05?

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 19312 - Posted: 6 Nov 2010 | 20:23:53 UTC - in response to Message 19302.

As said in another post we are looking into it.
A solution should come out next week.

gdf

Profile Saenger
Avatar
Send message
Joined: 20 Jul 08
Posts: 134
Credit: 23,657,183
RAC: 0
Level
Pro
Scientific publications
watwatwatwatwatwat
Message 19313 - Posted: 6 Nov 2010 | 20:45:01 UTC - in response to Message 19312.

As said in another post we are looking into it.
A solution should come out next week.

gdf

And in the meantime, should we waste our precious GPU time on the rubbish 6.12? It's absolutely uncrunchable on a Linux machine. If you don't put back the working version I will regard it as willful neglect. There is absolutely no reason to punish Linux machines with that crap! It's not 10% slower, it's about 1000% slower. That's plain ridiculous.
____________
Gruesse vom Saenger

For questions about Boinc look in the BOINC-Wiki

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 19325 - Posted: 7 Nov 2010 | 9:39:00 UTC - in response to Message 19313.

The reason it is slower on Linux than before is that it no longer uses SWAN_SYNC=0 by default, so it now uses less CPU. I don't think that's useful, but it was requested more and more times. Just set
export SWAN_SYNC=0
in your .bashrc
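
For anyone unsure how, a minimal way to do that from a terminal (a sketch, assuming Bash is your shell and that the BOINC client starts from a session that reads ~/.bashrc):

    # Append the setting to ~/.bashrc and re-read it
    echo 'export SWAN_SYNC=0' >> ~/.bashrc
    source ~/.bashrc
    # Verify; this should print 0
    echo "$SWAN_SYNC"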


gdf

Profile Saenger
Avatar
Send message
Joined: 20 Jul 08
Posts: 134
Credit: 23,657,183
RAC: 0
Level
Pro
Scientific publications
watwatwatwatwatwat
Message 19326 - Posted: 7 Nov 2010 | 9:54:45 UTC - in response to Message 19325.

Just set
export SWAN_SYNC=0
in your .bashrc

As I've said in some other post: I'm no programmer, I'm a user.
In what GUI do I do that?
What's the .bashrc?
How will that influence other programs and projects?
____________
Gruesse vom Saenger

For questions about Boinc look in the BOINC-Wiki

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 19327 - Posted: 7 Nov 2010 | 11:12:00 UTC - in response to Message 19326.

How will that influence other programs and projects?


Oh, the joys of Linux. Sorry, I can't tell you how to do it, but it certainly doesn't influence other software, as long as no one else chooses to call their environment variable "SWAN_SYNC".

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 19328 - Posted: 7 Nov 2010 | 12:14:24 UTC - in response to Message 19327.

I don’t presently have a Linux system to try this on, and I am no Linux expert either, but I guess you just have to open up a command terminal and append the line to your .bashrc, e.g.
echo 'export SWAN_SYNC=0' >> ~/.bashrc

.bashrc configures interactive Bash usage.
The export builtin is used to set an environment variable.
So an export line in .bashrc sets the environment variable for interactive sessions.

I think export -n is used to undo it.

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 19367 - Posted: 9 Nov 2010 | 13:43:21 UTC - in response to Message 19328.

you have to edit the file .bashrc and add the line export SWAN_SYNC=0.

gdf

Profile Saenger
Avatar
Send message
Joined: 20 Jul 08
Posts: 134
Credit: 23,657,183
RAC: 0
Level
Pro
Scientific publications
watwatwatwatwatwat
Message 19373 - Posted: 9 Nov 2010 | 16:55:05 UTC - in response to Message 19367.
Last modified: 9 Nov 2010 | 17:35:07 UTC

you have to edit the file .bashrc and add the line export SWAN_SYNC=0.

gdf

Where is this file?

Edith says:

I found 4 different ones in different places, 2 dot.bashrc, 2 bash.bashrc
____________
Gruesse vom Saenger

For questions about Boinc look in the BOINC-Wiki

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 19378 - Posted: 9 Nov 2010 | 19:04:51 UTC - in response to Message 19373.

The file is in your home directory, but as it starts with a dot, it's hidden.
Just edit it with: gedit ~/.bashrc
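
If you can't see it in your file manager, a terminal will show it (dotfiles are hidden by default; these are standard commands):

    # -a also lists hidden files; check whether .bashrc exists yet
    ls -a ~ | grep bash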

gdf

Profile Saenger
Avatar
Send message
Joined: 20 Jul 08
Posts: 134
Credit: 23,657,183
RAC: 0
Level
Pro
Scientific publications
watwatwatwatwatwat
Message 19380 - Posted: 9 Nov 2010 | 19:19:09 UTC - in response to Message 19378.

The file is in your home directory, but as it starts with a dot, it's hidden.
Just edit it with: gedit ~/.bashrc

gdf

There ain't anything like that.
And of course I've set the Nautilus to "Show Hidden Files".
The only file with "bash" in it is .bash_history.
____________
Gruesse vom Saenger

For questions about Boinc look in the BOINC-Wiki

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 19385 - Posted: 9 Nov 2010 | 20:40:15 UTC - in response to Message 19380.

Just create it then.

gdf

Profile Saenger
Avatar
Send message
Joined: 20 Jul 08
Posts: 134
Credit: 23,657,183
RAC: 0
Level
Pro
Scientific publications
watwatwatwatwatwat
Message 19387 - Posted: 9 Nov 2010 | 21:43:01 UTC - in response to Message 19385.

Just create it then.

gdf

What's it supposed to do?
Why should I create a hidden file in my main folder outside BOINC for your project?
Why don't you include it in your program?
What will other applications outside BOINC do with this?
____________
Gruesse vom Saenger

For questions about Boinc look in the BOINC-Wiki

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 19390 - Posted: 9 Nov 2010 | 22:17:34 UTC - in response to Message 19387.

Don't do anything then. There will be another way of doing it, sometime soon, within your BOINC dir.

gdf

Just create it then.

gdf

What's it supposed to do?
Why should I create a hidden file in my main folder outside BOINC for your project?
Why don't you include it in your program?
What will other applications outside BOINC do with this?

Paul Sands
Avatar
Send message
Joined: 14 Feb 09
Posts: 3
Credit: 165,614,037
RAC: 263,408
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 19401 - Posted: 10 Nov 2010 | 1:57:45 UTC - in response to Message 19387.

Just create it then.

gdf

What's it supposed to do?
Why should I create a hidden file in my main folder outside BOINC for your project?
Why don't you include it in your program?
What will other applications outside BOINC do with this?



When an interactive shell that is not a login shell is started, bash reads and executes commands from /etc/bash.bashrc and ~/.bashrc, if these files exist.

Mine looks like this

# This file is sourced by all *interactive* bash shells on startup,
# including some apparently interactive shells such as scp and rcp
# that can't tolerate any output. So make sure this doesn't display
# anything or bad things will happen !


# Test for an interactive shell. There is no need to set anything
# past this point for scp and rcp, and it's important to refrain from
# outputting anything in those cases.
if [[ $- != *i* ]] ; then
    # Shell is non-interactive. Be done now!
    return
fi

# Bash won't get SIGWINCH if another process is in the foreground.
# Enable checkwinsize so that bash will check the terminal size when
# it regains control. #65623
# http://cnswww.cns.cwru.edu/~chet/bash/FAQ (E11)
shopt -s checkwinsize

# Enable history appending instead of overwriting. #139609
shopt -s histappend

# Change the window title of X terminals
case ${TERM} in
    xterm*|rxvt*|Eterm|aterm|kterm|gnome*|interix)
        PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME%%.*}:${PWD/$HOME/~}\007"'
        ;;
    screen)
        PROMPT_COMMAND='echo -ne "\033_${USER}@${HOSTNAME%%.*}:${PWD/$HOME/~}\033\\"'
        ;;
esac

use_color=false

# Set colorful PS1 only on colorful terminals.
# dircolors --print-database uses its own built-in database
# instead of using /etc/DIR_COLORS. Try to use the external file
# first to take advantage of user additions. Use internal bash
# globbing instead of external grep binary.
safe_term=${TERM//[^[:alnum:]]/?} # sanitize TERM
match_lhs=""
[[ -f ~/.dir_colors ]] && match_lhs="${match_lhs}$(<~/.dir_colors)"
[[ -f /etc/DIR_COLORS ]] && match_lhs="${match_lhs}$(</etc/DIR_COLORS)"
[[ -z ${match_lhs} ]] \
    && type -P dircolors >/dev/null \
    && match_lhs=$(dircolors --print-database)
[[ $'\n'${match_lhs} == *$'\n'"TERM "${safe_term}* ]] && use_color=true

if ${use_color} ; then
    # Enable colors for ls, etc. Prefer ~/.dir_colors #64489
    if type -P dircolors >/dev/null ; then
        if [[ -f ~/.dir_colors ]] ; then
            eval $(dircolors -b ~/.dir_colors)
        elif [[ -f /etc/DIR_COLORS ]] ; then
            eval $(dircolors -b /etc/DIR_COLORS)
        fi
    fi

    if [[ ${EUID} == 0 ]] ; then
        PS1='\[\033[01;31m\]\h\[\033[01;34m\] \W \$\[\033[00m\] '
    else
        PS1='\[\033[01;32m\]\u@\h\[\033[01;34m\] \w \$\[\033[00m\] '
    fi

    alias ls='ls --color=auto'
    alias grep='grep --colour=auto'
else
    if [[ ${EUID} == 0 ]] ; then
        # show root@ when we don't have colors
        PS1='\u@\h \W \$ '
    else
        PS1='\u@\h \w \$ '
    fi
fi

# Try to keep environment pollution down, EPA loves us.
unset use_color safe_term match_lhs

____________

bigtuna
Volunteer moderator
Send message
Joined: 6 May 10
Posts: 80
Credit: 98,784,188
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 20883 - Posted: 8 Apr 2011 | 23:59:31 UTC
Last modified: 1 Nov 2011 | 22:16:57 UTC

Re: Swan_Sync/SWAN_SYNC=0

I don't think it matters exactly where you put the command "export SWAN_SYNC=0", so long as it is in a file that is automatically read at startup.

I totally don't know this for a fact, but it seems to be the case.

Mine is in a file called "profile" in the /etc/ directory and it works fine.

Edit: It seems SWAN_SYNC=0 must be in caps on Linux, or at least perhaps on the Linux distro I use (FatDog-64).
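
A sketch of that system-wide approach (an assumption on my part that your distro reads /etc/profile at login; requires root):

    # Make the variable available to all login shells; note the upper case
    echo 'export SWAN_SYNC=0' | sudo tee -a /etc/profile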

bigtuna
Volunteer moderator
Send message
Joined: 6 May 10
Posts: 80
Credit: 98,784,188
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 20884 - Posted: 9 Apr 2011 | 0:04:22 UTC

Did GPUGrid ever get the 460 running better/faster?

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 20886 - Posted: 9 Apr 2011 | 8:34:46 UTC - in response to Message 20884.

No

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21719 - Posted: 22 Jul 2011 | 16:48:35 UTC

I just spent an hour going through this thread, from first to last. I gave up halfway, bemused by all the technicalities. Why would I do that?

In a few days I will have an ASUS ENGTX460 DirectCU/2DI/1GD5, to replace my creaking 9600 GSO.

Can you please tell me if the GTX 460 can now perform to its maximum capability on GPUGRID, or should I consider switching to folding@home?

Thank you,

Tom

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 851
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21720 - Posted: 22 Jul 2011 | 21:41:48 UTC - in response to Message 21719.

Can you please tell me if the GTX 460 can now perform to its maximum capability on GPUGRID, ... ?

GPUGRID can still use only 2/3 of the shaders of any CC2.1 card (GTX 460, GTX 560, etc.). It is not clear whether GPUGRID will ever be able to use all of the shaders on CC2.1 cards.
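
In concrete terms (a rough sketch; 336 is the GTX460's shader count, and the 2/3 figure corresponds to using 32 of the 48 shaders per SM):

    # Shaders the app can exercise on a GTX460 if only 2/3 are usable
    awk 'BEGIN { printf "%d of 336 shaders\n", 336 * 32/48 }'   # 224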

..., or should I consider switching to folding@home?

Should I answer this one too?

Betting Slip
Send message
Joined: 5 Jan 09
Posts: 670
Credit: 2,498,095,550
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21721 - Posted: 23 Jul 2011 | 6:42:59 UTC - in response to Message 21719.

I just spent an hour going through this thread, from first to last. I gave up halfway, bemused by all the technicalities. Why would I do that?

In a few days I will have an ASUS ENGTX460 DirectCU/2DI/1GD5, to replace my creaking 9600 GSO.

Can you please tell me if the GTX 460 can now perform to its maximum capability on GPUGRID, or should I consider switching to folding@home?

Thank you,

Tom



Take a look at my machines and that should answer your question.

Both GTX460s OC'd: GPU 850, Shaders 1700, Memory 2025, Voltage stock, Max heat 68C
____________
Radio Caroline, the world's most famous offshore pirate radio station.
Great music since April 1964. Support Radio Caroline Team -
Radio Caroline

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21722 - Posted: 23 Jul 2011 | 13:41:40 UTC - in response to Message 21721.

A GTX460 is far from being the best card, but it is still a good card and can certainly contribute; 70 to 80K per day, and no trouble finishing any of the tasks in time.
While the GTX460 and other CC2.1 cards cannot use all their CUDA cores here (due to the app; fixing that would need a major redevelopment, not a tweak), it's worth noting that as a result the GPU doesn't draw quite as much power, so it's not all negative. I would still recommend getting a CC2.0 500-series card, but which card you get comes down to how much money you are prepared to spend, among other things.

FAQ - Recommended GPUs for GPUGrid crunching

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21723 - Posted: 23 Jul 2011 | 13:58:11 UTC - in response to Message 21721.

Can you please tell me if the GTX 460 can now perform to its maximum capability on GPUGRID, or should I consider switching to folding@home?


Take a look at my machines and that should answer your question.

Both GTX460s OC'd: GPU 850, Shaders 1700, Memory 2025, Voltage stock, Max heat 68C

Thanks for the response. I did take a look at your machines, and I note you're processing the big WUs in anywhere between 11 and 20 hours. But I don't see how that answers my question about shader use...

Also, "GPU 850 Shaders 1700 Memory 2025" - does that equate to what I see on the Nvidia site: Graphics Clock 675, Processor Clock 1350 and Memory Clock 1800? What does "Voltage Stock" mean?

Sorry if I'm being thick!

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21724 - Posted: 23 Jul 2011 | 16:58:32 UTC - in response to Message 21723.

The Shaders (CUDA Cores) are fixed relative to the core clock; on these cards the shader clock runs at twice the core clock.
The GTX460's reference clock rates are 675MHz (core) and 1350MHz (shaders), so Betting Slip has overclocked the core to 850MHz (shaders to 1700MHz).

You can contribute to GPUGrid, or alternatively you could try Folding - it's your choice.

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21725 - Posted: 23 Jul 2011 | 17:27:56 UTC - in response to Message 21724.

The Shaders (CUDA Cores) are fixed relative to the core clock.
The GTX460's Reference clock rates are 675 and 1350. So Betting Slip has OverClocked to 850MHz.

Ah!! So by overclocking he has allowed more of the shaders to become active for GPUGRID. Right? Or not???

No - that can't be right. Nvidia claims 336 shaders at the reference rates. This is all very complicated!!


Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21726 - Posted: 23 Jul 2011 | 18:49:01 UTC - in response to Message 21725.

Overclocking increases the processing frequency, so it just makes the existing usable shaders, and the core, faster.
The 1700MHz shaders are (1700/1350 − 1) × 100 ≈ 26% faster than reference.
NB. this OC is quite high; a stable 10% OC is more commonly the limit.
It might be the case that Betting Slip is personally overclocking from an already factory-OC'd GPU (such GPUs tend to be more reliable).
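
As a quick check of that arithmetic (same numbers as above):

    # Speedup of 1700MHz shaders over the 1350MHz reference clock
    awk 'BEGIN { printf "%.1f%%\n", (1700/1350 - 1) * 100 }'   # 25.9%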

Betting Slip
Send message
Joined: 5 Jan 09
Posts: 670
Credit: 2,498,095,550
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21727 - Posted: 24 Jul 2011 | 9:15:47 UTC - in response to Message 21726.

NB. this OC is quite high and a stable 10% OC is more commonly the limit.
It might be the case that Betting Slip is personally overclocking from an already Factory OC'd GPU (such GPU's tend to be more reliable).


That might well have been the case, SK, but not here, as I hate to pay the extra to have the BIOS altered for a factory OC.

The clocks I have achieved are from stock and in most cases are stable. I only run into trouble when I play 3D poker at the same time :) but if I have the poker on low detail it works fine. The cards are PNY and MSI.
If anyone has 460s running at stock I would like to compare times.
____________
Radio Caroline, the world's most famous offshore pirate radio station.
Great music since April 1964. Support Radio Caroline Team -
Radio Caroline

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21729 - Posted: 24 Jul 2011 | 15:33:29 UTC - in response to Message 21727.

I looked into this some time ago and found that return times fell fairly linearly with the rise in shader clock speed, but it would be good to see if that still holds true, and for different tasks.

The same applies to the GFlops within the same CC range; an increase in shader count or frequency results in a steady increase in performance and peak GFlops. The only exception was the GTX570, which was not quite as strong as the GTX580 (about 5% less per core × frequency), but on the other hand that card is slightly more energy efficient, making it highly recommendable.
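
A rough sketch of that peak-GFlops scaling, assuming the usual 2 single-precision FLOPs per shader per clock on Fermi (336 shaders at 1350MHz are the GTX460 reference figures):

    # GFlops-peak ~ 2 ops x shader count x shader clock in GHz
    awk 'BEGIN { printf "%.1f GFlops\n", 2 * 336 * 1.35 }'   # 907.2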

Aon
Send message
Joined: 16 May 11
Posts: 10
Credit: 167,698,252
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwat
Message 21732 - Posted: 24 Jul 2011 | 19:21:02 UTC
Last modified: 24 Jul 2011 | 19:24:33 UTC

You can compare them with mine, Betting Slip. I have two stock GTX460 (961MB) GPUs. From what I could see yours are approx. 25% faster.

Betting Slip
Send message
Joined: 5 Jan 09
Posts: 670
Credit: 2,498,095,550
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21734 - Posted: 25 Jul 2011 | 7:42:24 UTC - in response to Message 21732.

Thanks Aon, 25% looks about right. Please tell me, have you left 2 cores free for your cards?
____________
Radio Caroline, the world's most famous offshore pirate radio station.
Great music since April 1964. Support Radio Caroline Team -
Radio Caroline

Aon
Send message
Joined: 16 May 11
Posts: 10
Credit: 167,698,252
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwat
Message 21735 - Posted: 25 Jul 2011 | 11:00:56 UTC - in response to Message 21734.

No, they are all crunching Rosetta. Each GPU WU uses 0.08 of a CPU core.

Betting Slip
Send message
Joined: 5 Jan 09
Posts: 670
Credit: 2,498,095,550
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21736 - Posted: 25 Jul 2011 | 13:51:47 UTC - in response to Message 21735.

That will probably add an hour to your GPUGrid tasks, because not only will they have to fight for CPU time, they will also have to contend with I/O congestion due to the virtual cores.
____________
Radio Caroline, the world's most famous offshore pirate radio station.
Great music since April 1964. Support Radio Caroline Team -
Radio Caroline

Aon
Send message
Joined: 16 May 11
Posts: 10
Credit: 167,698,252
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwat
Message 21737 - Posted: 26 Jul 2011 | 10:37:27 UTC - in response to Message 21736.

I thought the tasks take the amount of CPU necessary to run by default. Are you saying that it is possible to use an entire core for one task? If that is the case, what do I have to do? The "KKFREE" tasks finish in about 24 hours plus the time it takes for uploading, so I always miss the 50% bonus with these.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21738 - Posted: 26 Jul 2011 | 11:11:25 UTC - in response to Message 21737.

FAQ - Best configurations for GPUGRID

Are you using report_results_immediately in the cc_config file?

    <cc_config>
    <options>
    <report_results_immediately>1</report_results_immediately>
    </options>
    </cc_config>

It would be interesting to measure how much gain there still is when using swan_sync and freeing up a CPU core. For sure it's less than it used to be, but that's just because the tasks are more efficient when it's not in use. It is probably around 10% now and might vary with different task types.

If you are just missing the 24h bonus, definitely try it; should the improvement mean you return KKFREE tasks within 24h, then the credit increase would compound: 1.1 × 1.25 ≈ 1.375, i.e. about 37.5% more credit for those tasks (10% for the other tasks that already return within 24h).
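
To illustrate the compounding (the 10% is the assumed swan_sync gain, the 25% the within-24h bonus):

    # Combined credit gain when both effects apply
    awk 'BEGIN { printf "%.1f%% more credit\n", (1.10 * 1.25 - 1) * 100 }'   # 37.5%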

Aon
Send message
Joined: 16 May 11
Posts: 10
Credit: 167,698,252
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwat
Message 21739 - Posted: 26 Jul 2011 | 14:50:10 UTC - in response to Message 21738.

Thanks skgiven. I will try report_results_immediately. Swan_sync sounds complicated to me.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21740 - Posted: 26 Jul 2011 | 19:42:15 UTC - in response to Message 21739.
Last modified: 26 Jul 2011 | 19:59:33 UTC

SWAN_SYNC is fairly easy to set up and use on Windows.
All you have to do is add the environment variable, set its value to zero, and restart the system. It has to be used in conjunction with a free core to see the full benefit, so just configure BOINC to use one less CPU core.

I think it's easier than creating a cc_config.xml file, but try both. I'm sure you can follow the instructions and add the variable if you try.

The relevant parts from the FAQ - Best configurations for GPUGRID

To setup SWAN_SYNC on Win7:

    Start, right (alternate) click Computer, Click Properties - Opens System Window.
Click Advanced System Settings (left side), then Environment Variables.
    Under System Variables, Click New,
    For the Variable name type SWAN_SYNC
    For the Variable Value type 0

To report tasks immediately on Vista or win 7:
Create or edit the cc_config.xml file in this folder, C:\ProgramData\BOINC\

Just use Notepad to create a .txt file first. Add the following lines:

    <cc_config>
    <options>
    <report_results_immediately>1</report_results_immediately>
    </options>
    </cc_config>

Then 'Save As' and give it the .xml file extension, cc_config.xml (you will have to allow Notepad to save all file types in the Save As window).

Good luck,

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21742 - Posted: 27 Jul 2011 | 17:46:25 UTC - in response to Message 21721.

Take a look at my machines and that should answer your question.

Both GTX460s OC'd: GPU 850, Shaders 1700, Memory 2025, Voltage stock, Max heat 68C


OK. I had a brief flirtation with folding@home - too complicated, so here I am again.

My Asus GTX 460 is up and running, crunching GPUGRID WUs. Also active is Asus's SmartDoctor. With SmartDoctor I set "Engine" to 850, your 'GPU 850'(?). Temperature is at 65C.

With SmartDoctor I can also set "Vcore" and "Memory". Vcore ranges from 1 to 1.087. Memory ranges from (DDR) 3400 to (DDR) 3800.

How do these settings relate to your "Shaders" and "Memory"?

Perhaps I need a different control application?

Thanks, Tom




Betting Slip
Send message
Joined: 5 Jan 09
Posts: 670
Credit: 2,498,095,550
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21744 - Posted: 27 Jul 2011 | 18:16:15 UTC - in response to Message 21742.
Last modified: 27 Jul 2011 | 18:19:38 UTC

Yes, GPU 850 and that should take your shaders to 1700. Memory should be 2025

Leave V Core alone.

Try EVGA Precision. Google it.
____________
Radio Caroline, the world's most famous offshore pirate radio station.
Great music since April 1964. Support Radio Caroline Team -
Radio Caroline

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21745 - Posted: 27 Jul 2011 | 19:33:31 UTC - in response to Message 21744.

Try EVGA Precision. Google it.


That's it! Core 855, Shader 1710, Memory 2027, Temperature 55C.

Thank you!!

Tom


tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21748 - Posted: 28 Jul 2011 | 7:05:21 UTC - in response to Message 21745.

Try EVGA Precision. Google it.


That's it! Core 855, Shader 1710, Memory 2027, Temperature 55C.


Oops:

[screenshot of the EVGA Precision window, not reproduced here]

What I set on the right is not reflected on the left. And I did click "Apply".

What have I done wrong??

Thanks, Tom

Betting Slip
Send message
Joined: 5 Jan 09
Posts: 670
Credit: 2,498,095,550
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21749 - Posted: 28 Jul 2011 | 7:57:45 UTC - in response to Message 21748.

Click "Apply" and check "Apply at windows startup" and reboot.

____________
Radio Caroline, the world's most famous offshore pirate radio station.
Great music since April 1964. Support Radio Caroline Team -
Radio Caroline

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21750 - Posted: 28 Jul 2011 | 8:11:28 UTC - in response to Message 21749.

That looks like the clock speeds were throttled by the NVIDIA drivers.

Your clocks might be slightly too high.
You should also set the power management to maximum performance:

Right-click on the desktop and click NVIDIA Control Panel,
On the left side click Manage 3D Settings,
Find the power management mode setting and change it from adaptive to prefer maximum performance.

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21758 - Posted: 28 Jul 2011 | 14:43:11 UTC - in response to Message 21749.

Click "Apply" and check "Apply at windows startup" and reboot.


That did it. Thank you!

Tom

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21778 - Posted: 1 Aug 2011 | 5:26:51 UTC

I think I'm finally getting the hang of getting the best out of my GTX 460. I only run the long WUs, in about 18.5 hours. After the current WU finishes I'm ready to reboot with SWAN_SYNC=0 and report_results_immediately. I've already freed up a core for the GPU.

One final(?) problem: typically I get a new WU 12 hours before the active WU finishes, so I can never get the 50% bonus unless I nursemaid BOINC's "no new tasks" option. Is there a way to persuade BOINC to download, say, one hour before the active WU finishes?

Thanks, Tom

Dagorath
Send message
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21779 - Posted: 1 Aug 2011 | 5:41:36 UTC - in response to Message 21778.
Last modified: 1 Aug 2011 | 5:43:42 UTC

Is there a way to persuade BOINC to download, say, one hour before the active WU finishes?


If you have 24/7 connectivity then in your preferences set "Connect about every..." to 0 and "Additional work buffer" to 0.1. That's what I have and I get a new task about an hour before the running task finishes.

TylerChris
Send message
Joined: 12 Feb 10
Posts: 11
Credit: 50,020,466
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwat
Message 21780 - Posted: 1 Aug 2011 | 5:45:58 UTC
Last modified: 1 Aug 2011 | 5:54:34 UTC

Hi,
BOINC Manager (advanced view) / Advanced / Preferences / Network usage, then change
"Connect about every" to 0.00 and "Additional work buffer" to 0.05, and click OK to save.
That should do it. :)
Chris.
Edit: beaten by Dagorath :) The buffer is in days, so 0.2 is about 4 hours 48 mins, 0.1 about
2 hours 24 mins, and 0.05 about 1 hour 12 mins.
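
Since the buffer is set in days, a one-liner to convert any value (a small sketch; change d to taste):

    # Convert a work-buffer value in days to hours and minutes
    awk -v d=0.1 'BEGIN { h = d * 24; printf "%dh %dm\n", int(h), (h - int(h)) * 60 }'   # 2h 24m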

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21782 - Posted: 1 Aug 2011 | 6:13:08 UTC - in response to Message 21779.

Is there a way to persuade BOINC to download, say, one hour before the active WU finishes?


If you have 24/7 connectivity then in your preferences set "Connect about every..." to 0 and "Additional work buffer" to 0.1. That's what I have and I get a new task about an hour before the running task finishes.


Great! Thank you!! Did that...

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21783 - Posted: 1 Aug 2011 | 6:15:59 UTC - in response to Message 21780.

Boinc manager (advanced view)/advanced/preferences/network usage/...


I'm running BOINC 6.12.33 and the preferences are under the Tools tab...

Profile Carlesa25
Avatar
Send message
Joined: 13 Nov 10
Posts: 328
Credit: 72,619,453
RAC: 28
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21784 - Posted: 1 Aug 2011 | 10:33:06 UTC - in response to Message 21782.
Last modified: 1 Aug 2011 | 10:34:22 UTC

Is there a way to persuade BOINC to download, say, one hour before the active WU finishes?


Hi all. This is fine, but the most effective way to control when you download tasks is the following.

In the Projects tab select "Do not download new tasks", and when we want to get more work simply select "Allow new tasks".

To make the best use of the time, it is best to wait until the previous tasks have been uploaded and reported before asking for more, especially when long run times put the bonuses in danger of being lost.

As well as controlling when tasks are downloaded, this stops the BOINC client continually asking the server for work only to be sent nothing. Greetings.

Betting Slip
Send message
Joined: 5 Jan 09
Posts: 670
Credit: 2,498,095,550
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21794 - Posted: 3 Aug 2011 | 4:04:43 UTC - in response to Message 21784.

Is there a way to persuade BOINC to download, say, one hour before the active WU finishes?


Hi all. This is fine, but the most effective way to control when you download tasks is the following.

In the Projects tab select "Do not download new tasks", and when we want to get more work simply select "Allow new tasks".

To make the best use of the time, it is best to wait until the previous tasks have been uploaded and reported before asking for more, especially when long run times put the bonuses in danger of being lost.

As well as controlling when tasks are downloaded, this stops the BOINC client continually asking the server for work only to be sent nothing. Greetings.



Just set connect to 0
and additional to 0

and it won't download a new one until the last one is finished
____________
Radio Caroline, the world's most famous offshore pirate radio station.
Great music since April 1964. Support Radio Caroline Team -
Radio Caroline

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21840 - Posted: 17 Aug 2011 | 15:05:28 UTC - in response to Message 21720.

Can you please tell me if the GTX 460 can now perform to its maximum capability on GPUGRID, ... ?

GPUGRID can still use only 2/3 of the shaders of any CC2.1 card (GTX 460, GTX 560, etc.). It is not clear whether GPUGRID will ever be able to use all of the shaders on CC2.1 cards.

Has this problem been fixed in the new version?

Snow Crash
Send message
Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21844 - Posted: 18 Aug 2011 | 7:12:11 UTC - in response to Message 21840.

Unfortunately no.
____________
Thanks - Steve

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21846 - Posted: 18 Aug 2011 | 12:59:07 UTC

Thanks for the reply. It seems like such an easy fix: test for compute capability 2.1, then use 48 shaders per SM instead of 32 for the CC2.1 cards. Kind of makes one wonder. Am I missing something?

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21880 - Posted: 23 Aug 2011 | 15:12:30 UTC

Well - after a month+ of crunching GPUGRID on my Asus GTX 460 I have to say I'm very happy with its performance.

With the core @ 850, shader @ 1700 and memory @ 1800, EVGA Precision tells me the GPU temperature is a constant 65C and the GPU usage is 82-83%. The fan speed is less than 50% and the fan noise is very acceptable; my wife does not notice it!!

I only do "Long Runs - 8-12 hours on fastest cards" and I do them in 10.5 to 16.5 hours. And I get the 50% credit bonus every time! I think that's pretty good for a mid-range card running an application that does not use all the shaders!

Tom


____________

Rantanplan
Send message
Joined: 22 Jul 11
Posts: 166
Credit: 138,629,987
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 22132 - Posted: 18 Sep 2011 | 19:48:27 UTC - in response to Message 17951.

acemdlong_6.15_windows_intelx86__cuda31

This app crashed at the beginning on my Gigabyte GTX460 1GB!

At first I thought it was the 2nd GPU (a GTS 450) that failed, but now I think it was the app that crashed, and there was no overclocking at that moment.

Windows 7 64-bit, NVIDIA 280.26 (33?).

I think that could help to improve the app.

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 22143 - Posted: 21 Sep 2011 | 16:49:45 UTC - in response to Message 22132.

acemdlong_6.15_windows_intelx86__cuda31

This app crashed at the beginning on my Gigabyte GTX460 1GB!


Please give us details of the crash. If you get a blue screen tell us the stop code.


____________

TheFiend
Send message
Joined: 26 Aug 11
Posts: 99
Credit: 2,500,112,138
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 22145 - Posted: 22 Sep 2011 | 11:12:05 UTC

Currently running a GTS450 on one system and, all being well, will be adding a GTX460 to my other system. I'm not a hardened cruncher, but GPUGRID seems to be a good second project to run alongside Docking.

To me the GTX460 seems to be a good value-for-money card with reasonable output.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 22147 - Posted: 22 Sep 2011 | 15:33:54 UTC - in response to Message 22132.

Hi Rantanplan, the error you had was after 35 sec on a bad task; the same task failed on other GPUs as well.

TheFiend, while the GTX460 is a reasonable GPU, the higher-end CC2.0 GPUs are better value (for crunching here) overall. That said, adding a GTS450 to a system with an average PSU might well be your best option; the GTS450 requires only one 6-pin power connector.

TheFiend
Send message
Joined: 26 Aug 11
Posts: 99
Credit: 2,500,112,138
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 22151 - Posted: 23 Sep 2011 | 1:10:45 UTC - in response to Message 22147.

TheFiend, while the GTX460 is a reasonable GPU, the higher end CC2.0 GPU's are better value (for crunching here) overall. That said, adding a GTS450 to a system with an average PSU might well be your best option; the GTS450 requires one 6pin power connector.


Docking is my main project, GPUGRID is just a side project, and I have only just started GPU crunching after cutting down on the number of crunchers I run. Value-wise, for me, a GTX460 fits the bill perfectly - the proceeds of selling a CPU/mobo/RAM bundle will pay for it.

TheFiend
Send message
Joined: 26 Aug 11
Posts: 99
Credit: 2,500,112,138
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 22192 - Posted: 30 Sep 2011 | 18:38:33 UTC

Decided to get an EVGA GTX550Ti instead of a 460.

I know it's not a brilliant card for GPUGRID, but it suits my requirements better.

Profile verlyol-belgium
Send message
Joined: 9 Sep 08
Posts: 34
Credit: 24,784,154
RAC: 0
Level
Pro
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 22273 - Posted: 14 Oct 2011 | 20:55:29 UTC - in response to Message 22192.

This card works well with ACEMD standard WUs

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 22362 - Posted: 24 Oct 2011 | 19:57:55 UTC - in response to Message 22273.

This card works well with ACEMD standard WUs


My ASUS GTX 460 runs "ACEMD for long runs" exclusively, 24/7, and completes them anywhere between 10.5 and 16.5 hours, giving me the 50% bonus every time.

Try it!!

____________

Post to thread

Message boards : Graphics cards (GPUs) : GTX 460
