
Message boards : Graphics cards (GPUs) : nVidia GTX GeForce 770 & 780

Zarck
Message 29752 - Posted: 8 May 2013 | 12:08:11 UTC
Last modified: 8 May 2013 | 12:12:32 UTC

The GeForce 770 & 780 have the same appearance as the TITAN:

http://www.chiphell.com/forum.php?mod=viewthread&tid=743127

@+
*_*
____________

Dylan
Message 29766 - Posted: 8 May 2013 | 23:20:53 UTC

More information here:

http://wccftech.com/nvidia-geforce-700-series-enthusiast-roadmap-leaked-gtx-780-feature-gk110300-gpu-3-gb-memory/

skgiven (Volunteer moderator)
Message 29773 - Posted: 9 May 2013 | 10:40:47 UTC - in response to Message 29766.
Last modified: 9 May 2013 | 10:42:25 UTC

Going by that, and similar speculation, NVidia will drop a trimmed-down version of the Titan into the GeForce 700 series (presumably still CC3.5) and call it a GTX780, re-brand their GTX680 as a GTX770 (probably still CC3.0), and their GTX670 as a GTX760.
While I guess there might be a few performance tweaks, such as a slight bump in frequencies, such branding/marketing strategies are really just another exercise in confusion.

We'd best wait and see what actually turns up (at the end of the month, going by the rumors).
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Zarck
Message 30044 - Posted: 17 May 2013 | 12:11:39 UTC - in response to Message 29773.

GeForce 780: after the card, here is the box.

Available May 23.


Beyond
Message 30051 - Posted: 17 May 2013 | 14:21:55 UTC

Just bought another 650 Ti at the egg yesterday for $85. How much will these cost, and how much faster will they be? Oh, I forgot - they probably won't even run GPUGrid until...???
But joy, they're Windows 8 compatible ;-)

Zarck
Message 30095 - Posted: 19 May 2013 | 22:34:33 UTC - in response to Message 30051.

NVIDIA Publicly Announces the GeForce GTX 780 At GeForce E-Sports

http://wccftech.com/nvidia-publicly-announces-geforce-gtx-780-geforce-esports/

5pot
Message 30435 - Posted: 26 May 2013 | 23:43:26 UTC

As soon as the Microcenter by my house gets one in stock, I'll be purchasing the 780. Been itching to do a new build for a while. Plan on using a Haswell processor, but may end up trying out IB-E. Depends on how much money I feel like spending.

flashawk
Message 30564 - Posted: 30 May 2013 | 17:38:14 UTC

The Gainward GTX 770 Phantom 2GB has a base clock of 1150MHz and a boost of 1202MHz.

The Inno3D iChill GTX 770 HerculeZ X3 Ultra 2GB has a base of 1150MHz and boost at 1210MHz; they must be getting some excellent yields from their chips! Most of the new cards come with the voltages unlocked in the video BIOS, so we'd better be careful or the error rate might skyrocket.

skgiven (Volunteer moderator)
Message 30566 - Posted: 30 May 2013 | 22:32:57 UTC - in response to Message 30564.
Last modified: 30 May 2013 | 22:34:28 UTC

Competitive price and open to manufacturer designs - excellent!
I'm expecting a nice range of these GPUs. For crunching here, the more reference-like models will likely be ~5% faster than a GTX680 (perhaps 10%, as the 7GHz GDDR will make up for the bus), but the more adventurous models could be ~20% faster. The 1.2V comes at the expense/lure of a 230W TDP.

Market wise, I think NVidia have excelled this time...

flashawk
Message 30569 - Posted: 31 May 2013 | 1:33:23 UTC

I can't believe EVGA has 10 GTX770 and 7 GTX780 cards, not including their hydro models. I agree that the GTX770 is going to be a great series for GPUGRID; it's going to be interesting to see how well they do. The 770 will work right out of the box, unlike the Titan or 780 - I think it's using an updated GK104 chip, maybe?

ExtraTerrestrial Apes (Volunteer moderator)
Message 30575 - Posted: 31 May 2013 | 9:44:40 UTC - in response to Message 30569.

I think it's using an updated GK104 chip maybe?

No, it's just using the same GK104. Maybe a newer revision, but this would apply to others as well.

MrS
____________
Scanning for our furry friends since Jan 2002

Beyond
Message 30605 - Posted: 1 Jun 2013 | 12:27:51 UTC - in response to Message 30569.

I can't believe EVGA has 10 GTX770 and 7 GTX780 cards, not including their hydro models. I agree that the GTX770 is going to be a great series for GPUGRID; it's going to be interesting to see how well they do. The 770 will work right out of the box, unlike the Titan or 780 - I think it's using an updated GK104 chip, maybe?

Seriously. Any idea why the 770 will work and not the 780/Titan? I wasn't impressed with the price/performance ratio of the Titan or even the 780 but the 770 looks like a winner.

flashawk
Message 30612 - Posted: 1 Jun 2013 | 15:05:33 UTC

I was just assuming that it would work because it's the GK104 chip, same as the 600s; the Titan and 780 are GK110. I wonder if anyone has bought a GTX770 and tried it - is there any way to find out?

skgiven (Volunteer moderator)
Message 30621 - Posted: 1 Jun 2013 | 21:47:43 UTC - in response to Message 30612.

Going only by the first extremely short Betas to successfully run on a Titan, it doesn't look any faster than a GTX680. It might take some time to get the most from it - architecturally it's a very different beast than a GK104, whereas a GTX770 is basically a souped-up GTX680. I can't think of any reason why a GTX770 wouldn't work straight out of the box, but we can't say for sure until someone tests one. The Titans, on the other hand - I'm not keen on them so far.


5pot
Message 30622 - Posted: 1 Jun 2013 | 21:51:52 UTC
Last modified: 1 Jun 2013 | 21:52:31 UTC

Just bought a 780 (non-reference) today. Will be buying a Haswell tomorrow, and building everything between Monday and Tuesday. Ugh, this stuff gets expensive, lol. Fun to build though.

Anyways, I'm not surprised the beta is running around where a 680 is. It took GPUgrid quite some time to get the first batch of Keplers working properly. I just hope they can eventually get the most out of these cards. They offer quite a bit more than the GK104 parts in terms of, well, everything - all operating at pretty much the same clock rate (save maybe 50MHz or so).

Will be starting betas when I can.

Cheers.

Beyond
Message 30625 - Posted: 1 Jun 2013 | 22:04:55 UTC - in response to Message 30621.

Going only by the first extremely short Betas to successfully run on a Titan, it doesn't look any faster than a GTX680. It might take some time to get the most from it - architecturally it's a very different beast than a GK104, whereas a GTX770 is basically a souped-up GTX680. I can't think of any reason why a GTX770 wouldn't work straight out of the box, but we can't say for sure until someone tests one. The Titans, on the other hand - I'm not keen on them so far.

So it might turn out that a GTX 770 is faster than a GTX 780 or a Titan at GPUGrid, at least for a while. Maybe longer. Interesting though.

skgiven (Volunteer moderator)
Message 30626 - Posted: 1 Jun 2013 | 22:22:33 UTC - in response to Message 30625.

The GPUGrid devs might need to change the app to better accommodate the Titans.
In my opinion the Titan is too expensive compared to the GTX770 and GTX780. The GTX780 may or may not turn out to be a good card, but it's really wait-and-see time. I just can't see the GTX770 not being a good card (if set up correctly). Tuning the GTX770 might turn out to be an issue, but this is a hands-on project, and the Titans might have higher RTM rates. I hope that both the GTX780 and GTX770 turn out to be good cards for GPUGrid, and that the top Titan doesn't fail as much as expected (perhaps better drivers, apps and tuning will combine to make it a good GPU).

I'm looking forward to the rest of the range, hoping that a mid-range GF700 GPU turns up with high GDDR bandwidth.

Vagelis Giannadakis
Message 30630 - Posted: 2 Jun 2013 | 7:35:16 UTC - in response to Message 30626.

If and when the GPUG devs start supporting the new top-end cards, I hope they do it in a backwards-compatible manner. I'd hate to see the performance of today's mid-range cards drop (5xx and 6xx, maybe even some 4xx), just for the sake of a few contributors that may have one of the new beasts. We're not all ready (yet) to dish out e.g. 500 euro for a 680 or 400 for a 770. Heck, I'm not even ready to give 200 euro to buy a 660!

Of course, that's just me, my priorities, my budget and my use pattern of the card: no gaming, just crunching.

PS: A 770 next year? Maybe...

5pot
Message 30634 - Posted: 2 Jun 2013 | 14:05:24 UTC

They wouldn't drop support for older series JUST for those cards, so no worries there. They do offer a nice performance bump, however. First step is to get them crunching, then they just need to optimize.

Anyone know the theoretical performance increase for the GPUgrid app?

ExtraTerrestrial Apes (Volunteer moderator)
Message 30656 - Posted: 4 Jun 2013 | 20:31:27 UTC

Guys, relax! Titan is not that different. It's got the same basic SMXs, just with a few more registers per thread and some new functionality, which will surely not be used by GPU-Grid because that would break backwards compatibility.

And don't judge performance using the numbers from the beta. Even on my relatively small GTX660Ti these usually have very low GPU utilization, which I take to come from the fact that they use very small molecules for such tests. This makes it harder for larger cards to stretch their wings (use all their shaders), but is sufficient for a quick test to determine if it works at all.

MrS

skgiven (Volunteer moderator)
Message 30675 - Posted: 5 Jun 2013 | 21:00:27 UTC - in response to Message 30634.

Anyone know the theoretical performance increase for the GPUgrid app?

Not for the app, but for SP the GTX780 is theoretically 3977/3213 ≈ 23.8% faster than the GTX770. However, it's ~60% more expensive!
How it actually pans out for the apps here is still unknown, and there are some obvious differences besides the GK104 vs GK110 architecture - the 780 has a 50% wider bus, and the 770 has 16.6% faster GDDR (hence the 230W TDP compared to the GTX680's 195W).
In theory the Titan is 40% faster than the GTX770, but costs 2.5 times as much.
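For anyone wanting to check those theoretical figures: they follow from cores × clock × 2 FLOPs per cycle (one fused multiply-add per shader per clock). A quick sketch, using NVidia's published shader counts and base clocks for these cards:

```python
# Theoretical single-precision throughput: shader cores x clock x 2 FLOPs
# per cycle (one fused multiply-add per core per clock).
def sp_gflops(cores, clock_mhz):
    return cores * clock_mhz * 2 / 1000.0  # GFLOPS

gtx770 = sp_gflops(1536, 1046)  # ~3213 GFLOPS
gtx780 = sp_gflops(2304, 863)   # ~3977 GFLOPS
titan  = sp_gflops(2688, 837)   # ~4500 GFLOPS

print(round(gtx780 / gtx770 - 1, 3))  # ~0.238 -> GTX780 ~23.8% faster
print(round(titan / gtx770 - 1, 2))   # ~0.40  -> Titan ~40% faster
```

Note this is peak arithmetic throughput only; as discussed above, bus width and GDDR speed can shift the real-world picture either way.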

5pot
Message 30678 - Posted: 5 Jun 2013 | 23:25:09 UTC

Price, while important overall, wasn't really on my list when making the decision. I just really wanted a GK110 :). They also clock REALLY well. My EVGA SC ACX (the twin-fan one) came with a default boost rate of 1100MHz. I was quite impressed.

I'm just patiently waiting for the GPUgrid app. That is a rather nice performance increase. So far, it's doing pretty well on the Folding@home CORE 17 app.

I am also looking forward to Maxwell, next year(?).

Midimaniac
Message 30958 - Posted: 24 Jun 2013 | 5:46:41 UTC

Someone wanted to know if anybody had a GTX 770 yet.

I bought a 770 SuperClocked w/ACX cooler a few weeks ago. Looks like I made the right choice. The card is working fantastic. Runs fast, cool and quiet. No problems with install or anything else. I have Win7 Pro w/32GB RAM on a i7-3770k running at 4.3 GHz. Don't play computer games. I got the card to turbocharge my graphics experience, to shorten time spent encoding video and so I can do number crunching! I am new here and it is a good feeling to be able to contribute.

I have had a few errors in my WU's, but I know why. I left the clock rate on the card alone, but several times while tweaking my CPU I crashed the system. It took me a while to realize that if BOINC was running, the WU was ruined at that point; I went ahead and finished those WU's anyway and they were uploaded. It took me another while to quit fooling with my CPU, especially when BOINC is running! Now I just leave the CPU alone, as it runs rock solid at 4.3GHz, and every project is turned in with no problems.

The specs of the 770 SC say it has a base clock of 1111MHz and a boost of 1163MHz. The card seems to set its clock rate all by itself. Is that the GPU Boost 2.0 doing that? I like to run Rosetta@home and GPUG simultaneously. Rosetta won't touch the GPU so GPUG has the 770 all to itself. According to my monitoring, with Rosetta & GPUG running I am maxing all 8 CPU cores at 100% for 100% of the time, and the GPU load is as follows:

The 770 clocks itself to 1188MHz and 1187mv. With ambient temp of 76 deg F the card temp is 57 deg C and 80% fan speed (under my fan curve). If I ramp the fan up to 100% the temp comes down a few degrees, but at 3200rpm the fan is a little noisy and 57 is a very nice temp to be at anyway. The GPU Load is mostly 80%, and GPU Power runs 58-62% TDP. I don't know why the card ramps up to 1188MHz, because the boost clock is 1163 and I don't have any offsets set. I am just using the card the way it came, although I did change the fan curve.

Hope I've answered your question. At least in my system this card is rock solid stable and just seems to sort of purr effortlessly along. One more comment: The long WU's are being finished in around 9.5 - 10 hours, so I guess that's pretty fast. Gosh, I just looked and the long run WU I'm working on now has 5 minutes left and has been running for 8:32:00, so that's only 8.62 hrs of CPU time start to finish!
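The runtime arithmetic above checks out: 8:32:00 elapsed plus the remaining ~5 minutes is 8:37:00, or about 8.62 hours. A trivial sketch of the conversion:

```python
# Convert an h:m:s runtime to decimal hours, then add the ~5 min remaining
# to the 8:32:00 already elapsed (figures from the post above).
def hms_to_hours(h, m, s=0):
    return h + m / 60 + s / 3600

total = hms_to_hours(8, 32) + hms_to_hours(0, 5)
print(round(total, 2))  # 8.62
```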

TJ
Message 30961 - Posted: 24 Jun 2013 | 8:36:29 UTC - in response to Message 30958.

I bought a 770 SuperClocked w/ACX cooler a few weeks ago.

Nice card! How long, on average, does a Nathan take on it?
____________
Greetings from TJ

Midimaniac
Message 30966 - Posted: 24 Jun 2013 | 15:28:25 UTC - in response to Message 30961.

Hi TJ,

Unless I am mistaken, the Nathans you asked about are, in fact, the long WU's I was referring to in the last paragraph of my post. I don't really pay too much attention to the speed of GTX770, but to better answer your question I looked at my stats page. Here is a link so you can see for yourself:
http://www.gpugrid.net/results.php?userid=97668

The CPU Times of my last 4 Nathans completed (6980022, 6978354, 6972456, and 6968785) were 29,932sec, 30,398sec, 31,547sec, and 31,741sec respectively, for an average of 30,905sec or 8.58 hours.

This is much quicker than what I suggested in my earlier post and right in line with what I was observing in real time in the last paragraph of that post.
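The average quoted above is just the mean of the four CPU times pulled from the stats page:

```python
# Mean runtime of the four Nathan long WUs listed above (seconds).
times = [29932, 30398, 31547, 31741]
avg = sum(times) / len(times)
print(round(avg))            # 30905 (actually 30904.5 s)
print(round(avg / 3600, 2))  # 8.58 hours
```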

Midimaniac
Message 30967 - Posted: 24 Jun 2013 | 15:47:00 UTC - in response to Message 30966.

Hi again TJ,

I just noticed that there was an error in one of my tasks last night with the GTX770. The EVGA software (EVGA Precision 4.2.0, and EVGA NV-Z 0.4.10) that I was monitoring with crashed last night. The card itself and BOINC kept right on running, so the task completed and was uploaded as I slept. This has to be the cause of the error. Seems very weird to me. Why would the software crash? The card's memory load never goes over 18-20% and MCU load stays around 24%. The EVGA Precision software allows me to slow the card down if I have to. Will have to keep an eye on this.

TJ
Message 30971 - Posted: 24 Jun 2013 | 16:40:16 UTC - in response to Message 30966.

Hi Midimaniac,

Great info, this helps me a lot.
Then your card is approx. 17,000 seconds faster than my (not overclocked) 660.
The 770 is a bit expensive in Europe, but I have a rig that should fit two of them easily. (If I replace the liquid cooling with a large CPU cooler that rig can run 24/7, and I don't need a new rig just now (but this should be on another thread)).
I'll start saving money and will buy two 770s after the summer.

Your link doesn't work (anymore) "no access" is the message when clicking on it.

Happy crunching.

ExtraTerrestrial Apes (Volunteer moderator)
Message 30972 - Posted: 24 Jun 2013 | 16:52:48 UTC - in response to Message 30967.

Nice to hear your card is working well! About the clock speed: nVidia states the typical expected turbo clock speed, not the maximum one. What is reached in reality depends mostly on the load itself and GPU temperature.

Regarding your crash: 2 utilities constantly polling the GPU's sensors can cause problems. I actually leave none of them on constantly. I minimize GPU-Z and set it not to continue refreshing when in the background. If I want a current reading I pop it up again. For continuous monitoring I'd use only one utility and set the refresh rate not too high, maybe every 10 s.

The EVGA Precision software allows me to slow the card down if I have to

If you do so, lower the power target to make the card boost less. This way clock speed and voltage are reduced, so power efficiency is improved. Actually, 1.187 V is a bit much for 1.19 GHz. For comparison, my GTX660Ti (same chip, just a bit older and with one shader cluster deactivated) reaches 1.23 GHz at 1.175 V. Close, but it shows you'll probably have some headroom left in that card (which could be used either for a higher offset clock, or a tad lower voltage at stock clock).

MrS

skgiven (Volunteer moderator)
Message 30973 - Posted: 24 Jun 2013 | 16:53:10 UTC - in response to Message 30967.

Interesting crash report.
I'm also seeing MSI Afterburner crash occasionally. The fan/temp profile seems to stick, but the Afterburner taskbar icon is removed. It opens again without issue, but when it crashes it doesn't kill WU's (that I've noticed; it was the first thing I checked). The problem might be related to the 320.x drivers. I'm going to reinstall Afterburner just in case it's something to do with moving GPU's around inside the case...

The last NATHAN_KIDc22_noPhos WU I returned on my GTX660Ti took just under 35K seconds (9.7h):

I5R8-NATHAN_KIDc22_noPhos-9-10-RND0487_0 4542508 23 Jun 2013 | 21:01:47 UTC 24 Jun 2013 | 6:56:23 UTC Completed and validated 34,881.54 34,462.74 133,950.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)

With your setup a NATHAN_KIDc22_noPhos took 8.5h:

I16R9-NATHAN_KIDc22_noPhos-8-10-RND1605_1 4538546 22 Jun 2013 | 18:06:43 UTC 23 Jun 2013 | 10:52:15 UTC Completed and validated 30,617.77 29,931.82 133,950.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)

So your GPU is 14% faster than my GTX660Ti (1189MHz).
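The 14% figure comes straight from the two validated run times listed above:

```python
# Relative speed from the two validated NATHAN_KIDc22_noPhos run times (s).
gtx660ti = 34881.54  # skgiven's GTX660Ti @ 1189MHz
gtx770   = 30617.77  # Midimaniac's GTX770
print(round(gtx660ti / gtx770 - 1, 2))  # 0.14 -> GTX770 ~14% faster
```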

8 CPU cores at 100%... GPU Load is mostly 80%, and GPU Power runs 58-62% TDP

Your system appears to be better optimized for CPU usage than GPU usage.
I'm using 3 GPU's (not running CPU tasks). The GTX660Ti (PCIE2 @X2) has 88% GPU usage and 85% power (running a NATHAN_KIDc22_SODcharge or a NATHAN_KIDc22_noPhos) and the other two are ~93% GPU usage and 92 to 94% power (NATHAN_KIDc22_full and NATHAN_KIDc22_noPhos).


flashawk
Message 30975 - Posted: 24 Jun 2013 | 17:03:00 UTC - in response to Message 30971.

I don't think he understood completely - the longest NATHANs give 167,550 points, and it looks as though he's only done one; it took him 10.55 hours. That's a little slower than my 770, by about half an hour.

Midimaniac, I think you should have a higher GPU load; mine's at 95% with a memory load of 31%. I had to downclock the 770 because of the driver issues, and my 680's are faster right now (the 770 was completing the long NATHAN_KIDc22 in about 9 hours flat).

Beyond
Message 30978 - Posted: 24 Jun 2013 | 18:11:32 UTC - in response to Message 30973.

Interesting crash report.
I'm also seeing MSI Afterburner crash occasionally. The fan/temp profile seems to stick, but the Afterburner taskbar icon is removed. It opens again without issue, but when it crashes it doesn't kill WU's (that I've noticed; it was the first thing I checked).

Probably not crashing - the taskbar icon is just disappearing. I've also seen the icon disappear while Afterburner is still operating. According to Unwinder, the icon disappears because of a notify bug in Windows. He's planning workarounds in the next Afterburner version.

skgiven (Volunteer moderator)
Message 30980 - Posted: 24 Jun 2013 | 18:33:26 UTC - in response to Message 30978.
Last modified: 24 Jun 2013 | 20:24:17 UTC

Definitely an app crash in my case (though I think I've also experienced what you describe where the icon just vanishes):

    Faulting application name: MSIAfterburner.exe, version: 2.3.1.0, time stamp: 0x50f6cecb
    Faulting module name: nvapi.dll, version: 9.18.13.2014, time stamp: 0x518965f4
    Exception code: 0xc0000094
    Fault offset: 0x001ab9a4
    Faulting process id: 0x11c8
    Faulting application start time: 0x01ce6dd888a0b914
    Faulting application path: C:\Program Files (x86)\MSI Afterburner\MSIAfterburner.exe
    Faulting module path: C:\Windows\system32\nvapi.dll
    Report Id: b6db0dcf-d9e9-11e2-a7c5-d43d7e2bd120



Something else I've noticed since moving to the latest driver is that I can't adjust the fan speed to >73% in MSI Afterburner! This might have been part of the issue I had with the GTX650Ti failing some tasks - it was ~69°C while the other cards were ~60°C.
... Went back to 314.22 and Afterburner 2.3.0 and managed to reset everything and then redefine settings. I can now go past 73% fan speed on the 650TiBoost (which is closest to the OC'ed CPU). Didn't lose the WU's either :)

Midimaniac
Message 30984 - Posted: 24 Jun 2013 | 21:24:15 UTC - in response to Message 30975.

So many replies all at once!

MrS,

A really helpful reply, thank you. The info on the clock speed that you presented makes sense.

Regarding your crash: 2 utilities constantly polling the GPU's sensors can cause problems. I actually leave none of them on constantly. I minimize GPU-Z and set it not to continue refreshing when in the background. If I want a current reading I pop it up again. For continuous monitoring I'd use only one utility and set the refresh rate not too high, maybe every 10 s.
Two? How about four? In addition to the 2 EVGA utilities I also had TThrottle and RealTemp running as well! Your advice is well taken - I will close everything but TThrottle. Uh-oh, now that I think about it, EVGA Precision needs to be running for the custom software fan curve to be enabled. When the software isn't running, the hardware takes over and the card runs hotter. This A.M. when I woke, the GTX770 was running at 61 C, which is only 4 degrees hotter than my software fan curve would have allowed, so no big deal. Maybe I can just minimize EVGA Precision? I will cut the polling back in all my software to about 10 sec as you suggested.

...you'll probably have some headroom left in that card (which could either be used for a higher offset clock, or a tad bit lower voltage at stock clock).
Apparently I can't lower just the voltage in EVGA Precision - all I can do is raise it! That seems kinda dumb! But I can lower the Power Target, which would lower the voltage as you suggested. And I can also adjust the GPU & Mem clock offsets up or down. I have a feeling that polling the hardware from four different programs caused the problem last night! It is interesting that the two EVGA applications report a voltage of 1187mV when crunching hard, while CPUID Hardware Monitor reports only 900mV. It's interesting because at idle EVGA reports 861mV and CPUID reports 862mV, almost exactly the same voltage.

Flashawk-
Thanks for the insight into the Nathans. I see what you mean.

The lower GPU load I have is apparently because I'm running Rosetta@home together with GPUG and maxing out my CPU to 100% on all cores. I just now suspended Rosetta so that only GPUG is running, and my GPU stats immediately changed. Apparently having Rosetta running starves the GPU of the CPU time it requires to run GPUG at full capacity. Interesting.

GPU load goes up with Rosetta suspended:
GPU Load: 92%
GPU Power: 66-67% TDP
Temp went up 1 degree to 58C

GPU load goes down with Rosetta running:
GPU Load: 84-86%
GPU Power: 64.1% TDP
Temp is back down to 57C

So apparently running GPUG by itself would yield the quickest times for completion of GPUG tasks. Interesting. I suppose I could time a few GPUG tasks to see how fast the card turns them over when it has the CPU to itself, but I probably won't. It's not something that really matters a lot, right?

skgiven (Volunteer moderator)
Message 30985 - Posted: 24 Jun 2013 | 22:27:28 UTC - in response to Message 30984.

So apparently running GPUG by itself would yield the quickest times for completion of GPUG tasks. Interesting. I suppose I could time a few GPUG tasks to see how fast the card turns them over when it has the CPU to itself, but I probably won't. It's not something that really matters a lot, right?

Depends what you want - do you want to do more work for GPUGrid or Rosetta? It's entirely up to you, but for reference:
Your GPU is presently 14% faster than my GTX660Ti. If you did nothing other than stop crunching CPU tasks, it would be 23% faster than my GPU, just going by GPU usage. As this isn't necessarily accurate, you would need to run a few work units without the CPU being used to get an accurate measurement; your GPU would probably come out more than 23% faster than mine, more so if you tweaked everything else towards GPU crunching. Many people tend to go for a reasonably happy medium of, say, 6 or 7 CPU tasks and 1 GPU task. I have 3 GPU's in the one system (one on heels) so I want to give them every opportunity of success.

TJ
Message 30988 - Posted: 24 Jun 2013 | 23:28:36 UTC - in response to Message 30984.

I have some remarks as well for you Midimaniac.

I have stopped TThrottle; it is a great program, but if you throttle anything, CPU and/or GPU, then the GPU load will flip from heavy to low load, which will result in a GPU WU taking longer to finish. You can check that easily with EVGA NV-Z. The result is that my CPU is around 71°C (too hot for liquid cooling), but I will accept this for now as it doesn't run 24/7.

Secondly, in EVGA Precision you have the option "GPU clock offset", which can be plus or minus. You can also click on "voltage" on the left side of the program, under "test" and "monitoring". A new window will pop up and you can lower the voltage. EVGA has neat software; that is one reason I like that brand.

Finally, I keep one CPU core free. So 6 are doing Rosie, 1 (0.669) is doing GPUGRID and 1 does nothing (perhaps Windows system things). CPU usage is about 88%. My GTX660 finishes a Nathan LR smoothly in about 12.5 hours, but your GTX770 is way faster, so your RAC should increase significantly in the next days :)
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1104
Credit: 6,101,732,079
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30989 - Posted: 25 Jun 2013 | 0:04:11 UTC - in response to Message 30988.
Last modified: 25 Jun 2013 | 0:05:04 UTC

I have stopped using TThrottle. It is a great program, but if you throttle anything, CPU and/or GPU, the GPU load will flip between heavy and low load, which makes a GPU WU take longer to finish.

TThrottle is best used as a safeguard against overheating for incidents such as fan failure or extremely hot environmental conditions in which you cannot access the machine. It is not a good alternative to a fan control program such as Afterburner. Think of it as a safeguard in catastrophic conditions. It's also perfect for transmitting CPU & GPU temps to BoincTasks so you can monitor them from one client.

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,550,027,797
RAC: 1,823,662
Level
Arg
Scientific publications
watwatwatwatwatwatwatwat
Message 30990 - Posted: 25 Jun 2013 | 0:04:50 UTC - in response to Message 30988.

The result is that my CPU is around 71°C


That is really high for water or even air cooling. I have water cooling on my CPUs and GPUs, and my FX-8350s run from 38° to 42°, and those are chips that run pretty hot.

Has anybody tried disabling Hyper-Threading and running all their GPUs alongside CPU tasks, to see if that lowers the GPU utilization? I don't experience any issues when running all eight cores flat out with Rosetta or CPDN, though I have noticed that what BOINC says and what other applications say differ greatly. When running GPUGRID, my CPU utilization is from 99.42% to 99.56% for each core that feeds a GPU. I would try it, but the last Intel CPU I bought and used was a socket 370 Pentium III 800EB (I have a PIII 1.1GHz but never used it; Intel wanted it back).

5pot
Send message
Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 30991 - Posted: 25 Jun 2013 | 0:22:30 UTC

That is getting kind of toasty.

As a side note, I try to aim for around 50-60 on air. This is with a bullish OC though.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,348,955
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30992 - Posted: 25 Jun 2013 | 0:45:43 UTC - in response to Message 30990.

Has anybody tried disabling Hyper-Threading and running all their GPUs alongside CPU tasks, to see if that lowers the GPU utilization? I don't experience any issues when running all eight cores flat out with Rosetta or CPDN, though I have noticed that what BOINC says and what other applications say differ greatly. When running GPUGRID, my CPU utilization is from 99.42% to 99.56% for each core that feeds a GPU. I would try it, but the last Intel CPU I bought and used was a socket 370 Pentium III 800EB (I have a PIII 1.1GHz but never used it; Intel wanted it back).

Yes I have, as I bought a refurbished rig with 2 Xeons without HT (not possible on them). When running a GPU task and no CPU tasks, the GPU gets a steady load. When adding one core at a time for CPU crunching, the GPU load drops: to below 35% with 7 cores and to zero with 8 cores crunching CPU. At that point the cores only very occasionally give some time to the GPU WU.
____________
Greetings from TJ

Midimaniac
Send message
Joined: 7 Jun 13
Posts: 16
Credit: 41,089,625
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwat
Message 30993 - Posted: 25 Jun 2013 | 0:50:17 UTC - in response to Message 30988.

TJ & skgiven,

OK, I am starting to catch on thanks to your help. I am quite certain that in compute preferences I told each project it could use 100% of the processors 100% of the time. From what you have said, I can go in there and set Rosie to 7 CPU cores and put GPUG on 1 core. That would certainly allow GPUG to reach its full potential. Right now when I click "run based on preferences" I see 8 Rosie tasks running and 1 GPUGRID task running. The status line for the GPUG task says "Running (0.721 CPUs + 1 NVIDIA GPU)". If I change my preferences and lower the percentage given to Rosetta, the GPU should load up better. Good to know. On the other hand I'm quite happy to run things the way they are. I like keeping the GPU temp down as far as possible.

TJ,

Thanks for the info regarding TThrottle. However, I don't use it to throttle anything; I don't need to. I only use TThrottle as a safeguard against possibly excessive temps. I have the CPU & GPU set to throttle at 70°C because they should normally never reach that temp. I think TThrottle is a way cool program for this reason. And of course it has those kick-ass customizable graphs!

When I built this computer last month I installed a Xigmatek "Gaia" cooler on the CPU and it seems to be doing a fantastic job (for only $30). The case is a Corsair Obsidian 650D with a 200mm front intake and a top-mounted 200mm exhaust that pulls the hot air straight up out of the case. I have the rear 120mm fan reversed to provide intake. This gives the case positive air pressure (more air going in than going out) and makes a HUGE difference in how much dust accumulates inside the case. One side benefit of reversing the rear fan to provide intake was that the Gaia CPU cooler is rather large and its position puts it directly in front of and about 2" from the rear fan. Is that cool, or what? So instead of getting second-hand hot air from the inside of the case, the CPU cooler is being directly blasted by cool air from the outside! (I can't quit grinning about how well that worked out). So, all in all I have excellent cooling. The top of the case can be set up with 1 x 200mm, 2 x 120mm or 2 x 140mm fans. (Did I say I really love this case?) If I ever need more cooling I would likely get 2 Noctua 140mm fans for the top. One of their 140mm fans moves as much air at 800rpm as the Aerocool 200mm LED fan that I am using now, and the noise level of the Nocs is only 12dB(A) per fan at full speed.

Secondly, in EVGA Precision you have the option "GPU clock offset", which can be plus or minus. You can also click on "voltage" on the left side of the program, under "test" and "monitoring". A new window will pop up and you can lower the voltage. EVGA has neat software; that is one reason I like that brand.
I don't get it. Are you sure about lowering the voltage?? When I click on Voltage a new window pops up but lowering the V is not possible. All you can do is click overvoltage and then drag the arrow up to raise the voltage. My software revision is the new one: 4.2.0.

I see from my stats that I already have 2.1 million credits at GPUG and I have only had the one crash last night that I was not directly responsible for, so I guess I'm currently doing pretty good.

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,550,027,797
RAC: 1,823,662
Level
Arg
Scientific publications
watwatwatwatwatwatwatwat
Message 30994 - Posted: 25 Jun 2013 | 1:56:54 UTC - in response to Message 30992.

Yes I have, as I bought a refurbished rig with 2 Xeons without HT (not possible on them). When running a GPU task and no CPU tasks, the GPU gets a steady load. When adding one core at a time for CPU crunching, the GPU load drops: to below 35% with 7 cores and to zero with 8 cores crunching CPU. At that point the cores only very occasionally give some time to the GPU WU.


Okay, I understand now; I didn't realize that the new rig you bought didn't have Hyper-Threading. I think it's strange that only the Intel processors experience this throttling and the AMD processors don't (at least in my case). I have removed 2 cores from BOINC so they can only be used by the video cards and nothing else in BOINC, and I'm assuming everyone else has done this too.

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwat
Message 30997 - Posted: 25 Jun 2013 | 12:14:30 UTC

Intel i3/i5/i7 CPUs can reach VERY high temps (~90°C) quite easily with their stock Intel coolers! I don't understand why 70°C under continuous crunching load is considered too hot or toasty. Isn't a margin of 20°C below Intel's deliberately chosen upper operating temperature a safe distance?

As evidence to back this up I have my home server, crunching on WCG for years now, day in, day out, at temps ranging from ~55°C to ~70°C.

I think people tend to over-react or go the extra mile(s) on "optimal" cooling without a real reason.
____________

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,348,955
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30999 - Posted: 25 Jun 2013 | 13:08:00 UTC - in response to Message 30997.

Intel i3/i5/i7 CPUs can reach VERY high temps (~90°C) quite easily with their stock Intel coolers! I don't understand why 70°C under continuous crunching load is considered too hot or toasty. Isn't a margin of 20°C below Intel's deliberately chosen upper operating temperature a safe distance?

As evidence to back this up I have my home server, crunching on WCG for years now, day in, day out, at temps ranging from ~55°C to ~70°C.

I think people tend to over-react or go the extra mile(s) on "optimal" cooling without a real reason.

You are quite right about the temperatures, but with liquid cooling 71°C is indeed toasty.
Secondly, running at such high temperatures will significantly reduce the lifespan of a CPU, and even more so of the circuit boards. So good/efficient cooling is a must for crunchers, especially 24/7 ones.

____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,348,955
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31000 - Posted: 25 Jun 2013 | 13:26:12 UTC - in response to Message 30993.
Last modified: 25 Jun 2013 | 13:33:14 UTC

TJ & skgiven,

OK, I am starting to catch on thanks to you guys help. I am quite certain that in compute preferences I told each project that they could use 100% of the processors 100% of the time. From what you guys have said I can go in there and set Rosie to 7 cpu cores and put GPUG on 1 core. That would certainly allow GPUG to go to its full potential. Right now when I click "run based on preferences" I see 8 Rosie tasks running and 1 GPUGRID task running. The status line for the GPUG task says "Running (0.721 CPUs + 1 NVIDIA GPU". If I change my preferences and lower the percentage given to Rosetta the GPU should load up better. Good to know. On the other hand I'm quite happy to run things the way that they are. I like keeping the GPU temp down as far as possible.

I have set, under Tools, Computing preferences, Processor usage to 75%. This leaves one core doing nothing, and in my case it results in slightly higher GPU load. This setting applies to all tasks that run on the CPU, not just Rosie. As far as I know you cannot set that per project; you can only give a project more time by raising its share, but that will not affect CPU usage.
By the way, you have a cool case, and by cool I mean it the way Americans often use the word when something is great :)

Secondly, in EVGA Precision you have the option "GPU clock offset", which can be plus or minus. You can also click on "voltage" on the left side of the program, under "test" and "monitoring". A new window will pop up and you can lower the voltage. EVGA has neat software; that is one reason I like that brand.
I don't get it. Are you sure about lowering the voltage?? When I click on Voltage a new window pops up but lowering the V is not possible. All you can do is click overvoltage and then drag the arrow up to raise the voltage. My software revision is the new one: 4.2.0.


Yes, same software here on two rigs. When I click on Voltage I get the same pop-up window and can drag the arrow with the mouse up and down from 1150 to 825 mV.
I can set it at, say, 900 mV, click Apply and then click the red cross, and it works. But I have other cards, which could have to do with it; nothing higher than a GTX660.
____________
Greetings from TJ

5pot
Send message
Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 31004 - Posted: 25 Jun 2013 | 15:15:10 UTC

TJ is correct, just because the CPUs can hit and run at 70, doesn't mean they should 24/7/365. It WILL shorten the lifespan.

Midimaniac
Send message
Joined: 7 Jun 13
Posts: 16
Credit: 41,089,625
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwat
Message 31009 - Posted: 25 Jun 2013 | 15:48:52 UTC - in response to Message 31004.

TJ-

Thanks for that info on voltage in the EVGA software. I have tried and tried, but can't get the arrow to go down. I have dragged on the arrow and dragged on the bars, nothing seems to work. There is a GPU tweaker that came with my Asus motherboard. I noticed that it won't allow me to lower the voltage either. I think this might be because I have the super-clocked edition of the card. I'm not going to worry about it- the card is working well and running cool anyway.

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,067,660
RAC: 156,042
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31010 - Posted: 25 Jun 2013 | 15:54:48 UTC - in response to Message 31009.

The 320.x driver is probably preventing this, going by the 60-odd page manual.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,550,027,797
RAC: 1,823,662
Level
Arg
Scientific publications
watwatwatwatwatwatwatwat
Message 31011 - Posted: 25 Jun 2013 | 16:02:28 UTC - in response to Message 31009.
Last modified: 25 Jun 2013 | 16:06:50 UTC

You need to remember that the GTX770 has a TDP of 230 watts while a GTX680 has a TDP of 195 watts, they upped the default max voltage from 1.175v to 1.187v.

Mine's still on air at this point and averages 57° to 63° with the ACX cooler. I wish they would hurry up and release the next driver already.

Edit: SK, did you get a chance to read that manual? Probably some lite reading, eh?

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2688
Credit: 1,172,901,099
RAC: 370,180
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31014 - Posted: 25 Jun 2013 | 18:00:08 UTC

Regarding the voltage: as you said, under crunching load 1.187 V is displayed. This is actually the voltage for about the highest turbo bin. If you set a voltage of 1.175 V in Precision, this should limit the maximum used. But as you say, your card runs pretty well as it is, so I'd rather use the headroom left in the chip for some OC without voltage increases. A good 50 MHz should surely be left. Or use the power target to drive power consumption down and efficiency up, if needed. In this case the card will choose lower clocks & voltages itself.

BTW: I'm also glad EVGA makes the nice Precision software (actually a GUI for RivaTuner made by someone else, just like MSI Afterburner). And that it also works with my non-EVGA cards!

Regarding the temperature: higher temperatures do reduce a chip's lifetime. It's a guaranteed, continuous process. The rule of thumb is half the lifetime for every 10°C increase. And while it's true that manufacturers allow their CPUs and GPUs to hit ~90°C, this doesn't mean it's good that way. They're simply not assuming 24/7 load for consumer-grade hardware.

Whether this lifetime reduction will matter at all within the useful life of the hardware in question is a different question entirely, and cannot be answered generally. CPUs have historically been able to take quite a beating before one starts to notice the degradation (the chip will only reach lower and lower clocks under otherwise similar conditions, or need more voltage to reach the same levels as before).

On the other hand I've burned out my first Radeon X1950Pro crunching Folding@Home within a few months of running 24/7 at ~70°C. There was surely some bad luck involved here, but I wouldn't want to run higher than 70°C under sustained load.
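The halving-per-10°C rule of thumb mentioned above can be sketched numerically. This is only the rough heuristic from the discussion, not a manufacturer figure, and the 60°C baseline is an arbitrary reference chosen for illustration:

```python
# Rough heuristic from the discussion above: every +10°C of sustained
# operating temperature roughly halves the expected chip lifetime.
# The 60°C baseline is an arbitrary reference point, not a spec.

def relative_lifetime(temp_c, baseline_c=60.0):
    """Expected lifetime relative to running at baseline_c."""
    return 0.5 ** ((temp_c - baseline_c) / 10.0)

if __name__ == "__main__":
    for t in (60, 70, 80, 90):
        print(f"{t}°C -> {relative_lifetime(t):.3f}x lifetime")
```

By this crude rule, a card held at 70°C around the clock would be expected to last about half as long as one kept at 60°C, and one at 90°C about an eighth as long.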

GPU-Grid performance vs. CPU threads: in my testing I reached full performance on my single GPU when I limited BOINC to 3 CPU WUs running on my quad i7 with HT on. That means a full physical core dedicated to supporting the GPU. Yet I'm running at 100% CPU load now because I like to support Einstein, where HT is especially efficient, as is my 22 nm CPU.

Note that on Kepler GPUs (pretty much all we're talking about now) GPU-Grid always uses a full CPU thread, despite that unfortunate "0.7xxx" value being shown by BOINC. This means by default BOINC will launch 8 CPU threads along GPU-Grid (on an 8-core machine) and overcommit the CPU. To fix this one can set it to "use at most 99% of CPUs", but then the CPU will not be fully used if GPU-Grid is down. Or one can place a file called "app_config.xml" into the GPU-Grid project folder with the following contents:

<app_config>
<app>
<name>acemdbeta</name>
<max_concurrent>9999</max_concurrent>
<gpu_versions>
<gpu_usage>1</gpu_usage>
<cpu_usage>1</cpu_usage>
</gpu_versions>
</app>
<app>
<name>acemdlong</name>
<max_concurrent>9999</max_concurrent>
<gpu_versions>
<gpu_usage>1</gpu_usage>
<cpu_usage>1</cpu_usage>
</gpu_versions>
</app>
<app>
<name>acemdshort</name>
<max_concurrent>9999</max_concurrent>
<gpu_versions>
<gpu_usage>1</gpu_usage>
<cpu_usage>1</cpu_usage>
</gpu_versions>
</app>
</app_config>

Won't make much of a difference, though.

MrS
____________
Scanning for our furry friends since Jan 2002

ronny
Send message
Joined: 20 Sep 12
Posts: 17
Credit: 19,131,325
RAC: 0
Level
Pro
Scientific publications
watwat
Message 31017 - Posted: 25 Jun 2013 | 18:54:55 UTC

TJ is correct, just because the CPUs can hit and run at 70, doesn't mean they should 24/7/365. It WILL shorten the lifespan.


I ignore this most of the year and run at about 68 to 78 degrees, but we are pushing 25 degrees Celsius in Hammerfest (the northernmost city in the world), so I've had to pause my CPU and GPU computing (normally I heat my house almost entirely by computing).
This leaves room for possibly upgrading without it affecting my computing contribution. On that note, have you concluded anything profound in this thread? Or must we just wait for the first person to buy them all and pit them against each other?

Off-topic: can AMD cards be used on GPUGRID yet? I have not stayed up to date on the matter, except for one thing: I have noticed some very cheap AMD cards trounce my 560Ti in various OpenCL applications.

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,067,660
RAC: 156,042
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31020 - Posted: 25 Jun 2013 | 19:47:17 UTC - in response to Message 31017.

780 doesn't work yet. 770 is slightly faster than a 680 (but the jury is out on the performance/watt side).

Off-Topic Question - No
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ronny
Send message
Joined: 20 Sep 12
Posts: 17
Credit: 19,131,325
RAC: 0
Level
Pro
Scientific publications
watwat
Message 31022 - Posted: 25 Jun 2013 | 20:08:16 UTC

Hm, thanks. Then I'll have to wait a bit until 780 works well before I decide.

Midimaniac
Send message
Joined: 7 Jun 13
Posts: 16
Credit: 41,089,625
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwat
Message 31030 - Posted: 25 Jun 2013 | 22:07:06 UTC - in response to Message 31014.

Hi MrS-

Thank-you for all of your input. I understand this:

Note that on Kepler GPUs (pretty much all we're talking about now) GPU-Grid always uses a full CPU thread, despite that unfortunate "0.7xxx" value being shown by BOINC.

But I do not understand this, and why it needs to be fixed:
This means by default BOINC will launch 8 CPU threads along GPU-Grid (on an 8-core machine) and overcommit the CPU.

I typically run Rosetta along with GPUG. I have noticed that if I suspend Rosetta and let GPUG run all by itself the GPU load goes up from 82% to 89-90%. A substantial amount. So I thought OK, I'll fix this and get GPUG going even faster.

To begin with I don't understand why Rosie is having an effect on GPUG because Rosie does not touch the GPU at all. I have verified this. (If I suspend GPUG and let Rosie continue the GPU load goes to zero). Apparently, as you were saying, there is a certain load that GPUG requires from the CPU, typically 1 core. So I go to Rosetta's page and adjust my compute preferences, making sure to update Rosie in BOINC, so BOINC knows what to do, but this has had no effect. Right now I have Rosetta's preferences set to use 1 processor for 25% of the time, but BOINC is still ramping up to 100% CPU usage on all cores when running Rosetta and GPUG together. My BOINC preferences are set to use 100% of the processors for 100% of the time, but I'm sure the way to turn down Rose is to do it on Rosie's page, right?

Does anyone have any suggestions?

Ronny-
Run your computers at 70 or 80 degrees and heat your house with the heat. That is amazing. Sounds to me like a study in efficiency. Well done!

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,067,660
RAC: 156,042
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31032 - Posted: 26 Jun 2013 | 0:34:43 UTC - in response to Message 31030.

Rosetta doesn't use the GPU, but GPUGrid does use the CPU.

Set Boinc locally to 99% and you should see a slight increase in GPU usage and WU performance. GPU usage would increase further if you set it to 75% or 50%, but most people are happy with 99% (7 CPU tasks on 7 CPU threads + 1 CPU thread to feed the GPU, basically).
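For reference, the same 99% cap can also be set in a file instead of through the Manager GUI. A minimal sketch of BOINC's global_prefs_override.xml (placed in the BOINC data directory; element name per BOINC's client preference documentation, the values here are just an example for an 8-thread machine):

```xml
<!-- global_prefs_override.xml: let BOINC use at most 99% of CPU threads,
     i.e. 7 of 8 on this machine, leaving one thread to feed the GPU -->
<global_preferences>
    <max_ncpus_pct>99.0</max_ncpus_pct>
</global_preferences>
```

After saving it, have the client re-read its preferences from the Manager (or restart BOINC) for the change to take effect.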
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

klepel
Send message
Joined: 23 Dec 09
Posts: 165
Credit: 2,835,189,088
RAC: 582,895
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31033 - Posted: 26 Jun 2013 | 0:40:47 UTC - in response to Message 31030.

Right now I have Rosetta's preferences set to use 1 processor for 25% of the time, but BOINC is still ramping up to 100% CPU usage on all cores when running Rosetta and GPUG together. My BOINC preferences are set to use 100% of the processors for 100% of the time, but I'm sure the way to turn down Rose is to do it on Rosie's page, right?

Does anyone have any suggestions?

Ronny-
Run your computers at 70 or 80 degrees and heat your house with the heat. That is amazing. Sounds to me like a study in efficiency. Well done!

It is not that you have to configure the participation of each project on the project preferences page; that only influences the priority of how much work is sent from each project. In my case I have a priority project for CPU and GPU respectively: top priority climateprediction.net at 100%, secondary priority malariacontrol.net at 1%, so my CPU only gets malariacontrol.net work if climateprediction.net has none, or when the computer has to fulfil the 100/1 ratio.
We are referring to "My BOINC preferences" in the BOINC Manager: I set "use 99% of the processors for 100% of the time". As I have one GPU in the system, one of my cores stays free to feed the GPU and all the others are used for my CPU project. In your case you would have to free 3 cores or threads to feed your 3 GPUs. This will make GPUGRID tasks run faster, and your system in general will be more stable.

Midimaniac
Send message
Joined: 7 Jun 13
Posts: 16
Credit: 41,089,625
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwat
Message 31041 - Posted: 26 Jun 2013 | 18:49:52 UTC - in response to Message 31033.

Glad I asked about that! I reset my local preferences to compute on 99% of the processors and it reduced Rosetta to only 7 tasks running and one GPUG task running.

Profile Carlesa25
Avatar
Send message
Joined: 13 Nov 10
Posts: 324
Credit: 72,394,453
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31066 - Posted: 27 Jun 2013 | 16:03:11 UTC

Hello: Do the GTX 780 and TITAN work on GPUGRID already, or is there any plan for this? Greetings.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2688
Credit: 1,172,901,099
RAC: 370,180
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31076 - Posted: 27 Jun 2013 | 20:55:28 UTC - in response to Message 31066.

Look over there for updates [there haven't been any, as the developer is away].

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,348,955
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31086 - Posted: 28 Jun 2013 | 8:40:04 UTC - in response to Message 31014.

To fix this one can set it to "use at most 99% of CPUs", but then the CPU will not be fully used if GPU-Grid is down. Or one can place a file called "app_config.xml" into the GPU-Grid project folder with the following contents:

I have tried this on my "big system" (a new PSU is on its way so I can fit two AMD GPUs) with Einstein@home, and what I saw was that the GPU load went to 0 (zero) and then occasionally increased to 24%. The WU in the BOINC task list didn't make progress either. So I set CPU use back to 90% again to leave one core free.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2688
Credit: 1,172,901,099
RAC: 370,180
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31113 - Posted: 28 Jun 2013 | 20:46:10 UTC - in response to Message 31086.

Mhh, something obviously went wrong there, but I don't know your configuration well enough to make any educated guesses. In my post I assumed a standard setup as starting point, that is 100% CPU use and 1 GPU. Maybe that doesn't apply to your system?

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,348,955
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31118 - Posted: 28 Jun 2013 | 22:23:45 UTC - in response to Message 31113.

Mhh, something obviously went wrong there, but I don't know your configuration well enough to make any educated guesses. In my post I assumed a standard setup as starting point, that is 100% CPU use and 1 GPU. Maybe that doesn't apply to your system?

MrS

Yes, if I use 90% CPU (via Computing preferences, etc.) on an 8-core (2 x 4, no HT), then one core is free and the GPU runs fine. But as soon as I set it higher than 90%, GPU load drops to zero. But that is on Einstein. I will not try it here, as I don't want to risk losing a WU that has already run long.
But thanks for the tip; if it works for other people, that is great. I am a happy cruncher and have learned a lot here.
____________
Greetings from TJ

Profile Carlesa25
Avatar
Send message
Joined: 13 Nov 10
Posts: 324
Credit: 72,394,453
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31440 - Posted: 12 Jul 2013 | 18:23:38 UTC
Last modified: 12 Jul 2013 | 18:24:14 UTC

Hello: The first task completed with my new Gainward GTX770. http://www.gpugrid.net/results.php?userid=68764

Perfect performance: average temperature 61°C, fan at 64%, 87% load. Working with Windows 8.

Then I'll look to Ubuntu 13.04 which is actually my working OS.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,348,955
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31443 - Posted: 12 Jul 2013 | 19:30:40 UTC - in response to Message 31440.

Hello: The first task completed with my new Gainward GTX770. http://www.gpugrid.net/results.php?userid=68764

Perfect performance: average temperature 61°C, fan at 64%, 87% load. Working with Windows 8.

Then I'll look to Ubuntu 13.04 which is actually my working OS.

We can't open your link; it is not working (no access).
However, looking at your computers I see that the 770 is doing short runs. Let's see if it does long runs nicely as well? That would be interesting.
____________
Greetings from TJ

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwat
Message 31459 - Posted: 13 Jul 2013 | 9:48:41 UTC - in response to Message 31443.

Yeah, especially the new NOELIAs!!
____________

Profile Carlesa25
Avatar
Send message
Joined: 13 Nov 10
Posts: 324
Credit: 72,394,453
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31466 - Posted: 13 Jul 2013 | 13:32:04 UTC - in response to Message 31459.

Hello: I finished the first short tasks with the GTX 770.

As expected, Linux/Ubuntu 13.04 outperforms Windows 8 by approximately 4.8% on average. The sample is small, but I think it sets the trend; we will see over the long runs.

Next I'll run two short tasks at the same time and see if it really pays.

The next step: long tasks. Greetings.

Profile Carlesa25
Avatar
Send message
Joined: 13 Nov 10
Posts: 324
Credit: 72,394,453
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31480 - Posted: 13 Jul 2013 | 22:56:40 UTC - in response to Message 31466.

Hello: I finished two simultaneous short tasks on the GTX 770, and the result is an improvement of about 5% over running a single task; not much, but it's something.

Possibly I'll try three tasks at once.

Unfortunately there are problems with the NOELIAs on Linux; I hope it is fixed soon. Greetings.

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,067,660
RAC: 156,042
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31482 - Posted: 13 Jul 2013 | 23:03:44 UTC - in response to Message 31480.
Last modified: 13 Jul 2013 | 23:04:19 UTC

Possibly try with three tasks at once.

Don't bother, you won't get any improvement and 5% isn't worth the extra failure rate.

Unfortunately there are problems with Linux Noelias I hope is fixed soon. Greetings.

The problem is not related to Linux or Windows, it's with the WU's.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Carlesa25
Avatar
Send message
Joined: 13 Nov 10
Posts: 324
Credit: 72,394,453
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31484 - Posted: 13 Jul 2013 | 23:18:58 UTC - in response to Message 31482.
Last modified: 13 Jul 2013 | 23:20:09 UTC

Unfortunately there are problems with Linux Noelias I hope is fixed soon. Greetings.
The problem is not related to Linux or Windows, it's with the WU's.


Hello: Thanks for your comments.

If that's the problem, wouldn't it be better to suspend sending out the NOELIA tasks...?

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,067,660
RAC: 156,042
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31488 - Posted: 13 Jul 2013 | 23:57:27 UTC - in response to Message 31484.

Short queue has NATHANIEL's WU's.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,550,027,797
RAC: 1,823,662
Level
Arg
Scientific publications
watwatwatwatwatwatwatwat
Message 31490 - Posted: 14 Jul 2013 | 2:39:48 UTC

My GTX770 runs the exact same times as my GTX680's; I guess the faster memory has no impact here. With a TDP of 230 watts for the 770 compared to the 680's 195 watts, I think the GTX680 is the better deal. And with the manufacturing process maturing very nicely, the new GTX680's go straight to 1201MHz - 1215MHz right out of the box at 1.175 volts. I just really like those 680's; you can't go wrong with them.

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,067,660
RAC: 156,042
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31498 - Posted: 14 Jul 2013 | 10:37:48 UTC - in response to Message 31490.
Last modified: 14 Jul 2013 | 10:39:50 UTC

It might not be so obvious if the difference was only 5% and there was some task runtime variation.

This TDP difference makes me think that the best cards around now might be the GTX670's.

What are the 770's clocks and are the 770's actually using more power?
If so then you could test if the faster memory has any impact (other than sucking up power) by reducing its frequency.

If you have just run Noelia WU's then you might wait and see how other WU's perform; they might be more memory dependent - I've seen a difference in memory controller load for different WU's.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,348,955
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31500 - Posted: 14 Jul 2013 | 11:31:22 UTC - in response to Message 31498.

This TDP difference makes me think that the best cards around now might be the GTX670's.

For now, you are saying, and that is a good point. But how will these cards do in the future?
The WU's evolve fast, so a top card now can become "outdated" very quickly.
So perhaps investing in the best card available (Titan?) is a safer option than buying a "cheap" one now that will be slow again in a few months.
A year ago my GTX550Ti did not do badly here; now it's taking 18-30 hours.

On the other hand, science projects need computing power from the public; that's the idea of BOINC. So they should keep in mind that a lot of people have a tight budget for expensive computer hardware, and that lower-end and mid-range cards must be usable as well; they are in the majority after all.
I know that's what the SR queue is for, but those tasks are gradually taking more time and more resources as well.
____________
Greetings from TJ

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwat
Message 31502 - Posted: 14 Jul 2013 | 13:08:54 UTC - in response to Message 31500.

So perhaps investing in the best card available (Titan?) is a safer option than buying a "cheap" one now that will be slow again in a few months.

I don't think it's ever a good decision to buy new and/or top-end stuff at its premium price. Something like 2/3rds of the way to the top seems like the most cost-efficient choice to me.

Being realistic, I don't think that hardware for crunching should even reach that 2/3 point. I mean, come on, we're giving real money, buying real hardware, consuming real electricity, working 24/7, putting in big chunks of our real time for a close-to-zero probability that something really useful will come out of all of this... Nice and romantic and all, but why invest big money?

On the other hand, science projects need computing power from the public; that's the idea of BOINC. So they should keep in mind that a lot of people have a tight budget for expensive computer hardware, and that lower-end and mid-range cards must be usable as well; they are in the majority after all.
I know that's what the SR queue is for, but those tasks are gradually taking more time and more resources as well.

I agree with you 100%! The majority is what every science project out there should optimize for. Doing that will give them the biggest gain.

The short-run queue is TOO LAME in the credit it gives at present. Boosting the credit would bring more people with low-to-mid-range cards to GPUGRID. On the other hand, SR WUs are almost always far fewer than LR ones, so they may just not care.
____________

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,067,660
RAC: 156,042
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31503 - Posted: 14 Jul 2013 | 13:19:35 UTC - in response to Message 31502.
Last modified: 14 Jul 2013 | 13:19:47 UTC

There isn't much difference between a GTX670 and a GTX770 architecturally so it's unlikely that the 670 will be outdated before the 770 or 760.

Titan doesn't even work yet.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,348,955
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31504 - Posted: 14 Jul 2013 | 13:35:36 UTC - in response to Message 31502.

Indeed we invest heavily in hardware and electricity, but we are not forced to do so. We choose to run BOINC of our own free will, and everyone can contribute according to his or her own possibilities.
I have seen cancer from close by; I have "lived" in a hospital for more than half a year, so I will invest to help find answers. I could give money to a cancer fund, but I don't know if they would use it for research, or a campaign, or brochures, or whatever. Here I know what it is for. I don't care about credits either, but I know others do, and that has to be taken into account. I also do Rosetta, where the credit is very, very low and the project has troubles all the time, but I stick with it as their research is useful.

What we contribute will certainly help. Perhaps not tomorrow, but eventually it will. GPUGRID has several papers published already; they are there for everyone to read. And they do care, because if there is no-one to crunch, their project is over. But in the scientific community there is competition as well. Several groups around the world are trying to be the first to find things. Funding has become harder, and especially at universities the scientists have to publish a lot of papers; in the end, the number of papers published is what counts. All these things hinder the scientists. Most have a (young) family as well, and then time is the overall enemy. So we have to be a bit patient from time to time.
____________
Greetings from TJ

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwat
Message 31536 - Posted: 15 Jul 2013 | 9:18:25 UTC - in response to Message 31504.

I know what you mean TJ and I also want to help the fight against cancer and other diseases in any way I can. That's why I have my machine running 24/7 and keep calming down my wife all the time, who gets upset regularly by the (pretty low) noise and (substantial) heat it generates, lol!

I'm only saying that the researchers should optimize their WUs for the hardware the majority of their crunching supporters have, that's all. That would give them the biggest gain and their supporters crunching satisfaction! Mega-crunchers would continue getting their mega-credit with the sheer amount of crunching work produced. That's pretty obvious methinks.
____________

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,067,660
RAC: 156,042
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31662 - Posted: 19 Jul 2013 | 17:31:37 UTC - in response to Message 31536.

If anyone has actual power usage info for the GTX770 let us know.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1919
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 31680 - Posted: 20 Jul 2013 | 9:07:52 UTC - in response to Message 31662.

just a note.

we have not tested gtx770, but we expect these to work because they are based on the chip of the gtx680.

We have now ordered a few for testing with the standard cooling (one fan). I'll keep you posted.

gdf

FoldingNator
Send message
Joined: 1 Dec 12
Posts: 24
Credit: 60,122,950
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwat
Message 31682 - Posted: 20 Jul 2013 | 12:01:47 UTC - in response to Message 31680.
Last modified: 20 Jul 2013 | 12:16:09 UTC

TJ wrote at 14 Jul 2013 | 13:35:36 UTC:
Indeed we invest heavily in hardware and electricity, but we are not forced to do so. We choose to run BOINC of our own free will, and everyone can contribute according to his or her own possibilities.
I have seen cancer from close by; I have "lived" in a hospital for more than half a year, so I will invest to help find answers. I could give money to a cancer fund, but I don't know if they would use it for research, or a campaign, or brochures, or whatever. Here I know what it is for. I don't care about credits either, but I know others do, and that has to be taken into account. I also do Rosetta, where the credit is very, very low and the project has troubles all the time, but I stick with it as their research is useful.

That is also why I am contributing to GPUGRID. I've lost some family members to cancer, including my own father at a very young age. My mother also lost two very good friends of hers to breast cancer. It's less close than your experience with the disease, but close enough for me to make folding a goal.


Vagelis Giannadakis wrote at 15 Jul 2013 | 9:18:25 UTC:
I know what you mean TJ and I also want to help the fight against cancer and other diseases in any way I can. That's why I have my machine running 24/7 and keep calming down my wife all the time, who gets upset regularly by the (pretty low) noise and (substantial) heat it generates, lol!

LOL! :P But I can understand it. Sometimes it really is a sacrifice, because your room isn't silent at all (noise while watching movies on the television), your room is overheating (very nice when the ambient temperature is already 28 degrees Celsius :P), and sometimes you can't do heavy (graphics) work or gaming on the computer when a large GPU WU is running. Not to mention the electricity bills... you get the point. :P

But we all know why we are here. For the credit or for a higher goal. ;-)

Maybe it's also because you do not know exactly what you are computing. In Folding@Home, and I thought also FightAIDS@Home, a special application shows you a visualization of what you're working on.

It's all based on trust that GPUGRID makes good use of the data from your folded WU's. For all published papers you have to pay money, something like 35 euros. I think that is a lot of money for a few pages of text, of which I can't understand more than 50% because it is written in scientific language.
Why is this not freely available to the people who have volunteered there? Is contributing (read: crunching) a lot of WU's to the project not enough for GPUGRID to let you read the results you've crunched for, for free? Very strange, in my honest opinion...


I'm only saying that the researchers should optimize their WUs for the hardware the majority of their crunching supporters have, that's all. That would give them the biggest gain and their supporters crunching satisfaction! Mega-crunchers would continue getting their mega-credit with the sheer amount of crunching work produced. That's pretty obvious methinks.

I also agree with this. I don't have the money to buy a high-end card every year for raw performance and good (water/air) cooling per card.

It's frustrating to see the WU's become larger and larger. As an example, a GTX560(Ti), or any other GPU with less than 1.5GB VRAM, isn't enough anymore. The instability of the WU's is frustrating as hell when you offer your free computing time and kWh's of electricity and get only errors.
I asked before why GPUGRID has so many errors and such strange loads on the GPU's. The answer was that GPUGRID changes the WU's (I guess they meant the instructions in the WU) for optimization. But wait, optimization? Where do I see that? All I see is larger WU's and a lot of instability, crashes, errors and strange loads that differ per project/WU. A little while ago I quit GPUGRID for those reasons... it's very frustrating when you have those problems again and again.

At PrimeGrid, as an example, I always get WU's that give me 99% load, with GFN and PPS sieve... there is no different load per subproject (like the NATHAN or NOELIA tasks over here), nor differences between WU's within one subproject.
At this moment I'm running 2 NOELIA's (on 2 GPUs, one WU per card). Last week they gave me loads up to 85%; now the maximum is suddenly just 65%, and the expected runtime has increased to 16.5 hours! Last week it was 14.5 hours on the same cards with LOWER core speeds. Very dramatic... in the Netherlands we would say "Je kan er geen touw aan vast binden", which means something like: it's all uncertain and there's nothing you can rely on. :P

Better support for older cards could lead to a happier and maybe larger community. Maybe a checkbox in the preferences to select WU's suitable for <1.5GB cards would be a good option, combined with a longer report deadline for large WU's on older cards and a different credit system. Recent high-end cards could do the normal long runs with CUDA 4.2; older cards could then fold smaller long runs, or the same long runs with a longer deadline. Then, as a folder at home, you'd have a choice of which one you want to do and which fits your system best.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,348,955
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31688 - Posted: 20 Jul 2013 | 19:28:48 UTC - in response to Message 31682.

It's all based on trust that GPUGRID makes good use of the data from your folded WU's. For all published papers you have to pay money, something like 35 euros. I think that is a lot of money for a few pages of text, of which I can't understand more than 50% because it is written in scientific language.


That is not completely true. Most papers can be read in their entirety, in PDF or HTML; it depends on the publisher. For some you indeed need an account, or have to pay. When GPUGRID submits a paper to a publisher, the publisher becomes the "owner" of it and decides whether there are free reading rights or not. You cannot blame GPUGRID for that, as they don't hold the rights.

You can click on every publication name in your own account, or in others', and you will see that most can be read in full.
And I think you can read about 95% of them, but I guess you wouldn't understand half of it. I don't mean this in a bad way, but these papers are about chemistry, biochemistry, molecular modelling and medicine; you need to be familiar with the methods used in these disciplines to understand them fully.

So think twice before you pay 35 euros for just one article :)
____________
Greetings from TJ

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1919
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 31689 - Posted: 20 Jul 2013 | 19:46:24 UTC - in response to Message 31688.

Most if not all of the articles are available in PDF from our webpage.

Don't buy from the journal! If one is missing, we can put the preprint online, i.e. the version before the journal's editing. The published paper is not ours anymore; most journals keep the copyright.

Most of our papers are available here:
http://multiscalelab.org/gianni/publications

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,067,660
RAC: 156,042
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31692 - Posted: 21 Jul 2013 | 11:24:35 UTC - in response to Message 31680.
Last modified: 21 Jul 2013 | 11:54:06 UTC

just a note.

we have not tested gtx770, but we expect these to work because they are based on the chip of the gtx680.

We have now ordered a few for testing with the standard cooling (one fan). I'll keep you posted.

gdf

GTX770's work - several crunchers (Firehawk, Carlesa25, TJ) are already using them.

My concern is the GTX770's 230W TDP, its actual power usage, and its performance/Watt here.
It's likely that the additional power consumption is mostly down to the faster GDDR5 (7008MHz compared to 6008MHz for the GTX680, TDP 195W). While this increased the bandwidth from 192GB/s to 224GB/s, I don't think the GTX680 was particularly bandwidth constrained. So does 224GB/s actually result in a speed bump here, or does it come without any performance gain? If performance is the same as a GTX680 (or within a few percent), Windows users might be able to drop the GDDR5 clock slightly to make the card more economical to run. For Linux crunchers the GTX680 might be the better card, as changing clocks on Linux isn't easy.

Of course I'm speculating, and we won't know any of this until someone posts power and runtime info and drops the GDDR5 clocks to see if it makes any difference.
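As a sanity check on those bandwidth figures, here is a rough sketch assuming the 256-bit memory bus both cards are specified with (GDDR5 bandwidth is simply the effective data rate times the bus width):

```python
# Peak GDDR5 bandwidth = effective data rate (MT/s) x bus width (bytes).
# Both the GTX 680 and GTX 770 use a 256-bit (32-byte) memory bus.
def gddr5_bandwidth_gbs(effective_mhz, bus_width_bits=256):
    """Peak memory bandwidth in GB/s (1 GB/s = 1000 MB/s, as in spec sheets)."""
    return effective_mhz * (bus_width_bits / 8) / 1000

gtx680 = gddr5_bandwidth_gbs(6008)  # ~192 GB/s
gtx770 = gddr5_bandwidth_gbs(7008)  # ~224 GB/s
print(f"GTX 680: {gtx680:.0f} GB/s, GTX 770: {gtx770:.0f} GB/s")
print(f"Bandwidth gain: {gtx770 / gtx680 - 1:.1%}")
```

So the 770's memory gives roughly 17% more bandwidth; whether a task actually benefits depends on how memory-bound it is, which is exactly what the downclocking test would reveal.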
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2688
Credit: 1,172,901,099
RAC: 370,180
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31694 - Posted: 21 Jul 2013 | 12:39:28 UTC - in response to Message 31692.

The GTX770 won't draw 40 W more. The increased power limit is there to let the chip turbo up more often and higher. But since at GPU-Grid cards should be able to reach the top boost bin anyway (unless it's really hot), there won't be much difference: on the GTX770 the voltage for the highest bin is one step higher than on the GTX680, as is the memory clock.. which will contribute a few W at most.

Hence I don't think memory downclocking will improve power efficiency; it never has at GPU-Grid, as performance will drop (despite the GTX680 not being particularly memory bandwidth starved).

@FoldingNator: if a card can't take the long-run WUs any more, switch it over to the short queue. That's exactly what it's for. The same goes if the current long-runs are causing too much trouble.. I actually switched my GTX660Ti over there now, due to the current Noelia experiments crashing too many POEM WUs for me. Now it works better.

And the changing WUs: you can't seriously see this as negative, can you? See it like this: what you're experiencing here is actually the scientists working and trying different things. Whenever a WU type disappears it means we completed the work! And that afterwards we get different tasks is also a good sign: it means the scientist got his/her questions answered, or formulated a different question. That's progress, visible to us.

And I don't think the "1.5 GB is not enough" will be the norm now. I suspect Noelia was trying something, pushing the boundaries of simulated system size. It should have been handled better (BOINC is able to handle WU memory requirements, but you have to tell it to do so), but I'm positive this won't be the norm from now on.

MrS
____________
Scanning for our furry friends since Jan 2002

Dylan
Send message
Joined: 16 Jul 12
Posts: 98
Credit: 366,975,962
RAC: 434,166
Level
Asp
Scientific publications
watwatwatwatwat
Message 31695 - Posted: 21 Jul 2013 | 15:54:22 UTC - in response to Message 31536.

I'm only saying that the researchers should optimize their WUs for the hardware the majority of their crunching supporters have, that's all. That would give them the biggest gain and their supporters crunching satisfaction! Mega-crunchers would continue getting their mega-credit with the sheer amount of crunching work produced. That's pretty obvious methinks.


It is an obvious solution; however, it is hard to do, otherwise it would have been done a long time ago.

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2054
Credit: 15,001,125,069
RAC: 8,172,074
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31696 - Posted: 21 Jul 2013 | 16:46:55 UTC - in response to Message 31694.

And I don't think the "1.5 GB is not enough" will be the norm now.

1.5GB is enough to crunch the NOELIA tasks we're talking about. My old GTX480@701MHz can handle them in 45000~48800 sec (12h30m~13h34m).

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,348,955
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31700 - Posted: 21 Jul 2013 | 21:52:31 UTC - in response to Message 31692.

I have removed my 770 from the system; it ran fine but slowly, as it took 58540 seconds for a Nathan, while other 770's were around 32000 seconds for these WU's. It must be my MOBO, as the 660 had problems in the same system. But you know all that.
I will replace the MOBO and CPU ASAP, and then I will use a power meter and post all the details here. But first the heat wave needs to be over.
____________
Greetings from TJ

Carl
Send message
Joined: 2 May 13
Posts: 8
Credit: 1,441,694,414
RAC: 988,561
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 31744 - Posted: 26 Jul 2013 | 0:46:37 UTC - in response to Message 31662.

If anyone has actual power usage info for the GTX770 let us know.


With my Asus GTX770, I see a 168 watt difference between idle and full load measured through my UPS. And the WU is a long Nathan_Kid.

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,067,660
RAC: 156,042
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31748 - Posted: 26 Jul 2013 | 12:01:51 UTC - in response to Message 31744.
Last modified: 26 Jul 2013 | 12:03:08 UTC

A system with a GTX770 will use around 20W more when idle than without the GPU.
Anyone have load and idle power usages for a GTX680, ideally when running a long Nathan_Kid?
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,348,955
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31757 - Posted: 27 Jul 2013 | 11:57:03 UTC - in response to Message 31748.

A system with a GTX770 will use around 20W more when idle than without the GPU.
Anyone have load and idle power usages for a GTX680, ideally when running a long Nathan_Kid?

Yes, I have some information. I have built my AMD rig and it has run overnight.
Asus Sabertooth 990FX R2.0 (great board)
AMD FX8350 black edition
Asus DRW 24B5ST
Asus GTX770-DC2-OC
SanDisk SSD 128GB
Seagate Barracuda 1TB
Kingston VRAM 1066MHz 1.5V
4 case fans
1 stock AMD CPU cooler (lots of noise, good cooling)

Power when idle (PC only): 190W
Power when running a Nathan_kid LR: 320W
Power when running a Santi SR plus 4 fightmalaria tasks on the CPU: 384W

The 770 runs smoothly (but only 2 SR and 1 LR so far) with driver 320.49 and BOINC 7.0.64
SR 87000sec
LR 29700sec

[the SR are almost 4 times as fast as on my GTX550Ti in a quad core]

We still have the heatwave, and heavy thunder at the moment, so when the last SR has finished I will power it down.
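Meter readings like these can be turned into an energy-per-WU figure for comparing cards. A small sketch using the numbers posted above (320 W under load, 29700 s per long run); your wattage and runtimes will differ:

```python
# Whole-system energy consumed while one work unit runs, in kWh.
def kwh_per_wu(system_watts, seconds):
    return system_watts * seconds / 3_600_000  # W*s -> kWh

def wu_per_day(seconds):
    """How many such work units fit in 24 hours of continuous crunching."""
    return 86400 / seconds

# TJ's GTX770 rig on a long Nathan WU
print(f"{kwh_per_wu(320, 29700):.2f} kWh per long run")  # ~2.64 kWh
print(f"{wu_per_day(29700):.1f} long runs per day")      # ~2.9
```

Dividing credit per WU by kWh per WU then gives a direct credit-per-kWh comparison between a GTX680 and a GTX770 system.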
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2688
Credit: 1,172,901,099
RAC: 370,180
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31762 - Posted: 27 Jul 2013 | 13:48:25 UTC - in response to Message 31757.
Last modified: 27 Jul 2013 | 13:49:07 UTC

190 W at idle?! Did you deactivate all power management? Some people regularly recommend this, but I think it's really rubbish. Run with the standard configuration, and if things run smoothly leave it at that. Even while crunching BOINC your system won't be under load all the time (boot, project down, partly free cores etc.), so disabling power saving just means wasted money under such circumstances. You should be looking at <70 W in idle mode.
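To put a rough number on that wasted money, here is a sketch; the 0.25 EUR/kWh electricity price is an assumption, so adjust it for your own tariff:

```python
# Yearly cost of idling at 190 W instead of ~70 W.
PRICE_PER_KWH = 0.25  # EUR; assumed price, not from the thread

def yearly_cost_eur(extra_watts, hours=24 * 365):
    """Cost of drawing `extra_watts` continuously for a year."""
    return extra_watts * hours / 1000 * PRICE_PER_KWH

wasted = yearly_cost_eur(190 - 70)  # ~120 W of avoidable idle draw
print(f"~{wasted:.0f} EUR per year")
```

Even if the machine only idles part of the time, the avoidable draw adds up to a meaningful fraction of a new mid-range card per year.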

MrS
____________
Scanning for our furry friends since Jan 2002

Post to thread

Message boards : Graphics cards (GPUs) : nVidia GTX GeForce 770 & 780