nVidia GTX GeForce 770 & 780

Message boards : Graphics cards (GPUs) : nVidia GTX GeForce 770 & 780
skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30675 - Posted: 5 Jun 2013, 21:00:27 UTC - in response to Message 30634.  

Anyone know the theoretical performance increase for the GPUgrid app?

Not for the app, but for SP the GTX 780 is theoretically 3977/3213 ≈ 23.8% faster than the GTX 770. However, it's ~60% more expensive!
How it actually pans out for the apps here is still unknown, and there are some obvious differences besides the GK104 vs GK110 architecture: the 780 has a 50% wider memory bus, while the 770 has 16.6% faster GDDR5 (hence its 230W TDP compared to the GTX 680's 195W).
In theory the Titan is 40% faster than the GTX 770, but costs 2.5 times as much.
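For what it's worth, the arithmetic behind those percentages can be sketched like this. The 770 and 780 SP GFLOPS figures are the ones quoted above; the ~4500 GFLOPS Titan figure is my own assumption, chosen only to be consistent with the "40% faster" claim:

```python
# Rough single-precision throughput comparison, using the SP GFLOPS
# figures quoted in the post (Titan value is an assumption).
SP_GFLOPS = {"GTX 770": 3213, "GTX 780": 3977, "GTX Titan": 4500}

def pct_faster(a: str, b: str) -> float:
    """Theoretical % by which card `a` outruns card `b` in SP GFLOPS."""
    return (SP_GFLOPS[a] / SP_GFLOPS[b] - 1) * 100

print(f"780 vs 770:   {pct_faster('GTX 780', 'GTX 770'):.1f}% faster")
print(f"Titan vs 770: {pct_faster('GTX Titan', 'GTX 770'):.1f}% faster")
```

Of course this is peak theoretical throughput only; real app performance depends on memory bandwidth, clocks and how well the kernels scale.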
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
5pot

Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Message 30678 - Posted: 5 Jun 2013, 23:25:09 UTC

Price, while important overall, wasn't really on my list when making the decision. I just really wanted a GK110 :). They also clock REALLY well. My EVGA SC with the ACX cooler (the twin-fan one) came at a default boost rate of 1100 MHz. I was quite impressed.

I'm just patiently waiting for the GPUGrid app. That is a rather nice performance increase. So far, it's doing pretty well on the Folding@home Core 17 app.

I am also looking forward to Maxwell, next year(?).
Midimaniac

Joined: 7 Jun 13
Posts: 16
Credit: 41,089,625
RAC: 0
Message 30958 - Posted: 24 Jun 2013, 5:46:41 UTC

Someone wanted to know if anybody had a GTX 770 yet.

I bought a 770 SuperClocked w/ACX cooler a few weeks ago. Looks like I made the right choice. The card is working fantastically: runs fast, cool and quiet. No problems with the install or anything else. I have Win7 Pro w/32GB RAM on an i7-3770K running at 4.3 GHz. I don't play computer games; I got the card to turbocharge my graphics experience, to shorten time spent encoding video, and so I can do number crunching! I am new here and it is a good feeling to be able to contribute.

I have had a few errors in my WU's, but I know why. I left the clock rate on the card alone, but several times I was tweaking my CPU and crashed the system. It took me a while to realize that if BOINC was running, the WU is ruined at that point, so I went ahead and finished the WU's and they were uploaded. It took me another while to quit fooling with my CPU, especially when BOINC is running! Now I just leave the CPU alone as it runs rock solid at 4.3GHz and every project is turned in with no problems.

The specs of the 770 SC say it has a base clock of 1111MHz and a boost of 1163MHz. The card seems to set its clock rate all by itself. Is that the GPU Boost 2.0 doing that? I like to run Rosetta@home and GPUG simultaneously. Rosetta won't touch the GPU so GPUG has the 770 all to itself. According to my monitoring, with Rosetta & GPUG running I am maxing all 8 CPU cores at 100% for 100% of the time, and the GPU load is as follows:

The 770 clocks itself to 1188MHz and 1187mv. With ambient temp of 76 deg F the card temp is 57 deg C and 80% fan speed (under my fan curve). If I ramp the fan up to 100% the temp comes down a few degrees, but at 3200rpm the fan is a little noisy and 57 is a very nice temp to be at anyway. The GPU Load is mostly 80%, and GPU Power runs 58-62% TDP. I don't know why the card ramps up to 1188MHz, because the boost clock is 1163 and I don't have any offsets set. I am just using the card the way it came, although I did change the fan curve.

Hope I've answered your question. At least in my system this card is rock solid stable and just seems to sort of purr effortlessly along. One more comment: The long WU's are being finished in around 9.5 - 10 hours, so I guess that's pretty fast. Gosh, I just looked and the long run WU I'm working on now has 5 minutes left and has been running for 8:32:00, so that's only 8.62 hrs of CPU time start to finish!

TJ

Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30961 - Posted: 24 Jun 2013, 8:36:29 UTC - in response to Message 30958.  

I bought a 770 SuperClocked w/ACX cooler a few weeks ago.

Nice card! How long, on average, does a Nathan take on it?
Greetings from TJ
Midimaniac

Joined: 7 Jun 13
Posts: 16
Credit: 41,089,625
RAC: 0
Message 30966 - Posted: 24 Jun 2013, 15:28:25 UTC - in response to Message 30961.  

Hi TJ,

Unless I am mistaken, the Nathans you asked about are, in fact, the long WU's I was referring to in the last paragraph of my post. I don't really pay too much attention to the speed of GTX770, but to better answer your question I looked at my stats page. Here is a link so you can see for yourself:
http://www.gpugrid.net/results.php?userid=97668

The CPU Times of my last 4 Nathans completed (6980022, 6978354, 6972456, and 6968785) were 29,932sec, 30,398sec, 31,547sec, and 31,741sec respectively, for an average of 30,905sec or 8.58 hours.

This is much quicker than what I suggested in my earlier post and right in line with what I was observing in real time in the last paragraph of that post.
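Those four CPU times do average out as stated; a quick check of the arithmetic:

```python
# Average of the four Nathan CPU times quoted above (seconds).
cpu_times_sec = [29_932, 30_398, 31_547, 31_741]

avg_sec = sum(cpu_times_sec) / len(cpu_times_sec)   # 30904.5 s, ~30,905
avg_hours = avg_sec / 3600                          # ~8.58 h

print(f"average: {avg_sec} s = {avg_hours:.2f} h")
```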
Midimaniac

Joined: 7 Jun 13
Posts: 16
Credit: 41,089,625
RAC: 0
Message 30967 - Posted: 24 Jun 2013, 15:47:00 UTC - in response to Message 30966.  

Hi again TJ,

I just noticed that there was an error in one of my tasks last night with the GTX770. The EVGA software (EVGA Precision 4.2.0, and EVGA NV-Z 0.4.10) that I was monitoring with crashed last night. The card itself and BOINC kept right on running, so the task completed and was uploaded as I slept. This has to be the cause of the error. Seems very weird to me. Why would the software crash? The card's memory load never goes over 18-20% and MCU load stays around 24%. The EVGA Precision software allows me to slow the card down if I have to. Will have to keep an eye on this.
TJ

Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30971 - Posted: 24 Jun 2013, 16:40:16 UTC - in response to Message 30966.  

Hi Midimaniac,

Great info, this helps me a lot.
Then your card is approx. 17,000 seconds faster than my (non-overclocked) 660.
The 770 is a bit expensive in Europe, but I have a rig that should fit two of them easily. (If I replace the liquid cooling with a large CPU cooler that rig can run 24/7, and I don't need a new rig just now (but this should be on another thread)).
I'll start saving money and buy two 770s after the summer.

Your link doesn't work (anymore) "no access" is the message when clicking on it.

Happy crunching.
Greetings from TJ
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 30972 - Posted: 24 Jun 2013, 16:52:48 UTC - in response to Message 30967.  

Nice to hear your card is working well! About the clock speed: nVidia states the typical expected turbo clock speed, not the maximum one. What is reached in reality depends mostly on the load itself and GPU temperature.

Regarding your crash: two utilities constantly polling the GPU's sensors can cause problems. I actually leave none of them on constantly. I minimize GPU-Z and set it not to continue refreshing when in the background. If I want a current reading I pop it up again. For continuous monitoring I'd use only one utility and set the refresh rate not too high, maybe every 10 s.

The EVGA Precision software allows me to slow the card down if I have to

If you do so, lower the power target to make the card boost less. This way clock speed and voltage are reduced, so power efficiency is improved. Actually 1.187 V is a bit much for 1.19 GHz. For comparison, my GTX660Ti (same chip, just a bit older and with one shader cluster deactivated) reaches 1.23 GHz at 1.175 V. Close, but it shows you'll probably have some headroom left in that card (which could be used either for a higher offset clock, or a slightly lower voltage at stock clock).

MrS
Scanning for our furry friends since Jan 2002
skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30973 - Posted: 24 Jun 2013, 16:53:10 UTC - in response to Message 30967.  

Interesting crash report.
I'm also seeing MSI Afterburner crash occasionally. The fan/temp profile seems to stick but the Afterburner taskbar icon is removed. It opens again without issue, and when it crashes it doesn't kill WU's (that I've noticed, and it was the first thing I checked). The problem might be related to the 320.x drivers. I'm going to reinstall Afterburner just in case it's something to do with moving GPU's around inside the case...

The last NATHAN_KIDc22_noPhos WU I returned on my GTX660Ti took just under 35K seconds (9.7h):

I5R8-NATHAN_KIDc22_noPhos-9-10-RND0487_0 4542508 23 Jun 2013 | 21:01:47 UTC 24 Jun 2013 | 6:56:23 UTC Completed and validated 34,881.54 34,462.74 133,950.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)

With your setup a NATHAN_KIDc22_noPhos took 8.5h:

I16R9-NATHAN_KIDc22_noPhos-8-10-RND1605_1 4538546 22 Jun 2013 | 18:06:43 UTC 23 Jun 2013 | 10:52:15 UTC Completed and validated 30,617.77 29,931.82 133,950.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)

So your GPU is 14% faster than my GTX660Ti (1189MHz).
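That 14% figure follows directly from the two run times quoted above:

```python
# Relative speed from the two NATHAN_KIDc22_noPhos run times (seconds).
gtx660ti_sec = 34_881.54   # skgiven's GTX 660 Ti
gtx770_sec   = 30_617.77   # Midimaniac's GTX 770

speedup_pct = (gtx660ti_sec / gtx770_sec - 1) * 100   # ~14%
print(f"GTX 770 is about {speedup_pct:.0f}% faster on this WU type")
```

Note this compares single tasks on two different work units of the same type, so it's indicative rather than a controlled benchmark.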

8 CPU cores at 100%... GPU Load is mostly 80%, and GPU Power runs 58-62% TDP

Your system appears to be better optimized for CPU usage than GPU usage.
I'm using 3 GPU's (not running CPU tasks). The GTX660Ti (PCIE2 @X2) has 88% GPU usage and 85% power (running a NATHAN_KIDc22_SODcharge or a NATHAN_KIDc22_noPhos) and the other two are ~93% GPU usage and 92 to 94% power (NATHAN_KIDc22_full and NATHAN_KIDc22_noPhos).

flashawk

Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Message 30975 - Posted: 24 Jun 2013, 17:03:00 UTC - in response to Message 30971.  

I don't think he understood completely; the longest NATHAN's give 167,550 points and it looks as though he's only done one, which took him 10.55 hours. That's about half an hour slower than my 770.

Midimaniac, I think you should have a higher GPU load; mine's at 95% with a memory load of 31%. I had to downclock the 770 because of the driver issues, and my 680's are faster right now (the 770 was completing the long NATHAN_KIDc22 in about 9 hours flat).

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30978 - Posted: 24 Jun 2013, 18:11:32 UTC - in response to Message 30973.  

Interesting crash report.
I'm also seeing MSI Afterburner crash occasionally. The fan/temp profile seems to stick but the Afterburner Task bar icon is removed. It open's again without issue but when it crashes it doesn't kill WU's (that I've noticed, and it was the first thing I checked).

Probably not crashing, but the taskbar icon is disappearing. I've also seen the icon disappear while Afterburner is still operating. According to Unwinder the icon disappears because of a notify bug in Windows. He intends to add workarounds in the next Afterburner version.
skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30980 - Posted: 24 Jun 2013, 18:33:26 UTC - in response to Message 30978.  
Last modified: 24 Jun 2013, 20:24:17 UTC

Definitely an app crash in my case (though I think I've also experienced what you describe where the icon just vanishes):
    Faulting application name: MSIAfterburner.exe, version: 2.3.1.0, time stamp: 0x50f6cecb
    Faulting module name: nvapi.dll, version: 9.18.13.2014, time stamp: 0x518965f4
    Exception code: 0xc0000094
    Fault offset: 0x001ab9a4
    Faulting process id: 0x11c8
    Faulting application start time: 0x01ce6dd888a0b914
    Faulting application path: C:\Program Files (x86)\MSI Afterburner\MSIAfterburner.exe
    Faulting module path: C:\Windows\system32\nvapi.dll
    Report Id: b6db0dcf-d9e9-11e2-a7c5-d43d7e2bd120



Something else I've noticed since moving to the latest driver is that I can't adjust the fan speed to >73% in MSI Afterburner! This might have been part of the issue I had with the GTX650Ti failing some tasks - it was ~69°C while the other cards were ~60°C.
... Went back to 314.22 and Afterburner 2.3.0 and managed to reset everything and then redefine settings. I can now go past 73% fan speed on the 650TiBoost (which is closest to the OC'ed CPU). Didn't lose the WU's either :)


Midimaniac

Joined: 7 Jun 13
Posts: 16
Credit: 41,089,625
RAC: 0
Message 30984 - Posted: 24 Jun 2013, 21:24:15 UTC - in response to Message 30975.  

So many replies all at once!

MrS,

A really helpful reply, thank-you. The info on the clock speed that you presented makes sense.

Regarding your crash: 2 utilities constantly polling the GPUs sensors can cause problems. I actually leave none of them on constantly. I minimize GPU-Z and set it not to continue refrsehing when in the background. If I want a current reading I pop it up again. For continous monitoring I'd use only one utility and set the refrseh rate not too high, maybe every 10 s.
Two? How about four? In addition to the two EVGA utilities I also had TThrottle and RealTemp running as well! Your advice is well taken: I will close everything but TThrottle. Uh-oh, now that I think about it, EVGA Precision needs to be running for the custom software fan curve to be enabled. When the software isn't running, the card's built-in hardware control takes over and the card runs hotter. This A.M. when I woke, the GTX770 was running at 61 C, which is only 4 degrees hotter than my software fan curve would have allowed, so no big deal. Maybe I can just minimize EVGA Precision? I will cut the polling back in all my software to about 10 sec as you suggested.

...you'll probably have some headroom left in that card (which could either be used for a higher offset clock, or a tad bit lower voltage at stock clock).
Apparently I can't lower just the voltage in EVGA Precision. All I can do is raise it! That seems kinda dumb! But I can lower the Power Target, which would lower the voltage as you suggested. And I can also adjust GPU & Mem clock offsets up or down. I have a feeling that polling the hardware from 4 different programs caused the problem last night! It is interesting that while the two EVGA applications report a voltage of 1187 mV when crunching hard, CPUID Hardware Monitor reports only 900 mV. It's interesting because at idle EVGA reports 861 mV and CPUID reports 862 mV, almost exactly the same voltage.

Flashawk-
Thanks for the insight into the Nathans. I see what you mean.

The lower GPU load I have is apparently because of running Rosetta@home together with GPUG and maxing out my CPU at 100% on all cores. I just now suspended Rosetta so that only GPUG is running and my GPU stats immediately changed. Apparently having Rosetta running starves the GPU of the CPU time it requires to run GPUG at full capacity. Interesting.

GPU load goes up with Rosetta suspended:
GPU Load: 92%
GPU Power: 66-67% TDP
Temp went up 1 degree to 58C

GPU load goes down with Rosetta running:
GPU Load: 84-86%
GPU Power: 64.1% TDP
Temp is back down to 57C

So apparently running GPUG by itself would yield the quickest times for completion of GPUG tasks. Interesting. I suppose I could time a few GPUG tasks to see how fast the card turns them over when it has the CPU to itself, but I probably won't. It's not something that really matters a lot, right?
skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30985 - Posted: 24 Jun 2013, 22:27:28 UTC - in response to Message 30984.  

So apparently running GPUG by itself would yield the quickest times for completion of GPUG tasks. Interesting. I suppose I could time a few GPUG tasks to see how fast the card turns them over when it has the CPU to itself, but I probably won't. It's not something that really matters a lot, right?

Depends what you want - do you want to do more work for GPUGrid or Rosetta? It's entirely up to you, but for reference:
Your GPU is presently 14% faster than my GTX660Ti. If you did nothing other than stop crunching CPU tasks it would be 23% faster than my GPU, just going by GPU usage. As this isn't necessarily accurate, you would actually need to run a few work units without the CPU being used to get an accurate measurement, and your GPU would probably end up more than 23% faster than mine, more so if you tweaked everything else towards GPU crunching. Many people tend to go for a reasonably happy medium of, say, 6 or 7 CPU tasks and 1 GPU task. I have 3 GPU's in the one system (one on heels) so I want to give them every opportunity of success.
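As a sanity check on that 23% figure, one naive way to reach it is to scale the measured run-time ratio by the GPU loads Midimaniac reported earlier (~85% with Rosetta running, ~92% with it suspended). This assumes throughput tracks reported GPU load linearly, which is only a rough approximation:

```python
# Naive "what if CPU tasks were suspended" estimate: scale the measured
# 660 Ti vs 770 run-time ratio by the reported GPU loads.
measured_ratio = 34_881.54 / 30_617.77   # ~1.14 from the two WUs above
load_with_cpu_tasks = 85                  # % GPU load, Rosetta running
load_without_cpu_tasks = 92               # % GPU load, Rosetta suspended

est = measured_ratio * load_without_cpu_tasks / load_with_cpu_tasks
print(f"estimated: {(est - 1) * 100:.0f}% faster")   # ~23%
```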
TJ

Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30988 - Posted: 24 Jun 2013, 23:28:36 UTC - in response to Message 30984.  

I have some remarks as well for you Midimaniac.

I have stopped TThrottle. It is a great program, but if you throttle anything, CPU and/or GPU, the GPU load will flip between heavy and low load, and this makes a GPU WU take longer to finish. You can check that easily with EVGA NV-Z. The result is that my CPU is around 71°C (too hot for liquid cooling), but I will accept this for now as it doesn't run 24/7.

Secondly, in EVGA Precision you have the option "GPU clock offset"; this can be plus or minus. You can also click on "voltage" on the left side of the program, under "test" and "monitoring". A new window will pop up and you can lower the voltage. EVGA has neat software; that is one reason I like that brand.

Finally, I have one CPU core free. So 6 are doing Rosie, 1 (0.669) is doing GPUGRID, and 1 does nothing (perhaps Windows system things). CPU usage is about 88%. My GTX660 runs a Nathan LR smoothly in about 12.5 hours, but your GTX770 is way faster, so your RAC should increase significantly in the next days :)
Greetings from TJ
Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30989 - Posted: 25 Jun 2013, 0:04:11 UTC - in response to Message 30988.  
Last modified: 25 Jun 2013, 0:05:04 UTC

I have stopped TThrottle, it is a great program, but if you throttle anything, CPU and/or GPU then the GPU load will flip from heavy to low load, this will result in a GPU WU to take longer to finish.

TThrottle is best used as a safeguard against overheating for incidents such as fan failure or extremely hot environmental conditions in which you cannot access the machine. It is not a good alternative to a fan control program such as Afterburner. Think of it as a safeguard in catastrophic conditions. It's also perfect for transmitting CPU & GPU temps to BoincTasks so you can monitor them from one client.
flashawk

Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Message 30990 - Posted: 25 Jun 2013, 0:04:50 UTC - in response to Message 30988.  

Result is that my CPU is around 71°C


That is really high for water or even air cooling. I have water cooling on my CPU's and GPU's, and my FX-8350's run from 38° to 42°, and those chips run pretty hot.

Has anybody tried disabling Hyper-Threading and running all their GPU's alongside CPU tasks to see if that lowers the GPU utilization? I don't experience any issues when running all eight cores flat out with Rosetta or CPDN. I have noticed that what BOINC says and what other applications say differ greatly: when running GPUGRID, my CPU utilization is from 99.42% to 99.56% for each core that feeds a GPU. I would try it, but the last Intel CPU I bought and used was a socket 370 Pentium III 800EB (I have a PIII 1.1GHz but never used it; Intel wanted it back).
5pot

Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Message 30991 - Posted: 25 Jun 2013, 0:22:30 UTC

That is getting kind of toasty.

As a side note, I try to aim for around 50-60 on air. This is with a bullish OC though.
TJ

Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30992 - Posted: 25 Jun 2013, 0:45:43 UTC - in response to Message 30990.  

Has anybody tried disabling Hyperthreading and run all their GPU's with CPU tasks and see if that lowers the GPU utilization? I don't experience any issue's when running all eight cores flat out with Rosetta or CPDN, I have noticed that what BOINC says and other applications say differ greatly. When running GPUGRID, my CPU utilization is from 99.42% to 99.56% for each core that feeds a GPU. I would try it but the last Intel CPU I bought that I used was a socket 370 Pentium III 800EB (I have a PIII 1.1GHz but never used it, Intel wanted it back).

Yes I have, as I bought a refurbished rig with 2 Xeons without HT (not possible on them). When running a GPU task and no CPU tasks, the GPU gets a steady load. Adding a core at a time for CPU crunching, the GPU load drops: to below 35% with 7 cores, and to zero with 8 cores crunching CPU tasks; then a core only very occasionally gives some time to the GPU WU.
Greetings from TJ
Midimaniac

Joined: 7 Jun 13
Posts: 16
Credit: 41,089,625
RAC: 0
Message 30993 - Posted: 25 Jun 2013, 0:50:17 UTC - in response to Message 30988.  

TJ & skgiven,

OK, I am starting to catch on thanks to you guys' help. I am quite certain that in compute preferences I told each project it could use 100% of the processors 100% of the time. From what you've said I can go in there and set Rosie to 7 CPU cores and put GPUG on 1 core. That would certainly allow GPUG to reach its full potential. Right now when I click "run based on preferences" I see 8 Rosie tasks and 1 GPUGRID task running. The status line for the GPUG task says "Running (0.721 CPUs + 1 NVIDIA GPU)". If I change my preferences and lower the percentage given to Rosetta, the GPU should load up better. Good to know. On the other hand I'm quite happy to run things the way they are. I like keeping the GPU temp down as far as possible.
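For anyone following along: BOINC's computing preference "On multiprocessors, use at most N% of the processors" effectively works in whole cores, so leaving one of the 8 logical cores free for the GPU task translates to a percentage like this (a trivial sketch; the 8-core count matches the i7-3770K mentioned above):

```python
# Converting "leave one core free for the GPU task" into BOINC's
# "use at most N% of the processors" preference.
total_cores = 8            # logical cores on the i7-3770K (4C/8T)
cores_for_cpu_projects = 7  # leave one free to feed the GPU

pct = cores_for_cpu_projects / total_cores * 100
print(f"set 'use at most {pct:.1f}% of the processors'")  # 87.5%
```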

TJ,

Thanks for the info regarding TThrottle. However, I don't use it to throttle anything, I don't need to. I only use TThrottle as a safeguard against possible excessive high temps. I have the CPU & GPU set to throttle at 70C because they should not normally ever reach that temp. I think TThrottle is a way cool program for this reason. And of course it has those kick-ass customizable graphs!

When I built this computer last month I installed a Xigmatek "Gaia" cooler on the CPU and it seems to be doing a fantastic job (for only $30). The case is a Corsair Obsidian 650D with a 200mm front intake and a top-mounted 200mm exhaust that pulls the hot air straight up out of the case. I have the rear 120mm fan reversed to provide intake. This gives the case positive air pressure (more air going in than going out) and makes a HUGE difference in how much dust accumulates inside the case.

One side benefit of reversing the rear fan is that the Gaia CPU cooler is rather large and sits directly in front of and about 2" from the rear fan. Is that cool, or what? So instead of getting second-hand hot air from inside the case, the CPU cooler is being directly blasted by cool air from outside! (I can't quit grinning about how well that worked out.) So, all in all I have excellent cooling. The top of the case can be set up with 1 x 200mm, 2 x 120mm or 2 x 140mm fans. (Did I say I really love this case?) If I ever need more cooling I would likely get two Noctua 140mm fans for the top: one of their 140mm fans blows as much air at 800rpm as the Aerocool 200mm LED fan I am using now, and the noise level of the Noctuas is only 12 dB(A) per fan at full speed.

Secondly in EVGA Precision you have the option "GPU clock offset" this can be plus or minus. You can also click on "voltage" on the left site of the program under "test" and "monitoring". A new window will pop-up and you can lower the voltage. EVGA has neat software, that is one reason I like that brand.
I don't get it. Are you sure about lowering the voltage? When I click on Voltage a new window pops up, but lowering the voltage is not possible. All you can do is click overvoltage and then drag the arrow up to raise it. My software revision is the new one: 4.2.0.

I see from my stats that I already have 2.1 million credits at GPUG and I have only had the one crash last night that I was not directly responsible for, so I guess I'm currently doing pretty good.


©2025 Universitat Pompeu Fabra