NVidia GTX 650 Ti & comparisons to GTX660, 660Ti, 670 & 680

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 29946 - Posted: 14 May 2013, 20:52:43 UTC - in response to Message 29943.  
Last modified: 14 May 2013, 21:06:08 UTC

You have raised a few interesting points:

Different operating systems perform differently here. Linux can be up to ~5% faster than XP. XP is ~11% faster than Vista and W7. I think W8 is roughly the same as W7 in terms of GPU performance. 2003R2 servers are slightly faster than XP (but only 1 to 3% last time I measured it), and 2008 servers are ~5% slower than XP (but a bit better than W7, except when it comes to drivers).
Obviously GPU utilization is higher with the faster operating systems.

I suppose I should have taken into consideration my PSU efficiency (it's 91%+).

The one big problem with these measurements is that these WU's use the CPU and the GPU, so you are not measuring the GPU running alone. This means you can't accurately account for the CPU's power usage: running a CPU WU from another project and taking the CPU power usage from that is not accurate - you can see up to 30W of difference in power consumption running different CPU WU's. How much power the Nathan WU's actually draw is open to debate; I suspect it's a good bit less than the average CPU WU would draw.

For reference:
0 GPUGrid WU's + 1 CPU BoincSimap WU – System usage 91W
0 GPUGrid WU's + 2 CPU BoincSimap WU's – System usage 104W
0 GPUGrid WU's + 1 CPU Ibercivis WU – System usage 93W
0 GPUGrid WU's + 2 CPU Ibercivis WU's – System usage 106W
0 GPUGrid WU's + 1 CPU Climate WU – System usage 95W
0 GPUGrid WU's + 2 CPU Climate WU's – System usage 112W
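As a quick sanity check, the incremental wall draw per extra CPU WU can be pulled straight out of those readings (a rough sketch; these are AC watts with no idle baseline subtracted):

```python
# Incremental AC draw per additional CPU WU, from the readings above.
# (Rough sketch only: wall watts, idle baseline not subtracted.)
readings = {
    "BoincSimap": (91, 104),
    "Ibercivis":  (93, 106),
    "Climate":    (95, 112),
}
for project, (one_wu, two_wu) in readings.items():
    print(f"{project}: +{two_wu - one_wu} W for the second WU")
```

The spread (13W to 17W per WU here) is exactly why borrowing a CPU-power figure from a different project's WU is unreliable.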

Yeah, that was too easy - Power target it is. Are there any tools that can tell you what your GPU's Power Target actually is?

I noticed that with MSI Afterburner I cannot unlock the Core Voltage for the GTX650Ti Boost, but I can for the GTX660. I can move the Power Limit for both. The last time I played with that it was really inaccurate.
GPU-Z is telling me that my power consumption is ~96% of TDP for my 660 and 95% of TDP for the 650Ti Boost, but that just matches Afterburner's power percentage.

I've tweaked things:
660,
Core Voltage +12mV, power limit 109%, Core Clock +78MHz (a multiple of 13!), Memory Clock +50MHz; GPU power % now 98%, GPU power usage ~92% (with 2 CPU WU's), core clock 1137MHz, memory clock 3055MHz.

650Ti,
Core Voltage (can't budge), power limit 109%, Core Clock +78MHz (a multiple of 13!), Memory Clock +55MHz; GPU power % now 98%, GPU power usage ~93%, core clock 1202MHz, memory clock 3110MHz.
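The "+78MHz (a multiple of 13!)" remark reflects Kepler's clock offsets moving in 13 MHz bins. A tiny sketch of snapping an arbitrary offset onto the nearest bin (the 13 MHz bin size is the only assumption here):

```python
# Kepler boost clocks step in 13 MHz bins, so offsets that are multiples
# of 13 map cleanly onto a bin; other values get rounded by the driver.
def snap_to_bin(offset_mhz, bin_mhz=13):
    """Round a requested clock offset to the nearest bin boundary."""
    return round(offset_mhz / bin_mhz) * bin_mhz

print(snap_to_bin(78))  # 78 (exactly 6 bins)
print(snap_to_bin(80))  # 78 (rounded down to the nearest bin)
```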
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ID: 29946
Jim1348

Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Message 29952 - Posted: 15 May 2013, 2:06:49 UTC - in response to Message 29929.  
Last modified: 15 May 2013, 2:39:42 UTC

Jim, if a GPU has a TDP of 140W and while running a task is at 95% power, then the GPU is using 133W. To the GPU it's irrelevant how efficient the PSU is, it still uses 133W. However to the designer, this is important. It shouldn't be a concern when buying a GPU but when buying a system or upgrading it is.

OK, I was measuring it at the AC input, as mentioned in my post. Either should work to get the card power, though if you measure the AC input you need to know the PSU efficiency, which is usually published these days for the high-efficiency units. (I trust Kill-A-Watt more than the circuitry on the cards for measuring power, but that is just a personal preference.)
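Both approaches reduce to simple arithmetic. A minimal sketch, assuming the 140 W TDP / 95% power reading from the quoted post and the 91% PSU efficiency mentioned earlier in the thread:

```python
# Two ways to estimate card power discussed above (both are rough sketches):
# 1) from the card's own power-% reading against its TDP,
# 2) from an AC wall reading, corrected by PSU efficiency.
PSU_EFFICIENCY = 0.91  # skgiven's stated 91%+ PSU

def gpu_power_from_tdp(tdp_watts, power_percent):
    """Card power as reported by GPU-Z/Afterburner (% of TDP)."""
    return tdp_watts * power_percent / 100.0

def dc_power(ac_watts, efficiency=PSU_EFFICIENCY):
    """DC power delivered inside the case for a given AC wall draw."""
    return ac_watts * efficiency

print(gpu_power_from_tdp(140, 95))  # 133.0 W, matching the quoted figure
print(dc_power(100))                # ~91 W of DC per 100 W at the wall
```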
ID: 29952
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 29957 - Posted: 15 May 2013, 12:36:20 UTC - in response to Message 29952.  

My GTX660 really doesn't like being overclocked. Stability was very poor when crunching, even with a very low OC. It's definitely not worth a 1.3% speed-up (task return time) if the error rate rises even slightly, and my error rate rose a lot. This might be down to having a reference GTX660, or to it being used for the display; I hadn't been using the system for a bit, and with the GPU barely overclocked, a WU failed within a minute of me using the system, after 6h and with <10min to go! It's been reset to stock.

On the other hand the GTX650Ti Boost holds a modest OC very well, and has returned WU's with the shaders up to 1202MHz (the same as my GTX660Ti), albeit for only a 3.5% decrease in runtime. I dare say a 5%+ performance increase is readily achievable. However, I'm using W7; I would get more than that by just sticking it in an XP rig, and more again using Linux. Also, in XP OCing might not be as beneficial; the GPU would already be running at ~99% utilization. Ditto for Linux.

It's looking like a FOC GTX660 is the best mid-range card to invest in.

ID: 29957
tomba

Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 29959 - Posted: 15 May 2013, 13:41:28 UTC - in response to Message 29957.  

It's looking like a FOC GTX660 is the best mid-range card to invest in.

What's "FOC"?
ID: 29959
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 29966 - Posted: 15 May 2013, 16:57:08 UTC - in response to Message 29959.  

Factory Over Clocked.
ID: 29966
tomba

Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 29969 - Posted: 15 May 2013, 17:15:22 UTC - in response to Message 29966.  

Factory Over Clocked.

Thank you!

ID: 29969
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 29978 - Posted: 15 May 2013, 21:23:05 UTC

Actually the higher the GPU utilization, the more a GPU core OC should benefit performance. Because every additional clock is doing real work, whereas at lower utilization levels only a fraction of the added clocks will be used.

MrS
Scanning for our furry friends since Jan 2002
ID: 29978
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 29983 - Posted: 15 May 2013, 23:27:52 UTC - in response to Message 29978.  

Yeah, you're right. I was thinking that you wouldn't be able to OC as much to begin with, but 105% × 99% − 99% is greater than 105% × 88% − 88%: a 4.95% gain vs a 4.4% gain.
Overclocking the GPU core doesn't actually improve GPU utilization, it just exploits what's there. To improve GPU utilization you have to address the other bottlenecks, such as the CPU (higher clocks and greater availability), PCIE (PCIE3 > PCIE2, x16 > x8 > x4) and the memory controller load/GPU memory bandwidth (OC the GDDR, or use Virtu if your board is licensed and capable). Faster system memory and disk might also help a tiny amount.
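MrS's point from the previous post can be put in one line: only utilized cycles benefit from extra clocks. A sketch of the 4.95% vs 4.4% arithmetic above:

```python
# Effective throughput gain from a core OC: only the utilized fraction of
# the added clocks does real work. Sketch of the figures discussed above.
def oc_gain(oc_factor, utilization):
    """Fractional speed-up: (extra clocks) x (fraction actually used)."""
    return (oc_factor - 1) * utilization

# A 5% core OC at two utilization levels:
print(round(oc_gain(1.05, 0.99), 4))  # 0.0495 -> ~4.95% faster
print(round(oc_gain(1.05, 0.88), 4))  # 0.044  -> ~4.4% faster
```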
ID: 29983
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30076 - Posted: 18 May 2013, 22:05:40 UTC - in response to Message 29930.  
Last modified: 19 May 2013, 10:36:45 UTC

Just meant as a very rough guide, but it serves to highlight price variation and the effect on performance/purchase price.

£:
GTX660Ti - 100% - £210
GTX660 - 88% - £153 (73% cost of a GTX660Ti) – 20.8% better performance/£
GTX650Ti Boost - 79% - £138 (cost 66%) – 19.7% better performance/£
GTX650Ti - 58% - £110 (cost 52%) – 11.5% better performance/£

$ (from Beyond's post):
GTX660Ti - 100% - $263
GTX660 - 88% - $165 (63% cost of a GTX660Ti) – 39.7% better performance/$
GTX650Ti Boost - 79% - $162 (62%) – 27.4% better performance/$
GTX650Ti - 58% - $120 (46%) – 26.1% better performance/$

€ (from a site MrS linked to):
GTX660Ti - 100% - €229
GTX660 - 88% - €160 (70% cost of a GTX660Ti) – 25.7% better performance/€
GTX650Ti Boost - 79% - €129 (56%) – 41.1% better performance/€
GTX650Ti - 58% - €104 (45%) – 28.8% better performance/€

CAD $ (matlock):
GTX660Ti - 100% - $300
GTX660 - 88% - $220 (73% cost of a GTX660Ti) – 20.5% better performance/CAD
GTX650Ti Boost - 79% - $180 (60% cost of a GTX660Ti) – 31.6% better performance/CAD
GTX650Ti - 58% - $150 (50% cost of a GTX660Ti) – 16% better performance/CAD

Going by these figures the GTX660 is the best value for money in the UK and the US, but the 650Ti Boost is the best value for money in Germany (Euro) and Canada.

Beyond's $84 GTX650Ti offers a staggering 207% better performance/$ than a GTX660Ti. As long as you have the space, such bargains are great value.
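The tables above all follow the same calculation; a sketch using the UK figures (GTX 660 Ti as the 100%-performance, £210 baseline; the function name is mine):

```python
# Performance-per-price relative to a baseline card, as used in the
# tables above. Sketch with the UK figures: GTX 660 Ti = 100% at GBP 210.
def value_vs_baseline(perf, price, base_perf=100, base_price=210):
    """Fractional improvement in performance-per-currency over the baseline."""
    return (perf / price) / (base_perf / base_price) - 1

# GTX 660: 88% relative performance at GBP 153
print(round(value_vs_baseline(88, 153) * 100, 1))  # 20.8 -> 20.8% better perf/GBP
```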

I will fill this out a bit later.
ID: 30076
matlock

Joined: 12 Dec 11
Posts: 34
Credit: 86,423,547
RAC: 0
Message 30079 - Posted: 18 May 2013, 23:42:13 UTC - in response to Message 30076.  

I think it's best to stick with one manufacturer when comparing prices. This also includes reference vs OC models. Looking at the very lowest prices isn't always great as I don't want another Zotac (my 560ti448 was very loud and hot). There are also mail-in-rebates, but I tend to ignore those when comparing prices.

Memory Express is a retailer in Western Canada that has some of the best prices in the area, and they will also price-beat other stores including Newegg. Here are the prices in Canadian dollars for the Asus DirectCU II OC line of 600 series cards (without MIR and without price-beat):

660Ti - $300
660 - $220
650Ti Boost - $180
650Ti - $150

By using their price beat (beating $214.99 by 5%), I just picked up an Asus 660 for $204. I also have a MIR I can send in for another $20 off. $184 for a very high quality card. Runs cool and quiet, unlike my old Zotac.
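For reference, the price-beat arithmetic quoted above (a trivial sketch of the figures in the post):

```python
# Price-beat arithmetic from the post above: 5% below a competitor's
# price, then a $20 mail-in rebate.
competitor = 214.99
price_beat = round(competitor * 0.95, 2)   # ~$204, the price paid
after_mir = round(price_beat - 20, 2)      # ~$184 net of the rebate
print(price_beat, after_mir)
```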
ID: 30079
Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30081 - Posted: 19 May 2013, 4:55:04 UTC - in response to Message 30079.  

Newegg just had the MSI 650 Ti (non OC) on sale for $84.19 shipped AR. Unfortunately the sale just ended yesterday, only lasted a day or two.
ID: 30081
Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30617 - Posted: 1 Jun 2013, 19:38:53 UTC - in response to Message 28645.  
Last modified: 1 Jun 2013, 19:41:07 UTC

The GTX 650 Ti is twice as fast as the GTX 650, and costs about 35% more. It's well worth the extra cost.

Well, it's been a few short months and it looks like the 650 Ti has had its day at GPUGrid. While it's very efficient, it can no longer (in non-OCed form) make the 24hr cutoff for the crazy long NATHAN_KIDc22 WUs. So I suspect the 650 Ti Boost and the 660 are the next victims to join the DOA list. Just a warning :-(

http://www.gpugrid.net/workunit.php?wuid=4490227
ID: 30617
Vagelis Giannadakis

Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Message 30620 - Posted: 1 Jun 2013, 21:05:38 UTC - in response to Message 30617.  

I've completed 4 NATHAN_KID WUs with my stock-clocked 650ti all in ~81K secs (~22.5 hours). I am about to finish my fifth, also expected to take ~22.5 hours.

22.5 hours is pretty close to the 24h window, so one has to be careful with cache settings. I've set my minimum work buffer to 0.
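A back-of-the-envelope check of that margin, using the ~81K s runtime quoted above:

```python
# Margin left inside GPUGrid's 24h bonus window for an ~81,000 s
# NATHAN_KID run on a stock GTX 650 Ti (figures from the post above).
runtime_s = 81_000
hours = runtime_s / 3600       # 22.5 h of crunching
margin_h = 24 - hours          # only 1.5 h of slack for queueing/upload,
print(hours, margin_h)         # hence the work buffer set to 0
```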

Maybe something just slowed down crunching for this WU?

One of mine: http://www.gpugrid.net/result.php?resultid=6912798

Comparing the values for "time per step", it is clear all the difference in total time was because of a greater time per step. Maybe you had some application eating up GPU cycles?
ID: 30620
Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30624 - Posted: 1 Jun 2013, 22:00:48 UTC - in response to Message 30620.  

Maybe something just slowed down crunching for this WU?

Comparing the values for "time per step", it is clear all the difference in total time was because of a greater time per step. Maybe you had some application eating up GPU cycles?

No, it's a machine that's currently dedicated to crunching. You're running Linux, which is about 15% faster than Win7-64 at GPUGrid according to SKG. That's the difference, and even then you would have to micromanage, and still you don't always make the 24 hour cutoff:

Sent: 25 May 2013 | 7:52:09 UTC
Returned: 26 May 2013 | 9:50:00 UTC
Completed and validated - run time 81,103.17 s
Granted 139,625.00 credits out of 167,550.00
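For what it's worth, those timestamps confirm the missed cutoff (a quick sketch; the granted/full credit figures are taken straight from the result listing above):

```python
from datetime import datetime

# Checking the quoted WU against GPUGrid's 24h bonus cutoff.
sent = datetime(2013, 5, 25, 7, 52, 9)
returned = datetime(2013, 5, 26, 9, 50, 0)
elapsed_h = (returned - sent).total_seconds() / 3600
print(round(elapsed_h, 2))           # ~25.96 h -> past the 24 h cutoff
print(round(139_625 / 167_550, 3))   # ~0.833 of the full credit granted
```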
ID: 30624
Vagelis Giannadakis

Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Message 30629 - Posted: 2 Jun 2013, 7:10:23 UTC - in response to Message 30624.  

I see, yes, maybe it is because of Linux. Maybe you could cut some time with a mild OC? Or maybe you could install Linux? :)

I missed the 24h window for my first NATHAN_KID, but that was before setting the min work buffer to 0. Since setting it to 0, it's been working like clockwork.

At least, until they make WUs bigger! I hope not, at least not in the immediate future. There aren't that many people with GTX 660s out there.
ID: 30629
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30632 - Posted: 2 Jun 2013, 9:47:30 UTC - in response to Message 30629.  
Last modified: 2 Jun 2013, 10:46:37 UTC

For a long time the difference between Linux and XP was very small (<3%) and XP was around 11% faster than Vista/W7/W8. However since the new apps have arrived it's not as clear. Some have reported big differences and some have reported small differences. The question is why?

As well as the different apps in use (6.18, 6.52, 6.49 and 6.53 - the last 2 Betas), there have been several different WU types (NATHAN_KIDc22, GIANNI_VIL1, SDOERR_2HDQc, NATHAN_dhfr36, NOELIA_klebe), and the GPU's in use are CC2.0, 2.1, 3.0 and 3.5 for the latest Betas. Additionally, we know that there are bandwidth &/or cache issues with some of the GF600 range, and our established perceptions regarding the performance of older cards are open to debate.

So different apps &/or WU's might perform slightly differently on different operating systems (and we tend to be a bit generic when referring to Linux; some versions may be faster than others). Apps and WU's may also perform differently on the different GPU architectures, and there might even be some performance variation due to GDDR bandwidth for the GF600's.

I had a quick look at NOELIA_klebe WU's on a GTX650Ti Boost and found a ~4% difference between Linux and a 2008R2 server (same system/hardware). The difference used to be around 5%, so that's still roughly in line.

I also looked at a few NATHAN_dhfr36 WU's on a Linux SKT775 2.66GHz PCIE2 1066 DDR3 HDD system vs a W7 i7-3770K @ 4.2 GHz PCIE3 2133MHz DDR3 SSD, again on the GTX650TiBoost.
Linux was only 2% faster, but there are explanations: the GPU's frequency was probably slightly higher under W7, and the CPU usage for these GPU's is ~100%, so processing power probably does have a role to play, as does PCIE3 vs PCIE2, system memory and even the drive. Individually these are not much, and not particularly important, but collectively they matter, as the effect is cumulative.
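The cumulative-effect argument can be illustrated with purely hypothetical per-component gains (the percentages below are made up for illustration, not measured):

```python
# Small per-component speed-ups compound multiplicatively.
# NOTE: these gain figures are illustrative assumptions, not measurements.
gains = {
    "faster CPU clock":  1.01,   # assumed +1%
    "PCIE3 over PCIE2":  1.01,   # assumed +1%
    "faster system RAM": 1.005,  # assumed +0.5%
    "SSD over HDD":      1.002,  # assumed +0.2%
}
total = 1.0
for factor in gains.values():
    total *= factor
print(round((total - 1) * 100, 1))  # ~2.7% combined from ~0.2-1% parts
```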
ID: 30632
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 30655 - Posted: 4 Jun 2013, 20:16:01 UTC - in response to Message 30617.  

So I suspect the 650 TI Boost and the 660 are the next victims to join the DOA list. Just a warning :-(

That's why I prefer few large GPUs here over more smaller ones, as long as the price does not become excessive (GTX680+). However, I also think there's no need to increase the WU sizes too fast.

MrS
ID: 30655
GoodFodder

Joined: 4 Oct 12
Posts: 53
Credit: 333,467,496
RAC: 0
Message 30659 - Posted: 4 Jun 2013, 21:41:49 UTC
Last modified: 4 Jun 2013, 22:02:59 UTC

My GTX 650 Ti's are running NATHAN_KIDc22 in less than 70K secs with a mild OC of 1033/1350 - on Win XP, mind. Nathans seem to be very much CPU-bound from what I have seen - have you tried upping the process priority to 'Normal' and, if only using one GPU per machine, setting the CPU affinity with something like ImageCFG?
ID: 30659
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30881 - Posted: 19 Jun 2013, 0:29:58 UTC - in response to Message 30659.  
Last modified: 19 Jun 2013, 4:48:18 UTC

Got some PCIE risers to play with :))
Very nice, so long as you don't mind fat GPU's hanging out of a case!

On my main system (which, like many, only has 4 PCIE power connectors) the top slot is occupied by a GTX660Ti (slot 0; PCIE 3 X8), the next with a GTX660 (slot 1; PCIE 3 X8), and now the third with a GTX650TiBoost (slot 2; PCIE 2 X4).

The memory controller load of the Boost is only 22%, the GTX660's memory controller load is 26% and the GTX660Ti's memory controller load is 36%.

Of course Boinc reports three GTX650Ti Boosts!
ID: 30881
tomba

Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 30882 - Posted: 19 Jun 2013, 6:30:12 UTC - in response to Message 30881.  

Got some PCIE risers to play with :))

I can't visualize what that looks like, but how about this for an alternative...

I recently upgraded to a GTX 660, so my old GTX 460 now sits in its box.

Is there an adapter I can mount in a PCI slot, on top of which I mount the 460? I have a 620W PSU and four PCIE power connectors.
ID: 30882


©2026 Universitat Pompeu Fabra