Message boards : Graphics cards (GPUs) : NVidia GTX 650 Ti & comparisons to GTX660, 660Ti, 670 & 680
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
You have raised a few interesting points.

Different operating systems perform differently here. Linux can be up to ~5% faster than XP, and XP is ~11% faster than Vista and W7. I think W8 is roughly the same as W7 in terms of GPU performance. 2003R2 servers are slightly faster than XP (but only 1 to 3% the last time I measured it), and 2008 servers are ~5% slower than XP (but a bit better than W7, except when it comes to drivers). Obviously GPU utilization is higher with the faster operating systems.

I suppose I should have taken my PSU efficiency (91%+) into consideration. The one big problem with these measurements is that these WU's use the CPU and the GPU, so you are not measuring the GPU running alone. You can't accurately account for the CPU's power usage; running a CPU WU from another project and taking the CPU power usage from that is not accurate - you can see up to 30W difference in power consumption running different CPU WU's. How much power the Nathan WU's actually draw is open to debate; I suspect it's a good bit less than the average CPU WU would draw. For reference:

0 GPUGrid WU's + 1 BoincSimap CPU WU – system usage 91W
0 GPUGrid WU's + 2 BoincSimap CPU WU's – system usage 104W
0 GPUGrid WU's + 1 Ibercivis CPU WU – system usage 93W
0 GPUGrid WU's + 2 Ibercivis CPU WU's – system usage 106W
0 GPUGrid WU's + 1 Climate CPU WU – system usage 95W
0 GPUGrid WU's + 2 Climate CPU WU's – system usage 112W

Yeah, that was too easy - Power Target it is. Are there any tools that can tell you what your GPU's Power Target actually is? I noticed that with MSI Afterburner I cannot unlock the Core Voltage for the GTX650TiBoost, but I can for the GTX660. I can move the Power Limit for both, but the last time I played with that it was really inaccurate. GPU-Z is telling me that my power consumption is ~96% of TDP for the 660 and ~95% of TDP for the 650TiBoost, but that just matches Afterburner's power percentage.

I've tweaked things:
660: Core Voltage +12mV, Power Limit 109%, Core Clock +78MHz (a multiple of 13!), Memory Clock +50MHz. GPU power % now 98%, GPU power usage ~92% (with 2 CPU WU's), core clock 1137MHz, memory clock 3055MHz.
650TiBoost: Core Voltage (can't budge it), Power Limit 109%, Core Clock +78MHz (a multiple of 13!), Memory Clock +55MHz. GPU power % now 98%, GPU power usage ~93%, core clock 1202MHz, memory clock 3110MHz.

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
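As a rough illustration of the baseline-subtraction problem described in that post, here is a minimal sketch. The 91% PSU efficiency and the 104 W CPU-only reading are the figures above; the wall reading with a GPU task running is an invented placeholder, and the whole method assumes the CPU baseline doesn't change once the GPU task starts, which is exactly the weak point being discussed.

```python
# Rough estimate of the GPU-only draw from wall (AC) readings.
# Assumes the CPU-only baseline still applies while the GPU task runs,
# which, as noted above, is the questionable part of this method.
PSU_EFFICIENCY = 0.91   # "91%+" from the post

def gpu_power_estimate(wall_with_gpu_task, wall_cpu_only_baseline,
                       psu_efficiency=PSU_EFFICIENCY):
    """Estimated DC watts drawn by the GPU alone."""
    ac_delta = wall_with_gpu_task - wall_cpu_only_baseline
    return ac_delta * psu_efficiency  # AC delta scaled to DC at the PSU output

# The 104 W baseline is the "2 BoincSimap CPU WU's" reading above;
# the 240 W wall figure with a GPUGrid WU running is purely hypothetical.
print(round(gpu_power_estimate(240, 104), 1), "W (approximate)")
```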
Joined: 28 Jul 12 | Posts: 819 | Credit: 1,591,285,971 | RAC: 0
Jim, if a GPU has a TDP of 140W and while running a task it is at 95% power, then the GPU is using 133W. To the GPU it's irrelevant how efficient the PSU is; it still uses 133W. However, to the designer this is important. It shouldn't be a concern when buying a GPU, but when buying or upgrading a system it is.

OK, I was measuring it at the AC input, as mentioned in my post. Either should work to get the card power, though if you measure the AC input you need to know the PSU efficiency, which is usually known these days for the high-efficiency units. (I trust a Kill-A-Watt more than the circuitry on the cards for measuring power, but that is just a personal preference.)
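The two ways of getting at the card's power being compared here boil down to simple arithmetic. A small sketch, where the 140 W TDP and 95% power reading are the figures from the post and the Kill-A-Watt reading is an invented placeholder:

```python
# 1) From the card's own sensors: TDP multiplied by the reported power %.
tdp_watts = 140.0
power_fraction = 0.95
card_power = tdp_watts * power_fraction      # 133 W, as stated above

# 2) From a wall (AC) meter: scale the reading by PSU efficiency to get the
#    DC draw of the whole system, then subtract everything that isn't the GPU.
wall_reading_watts = 250.0    # hypothetical Kill-A-Watt figure
psu_efficiency = 0.91
system_dc_watts = wall_reading_watts * psu_efficiency

print(f"Card power (sensor method): {card_power:.0f} W")
print(f"Whole-system DC draw (wall method): {system_dc_watts:.1f} W")
```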
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
My GTX660 really doesn't like being overclocked; stability was very poor when crunching, even with a very low OC. It's definitely not worth a 1.3% speed-up (task return time) if the error rate rises even slightly, and my error rate rose a lot. This might be down to it being a reference GTX660, or to it driving the display; I hadn't been using the system for a bit, and with the GPU barely overclocked a WU failed within a minute of me using the system, after 6h and with <10min to go! It's been reset to stock.

On the other hand, the GTX650TiBoost takes a modest OC very well, and has returned WU's with the shaders up to 1202MHz (the same as my GTX660Ti), albeit for only a 3.5% decrease in runtime. I dare say a 5%+ performance increase is readily achievable. However, I'm using W7; I would get more than that just by sticking it in an XP rig, and more again using Linux. Also, in XP OCing might not be as beneficial, as the GPU would already be running at ~99%. Ditto for Linux.

It's looking like a FOC GTX660 is the best mid-range card to invest in.

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
Joined: 21 Feb 09 | Posts: 497 | Credit: 700,690,702 | RAC: 0
It's looking like a FOC GTX660 is the best mid-range card to invest in.

What's "FOC"?
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Factory Over Clocked.
Joined: 21 Feb 09 | Posts: 497 | Credit: 700,690,702 | RAC: 0
Factory Over Clocked.

Thank you!
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Actually, the higher the GPU utilization, the more a GPU core OC should benefit performance, because every additional clock is doing real work, whereas at lower utilization levels only a fraction of the added clocks will be used.

MrS
Scanning for our furry friends since Jan 2002
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Yeah, you're right. I was thinking that you wouldn't be able to OC as much to begin with, but a 5% OC at 99% GPU utilization gains more than a 5% OC at 88% utilization: 105% of 99% minus 99% is 4.95%, versus 105% of 88% minus 88%, which is 4.4%.

Overclocking the GPU core doesn't actually improve the GPU utilization, it just exploits what's there. To improve the GPU utilization you have to address other bottlenecks, such as the CPU (higher clocks and greater availability), PCIE (PCIE3 > PCIE2, x16 > x8 > x4) and the memory controller load/GPU memory bandwidth (OC the GDDR, use Virtu if your board is licensed and capable). Faster system memory and disk might also help a tiny amount.

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
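Written out as a tiny sketch (the 5% OC and the two utilization levels are the ones used above):

```python
# Expected throughput gain from a core OC, assuming the extra clocks only
# help in proportion to how busy the GPU already is.
def oc_gain(core_oc_fraction, gpu_utilization):
    return core_oc_fraction * gpu_utilization

for util in (0.99, 0.88):
    print(f"5% OC at {util:.0%} utilization -> {oc_gain(0.05, util):.2%} gain")
# 5% OC at 99% utilization -> 4.95% gain
# 5% OC at 88% utilization -> 4.40% gain
```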
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Just meant as a very rough guide, but it serves to highlight price variation and the effect on performance per purchase price.

£:
GTX660Ti - 100% - £210
GTX660 - 88% - £153 (73% of the cost of a GTX660Ti) – 20.8% better performance/£
GTX650Ti Boost - 79% - £138 (66% cost) – 19.7% better performance/£
GTX650Ti - 58% - £110 (52% cost) – 11.5% better performance/£

$ (from Beyond's post):
GTX660Ti - 100% - $263
GTX660 - 88% - $165 (63% cost) – 39.7% better performance/$
GTX650Ti Boost - 79% - $162 (62% cost) – 27.4% better performance/$
GTX650Ti - 58% - $120 (46% cost) – 26.1% better performance/$

€ (from a site MrS linked to):
GTX660Ti - 100% - €229
GTX660 - 88% - €160 (70% cost) – 25.7% better performance/€
GTX650Ti Boost - 79% - €129 (56% cost) – 41.1% better performance/€
GTX650Ti - 58% - €104 (45% cost) – 28.8% better performance/€

CAD $ (matlock):
GTX660Ti - 100% - $300
GTX660 - 88% - $220 (73% cost) – 20.5% better performance/CAD
GTX650Ti Boost - 79% - $180 (60% cost) – 31.6% better performance/CAD
GTX650Ti - 58% - $150 (50% cost) – 16% better performance/CAD

Going by these figures the GTX660 is the best value for money in the UK and the US, but the 650TiBoost is the best value for money in Germany (Euro) and Canada. Beyond's $84 GTX650Ti, at roughly 32% of the cost of a GTX660Ti, offers around 81% better performance/$. As long as you have the space, such bargains are great value.

I will fill this out a bit later.

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
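For anyone wanting to redo these sums with local prices, the calculation above is just relative performance divided by relative cost. A sketch using the US dollar column (small differences from the percentages above come from rounding the cost ratios):

```python
# Performance per purchase price relative to a GTX 660 Ti (US $ figures above).
cards_usd = {                     # (relative performance, street price in $)
    "GTX 660 Ti":       (1.00, 263),
    "GTX 660":          (0.88, 165),
    "GTX 650 Ti Boost": (0.79, 162),
    "GTX 650 Ti":       (0.58, 120),
}
_, ref_price = cards_usd["GTX 660 Ti"]

for name, (perf, price) in cards_usd.items():
    cost_ratio = price / ref_price            # e.g. 165 / 263 ~= 0.63
    better = perf / cost_ratio - 1            # e.g. 0.88 / 0.63 - 1 ~= 39.7%
    print(f"{name}: {better:+.1%} performance/$ vs GTX 660 Ti")
```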
Joined: 12 Dec 11 | Posts: 34 | Credit: 86,423,547 | RAC: 0
I think it's best to stick with one manufacturer when comparing prices. This also includes reference vs OC models. Looking at the very lowest prices isn't always great, as I don't want another Zotac (my 560 Ti 448 was very loud and hot). There are also mail-in rebates, but I tend to ignore those when comparing prices.

Memory Express is a retailer in Western Canada that has some of the best prices in the area, and they will also price-beat other stores including Newegg. Here are the prices in Canadian dollars for the Asus DirectCU II OC line of 600-series cards (without MIR and without price-beat):

660Ti - $300
660 - $220
650Ti Boost - $180
650Ti - $150

By using their price beat (beating $214.99 by 5%), I just picked up an Asus 660 for $204. I also have a MIR I can send in for another $20 off. $184 for a very high quality card. It runs cool and quiet, unlike my old Zotac.
Beyond | Joined: 23 Nov 08 | Posts: 1112 | Credit: 6,162,416,256 | RAC: 0
Newegg just had the MSI 650 Ti (non-OC) on sale for $84.19 shipped AR. Unfortunately the sale ended yesterday; it only lasted a day or two.
Beyond | Joined: 23 Nov 08 | Posts: 1112 | Credit: 6,162,416,256 | RAC: 0
The GTX 650 Ti is twice as fast as the GTX 650, and costs about 35% more. It's well worth the extra cost.

Well, it's been a few short months and it looks like the 650 Ti has had its day at GPUGrid. While it's very efficient, it can no longer (in non-OCed form) make the 24hr cutoff for the crazy long NATHAN_KIDc22 WUs. So I suspect the 650 Ti Boost and the 660 are the next victims to join the DOA list. Just a warning :-(

http://www.gpugrid.net/workunit.php?wuid=4490227
Joined: 5 May 13 | Posts: 187 | Credit: 349,254,454 | RAC: 0
I've completed 4 NATHAN_KID WUs with my stock-clocked 650Ti, all in ~81K secs (~22.5 hours). I'm about to finish my fifth, also expected to take ~22.5 hours. 22.5 hours is pretty close to the 24h window, so one has to be careful with cache settings; I've set my minimum work buffer to 0.

Maybe something just slowed down crunching for this WU? One of mine: http://www.gpugrid.net/result.php?resultid=6912798

Comparing the values for "time per step", it is clear that all the difference in total time comes from a greater time per step. Maybe you had some application eating up GPU cycles?
Beyond | Joined: 23 Nov 08 | Posts: 1112 | Credit: 6,162,416,256 | RAC: 0
Maybe something just slowed down crunching for this WU?

No, it's a machine that's currently dedicated to crunching. You're running Linux, which is about 15% faster than Win7-64 at GPUGrid according to SKG. That's the difference, and even then you have to micromanage, and still you don't always make the 24 hour cutoff:

Sent: 25 May 2013 | 7:52:09 UTC
Reported: 26 May 2013 | 9:50:00 UTC
Completed and validated - run time 81,103.17 s - 139,625.00 credits out of 167,550.00
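Taking those two timestamps as the sent and reported times, a quick check of the turnaround shows why the bonus credit was lost:

```python
from datetime import datetime, timedelta

sent = datetime(2013, 5, 25, 7, 52, 9)       # 25 May 2013, 7:52:09 UTC
reported = datetime(2013, 5, 26, 9, 50, 0)   # 26 May 2013, 9:50:00 UTC

turnaround = reported - sent
print(turnaround)                                              # 1 day, 1:57:51
print("within 24h bonus window:", turnaround <= timedelta(hours=24))  # False
```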
Joined: 5 May 13 | Posts: 187 | Credit: 349,254,454 | RAC: 0
I see; yes, maybe it is because of Linux. Maybe you could cut some time with a mild OC? Or maybe you could install Linux? :)

I missed the 24h window for my first NATHAN_KID, but that was before setting the min work buffer to 0. Since setting it to 0, it's been working like clockwork. At least, until they make the WUs bigger! I hope not, at least not in the immediate future. There aren't that many people with GTX 660s out there.
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
For a long time the difference between Linux and XP was very small (<3%), and XP was around 11% faster than Vista/W7/W8. However, since the new apps arrived it's not as clear; some have reported big differences and some have reported small differences. The question is why?

As well as the different apps in use (6.18, 6.52, 6.49 and 6.53 - the last two being Betas), there have been several different WU types (NATHAN_KIDc22, GIANNI_VIL1, SDOERR_2HDQc, NATHAN_dhfr36, NOELIA_klebe), and the GPU's in use are CC2.0, 2.1, 3.0 and 3.5 for the latest Betas. Additionally, we know that there are bandwidth and/or cache issues with some of the GF600 range, and our established perceptions regarding the performance of older cards are open to debate. So different apps and/or WU's might perform slightly differently on different operating systems (and we tend to be a bit generic when referring to Linux; some versions may be faster than others). Apps and WU's may also perform differently on the different GPU architectures, and there might even be some performance variation due to GDDR bandwidth on the GF600's.

I had a quick look at NOELIA_klebe WU's on a GTX650TiBoost and found a ~4% difference between Linux and a 2008R2 server (same system/hardware). The difference used to be around 5%, so that's still roughly in line. I also looked at a few NATHAN_dhfr36 WU's on a Linux SKT775 2.66GHz PCIE2 1066MHz DDR3 HDD system vs a W7 i7-3770K @ 4.2GHz PCIE3 2133MHz DDR3 SSD system, again on the GTX650TiBoost. Linux was only 2% faster, but there are explanations: the GPU's frequency was probably slightly higher under W7, the CPU usage for these GPU's is ~100% so processing power probably does have a role to play, as does PCIE3 vs PCIE2, system memory and even the drive. Individually these are not much, and not particularly important, but collectively they are important, as the effect is cumulative.

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
So I suspect the 650 Ti Boost and the 660 are the next victims to join the DOA list. Just a warning :-(

That's why I prefer a few large GPUs here over many smaller ones, as long as the price does not become excessive (GTX680+). However, I also think there's no need to increase the WU sizes too fast.

MrS
Scanning for our furry friends since Jan 2002
Joined: 4 Oct 12 | Posts: 53 | Credit: 333,467,496 | RAC: 0
My GTX 650 Ti's are running NATHAN_KIDc22 in less than 70k secs with a mild OC of 1033/1350 - Win XP, mind. The NATHANs seem to be very much CPU-bound from what I have seen - have you tried upping the process priority to 'Normal' and, if only using one GPU per machine, setting the CPU affinity with something like ImageCFG?
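If you'd rather not patch the executable with ImageCFG, the same two tweaks (priority and CPU affinity) can be applied to a running process. A sketch using the third-party psutil module; the "acemd" name is only a placeholder for whatever the GPUGrid executable is called on your machine, and pinning to core 0 is just an example:

```python
# Raise the priority of the GPUGrid process and pin its CPU affinity.
# Needs the third-party psutil package and sufficient privileges.
import psutil

TARGET_NAME = "acemd"   # placeholder: match part of your actual process name
PIN_TO_CORES = [0]      # example: pin to the first logical core

for proc in psutil.process_iter(["name"]):
    if TARGET_NAME.lower() in (proc.info["name"] or "").lower():
        proc.nice(psutil.NORMAL_PRIORITY_CLASS)  # Windows priority class constant
        proc.cpu_affinity(PIN_TO_CORES)
        print(f"adjusted pid {proc.pid}: {proc.info['name']}")
```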
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Got some PCIE risers to play with :))

Very nice, so long as you don't mind fat GPU's hanging out of a case! On my main system (which, like many, only has 4 PCIE power connectors) the top slot is occupied by a GTX660Ti (slot 0; PCIE3 x8), the next by a GTX660 (slot 1; PCIE3 x8), and now the third by a GTX650TiBoost (slot 2; PCIE2 x4). The memory controller load of the Boost is only 22%, the GTX660's is 26% and the GTX660Ti's is 36%. Of course BOINC reports three GTX650Ti Boosts!

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
Joined: 21 Feb 09 | Posts: 497 | Credit: 700,690,702 | RAC: 0
Got some PCIE risers to play with :))

I can't visualize what that looks like, but how about this for an alternative... I recently upgraded to a GTX 660, so my old GTX 460 now sits in its box. Is there an adapter I can mount in a PCI slot, on top of which I mount the 460? I have a 620W PSU and four PCIE power connectors.