Message boards : Graphics cards (GPUs) : NVidia GTX 650 Ti & comparisons to GTX660, 660Ti, 670 & 680
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
The GTX660Ti has one less ROP cluster (of ROPs, L2 cache and memory) than a GTX670, so it drops from 32 ROPs with 512KB L2 cache and quad-channel memory to 24 ROPs, 384KB L2 cache and triple-channel memory. I don't know how you can separate these cluster elements, but given what MrS said I accept that any slow-down isn't ROP-count based. I reckon NVidia thinks 384K is sufficient to support the reduced memory bandwidth, though I'm not sure you can really separate the two.

We know that the GTX670 performs better in the more memory-intensive games, so that leaves the memory bandwidth looking like the culprit. The 3GB cards aren't any faster either. I would go along with the idea of uneven memory requirements. Something is stopping the memory controller load going past ~41%. I'm currently at 38%, but that's with the CPU usage at 100%. When I disabled CPU tasks and ran two WU's it only went up to ~41%, and that was with 99% GPU utilization.

GoodFodder, I think we ran some >1GB Long WU's a few months back.

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
> I suspect the performance improvement of the gtx 660 over say the gtx 650ti is due to the larger cache size (384K v 256K) rather than bandwidth (gtx 670/680 incidentally has 512K).

Don't forget the number of shaders, which is the most basic performance metric! Still, the L2 cache coupled to the memory controller and ROP block is something which could make a difference here. Not for games, as things are mostly streamed there and bandwidth is key, but for number crunching caches are often quite important.

I'm currently using EVGA Precision X to set clock speeds, and it won't let me underclock the GPU memory by more than 250 (real) MHz. I suspect I could get memory controller utilization above 41% if I could set lower clocks, but I'm not too keen on installing another one of these utilities :p

MrS
Scanning for our furry friends since Jan 2002
Joined: 28 Jul 12 · Posts: 819 · Credit: 1,591,285,971 · RAC: 0
> I would like to see what a 660 can do with a Noelia unit (when they start working again) and see if the more complex task takes a proportionate time increase.

My first two Noelias are in, and they ran 12-13+ hours on my two GTX 660s, each supported by one virtual core of an i7-3770 (not overclocked). The other six cores of the CPU are now running WCG/CEP2.

063ppx25x2-NOELIA_klebe_run-0-3-RND3968_1 12:05:29 (05:47:34) 5/7/2013 5:46:00 AM 5/7/2013 6:05:05 AM 0.627C + 1NV 47.91 Reported: OK *
041px24x3-NOELIA_klebe_run-0-3-RND0769_0 13:15:57 (06:17:45) 5/7/2013 5:09:05 AM 5/7/2013 5:22:42 AM 0.627C + 1NV 47.46 Reported: OK *

The Nathans now run a little over 6 hours on this setup.
http://www.gpugrid.net/results.php?hostid=150900&offset=0&show_names=1&state=0&appid=

Also, I run a GTX 650 Ti, and its first Noelia took 18 hours 14 minutes.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
As some people suggested, it appears that the GTX660Ti is relatively better (slightly) at Noelia's WU's than at Nate's. My GTX660Ti was 25% faster than a GTX660; however, for most of the run I only crunched one CPU WU, which enabled the Noelia task to use 94% of the GPU. Using the CPU less accounts for at least 4%, but probably >6% compared to crunching CPU tasks on 6 CPU threads. My CPU is also at 4.2GHz, so that might make a little difference compared to stock clocks, possibly bringing it to 9 or 10%. If you run a WU without any CPU tasks running, you would find out.

This might have been better posted in the 'NOELIAs are back!' thread, but I might rename this one, as we have drifted into a GPU runtime comparison thread.

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
Joined: 12 Dec 11 · Posts: 34 · Credit: 86,423,547 · RAC: 0
How does the 650 Ti BOOST perform compared to the 650 Ti and 660?

| Card | Shaders | ROPs | Memory bus | L2 cache |
|---|---|---|---|---|
| 650 Ti | 768 | 16 | 128-bit | 256 KB |
| 650 Ti BOOST | 768 | 24 | 192-bit | 384 KB |
| 660 | 960 | 24 | 192-bit | 384 KB |

It seems the 650 Ti and 660 are both great for price/performance, but I may get the BOOST as it is about $40 cheaper than the 660 and may perform close to it.
Joined: 4 Oct 12 · Posts: 53 · Credit: 333,467,496 · RAC: 0
Unfortunately I have not seen any stats for the 650 Ti BOOST, but as GPUGrid appears to be mainly shader-bound I would personally go for either a base 650 Ti or the 660. I would expect the BOOST to be only marginally quicker than a base 650 Ti. It really depends on your budget, the price differences in your area, the supporting hardware for the card, and whether the machine will be used for purposes other than crunching.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
The 650Ti Boost is basically a 660 with fewer shaders. While the shaders are historically the most important component, they have been trimmed down significantly from CC2.0 and CC2.1 cards to CC3.0 cards. This means there is more reliance on other components, especially the core but even the CPU, so the relative importance of the shaders is open to debate. It certainly seems likely that the GPU memory bandwidth, and probably the different cache memories, matter more on the GeForce 600 GPUs. I think the new apps have exposed this (they generally optimize better for the newer GPUs), making initial assessments of performance outdated.

There seems to be some performance difference with the different WU types, but that would need to be better explored and reported on before we could assess its importance when it comes to choosing a GPU. Even then, it's down to the researchers to decide what sort of research they will be doing long term. I'm expecting more Noelia-type WU's, as they extend the research boundaries, but that's just my hunch.

The Boost is definitely worth a look, but whether it's closer to a 660 or a 650Ti remains to be seen. I would speculate that it performs slightly better than a 650Ti but not that close to a 660; the Boost has fewer shaders to feed, so the memory won't be that important (proportionately less so). A comparison of the 650Ti to a 650Ti Boost should reveal the importance of cache (at least the difference between 256 and 384K). As with any GPU, it comes down to price, performance and running cost.

These 'mid-range' GPUs are very interesting when it comes to crunching. They let normal computer users, especially light gamers, participate in the project (rather than just the enthusiasts). I would include the GTX 660 (OEM) models as interesting mid-range GPUs, though they push the high-end bracket with 1152 shaders, and we don't know the price (OEM). If you include the different memory amounts these mid-range GPUs come with, plus the OEM cards, there are 7 to choose from. I would still like to see non-OEM 1152-shader GPUs to fill out the range; one version has a 256-bit bus, so it should match the GTX660Ti (192-bit). I'm sure manufacturers could make a sweet one.

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
Pulled a GTX660Ti and a GTX470 from a system to test a GTX660 and a GTX650Ti Boost: i7-3770K @ 4.2GHz, with the GTX660 in the top PCIE3 slot (0) and the GTX650Ti Boost in PCIE3 slot 1. Running 5 CPU WU's and two GPUGrid WU's, VM off.

While running two Nathan WU's, the reported GPU power for both cards stayed around 94%. With the fans on auto the temperatures stabilized at ~62degC, with fan speeds at 56% and 57%. GPU usage stayed around 93 to 94%. The GTX660's core clock is 1071MHz while the GTX650Ti Boost's is 1124MHz. The GTX660's memory clock is listed as 3005MHz and the GTX650Ti Boost's as 3055MHz. The GTX660 is using 453MB GDDR (~62MB for the system), and the GTX650Ti Boost is using 380MB. Memory controller loads are 31% for the 660 and 26% for the 650Ti Boost.

At 5% into the run (which is usually a very accurate indication of expected run times for GPUGrid WU's, while the estimated remaining time isn't) the GTX660 had taken 18min 10sec; expected run time ~22,000sec. At 5% into the run the GTX650Ti Boost had taken 20min 9sec, so its expected run time is ~24,200sec. The GTX660 is therefore about 11% faster than the GTX650Ti Boost; at 45% this still looked about right. Considering the shader count and the core frequency (but nothing else) you would expect the GTX660 to be ~19% faster: shaders 960/768 = 1.25, core freq. 1071/1124 = 0.95. So it appears there are additional performance benefits, probably from the shader cache and the memory bandwidth relative to shader count.

Mid-range GPU performances against the first high-end GPU (running Nathan WU's):

GTX660Ti – 100% – £210
GTX660 – 88% – £153 (73% of the cost of a GTX660Ti) – 20.5% better performance/£
GTX650Ti Boost – 79% – £138 (66%) – 19.6% better performance/£
GTX650Ti – 58% – £110 (52%) – 11.5% better performance/£

At the above prices the GTX660 offers the best performance/purchase price, but is only slightly better than the GTX650Ti Boost. Prices are subject to change.

Running costs: these cards all settle at a similar power percentage (91 to 95), so we should be able to go by the reference TDPs and come up with a reasonably accurate measure of performance/Watt:

GTX660Ti – 100% – 150W – 100% (performance/Watt)
GTX660 – 88% – 140W – 94%
GTX650Ti Boost – 79% – 134W – 88%
GTX650Ti – 58% – 110W – 79%

Actual wattages might change this a bit, but it's roughly what I would expect; the performance/Watt of the higher-spec cards is better. Again the 660 looks quite good.

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
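[Editor's note: for anyone who wants to rerun that arithmetic with local prices, here is a minimal Python sketch. The relative performance, UK price and reference TDP figures are the ones quoted in the post above; swap in your own prices.]

```python
# Relative performance (GTX660Ti = 100), UK price (GBP) and reference TDP (W),
# as quoted in the post above.
cards = {
    "GTX660Ti":       (100, 210, 150),
    "GTX660":         ( 88, 153, 140),
    "GTX650Ti Boost": ( 79, 138, 134),
    "GTX650Ti":       ( 58, 110, 110),
}

base_perf, base_price, base_tdp = cards["GTX660Ti"]

for name, (perf, price, tdp) in cards.items():
    # Both ratios are normalized so the GTX660Ti = 100%.
    perf_per_gbp  = (perf / price) / (base_perf / base_price)
    perf_per_watt = (perf / tdp)   / (base_perf / base_tdp)
    print(f"{name:15s} perf/£ {perf_per_gbp:6.1%}  perf/W {perf_per_watt:6.1%}")
```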
Joined: 4 Oct 12 · Posts: 53 · Credit: 333,467,496 · RAC: 0
Interesting comparison; it would be great to see further results for the various WUs. Out of curiosity, which cards do you have running in there at the moment?

Personally I would leave power and cost out of the equation, for the following reasons:

Pricing - really depends on the area and time of purchase.

TDP - personal experience (as indeed you mentioned) can be a little misleading, as the actual power draw for running a particular app on a particular card depends on a number of factors, e.g. the number of memory modules and chip quality (binning), not to mention Boost (which I hate), which as you know can ramp the voltage up to 1.175V - I have to limit the power target of my 670 to 72% as it gets toasty.

Sorry, I'm rabbiting on, will leave it there. Good show.
Joined: 28 Jul 12 · Posts: 819 · Credit: 1,591,285,971 · RAC: 0
> TDP - personal experience (as indeed you mentioned) can be a little misleading, as the actual power draw for running a particular app on a particular card depends on a number of factors, e.g. the number of memory modules and chip quality (binning), not to mention Boost (which I hate), which as you know can ramp the voltage up to 1.175V - I have to limit the power target of my 670 to 72% as it gets toasty.

I would leave in the TDP - it is useful for comparative purposes, even if it varies in absolute terms for the factors you mentioned.

However, I would add a correction for power supply efficiency. For example, a Gold 90+ power supply would probably run at about 91% efficiency at those loads, so a GTX 660 that measures 140 watts at the AC input (as with a Kill-A-Watt) is actually drawing 140 × 0.91 = 127 watts. That is not particularly important if you are just comparing cards run on the same PC, but it could be if you are comparing your card's measurements with someone else's on another PC, or when building a new PC and choosing a power supply, for example.
Joined: 4 Oct 12 · Posts: 53 · Credit: 333,467,496 · RAC: 0
Yes, upon reflection you're right - TDP is useful for comparative purposes; I was just thinking out loud.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
The TDP is for reference, but it's certainly not a simple consideration. You would really need to list all your components when designing a system if TDP (power usage) is a concern. Key is the PSU: if your PSU is only 80% efficient, its lack of efficiency impacts all the other components. However, if you are buying for a low-end system with only one 6-pin PCIE connector required, then the purchase cost vs. the running cost of such a PSU (and other components) is likely to make it more acceptable. For mid-range systems (with mid-range GPUs) an 85%+ PSU is a reasonable balance. For a high-end system (with more than one high-end GPU) I would only consider a 90%+ efficiency PSU (Professional series).

I just ran the two WU's last night. Today I'm finishing off a Noelia (on the 660) and have started a Nathan WU on the 650Ti Boost. The Nathan WU is at 63% and the anticipated runtime is close to the previous WU's. I will run a few Nathan WU's to get more accurate results, but so far the relative performances (in the previous post) look reasonably accurate.

One thing to note is that most GTX660Ti's are FOC (factory overclocked). My 660 is very much a reference model, with no FOC on GPU or memory. The 650Ti Boost, on the other hand, is an FOC card with a core clock 8.8% over reference and a 1.8% memory FOC. The reason for this is that the GTX660 has one 6-pin PCIE power connector and a TDP of 140W. This is very close to the 150W maximum, so it doesn't give you much headroom for overclocking. The reference TDP for a GTX650Ti Boost is 134W, and having fewer shaders is bound to reduce the power draw. So it lends itself to FOC, as does the GTX660Ti.

Jim, if a GPU has a TDP of 140W and while running a task is at 95% power, then the GPU is using 133W. To the GPU it's irrelevant how efficient the PSU is; it still uses 133W. To the system designer, however, this is important. It shouldn't be a concern when buying a GPU, but when buying or upgrading a system it is. Supplying a 133W draw requires roughly (dividing 133W by the PSU efficiency):

@80% efficiency - ~166W at the wall
@85% efficiency - ~156W at the wall
@91% efficiency - ~146W at the wall

The difference of ~10W between an 80+ and an 85+ PSU, or between the 85% and 91% PSUs, isn't a massive consideration when you have one or two mid-range GPUs. However, when you have 2 or more high-end GPUs it is: two GTX680's (195W TDP) at 90% power usage on an 80% efficient PSU waste roughly 88W just supporting the GPUs, and probably closer to 100W for the whole system. This can be more than halved by using a top PSU. The added benefits are reduced heat radiation, and thus reduced noise from not needing the fans to spin as fast. The heat problem increases exponentially when you add a 3rd or 4th GPU, but you need a more powerful PSU anyway, just for the PCIE power connectors.

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
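[Editor's note: a minimal sketch of that wall-power arithmetic, assuming only the TDPs, reported power percentages and PSU efficiencies quoted above. PSU efficiency = DC out / AC in, so the wall draw is the DC draw divided by efficiency.]

```python
def wall_power(dc_watts: float, psu_efficiency: float) -> float:
    """AC draw at the wall needed to deliver dc_watts to the components."""
    return dc_watts / psu_efficiency

# One GPU: 140W TDP at 95% reported power -> 133W DC
gpu_dc = 140 * 0.95
for eff in (0.80, 0.85, 0.91):
    print(f"{eff:.0%} PSU: {wall_power(gpu_dc, eff):.0f}W at the wall")

# Two GTX680's (195W TDP) at 90% power on an 80% efficient PSU:
# the waste is the difference between the wall draw and the DC draw.
pair_dc = 2 * 195 * 0.90
waste = wall_power(pair_dc, 0.80) - pair_dc
print(f"Waste supporting the pair: {waste:.0f}W")  # ~88W
```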
Beyond · Joined: 23 Nov 08 · Posts: 1112 · Credit: 6,162,416,256 · RAC: 0
A very nice bit of information. Thanks!

> To put these mid-range GPU performances into perspective, against the first high-end GPU (running Nathan WU's):

Prices are a bit different in the States, though. Best prices at Newegg, AR (after rebate), shipped:

650 Ti: $120 (after $10 rebate) (MSI)
650 Ti Boost: $162 (no rebate; the rebate listed at Newegg for this GPU is in error) (MSI)
660: $165 (after $15 rebate) (PNY)
660 Ti: $263 (after $25 rebate) (Galaxy)

So for instance the 650 Ti is only 45.6% of the cost of the 660 Ti here, and the 660 is only 62.7% of the cost of the 660 Ti. Neither the 650 Ti Boost nor the 660 Ti is looking too good (initial purchase) at these prices.

Running costs: I don't think the TDP estimates mean too much for our purposes. Do you have some Kill-A-Watt figures?
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
In the UK the running cost difference for these mid-range GPUs would only be around £10 a year, so yeah, no big deal, and only a consideration if you are getting two or three. In a lot of locations it would be less, and in Germany and a few other EU countries it would be more. So it depends where you are. In the US the prices seem to be stretched out more, with the higher-end GPUs costing relatively more than the lower end, but I expect the GTX650Ti Boost to fall in price; it was only released two or three weeks ago, and there might not be too many to choose from. There is no way I would get one for a few $'s less than a GTX660, which still looks like the best bang for buck.

At present I have both the GTX660 and GTX650Ti Boost in the system, and I'm CPU crunching on a modestly OC'd i7-3770K (337W at the wall, 91% PSU). I would really need to pull both GPUs, take a reading, add one, crunch only on the GPU, take a reading and then do the same for the other GPU to have accurate figures. A bit much for 2 GPUs that have a TDP difference of 6W, but I might do it in a few days, after I get a few more runs in. In the meantime I suppose I could make reasonable estimates based on idle power usage (from review forums)...

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
Joined: 21 Feb 09 · Posts: 497 · Credit: 700,690,702 · RAC: 0
I posted here at the end of April about my plan to upgrade my PSU and my GTX 460. The Antec 620W PSU is up and running (I'm amazed I installed it without screwing up my Dell Studio XPS 435 i7 PC…), and I just ordered an ASUS GTX 660 from Amazon France, the best deal (with free shipping) I could find at €141 (£120, $182), though I did have to pay, in addition, the 20% VAT (value-added tax) the European Union demands. You guys State-side probably complain about 5-6% state taxes…

I've been running my GTX 460 at 850/1700/2000 vs. the stock 675/1350/1800 with no problems. What OC-ing should I try with the 660?

Thanks, Tom
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
I would suggest a very modest OC, and no more. Antec don't make bad PSUs, especially the 600W+ models, but I would still be concerned about pulling too many Watts through the PCIE slot on that motherboard. I suggest the first thing you do is complete a WU to get a benchmark, then nudge up the clocks ever so slightly.

I'll try to resist the urge to OC for a while, to complete more runs and get a more accurate performance table.

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
Joined: 21 Feb 09 · Posts: 497 · Credit: 700,690,702 · RAC: 0
That sounds like very good advice! Thank you.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
Measured wattages:

0 GPUGrid WU's + no CPU WU's – system usage 77W (this includes the GPUs' idle power, 7W each)
1 GPUGrid WU (on the GTX650Ti Boost) + no CPU WU's – 191W
1 GPUGrid WU (on the GTX660) + no CPU WU's – 197W
2 GPUGrid WU's + no CPU WU's – 314W
2 GPUGrid WU's + 5 CPU WU's – 338W

Excluding the GPUs' idle power:
The GTX650Ti Boost used 114W
The GTX660 used 120W (6 Watts more, as expected going by the TDP)

Adding the idle wattage:
The GTX650Ti Boost used 121W
The GTX660 used 127W

Measured when running Nathan WU's at 95% power and 95% GPU utilization (no CPU WU's running). Note that these WU's also use part of the CPU, which increases the power a bit.

These figures suggest that the reported power percentage and GPU utilization can be multiplied together and applied to the TDP to estimate the actual wattage used: 0.95 × 0.95 ≈ 0.90, and 90% of the GTX650Ti Boost's 134W TDP is 121W, while the GTX660's 127W is ~0.9 × 140W. Though I would want confirmation of that from others before I accept it.

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
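[Editor's note: a minimal sketch of that estimate, assuming only the TDPs and monitoring read-outs quoted above; skgiven's own caveat about wanting confirmation applies.]

```python
def estimated_draw(tdp_watts: float, power_pct: float, gpu_util_pct: float) -> float:
    """skgiven's heuristic: reported power % x GPU utilization % x TDP."""
    return tdp_watts * power_pct * gpu_util_pct

# Both cards reported ~95% power and ~95% GPU utilization on Nathan WU's.
print(estimated_draw(134, 0.95, 0.95))  # ~121W, matches the measured GTX650Ti Boost
print(estimated_draw(140, 0.95, 0.95))  # ~126W, close to the measured 127W GTX660
```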
Joined: 4 Oct 12 · Posts: 53 · Credit: 333,467,496 · RAC: 0
Just for comparison:

System: Win XP SP3, G2020, 2x GTX 650 Ti (@1033, 1550), 300W 80+ PSU rated at 83% efficiency at 50% load. Allow ±3W for power meter accuracy.

At the wall:
idle, no GPUs = 30W
CPU load (Intel Burn Test), no GPUs = 40W
0 GPUGrid WU's + no CPU WU's = 42W
1 GPUGrid WU (NATHAN) + no CPU WU's = 112W
1 GPUGrid WU (SDOERR) + no CPU WU's = 113W
2 GPUGrid WU's + no CPU WU's (only has 2 cores) ~ 188W (CPU 100%, GPU 99%)

GTX650Ti idle = 6W
GTX650Ti WU inc. 1 CPU core ~ 83W (at the wall)
est. power @ 83% efficiency for one GTX650Ti WU inc. 1 CPU core = 68.9W
GPU rated TDP 110W, CPU 55W
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
To me it looks like the reported power percentage is calculated relative to the power target rather than the TDP. The former is not reported as widely (though it is in e.g. Anandtech reviews), but it is generally ~15W below the TDP. If this is true, we could easily calculate the real power draw as power target × reported power percentage. Some factory-OC'ed cards have higher power targets built in, though (e.g. mine).

MrS
Scanning for our furry friends since Jan 2002
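[Editor's note: a quick sketch of MrS's hypothesis. The ~15W offset below TDP is MrS's general figure, not a published spec for any particular card, so the power target here is purely illustrative.]

```python
def real_power_draw(power_target_watts: float, reported_power_pct: float) -> float:
    """MrS's hypothesis: the reported power % is relative to the power target."""
    return power_target_watts * reported_power_pct

# Hypothetical example: a card whose power target sits ~15W below a 140W TDP,
# reporting 95% power.
print(real_power_draw(140 - 15, 0.95))  # ~119W
```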