NVidia GPU Card comparisons in GFLOPS peak

Message boards : Graphics cards (GPUs) : NVidia GPU Card comparisons in GFLOPS peak

Previous · 1 . . . 7 · 8 · 9 · 10 · 11 · 12 · 13 . . . 17 · Next

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 27435 - Posted: 26 Nov 2012, 22:47:50 UTC - in response to Message 27433.  

Yeah, this makes judging real-world performance harder, since we seldom know the real clock speed a GPU is running at.

MrS
Scanning for our furry friends since Jan 2002
ID: 27435
Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 27467 - Posted: 29 Nov 2012, 4:38:47 UTC - in response to Message 26621.  
Last modified: 29 Nov 2012, 5:23:30 UTC

Don't buy GTX460 for GPU-Grid now, it's simply not efficient enough any more for 24/7 crunching. I'd rather think about replacing the cooler on your current card with some aftermarket design. That will probably also use 3 slots, but be strong & quiet. MrS

Are the GPUGrid apps still only using 2/3 of the GTX 460 shaders, or has that been fixed?

Edit: Must be fixed since the GPU utilization is showing as 95% with 6 CPU WUs running (on an X6). Big improvement.
ID: 27467
Profile Retvari Zoltan
Joined: 20 Jan 09
Posts: 2380
Credit: 16,897,957,044
RAC: 0
Message 27472 - Posted: 29 Nov 2012, 10:01:08 UTC - in response to Message 27467.  

Are the GPUGrid apps still only using 2/3 of the GTX 460 shaders, or has that been fixed?

Edit: Must be fixed since the GPU utilization is showing as 95% with 6 CPU WUs running (on an X6). Big improvement.

It can't be fixed by GPUGrid, because it comes from the chip's superscalar architecture (there are 50% more shaders than shader-feeding units on a CC 2.1 or CC 3.0 chip). GPU utilization is not equivalent to shader utilization; most likely it shows the utilization of the shader-feeding units. The good news is that you can overclock the shaders (of a CC 2.1 card) further, because the power consumption inflicted by the GPUGrid client on the whole shader array is less than the maximum.
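A rough sketch of where that "2/3 of the shaders" figure comes from, assuming NVIDIA's published CC 2.1 layout (48 cores per SM in three 16-core groups, two warp schedulers, so only 32 cores get fed per cycle unless a scheduler can dual-issue an independent instruction):

```python
# Back-of-the-envelope model of CC 2.1 (GF104/GF114) shader utilization.
# Assumed per NVIDIA's CC 2.1 specs: 48 CUDA cores per SM in three
# 16-core groups, 2 warp schedulers. Without dual-issue (no usable
# instruction-level parallelism), only 2 groups (32 cores) work per cycle.

def cc21_shader_utilization(dual_issue_fraction):
    """Fraction of the 48 cores per SM kept busy, given the fraction of
    cycles on which a scheduler dual-issues an independent instruction."""
    fed_cores = 32 + 16 * dual_issue_fraction
    return fed_cores / 48

# No dual-issue at all: 32/48 = 2/3 of the shaders busy.
print(cc21_shader_utilization(0.0))   # 0.666...
# Perfect dual-issue every cycle: all 48 cores fed.
print(cc21_shader_utilization(1.0))   # 1.0
```

So a monitoring tool that reports the schedulers as 95% busy can coexist with a third of the shader array idling.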
ID: 27472
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 27480 - Posted: 29 Nov 2012, 20:13:43 UTC - in response to Message 27472.  

Zoltan is right. Although it could theoretically be fixed by some funny re-writing of the code, the result would likely be slower overall.

MrS
Scanning for our furry friends since Jan 2002
ID: 27480
Arathrei

Joined: 29 Jun 13
Posts: 1
Credit: 50,025
RAC: 0
Message 31150 - Posted: 1 Jul 2013, 23:48:54 UTC

Another mobile GPU for comparison (a Fermi CC 2.1 part, going by the log line):
2.7.2013 1:14:29 | | CUDA: NVIDIA GPU 0: GeForce GTX 560M (driver version 320.49, CUDA version 5.50, compute capability 2.1, 1536MB, 1244MB available, 625 GFLOPS peak)
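The "GFLOPS peak" in that log line is a theoretical ceiling, essentially CUDA cores x shader clock x 2 flops per FMA. A minimal sketch (the 192-core count is the GTX 560M's published spec; the ~1.63 GHz clock is an assumption back-solved from the 625 GFLOPS figure above, not a verified spec):

```python
def peak_gflops(cuda_cores, shader_clock_ghz, flops_per_clock=2):
    """Theoretical single-precision peak: each core retires one FMA
    (2 flops) per shader clock. This is the ceiling BOINC-style
    estimates report, not sustained application throughput."""
    return cuda_cores * shader_clock_ghz * flops_per_clock

# GTX 560M: 192 cores; a ~1.63 GHz shader clock would reproduce
# the ~625 GFLOPS reported above.
print(round(peak_gflops(192, 1.628)))  # 625
```

Real task throughput lands well below this, for the scheduler-feeding reasons discussed earlier in the thread.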
ID: 31150
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 31190 - Posted: 3 Jul 2013, 17:27:05 UTC - in response to Message 31150.  
Last modified: 3 Jul 2013, 17:41:38 UTC

Firstly, note that this data set is from WUprop and has flaws, but at least it doesn't appear to include erroneous data. It won't be as good as accurate reports from crunchers of their specific cards running apps on similar setups.

Long runs on GeForce 600 series GPU's, Windows app

GPU Computation time (minutes) (min - max) CPU usage (min-max)

NVIDIA GeForce GT 610 10,256.3 (7,321.1-13,191.4) 1.8% (1.3%-2.3%)
NVIDIA GeForce GT 630 4,672.1 (4,672.1-4,672.1) 9.7% (9.7%-9.7%)

NVIDIA GeForce GT 640 2,032.9 (1,713.9-2,839.3) 94.8% (17.1%-99.8%)
NVIDIA GeForce GTX 650 1,725.0 (1,622.7-2,047.0) 99.2% (98.6%-99.6%)
NVIDIA GeForce GTX 650Ti 1,237.7 (518.5-1,914.5) 91.7% (58.8%-99.9%)
NVIDIA GeForce GTX 660 784.6 (352.9-1,045.9) 97.3% (47.6%-100.3%)
NVIDIA GeForce GTX 660Ti 659.5 (312.9-1,348.0) 99.2% (83.0%-102.4%)
NVIDIA GeForce GTX 670 593.9 (455.3-992.8) 98.6% (90.7%-100.2%)
NVIDIA GeForce GTX 680 595.8 (471.4-899.8) 98.4% (80.3%-101.2%)

Long runs on GeForce 500 series cards Windows app

NVIDIA GeForce GTX 550Ti 1,933.7 (1,510.4-2,610.4) 14.8% (3.0%-23.3%)
NVIDIA GeForce GTX 560 1,253.3 (1,090.0-1,820.8) 20.3% (6.0%-27.3%)
NVIDIA GeForce GTX 560Ti 1,001.7 (710.2-2,011.6) 18.4% (6.4%-37.1%)
NVIDIA GeForce GTX 570 870.6 (691.5-1,743.7) 20.2% (5.5%-36.3%)
NVIDIA GeForce GTX 580 711.0 (588.8-1,087.6) 18.8% (9.2%-32.5%)

As this is 'Windows', it includes ~25% XP systems and ~75% Vista+W7+W8.
The GT 610 and 630 are Fermi; the rest of the 600's are GK (Kepler).


Long runs on GeForce 500 series GPU's, Linux app

NVIDIA GeForce GTX 570 797.9 (712.8-966.7) 15.7% (8.5%-18.8%)
NVIDIA GeForce GTX 580 351.3 (351.3-351.3) 5.3% (5.3%-5.3%)

Long runs on GeForce 600 series GPU's, Linux app

NVIDIA GeForce GTX 650Ti 1106.2 (986.9-1324.4) 97.7% (94.5%-98.5%)
NVIDIA GeForce GTX 650TiBOOST 774.6 (769.2-780.5) 99.1% (98.8%-99.4%)
NVIDIA GeForce GTX 660 718.5 (651.3-874.1) 89.6% (86.1%-95.1%)
NVIDIA GeForce GTX 660Ti 587.1 (541.2-717.2) 94.9% (90.9%-99.6%)
NVIDIA GeForce GTX 670 533.9 (494.6-639.1) 99.4% (98.7%-99.7%)
NVIDIA GeForce GTX 680 471.9 (450.1-562.4) 98.7% (97.2%-99.5%)

This data will include different types of work from the Long queue. It's probably also skewed by the misreporting of GPUs (when there are two GPUs in a system, only one is reported, but it's reported twice).
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ID: 31190
Profile Carlesa25
Joined: 13 Nov 10
Posts: 328
Credit: 72,619,453
RAC: 0
Message 31218 - Posted: 4 Jul 2013, 18:17:57 UTC

Hi, good work, very interesting; congratulations.

This will help with the GPU change I'm preparing: in view of the results (if I read them correctly), on Linux the GTX 580 is the best by a landslide, and Linux is the best platform. Greetings.
ID: 31218
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 31228 - Posted: 4 Jul 2013, 21:16:58 UTC - in response to Message 31218.  
Last modified: 4 Jul 2013, 21:50:08 UTC

Going by the results below, on Linux a GTX660 is almost as fast as a GTX580 (~3% short), but costs less to buy and less to run:

http://www.gpugrid.net/hosts_user.php?sort=expavg_credit&rev=0&show_all=0&userid=25378

GTX580 v GTX660

GTX580:
I92R6-NATHAN_KIDKIXc22_2-0-41-RND9338_0 4565518 4 Jul 2013 | 4:35:14 UTC 4 Jul 2013 | 19:59:48 UTC Completed and validated 38,258.75 2,732.66 138,300.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)
I57R9-NATHAN_KIDKIXc22_2-0-41-RND6129_0 4565155 3 Jul 2013 | 21:57:40 UTC 4 Jul 2013 | 13:01:29 UTC Completed and validated 37,291.86 2,869.94 138,300.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)
I20R7-NATHAN_KIDKIXc22_1-7-41-RND9392_0 4564152 3 Jul 2013 | 11:34:28 UTC 4 Jul 2013 | 2:37:22 UTC Completed and validated 37,936.45 2,874.42 138,300.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)

GTX660:
I63R3-NATHAN_KIDKIXc22_2-0-41-RND7987_0 4565212 4 Jul 2013 | 3:43:54 UTC 4 Jul 2013 | 19:38:15 UTC Completed and validated 39,537.31 39,500.98 138,300.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)
I38R7-NATHAN_KIDKIXc22_2-0-41-RND4496_0 4564951 3 Jul 2013 | 23:05:39 UTC 4 Jul 2013 | 15:02:38 UTC Completed and validated 39,331.53 39,307.00 138,300.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)
I21R5-NATHAN_KIDKIXc22_2-0-41-RND8033_0 4564771 3 Jul 2013 | 16:44:48 UTC 4 Jul 2013 | 8:38:10 UTC Completed and validated 39,480.50 39,443.40 138,300.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)


The WUProp results below are probably misleading (contain some oddities),
Long runs on GeForce 500 series GPU's, Linux app

NVIDIA GeForce GTX 570 797.9 (712.8-966.7) 15.7% (8.5%-18.8%)
NVIDIA GeForce GTX 580 351.3 (351.3-351.3) 5.3% (5.3%-5.3%)
ID: 31228
Profile Carlesa25
Joined: 13 Nov 10
Posts: 328
Credit: 72,619,453
RAC: 0
Message 31231 - Posted: 4 Jul 2013, 21:42:03 UTC - in response to Message 31228.  
Last modified: 4 Jul 2013, 21:52:21 UTC

Hello: I'm sorry, but I don't understand this latest data; I think it has been changed. The GTX 580 figures don't add up.

On the other hand, I had never considered the much higher CPU usage of the GTX 600 series against the nearly zero usage of the GTX 500.

Edit: Now I see that it does add up, thanks.
ID: 31231
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 31233 - Posted: 4 Jul 2013, 21:54:20 UTC - in response to Message 31231.  

Yeah, I accidentally listed results from a 650TiBoost on Linux (16% slower) that I was using as a cross-check, just in case there was any issue with the results below (as two cards were in use).

I've cleared the list up now, thanks.
ID: 31233
Profile Carlesa25
Joined: 13 Nov 10
Posts: 328
Credit: 72,619,453
RAC: 0
Message 31234 - Posted: 4 Jul 2013, 22:02:07 UTC - in response to Message 31233.  

Hello: I want to stress the high CPU usage (90-100%) of the GTX600 cards against the 5-30% CPU usage of the GTX500.

The extra wear and energy consumption may cancel out the energy-efficiency advantage of the 600 series.
ID: 31234
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 31236 - Posted: 4 Jul 2013, 22:22:28 UTC - in response to Message 31234.  

You are certainly correct that a GK600 card will use a full CPU core/thread while a GF500 will only use ~15% of one; however, the power difference does not cancel itself out.

I've measured this: GF400 and GF500 cards use ~75% of their TDP at GPUGrid, while the GK600 cards use ~90%.

So a GTX580 will use ~244W x 0.75 = 183W and a GTX660 will use 140W x 0.9 = 126W; the difference is 57W. The difference for my i7-3770K between using 6 CPU threads and 7 is ~4W to 7W (depending on the app). So by running a GTX660 you save at least 50W.

I'm not belittling the loss of a CPU thread (though it's actually ~85% of a thread, and perhaps mostly polling), as I know it can be used to crunch CPU projects; however, on multicore processors, the more of the CPU you use, the smaller the gain in CPU performance (diminishing returns).
ID: 31236
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 31269 - Posted: 5 Jul 2013, 21:01:13 UTC

SK is right here: especially with HT you're not losing a full core. Depending on the project, that additional logical core would only have increased CPU throughput by 15-30% of a physical core; hence the low additional power cost.

MrS
Scanning for our furry friends since Jan 2002
ID: 31269
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 32490 - Posted: 29 Aug 2013, 18:40:33 UTC - in response to Message 32479.  

So, for potential buyers seeking advice, is it already possible to make a side-by-side comparison between GK104's and GK110's in points per watt?

A Titan is roughly 50% faster than a GTX680.

GTX680 has a TDP of 195W and the Titan's TDP is 250W. That said you would really need people measuring their power usage and posting it along with run times and clocks.
ID: 32490
Trotador

Joined: 25 Mar 12
Posts: 103
Credit: 14,948,929,771
RAC: 10
Message 32497 - Posted: 29 Aug 2013, 19:14:46 UTC - in response to Message 32490.  


A Titan is roughly 50% faster than a GTX680.

GTX680 has a TDP of 195W and the Titan's TDP is 250W. That said you would really need people measuring their power usage and posting it along with run times and clocks.


A GTX690 has a TDP of 300W and it provides around 100% more work than a GTX680. So it would have better point per watt ratio than a Titan, am I correct?
ID: 32497
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 32499 - Posted: 29 Aug 2013, 19:44:03 UTC - in response to Message 32497.  
Last modified: 29 Aug 2013, 19:57:02 UTC

A GTX690 does around 80% more work than a GTX680; each GTX690 core is about 90% as fast as a GTX680.
Assuming it draws 300W (or the same proportion of TDP as the other GK104's and the GK110 GPUs), and a GTX Titan draws 250W, it's reasonably accurate to say that a GTX690 is overall equally as efficient as a Titan (for crunching here, in terms of performance per watt).

GTX680 - 100% for 195W (0.51)
Titan - 150% for 250W (0.60)
GTX690 - 180% for 300W (0.60)

If a GTX680 used 195W to do a certain amount of crunching, the Titan or the GTX690 would do the same amount of work for ~167W.

Both the Titan and the GTX690 are about 17% more efficient than the GTX680.
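Those ratios can be recomputed in a few lines (the relative-performance figures are this post's estimates, not independent benchmarks):

```python
# Performance-per-watt comparison from the post above.
# Relative performance is normalized to the GTX680; TDPs are rated values.

cards = {                      # name: (relative perf, TDP in watts)
    "GTX680": (1.00, 195),
    "Titan":  (1.50, 250),
    "GTX690": (1.80, 300),
}

eff = {name: perf / tdp for name, (perf, tdp) in cards.items()}
baseline = eff["GTX680"]
for name, e in eff.items():
    print(name, round(e, 4), f"{e / baseline - 1:+.0%} vs GTX680")
```

With these inputs the Titan and GTX690 come out identical in work per watt, both roughly 17% ahead of the GTX680.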

To bring in the purchase factor,
In the UK a GTX690 costs the same as a Titan (~£770) but would do 20% more work. A GTX780, on the other hand, costs ~£500 but might bring around 90% of the Titan's performance (someone would need to confirm this).
ID: 32499
Profile MJH

Joined: 12 Nov 07
Posts: 696
Credit: 27,266,655
RAC: 0
Message 32500 - Posted: 29 Aug 2013, 19:51:59 UTC - in response to Message 32497.  

I've measured our 4xTitan E5 systems at about 1000W (4.5A RMS @ 220VAC) under ACEMD load. (GPUs generally don't hit max power when running ACEMD.)

MJH
ID: 32500
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 32506 - Posted: 29 Aug 2013, 20:14:33 UTC - in response to Message 32500.  
Last modified: 29 Aug 2013, 20:15:23 UTC

The GK104 GPUs tend to operate at around 90% of their TDP, but there is variation; my GTX660 is presently at 97% of TDP and my GTX660Ti at 88% (both running NOELIA betas).

Taking an estimated 100W off for the other components (CPU, motherboard, HDD and RAM), that's 900W at the wall. So with a 90% PSU, each GPU is drawing just over 200W, more like 80% of its TDP. That's definitely a bit less than the GK104's and closer to what the Fermis use (~75% of TDP).

Anyway, the benefits of the Titan are FP64, faster single-GPU performance and better cooling (as in, you could realistically use 4 in one system).
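As a sanity check, the breakdown above can be redone in a few lines (the 100W platform draw and 90% PSU efficiency are this post's estimates, not measurements):

```python
# Break down MJH's ~1000 W wall measurement for a 4x Titan system,
# using the estimates from the post above.

wall_watts = 1000          # ~4.5 A RMS at 220 VAC, as measured
platform_watts = 100       # CPU, motherboard, HDD, RAM (estimate)
psu_efficiency = 0.90      # assumed PSU efficiency

gpu_dc_watts = (wall_watts - platform_watts) * psu_efficiency
per_gpu = gpu_dc_watts / 4          # four Titans in the box
tdp_fraction = per_gpu / 250        # Titan's rated TDP is 250 W

print(round(per_gpu, 1), round(tdp_fraction, 2))  # 202.5 0.81
```

That lands each Titan at roughly 80% of its rated TDP under ACEMD, consistent with the text above.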
ID: 32506
flashawk

Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Message 32511 - Posted: 29 Aug 2013, 22:33:50 UTC

On my rigs, 2 GTX680's with an FX-8350 with all 8 cores at 100% pull 630 watts water-cooled. When they were on air, they used 640 watts.

When I'm using only 2 cores to feed the GPUs, the computer uses 521 watts on water and 530 watts on air.
ID: 32511
nanoprobe

Joined: 26 Feb 12
Posts: 184
Credit: 222,376,233
RAC: 0
Message 32514 - Posted: 29 Aug 2013, 23:27:47 UTC - in response to Message 32511.  
Last modified: 29 Aug 2013, 23:28:21 UTC

On my rigs, 2 GTX680's with an FX-8350 with all 8 cores at 100% pulls 630 watts water cooled. When they were on air, they were using 640 watts.

When I'm using only 2 cores to feed the GPU's the computer used 521 on water and 530 watts on air

That seems like a lot of power. Is it the 680s or the 8350 that makes it that high? Just curious.
ID: 32514

©2025 Universitat Pompeu Fabra