NVidia GPU Card comparisons in GFLOPS peak

jlhal

Joined: 1 Mar 10
Posts: 147
Credit: 1,077,535,540
RAC: 0
Message 33204 - Posted: 25 Sep 2013, 13:32:15 UTC - in response to Message 33198.  

Here is your answer
...

Thanks, I saw this already. Just wondered why it did not appear in skgiven's list, but I presume that's because he does not count dual-GPU cards as a whole...

Lubuntu 16.04.1 LTS x64
dskagcommunity

Joined: 28 Apr 11
Posts: 463
Credit: 961,266,958
RAC: 283,695
Message 33205 - Posted: 25 Sep 2013, 15:11:49 UTC

OK :)
DSKAG Austria Research Team: http://www.research.dskag.at



opr

Joined: 24 May 11
Posts: 7
Credit: 93,272,937
RAC: 0
Message 33606 - Posted: 25 Oct 2013, 1:12:51 UTC

Hello folks, this is a question I posted to a moderator a few days ago, and he has already answered me, thanks, but anyway:
I'm running the application Long runs (8-12 hours on fastest card) 8.14 (cuda55). The workunit name is 29x17-SANTI RAP74wtCUBIC-19-34-RND6468_0. BOINC Manager says it will take 92 hours, and that seems to be correct. I don't know much about these things. I have Windows 7 Pro 64-bit and a GeForce GT 430 (96 CUDA cores, it says somewhere), but the computer itself is some years old: a Pentium 4 (at least some sort of virtual dual-processor) at 3.4 GHz, with 3 GB RAM. The motherboard is an ASUS P5GD2-X, with the BIOS updated to 0601.

So is it OK that it takes this long? I thought these GPUGrid tasks used to be faster. I reinstalled BOINC Manager, carefully not in service or protected-application mode. Could it be that this "old iron" works better with older graphics card drivers, for example? BOINC seems to detect the graphics card correctly, anyway. So are there any good tricks to get "full speed", or is this just normal?

PS. The workunit has now completed, and it did take that long. But for those of you with the same thoughts: try Einstein@Home for a few GPU workunits, because at least on my old computer with its little graphics card, those WUs went through fast, despite quite a big estimated GFLOPS size.
(There seems to be a lot of good info on this forum; thanks, everyone.)

Regards, opr.
kingcarcas

Joined: 27 Oct 09
Posts: 18
Credit: 378,626,631
RAC: 0
Message 33617 - Posted: 25 Oct 2013, 18:43:28 UTC

Yep, I still have a couple of those and will be upgrading soon. Set that computer to only run Short runs (about 30 hrs); 96 cores ain't much when you see a GeForce Titan with 2688 (and a 780 Ti on the way!).
You can run Linux to get better results, though I don't know if you'd bother with that.
Midimaniac

Joined: 7 Jun 13
Posts: 16
Credit: 41,089,625
RAC: 0
Message 33658 - Posted: 28 Oct 2013, 23:16:17 UTC - in response to Message 33617.  

I am combining 2 questions that I have into one here:

kingcarcas,

Are you saying that Linux is better for number crunching than, say, Win 7 64-bit? I am thinking about building a dedicated BOINC machine. Running Ubuntu would save me $200 on a Windows OS.

Also, anyone: how would the performance of a single GTX 780 compare to that of two 760s? An EVGA 780 w/ ACX cooling can be had at Newegg for $500. Two EVGA GTX 760s w/ ACX cooling can be had for exactly the same $500. The number of CUDA cores is identical: 1152 (×2) for the 760s, and 2304 for the 780.

Clock speeds of the two models are within 10% of each other, but the 780 has a 384-bit memory bus, whereas the 760s have a 256-bit bus. So would a single 780 be much faster overall? Only running one GPU would almost certainly be easier.
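A back-of-the-envelope peak-GFLOPS check puts rough numbers on that (single precision, 2 FLOPs per CUDA core per clock; the boost clocks below are assumed reference values, so treat the output as approximate):

    # Peak single-precision throughput: 2 FLOPs (one FMA) per CUDA core per clock.
    # Assumed reference boost clocks: GTX 780 ~900 MHz, GTX 760 ~1033 MHz.
    def peak_gflops(cores, clock_mhz):
        return 2 * cores * clock_mhz / 1000.0

    print(peak_gflops(2304, 900))       # GTX 780:    ~4147 GFLOPS
    print(2 * peak_gflops(1152, 1033))  # 2x GTX 760: ~4760 GFLOPS

On paper the pair of 760s comes out slightly ahead, and skgiven's table below points the same way (2 × 53% vs 90% of a Titan), provided both cards are kept busy.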
matlock

Joined: 12 Dec 11
Posts: 34
Credit: 86,423,547
RAC: 0
Message 33659 - Posted: 29 Oct 2013, 2:05:45 UTC - in response to Message 33658.  

Are you saying that Linux is better for number crunching than, say, Win 7 64-bit? I am thinking about building a dedicated BOINC machine. Running Ubuntu would save me $200 on a Windows OS.


Yes, Linux is at least 10% faster than Windows 7. Look at my setup, as I have almost a 300k RAC with just a GTX 660. If it's a dedicated machine, there should be no hesitation to use Linux.

Also, anyone: how would the performance of a single GTX 780 compare to that of two 760s? An EVGA 780 w/ ACX cooling can be had at Newegg for $500. Two EVGA GTX 760s w/ ACX cooling can be had for exactly the same $500. The number of CUDA cores is identical: 1152 (×2) for the 760s, and 2304 for the 780.


Others here can tell you about the performance difference, but another thing to consider is the power consumption and running cost of two GPUs vs one: 340 W (2 × 170 W) vs 250 W.
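The running-cost difference is easy to put a number on (a minimal sketch; the $0.12/kWh electricity price is an assumption, substitute your own rate):

    # Yearly cost of crunching 24/7: two 760s (~340 W) vs one 780 (~250 W).
    PRICE_PER_KWH = 0.12    # assumed electricity price in USD
    HOURS_PER_YEAR = 24 * 365

    def yearly_cost(watts):
        return watts / 1000.0 * HOURS_PER_YEAR * PRICE_PER_KWH

    print(yearly_cost(340) - yearly_cost(250))  # ~$95/year extra for the two-card setup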
Midimaniac

Joined: 7 Jun 13
Posts: 16
Credit: 41,089,625
RAC: 0
Message 33661 - Posted: 29 Oct 2013, 2:29:17 UTC - in response to Message 33659.  

Thanks for that very helpful info, matlock. I will download Ubuntu and install it on a spare drive so I can get familiar with it. As for the GPUs, I hadn't thought to consider the power factor. I'm going to guess that to get the most work done, the smarter option would be one 780 instead of two 760s, and it does have the faster 384-bit memory bus. But I am most certainly not an expert in these matters, only a quick learner. I would appreciate any thoughts from others regarding two 760s vs one 780 for GPUGrid work.
matlock

Joined: 12 Dec 11
Posts: 34
Credit: 86,423,547
RAC: 0
Message 33662 - Posted: 29 Oct 2013, 4:30:19 UTC - in response to Message 33661.  

Let me know if you need any help with Ubuntu or another distribution. I recommend Lubuntu, though. It's the same as Ubuntu but doesn't have all the flash and bloat of GNOME 3, as it comes with LXDE, a very simple and lightweight desktop environment. If you need a few more features, go for the MATE desktop (a GNOME 2 fork), which is easy to install on top of Lubuntu. Everything you need should be available through the Synaptic package manager, which is a graphical front-end to apt.

A single 780 would also allow you to add another card later.

I haven't had a failed task for quite a while, but what timing. When I posted I had a RAC of 296k and it just dropped to 282k. Oh well, it will climb back.
Midimaniac

Joined: 7 Jun 13
Posts: 16
Credit: 41,089,625
RAC: 0
Message 33663 - Posted: 29 Oct 2013, 8:11:19 UTC - in response to Message 33662.  

Matlock, I will remember to ask you if I need assistance. Buying the hardware is some months off, but I think I'll put that Lubuntu you mentioned on a spare drive I have, boot into it, and play around with it to see how I like it.

Right now I am using an i7-3770K, OC'd for the winter to a very stable 4.5 GHz with Vcore at 1.256 V. It puts out 9-10 °C more heat that way than at my summer speed of 4.3 GHz at 1.120 V! I have 32 GB of RAM, and I'm thinking I could steal 16 GB of that for the new machine, because Windows Task Manager always says I have 24 GB available. For the GPU I'm using an EVGA Superclocked GTX 770 with the ACX cooler.
matlock

Joined: 12 Dec 11
Posts: 34
Credit: 86,423,547
RAC: 0
Message 33916 - Posted: 17 Nov 2013, 6:57:44 UTC - in response to Message 33663.  

I have a new recommendation for new Linux users, and that is Linux Mint. I had a chance to use it this weekend and I'm impressed. It puts more polish on Ubuntu.

Try Mint with MATE: http://www.linuxmint.com/edition.php?id=134
klepel

Joined: 23 Dec 09
Posts: 189
Credit: 4,798,881,008
RAC: 0
Message 34151 - Posted: 8 Dec 2013, 4:21:12 UTC

Hi all,
I am trying to install a BOINC version which recognizes CUDA on Puppy Linux 5.7 on a USB stick (4 GB). I do not have much experience with Linux at all, but I got BOINC running with the boinc-6.2.15.pet from the user "Wulf-Pup". It works great on CPU WUs; however, it does not see my CUDA card (a GTX 8400 as a test). So I was wondering if anybody has experience with Puppy Linux and BOINC.

I tried Dotsch/UX as well: it worked, but Climateprediction filled the USB stick (8 GB), got stuck, and I was never able to format the stick again. The boot process was also quite long and error-prone.

I know it works with Ubuntu, but Ubuntu installed on a USB stick does not convince me.

My final goal is to install BOINC on a sleek Linux system on a 16 GB USB 3.0 stick, so I could drive a high-end graphics card from Linux instead of my W7 environment on the same computer.
Thanks.
Dagorath

Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 34152 - Posted: 8 Dec 2013, 9:43:36 UTC - in response to Message 34151.  

Try HOW TO - Install GPUGRID on a USB stick.

I installed Xubuntu (Ubuntu with a much lighter desktop) on a USB 2.0 stick using Unetbootin. It worked but using the USB stick as a substitute for an HDD gave very slow disk reads/writes and slow boots. USB 3.0 is faster than 2.0 and might be adequate but I would be prepared to configure to use some HDD space for permanent storage. But if you're going to do that, then it seems to me you may as well install Linux in a dual-boot configuration alongside Windows.

If you really must run Windows on that machine along with Linux, and you have a Windows install disk, then the best configuration for crunching GPUGrid on Linux is to erase Windows from the disk, install Linux, install VirtualBox (free) on Linux, and then install Windows in a VirtualBox virtual machine. That way you can have Windows and Linux running simultaneously rather than having to boot back and forth between them. The virtual Windows won't run quite as fast as real Windows but, depending on the apps you intend to run, that might not be a problem.

I don't see any advantage to having a bootable USB stick for a machine that already has an HDD with enough spare room for a dual-boot setup, unless you need to boot to Linux only infrequently, perhaps.
BOINC <<--- credit whores, pedants, alien hunters
skgiven
Volunteer moderator
Volunteer tester

Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 35691 - Posted: 16 Mar 2014, 18:54:17 UTC - in response to Message 34152.  
Last modified: 17 Mar 2014, 10:15:36 UTC

Updated to add the GTX Titan Black, the GTX 780 Ti and the GTX 750 Ti.

Relative performance of high-end and mid-range reference Keplers at GPUGrid:

    114%    GTX Titan Black
    112%    GTX 780 Ti
    100%    GTX Titan
     90%    GTX 780
     77%    GTX 770
     74%    GTX 680
     59%    GTX 670
     58%    GTX 690 (each GPU)
     55%    GTX 660 Ti
     53%    GTX 760
     51%    GTX 660
     47%    GTX 750 Ti
     43%    GTX 650 Ti Boost
     33%    GTX 650 Ti



This is meant as a general guide, but it should be reasonably accurate (within a few percent). The figures are based on actual results, but they should still be read as estimates; there are potential unknowns (unexpected configurations, app-specific effects, bespoke cards, overclocks, bottlenecks...) and the table is subject to change with WU and app type. Note that lots of Kepler cards ship with non-reference clocks, so expect variations of over 10% on any given line (the GTX 660, for example, could range from 50 to 56% of a Titan).
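As a sketch of how those per-line variations arise, assuming throughput scales roughly with boost clock (the 1033 MHz reference boost for the GTX 660 is an assumption):

    # Scale a card's table figure by its actual boost clock vs the reference boost.
    def scaled_relative_perf(table_pct, actual_boost_mhz, ref_boost_mhz=1033.0):
        return table_pct * actual_boost_mhz / ref_boost_mhz

    # A factory-overclocked GTX 660 boosting at 1137 MHz:
    print(scaled_relative_perf(51, 1137))  # ~56% of a Titan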


FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
MJH

Joined: 12 Nov 07
Posts: 696
Credit: 27,266,655
RAC: 0
Message 35692 - Posted: 16 Mar 2014, 19:24:55 UTC - in response to Message 35691.  

Hi,

The 750Ti ought to come in at about 0.50.

Matt
skgiven
Volunteer moderator
Volunteer tester

Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 35696 - Posted: 16 Mar 2014, 21:03:20 UTC - in response to Message 35692.  
Last modified: 17 Mar 2014, 10:30:16 UTC

Hi Matt,
I've changed the table a couple of times, but it's based on what I can see from others, and the best comparison I can find now is as follows:
Both are Win7 systems with comparable CPUs.

GTX 750 Ti: task 818x-SANTI_MAR419cap310-29-84-RND6520_0 (WU 5288118), sent 16 Mar 2014 14:01:45 UTC, reported 16 Mar 2014 18:26:07 UTC, completed and validated; run time 14,621.79 s, CPU time 4,475.01 s, credit 18,300.00; Short runs (2-3 hours on fastest card) v8.15 (cuda60).
http://www.gpugrid.net/result.php?resultid=7938386

GTX 660: task 425x-SANTI_MAR419cap310-23-84-RND9668_0 (WU 5283371), sent 15 Mar 2014 10:36:31 UTC, reported 15 Mar 2014 17:31:57 UTC, completed and validated; run time 12,503.16 s, CPU time 2,083.05 s, credit 18,300.00; Short runs (2-3 hours on fastest card) v8.15 (cuda60).
http://www.gpugrid.net/result.php?resultid=7931978

The most recent driver (the one used here) is supposed to increase boost for the Maxwells, but I've got little to go on, and the non-reference cards range from under 1089 MHz to 1255 MHz boost (a 15% spread). I expect Alexander has a good card, based on the reported non-boost speeds. Then there are all the other performance unknowns: what else the system is doing, system spec, temperatures, and the big unknown, memory controller load.

Update!
I think I've found the issue: running Einstein iGPU WUs at the same time as NVidia GPUGrid WUs. It's known to reduce GPUGrid WU performance (though this also depends on CPU usage and other settings). So the GTX 750 Ti performance I've included is probably less than what it could be.

Update 2
GTX 750 Ti (no Einstein iGPU app): task 401x-SANTI_MAR423cap310-29-84-RND5514_1 (WU 5289992), sent 17 Mar 2014 6:14:56 UTC, reported 17 Mar 2014 10:04:35 UTC, completed and validated; run time 13,649.27 s, CPU time 3,335.68 s, credit 18,300.00; Short runs (2-3 hours on fastest card) v8.15 (cuda60).
http://www.gpugrid.net/result.php?resultid=7941692

Performance was 9.5% faster than on the first WU when not running the iGPU Einstein app, making the GTX 750 Ti about 9% slower than a GTX 660.
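The comparison against the 660 follows directly from the run times quoted above (a quick check):

    # Run-time ratio of the two validated short runs (lower time is better).
    t_750ti = 13649.27  # seconds, GTX 750 Ti (no Einstein iGPU app)
    t_660 = 12503.16    # seconds, GTX 660
    print((t_750ti / t_660 - 1) * 100)  # ~9.2% longer on the 750 Ti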

So,
    114%    GTX Titan Black
    112%    GTX 780 Ti
    100%    GTX Titan
     90%    GTX 780
     77%    GTX 770
     74%    GTX 680
     59%    GTX 670
     58%    GTX 690 (each GPU)
     55%    GTX 660 Ti
     53%    GTX 760
     51%    GTX 660
     47%    GTX 750 Ti
     43%    GTX 650 Ti Boost
     33%    GTX 650 Ti


FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
Beyond

Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 36388 - Posted: 18 Apr 2014, 15:19:34 UTC

So in short: does the 750 Ti currently have the highest ratio of performance to energy used?
GPUGRID Role account

Joined: 15 Feb 07
Posts: 134
Credit: 1,349,535,983
RAC: 0
Message 36390 - Posted: 18 Apr 2014, 15:35:16 UTC - in response to Message 36388.  
Last modified: 18 Apr 2014, 15:41:51 UTC

So in short: does the 750 Ti currently have the highest ratio of performance to energy used?


It's a wash between 750s and 780s when you factor in the cost of the host systems needed to support an equal amount of GPU compute capacity. The 750 configuration has a slightly higher capital cost, which is about balanced by its reduced operational cost.

Matt
Jim1348

Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Message 36393 - Posted: 18 Apr 2014, 15:54:44 UTC - in response to Message 36390.  
Last modified: 18 Apr 2014, 15:56:38 UTC

So in short: does the 750 Ti currently have the highest ratio of performance to energy used?


My GTX 750 Ti draws about 52 watts on the Shorts, while my GTX 660 pulls its full TDP of 140 watts (both as measured by GPU-Z). And the times on the 750 Ti are only a little longer: a Nathan short typically takes 2 hours 54 minutes on the 750 Ti versus 2 hours 37 minutes on the 660, about as skgiven noted above. (The 660 is clocked at 1000 MHz base, running at 1136 MHz; the 750 Ti is 1072/1215 MHz.)
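Per task, that works out as follows (a quick sketch using the draws and times above):

    # Energy per Nathan short task, from the measured draws and run times.
    def kwh(watts, hours):
        return watts * hours / 1000.0

    print(kwh(52, 2 + 54 / 60.0))   # GTX 750 Ti: ~0.151 kWh per task
    print(kwh(140, 2 + 37 / 60.0))  # GTX 660:    ~0.366 kWh per task

So the 750 Ti gets through much the same task for roughly 40% of the energy.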

In short, it is a great card, just in time for the summer.
skgiven
Volunteer moderator
Volunteer tester

Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 36428 - Posted: 19 Apr 2014, 11:49:14 UTC - in response to Message 36393.  
Last modified: 19 Apr 2014, 12:40:10 UTC

It's a pity Maxwells didn't also arrive in larger guises. Cards as capable as the GTX 770 or GTX 780 Ti but based on the Maxwell architecture, using the existing, mature 28nm process, would certainly be winners. There is definitely plenty of scope for mid-range cards too. Perhaps NVidia missed a trick here? Maybe the predicted GTX 750 and GTX 750 Ti performances were not so high, or maybe NVidia didn't want to interfere with the existing production of, or plans for, the GTX 780 Ti, Titan Black and Titan Z by releasing mid- to high-end 28nm Maxwell cards. If so, that would amount to withholding technology for 'business' reasons. Hopefully there is still time for more 28nm Maxwells; something that performs to the standard of a GTX 770 or 780 but uses much less power would go down well over the summer...

Rumour has it that the 20nm Maxwells might not be ready this year, and even if 20nm cards do arrive in the 3rd quarter, availability could be a problem (as seen before) and the inevitable expense will put many off; until at least this time next year they are likely to be quite expensive. Unless you are interested in low- to mid-end cards (GTX 750 Ti), now isn't a great time to buy, especially if your electricity costs are high.

The release of the 28nm Maxwells and the continued release of high-end 28nm Keplers suggest NVidia is still adapting its strategy for releasing cards. Way back in the days of the GF200 cards, a flagship GTX 280 was released along with a slightly lesser version (the 260). These were then added to, and revised models were released under the same series (55nm versions of the GTX 285, GTX 275 and GTX 260). This model continued with the release of the GTX 480 and GTX 470, and changed slightly with modified architectures on a refined process (the GTX 580, still on 40nm). The GF600s were likewise reincarnated (as the GF700s) and their designs tweaked, in the same way the GF500s were an upgrade of the GF400s, but it's been more drawn out, and the GF700 range has, or is getting, more high-end cards to choose from.
The 28nm Maxwells completely reverse the original model: a new architecture on the same 28nm process, and instead of a flagship release, a low- to mid-end release (which should keep OEMs happy).

If a high-end 20nm Maxwell really does only make it out late this year, that's not great for us. It will be interesting to see whether smaller 20nm Maxwells reach the market earlier than the high-end 'flagship' cards, but I'm hoping for something now, and a few more 28nm Maxwell production runs might be feasible.
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
Jim1348

Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Message 36431 - Posted: 19 Apr 2014, 12:42:13 UTC - in response to Message 36428.  

It's a pity Maxwells didn't also arrive in larger guises. Cards as capable as the GTX 770 or GTX 780 Ti but based on the Maxwell architecture, using the existing, mature 28nm process, would certainly be winners. There is definitely plenty of scope for mid-range cards too. Perhaps NVidia missed a trick here?

Exactly, except that I don't think they are missing a trick as much as keeping their options open. If the 20nm process does not arrive soon enough (whatever that means), then I have no doubt that we will see more Maxwells at 28nm. The GTX 750 Ti was just testing the waters.