Message boards : Graphics cards (GPUs) : NVidia GPU Card comparisons in GFLOPS peak
Joined: 1 Mar 10 | Posts: 147 | Credit: 1,077,535,540 | RAC: 0
> Here is your answer

Thanks, I saw this already. Just wondered why it did not appear in skgiven's list, but I presume this is because he does not consider a bi-GPU card as a whole...

Lubuntu 16.04.1 LTS x64
Joined: 24 May 11 | Posts: 7 | Credit: 93,272,937 | RAC: 0
Hello folks, this is a question I posted a few days ago to a moderator, and he has already answered me, thanks, but anyway: I'm running the application Long runs (8-12 hours on fastest card) 8.14 (cuda55). The workunit name is 29x17-SANTI_RAP74wtCUBIC-19-34-RND6468_0. BOINC Manager says that it takes 92 hrs, and that seems to be correct. I don't know much about these things. I have Windows 7 Pro 64 and a GeForce GT 430 (96 CUDA cores, it says somewhere), but the computer itself is some years old: a Pentium 4 (some sort of at least virtual dual-processor) at 3.4 GHz, with 3 GB RAM. The motherboard is an Asus P5GD2-X, with the BIOS updated to 0601. So is it OK that it takes so long? I think these GPUGrid things were faster at some point. I reinstalled BOINC Manager, carefully not in service or protected application mode. Could it be that this old iron works better with older graphics card drivers, for example? BOINC seems to detect the graphics card correctly anyway. So are there any good tricks to go full speed, or is this just OK?

PS: The workunit is now completed and it did take that much time. But those of you who have the same thoughts, try Einstein@home for a few GPU workunits, because at least with my old computer and little GPU card, those WUs went through fast, with quite a big estimated GFLOPs size. (There seems to be a lot of good info on this forum, thanks everyone.)

Regards, opr
Joined: 27 Oct 09 | Posts: 18 | Credit: 378,626,631 | RAC: 0
Yep, I have a couple of those still and will be upgrading soon. Set it to only run Short runs on that computer (30 hrs). 96 cores ain't much when you see a GeForce Titan with 2688 (and a 780Ti on the way!). You can run Linux to get better results, though I don't know if you'd bother with that.
Joined: 7 Jun 13 | Posts: 16 | Credit: 41,089,625 | RAC: 0
I am combining two questions that I have into one here:

kingcarcas, are you saying that Linux is better for number crunching than, say, Win 7 64-bit? I am thinking about building a dedicated BOINC machine, and running Ubuntu would save me $200 on a Windows OS.

Also, anyone: what would the performance of a single GTX 780 be compared to two 760s? An EVGA 780 w/ ACX cooling can be had at Newegg for $500. Two EVGA GTX 760s w/ ACX cooling can be had for the exact same price of $500. The number of CUDA cores is identical: 1152 (x2) for the 760s, and 2304 for the 780. Clock speeds of the two models are within 10% of each other, but the 780 uses a 384-bit memory bus, whereas the 760s have a 256-bit bus. So would a single 780 be much faster overall? Only running one GPU would almost certainly be easier.
Joined: 12 Dec 11 | Posts: 34 | Credit: 86,423,547 | RAC: 0
> Are you saying that Linux is better for number crunching than say, WIN 7-64? I am thinking about building a dedicated BOINC machine. Running Ubuntu would save me $200 on a Windows OS.

Yes, Linux is at least 10% faster than Windows 7. Look at my setup: I have almost a 300k RAC with just a GTX 660. If it's a dedicated machine, there should be no hesitation to use Linux.

> Also, anyone, what would the performance of a single GTX780 be compared to performance of two 760's? An EVGA 780 w/ ACX cooling can be had at Newegg for $500. Two EVGA GTX 760's w/ ACX cooling can be had for the exact same price of $500. The number of CUDA cores is identical: 1152 (x2) for the 760's, and 2304 for the 780.

Others here can tell you about the performance difference, but another thing to consider is the power consumption and running cost of two GPUs vs one: 340 W (170 W x 2) vs 250 W.
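Since the thread compares cards by GFLOPS peak, the 780 vs 2x760 trade-off can be put in rough numbers. A minimal Python sketch (editor's illustration, not from the posts): peak single-precision throughput is CUDA cores x 2 ops per cycle (FMA) x clock. The clock values used here are approximate reference boost clocks and are an assumption; retail cards vary.

```python
# Rough single-precision peak: cores x 2 ops/cycle (FMA) x clock in GHz.
# The clock figures below are approximate reference boost clocks (assumption).
def peak_gflops(cuda_cores, clock_ghz):
    return cuda_cores * 2 * clock_ghz

one_780 = peak_gflops(2304, 0.900)      # single GTX 780
two_760 = 2 * peak_gflops(1152, 1.033)  # pair of GTX 760s
print(f"GTX 780:    {one_780:.0f} GFLOPS peak")
print(f"2x GTX 760: {two_760:.0f} GFLOPS peak")
```

Peak numbers slightly favour the pair, but they ignore the 780's wider 384-bit memory bus, the overhead of feeding two cards, and the power difference noted above, so real GPUGrid throughput can come out differently.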
Joined: 7 Jun 13 | Posts: 16 | Credit: 41,089,625 | RAC: 0
Thanks for that very helpful info, matlock. I will download Ubuntu and install it on a spare drive so I can get familiar with it.

As for the GPUs, I hadn't thought to consider the power factor. I'm going to guess that to get the most work done, the smarter option would be one 780 instead of two 760s. And it does have the faster 384-bit memory bus. But I am most certainly not an expert in these matters, only a quick learner. I would appreciate any thoughts from others regarding two 760s or one 780 for GPUGrid work.
Joined: 12 Dec 11 | Posts: 34 | Credit: 86,423,547 | RAC: 0
Let me know if you need any help with Ubuntu or another distribution. I recommend Lubuntu, though. It's the same as Ubuntu but doesn't have all the flash and bloat of GNOME 3, as it comes with LXDE, a very simple and lightweight desktop environment. If you need a few more features, go for the MATE desktop (a GNOME 2 fork), which is easy to install on top of Lubuntu. Everything you need should be available through the Synaptic package manager, which is a graphical front-end to apt.

A single 780 would also allow you to add another card later.

I hadn't had a failed task for quite a while, but what timing: when I posted I had a RAC of 296k, and it just dropped to 282k. Oh well, it will climb back.
Joined: 7 Jun 13 | Posts: 16 | Credit: 41,089,625 | RAC: 0
Matlock, I will remember to ask you if I need assistance. Buying the hardware is some months off, but I think I'll install that Lubuntu you mentioned onto a spare drive I have, boot into it, play around with it, and see how I like it.

Right now I am using an i7-3770K, overclocked for the winter to a very stable 4.5 GHz with Vcore at 1.256 V. It puts out 9-10 deg C more heat that way, as opposed to my summer speed of 4.3 GHz at a Vcore of 1.120 V! I have 32 GB of RAM, and I'm thinking I could steal 16 GB of that for the new machine, because Windows Task Manager always says I have 24 GB available. For the GPU I'm using an EVGA Superclocked GTX 770 with the ACX cooler.
Joined: 12 Dec 11 | Posts: 34 | Credit: 86,423,547 | RAC: 0
I have a new recommendation for new Linux users: Linux Mint. I had a chance to use it this weekend and I'm impressed; it puts more polish on Ubuntu. Try Mint with MATE: http://www.linuxmint.com/edition.php?id=134
Joined: 23 Dec 09 | Posts: 189 | Credit: 4,798,881,008 | RAC: 0
Hi all, I am trying to install a BOINC version which recognizes CUDA on Puppy Linux 5.7 on a USB stick (4 GB). I do not have much experience with Linux at all; however, I got BOINC running with the boinc-6.2.15.pet from the user "Wulf-Pup". It works great on CPU WUs, but it does not see my CUDA card (a GTX 8400, as a test). So I was wondering if anybody has experience with Puppy Linux and BOINC.

I tried Dotsch/UX as well: it worked, but Climateprediction filled the USB stick (8 GB), got stuck, and I was never able to format the stick again. The start process was also quite long and error-prone. I know it works with Ubuntu, but Ubuntu installed on a USB stick does not convince me.

My final goal is to install BOINC on a sleek Linux on a 16 GB USB 3.0 stick, so I would be able to run a high-end graphics card with Linux instead of my W7 environment on the same computer. Thanks.
Joined: 16 Mar 11 | Posts: 509 | Credit: 179,005,236 | RAC: 0
Try HOW TO - Install GPUGRID on a USB stick.

I installed Xubuntu (Ubuntu with a much lighter desktop) on a USB 2.0 stick using UNetbootin. It worked, but using the USB stick as a substitute for an HDD gave very slow disk reads/writes and slow boots. USB 3.0 is faster than 2.0 and might be adequate, but I would be prepared to configure some HDD space for permanent storage. And if you're going to do that, then it seems to me you may as well install Linux in a dual-boot configuration alongside Windows.

If you really must run Windows on that machine along with Linux, and if you have a Windows install disk, then the best configuration for crunching GPUGrid on Linux is to erase Windows from the disk, install Linux, install VirtualBox (free) on Linux, then install Windows in a VirtualBox virtual machine. That way you can have Windows and Linux running simultaneously rather than having to boot back and forth between them. The virtual Windows won't run quite as fast as real Windows but, depending on the apps you intend to run, that might not be a problem.

I don't see any advantage to a bootable USB stick on a machine that already has an HDD with enough spare room for a dual-boot setup, unless you need to boot to Linux only infrequently, perhaps.

BOINC <<--- credit whores, pedants, alien hunters
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Update to add the GTX Titan Black, the GTX 780 Ti and the 750 Ti.

Relative performances of high-end and mid-range reference Keplers at GPUGrid:

114% GTX Titan Black
112% GTX 780Ti
100% GTX Titan
90% GTX 780
77% GTX 770
74% GTX 680
59% GTX 670
58% GTX 690 (each GPU)
55% GTX 660Ti
53% GTX 760
51% GTX 660
47% GTX 750Ti
43% GTX 650TiBoost
33% GTX 650Ti
FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
MJH | Joined: 12 Nov 07 | Posts: 696 | Credit: 27,266,655 | RAC: 0
Hi,

The 750Ti ought to come in at about 0.50.

Matt
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Hi Matt, I've changed the table a couple of times, but it's based on what I can see from others, and the best comparison I can find now is as follows (both Win7 systems with comparable CPUs):

GTX750Ti: 818x-SANTI_MAR419cap310-29-84-RND6520_0, completed and validated 16 Mar 2014, run time 14,621.79 s, CPU time 4,475.01 s, credit 18,300.00, Short runs (2-3 hours on fastest card) v8.15 (cuda60)
http://www.gpugrid.net/result.php?resultid=7938386

GTX660: 425x-SANTI_MAR419cap310-23-84-RND9668_0, completed and validated 15 Mar 2014, run time 12,503.16 s, CPU time 2,083.05 s, credit 18,300.00, Short runs (2-3 hours on fastest card) v8.15 (cuda60)
http://www.gpugrid.net/result.php?resultid=7931978

The most recent driver (used here) is supposed to increase Boost for the Maxwells, but I've got little to go on, and the non-reference cards range from <1089MHz to 1255MHz Boost (15%). I expect Alexander has a good card, based on the reported non-Boost speeds. Then there are all the other performance unknowns: what else the system is doing, system spec, temps, and the big one, memory controller load.

Update: I think I've found the issue; running Einstein iGPU WUs at the same time as NVidia GPUGrid WUs is known to reduce GPUGrid WU performance (though it also depends on CPU usage and other settings). So the GTX750Ti performance I've included is probably less than what it could be.

Update 2: GTX750Ti (no Einstein iGPU app): 401x-SANTI_MAR423cap310-29-84-RND5514_1, completed and validated 17 Mar 2014, run time 13,649.27 s, CPU time 3,335.68 s, credit 18,300.00, Short runs (2-3 hours on fastest card) v8.15 (cuda60)
http://www.gpugrid.net/result.php?resultid=7941692

Performance was 9.5% faster than the first WU when not running the iGPU Einstein app, making the GTX750Ti 9% slower than a GTX660.
So:

114% GTX Titan Black
112% GTX 780Ti
100% GTX Titan
90% GTX 780
77% GTX 770
74% GTX 680
59% GTX 670
58% GTX 690 (each GPU)
55% GTX 660Ti
53% GTX 760
51% GTX 660
47% GTX 750Ti
43% GTX 650TiBoost
33% GTX 650Ti
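The percentages in the table come straight from run times on comparable work units: since a lower run time means a faster card, relative performance is the inverted ratio of elapsed seconds. A small Python sketch (editor's illustration) using the three short-run times quoted in the post above:

```python
# Relative performance from elapsed run times on comparable short WUs:
# lower run time = faster card, so the ratio is inverted.
def relative_perf(runtime_s, baseline_s):
    return baseline_s / runtime_s * 100  # percent of the baseline card

gtx660 = 12503.16         # GTX 660 short run, seconds
gtx750ti_igpu = 14621.79  # GTX 750 Ti, alongside an Einstein iGPU task
gtx750ti = 13649.27       # GTX 750 Ti, iGPU app disabled

print(f"750Ti (with iGPU app): {relative_perf(gtx750ti_igpu, gtx660):.1f}% of a GTX 660")
print(f"750Ti (no iGPU app):   {relative_perf(gtx750ti, gtx660):.1f}% of a GTX 660")
```

The second figure comes out at roughly 92% of a GTX 660, consistent with the post's conclusion that the 750Ti is about 9% slower than a GTX 660.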
Beyond | Joined: 23 Nov 08 | Posts: 1112 | Credit: 6,162,416,256 | RAC: 0
So in short: does the 750 Ti currently have the highest ratio of performance to energy used?
Joined: 15 Feb 07 | Posts: 134 | Credit: 1,349,535,983 | RAC: 0
> So in short: does the 750 Ti currently have the highest ratio of performance to energy used?

It's a wash between 750s and 780s when you factor in the cost of the host systems needed to support an equal amount of GPU compute capacity. The 750 configuration has a slightly higher capital cost, which is about balanced by its reduced operational cost.

Matt
Joined: 28 Jul 12 | Posts: 819 | Credit: 1,591,285,971 | RAC: 0
> So in short: does the 750 Ti currently have the highest ratio of performance to energy used?

My GTX 750 Ti draws about 52 watts on the Shorts, while my GTX 660 pulls the full TDP of 140 watts (both as measured by GPU-Z). And the times on the 750 Ti are only a little longer, with a Nathan short typically taking 2 hours 54 minutes on the 750 Ti and 2 hours 37 minutes on the 660, or about as skgiven noted above. (The 660 is clocked at 1000 MHz base, running at 1136 MHz, and the 750 Ti is 1072/1215 MHz.)

In short, it is a great card, just in time for the summer.
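The draw and run-time figures in this post are enough to put a rough number on energy per work unit. A minimal Python sketch (editor's illustration): average draw in watts times run time in hours gives watt-hours per WU.

```python
# Energy per short work unit: average draw (W) x run time (h).
# Figures from the post above: 52 W / 2 h 54 min vs 140 W / 2 h 37 min.
def wh_per_wu(watts, hours, minutes):
    return watts * (hours + minutes / 60)

gtx750ti_wh = wh_per_wu(52, 2, 54)   # GTX 750 Ti
gtx660_wh = wh_per_wu(140, 2, 37)    # GTX 660
print(f"GTX 750 Ti: {gtx750ti_wh:.0f} Wh per WU")
print(f"GTX 660:    {gtx660_wh:.0f} Wh per WU")
```

By this measure the 750 Ti does the same unit of work for well under half the energy, which backs up the performance-per-watt answer above.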
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
It's a pity Maxwells didn't also arrive in larger guises. Cards as capable as the GTX770 or GTX780Ti but based on the Maxwell architecture, using the existing, mature 28nm process, would certainly be a winner. There is definitely plenty of scope for mid-range cards too. Perhaps NVidia missed a trick here? Maybe the predicted GTX750 and GTX750Ti performances were not so high, or maybe NVidia didn't want to interfere with existing production or plans for the GTX780Ti, Titan Black and Titan Z by releasing mid to high-end 28nm Maxwell cards. If so, that would amount to withholding technology for 'business' reasons. Hopefully there is still time for more 28nm Maxwells; something that performs to the standard of a GTX770 or 780 but uses much less power would go down well over the summer...

Rumour has it that the 20nm Maxwells might not be ready this year, but even if 20nm cards do arrive in the third quarter, availability could be a problem (as seen before), and the inevitable expense will put many off. Until at least this time next year they are likely to be quite expensive. Unless you are interested in low to mid-end cards (GTX750Ti), now isn't a great time to buy, especially if your electricity costs are high.

The release of the 28nm Maxwells and the continued release of high-end 28nm Keplers suggests NVidia is continuing to adapt its strategy for releasing cards. Way back in the days of the GF200 cards, a flagship GTX280 was released along with a slightly lesser version (the GTX260). These were then added to, and revised models were released under the same series (55nm versions of the GTX285, GTX275 and GTX260). This pattern continued with the release of the GTX480 and GTX470, then changed slightly with a modified architecture on a refined process (the GTX580, still on 40nm).

The 600 series was also reincarnated (as the 700 series) and its designs tweaked, in the same way the 500 series was an upgrade to the 400 series, but it's been a bit more drawn out, and the 700 range has, or is getting, more high-end cards to choose from. The 28nm Maxwells completely reverse the original model: a new architecture, but the same 28nm process, and instead of a flagship release, a low to medium-end release (which should keep OEMs happy). Rumour has it that a high-end 20nm Maxwell might make it out late this year, but if that's the case it's not great for us. It will be interesting to see whether smaller 20nm Maxwells make it onto the market earlier than the high-end 'flagship' cards, but I'm hoping for something now, and a few 28nm Maxwell runs might be feasible.
Joined: 28 Jul 12 | Posts: 819 | Credit: 1,591,285,971 | RAC: 0
> It's a pity Maxwell's didn't also arrive in larger guises. Cards as capable as the GTX770 or GTX780Ti but based on the Maxwell architecture using the existing, mature 28nm process would certainly be a winner. There is definitely plenty of scope for mid range cards too. Perhaps NVidia missed a trick here?

Exactly, except that I don't think they are missing a trick as much as keeping their options open. If the 20nm process does not arrive soon enough (whatever that means), then I have no doubt that we will see more Maxwells at 28nm. The GTX 750 Ti was just testing the waters.
©2025 Universitat Pompeu Fabra