Message boards : Graphics cards (GPUs) : 780Ti vs. 770 vs. 750Ti
tomba | Joined: 21 Feb 09 | Posts: 497 | Credit: 700,690,702 | RAC: 0
I am doing an analysis of my three GPUs, comparing their credit delivery performance, one against the other.

1. All WUs completed within 24 hours. That's why the 750Ti does not appear in the Gerard numbers(!)
2. The Gerards are from 23 April, when I installed the 780Ti, and the Noelias are from 10 June.
3. Any difference in % improvement between the 'ETQunbound' and '467x' Noelias is very marginal.
4. There are fewer 780Ti and 750Ti WUs than you might expect. These two GPUs are on the same rig, and I have been sharing the Gerard loads between them so that all WUs complete inside 24 hours. None of those WUs are in the analysis.

My conclusions?

• The 780Ti is only around 33% better than the 770 for GPUGrid. A big surprise, given the price differential.
• 2x750Ti deliver more credits than one 780Ti, provided they can complete in under 24 hours; i.e., no recent Gerards! I've not done the sums, but the price difference is staggering.
• Perhaps the 'best bang for the buck' is an external device that will support many 750Ti GPUs, if Gerard can be persuaded to moderate his processing demands... Is there such a thing??
Jim | Joined: 28 Jul 12 | Posts: 819 | Credit: 1,591,285,971 | RAC: 0
> 2x750Ti deliver more credits than one 780Ti, provided they can complete in under 24 hours; i.e., no recent Gerards!

It is quite possible for the GTX 750 Tis to complete the Gerards reliably in under 24 hours, but there are a few tricks involved.

https://www.gpugrid.net/forum_thread.php?id=3986

You need to leave at least one CPU core free for each card. In fact, I run two GTX 750 Tis on each of two PCs (four cards in total). This gives me essentially two cores per card, since each card uses only about 20% of a core on my Haswell machines, leaving the rest of the core available for whichever card needs it. (This leaves the other six cores free for other BOINC work.)

Finally, you may need a moderate overclock, something I normally avoid, but these cards can take it; my ASUS GTX750TI-OC-2GD5 cards don't go above 64 °C with a 1348 MHz GPU base clock, and I have received no errors yet.

https://www.gpugrid.net/results.php?hostid=223541&offset=0&show_names=1&state=0&appid=
https://www.gpugrid.net/results.php?hostid=194224&offset=0&show_names=1&state=0&appid=
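A note on the "leave a CPU core free for each card" advice: one way to enforce it is BOINC's app_config.xml mechanism, dropped into the GPUGrid project directory. This is only a minimal sketch; the app name `acemdlong` is an assumption, so take the real name from client_state.xml.

```xml
<!-- Minimal app_config.xml sketch: budget one full CPU core per GPU task.
     The app name "acemdlong" is an assumption; copy the real name from
     client_state.xml. Reread config files or restart BOINC to apply. -->
<app_config>
  <app>
    <name>acemdlong</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage> <!-- run one task per GPU -->
      <cpu_usage>1.0</cpu_usage> <!-- reserve a full core for that task -->
    </gpu_versions>
  </app>
</app_config>
```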
tomba | Joined: 21 Feb 09 | Posts: 497 | Credit: 700,690,702 | RAC: 0
Thanks for the response, Jim. I see you are able to do that, but my problem is my Internet connection. I'm 3 km from the telephone exchange and everyone in the village is Netflixing! My best connection speed is 2 Mbps; often it's 0.5 Mbps. It can take three hours to upload a 90 MB Gerard result. Not sure why GPUGrid penalises me for having a poor connection, but that's the way it is!!
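For a sense of scale: upload time is just result size over upstream rate, and ADSL upstream is typically a small fraction of the advertised downstream, which is how a 90 MB result can take hours on a "2 meg" line. A rough sketch (the rates are illustrative assumptions, not measurements):

```python
# Rough upload-time estimate for a GPUGrid result file.
# Line rates are illustrative assumptions, not measurements.
def upload_hours(size_mb: float, upstream_mbps: float) -> float:
    """size_mb in megabytes, upstream_mbps in megabits per second."""
    return (size_mb * 8) / upstream_mbps / 3600

for mbps in (2.0, 0.5, 0.07):  # ~0.07 Mbps upstream matches the three-hour case
    print(f"{mbps:>5} Mbps -> {upload_hours(90, mbps):.1f} h")
```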
tomba | Joined: 21 Feb 09 | Posts: 497 | Credit: 700,690,702 | RAC: 0
> Not sure why GPUGrid penalises me for having a poor connection, but that's the way it is!!

Is it too much to ask that the project give credit based on WU processing time rather than sent/received time?
Retvari Zoltan | Joined: 20 Jan 09 | Posts: 2380 | Credit: 16,897,957,044 | RAC: 0
> Not sure why GPUGrid penalises me for having a poor connection, but that's the way it is!!

From the project's point of view, the reason for a task's delayed return makes no difference. How could the project tell from the processing time that your host missed the bonus deadline because of a slow internet connection, and not because the GPU was offline or crunching something else? Besides, if you know your internet connection is slow and you don't want to miss the bonus deadline, you should choose your GPU with both constraints in mind (e.g. if you can't get a faster internet connection, then you have to buy a faster GPU; a GTX960, for example).

From the project's side, a workable solution could be to assign the bonus to the host rather than to the workunit itself: if the host returned its previous workunit within 24 hours, its next workunit would earn the +50% bonus credit. In that case a host with one slow and one fast GPU, or with two almost-fast-enough GPUs, would gain the +50% bonus on all workunits. But this does not reflect the way the project works: a given simulation consists of a series of workunits, each continuing the work of the previous one, so from the project's point of view it is better for each workunit to be returned as fast as possible, so that the whole simulation finishes as fast as possible. The current bonus method therefore serves the project's goals better than your (or my) suggestion.
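For reference, the per-workunit rule being debated works roughly as sketched below. The +50%/24 h and +25%/48 h figures are quoted later in this thread; the base credit value here is just a placeholder.

```python
# Sketch of GPUGrid's per-workunit credit bonus as described in this thread:
# +50% if returned within 24 h, +25% within 48 h, no bonus otherwise.
def credit_with_bonus(base_credit: float, hours_to_return: float) -> float:
    if hours_to_return <= 24:
        return base_credit * 1.50
    if hours_to_return <= 48:
        return base_credit * 1.25
    return base_credit

# A three-hour upload can push a 22-hour crunch past the 24 h cutoff:
print(credit_with_bonus(100_000, 22))      # 150000.0
print(credit_with_bonus(100_000, 22 + 3))  # 125000.0
```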
[CSF] Thomas H.V. Dupont | Joined: 20 Jul 14 | Posts: 732 | Credit: 130,089,082 | RAC: 0
Thanks for this very interesting report, tomba, and thanks for sharing. Really appreciated :)

[CSF] Thomas H.V. Dupont
Founder of the team CRUNCHERS SANS FRONTIERES 2.0
www.crunchersansfrontieres
eXaPower | Joined: 25 Sep 13 | Posts: 293 | Credit: 1,897,601,978 | RAC: 0
> Thanks for this very interesting report, tomba, and thanks for sharing.

+1
tomba | Joined: 21 Feb 09 | Posts: 497 | Credit: 700,690,702 | RAC: 0
> Thanks for this very interesting report, tomba, and thanks for sharing.

My pleasure, Thomas! Greetings from the woods, 3 km from La Garde Freinet. Its Netflixing inhabitants are killing my Internet service!!
tomba | Joined: 21 Feb 09 | Posts: 497 | Credit: 700,690,702 | RAC: 0
> Perhaps the 'best bang for the buck' is an external device that will support many 750Ti GPUs. Is there such a thing??

No takers on this thought, but perhaps there are mobos that will take three (four?) double-width 750Ti GPUs??
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Put a GTX750Ti into a Q6600 system (DDR3). At stock, and running two CPU tasks, it took between 26 and 27 h to complete one of Gerard's long tasks; the 750Ti's GPU utilization was around 90%.

Enabled SWAN_SYNC and rebooted, reduced BOINC's CPU usage to 50% (to still run one CPU task), overclocked to 1306/1320 MHz (it bounces around), and the GPU utilization rose to 97%. GPU-Z says it's in a PCIe 1.1 x16 slot using an x4 PCIe bus. Going by the % progress, the task should now finish in under 23 h.

FAQ's | HOW TO: Opt out of Beta Tests | Ask for Help
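The "going by the % progress" projection is a straight linear extrapolation; a tiny sketch of the arithmetic (the sample numbers are illustrative, not skgiven's actual figures):

```python
# Linear ETA from a task's % progress, as used in the post above.
# Sample figures are illustrative, not measurements.
def eta_hours(elapsed_h: float, progress_pct: float) -> float:
    return elapsed_h / (progress_pct / 100.0)

# e.g. 40% done after 9 h of crunching -> 22.5 h projected total
print(eta_hours(9.0, 40.0))
```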
tomba | Joined: 21 Feb 09 | Posts: 497 | Credit: 700,690,702 | RAC: 0
Thanks for the reply, skgiven!

> Enabled SWAN_SYNC and rebooted

Did that. Task Manager now tells me that my two acemd.847-65 tasks are running at 100%.

> overclocked to 1306/1320 MHz

Now I'm in trouble... Which sliders do I use to get to 1306/1320 MHz??
eXaPower | Joined: 25 Sep 13 | Posts: 293 | Credit: 1,897,601,978 | RAC: 0
> Which sliders do I use to get to 1306/1320 MHz??

The "GPU clock offset" slider. Boost bins come in 13 MHz intervals; begin by raising one bin at a time until you reach 1306 or 1320. At your current 1.2 V / 1150 MHz clock, +156 MHz on the GPU clock offset slider corresponds to 1320 MHz, which is a total of 12 boost bins. If you stay under 80 °C, the boost clock should stay at 1320; if not, the clock will fluctuate a few bins. This is normal. Unless the EVGA program is reading the voltage incorrectly, 1.2 V at 1150 MHz might not leave a lot of headroom for an overclock.
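The bin arithmetic is easy to sanity-check; a sketch below, with illustrative numbers. Note that vendor tools can apply bins from a different baseline than the idle clock, which is likely why the quoted target is 1320 rather than 1150 + 156 = 1306.

```python
# Boost-bin arithmetic for Maxwell-era cards: effective clocks move in
# ~13 MHz bins, so an offset buys offset // 13 bins over the current clock.
# Numbers are illustrative; read the real clock from GPU-Z's Sensors tab.
BIN_MHZ = 13

def boosted_clock(current_mhz: int, offset_mhz: int) -> int:
    bins = offset_mhz // BIN_MHZ  # 156 // 13 = 12 bins
    return current_mhz + bins * BIN_MHZ

print(boosted_clock(1150, 156))  # 1306; a tool binning from a higher
                                 # baseline can land on 1320 instead
```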
tomba | Joined: 21 Feb 09 | Posts: 497 | Credit: 700,690,702 | RAC: 0
> Enabled SWAN_SYNC and rebooted.
> Did that. Task Manager now tells me that my two acemd.847-65 tasks are running at 100%.

Looks like I gain 20 mins on a Noelia and 35 mins on a Gerard. Worthwhile! Thank you.
tomba | Joined: 21 Feb 09 | Posts: 497 | Credit: 700,690,702 | RAC: 0
> Which sliders do I use to get to 1306/1320 MHz??

Thanks for the response, eXaPower! Did that. Pushed it up to 1276 for a temp around 70 °C and no additional fan noise, with ambient at 25 °C. A bit puzzled why GPU-Z says I'm only at 1147...

> At your current 1.2 V / 1150 MHz clock, +156 MHz on the GPU clock offset slider corresponds to 1320 MHz, which is a total of 12 boost bins. If you stay under 80 °C, the boost clock should stay at 1320; if not, the clock will fluctuate a few bins. This is normal. Unless the EVGA program is reading the voltage incorrectly, 1.2 V at 1150 MHz might not leave a lot of headroom for an overclock.

Not sure what you're saying here. Was I already at 1320??
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
> A bit puzzled why GPU-Z says I'm only at 1147...

1147 MHz is the clock without boost. Click GPU-Z's Sensors tab to see what it actually is.

FAQ's | HOW TO: Opt out of Beta Tests | Ask for Help
tomba | Joined: 21 Feb 09 | Posts: 497 | Credit: 700,690,702 | RAC: 0
> A bit puzzled why GPU-Z says I'm only at 1147...

Thanks skgiven! Yep, the Sensors tab shows 1276. So much to learn...
tomba | Joined: 21 Feb 09 | Posts: 497 | Credit: 700,690,702 | RAC: 0
> Enabled SWAN_SYNC

As previously reported, I did that, but... This WU just finished. The top entry, for my 750Ti, says "SWAN Device 1". Lower down, for my 780Ti, it says "SWAN Device 0". The 750Ti is the one driving video. Is this the way it is, or is there something else to do?
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
The tasks will report SWAN Device 0 or SWAN Device 1 irrespective of the SWAN_SYNC setting; AFAIK it affects both cards the same (after a reboot).

I noticed that your 750Ti temperature crept up to 82 °C:

    # GPU 1 : 79C
    # GPU 1 : 80C
    # GPU 0 : 69C
    # GPU 1 : 81C
    # GPU 0 : 70C
    # GPU 0 : 71C
    # GPU 1 : 82C

I suggest you reduce your temperature target a bit.

FAQ's | HOW TO: Opt out of Beta Tests | Ask for Help
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
> Perhaps the 'best bang for the buck' is an external device that will support many 750Ti GPUs. Is there such a thing??

There are such things, but they are expensive, there is a performance loss, and they're not necessary. An MSI Z77A-G45 (and many similar boards) has 3 x16 PCIe slots, and you could additionally use up to 3 of the PCIe x1 slots (albeit at a performance loss of 15% or more, depending on setup).

After a quick look at an overclocked GTX750Ti on XP: in theory, 2 overclocked GTX750Ti's on a quad-core system optimized for GPUGrid could do 12.5% more work than 1 overclocked GTX970. The 750Ti's would also cost less to buy and less to run. Two 750Ti's would cost about £200 new while a 970 costs around £250 new; second hand it's £120 to £160 vs £210 to £240, so roughly £140 vs £225.

Assuming a 60 W system overhead, the power usage of the two 750Ti's would be 2*60 W + 60 W = 180 W, and the power usage of the one 970 would be 145 W + 60 W = 205 W. That's 13.8% less power for 12.5% more work, or a system performance/Watt improvement of 28%.

Does it scale up to four 750Ti's? Four 750Ti's would cost about £400 new while two 970's would cost around £500 new (~£300 vs £450 second hand). The power usage of the four 750Ti's would be 4*60 W + 60 W = 300 W; the power usage of the two 970's would be 2*145 W + 60 W = 350 W. That's 16.6% less power for 12.5% more work, or a system performance/Watt improvement of 31%. So: 12.5% more work, £100 less to buy, and 50 W less power consumption.

On the down side:

- If you are hit by the WDDM overhead (Vista, W7, W8, W10, 2008Server and 2012Server…) then you may miss the 24 h +50% bonus deadline for some tasks (or might just scrape under it). This shouldn't be a problem on XP or Linux with an overclocked GTX750Ti, but at reference clocks and/or on a non-optimized system you would still be looking at 26 h+ for some tasks (so you do need to overclock these).
- On WDDM systems the GTX970's can run two tasks at a time to increase overall credit, effectively negating the theoretical 12.5% improvement of the 750Ti's (I haven't checked whether that figure holds on a WDDM system).
- I cannot increase the power usage of my GTX750Ti, unlike the 970's.
- The 750Ti's might not hold their value as long, and IMO, being smaller, they would be more likely to fail.
- If the size of tasks increases, there is a likelihood that the 750Ti will no longer return work in time for the full bonus. That said, it should still get the 25% bonus (for reporting inside 48 h) and could run short WU's for some time.
- While it's no more expensive to get a motherboard with 2 PCIe x16 slots, and some have 3 slots, very few have 4 slots. In theory you could use an x1 slot with a powered riser, but the loss of bus width would reduce performance by more than the 12.5% gain. However, the 750Ti would still be cheaper to buy and run, you might be able to tune it accordingly, and it's likely to suffer less than a bigger card raised from an x1 slot.

For comparison purposes: the power usage of a single-750Ti system would be 60 W + 60 W = 120 W. As the throughput is only 56.25% of a 970's and the power usage is 58.53% of the 970 system's, overall system performance per Watt is a bit less (4% less) than a 970's. Similarly, in terms of system performance per Watt, a GTX980 is 10.8% better than a system with a single GTX750Ti. If you compare one GTX980 against three GTX750Ti's, the three GTX750Ti's can do about 2.6% more work but use 180 W compared to the 165 W of the GTX980; the GTX980 is therefore the better choice in terms of system performance/Watt (by 4%).
However, a new GTX980 still costs £400 while three GTX750Ti's cost £300, and you could pick up three second-hand 750Ti's for around £200. Obviously you could buy a system that uses more or less power, which would change the picture a bit, but basically: if you are only going to get one card, get a bigger one; and if you want max performance/Watt on the cheap for now, build a system with two, three or four GTX750Ti's on XP or Linux.

FAQ's | HOW TO: Opt out of Beta Tests | Ask for Help
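skgiven's performance-per-Watt arithmetic is easy to reproduce. A sketch using the throughput and wattage figures from the post above, with one GTX970's throughput normalised to 1.0:

```python
# Reproduces the system performance/Watt comparisons above. Throughput is
# relative to one GTX 970 (= 1.0; a 750Ti does 0.5625, per the post), and
# total power is card wattage plus the assumed 60 W system overhead.
SYSTEM_W = 60

def perf_per_watt(throughput: float, cards_watts: float) -> float:
    return throughput / (cards_watts + SYSTEM_W)

configs = {
    "2x GTX750Ti": perf_per_watt(1.125,  2 * 60),   # 180 W total
    "1x GTX970":   perf_per_watt(1.0,    145),      # 205 W total
    "4x GTX750Ti": perf_per_watt(2.25,   4 * 60),   # 300 W total
    "2x GTX970":   perf_per_watt(2.0,    2 * 145),  # 350 W total
    "1x GTX750Ti": perf_per_watt(0.5625, 60),       # 120 W total
}

for a, b in [("2x GTX750Ti", "1x GTX970"),
             ("4x GTX750Ti", "2x GTX970"),
             ("1x GTX750Ti", "1x GTX970")]:
    print(f"{a} vs {b}: {configs[a] / configs[b] - 1:+.1%}")
# -> +28.1%, +31.2%, -3.9% (the 28%, 31% and ~4% figures in the post)
```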
Retvari Zoltan | Joined: 20 Jan 09 | Posts: 2380 | Credit: 16,897,957,044 | RAC: 0
> On the down side: ... The 750Ti's might not hold their value as long, and IMO, being smaller, they would be more likely to fail.

I agree with only the first part of this statement. Larger cards have higher TDP, resulting in higher temperatures, which could mean a shorter lifespan.

> If the size of tasks increases, there is a likelihood that the 750Ti will no longer return work in time for the full bonus. That said, it should still get the 25% bonus (for reporting inside 48 h) and could run short WU's for some time.

That's why I don't recommend the GTX 750Ti for GPUGrid. This card (and the GTX 750) are the smallest of the Maxwell series, and now that the GTX 960 is available, it is a much better choice taking all three aspects (speed, price and energy efficiency) into consideration.