780Ti vs. 770 vs. 750Ti

tomba

Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level: Lys
Message 41389 - Posted: 23 Jun 2015, 16:03:09 UTC

I am doing an analysis of my three GPUs, comparing their credit delivery performance against one another.

1. All WUs completed within 24 hours. That's why the 750Ti does not appear in the Gerard numbers(!)
2. The Gerards are from 23 April, when I installed the 780Ti, and the Noelias are from 10 June.
3. Any % improvement difference between the 'ETQunbound' and '467x' Noelias is very marginal.
4. There are fewer 780Ti and 750Ti WUs than you might expect. But these two GPUs are on the same rig, and I have been sharing the Gerard loads between them so that all WUs complete inside 24 hours. None of these WUs are in the analysis.

My conclusions?

• The 780Ti is only around 33% better than the 770 for GPUGrid. Big surprise, given the price differential.
• 2x750Ti deliver more credits than one 780Ti, provided they can complete in under 24 hours; i.e., no recent Gerards! I've not done the sums, but the price difference is staggering.
• Perhaps the 'best bang for the buck' is an external device that will support many 750Ti GPUs, if Gerard can be persuaded to moderate his processing demands... Is there such a thing??
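For anyone repeating this kind of comparison, the normalisation is just credit per WU scaled by runtime into credits per day; a minimal sketch with hypothetical placeholder figures (not the numbers from this analysis):

```python
# Sketch of the normalisation behind a credit-delivery comparison:
# credits per day = credit per WU / hours per WU * 24. The figures
# below are hypothetical placeholders, not results from this analysis.
samples = {
    "GTX 780Ti": {"credit_per_wu": 255_000, "hours_per_wu": 8.0},
    "GTX 770":   {"credit_per_wu": 255_000, "hours_per_wu": 10.5},
    "GTX 750Ti": {"credit_per_wu": 127_500, "hours_per_wu": 10.0},
}

for gpu, s in samples.items():
    credits_per_day = s["credit_per_wu"] / s["hours_per_wu"] * 24
    print(f"{gpu}: {credits_per_day:,.0f} credits/day")
```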


ID: 41389
Jim1348

Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Level: His
Message 41390 - Posted: 23 Jun 2015, 17:19:14 UTC - in response to Message 41389.  

• 2x750Ti deliver more credits than one 780Ti, provided they can complete in under 24 hours; i.e., no recent Gerards! I've not done the sums, but the price difference is staggering.
• Perhaps the 'best bang for the buck' is an external device that will support many 750Ti GPUs, if Gerard can be persuaded to moderate his processing demands... Is there such a thing??

It is quite possible for the GTX 750 Tis to complete the Gerards reliably in under 24 hours, but there are a few tricks involved.



The GTX 750 Tis are not just good, they are great for efficiency, which is especially welcome during the summer.

ID: 41390
tomba

Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level: Lys
Message 41391 - Posted: 23 Jun 2015, 17:52:25 UTC - in response to Message 41390.  


It is quite possible for the GTX 750 Tis to complete the Gerards reliably in under 24 hours

Thanks for the response, Jim.

I see you are able to do that, but my problem is my Internet connection. I'm 3 km from the telephone exchange and everyone in the village is Netflix-ing! My best connection speed is 2 Mbps; often it's 0.5 Mbps. It can take three hours to upload a 90 MB Gerard result.

Not sure why GPUGrid penalises me for having a poor connection, but that's the way it is!!
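The arithmetic behind that: upload time is file size over uplink speed, and an ADSL uplink is typically only a small fraction of the quoted downlink figure. A quick sketch (the uplink rates below are illustrative assumptions, not measurements):

```python
# Rough upload-time arithmetic for a finished result file. Assumption:
# the ADSL uplink is far slower than the quoted downlink, which is how
# a "2 meg" line can take hours to upload a 90 MB result.

def upload_hours(size_mb: float, uplink_mbps: float) -> float:
    """Size in megabytes, uplink in megabits per second."""
    return size_mb * 8 / uplink_mbps / 3600

for uplink_mbps in (2.0, 0.5, 0.07):
    print(f"{uplink_mbps:>4} Mbps uplink -> "
          f"{upload_hours(90, uplink_mbps):.1f} h for a 90 MB result")
```

A three-hour upload of 90 MB implies an effective uplink of roughly 0.07 Mbps.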
ID: 41391
tomba

Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level: Lys
Message 41392 - Posted: 23 Jun 2015, 18:51:52 UTC - in response to Message 41391.  

Not sure why GPUGrid penalises me for having a poor connection, but that's the way it is!!

Is it too much to ask that the project give credit based on WU processing time rather than sent/received time?
ID: 41392
Retvari Zoltan
Joined: 20 Jan 09
Posts: 2380
Credit: 16,897,957,044
RAC: 0
Level: Trp
Message 41393 - Posted: 23 Jun 2015, 22:32:54 UTC - in response to Message 41392.  
Last modified: 23 Jun 2015, 22:34:58 UTC

Not sure why GPUGrid penalises me for having a poor connection, but that's the way it is!!

Is it too much to ask that the project give credit based on WU processing time rather than sent/received time?

From the project's point of view, the reason for a task's delayed return makes no difference.
How could the project tell from the processing time alone that your host missed the bonus deadline because of a slow internet connection, and not because the GPU was offline or crunching something else?
Besides, if you know your internet connection is slow and you don't want to miss the bonus deadline, you should choose your GPU with both conditions in mind (e.g. if you can't get a faster internet connection, then you have to buy a faster GPU, a GTX960 for example).
From the project's side, a workable solution could be to assign the bonus to the host rather than to the workunit itself: if the host returned its previous workunit within 24 hours, the current workunit would earn the +50% bonus credit. In that case, a host with a slow and a fast GPU, or with two almost-fast-enough GPUs, would earn the +50% bonus for all workunits.
But this method does not reflect the way this project works:
a given simulation consists of a series of workunits, each one continuing the work of the previous one, so from the project's point of view it is better if a workunit is returned as fast as possible, so that the whole simulation can be finished as fast as possible. The current bonus method therefore serves the project's goals better than your (or my) suggestion.
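For clarity, the current per-workunit rule discussed here (+50% credit for return within 24 hours of dispatch, +25% within 48 hours, both figures confirmed later in this thread) amounts to the following sketch; the function name is illustrative:

```python
# The current per-workunit bonus rule as described in this thread:
# +50% credit if the result is returned within 24 h of being sent,
# +25% if within 48 h, base credit otherwise. The clock runs from send
# to return, so slow uploads count against the deadline just like slow
# crunching does.

def granted_credit(base_credit: float, hours_sent_to_return: float) -> float:
    if hours_sent_to_return <= 24:
        return base_credit * 1.50
    if hours_sent_to_return <= 48:
        return base_credit * 1.25
    return base_credit

print(granted_credit(100_000, 23.5))  # 150000.0 - made the 24 h deadline
print(granted_credit(100_000, 26.0))  # 125000.0 - a slow upload cost 25%
```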
ID: 41393
[CSF] Thomas H.V. DUPONT

Joined: 20 Jul 14
Posts: 732
Credit: 130,089,082
RAC: 0
Level: Cys
Message 41394 - Posted: 24 Jun 2015, 8:19:29 UTC - in response to Message 41389.  

Thanks for this very interesting report, tomba, and thanks for sharing.
Really appreciated :)
[CSF] Thomas H.V. Dupont
Founder of the team CRUNCHERS SANS FRONTIERES 2.0
www.crunchersansfrontieres
ID: 41394
eXaPower

Joined: 25 Sep 13
Posts: 293
Credit: 1,897,601,978
RAC: 0
Level: His
Message 41396 - Posted: 24 Jun 2015, 16:35:53 UTC - in response to Message 41394.  

Thanks for this very interesting report, tomba, and thanks for sharing.
Really appreciated :)

+1
ID: 41396
tomba

Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level: Lys
Message 41397 - Posted: 24 Jun 2015, 17:34:18 UTC - in response to Message 41394.  

Thanks for this very interesting report, tomba, and thanks for sharing.
Really appreciated :)

My pleasure, Thomas!

Greetings from the woods, 3 km from La Garde Freinet. Its Netflixing inhabitants are killing my Internet service!!
ID: 41397
tomba

Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level: Lys
Message 41398 - Posted: 24 Jun 2015, 19:15:59 UTC - in response to Message 41389.  

Perhaps the 'best bang for the buck' is an external device that will support many 750Ti GPUs. Is there such a thing??

No takers on this thought, but perhaps there are mobos that will take three (four?) double-width 750Ti GPUs??
ID: 41398
skgiven (Volunteer moderator, Volunteer tester)
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level: His
Message 41402 - Posted: 25 Jun 2015, 15:04:50 UTC - in response to Message 41389.  
Last modified: 25 Jun 2015, 15:53:47 UTC

I put a GTX750Ti into a Q6600 system (DDR3). At stock, and running 2 CPU tasks, it would take between 26 and 27h to complete one of Gerard's long tasks; the 750Ti's GPU utilization was around 90%.
I enabled SWAN_SYNC and rebooted, reduced the CPU usage to 50% (to still run one CPU task), and overclocked to 1306/1320MHz (it bounces around); the GPU utilization rose to 97%.
GPU-Z says it's in a PCIe 1.1 x16 slot using an x4 PCIe bus.
Going by the % progress, the task should now finish in under 23h.
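That last estimate is a linear extrapolation from BOINC's progress figure; a minimal sketch, with illustrative numbers:

```python
# Linear extrapolation of total runtime from BOINC's % progress, the
# same "going by the % progress" estimate used above.

def projected_total_hours(elapsed_hours: float, fraction_done: float) -> float:
    """Assumes a constant progress rate over the whole task."""
    return elapsed_hours / fraction_done

# Illustrative numbers: 5.7 h elapsed at 25% done projects to 22.8 h total.
print(f"{projected_total_hours(5.7, 0.25):.1f} h")
```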
ID: 41402
tomba

Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level: Lys
Message 41403 - Posted: 25 Jun 2015, 17:57:07 UTC - in response to Message 41402.  

Thanks for the reply, skgiven!

Enabled SWAN_SYNC and rebooted

Did that. Task Manager now tells me that my two acemd.847-65 tasks are running at 100%.

overclocked to 1306/1320MHz

Now I'm in trouble... Which sliders do I use to get to 1306/1320MHz ??

ID: 41403
eXaPower

Joined: 25 Sep 13
Posts: 293
Credit: 1,897,601,978
RAC: 0
Level: His
Message 41404 - Posted: 25 Jun 2015, 18:21:27 UTC - in response to Message 41403.  
Last modified: 25 Jun 2015, 18:53:16 UTC

Which sliders do I use to get to 1306/1320MHz ??

"GPU clock offset" slider. Boost bins are in 13MHz intervals.

Begin by raising one bin at a time until you reach 1306 or 1320.

At your current 1.2V/1150MHz clock, +156MHz on the GPU clock offset slider equals 1306MHz, which is a total of 12 boost bins (one more bin lands at about 1320). If you stay under 80C the boost clock should hold there; if not, the clock will fluctuate a few bins. This is normal. Unless the EVGA program is misreading the voltage, 1.2V at 1150MHz might not leave a lot of headroom for an overclock.
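For the bin arithmetic, a minimal sketch assuming the 13MHz bins and the 1150MHz starting boost clock quoted above:

```python
# Boost-bin arithmetic for Maxwell cards: the clock offset slider moves
# the boost clock in whole 13 MHz bins. The 1150 MHz starting clock and
# +156 MHz offset are the figures quoted above.
BIN_MHZ = 13

def apply_offset(boost_mhz: int, offset_mhz: int) -> tuple[int, int]:
    """Return (whole bins, resulting clock) for a given slider offset."""
    bins = offset_mhz // BIN_MHZ
    return bins, boost_mhz + bins * BIN_MHZ

print(apply_offset(1150, 156))  # (12, 1306); one more bin lands at 1319
```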
ID: 41404
tomba

Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level: Lys
Message 41411 - Posted: 26 Jun 2015, 16:41:36 UTC - in response to Message 41403.  

Enabled SWAN_SYNC and rebooted.

Did that. Task Manager now tells me that my two acemd.847-65 tasks are running at 100%.

Looks like I gain 20 mins on a Noelia and 35 mins on a Gerard.

Worthwhile! Thank you.
ID: 41411
tomba

Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level: Lys
Message 41413 - Posted: 26 Jun 2015, 17:11:28 UTC - in response to Message 41404.  

Which sliders do I use to get to 1306/1320MHz ??

"GPU clock offset" slider. Boost bins are in 13MHz intervals. Begin by raising one bin at a time until you reach 1306 or 1320.

Thanks for the response, eXaPower!

Did that. Pushed it up to 1276 for a temp around 70C and no additional fan noise where the ambient is 25C:



A bit puzzled why GPU-Z says I'm only at 1147...



At your current 1.2V/1150MHz clock, +156MHz on the GPU clock offset slider equals 1306MHz, which is a total of 12 boost bins (one more bin lands at about 1320). If you stay under 80C the boost clock should hold there; if not, the clock will fluctuate a few bins. This is normal. Unless the EVGA program is misreading the voltage, 1.2V at 1150MHz might not leave a lot of headroom for an overclock.

Not sure what you're saying here. Was I already at 1320??
ID: 41413
skgiven (Volunteer moderator, Volunteer tester)
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level: His
Message 41415 - Posted: 26 Jun 2015, 18:37:39 UTC - in response to Message 41413.  

A bit puzzled why GPU-Z says I'm only at 1147...


1147MHz is the clock without boost. Click the GPU-Z Sensors Tab to see what it actually is.
ID: 41415
tomba

Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level: Lys
Message 41417 - Posted: 27 Jun 2015, 6:08:14 UTC - in response to Message 41415.  

A bit puzzled why GPU-Z says I'm only at 1147...

1147MHz is the clock without boost. Click the GPU-Z Sensors Tab to see what it actually is.

Thanks skgiven! Yep - the sensors tab shows 1276. So much to learn...
ID: 41417
tomba

Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level: Lys
Message 41419 - Posted: 27 Jun 2015, 8:57:22 UTC - in response to Message 41402.  

Enabled SWAN_SYNC

As previously reported, I did that, but...

This WU just finished. The top entry, for my 750Ti, says "SWAN Device 1". Lower down, for my 780Ti, it says "SWAN Device 0". The 750Ti is the one driving video.

Is this the way it is, or is there something else to do?


ID: 41419
skgiven (Volunteer moderator, Volunteer tester)
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level: His
Message 41420 - Posted: 27 Jun 2015, 11:33:27 UTC - in response to Message 41419.  
Last modified: 27 Jun 2015, 11:34:32 UTC

The tasks will report SWAN Device 0 or SWAN Device 1 irrespective of the SWAN_SYNC setting. AFAIK it affects both cards the same (after a reboot).

Noticed that your 750Ti temperature crept up to 82C:

# GPU 1 : 79C
# GPU 1 : 80C
# GPU 0 : 69C
# GPU 1 : 81C
# GPU 0 : 70C
# GPU 0 : 71C
# GPU 1 : 82C

Suggest you reduce your temp target a bit.
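Those temperature lines are easy to parse if you want to watch for this; a quick sketch over the exact excerpt above:

```python
# Parse the "# GPU n : TC" temperature lines quoted above and report
# the maximum seen per GPU, to spot a card whose temperature is creeping up.
import re

log = """\
# GPU 1 : 79C
# GPU 1 : 80C
# GPU 0 : 69C
# GPU 1 : 81C
# GPU 0 : 70C
# GPU 0 : 71C
# GPU 1 : 82C
"""

temps: dict[str, list[int]] = {}
for gpu, celsius in re.findall(r"# GPU (\d+) : (\d+)C", log):
    temps.setdefault(gpu, []).append(int(celsius))

for gpu, readings in sorted(temps.items()):
    print(f"GPU {gpu}: max {max(readings)}C over {len(readings)} samples")
```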
ID: 41420
skgiven (Volunteer moderator, Volunteer tester)
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level: His
Message 41422 - Posted: 27 Jun 2015, 13:18:56 UTC - in response to Message 41398.  
Last modified: 27 Jun 2015, 13:22:51 UTC

Perhaps the 'best bang for the buck' is an external device that will support many 750Ti GPUs. Is there such a thing??

No takers on this thought, but perhaps there are mobos that will take three (four?) double-width 750Ti GPUs??

There are such things, but they are expensive, there is a performance loss, and they're not necessary.
An MSI Z77A-G45 (and many similar boards) has 3 PCIe x16 slots, and you could additionally use up to 3 of its PCIe x1 slots (albeit at a performance loss of 15% or more, depending on setup).

After a quick look at an overclocked GTX750Ti on XP: in theory, 2 overclocked GTX750Ti's on a quad-core system optimized for GPUGrid could do 12.5% more work than 1 overclocked GTX970. The 750's would also cost less to buy and less to run.

Two 750Ti’s would cost about £200 new while a 970’s would cost around £250 new.
Second hand £120 to £160 vs £210 to £240; roughly £140 vs £225.

Assuming a 60W system overhead:
The power usage of the 2 750Ti’s would be 2*60W+60W=180W.
The power usage of the one 970 would be 145W+60W=205W.
That’s 13.8% less power for 12.5% more work or a system performance/Watt improvement of 28%.

Does it scale up to 4 750Ti's?

4 750Ti’s would cost about £400 new while the 970’s would cost around £500 new (~£300 vs £450 second hand).
The power usage of the 4 750Ti’s would be 4*60W+60W=300W.
The power usage of the 2 970’s would be 2*145W+60W=350W.
It’s 16.6% less power for 12.5% more work or a system performance/Watt improvement of 31%.

So 12.5% more work, £100 less to buy and 50W less power consumption.

On the down side:

If you are hit by the WDDM overhead (Vista, W7, W8, W10, 2008Server and 2012Server…) then you may miss the 24h +50% bonus deadline for some tasks (or just scrape under it). This shouldn't be a problem on XP or Linux with an overclocked GTX750Ti, but at reference clocks and/or on a non-optimized system you would still be looking at 26h+ for some tasks (so you do need to overclock these).
On WDDM systems the GTX970's can run 2 tasks at a time to increase overall credit, effectively negating the theoretical 12.5% improvement of the 750Ti's (I haven't checked whether it holds on a WDDM system).
I cannot increase the power limit of my GTX750Ti, unlike the 970's.
The 750Ti's might not hold their value as long, and IMO, being smaller cards, they would be more likely to fail.
If the size of tasks increases, there is a likelihood that the 750Ti will no longer return work in time for the full bonus. That said, they should still get the 25% bonus (for reporting inside 48h) and could run short WUs for some time.
While it's no more expensive to get a motherboard with 2 PCIe x16 slots, and some have 3, very few have 4. In theory you could use an x1 slot with a powered riser, but the loss of bus width would reduce performance by more than the 12.5% gain. However, the 750Ti would still be cheaper to buy and run, and you might be able to tune it accordingly; it's also likely to suffer less from an x1 slot than a bigger card would.

For comparison purposes:
The power usage of a single-750Ti system would be 60W+60W=120W.
As the throughput is only 56.25% of a 970's and the power usage is 58.5% of it, overall system performance per Watt is a bit less (about 4% less) than a 970's.
Similarly, in terms of system performance per Watt, a GTX980 is 10.8% better than a system with a single GTX750Ti.
If you compare one GTX980 against 3 GTX750Ti's, you will find that the 3 GTX750Ti's can do about 2.6% more work but use 180W compared to the 165W of the GTX980.
The GTX980 is therefore the better choice in terms of system performance/Watt (by 4%).
However, a new GTX980 still costs £400 while three GTX750Ti’s cost £300 and you could pick up 3 second hand 750Ti’s for around £200.
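The arithmetic in this post can be checked in a few lines; a sketch using the stated assumptions (work in units where one GTX970 = 1.0, card powers as given, 60W system overhead):

```python
# Reproduces the performance-per-Watt arithmetic in this post. Work is
# in arbitrary units where one GTX970 = 1.0; the card powers and 60W
# system overhead are the assumptions stated above.
SYSTEM_OVERHEAD_W = 60

def perf_per_watt(work: float, card_watts: list[float]) -> float:
    return work / (sum(card_watts) + SYSTEM_OVERHEAD_W)

one_970 = perf_per_watt(1.0, [145])
print(f"2x750Ti vs 1x970: {perf_per_watt(1.125, [60] * 2) / one_970 - 1:+.0%}")
print(f"4x750Ti vs 2x970: "
      f"{perf_per_watt(2.25, [60] * 4) / perf_per_watt(2.0, [145] * 2) - 1:+.0%}")
print(f"1x750Ti vs 1x970: {perf_per_watt(0.5625, [60]) / one_970 - 1:+.0%}")
# Prints roughly +28%, +31% and -4%, matching the figures above.
```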

Obviously you could buy a system that uses more or less power, which would change the picture a bit, but basically: if you are only going to get one card, get a bigger one; if you want maximum performance/Watt on the cheap for now, build a system with two, three or four GTX750Ti's on XP or Linux.
ID: 41422
Retvari Zoltan
Joined: 20 Jan 09
Posts: 2380
Credit: 16,897,957,044
RAC: 0
Level: Trp
Message 41424 - Posted: 27 Jun 2015, 13:54:16 UTC - in response to Message 41422.  
Last modified: 27 Jun 2015, 13:57:29 UTC

On the down side:
...
The 750Ti’s might not hold their value as long and IMO being smaller would be more likely to fail.

I agree only with the first part of this statement.
Larger cards have higher TDP, resulting in higher temperatures, which can shorten their lifespan.

If the size of a tasks increase then there is a likelihood that the 750Ti will no longer return work in time for the full bonus. That said they should still get the 25% bonus (for reporting inside 48h) and could run short WU’s for some time.

That's why I don't recommend the GTX 750Ti for GPUGrid. This card (and the GTX 750) are the smallest of the Maxwell series, and now that the GTX 960 is available, it is a much better choice taking all three aspects (speed, price and energy efficiency) into consideration.
ID: 41424