Credit per € / $

Message boards : Number crunching : Credit per € / $
MrJo

Joined: 18 Apr 14
Posts: 43
Credit: 1,192,135,172
RAC: 0
Message 37798 - Posted: 3 Sep 2014, 13:18:14 UTC

Back to Credit per €:

I have made a few observations regarding power consumption (in watts) versus the results delivered. I have recorded the results of my graphics cards in an Excel spreadsheet. The table can be downloaded here:

PerformanceProWatt.xls

I had four Nvidia cards for comparison: GTX 770, GTX 760, GTX 680 and GTX 750 Ti. With a Xavax energy meter I measured the actual consumption at idle and under load. Example: the PC with the GTX 770 draws 80 watts at idle and 245 watts with GPUGrid running, so crunching GPUGrid consumes 165 watts. The crunching settings are ACEMD long runs only (8-12 hours on fastest GPU). To obtain a reasonably precise figure, I averaged about 20 results per card.

The end result can be found in the orange cell. In short:

The GTX 770 supplies 105 points/watt
The GTX 760 supplies 110 points/watt
The GTX 680 supplies 116 points/watt
The GTX 750 Ti supplies 187 points/watt

So the GTX 750 Ti is the most efficient graphics card at the moment.

Running each PC with one graphics card individually is the most expensive way to operate GPUGRID. So my next project is an aging CPU (Core i7-860) with two GTX 750 Ti cards running GPUGrid. In my estimation, the machine will need about 190 watts while achieving roughly the score of a GTX 780.

Costs:
Running a machine that consumes 190 watts around the clock results in 1,659.84 kWh per year, which costs about €448 (at €0.27 per kWh).
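
As a rough sanity check, here is a minimal Python sketch of the same arithmetic (the wattages, hours and electricity price are simply the example figures from this post; adjust them for your own hardware and tariff):

# Rough efficiency / running-cost estimate for a GPUGrid cruncher.
# All figures are the examples from the post above.

idle_watts = 80          # PC at idle
load_watts = 245         # PC with GPUGrid running
price_per_kwh = 0.27     # EUR per kWh, German household rate quoted above

gpu_watts = load_watts - idle_watts          # extra draw caused by crunching
print(f"GPUGrid draw: {gpu_watts} W")        # -> 165 W for the GTX 770 example

# Annual energy and cost for a machine drawing 190 W around the clock
machine_watts = 190
hours_per_year = 24 * 7 * 52                 # 8736 h, as used in the post
kwh_per_year = machine_watts / 1000 * hours_per_year
print(f"{kwh_per_year:.2f} kWh/year, {kwh_per_year * price_per_kwh:.0f} EUR/year")
# -> 1659.84 kWh/year, 448 EUR/year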

More ideas for efficient use are welcome ;-)


Regards, Josef

ID: 37798
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 38135 - Posted: 28 Sep 2014, 18:29:52 UTC - in response to Message 37798.  
Last modified: 28 Sep 2014, 18:53:32 UTC

More ideas for efficient use are welcome ;-)

Four GTX 970s in a quad-core system that supports 4 PCIe slots would probably be the most economically productive system at present. You could probably build a base unit for < £1,500 and get 2.5M credits/day (at present performance rates); over 2 years at £0.15/kWh it would cost ~£1,500 to run (assuming a 582 W draw), at which time the value of the system would be ~£500. So the total cost of ownership (TCO) would be ~£2,500 (~$4,000 or ~€3,200) over 2 years for a credit of 1.8B.
Obviously prices vary by location, running costs depend on the price of electricity (which varies greatly from one place to another), and the app could improve in performance...

For comparison, a quad GTX 780 Ti system would yield ~3.3M credits per day, but the purchase cost would be >£2,100, the running cost would be ~£2,600 over 2 years, and the system would only be worth about the same (older kit). So £4,200 for 2.4B.

970 system: ~0.72M credits/£ over 2 years
780 Ti rig: ~0.57M credits/£ over 2 years
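
A back-of-the-envelope TCO calculation along these lines, as a hedged sketch (all inputs are the rough assumptions quoted above; the ~1,000 W draw for the quad 780 Ti build is inferred from the ~£2,600 running cost rather than measured):

# Two-year total-cost-of-ownership sketch for a multi-GPU cruncher.
# Inputs are the rough assumptions from the post above.

def tco_and_credits_per_pound(build_cost, resale_value, watts,
                              gbp_per_kwh, credits_per_day, years=2):
    hours = years * 365 * 24
    energy_cost = watts / 1000 * hours * gbp_per_kwh      # in GBP
    tco = build_cost + energy_cost - resale_value
    return tco, credits_per_day * years * 365 / tco

# Quad GTX 970 build: ~GBP 1500 outlay, ~GBP 500 residual value, 582 W, 2.5M credits/day
print(tco_and_credits_per_pound(1500, 500, 582, 0.15, 2.5e6))   # ~(GBP 2530, ~720k credits/GBP)

# Quad GTX 780 Ti build: >GBP 2100 outlay, ~1000 W draw (inferred), 3.3M credits/day
print(tco_and_credits_per_pound(2100, 500, 1000, 0.15, 3.3e6))  # ~(GBP 4230, ~570k credits/GBP)
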
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ID: 38135
eXaPower

Joined: 25 Sep 13
Posts: 293
Credit: 1,897,601,978
RAC: 0
Message 38162 - Posted: 29 Sep 2014, 12:38:52 UTC - in response to Message 38135.  

More ideas for efficient use are welcome ;-)

Four GTX 970s in a quad-core system that supports 4 PCIe slots would probably be the most economically productive system at present. You could probably build a base unit for < £1,500 and get 2.5M credits/day (at present performance rates); over 2 years at £0.15/kWh it would cost ~£1,500 to run (assuming a 582 W draw), at which time the value of the system would be ~£500. So the total cost of ownership (TCO) would be ~£2,500 (~$4,000 or ~€3,200) over 2 years for a credit of 1.8B.
Obviously prices vary by location, running costs depend on the price of electricity (which varies greatly from one place to another), and the app could improve in performance...

For comparison, a quad GTX 780 Ti system would yield ~3.3M credits per day, but the purchase cost would be >£2,100, the running cost would be ~£2,600 over 2 years, and the system would only be worth about the same (older kit). So £4,200 for 2.4B.

970 system: ~0.72M credits/£ over 2 years
780 Ti rig: ~0.57M credits/£ over 2 years


Skgiven, or anybody else: I'm wondering what you would recommend for GPUGRID. I'm in the process of building a DIY Haswell Xeon system (LGA 2011-3 socket, two CPUs on the motherboard, quad-channel memory) and two low-power Z97 dual-channel systems. Two boards will have as many GPUs as possible, plus a decent amount of storage. (A third will have two GTX 660 Ti and maybe a GTX 750 Ti included.)

My electricity rates have skyrocketed over the last few months where I currently reside. (The town has its own power station and power company, but there has been an ongoing dispute about the cost of distribution, pushing rates from $0.0876/kWh to over $0.1768/kWh. These prices will be in effect for at least 6-9 months unless a term agreement can be reached by both sides. Also, state energy taxes have risen 20% since September of last year.)

For my LGA 2011-3 (2P) quad-channel DDR4 motherboard I will choose either an ASUS Z10PE-D8 WS ($600) or a GIGABYTE GA-7PESH3 ($640) (or something else); it may be possible to fit more than four GPUs on the Gigabyte board. For processors (quad-channel), I already bought two 85 W TDP 6c/6t Xeon 2603 v3 for $220 each. I'm also considering a Z97 board (LGA 1150) for a low-power Xeon or i5/i7: a 25 W TDP Xeon 1240L v3 (4c/8t) listed at $278, a $192 i5-4590T (4c/4t, 45 W TDP), or a $303 i7-4785T (4c/8t, 35 W TDP).

All newly bought GPUs will be Maxwell-based. For the LGA 2011-3 board I'm thinking of four GTX 970s (the GTX 980's price/performance/wattage ratio is higher, as you say), unless a flaw is discovered in the GPC. For the Z97 board, if a GTX 960 is released shortly, I'll get three.

I was given a couple of second-hand GTX 660 Ti by my sister when she upgraded to a GTX 980 last week for the gaming system I built for her. (She lives out of state, so I don't have access to test the new GM204.) For these two CC 3.0 cards I was thinking about picking up a Z97 board and an i5-4590T. I will "eco-tune" all cards, including Maxwell, while waiting for GTX 960 prices. (It's possible I'll keep the GTX 660 Ti when the GTX 960 is released, or get more GTX 970s, depending on the GTX 960 price.)

Total system cost (5 GTX 970, or 3 GTX 970 / 2 GTX 960 / 1 GTX 750 Ti; 2 Xeon 2603 v3; 2 i5-4590T or 2 Xeon 1240L v3; 1 server motherboard; 2 Z97 motherboards), not including PSUs, will be around $3,680-4,000, bought piecemeal (looking for any and all discounts).

Thank you for help and advice.


ID: 38162
MrJo

Joined: 18 Apr 14
Posts: 43
Credit: 1,192,135,172
RAC: 0
Message 38236 - Posted: 1 Oct 2014, 18:05:11 UTC - in response to Message 38135.  
Last modified: 1 Oct 2014, 18:16:37 UTC

Four GTX 970s in a quad-core system that supports 4 PCIe slots would probably be the most economically productive system at present.

Good idea. Is it necessary for all PCIe slots to run at x16 to obtain the full performance of the cards, or is full performance also available with 4 × x8?


creating rates from $0.0876/kWh to over $0.1768/kWh.

A perfect and more than affordable energy world. Here in Germany we can only dream of prices like $0.1768/kWh. The current price I have to pay is €0.27/kWh, which is about $0.34. Is it true that you pay only $0.1768/kWh?
Regards, Josef

ID: 38236
Profile MJH

Joined: 12 Nov 07
Posts: 696
Credit: 27,266,655
RAC: 0
Message 38237 - Posted: 1 Oct 2014, 18:21:34 UTC - in response to Message 38162.  

We use the Asus Z87-WS motherboard - 4 GPUs at x8.

Matt
ID: 38237
eXaPower

Joined: 25 Sep 13
Posts: 293
Credit: 1,897,601,978
RAC: 0
Message 38238 - Posted: 1 Oct 2014, 22:14:40 UTC - in response to Message 38236.  

I made a small mistake figuring kWh cost. Considering how energy costs are all over the place, and each country applies a different formula for prices, I went back to my August-September 2014 power bill to see exactly what is included. Currently $0.1112/kWh is the flat rate (it was $0.08-0.10/kWh from Sept 2013 to March 2014); add a $0.05982/kWh surcharge for taxes and $0.0354/kWh for distribution. This brings it currently to ~$0.19/kWh. Natural gas (my hot water) is the opposite: taxes are less than distribution costs. I have three wood stoves for heat during winter. (Electricity costs here are decent compared to $0.34/kWh for Germany.) In 2013 the average total per kWh was $0.1445, and the average amount of "surcharges" for each kWh was $0.0443. Costs for energy resources have risen considerably in every industrial area.

In Germany, how much of the total price is taxes or "surcharges"? (And what percentage of Germany's imports is energy?) In the States, certain energy policies (more regulations) have made it very difficult for coal-fired plants (a large portion of where East Coast power comes from) to keep energy prices in order. New coal or nuclear power plants aren't being built. Wind/solar/biodegradable (the cost outweighs any benefit) is okay for a private homeowner who can afford it, not for an entire country. Sensible energy policies rarely exist anymore.
ID: 38238
3de64piB5uZAS6SUNt1GFDU9dRhY
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Message 45122 - Posted: 2 Nov 2016, 20:26:22 UTC

In the course of upgrading my GPUs to Pascal I have worked through a lot of specs and characteristics. I thought that Maxwell was already a major leap forward in regard to efficiency, and I was somewhat surprised when I compared multiple generations by their GFLOPS-to-power ratio (power draw taken from Wikipedia, typical 3D consumption). See the graph below.

No doubt Fermi cards should be replaced very soon because of their high energy cost, but Kepler apparently performs better than its reputation. Maxwell (to my surprise) improved efficiency only moderately.

Side note: the 750 (Ti) is already Maxwell architecture and therefore runs more efficiently than the other 700-series (Kepler) GPUs.

From this view, the 600 and 700 series are rightly still in wide use. But if crunchers with Kepler cards decide to upgrade anyway, they should skip Maxwell and go straight to Pascal.
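
For anyone who wants to reproduce this kind of comparison, here is a small sketch. The peak single-precision GFLOPS and board-power figures are approximate published reference specs, not the Wikipedia "typical 3D consumption" values used for the graph, so treat the resulting ratios as illustrative only:

# GFLOPS-per-watt comparison across GPU generations (illustrative reference specs).
# Peak SP GFLOPS and board power (W) are approximate published values, not the
# "typical 3D consumption" figures used for the graph in the post.

cards = {
    "GTX 580 (Fermi)":      (1581, 244),
    "GTX 680 (Kepler)":     (3090, 195),
    "GTX 780 (Kepler)":     (3977, 250),
    "GTX 750 Ti (Maxwell)": (1306, 60),
    "GTX 980 (Maxwell)":    (4612, 165),
    "GTX 1060 (Pascal)":    (4372, 120),
    "GTX 1070 (Pascal)":    (6463, 150),
}

# Print cards from least to most efficient
for name, (gflops, watts) in sorted(cards.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name:24s} {gflops / watts:5.1f} GFLOPS/W")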


I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.
ID: 45122
Profile Retvari Zoltan
Joined: 20 Jan 09
Posts: 2380
Credit: 16,897,957,044
RAC: 0
Message 45123 - Posted: 2 Nov 2016, 22:03:21 UTC - in response to Message 45122.  
Last modified: 2 Nov 2016, 22:03:33 UTC

Nice graph, and I completely agree with what you said about upgrading!

Kepler apparently performs better than its reputation. Maxwell (to my surprise) improved efficiency only moderately.

That's because they are based on the same 28 nm lithography (TSMC's 20 nm node, which had been proposed for Maxwell, had to be skipped).
ID: 45123
Betting Slip

Joined: 5 Jan 09
Posts: 670
Credit: 2,498,095,550
RAC: 0
Message 45124 - Posted: 2 Nov 2016, 22:24:25 UTC - in response to Message 45122.  

In the course of upgrading my GPUs to Pascal I have worked through a lot of specs and characteristics. I thought that Maxwell was already a major leap forward in regard to efficiency, and I was somewhat surprised when I compared multiple generations by their GFLOPS-to-power ratio (power draw taken from Wikipedia, typical 3D consumption). See the graph below.

No doubt Fermi cards should be replaced very soon because of their high energy cost, but Kepler apparently performs better than its reputation. Maxwell (to my surprise) improved efficiency only moderately.

Side note: the 750 (Ti) is already Maxwell architecture and therefore runs more efficiently than the other 700-series (Kepler) GPUs.

From this view, the 600 and 700 series are rightly still in wide use. But if crunchers with Kepler cards decide to upgrade anyway, they should skip Maxwell and go straight to Pascal.



Really? What about price? Not everyone, I should say most people, can afford new Pascal cards. So are they now regarded as second class?

I would also like to add that most Pascal card owners are part of a "vanity" project.
ID: 45124
Profile Retvari Zoltan
Joined: 20 Jan 09
Posts: 2380
Credit: 16,897,957,044
RAC: 0
Message 45125 - Posted: 2 Nov 2016, 22:55:17 UTC - in response to Message 45124.  

Really? What about price? Not everyone, I should say most people, can afford new Pascal cards. So are they now regarded as second class?

I consider the older cards second class: Pascal is so much more efficient than the previous generation(s) that its price can be recouped in a short time through savings on electricity costs.

I would also like to add that most Pascal card owners are part of a "vanity" project.

I would rather call it a "green" project. (Not because it's NVidia's color, but because it's twice as environmentally friendly as older cards.)
ID: 45125
3de64piB5uZAS6SUNt1GFDU9dRhY
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Message 45126 - Posted: 2 Nov 2016, 23:00:27 UTC

No offense meant. The graph below just illustrates very clearly that Pascal GPUs pay off quickly for hardcore crunchers. For example, the GTX 780 Ti is still a well-performing high-end card, but the GTX 1070 yields the same GFLOPS at 70-80 W less. If you run the GPU almost day and night you will have an extra cost of 0.08 kW × 8000 hrs × 0.2 €/kWh = 128 €. A new GTX 1070 is available from 400 € and the GTX 780 Ti can be sold second-hand for maybe 150 €. Which means you get a new card without further investment after two years, just from saving energy.
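
The same payback estimate as a tiny sketch (the 80 W saving, 8000 hours, €0.20/kWh and the card prices are just the assumptions from this post):

# Payback estimate for replacing an older card with a more efficient one.
# Inputs are the rough assumptions from the post above.

watt_saving = 80            # W less drawn by the GTX 1070 vs the GTX 780 Ti
hours_per_year = 8000       # "almost day and night"
eur_per_kwh = 0.20

annual_saving = watt_saving / 1000 * hours_per_year * eur_per_kwh
net_upgrade_cost = 400 - 150      # new GTX 1070 minus resale of the old card

print(f"Annual electricity saving: {annual_saving:.0f} EUR")          # ~128 EUR
print(f"Payback time: {net_upgrade_cost / annual_saving:.1f} years")  # ~2 years
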
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.
ID: 45126
3de64piB5uZAS6SUNt1GFDU9dRhY
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Message 45129 - Posted: 2 Nov 2016, 23:46:35 UTC
Last modified: 3 Nov 2016, 0:30:01 UTC

Other examples: take the renowned Kepler GTX 780. The new GTX 1060 yields 10% more SP GFLOPS at 80-100 W less power consumption, which leads to the same saving of 100-150 € per year.

And upgrades from Fermi will pay off even more quickly. Replacing a GTX 580 with an affordable GTX 1050 (not in the graph yet) will speed performance up by 10-15% but reduce the power draw by more than 150 W!

I would rather call it a "green" project. (Not because it's NVidia's color, but because it's twice as environmentally friendly as older cards.)

Yes, that is another valid argument.

Really? What about price? Not everyone, I should say most people, can afford new Pascal cards. So are they now regarded as second class? I would also like to add that most Pascal card owners are part of a "vanity" project.

I would neither label Maxwell users as "second class" nor Pascal users as "snobs". If someone aims at a well-performing card and just crunches now and again, a GTX 980 (maybe aside from its low 4 GB memory size) is absolutely a good choice. But for 24/7 operation Pascal makes much more sense in terms of energy cost.
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.
ID: 45129
3de64piB5uZAS6SUNt1GFDU9dRhY
Avatar

Send message
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwat
Message 45136 - Posted: 3 Nov 2016, 11:59:57 UTC - in response to Message 45126.  
Last modified: 3 Nov 2016, 12:03:40 UTC

No offense meant. The graph below just illustrates very clearly that Pascal GPUs pay off quickly for hardcore crunchers. For example, the GTX 780 Ti is still a well-performing high-end card, but the GTX 1070 yields the same GFLOPS at 70-80 W less. If you run the GPU almost day and night you will have an extra cost of 0.08 kW × 8000 hrs × 0.2 €/kWh = 128 €

...I meant annual extra cost, of course. Your electricity rate (€ per kWh) may differ from my example above.
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.
ID: 45136
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 45140 - Posted: 3 Nov 2016, 16:48:05 UTC - in response to Message 45136.  
Last modified: 3 Nov 2016, 16:49:47 UTC

That chart might not accurately reflect performance running GPUGrid apps!

In the UK the best GPU in terms of performance per £ of purchase price is the GTX 1060 3GB.
Based on reports here that Pascal GPUs can boost to ~2 GHz, I've put together a more realistic table of boost performance vs price for the Pascal range:

GeForce GTX    TFlops (boost @ 2 GHz)    UK cost (£)    GFlops/£
1050           2.56                      120            21.3
1050 Ti        3.07                      140            21.9
1060 3GB       4.61                      190            24.3
1060 6GB       5.12                      240            21.3
1070           7.68                      395            19.4
1080           10.24                     600            17.1

This assumes all cards boost to 2 GHz (and the 14 nm cards might not). It is only theoretical and ignores factors such as app scaling, supporting OS, CPU, RAM, CPU cache, GPU L2 cache, task performance variations...
Both performance/Watt and performance per purchase cost (outlay) are relevant. Reports of 45 W actual use when crunching here on a GTX 1060 3GB might throw the previous graph's data all over the place.
The best measurements are actual observations (for here), not theoretical ones. So what do cards actually boost to, and what is the actual performance running specific task types? (Post elsewhere or adapt for here.)
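
The table above is easy to regenerate. A minimal sketch, assuming the theoretical peak of 2 single-precision FLOPs per CUDA core per clock, the ~2 GHz boost, and the UK prices quoted above:

# Theoretical boost throughput per pound for Pascal cards, as in the table above.
# Peak SP TFlops = 2 FLOPs per core per clock * CUDA cores * clock, assuming a 2 GHz boost.

BOOST_GHZ = 2.0
cards = {               # name: (CUDA cores, UK price in GBP as quoted above)
    "1050":     (640,  120),
    "1050 Ti":  (768,  140),
    "1060 3GB": (1152, 190),
    "1060 6GB": (1280, 240),
    "1070":     (1920, 395),
    "1080":     (2560, 600),
}

for name, (cores, price) in cards.items():
    tflops = 2 * cores * BOOST_GHZ / 1000
    print(f"GTX {name:8s} {tflops:5.2f} TFlops  {tflops * 1000 / price:4.1f} GFlops/GBP")
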
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ID: 45140
3de64piB5uZAS6SUNt1GFDU9dRhY
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Message 45197 - Posted: 5 Nov 2016, 12:24:31 UTC
Last modified: 5 Nov 2016, 12:25:08 UTC

Taking performance per purchase cost into consideration is a valid argument. But in that case the service life until the GPU is replaced also matters, in order to calculate the amortization.

Reports of 45 W actual use when crunching here on a GTX 1060 3GB might throw the previous graph's data all over the place.

Well, I have already read many different power-draw and utilization statements in this forum, covering all kinds of GPUs: from 45 W to 65 W or more for the 1060 in particular. So I guess we will never have perfect numbers for the utilization of cards under every OS. From that view, the graph below should be a workable reference point, although far from perfect.
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.
ID: 45197
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 45213 - Posted: 5 Nov 2016, 16:33:56 UTC - in response to Message 45197.  
Last modified: 5 Nov 2016, 16:50:15 UTC

IMO 2 years is a reasonable 'working' life expectancy for mid- to high-end GPUs, and that should form part of any TCO analysis for system builds, refurbs and performance-per-cost measurements.

The main reason for ~2 years is the time between two GPU generations. After 2 years the GPUs you have still retain some value, but that drops off almost completely when another GPU generation becomes available, circa 4 years. So it's good to upgrade the GPUs every 2 years or so. Obviously it depends on the cost and app performance. ATM Pascals are still way too expensive IMHO.

The main base units don't need upgrading every 2 years, but after 4 years (2 GPU generations) you would still get a decent return, and the 'upgrade' might actually be worth bothering about. After 6 years (broadly speaking) the return drops off. Of course this depends on CPU generations and what they bring, if anything. All that tick-tock talk and die shrinkage brought next to nothing in terms of CPU GFlops performance for crunchers, especially GPU crunchers. As Intel strove to enhance their little integrated GPUs, they ignored/starved discrete GPU crunchers of the necessary PCIe support, unless you spend stupid money on needless kit. AMD stuck to PCIe 2 as their GPUs don't do complicated programming (CUDA) and don't need higher PCIe bandwidth. NV doesn't make mainstream CPUs, so they've been at the mercy of this unscrutinised duopoly.

If you get an extreme-end model and buy early, then perhaps 2.5 to 3 years might be what you would want, though it's not always what you get! Sometimes the extreme-end cards turn up last. The GTX 780 Ti (Kepler) arrived 6 months after the 780 and 10 months before the GTX 980 (Maxwell), which offered similar performance for less power usage. With less than 2 years of usage you may not realize the performance/Watt benefit over the purchase cost of the bigger cards (though that varies depending on purchase and electricity costs).

The 45 W, 65 W and so on are just observations. Some might be accurate; others might have been down-clocked or influenced by CPU overuse at the time...
I know my GTX 1060 3GB uses from 75 to 110 W on Linux with reasonable GPU usage (80% or so), because I've measured it at the wall. With such variation (35 W) depending on what tasks are running, we need to be careful with assessments.
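
To illustrate how much such a measured range matters for any efficiency estimate, a quick sketch (the 75-110 W range is the wall measurement above; the peak-GFLOPS figure for a GTX 1060 3GB is an illustrative assumption of 1152 cores at ~1.9 GHz):

# How the measured power range changes a GFLOPS/W estimate for the same card.
# 75-110 W is the wall-measured range quoted above; the peak-GFLOPS figure is
# an illustrative assumption (1152 cores * ~1.9 GHz * 2 FLOPs per clock).

peak_gflops = 1152 * 1.9 * 2      # ~4378 GFLOPS for a GTX 1060 3GB
for watts in (75, 90, 110):
    print(f"{watts:3d} W  ->  {peak_gflops / watts:4.1f} GFLOPS/W")
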
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ID: 45213
Rabinovitch
Joined: 25 Aug 08
Posts: 143
Credit: 64,937,578
RAC: 0
Message 45255 - Posted: 14 Nov 2016, 10:33:27 UTC

Does anyone know which is the better choice - the 750 Ti or the 1050 Ti (both 4 GB)?
From Siberia with love!

ID: 45255
kain

Joined: 3 Sep 14
Posts: 152
Credit: 918,557,369
RAC: 28
Message 45257 - Posted: 14 Nov 2016, 14:13:06 UTC - in response to Message 45255.  

Does anyone know which is the better choice - the 750 Ti or the 1050 Ti (both 4 GB)?


1050 Ti of course.
ID: 45257
3de64piB5uZAS6SUNt1GFDU9dRhY
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Message 47018 - Posted: 18 Apr 2017, 15:34:19 UTC
Last modified: 18 Apr 2017, 15:35:23 UTC

As an addition to my graph below: the new 1080 Ti yields 53 GFLOPS/watt and the Titan Xp about the same, assuming that it pulls 220-230 W just like the Titan X. Which means its efficiency is essentially identical to that of the non-Ti 1080. So much for theory...
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.
ID: 47018
PappaLitto

Joined: 21 Mar 16
Posts: 513
Credit: 4,673,458,277
RAC: 0
Message 47110 - Posted: 26 Apr 2017, 13:34:19 UTC - in response to Message 47018.  

As an addition to my graph below: the new 1080 Ti yields 53 GFLOPS/watt and the Titan Xp about the same, assuming that it pulls 220-230 W just like the Titan X. Which means its efficiency is essentially identical to that of the non-Ti 1080. So much for theory...

Since the 1070 uses the same GP104 die, it performs very similarly to the GTX 1080 in this project, costs much less, and uses slightly less power. The 1070 is the best card for this project in terms of both performance per watt and performance per dollar.
ID: 47110