Big Maxwell GM2*0

skgiven (Volunteer moderator, Volunteer tester)
Message 39520 - Posted: 18 Jan 2015, 12:40:33 UTC - in response to Message 39500.  
Last modified: 18 Jan 2015, 12:46:18 UTC

CC5.5 would make sense.

To equal Kepler for DP performance, a 3072-shader GM200 card would need 1/4 DP-capable shaders, and even then it would likely have a ~250 W TDP. There would be no purpose in doing that (possibly 1/3, but I doubt it). So I agree that GM200 is probably not going to be a high-end DP card to replace the GK Titans; it is going to be a lot more like a GM204 than you would expect from a big version of Maxwell.
I'm just expecting a slightly more refined architecture tweaked to use up to 12 GB of GDDR5 and not much else, unless it adds DirectX 12.1, OpenGL 4.6, or some updated port standard.
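As a rough illustration of that ratio argument, here is a back-of-the-envelope sketch (my own, assuming the usual 2 FLOPs per FP64 unit per clock for FMA and approximate reference clocks; these are not official figures):

```python
# Back-of-the-envelope DP throughput: 2 FLOPs (FMA) per FP64 unit per clock.
# Clocks are approximate reference values; illustration only.
def dp_tflops(sp_cores, dp_ratio, clock_ghz):
    dp_units = sp_cores * dp_ratio            # number of FP64 units
    return 2 * dp_units * clock_ghz / 1000.0  # TFLOPS

print(dp_tflops(2688, 1/3, 0.837))   # GTX Titan (GK110)       ~1.5 TFLOPS
print(dp_tflops(2880, 1/3, 0.889))   # Titan Black (GK110)     ~1.7 TFLOPS
print(dp_tflops(3072, 1/4, 1.002))   # hypothetical 1/4 GM200  ~1.5 TFLOPS
print(dp_tflops(3072, 1/32, 1.002))  # standard 1/32 Maxwell   ~0.19 TFLOPS
```

So a 1/4-rate GM200 would only roughly match the existing GK Titans, which is why there seems to be little point in building it that way.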

So 50% bigger than a GTX 980, a 384-bit bus, 12 GB of GDDR5 (and likely a 980 Ti version with 6 GB) and a fat price tag.

NV could still launch a 990 and another dual Titan at a later date; the performance tiers would be well spaced out.

According to NV, the successor to Maxwell will be Pascal (which took Volta's place on the roadmap) and is still due in 2016, so I think GM200 is 28 nm, and a 16 nm Pascal is more likely than what would by then be a fourth-generation Maxwell. 1/2 to 1/4 DP might reappear at 16 nm.

Off topic: Pascal is supposed to introduce 3D memory, unified memory (the CPU can access it) and NVLink for faster GPU-to-CPU and GPU-to-GPU communication. These abilities will widen possible research boundaries, making large multi-protein complex modelling (and similar) more accurate and whole-organelle modelling possible.
FAQs

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ID: 39520

RaymondFO*
Message 39521 - Posted: 18 Jan 2015, 15:16:37 UTC

EVGA has this "Kingpin" (GTX 980 classified version) that has three (3) power inputs (8pin + 8pin + 6pin) available for pre-order starting 01 Feb 2015 for existing EVGA customers. To qualify as an existing EVGA customer, you must have already registered at least one (1) or more EVGA products on their web site.

http://www.evga.com/articles/00896/EVGA-GeForce-GTX-980-KINGPIN-Classified-Coming-Soon/
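For context, a rough power-budget sketch using the standard PCIe limits (75 W from the slot, 75 W per 6-pin, 150 W per 8-pin); this is my own illustration, not EVGA's specification:

```python
# Maximum board power nominally available through the connectors,
# using standard PCIe power limits (rough sketch, not an EVGA spec).
PCIE_SLOT = 75   # W from the motherboard slot
SIX_PIN   = 75   # W per 6-pin connector
EIGHT_PIN = 150  # W per 8-pin connector

kingpin = PCIE_SLOT + 2 * EIGHT_PIN + SIX_PIN  # 8+8+6-pin layout -> 450 W
reference_gtx980 = PCIE_SLOT + 2 * SIX_PIN     # reference 2x 6-pin -> 225 W
print(kingpin, reference_gtx980)
```

That headroom is far beyond the reference GTX 980's 165 W TDP, which is why the card is aimed squarely at extreme overclocking.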
ID: 39521

ExtraTerrestrial Apes (Volunteer moderator, Volunteer tester)
Message 39544 - Posted: 20 Jan 2015, 22:00:00 UTC - in response to Message 39521.  

That Kingpin is not big Maxwell. And pretty useless, unless you want to chase world records with deep sub-zero temperatures ;)

MrS
Scanning for our furry friends since Jan 2002
ID: 39544

eXaPower
Message 40152 - Posted: 12 Feb 2015, 2:08:13 UTC - in response to Message 39385.  

3DCenter also commented on "Big Maxwell has no DP units". They've got 2 rather convincing points:

- the statement comes straight from nVidia, aimed at professionals
- 3072 Maxwell Shaders with additional DP units may simply have been too large for 28 nm, where ~600 mm² is the feasible maximum

To this I'll add:
- the Maxwell design does not include the "combined" SP/DP shaders I mentioned above, so nVidia is not using them because it simply doesn't have them
- Maxwell was planned for 20 nm some 2+ years ago; once it was clear the chips had to be built at 28 nm, there was not enough time for such a redesign
- nVidia won't want the shader blocks to differ throughout its chips; the more they can recycle outright, the easier (also for software optimization)

And previously I wrote:
don't forget the higher efficiency of Maxwell per shader - some of this will also apply to DP once there are a lot of DP-capable shaders.

I still stand by this statement. However, most of Maxwell's increased efficiency per shader comes from the fact that its shaders no longer sit unused much of the time the way Kepler's super-scalar shaders do. But in DP there are fewer shaders anyway, so Kepler pays no extra tax for unused super-scalar units. Maxwell couldn't improve on this, and the remaining benefits, like better scheduling, were probably not worth the cost to nVidia.

By that I mean the cost of outfitting GM210 with enough DP units to make it faster than GK210. This would probably have made the chip too large at 24 SMMs, which means they would have had to reduce the number of SMMs and sacrifice SP/gaming performance.

MrS

ETA: excellent post. I've been following this ever-evolving rumor more closely, and surprisingly 3DCenter has again "confirmed" that double-precision compute is severely limited compared to a DP-enabled GK110.

It's possible the DP rumor is purposeful misinformation spread by an insider paid to create uncertainty; deceptive tactics are nothing new in this industry. I'm patiently awaiting trusted compute benchmarks to PROVE weak DP performance. There will be a pile of intriguing click-bait rumors before the launch of AMD's or NVidia's 28 nm "flagship".

The hoopla about the AMD 300 series' stacked HBM and 4096-bit bus undercuts NVidia's secretive approach. A continuously defensive posture about products allows more rumors to crop up, even with a rising market share. (Never mind the GTX 970 memory-segmentation issue topping out at a 5% return rate.) Will NVidia quell GM200 rumors or wait for real-world performance results? The professional market is a different domain than gaming.

Even if Maxwell DP matched Kepler, CC 3.5/3.7 will reign over HPC until 16 nm Pascal with its possible ARM cores. Which engineering or science sector will upgrade from GK110 to GM200? A handful? An upgrade from Fermi to Maxwell is reasonable - unless, of course, the DP is awful. At this point waiting for Pascal seems logical.

CC 3.5's well-engineered structure is getting long in the tooth (higher clocks and 32-core sub-blocks are among the reasons Maxwell is faster for certain code paths). CC 3.0 is still a decent float performer, while CC 5.0/5.2 has an edge for integer workloads.

The GPU Technology Conference in March is supposedly where the Quadro/Titan 2/980 Ti/990 will be announced.

If GM200's limited FP64 capability is confirmed, what is NVidia thinking (besides profit) in building a supposedly professional-market GPU - where DP CUDA accounts for over 60% - whose major compute component is suddenly missing or underwhelming?

As skgiven mentioned: what's the purpose of GM200 if DP is shunned? To be a 225 W+ TDP FP32/INT32/gaming GPU that's 20% faster than a GTX 780 Ti or 980? A GeForce with weak DP is understandable - each successive GeForce generation has lessened DP performance - but it seems backwards to revise the Tesla/Quadro/Titan brands by offering fewer compute options than prior generations. Without an impressive 1/2 ratio, the advantage for DP compute workloads is minimal at 1/4 and inherently less with Maxwell's 1/32 ratio. GK110 is plenty capable even if a 1/4-rate GM200 supplied similar DP FLOPS. GK110's features became mainstream in all Maxwells - hardly an upgrade.

Maxwell adds a few gaming enhancements and HEVC advances: GM206 is the first GPU to pair HDMI 2.0 with full H.265 (HEVC) decoding. Kepler's video SIP block is the first generation, GM107 the second revision, GM204 the third and GM206 the fourth. GM200's SIP block will most likely be similar to GM206's.
ID: 40152

ExtraTerrestrial Apes (Volunteer moderator, Volunteer tester)
Message 40216 - Posted: 19 Feb 2015, 20:01:42 UTC - in response to Message 40152.  

Thanks, eXa. Regarding the question "Why GM2*0 without DP?", I think it's actually rather simple:

Build a fat gaming GPU. GK110 Titan sold well, and so will this one.

Quadros are mainly used for graphics, so they can use such a big Maxwell just fine.

Only offer it on Teslas like the K10 with GK104 GPUs, which are explicitly sold for their SP performance. The remaining market can stick to GK210 based cards. This doesn't cover the entire market, but probably a good share of it.

And finally there's the rising market of GPU virtualization, where one may benefit from better load balancing using fewer fat GPUs.

MrS
Scanning for our furry friends since Jan 2002
ID: 40216

eXaPower
Message 40353 - Posted: 4 Mar 2015, 22:43:53 UTC

http://anandtech.com/show/9049/nvidia-announces-geforce-gtx-titan-x

No confirmation yet regarding double-precision performance. More information will be released at the NVIDIA GPU Technology Conference (a possible launch) in a couple of weeks.

ID: 40353

skgiven (Volunteer moderator, Volunteer tester)
Message 40404 - Posted: 9 Mar 2015, 14:31:02 UTC - in response to Message 40353.  
Last modified: 17 Mar 2015, 16:39:16 UTC

Titan X will have 8 billion transistors, 3,072 CUDA Cores, 12 GB GDDR5 memory and will be based on Maxwell.
As the GTX 980 has 5.2 billion transistors, the Titan X will likely be about 8/5.2 ≈ 1.5 times faster than the GTX 980, not taking into account any architectural changes that might confer better performance (or otherwise).

The suggestion is that it will have a 384-bit memory bus, 50% wider than the GTX 980's, which would sit well with the ~50% CUDA core increase.
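As a quick sanity check, the three scaling ratios quoted above all land around the same ~1.5x factor (a sketch of my own using the quoted spec numbers):

```python
# Titan X vs GTX 980 scaling ratios from the quoted specs (sketch only).
ratios = {
    "transistors": 8.0 / 5.2,    # 8B vs 5.2B  -> ~1.54x
    "cuda_cores":  3072 / 2048,  #             -> 1.50x
    "bus_width":   384 / 256,    # bits        -> 1.50x
}
for name, value in ratios.items():
    print(f"{name}: {value:.2f}x")
```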

Power draw is likely to be similar to the Titan Black's 225-300 watts:
http://www.tomshardware.com/news/nvidia-geforce-gtx-titan-x,28694.html

It has been branded as a gaming card, so I don't expect the Titan X to reintroduce genuine high-performance double precision (1/2 rate) at the expense of SP. That might come in the form of a new Tesla some way down the road, and there is already a K40 with 2880 CUDA cores and 1/3-rate DP performance, albeit Kepler-based.
FAQs

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ID: 40404

TJ
Message 40410 - Posted: 9 Mar 2015, 19:36:44 UTC

I thought Big Maxwell was supposed to be really fast and would get one or more ARM chips of its own, so that it would need less interaction with the CPU.

But if it is only 1.5 times faster than a GTX 980, then I will wait until 2016 to buy new GPUs.
Greetings from TJ
ID: 40410

eXaPower
Message 40416 - Posted: 10 Mar 2015, 15:09:28 UTC
Last modified: 10 Mar 2015, 15:19:41 UTC

http://videocardz.com/55013/nvidia-geforce-gtx-titan-x-3dmark-performance

No SP/DP/integer compute benchmarks as of yet. The performance comparison cited is the 3DMark Fire Strike (Extreme) benchmark: an overclocked (1200 MHz) Titan X is nearly on par with the 5760-CUDA-core Titan Z and the 5632-shader GCN AMD 295X2. Compared to an overclocked GTX 980, the Fire Strike score is +20%.

The reported base clock is 1002 MHz, which translates into roughly 6.2 TFLOPS for 32-bit - about 1.5 TFLOPS more than a 1126 MHz reference GTX 980.
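Those figures follow from the usual peak-FP32 formula of 2 FLOPs (one fused multiply-add) per CUDA core per clock; a quick sketch of my own using the quoted clocks (base/reference values, approximate only):

```python
# Peak FP32 throughput: 2 FLOPs (FMA) per CUDA core per clock (sketch only).
def sp_tflops(cores, clock_mhz):
    return 2 * cores * clock_mhz / 1e6

print(sp_tflops(3072, 1002))  # Titan X @ 1002 MHz base       -> ~6.2 TFLOPS
print(sp_tflops(2048, 1126))  # GTX 980 @ 1126 MHz reference  -> ~4.6 TFLOPS
print(sp_tflops(3072, 1200))  # Titan X overclocked to 1200   -> ~7.4 TFLOPS
```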
ID: 40416

skgiven (Volunteer moderator, Volunteer tester)
Message 40426 - Posted: 11 Mar 2015, 16:17:38 UTC - in response to Message 40416.  
Last modified: 11 Mar 2015, 16:23:06 UTC

Based on those specs the Titan X should be at least 20% faster than a Titan Black which in turn would mean it's at least 25% faster than a GTX980.
However, it's much more likely that it will boost higher than the Titan Black and that GM200 will offer some new performance gain.
Even a conservative 6% boost & 6% design gain would mean the Titan X will be 40% faster than a GTX980.
Raise those anticipated gains to just 9% each and the performance would be ~50% better than a GTX980. That's more realistic and I wouldn't dismiss a 60% improvement.
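The compounding behind those estimates, as a quick arithmetic sketch of the figures above:

```python
# Compounding a >=25% hardware advantage with assumed boost and design gains
# (sketch of the estimates above; the gain percentages are assumptions).
base = 1.25                # at least 25% faster than a GTX 980 on paper
print(base * 1.06 * 1.06)  # conservative 6% + 6% -> ~1.40 (about 40% faster)
print(base * 1.09 * 1.09)  # 9% + 9%              -> ~1.49 (about 50% faster)
```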

In terms of performance/Watt it's a clear winner over the Titan Black. Basically >25% extra performance for the same power draw.

I don't expect to see much performance/Watt gain compared to existing Maxwells.
12 GB of GDDR5 is a right lump and much more than what's needed for here. It might, however, allow 2 or possibly 3 tasks to run simultaneously.

At $999 I don't think there will be many takers, especially when two GTX 970s will likely more than match its performance while costing significantly less ($309 each), and that's today's prices against an as-yet-unreleased GPU.

Hopefully the release of the Titan X will drive down the prices of the GTX980 and GTX970.

Obviously if you want to have one super-system then these will likely be the cards to buy, but you would be looking at a 1200W to 1500W PSU and high-end hardware throughout.
FAQs

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ID: 40426

eXaPower
Message 40430 - Posted: 11 Mar 2015, 18:48:01 UTC - in response to Message 40426.  

Hopefully the release of the Titan X will drive down the prices of the GTX980 and GTX970.

GM204 prices have fluctuated from below MSRP to well above it, and the cards have sold steadily since launch. AMD will be releasing new high-performance GPUs within a few months. If AMD's upcoming cards perform better than NVidia's, prices should move down even further.

In terms of performance/Watt it's a clear winner over the Titan Black. Basically >25% extra performance for the same power draw.

The clock difference between the reference Kepler and Maxwell Titans (~166 MHz: 836 vs 1002 MHz) alone accounts for about +1 TFLOPS of 32-bit throughput.
~5 TFLOPS 32-bit for a reference 2688/2880-core GK110.
~6.1 TFLOPS 32-bit for the 3072-core GM200 at its 1002 MHz base clock.
~6 TFLOPS 32-bit is possible on an overclocked GK110 Titan.
~7 TFLOPS for a Titan X at a 1200 MHz+ clock.


For 32-bit, the Titan X has an advantage. If GM200's SP/DP ratio is the standard Maxwell arrangement of 1 DP core in every 32-core subset (4 DP per 128-core SMM), then the overall DP performance/watt picture is skewed. I would compare the Titan X's overall compute capabilities to a GTX 780 Ti (120 DP cores) rather than to the 896- or 960-DP-core enabled GK110 Titans. Will GM200 be 64-bit worthy?
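Counting FP64 units per block makes that comparison concrete (my own sketch; the per-block counts follow the standard Kepler and Maxwell layouts):

```python
# FP64 unit counts per block (sketch): full-rate GK110 has 64 DP units per
# 192-core SMX (1/3 rate), DP-capped GeForce GK110 runs at 1/24 (effectively
# 8 units per SMX), and Maxwell has 4 DP units per 128-core SMM (1/32 rate).
titan_black = 15 * 64  # 960 DP units (DP-enabled GK110)
titan       = 14 * 64  # 896 DP units
gtx_780ti   = 15 * 8   # 120 DP units (1/24-capped GeForce GK110)
titan_x     = 24 * 4   #  96 DP units (GM200: 24 SMMs)
print(titan_black, titan, gtx_780ti, titan_x)
```

At the quoted base clocks that puts GM200's peak DP around 0.19 TFLOPS, versus roughly 1.5-1.7 TFLOPS for the DP-enabled GK110 Titans.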

GM200's 64-bit core/memory structure (and performance) is unknown; a whitepaper analysis has yet to be released. The 64-bit CC 5.0/5.2 of Maxwell GM107/204/206 lacks the faster 64-bit shared-memory warp/thread pipeline of Kepler CC 3.0/3.5, and GK210's CC 3.7 was upped to a 128-bit data path. See the CUDA performance guide.

Combining 32-bit and 64-bit: with fewer total DP cores, GM204/206/107 (CC 5.0/5.2) execute less and lag behind the 64-bit instruction output of CC 3.0/3.5/3.7. GK110's lower clocks when the FP64 driver setting is enabled point to the energy-management requirements of the Titan's 896/960 64-bit cores; 32-bit cores operate at lower wattage, and silicon with fewer 64-bit cores draws less circuit power. So far Maxwell trades fewer DP cores for higher 32-bit core clocks. The power draw of a 64-DP-unit SMX shifts, so it is hard to pin down how much energy the cores are fed while computing DP.

GM200's 96 raster operation units (ROPs) double Kepler's 48 for game graphics, and its 24 SMM PolyMorph engines are faster. GM200's 192 revised texture mapping units match the GTX 780 - 48 fewer than the 15-SMX Titan Black and 32 fewer than the 14-SMX Titan.

ID: 40430

eXaPower
Message 40490 - Posted: 17 Mar 2015, 15:37:39 UTC

GM200 officially launches today. GTC starts at 9 am PST / 12 pm EST.

For those interested:
https://registration.gputechconf.com/form/session-listing

The reference GM200 Titan PCB (NVTTM) is similar to the GTX 690/770/780/780 Ti/Titan/Titan Black/Titan Z boards: an 8-power-phase (6+2) design with 6 MOSFETs and an OnSemi NCP4206 voltage controller. The layout is slightly different from GK110 or GK104 boards.

GM204/206 overclock really well; GM200 should be capable of 1400-1500 MHz.
ID: 40490

skgiven (Volunteer moderator, Volunteer tester)
Message 40491 - Posted: 17 Mar 2015, 16:13:54 UTC - in response to Message 40490.  
Last modified: 17 Mar 2015, 16:40:17 UTC

You can watch the presentation live now,

http://blogs.nvidia.com/blog/2015/03/16/live-gtc/

So $999 and DP cut down to 0.2 TFLOPS - which will probably make it better for here.
FAQs

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ID: 40491

TJ
Message 40494 - Posted: 17 Mar 2015, 17:08:44 UTC

Mmm, not a bad price, and it looks more promising than I thought. I will start saving money so I can buy one in the fall.
Greetings from TJ
ID: 40494

eXaPower
Message 40499 - Posted: 17 Mar 2015, 20:40:14 UTC - in response to Message 40491.  
Last modified: 17 Mar 2015, 21:27:48 UTC

DP cut down to 0.2 TFLOPS

Very disappointing DP performance for a flagship. Hat tip to 3DCenter in Germany for correctly predicting the horrid DP.

GK110/GK210 (CC 3.5/3.7) is still NVidia's HPC compute flagship. GM200 is a compute downgrade when factoring in all aspects of what a GPU can do. For anyone who writes anything other than 32-bit code, GM200 is not the GPU to buy.

GM200 at $1000 with Intel Broadwell-level DP performance is outrageous. NVidia could have used an Ultra/985/1*** series moniker instead of Titan and priced it at $649. The Titan moniker is now tainted. A future Ti version of GM200 at a reasonable $649, with similar performance to the GM200 "Titan", would be the replacement for the overall 32-bit compute capabilities of the GTX 780 Ti, which launched at $699.

Marketing GM200 as the Titan replacement is nonsense given the loss of GK110's DP compute features: more gaming revisions, fewer compute options. Many practical long-term GPU uses exist alongside games. (A GM200 Ti release will come sometime in June, when AMD shows off its potent GPUs.) NVidia will also release cut-down GM dies near AMD's launch.

Maxwell's overall DP has been exposed as weak: 32 32-bit shared-memory banks rather than the 32 64-bit banks of Kepler. The GTX 480/580/780 all have DP performance similar to this Maxwell "flagship".

For here, GM200 (essentially a larger GM204 with the same compute feature set) means FP32 ACEMD performance will be outstanding.

Whoever owns a DP-enabled GK110 Titan compute card now has a GPU with long-standing value that will hold up until 16 nm Pascal or beyond.
If AMD's 4096-core / 64-CU 390X flagship offers Hawaii's 1/8 ratio (512 DP units, 8 DP per 64-core CU), AMD's revised cores will fall only a little short of GK110's complete CC 3.5 compute architecture. Tahiti (2048 cores / 32 CUs) with its 1/4 ratio and 512 DP cores (16 DP per 64-core CU) has already given NVidia trouble in some DP markets. Erosion of CUDA's DP market share by OpenACC/OpenMP is imminent if Pascal DP is slow for a "flagship" GPU; a decline of CUDA DP would be a foregone conclusion.

(Doubling the number of ROPs.) For graphics this is an upgrade, thanks to Maxwell's filtering and display advances, although for Vulkan (OpenGL) and DirectX, Kepler's unified shaders (vertex/pixel/tessellation/geometry) work with the same feature levels as Maxwell's. GM200's revised, higher-clocked texture mapping units (8 TMUs per 128-core SMM) are faster than Kepler's: 192 TMUs for GM200's 3072 cores, while GK110 (16 TMUs per 192-core SMX) ranges from 192 TMUs at 2304 cores to 224 at 2688 and 240 at 2880.

http://anandtech.com/show/9059/the-nvidia-geforce-gtx-titan-x-review/15

http://www.guru3d.com/articles_pages/nvidia_geforce_gtx_titan_x_review,10.html

http://www.tomshardware.com/reviews/nvidia-geforce-gtx-titan-x-gm200-maxwell,4091-6.html

The densely packed 12 GB of GDDR5 heats up: memory temperatures run higher than the core or VRM temperatures.
ID: 40499

ExtraTerrestrial Apes (Volunteer moderator, Volunteer tester)
Message 40581 - Posted: 22 Mar 2015, 17:03:16 UTC - in response to Message 40494.  
Last modified: 22 Mar 2015, 17:20:38 UTC

Mmm not a bad price and it looks more promising then I thought. I will start saving money so I can buy one in fall.

TJ, this card is 50% more hardware than a GTX 980 for about double the price. That's a rather bad value proposition! While I generally support the argument of "fewer but faster GPUs" for GPU-Grid, this is too extreme for my taste. Better to wait for AMD's next flagship, which should arrive within the next 3 months. If it's as good as expected and priced around $700, nVidia might offer a GTX 980 Ti based on a slightly cut-down GM200 at a more sane ~$700. Like the original Titan and the GTX 780 Ti - the latter came later but with far better value (for SP tasks).

@eXa: don't equate "compute" with DP. Of course GM200 is weak if you need serious DP - but that's no secret. NVidia knows this, of course, and has made GK210 for those people (until Pascal arrives). For any other workload GM200 is a monster and mostly a huge improvement over GK110. There are a lot of such computing workloads. Actually, even if you have a DP capable card you'd still be better off if you can use it in SP mode.

Whether this thing is worthy of the "Titan" name is really irrelevant from my point of view. Few people bought the original Titan for its DP capabilities. Fine - they'll know enough to stick to the Titan and Titan Black until Pascal. But most are just using them for games and benchmarks. They may want DP, but they don't actually need it.

GM200 at $1000 with Intel Broadwell-level DP performance is outrageous.

Only if you measure it by your expectations, formed by the 1st generation of Titans.

And don't forget: including DP units in GM200 would have made it far too big (and made it lose further clock speed and yield, if it was possible at all). Or, at the same size but with DP, the chip would have had significantly fewer SP shaders. The difference to the GTX 980 would have been too small (such a chip would still lose clock speed compared to smaller Maxwells) and hence a tougher sell for any market that the current GM200 appeals to.

I admit I didn't believe the initial rumor of GM200 without DP. But now it does make a lot of sense, especially since most Maxwell improvements (keeping the SP shaders busy) wouldn't help in DP, because Kepler was never limited there in the same way it is in SP.

MrS
Scanning for our furry friends since Jan 2002
ID: 40581

Retvari Zoltan
Message 40592 - Posted: 23 Mar 2015, 13:27:59 UTC

When will the GPUGrid app support BigMaxwell?
Does anyone have such a card to test the current client?
I'm planning to sell my old cards (GTX670s and GTX680s), and buy one BigMaxwell, as I want to reduce the heat (and the electricity bills) generated by my cards for the summer :)
I didn't buy more GTX980s because of this card (the Titan X), and now I don't plan to buy more than one until its price drops a little.
ID: 40592

skgiven (Volunteer moderator, Volunteer tester)
Message 40593 - Posted: 23 Mar 2015, 20:57:31 UTC - in response to Message 40592.  
Last modified: 23 Mar 2015, 21:47:20 UTC

I guess nobody has a GTX Titan X attached and working yet, otherwise it would appear in the Performance tab.

While a Titan X would do as much work as two GTX 780s using ~64% of the power, it is a very expensive unit:

In the UK you can get a Titan X for ~£900, a GTX 980 for ~£400, or a GTX 970 for ~£260.

So, you could buy 2 GTX 970s, do ~24% more work and save £380,
or buy 2 GTX 980s, save £100 and do ~33% more work,
or you could get 3 GTX 970s, save £120 and do ~70% more work.

In theory, performance/Watt is about the same, and can be tweaked a lot by tuning. So you could reduce the voltage of each 970 to use 20W less power and still match the Titan X for performance.
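Putting the UK prices and the rough work estimates above together, with the Titan X as the 1.0x baseline (a sketch using only the figures from this post):

```python
# Cost vs estimated relative GPUGrid throughput, using the rough UK prices
# and relative-work figures quoted above (Titan X = 1.0x; sketch only).
options = {
    "1x Titan X": (900, 1.00),
    "2x GTX 970": (2 * 260, 1.24),
    "2x GTX 980": (2 * 400, 1.33),
    "3x GTX 970": (3 * 260, 1.70),
}
for name, (price_gbp, rel_work) in options.items():
    print(f"{name}: GBP {price_gbp}, {rel_work:.2f}x work, "
          f"{rel_work / price_gbp * 1000:.2f}x work per GBP 1000")
```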

As the Titan X is a flagship GPU it costs a lot, but it has already driven down the price of the GTX980.

It used to be the case that performance/Watt scaled with die size. While I'm not sure that's still true, the Titan X is carrying 12 GB of GDDR5; if the memory had scaled relative to the GTX 980's it would only have been 6 GB.
FAQs

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ID: 40593

eXaPower
Message 40603 - Posted: 24 Mar 2015, 16:58:30 UTC - in response to Message 40592.  

Does anyone have such a card to test the current client?

Performance Tab: TitanX -- NOELIA_1mgx (short)

ID: 40603

MJH
Message 40604 - Posted: 24 Mar 2015, 17:47:28 UTC - in response to Message 40592.  

When will the GPUGrid app support BigMaxwell?


Soon, but not imminently. AFAIK, no one's attached one yet.
(It's working here in the lab).


I'm planning to sell my old cards (GTX670s and GTX680s),


High roller! I'd have thought 980s would be more cost-effective?
ID: 40604