Message boards : Graphics cards (GPUs) : Big Maxwell GM2*0
Retvari Zoltan | Joined: 20 Jan 09 | Posts: 2380 | Credit: 16,897,957,044 | RAC: 0
When will the GPUGrid app support Big Maxwell?
FYI, it's working now (on Windows 8.1). Here is a short workunit processed successfully on a Titan X. (Thanks to eXaPower for pointing it out.) I'm planning to sell my old cards (GTX670s and GTX680s), but as I will apparently lose my 2nd place on the overall toplist, I'd like to do it in a stylish manner. So I shall continue to have the fastest GPUGrid host on the planet, at least. ;)
Joined: 20 Jul 14 | Posts: 732 | Credit: 130,089,082 | RAC: 0
https://twitter.com/TEAM_CSF/status/580627791298822144 [CSF] Thomas H.V. Dupont, founder of the team CRUNCHERS SANS FRONTIERES 2.0, www.crunchersansfrontieres
Joined: 25 Sep 13 | Posts: 293 | Credit: 1,897,601,978 | RAC: 0
--- NOELIA_1mg --- (980) began work -- Titan X finished it. Host# 196801, resultid=14020924
Joined: 26 Jun 09 | Posts: 815 | Credit: 1,470,385,294 | RAC: 0
--- NOELIA_1mg --- (980) began work -- Titan X finished it. Host# 196801, resultid=14020924
And we see the Windows limitation of four again... only 4GB of that awesome 12 is recognized and used. I think I will wait for the "real" big Maxwell with its own CPU. At the moment, 1178 Euro for an EVGA one is too much for my wallet. Greetings from TJ
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
MJH wrote: I'd have thought 980s would be more cost effective?
Yep, and even more so the GTX970s (see SK's post). @TJ: don't worry about the memory, ~1 GB per GPUGrid task is still fine. And don't wait for any miraculous real big Maxwell. GM200 is about as big as they can go on 28 nm, and on the Titan X it's already fully enabled. I'd expect a cut-down version of GM200 at some point, but apart from that the next interesting chips from nVidia are very probably Pascals. And about this "integrating a CPU" talk: the rumor mill may have gotten Tegra K1 and X1 wrong. These are indeed Kepler and Maxwell combined with ARM CPUs, but as complete mobile SoCs. Anything else wouldn't make much sense in the consumer range (and AMD is not putting them under any pressure anyway), so if they experiment with a GPU plus a closely coupled CPU, I'd expect it to arrive together with NVLink and OpenPOWER servers for HPC, and priced accordingly. MrS
Scanning for our furry friends since Jan 2002
Joined: 26 Jun 09 | Posts: 815 | Credit: 1,470,385,294 | RAC: 0
Thanks for the explanation. Haha, I don't worry about the memory, but crunchers on Windows pay quite a lot for 8GB that cannot be used with this new card. It depends on my financial conditions, but I think I'll wait for a GTX980Ti, though not before fall, as summer turns too warm for 24/7 crunching (without AC). Greetings from TJ
Joined: 25 Sep 13 | Posts: 293 | Credit: 1,897,601,978 | RAC: 0
But as I will apparently lose my 2nd place on the overall toplist, I'd like to do it in a stylish manner. So I shall continue to have the fastest GPUGrid host on the planet at least. ;)
(ROBtheLionHeart's) overclocked 780Ti (XP) NOELIAs are a hair faster than your mighty 980 (XP). It's a rare sight not to see you (RZ) with the fastest times! The Performance Tab is an engaging comparative tool for crunchers, a new way to learn about work units and expected GPU performance. Current long-run-only Maxwell RAC per day (crunching 24/7), including OS factors:
skgiven's throughput and performance-per-Watt chart -- relative to a GK110 GTX Titan.
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Roughly 56% faster than a Titan, https://www.gpugrid.net/forum_thread.php?id=1150
What are the boost clocks? http://www.tomshardware.com/reviews/nvidia-geforce-gtx-titan-x-gm200-maxwell,4091-6.html suggests it boosts to 1190MHz but quickly drops to 1163MHz (during stress tests), and that it keeps dropping or increasing in steps of 13MHz, as expected. 1190 is ~15% shy of where I can get my GTX970 to boost, so I expect it will boost higher here (maybe ~1242 or 1255MHz). Apparently the back gets very hot. I've dealt with this in the past by blowing air directly onto the back of a GPU and by using a CPU water cooler (as that reduces radiated heat). FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
Joined: 25 Sep 13 | Posts: 293 | Credit: 1,897,601,978 | RAC: 0
Roughly 56% faster than a Titan. As you expected for GPUGRID ACEMD: an increase of ~60%.
Suggests it boosts to 1190MHz but quickly drops to 1163MHz (during stress tests), and that it keeps dropping or increasing in steps of 13MHz, as expected. 1190 is ~15% shy of where I can get my GTX970 to boost, so I expect it will boost higher here (maybe ~1242 or 1255MHz). Apparently the back gets very hot. I've dealt with this in the past by blowing air directly onto the back of a GPU and by using a CPU water cooler (as that reduces radiated heat).
12GB of memory gets really hot (over 100C) in prolonged use. A fan blowing on the back helps, but will a backplate lower temps at such density, or hinder by holding more heat against the memory chips on the back (opposite the heatsink)? A custom water-block backplate? (Full CPU/GPU water-cooling loop.) With only air cooling at 110C, downclocking the GDDR5 with a voltage drop could help. Longevity concern: can GDDR5 sustain 100C+ temps? I think the TechPowerUp website (Titan X review) shows the model number to reference. Is the GDDR5 rated at 70C/80C/90C? Out-of-spec temperatures will certainly impact long-term overclocking and boost-rate prospects.
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Cooling 12GB appears to be a big issue and it's likely hindering boost and power consumption, but without the 12GB it wouldn't be a Titan, seeing as it doesn't have good DP. Hopefully a GTX980Ti with 6GB will appear soon - perhaps 2688 CUDA cores, akin to the original Titan, but 35 to 40% faster here, and <£800, if not <$700, would make it an attractive alternative to the Titan X. For the Titan X, excellent system cooling would be essential for long-term crunching, given the high memory temps. I would go for direct air cooling on the back of one or possibly 2 cards, but if I had 3+ cards I would want a different setup. The DEVBOXs just use air cooling, with 2 large front case fans, but I suspect the memory still runs a bit hot. Just using liquid wouldn't be enough, unless it included back cooling. I guess a refrigerated thin-oil system would be ideal, but that would be DIY and Cha-Ching! In my experience high GDDR temps affect the performance of lesser cards too, and I found stability by cooling the backs of some cards. While tools such as MSI Afterburner allow you to cool the GPU via the fans, they don't even report the memory temps. It's often the case that the top GPU (closest to the CPU) in an air-cooled case runs hot. While this is partially from heat radiation, it's mostly because the airflow over the back of the card comes from the CPU, so it's already warm/hot. A basic CPU water cooler is sufficient to remove this issue, and at only about twice the cost of a good CPU heatsink and fan, it's a lot cheaper than a GPU water cooler. FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
Joined: 25 Sep 13 | Posts: 293 | Credit: 1,897,601,978 | RAC: 0
Many voltage controllers are dumb, with VID lines only (Titan X): no I2C support, and driver read-only rather than software-controllable. There are a few (Maxwell) PCBs with the ability to manually read temps or voltages on the back of the PCB: the 980 Strix and all gun-metal Zotacs. PNY also allows manual measurements on one of its dual-fan OC models with I2C support. All Asus Strix cards have advanced I2C support. A few Zotac models support I2C. Most others (MSI/EVGA/Gigabyte) don't. Advanced I2C support on a PCB is helpful. http://i2c.info/i2c-bus-specification
Joined: 25 Sep 13 | Posts: 293 | Credit: 1,897,601,978 | RAC: 0
https://www.gpugrid.net/forum_thread.php?id=3551
For the Titan X, excellent system cooling would be essential for long-term crunching, given the high memory temps. I would go for direct air cooling on the back of one or possibly 2 cards, but if I had 3+ cards I would want a different setup. The DEVBOXs just use air cooling, with 2 large front case fans, but I suspect the memory still runs a bit hot. Just using liquid wouldn't be enough, unless it included back cooling. I guess a refrigerated thin-oil system would be ideal, but that would be DIY and Cha-Ching! Cooling 12GB appears to be a big issue and it's likely hindering boost and power consumption, but without the 12GB it wouldn't be a Titan, seeing as it doesn't have good DP. Hopefully a GTX980Ti with 6GB will appear soon - perhaps 2688 CUDA cores, akin to the original Titan, but 35 to 40% faster here, and <£800, if not <$700, would make it an attractive alternative to the Titan X.
Recent reports point to a full GM200 980Ti with 6GB - June/July launch.
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
The easiest way to cool the memory on the back side would be small passive aluminum RAM coolers. They're obviously not as strong as full water cooling, but cost almost nothing and, given some airflow, could easily shave off 20-30°C. There's not enough space for this in tightly packed multi-GPU configurations, though. And as far as I know, the boost mode doesn't deal with memory at all; there's only a secondary interaction via the power draw. But the memory chips don't consume much power anyway, so there's going to be a negligible change with temperature. Regarding longevity: 100°C is hot for regular chips, but fine for e.g. voltage regulators. Not sure about memory chips. One would guess nVidia has thought this through, as they can't afford Titans failing left and right after a short time. Did they consider continuous load? I'm not sure, but I hope they expect people paying $1000 for a GPU to use it for more than the occasional game. MrS
Scanning for our furry friends since Jan 2002
Joined: 26 Jun 09 | Posts: 815 | Credit: 1,470,385,294 | RAC: 0
You would think so, but I have a colleague who has bought four (4) 30-inch screens only to play a flight simulator! So perhaps the Titan X builders aim mostly at gamers, who seem willing to invest heavily in hardware, and forget the 24/7 crunchers. They make Tesla boards for computation (hence no monitors can be attached) for dedicated crunching. Those cards are a lot more expensive, but are built for heavy use with the best possible parts. But I guess the Titan X is heavily tested, and some brands use the best parts in their cards too. I am a big fan of EVGA stuff, but you know that of course. Their FOCs are more expensive but have extra-high-grade components. Greetings from TJ
Retvari Zoltan | Joined: 20 Jan 09 | Posts: 2380 | Credit: 16,897,957,044 | RAC: 0
Cooling 12GB appears to be a big issue and it's likely hindering boost and power consumption, but without the 12GB it wouldn't be a Titan, seeing as it doesn't have good DP. Hopefully a GTX980Ti with 6GB will appear soon - perhaps 2688 CUDA cores, akin to the original Titan, but 35 to 40% faster here, and <£800, if not <$700, would make it an attractive alternative to the Titan X.
That is very good news indeed. Perhaps I'll wait for that card then. 6GB is ten times more than a GPUGrid task needs, so it's already overkill; 12GB would be a complete waste of money.
Retvari Zoltan | Joined: 20 Jan 09 | Posts: 2380 | Credit: 16,897,957,044 | RAC: 0
As more data aggregates, I get more confused.

| Task | GPU | OS | Runtime (s) | Notes |
|---|---|---|---|---|
| e1s6_3-GERARD_FXCXCL12_LIG_11631322-0-1-RND5032_0 | Titan X | Win 8.1 | 28,591 | |
| e2s13_e1s6f79-GERARD_FXCXCL12_LIG_10920801-0-1-RND3888_0 | GTX 980 | Win XP | 25,607 | |
| e1s16_7-GERARD_FXCXCL12_LIG_14907632-0-1-RND4211_0 | GTX 780 Ti | Win XP | 29,397 | |
| e11s54_e4s195f119-NOELIA_27x3-1-2-RND1255_0 | GTX 980 | Win XP | 11,936 | |
| e7s6_e4s62f37-NOELIA_27x3-1-2-RND3095_0 | GTX 780 Ti | Win XP | 12,019 | |
| e7s18_e4s62f162-NOELIA_27x3-1-2-RND1763_0 | GTX 980 | Win 8.1 | 14,931 | |
| e2s1_792f101-NOELIA_3mgx1-1-2-RND1677_0 | GTX 980 | Win 7 | 18,160 | 30.7% more time |
| e3s12_47f95-NOELIA_3mgx1-1-2-RND3015_0 | Titan X | Win 8.1 | 13,893 | 23.5% faster |
| e5s155_2x164f180-NOELIA_S1S4adapt4-2-4-RND5279_1 | GTX 780 Ti | Win XP | 12,530 | |
| e8s83_e1s20f75-NOELIA_S1S4adapt4-3-4-RND3849_0 | Titan X | Win 8.1 | 12,774 | |
| e7s13_e5s186f149-NOELIA_S1S4adapt4-3-4-RND3162_0 | GTX 980 | Win XP | 13,822 | |
| e4s4_e1s27f81-NOELIA_S1S4adapt4-2-4-RND4550_0 | GTX 980 | Win XP | 14,080 | |
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
There may be a pattern:
NOELIA_3mgx1: 75919 atoms, performs very well
NOELIA_S1S4adapt4: 60470 atoms, performs OK
GERARD_FXCXCL12: 31849 atoms, performs badly
The fewer atoms a WU contains, the less time each step takes, the more often CPU intervention is needed, and the more difficult it is to make good use of many shaders. Direct evidence for this is the low GPU usage of the GERARD WUs with "few" atoms. Maybe the CPU support of that Titan X is not configured as well as for the other GPUs? Maybe too many other CPU tasks are running, and/or he's not using swan_sync=0, in addition to the WDDM overhead. MrS
Scanning for our furry friends since Jan 2002
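The atoms-vs-throughput pattern can be checked numerically against the runtimes quoted earlier in this thread. A quick sketch (all figures are copied from the posts above, not measured; the Titan X ran on Win 8.1, so OS/WDDM differences also muddy the picture):

```python
# Titan X vs GTX 980 runtimes per WU family, from the table above.
# Fewer atoms -> the Titan X's extra shaders help less.
data = [
    # (family, atoms, titan_x_runtime_s, gtx_980_runtime_s, gtx_980_os)
    ("NOELIA_3mgx1",      75919, 13893, 18160, "Win 7"),
    ("NOELIA_S1S4adapt4", 60470, 12774, 13822, "Win XP"),
    ("GERARD_FXCXCL12",   31849, 28591, 25607, "Win XP"),
]

for family, atoms, tx, g980, os_name in data:
    speedup = (g980 - tx) / g980 * 100  # positive: Titan X faster
    print(f"{family:20s} {atoms:6d} atoms: Titan X {speedup:+6.1f}% vs GTX 980 ({os_name})")
```

The Titan X's advantage shrinks from about +23.5% on the largest WUs to an outright deficit on the small GERARD units, consistent with the shader-utilization argument above.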
Joined: 25 Sep 13 | Posts: 293 | Credit: 1,897,601,978 | RAC: 0
There may be a pattern:
Each ACEMD process uses 2% [4% total] of a hyper-thread; [2] physical cores run AVX DP at ~60% total CPU (the OS accounts for 1%). A GK107 exhibits this behavior: 98% core usage for NOELIA > GERARD at 94% (13773 atoms) > SDOERR_villinpubKc at 87%. More powerful GPUs are affected even more. Comparing the top two times for GERARD_FXCXCL12_LIG_11543841 (31843 Natoms): the Titan X (Win 8.1) time per step [1.616 ms] is 8.8% faster than the GTX 780 Ti (XP) [1.771 ms], so the GTX 780 Ti (XP) delivered 91% of the Titan X's (Win 8.1) output for this particular unit. Would a Titan X without the WDDM tax lower the time per step and total runtime by another 10%? The runtimes [28,216.69 / 30,980.94] show a 9% difference, Titan X (Win 8.1) vs 780 Ti (XP), so the Titan X is possibly ~20% faster on XP at similar power consumption. The Titan X's core clocks are unknown (overclocked?), further complicating the actual differences among these variables.
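The percentages above follow directly from the two quoted time-per-step figures. A minimal sketch, using only the numbers reported in this post:

```python
# Reproducing the percentages from the quoted time-per-step figures
# for GERARD_FXCXCL12_LIG (values copied from the post, not measured).
titan_x_ms = 1.616    # Titan X, Win 8.1
gtx_780ti_ms = 1.771  # GTX 780 Ti, Win XP

# Titan X advantage, relative to the 780 Ti's step time
faster_pct = (gtx_780ti_ms - titan_x_ms) / gtx_780ti_ms * 100
# 780 Ti throughput as a fraction of the Titan X's (steps/s ratio)
relative_output_pct = titan_x_ms / gtx_780ti_ms * 100

print(f"Titan X: {faster_pct:.1f}% faster per step")         # ~8.8%
print(f"GTX 780 Ti output: {relative_output_pct:.0f}%")      # ~91%
```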
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Over the last couple of years the performance variation between task types has increased significantly. This has made it awkward to compare cards of different generations, or even sub-generations. Task performances vary significantly because different tasks challenge the GPU in different ways, even exposing the limitations of more subtle design differences. While DooKey's 5820K system (with the Titan X) may be using lots of CPU, it's a 6-core/12-thread system, and while he has a high RAC at Universe@home (a CPU project), he does not have that system hooked up there: http://boincstats.com/signature/-1/user/21224/sig.png http://boincstats.com/en/stats/-1/user/detail/21224/projectList http://universeathome.pl/universe/hosts_user.php?userid=2059 The CPU could be dropping into a lower power state, or he could be gaming or playing with the setup/config: 3855MHz GDDR seems like a heavy OC for 3500MHz/7GHz memory. While SKHynix have 4GB memory chips rated at 8GHz, I don't think the Titan X uses them? Several errors here, https://www.gpugrid.net/results.php?hostid=196801 and looking at some of the tasks, there are a lot of recoveries before the failures, https://www.gpugrid.net/result.php?resultid=14040943 The WDDM overhead could well be greater with the bigger cards, and it might not be constant; it could vary by task type (atom count/CPU demand). Last time we looked there did seem to be some variation with GPU performance. A significant overall performance gain might come from running more than one WU, as with the GTX970 and GTX980 on W7, Vista, W8 & W10 preview (WDDM 2.0). Perhaps 3 WUs on a Titan X would be optimal? FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
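Running several WUs per GPU is done in BOINC via an app_config.xml in the project directory. A hedged sketch for three concurrent tasks per GPU; the app name below (acemdlong) is an assumption and must match whatever name appears in your client_state.xml:

```xml
<!-- app_config.xml: place in the GPUGrid project folder
     (e.g. projects/www.gpugrid.net/), then use BOINC's
     "Read config files" or restart the client.
     App name is an assumption; check client_state.xml. -->
<app_config>
  <app>
    <name>acemdlong</name>
    <gpu_versions>
      <gpu_usage>0.33</gpu_usage>  <!-- 3 tasks share one GPU -->
      <cpu_usage>1.0</cpu_usage>   <!-- reserve a full CPU thread per task -->
    </gpu_versions>
  </app>
</app_config>
```

Whether 2 or 3 concurrent WUs is optimal would need testing per card and OS; the WDDM overhead discussion above suggests the gain is largest on Vista/W7/W8.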
Joined: 25 Sep 13 | Posts: 293 | Credit: 1,897,601,978 | RAC: 0
GERARD_FXCXCL12 update: Zoltan's 980 reclaimed the fastest runtime [25,606.92 s] and time per step [1.462 ms]. RZ's 980 is 9% faster than DooKey's Titan X [1.616 ms] and 11-19% better than the GTX 780 Ti [1.771 ms].
©2025 Universitat Pompeu Fabra