Message boards :
Graphics cards (GPUs) :
GeForce GTX Titan launching the 18th
| Author | Message |
|---|---|
skgiven | Joined: 23 Apr 09 Posts: 3968 Credit: 1,995,359,260 RAC: 0 |
What will be better...2xGTX690 (4 GPUs total) or 2xGTX Titan? Two Titans would be more expensive and do less work (in theory). The benefit of the Titan is its reference blower cooler, which exhausts air out of the back of the case, compared to the GTX690's centre-fan design, which blows air out of the back but also into the case. A single GTX690 is fine, but multiple GTX690s would require a very large side panel intake fan and exhaust from both the rear and front of the ATX case. This is probably doable for two GTX690 cards, but for 3 or 4 cards it's more difficult. Also, if you intend to use Linux you could in theory have 8 GPUs (four GTX690 cards), but in Windows I don't think you can. Basically, multiple GPUs don't scale well. Adding more GPUs requires greater skill, and isn't something for the amateur cruncher. Anyone seriously considering 3 or more GTX690s should primarily be thinking about cooling. Something like this case (with 4 side panel fans) could be used to blow cool air onto the GPUs, which could then be drawn out from the rear and front. I'm sure this sort of case could handle two GTX690s, but I would want to know the temperatures before adding a third, and would only add them one by one. I also like the one very large side panel fan on this case, but not the drive bays. This open frame case looks very interesting, if you don't mind dusting. FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help |
|
Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0 |
What will be better...2xGTX690 (4 GPUs total) or 2xGTX Titan? From the performance-per-price point of view the GTX690 is better than a Titan, as BOINC loads scale pretty much perfectly with GPU count (except POEM). However, if you're running into limits with the number of GPUs, the Titans might be a better option. Not for 4 vs. 2 chips, though. And one more point for the Titan: it will be able to crunch GPU-Grid tasks within the bonus time for longer. Whether the power consumption will still justify running the card when it's in danger of becoming too slow for the bonus.. who knows! MrS Scanning for our furry friends since Jan 2002 |
|
Joined: 16 Mar 11 Posts: 509 Credit: 179,005,236 RAC: 0
|
What will be better...2xGTX690 (4 GPUs total) or 2xGTX Titan? I was wondering the same thing, and it's confusing for newbies like me when there are statements that refer to Titan's massive compute power as well as statements like "nVIDIA finally got it right with Titan". If I understand correctly, Titan has greater double-precision FP ability than a 690, and if that's true then one certainly can and perhaps should say things like "massive compute power" and "finally got it right". But when you consider that GPUgrid doesn't need double-precision FP, Titan's advantage doesn't mean much if all it's gonna do is crunch GPUgrid tasks. As for exhaust problems on the 690... piece o' cake. I'm going to put four 690s on one mobo if Linux, drivers and BOINC will permit, and show y'all how it's done, no fancy case required. Picture 4 GTX690s, all the same model, nicely lined up in a row, 4 exhaust ports perfectly lined up one above the other, a manifold made from a $6 heat vent boot (unless I can find one in a scrap pile somewhere first) that fits nicely over all four exhaust ports and transitions into 1 collector connected to a suction fan. Once done it's gonna have the highest RAC here for a looooong time. In fact maybe I won't show y'all, because then you'll build one too. BOINC <<--- credit whores, pedants, alien hunters |
skgiven | Joined: 23 Apr 09 Posts: 3968 Credit: 1,995,359,260 RAC: 0 |
And one more point for the Titan: it will be able to crunch GPU-Grid tasks within the bonus time for longer. If the power consumption still allows to run the card when it's in danger of becoming too slow for the bonus.. who knows! The GeForce GTX 285 was released just over 4 years ago. Until the 3.1 app was deprecated it could still manage to return tasks for the full bonus. Now, using the slower 4.2 app (only slower for the 200 series cards), it would depend on the WU. I expect Toni's tasks to return in time, but NOELIA's Long tasks might not. They would still get the 25% bonus though... So probably good for ~4 years. If I understand correctly, Titan has greater double-precision FP ability than a 690 and if that's true then one certainly can and perhaps should say things like "massive compute power" and "finally got it right". But when you consider that GPUgrid doesn't need double-precision FP then Titan's advantage doesn't mean much if all it's gonna do is crunch GPUgrid tasks. Exactly. FP64 is not needed here and hasn't been in the past, so its FP64 compute benefit isn't applicable here. I expect the card would still be faster at POEM than a GTX680 or GTX690, but the issue there is the CPU and PCIe over-usage, so you are never going to quite get the most out of the card. It will probably shine at MW, but against the top ATI cards I don't think it's going to be anything special, so it's just an expensive alternative. If an FP64 fluid dynamics CUDA-based project suddenly appeared then Titan would be the 'bee's knees'. FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help |
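The "return within the bonus window" point above can be sketched as a small function. Note the tier cutoffs and bonus percentages below are assumptions for illustration (the post only mentions a full bonus and a 25% tier), not GPUGrid's official values:

```python
# Hypothetical sketch of time-based credit bonus tiers.
# The 24 h / 48 h cutoffs and the 50% / 25% rates are assumed
# values for illustration, not the project's published rules.

def credited(base_credit: float, return_hours: float) -> float:
    """Apply a return-time bonus to a task's base credit."""
    if return_hours <= 24:       # fastest tier: full bonus
        return base_credit * 1.50
    if return_hours <= 48:       # slower tier: reduced bonus
        return base_credit * 1.25
    return base_credit           # too slow: base credit only
```

Under these assumed tiers, a card that slips from a 12-hour to a 30-hour turnaround drops from the full bonus to the reduced one, which is the trade-off being weighed against power consumption above.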
|
Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0 |
Another point I forgot yesterday: Titan can provide more registers per warp and might differ internally in cache sizes and such. I think this has manifested itself in compute performance 2 to 3 times that of a GTX680 in some benchmarks - much higher than the raw horsepower implies (Anandtech Compute Bench Part 1 and next page). We cannot yet say whether anything like this is going to happen at GPU-Grid as well. MrS Scanning for our furry friends since Jan 2002 |
skgiven | Joined: 23 Apr 09 Posts: 3968 Credit: 1,995,359,260 RAC: 0 |
I think they would need to develop specifically towards finding improvements from the registers-per-warp increase. If it's likely that improvements can be gained from this I'm sure they will try, if they get one to test on. As Titan is CC3.5, any findings could be used in the one app (as it can identify the card's compute capability). The issue I see is the price and limited availability. I can't see a big uptake of the card here, which suggests any such development would be a waste of time, but perhaps lesser versions of the card will appear, making it worthwhile. FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help |
GDF | Joined: 14 Mar 07 Posts: 1958 Credit: 629,356 RAC: 0 |
We are now trying to get our hands on a Titan to optimize the application for it. As usual it's very difficult to find them over here, so if anyone is willing to donate one please contact us. In terms of performance, we are expecting a speed-up of 50% over a GTX680 for normal WUs. For large jobs, this could be close to 100% faster. ACEMD reduces the speed when the molecular system is large, but not on a Titan, thanks to its 6GB of memory. Extra registers could also provide a further boost, but we don't know yet. Stay tuned. gdf |
Beyond | Joined: 23 Nov 08 Posts: 1112 Credit: 6,162,416,256 RAC: 0
|
From the benchmarks posted I would expect the Titan's production/$ to be lower than some other solutions (660 Ti, 650 Ti, etc.). |
Gattorantolo [Ticino] | Joined: 29 Dec 11 Posts: 44 Credit: 251,211,525 RAC: 0
|
|
dskagcommunity | Joined: 28 Apr 11 Posts: 463 Credit: 958,266,958 RAC: 31
|
|
Retvari Zoltan | Joined: 20 Jan 09 Posts: 2380 Credit: 16,897,957,044 RAC: 0
|
In terms of performance, we are expecting a speed-up of 50% over a gtx680 for normal wu. For large jobs, this could be close to 100% faster. ACEMD reduces the speed when the molecular system is large but not on a titan due to the 6GB of memory. I'm missing a part of the picture here. According to GPU-Z, a GPUGrid job on a GTX 670 uses 384~534MB (depending on the task type) of the 2GB GPU memory. Could you please explain why three times more memory on the Titan would be an advantage over the 2GB on the GTX 670/680, when a task uses only a quarter of it? |
GDF | Joined: 14 Mar 07 Posts: 1958 Credit: 629,356 RAC: 0 |
acemd uses two types of algorithms: one faster and more memory-expensive, and one slower that uses less memory. The decision is made dynamically depending on the GPU memory you have available. Most of the simulations these days are small enough to use the faster algorithm with 2GB of memory, but some might still need more. Anyway, as soon as we get one, we will report the performance. gdf |
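The memory-based choice GDF describes can be sketched as a simple threshold check. The function name and threshold rule below are invented for illustration; ACEMD's real heuristic is not spelled out in this thread:

```python
# Sketch of a runtime algorithm choice: use the faster but
# memory-hungry kernel when the card has room for it, otherwise
# fall back to the slower, compact one. Names and the simple
# threshold test are illustrative assumptions.

def pick_algorithm(free_mem_mb: int, fast_path_needs_mb: int) -> str:
    """Return which kernel variant a task would run with."""
    if free_mem_mb >= fast_path_needs_mb:
        return "fast"      # memory-expensive but quicker algorithm
    return "compact"       # slower algorithm that fits in less memory
```

On this sketch, a 6GB Titan would keep hitting the "fast" branch for large systems that push a 2GB card onto the "compact" one, which is why the extra memory matters even though typical tasks use only a few hundred MB.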
Mumak | Joined: 7 Dec 12 Posts: 92 Credit: 225,897,225 RAC: 0
|
Tom's Hardware article about TITAN says: As we know, though, Nvidia limits those units to 1/8 clock rates by default—not to be nefarious, but to create more thermal headroom for higher clock rates. That’s why, if you want the card’s full compute potential, you need to toggle a driver switch. Doing this, in my experience so far, basically disables GPU Boost, limiting your games to the card’s base clock rate. Does anyone know which driver switch that is? Has anyone tried it, and what was the result? |
|
Joined: 8 Feb 13 Posts: 5 Credit: 6,750 RAC: 0
|
Does anyone know what driver switch is that? Has anyone tried that and what would be the result ? On Linux systems with K20s, these settings are controlled using the "nvidia-smi" program. Hopefully that will also be so for the Titans. MJH |
Mumak | Joined: 7 Dec 12 Posts: 92 Credit: 225,897,225 RAC: 0
|
Thanks. So this works on workstation models only? Do you know how to change this setting using that tool? |
|
Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0 |
It's a simple driver option in the control panel in windows. It's only being shown if a Titan is present in the system. MrS Scanning for our furry friends since Jan 2002 |
Beyond | Joined: 23 Nov 08 Posts: 1112 Credit: 6,162,416,256 RAC: 0
|
It's a simple driver option in the control panel in windows. It's only being shown if a Titan is present in the system. So that enhancement is disabled in all other consumer cards? |
skgiven | Joined: 23 Apr 09 Posts: 3968 Credit: 1,995,359,260 RAC: 0 |
It's a simple driver option in the control panel in windows. It's only being shown if a Titan is present in the system. The rest of the consumer cards are GK104 (which has only low FP64 ability, 1/24th of FP32; 8 FP64 units per SMX block), while Titan is GK110 (up to 1/3rd of FP32; 64 FP64 units per SMX block). For Titan, FP64 is set by default to a low rate (1/8th speed), and to run FP64 faster you just crank it up. As this is controlled by NVIDIA's System Management Interface (nvidia-smi), apps could turn it up and down. I wonder if this is stepped? I think the GK104 cards just perform FP64 at a speed which increases and decreases with the rest of the GPU. |
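For anyone scripting around nvidia-smi as discussed above, its CSV query mode is handy to read card state from code. The `--query-gpu`/`--format=csv` flags are real nvidia-smi options, but the specific field choice and the sample line below are illustrative, not captured from a Titan:

```python
# Parse one CSV row as produced by e.g.:
#   nvidia-smi --query-gpu=name,memory.total,clocks.sm --format=csv,noheader
# The sample string is made up for illustration.

def parse_gpu_line(line: str) -> dict:
    """Split one nvidia-smi CSV row into labelled fields."""
    name, mem, clock = (field.strip() for field in line.split(","))
    return {"name": name, "memory_total": mem, "sm_clock": clock}

sample = "GeForce GTX TITAN, 6144 MiB, 836 MHz"
info = parse_gpu_line(sample)
```

A monitoring script could run the command via subprocess and feed each output line through this parser, for instance to log clocks before and after toggling the FP64 switch.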
Mumak | Joined: 7 Dec 12 Posts: 92 Credit: 225,897,225 RAC: 0
|
Thanks, that makes it clear. |
|
Joined: 28 Mar 09 Posts: 490 Credit: 11,731,645,728 RAC: 51
|
acemd uses two types of algorithms, one faster and more memory expensive and one slower that uses less memory. The decision is made dynamically depending on the GPU memory you have available. We already have a rating system for video cards that is broken down into 4 categories: most recommended, highly recommended, recommended and not recommended. So shouldn't we have the same for video memory size? See example: most recommended: 4 GB+; highly recommended: 2 to 4 GB; recommended: 1 to 2 GB; not recommended: less than 1 GB |
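The memory tiers proposed above map directly onto a small lookup function. The gigabyte boundaries are exactly those from the post; treating the exact 4 GB and 2 GB boundaries as belonging to the higher tier is my assumption:

```python
# The proposed video-memory rating tiers as a lookup.
# Boundary values (exactly 1, 2 or 4 GB) are assumed to fall
# into the higher tier; the post leaves this ambiguous.

def memory_rating(mem_gb: float) -> str:
    """Return the suggested recommendation tier for a card's VRAM size."""
    if mem_gb >= 4:
        return "most recommended"
    if mem_gb >= 2:
        return "highly recommended"
    if mem_gb >= 1:
        return "recommended"
    return "not recommended"
```

By this scheme a 6GB Titan lands in "most recommended" and a 2GB GTX 670/680 in "highly recommended", matching the discussion of the fast-vs-compact algorithm cutoff earlier in the thread.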
©2025 Universitat Pompeu Fabra