GeForce GTX Titan launching the 18th


Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 28677 - Posted: 22 Feb 2013, 22:10:23 UTC - in response to Message 28673.  

What will be better...2xGTX690 (4 GPU total) or 2xGTX Titan?

Two Titans would be more expensive and, in theory, do less work. The benefit of a Titan is its reference blower cooler, which exhausts air out of the back of the case; the GTX690's central fan blows some air out of the back, but also into the case.

A single GTX690 is fine, but multiple GTX690s would require a very large side-panel intake fan and exhaust from both the rear and the front of the ATX case. This is probably doable for two GTX690 cards, but for 3 or 4 it's more difficult. Also, if you intend to use Linux you could in theory run 8 GPUs (four GTX690 cards), but in Windows I don't think you can. Basically, GPUs don't scale well; adding more of them requires greater skill and isn't something for the amateur cruncher.

If anyone is seriously considering 3 or more GTX690s, you should primarily be thinking about cooling. Something like this case (with 4 side-panel fans) could be used to blow cool air onto the GPUs, which could then be drawn out from the rear and front. I'm sure this sort of case could handle two GTX690s, but I would want to know the temperatures before adding a third, and would only add cards one by one. I also like the one very large side-panel fan on this case, but not the drive bays.

This open frame case looks very interesting, if you don't mind dusting.
FAQs

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 28678 - Posted: 22 Feb 2013, 22:54:04 UTC - in response to Message 28673.  

What will be better...2xGTX690 (4 GPU total) or 2xGTX Titan?

From the performance-per-price point of view the GTX690 is better than a Titan, as BOINC loads scale pretty much perfectly with GPU count (except POEM). However, if you're running into limits on the number of GPUs, Titans might be a better option. Not for 4 vs. 2 chips, though.
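MrS's perf-per-price point can be put in rough numbers. A minimal sketch in Python; the $1000 prices and the ~1.5x per-chip Titan estimate are illustrative assumptions, not measured GPUGrid figures:

```python
def perf_per_dollar(gpu_count, throughput_per_gpu, price_usd):
    """BOINC loads scale almost linearly with GPU count (POEM aside),
    so total throughput is roughly gpu_count * throughput_per_gpu."""
    return gpu_count * throughput_per_gpu / price_usd

# Assumptions: a GTX690 carries two GTX680-class chips; a Titan chip is
# taken as ~1.5x one GTX680 chip; both cards priced at ~$1000 at launch.
gtx690 = perf_per_dollar(gpu_count=2, throughput_per_gpu=1.0, price_usd=1000)
titan = perf_per_dollar(gpu_count=1, throughput_per_gpu=1.5, price_usd=1000)

print(f"GTX690: {gtx690:.4f} per $, Titan: {titan:.4f} per $")
```

Under these assumptions the GTX690 comes out ahead on throughput per dollar, which is MrS's conclusion.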

And one more point for the Titan: it will be able to crunch GPU-Grid tasks within the bonus time for longer. Whether the power consumption still justifies running the card once it's in danger of becoming too slow for the bonus... who knows!

MrS
Scanning for our furry friends since Jan 2002
Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28680 - Posted: 23 Feb 2013, 3:14:37 UTC - in response to Message 28673.  
Last modified: 23 Feb 2013, 3:18:15 UTC

What will be better...2xGTX690 (4 GPU total) or 2xGTX Titan?


I was wondering the same thing, and it's confusing for newbies like me when there are statements referring to Titan's massive compute power alongside statements like "nVIDIA finally got it right with Titan". If I understand correctly, Titan has greater double-precision FP capability than a 690, and if that's true then one certainly can, and perhaps should, say things like "massive compute power" and "finally got it right". But when you consider that GPUGrid doesn't need double-precision FP, Titan's advantage doesn't mean much if all it's gonna do is crunch GPUGrid tasks.

As for exhaust problems on the 690... piece o' cake. I'm going to put four 690s on one mobo, if Linux, drivers and BOINC will permit, and show y'all how it's done, no fancy case required. Picture it: 4 GTX690s, all the same model, nicely lined up in a row; 4 exhaust ports perfectly lined up one above the other; a manifold made from a $6 heat-vent boot (unless I can find one in a scrap pile somewhere first) that fits nicely over all four exhaust ports and transitions into one collector connected to a suction fan. Once done it's gonna have the highest RAC here for a looooong time. In fact maybe I won't show y'all, because then you'll build one too.
BOINC <<--- credit whores, pedants, alien hunters
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 28694 - Posted: 23 Feb 2013, 7:07:02 UTC - in response to Message 28678.  

And one more point for the Titan: it will be able to crunch GPU-Grid tasks within the bonus time for longer. If the power consumption still allows to run the card when it's in danger of becoming too slow for the bonus.. who knows!
MrS

The GeForce GTX 285 was released just over 4 years ago. Until the 3.1 app was deprecated it could still return tasks within the full-bonus window. Now, using the slower 4.2 app (slower only for the 200-series cards), it would depend on the WU. I expect Toni's tasks to return in time, but NOELIA's long tasks might not. They would still get the 25% bonus though... So a card is probably good for ~4 years.
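The bonus arithmetic being applied here can be sketched as follows. The 24h/48h windows and 50%/25% rates reflect GPUGrid's return-time bonus scheme as discussed in this thread; the runtimes are made-up examples:

```python
def return_bonus(runtime_hours):
    """Credit bonus as a fraction, based on how fast a task is returned."""
    if runtime_hours <= 24:
        return 0.50  # full bonus: returned within 24 hours
    if runtime_hours <= 48:
        return 0.25  # reduced bonus: returned within 48 hours
    return 0.0       # no bonus

# A hypothetical ageing card needing 30h for a long task still earns
# the 25% bonus, but misses the full one.
print(return_bonus(30.0))  # 0.25
```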

If I understand correctly, Titan has greater double-precision FP ability than a 690 and if that's true then one certainly can and perhaps should say things like "massive compute power" and "finally got it right". But when you consider that GPUgrid doesn't need double-precision FP then Titan's advantage doesn't mean much if all it's gonna do is crunch GPUgrid tasks.

Exactly. FP64 is not needed here and hasn't been in the past, so Titan's FP64 compute benefit isn't applicable here.
I expect the card would still be faster at POEM than a GTX680 or GTX690, but the issue there is the CPU and PCIe over-usage, so you are never going to quite get the most out of the card.
It will probably shine at MW, but against the top ATI cards I don't think it's going to be anything special; there it's just an expensive alternative.
If an FP64 fluid-dynamics CUDA project suddenly appeared, then Titan would be the 'bee's knees'.
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 28697 - Posted: 23 Feb 2013, 11:42:47 UTC

Another point I forgot yesterday: Titan provides more registers per thread and might differ internally in cache sizes and such. I think this has manifested itself in compute performance 2 to 3 times that of a GTX680 in some benchmarks, much higher than the raw horsepower implies (Anandtech Compute Bench Part 1 and next page). We cannot yet say whether anything like this will happen at GPU-Grid as well.

MrS
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 28699 - Posted: 23 Feb 2013, 12:16:18 UTC - in response to Message 28697.  

I think they would need to develop specifically towards finding improvements from the increase in registers per thread. If it's likely that improvements can be gained from this, I'm sure they will try, if they get a card to test on. As Titan is CC3.5, any findings could be used in the one app (as it can identify a card's compute capability). The issue I see is the price and limited availability. I can't see a big uptake of the card here, which suggests any such development would be a waste of time, but perhaps lesser versions of the card will appear, making it worthwhile.
Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1958
Credit: 629,356
RAC: 0
Message 28700 - Posted: 23 Feb 2013, 12:21:38 UTC - in response to Message 28697.  
Last modified: 23 Feb 2013, 13:03:47 UTC

We are now trying to get our hands on one Titan to optimise the application for it.

As usual it's very difficult to find them over here; if anyone is willing to donate one, please contact us.

In terms of performance, we are expecting a speed-up of 50% over a GTX680 for normal WUs. For large jobs, this could be close to 100% faster, because ACEMD normally reduces its speed when the molecular system is large, but won't on a Titan thanks to its 6GB of memory.

Extra registers could also provide a further boost, but we don't know yet. Stay tuned.

gdf
Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28705 - Posted: 23 Feb 2013, 15:41:44 UTC

From the benchmarks posted I would expect the Titan's production/$ to be lower than some other solutions (660 Ti, 650 Ti, etc.).
Profile Gattorantolo [Ticino]
Joined: 29 Dec 11
Posts: 44
Credit: 251,211,525
RAC: 0
Message 28708 - Posted: 23 Feb 2013, 17:14:41 UTC - in response to Message 28673.  

What will be better...2xGTX690 (4 GPU total) or 2xGTX Titan?

With 2 GTX690s it is possible to crunch 4 WUs at the same time; with 2 GTX Titans, "only" 2 WUs at the same time...

Member of Boinc Italy.
Profile dskagcommunity
Joined: 28 Apr 11
Posts: 463
Credit: 958,266,958
RAC: 31
Message 28710 - Posted: 23 Feb 2013, 18:07:12 UTC

The CUDA core count win goes to the 2x690.
DSKAG Austria Research Team: http://www.research.dskag.at



Profile Retvari Zoltan
Joined: 20 Jan 09
Posts: 2380
Credit: 16,897,957,044
RAC: 0
Message 28712 - Posted: 23 Feb 2013, 19:16:14 UTC - in response to Message 28700.  

In terms of performance, we are expecting a speed-up of 50% over a GTX680 for normal WUs. For large jobs, this could be close to 100% faster, because ACEMD normally reduces its speed when the molecular system is large, but won't on a Titan thanks to its 6GB of memory.

I'm missing a part of the picture here.
According to GPU-Z, a GPUGrid task on a GTX 670 uses 384~534MB (depending on the task type) of the 2GB GPU memory. Could you explain, please, why three times more memory on the Titan would help, compared with the 2GB on the GTX 670/680, when a task uses only a quarter of it?
Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1958
Credit: 629,356
RAC: 0
Message 28719 - Posted: 24 Feb 2013, 0:22:40 UTC - in response to Message 28718.  

ACEMD uses two types of algorithms: one faster but more memory-hungry, and one slower that uses less memory. The decision is made dynamically, depending on the GPU memory you have available.

Most simulations these days are small enough to use the faster algorithm with 2GB of memory, but some might still need more.

Anyway, as soon as we get one, we will report the performance.

gdf
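The selection logic GDF describes can be sketched like this. The function name, the per-atom memory figure and the thresholds are all assumptions for illustration, not ACEMD's real internals:

```python
def choose_algorithm(n_atoms, free_gpu_mem_mb, fast_mb_per_katoms=12.0):
    """Pick the faster, memory-hungry algorithm when it fits in GPU
    memory; otherwise fall back to the slower, leaner one."""
    fast_need_mb = n_atoms / 1000.0 * fast_mb_per_katoms
    return "fast" if fast_need_mb <= free_gpu_mem_mb else "low-memory"

print(choose_algorithm(50_000, 2048))   # small system on a 2GB card -> fast
print(choose_algorithm(500_000, 2048))  # large system on a 2GB card -> low-memory
print(choose_algorithm(500_000, 6144))  # same system on a 6GB Titan -> fast
```

With these made-up figures the large system only gets the fast path on the 6GB card, which is GDF's point about the Titan.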
Profile Mumak
Joined: 7 Dec 12
Posts: 92
Credit: 225,897,225
RAC: 0
Message 28759 - Posted: 25 Feb 2013, 12:56:17 UTC

Tom's Hardware article about TITAN says:
As we know, though, Nvidia limits those units to 1/8 clock rates by default—not to be nefarious, but to create more thermal headroom for higher clock rates. That’s why, if you want the card’s full compute potential, you need to toggle a driver switch. Doing this, in my experience so far, basically disables GPU Boost, limiting your games to the card’s base clock rate.


Does anyone know which driver switch that is? Has anyone tried it, and what was the result?
gianni
Joined: 8 Feb 13
Posts: 5
Credit: 6,750
RAC: 0
Message 28761 - Posted: 25 Feb 2013, 14:56:55 UTC - in response to Message 28759.  

Does anyone know which driver switch that is? Has anyone tried it, and what was the result?


On Linux systems with K20s, these settings are controlled using the "nvidia-smi" program. Hopefully that will also be so for the Titans.

MJH
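For reference, on K20-class Teslas the relevant nvidia-smi invocations look like the following. These are standard nvidia-smi options of that era; whether the GeForce Titan exposes the same controls through this tool is exactly the open question here, and the clock values shown are just example K20 figures:

```shell
# Query current and supported clocks (read-only, safe to run):
nvidia-smi -q -d CLOCK
nvidia-smi -q -d SUPPORTED_CLOCKS

# On K20-class Teslas, application clocks can be set explicitly
# (memory clock, graphics clock in MHz); requires root:
sudo nvidia-smi -ac 2600,758

# Revert to the default application clocks:
sudo nvidia-smi -rac
```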
Profile Mumak
Joined: 7 Dec 12
Posts: 92
Credit: 225,897,225
RAC: 0
Message 28762 - Posted: 25 Feb 2013, 15:24:30 UTC - in response to Message 28761.  
Last modified: 25 Feb 2013, 15:26:28 UTC

Thanks.
So this works on workstation models only?
Do you know how to change this setting using that tool?

Does anyone know which driver switch that is? Has anyone tried it, and what was the result?


On Linux systems with K20s, these settings are controlled using the "nvidia-smi" program. Hopefully that will also be so for the Titans.

MJH
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 28767 - Posted: 25 Feb 2013, 18:58:18 UTC

It's a simple driver option in the control panel on Windows, and it's only shown if a Titan is present in the system.

MrS
Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28772 - Posted: 25 Feb 2013, 19:56:45 UTC - in response to Message 28767.  

It's a simple driver option in the control panel in windows. It's only being shown if a Titan is present in the system.

So that enhancement is disabled in all other consumer cards?
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 28775 - Posted: 25 Feb 2013, 21:41:27 UTC - in response to Message 28772.  
Last modified: 25 Feb 2013, 22:00:17 UTC

It's a simple driver option in the control panel in windows. It's only being shown if a Titan is present in the system.

So that enhancement is disabled in all other consumer cards?

The rest of the consumer cards are GK104, which has low FP64 capability (1/24th of FP32; 8 FP64 units per SMX block), while Titan is GK110 (up to 1/3rd of FP32; 64 FP64 units per SMX block). On Titan, FP64 is set by default to a low level (the units run at 1/8th speed), and to run FP64 faster you just crank it up. As this is controlled by NVIDIA's System Management Interface (nvidia-smi), apps could turn it up and down. I wonder if this is stepped? I think the GK104 cards just perform FP64 at a speed that rises and falls with the rest of the GPU.
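The ratios quoted above follow directly from the per-SMX unit counts; a quick arithmetic check (the 1/8 default clock multiplier is taken from the Tom's Hardware quote earlier in the thread):

```python
FP32_CORES_PER_SMX = 192  # same for GK104 and GK110

gk104_fp64_ratio = 8 / FP32_CORES_PER_SMX    # 8 FP64 units per SMX
gk110_fp64_ratio = 64 / FP32_CORES_PER_SMX   # 64 FP64 units per SMX

# With Titan's FP64 units clocked at 1/8 speed by default, the effective
# out-of-the-box rate lands back at GK104's level:
titan_default = gk110_fp64_ratio * (1 / 8)

print(gk104_fp64_ratio)  # 1/24 of FP32
print(gk110_fp64_ratio)  # 1/3 of FP32
print(titan_default)     # 1/24 of FP32
```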
Profile Mumak
Joined: 7 Dec 12
Posts: 92
Credit: 225,897,225
RAC: 0
Message 28776 - Posted: 25 Feb 2013, 22:46:01 UTC - in response to Message 28775.  


The rest of the consumer cards are GK104 (only have low FP64 ability, 1/24th of FP32; 8 FP64 units per SMX block) while Titan is GK110 (up to 1/3rd FP32; 64 FP64 units per SMX block). For Titan, FP64 is set by default to a low level (1/8th speed) and to use FP64 faster you just crank it up. As this is controlled by NVIDIA's System Management Interface (nvidia-smi), apps could turn it up and down. I wonder if this is stepped? I think the GK104 cards just perform FP64 at a speed which increases and decreases with the rest of the GPU.


Thanks, that makes it clear.
Bedrich Hajek
Joined: 28 Mar 09
Posts: 490
Credit: 11,731,645,728
RAC: 51
Message 28780 - Posted: 26 Feb 2013, 1:38:44 UTC - in response to Message 28719.  

ACEMD uses two types of algorithms: one faster but more memory-hungry, and one slower that uses less memory. The decision is made dynamically, depending on the GPU memory you have available.

Most simulations these days are small enough to use the faster algorithm with 2GB of memory, but some might still need more.

Anyway, as soon as we get one, we will report the performance.

gdf


We already have a rating system for video cards that is broken down into 4 categories: most recommended, highly recommended, recommended and not recommended. So, shouldn't we have the same for video memory size?

See example:

most recommended: 4 GB+
highly recommended: 2 to 4 GB
recommended: 1 to 2 GB
not recommended: less than 1 GB
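The proposed tiers map directly onto a small lookup; a sketch of the suggested (not official) scheme, with 4 GB taken as the start of the top tier per the "4 GB +" line:

```python
def memory_rating(mem_gb):
    """Bedrich Hajek's suggested video-memory tiers (a proposal,
    not an official GPUGrid scale)."""
    if mem_gb >= 4:
        return "most recommended"
    if mem_gb >= 2:
        return "highly recommended"
    if mem_gb >= 1:
        return "recommended"
    return "not recommended"

print(memory_rating(6))    # Titan, 6GB
print(memory_rating(2))    # GTX 670/680, 2GB
print(memory_rating(0.5))
```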



©2025 Universitat Pompeu Fabra