gtx680

Message boards : Graphics cards (GPUs) : gtx680
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 24116 - Posted: 23 Mar 2012, 19:40:10 UTC - in response to Message 24115.  

Well, if we make the rather speculative presumption that a GTX 680 would work with Coolbits straight out of the box, then yes, we can cool a card on Linux, but AFAIK it only works for one GPU and not for overclocking/downclocking. I think Coolbits was more useful in the distant past, but perhaps it will still work for the GF600s.
Anyway, when the manufacturer variants appear, with better default cooling profiles, GPU temps won't be something to worry about on any OS.
Cheers for the tip/recap, it's been ~1 year since I put it in an FAQ.
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ID: 24116
matlock

Joined: 12 Dec 11
Posts: 34
Credit: 86,423,547
RAC: 0
Message 24117 - Posted: 23 Mar 2012, 21:14:15 UTC - in response to Message 24116.  

It appears there may be another usage of the term "Coolbits" (unfortunately) for some old software. The one I was referring to is part of the nvidia Linux driver, and is set within the Device section of the xorg.conf.

http://en.gentoo-wiki.com/wiki/Nvidia#Manual_Fan_Control_for_nVIDIA_Settings

It has worked for all of my nvidia GPUs so far.
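For reference, a minimal sketch of what that Device section can look like (the Identifier is just a placeholder, and the meaning of the Coolbits value has shifted between driver generations; on drivers of this era a value of "4" is the bit that exposes manual fan control, while other bits unlock clock controls):

Section "Device"
    Identifier "GPU0"
    Driver     "nvidia"
    Option     "Coolbits" "4"
EndSection

After restarting X, a manual fan slider should then appear under Thermal Settings in nvidia-settings.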
ID: 24117
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 24118 - Posted: 23 Mar 2012, 23:06:20 UTC - in response to Message 24117.  

Thanks Mowskwoz, we have taken this thread a bit off target, so I might move our fan-control-on-Linux posts to a Linux thread later. I will look into NVidia Coolbits again.

I see Zotac intend to release a GTX 680 clocked at 2 GHz!
An EVGA card has already been OC'ed to 1.8 GHz, so the market should see some sweet bespoke GTX 680s in the future.

So much for PCIe 3.0.

I see NVidia are listing a GT 620 in their drivers section...
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ID: 24118
Profile oldDirty
Joined: 17 Jan 09
Posts: 22
Credit: 3,805,080
RAC: 0
Message 24120 - Posted: 24 Mar 2012, 0:21:29 UTC

Wow, this 680 monster seems to run with the handbrake on: poor OpenCL performance, worse than the 580 and of course the HD 79x0.
NVidia want to protect their Quadro/Tesla cards.
Or did I get it wrong?
http://www.tomshardware.com/reviews/geforce-gtx-680-review-benchmark,3161-15.html
and
http://www.tomshardware.com/reviews/geforce-gtx-680-review-benchmark,3161-14.html
ID: 24120
JLConawayII
Joined: 31 May 10
Posts: 48
Credit: 28,893,779
RAC: 0
Message 24121 - Posted: 24 Mar 2012, 0:49:14 UTC

No, that seems to be the case. OpenCL performance is poor at best, although in the single non-OpenCL benchmark I saw, it performed decently. Not great, but at least better than the 580. Double precision performance is abysmal; it looks like ATI will be holding onto that crown for the foreseeable future. I will be curious to see exactly what these projects can get out of the card, but so far it's not all that inspiring on the compute end of things.
ID: 24121
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 24129 - Posted: 24 Mar 2012, 11:18:59 UTC

For the 1.8 GHz, LN2 was necessary. That's extreme and usually yields clock speeds ~25% higher than achievable with water cooling. Reportedly the voltage was only 1.2 V, which sounds unbelievable.

2 GHz is a far cry from this. I doubt it's possible even with triple-stage phase-change cooling (nowhere near as cold as LN2, but sustainable). And the article says "probably only for the Chinese market". Hello? If you go to all the trouble of producing such a monster, you'll want to sell them on eBay, worldwide. You'd earn thousands of bucks a piece.

And a blanket statement like "poor OpenCL performance" can't really be made. It all depends on the software you're running. And mind you, Kepler offloads some scheduling work to the compiler rather than doing it in hardware. This will take some time to mature.

Anyway, as others have said, double precision performance is downright ugly. Don't buy these for Milkyway.

MrS
Scanning for our furry friends since Jan 2002
ID: 24129
Evil Penguin
Joined: 15 Jan 10
Posts: 42
Credit: 18,255,462
RAC: 0
Message 24153 - Posted: 26 Mar 2012, 2:27:57 UTC - in response to Message 24094.  

We have a small one, good enough for testing. The code works on Windows with some bugs. We are assessing the performance.

gdf

That's pretty good news.
I'm glad that AMD managed to put out three different cores that are GCN based.
The cheaper cards still have most if not all of the compute capabilities of the HD 7970.

Hopefully there will be a testing app soon and I'll be one of the first in line. ;)
ID: 24153
Palamedes
Joined: 19 Mar 11
Posts: 30
Credit: 109,550,770
RAC: 0
Message 24155 - Posted: 26 Mar 2012, 17:19:15 UTC

Okay so this thread has been all over the place.. can someone sum up?

Is the 680 good or bad?
ID: 24155
5pot
Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Message 24156 - Posted: 26 Mar 2012, 17:45:31 UTC

They're testing today.
ID: 24156
Profile Carlesa25
Joined: 13 Nov 10
Posts: 328
Credit: 72,619,453
RAC: 0
Message 24172 - Posted: 28 Mar 2012, 19:54:16 UTC - in response to Message 24156.  
Last modified: 28 Mar 2012, 20:22:39 UTC

Hello: The summary of the several analyses I've read on the compute performance of the GTX 680 is as follows:

Single Precision............. +50% to +80%
Double Precision............. -30% to -73%



" Because it’s based around double precision math the GTX 680 does rather poorly here, but the surprising bit is that it did so to a larger degree than we’d expect. The GTX 680’s FP64 performance is 1/24th its FP32 performance, compared to 1/8th on GTX 580 and 1/12th on GTX 560 Ti. Still, our expectation would be that performance would at least hold constant relative to the GTX 560 Ti, given that the GTX 680 has more than double the compute performance to offset the larger FP64 gap "
ID: 24172
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 24180 - Posted: 29 Mar 2012, 21:10:42 UTC - in response to Message 24172.  

Hey where's that from? Is there more of the good stuff? Did Anandtech update their launch article?

MrS
Scanning for our furry friends since Jan 2002
ID: 24180
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 24185 - Posted: 30 Mar 2012, 12:18:57 UTC - in response to Message 24180.  

Yes, looks like Ryan added some more info to the article. He tends to do this - it's good reporting, makes their reviews worth revisiting.
http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/17

Any app requiring doubles is likely to struggle, as seen with PG's.

Gianni said that the GTX 680 is as fast as a GTX580 on a CUDA 4.2 app here.
When released, the new CUDA 4.2 app is also supposed to be 15% faster for Fermi cards, which is more important at this stage.
The app is still designed for Fermi, but can't be redesigned for the GTX680 until the dev tools are less buggy.
In the long run it's likely that there will be several app improvement steps for the GF600.
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ID: 24185
Profile dskagcommunity
Joined: 28 Apr 11
Posts: 463
Credit: 958,266,958
RAC: 31
Message 24186 - Posted: 30 Mar 2012, 13:07:20 UTC
Last modified: 30 Mar 2012, 13:09:54 UTC

Why does NVidia cap its 6xx series in this way, when they think it would kill their own Tesla series cards? And why are Teslas still sold when they perform that badly in comparison to the modern desktop cards? It would be much cheaper for us, and NVidia would sell many more of their desktop cards for grid computing... or they could put 8 uncrippled GTX 680 chips on one Tesla card for the price a Tesla costs.
DSKAG Austria Research Team: http://www.research.dskag.at



ID: 24186
frankhagen
Joined: 18 Sep 08
Posts: 65
Credit: 3,037,414
RAC: 0
Message 24187 - Posted: 30 Mar 2012, 14:10:21 UTC - in response to Message 24186.  

Why does NVidia cap its 6xx series in this way, when they think it would kill their own Tesla series cards?


Plain and simple?

They wanted gaming performance and sacrificed compute capabilities, which are not needed there.

And why are Teslas still sold when they perform that badly in comparison to the modern desktop cards? It would be much cheaper for us, and NVidia would sell many more of their desktop cards for grid computing... or they could put 8 uncrippled GTX 680 chips on one Tesla card for the price a Tesla costs.


GK104 is not crippled!

It's simply a mostly pure 32-bit design.

I bet they will come up with something completely different for the Kepler Tesla cards.
ID: 24187
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 24188 - Posted: 30 Mar 2012, 14:14:38 UTC - in response to Message 24186.  
Last modified: 30 Mar 2012, 14:22:08 UTC

Some of us expected this divergence in the GeForce.
GK104 is a Gaming Card, and we will see a Compute card (GK110 or whatever) probably towards the end of the year (maybe Aug but more likely Dec).

Although it's not what some wanted, it's still a good card; matches a GTX580 but uses less power (making it about 25% more efficient). GPUGrid does not rely on OpenCL or FP64, so these weaknesses are not an issue here. Stripping down FP64 and OpenCL functionality helps efficiency on games and probably CUDA to some extent.

With app development, performance will likely increase. Even a readily achievable 10% improvement would mean a theoretical 37% performance per Watt improvement over the GTX580. If the performance can be improved by 20% over the GTX580 then the GTX680 would be 50% more efficient here. There is a good chance this will be attained, but when is down to dev tools.
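To show where those efficiency figures come from, here is a rough sketch that assumes the official board TDPs (~244 W for the GTX 580, ~195 W for the GTX 680) as the power numbers; actual draw under GPUGrid load will differ:

# Performance per Watt relative to a GTX 580 for different GTX 680 speedups.
tdp_gtx580 = 244.0   # Watts, official TDP (assumed)
tdp_gtx680 = 195.0   # Watts, official TDP (assumed)
for speedup in (1.0, 1.1, 1.2):                   # GTX 680 performance vs GTX 580
    gain = speedup * tdp_gtx580 / tdp_gtx680 - 1  # perf/Watt advantage
    print(f"+{(speedup - 1) * 100:.0f}% performance -> ~{gain * 100:.0f}% better performance per Watt")

That reproduces, give or take rounding, the ~25%, ~37% and ~50% figures above.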
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ID: 24188
Profile dskagcommunity
Joined: 28 Apr 11
Posts: 463
Credit: 958,266,958
RAC: 31
Message 24189 - Posted: 30 Mar 2012, 15:32:26 UTC
Last modified: 30 Mar 2012, 15:32:59 UTC

OK, I read both your answers and understood them. I had only read somewhere that it is cut down in performance so as not to match their Tesla. Seems to have been a wrong article then ^^ (don't ask where I read that, I don't know anymore). So I believe the GTX 680 is still a good card then ;)
DSKAG Austria Research Team: http://www.research.dskag.at



ID: 24189
frankhagen
Joined: 18 Sep 08
Posts: 65
Credit: 3,037,414
RAC: 0
Message 24191 - Posted: 30 Mar 2012, 17:52:16 UTC - in response to Message 24189.  
Last modified: 30 Mar 2012, 17:53:46 UTC

So I believe the GTX 680 is still a good card then ;)


Well, it is - if you know what you're getting.

Taken from the CUDA C Programming Guide in the CUDA 4.2.6 beta:

CC 2.0 compared to CC 3.0

Operations per clock cycle per SM/SMX:

32-bit floating point: 32 : 192
64-bit floating point: 16 : 8
32-bit integer add: 32 : 168
32-bit integer shift, compare: 16 : 8
logical operations: 32 : 136
32-bit integer multiply, multiply-add: 16 : 32

.....

Also, the optimal warp size seems to have moved up from 32 to 64 now!

It's totally different, and the apps need to be optimized to take advantage of that.
ID: 24191
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 24192 - Posted: 30 Mar 2012, 20:56:11 UTC

Another bit to add regarding FP64 performance: apparently GK104 uses 8 dedicated hardware units for this, in addition to the regular 192 shaders per SMX. So they actually spent more transistors to provide a little FP64 capability (for development or sparse usage).
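That layout also explains the 1/24 ratio seen in the reviews. A quick sanity check, assuming the full GK104 with 8 SMXs (illustrative only):

# 192 FP32 shaders vs 8 dedicated FP64 units per SMX on GK104
fp32_per_smx, fp64_per_smx, smx_count = 192, 8, 8
print(fp32_per_smx / fp64_per_smx)   # 24.0 -> FP64 runs at 1/24 the FP32 rate
print(fp64_per_smx * smx_count)      # 64 FP64 units on the whole chip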

MrS
Scanning for our furry friends since Jan 2002
ID: 24192
