Message boards :
Graphics cards (GPUs) :
gtx680
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
Well, if we make the rather speculative presumption that a GTX 680 would work with Coolbits straight out of the box, then yes, we can cool a card on Linux, but AFAIK it only works for one GPU, and not for overclocking/downclocking. I think Coolbits was more useful in the distant past, but perhaps it will still work for the GF600s. Anyway, when the manufacturer variants appear, with better default cooling profiles, GPU temps won't be something to worry about on any OS. Cheers for the tip/recap - it's been ~1 year since I put it in an FAQ. FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
Mowskwoz · Joined: 12 Dec 11 · Posts: 34 · Credit: 86,423,547 · RAC: 0
It appears there may be another use of the term "Coolbits" (unfortunately) for some old software. The one I was referring to is part of the nvidia Linux driver, and is set within the Device section of xorg.conf. http://en.gentoo-wiki.com/wiki/Nvidia#Manual_Fan_Control_for_nVIDIA_Settings It has worked for all of my nvidia GPUs so far.
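For illustration, a minimal sketch of the xorg.conf entry being described (the Identifier is arbitrary, and the right Coolbits value depends on the driver generation, so treat this as an assumption rather than a verified recipe):

```
Section "Device"
    Identifier  "Device0"
    Driver      "nvidia"
    # Bit 2 (value 4) exposes manual fan control in nvidia-settings;
    # other bits enable other tweaks and vary by driver version
    Option      "Coolbits" "4"
EndSection
```

After restarting X, fan control can then be enabled with something like `nvidia-settings -a "[gpu:0]/GPUFanControlState=1"` and the speed set from the nvidia-settings GUI; the exact attribute names also vary across driver versions.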
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
Thanks Mowskwoz, we have taken this thread a bit off target, so I might move our fan-control-on-Linux posts to a Linux thread later. I will look into NVidia CoolBits again. I see Zotac intend to release a GTX 680 clocked at 2 GHz! An EVGA card has already been OC'ed to 1.8 GHz, so the market should see some sweet bespoke GTX 680s in the future. So much for PCIE 3.0. I see NVidia are listing a GT 620 in their drivers section... FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
oldDirty · Joined: 17 Jan 09 · Posts: 22 · Credit: 3,805,080 · RAC: 0
Wow, this 680 monster seems to run with the handbrake on: poor OpenCL performance, worse than the 580 and of course the HD 79x0. NVidia wants to protect their Quadro/Tesla cards. Or did I get it wrong? http://www.tomshardware.com/reviews/geforce-gtx-680-review-benchmark,3161-15.html and http://www.tomshardware.com/reviews/geforce-gtx-680-review-benchmark,3161-14.html
Joined: 31 May 10 · Posts: 48 · Credit: 28,893,779 · RAC: 0
No, you've got it right - that seems to be the case. OpenCL performance is poor at best, although in the single non-OpenCL benchmark I saw, it performed decently. Not great, but at least better than the 580. Double precision performance is abysmal; it looks like ATI will be holding onto that crown for the foreseeable future. I will be curious to see exactly what these projects can get out of the card, but so far it's not all that inspiring on the compute end of things.
MrS · Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
For the 1.8 GHz run, LN2 was necessary. That's extreme, and usually yields clock speeds ~25% higher than achievable with water cooling. Reportedly the voltage was only 1.2 V, which sounds unbelievable. 2 GHz is a long stretch beyond even that - I doubt it's possible even with triple-stage phase-change cooling (nowhere near as cold as LN2, but sustainable). And the article says "probably only for the Chinese market". Hello? If you go to all the trouble of producing such a monster, you'll want to sell them on eBay, worldwide. You'd earn thousands of bucks apiece. And a blanket claim of "poor OpenCL performance" can't really be made - it all depends on the software you're running. And mind you, Kepler offloads some scheduling work to the compiler rather than doing it in hardware; this will take some time to mature. Anyway, as others have said, double precision performance is downright ugly. Don't buy these for Milkyway. MrS Scanning for our furry friends since Jan 2002
Joined: 15 Jan 10 · Posts: 42 · Credit: 18,255,462 · RAC: 0
> We have a small one, good enough for testing. The code works on Windows with some bugs. We are assessing the performance.

That's pretty good news. I'm glad that AMD managed to put out three different GCN-based cores. The cheaper cards still have most, if not all, of the compute capabilities of the HD 7970. Hopefully there will be a testing app soon and I'll be one of the first in line. ;)
Joined: 19 Mar 11 · Posts: 30 · Credit: 109,550,770 · RAC: 0
Okay, so this thread has been all over the place... can someone sum up? Is the 680 good or bad?
Joined: 8 Mar 12 · Posts: 411 · Credit: 2,083,882,218 · RAC: 0
They're testing today.
Carlesa25 · Joined: 13 Nov 10 · Posts: 328 · Credit: 72,619,453 · RAC: 0
Hello: Summing up the several analyses I've read of the GTX 680's compute performance:

Single precision: +50% to +80%
Double precision: -30% to -73%

> Because it's based around double precision math the GTX 680 does rather poorly here, but the surprising bit is that it did so to a larger degree than we'd expect. The GTX 680's FP64 performance is 1/24th its FP32 performance, compared to 1/8th on GTX 580 and 1/12th on GTX 560 Ti. Still, our expectation would be that performance would at least hold constant relative to the GTX 560 Ti, given that the GTX 680 has more than double the compute performance to offset the larger FP64 gap.
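Those ratios are easy to sanity-check against the launch specs. A minimal sketch, assuming the public shader counts and clocks (these specific numbers are an assumption, not from the post itself):

```python
# Theoretical throughput implied by the FP64:FP32 ratios quoted above.
# Shader counts and shader clocks are the public launch specs (assumed).
cards = {
    # name: (shaders, shader clock in GHz, FP64 divisor vs FP32)
    "GTX 680":    (1536, 1.006, 24),
    "GTX 580":    (512,  1.544, 8),
    "GTX 560 Ti": (384,  1.645, 12),
}

for name, (shaders, ghz, fp64_div) in cards.items():
    fp32 = shaders * ghz * 2      # 2 FLOPs per FMA -> GFLOPS
    fp64 = fp32 / fp64_div
    print(f"{name}: {fp32:6.0f} GFLOPS FP32, {fp64:5.1f} GFLOPS FP64")

# GTX 680:    3090 FP32, 128.8 FP64
# GTX 580:    1581 FP32, 197.6 FP64
# GTX 560 Ti: 1263 FP32, 105.3 FP64
```

So in theoretical FP64 terms the GTX 680 sits below the GTX 580 but above the GTX 560 Ti, which is consistent with the quoted expectation.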
MrS · Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Hey where's that from? Is there more of the good stuff? Did Anandtech update their launch article? MrS Scanning for our furry friends since Jan 2002
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
Yes, looks like Ryan added some more info to the article. He tends to do this - it's good reporting, and it makes their reviews worth revisiting. http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/17 Any app requiring doubles is likely to struggle, as seen with PG's. Gianni said that the GTX 680 is as fast as a GTX 580 on a CUDA 4.2 app here. When released, the new CUDA 4.2 app is also supposed to be 15% faster for Fermi cards, which is more important at this stage. The app is still designed for Fermi, and can't be redesigned for the GTX 680 until the dev tools are less buggy. In the long run it's likely that there will be several app improvement steps for the GF600. FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
dskagcommunity · Joined: 28 Apr 11 · Posts: 463 · Credit: 958,266,958 · RAC: 31
Why does NVidia cap its 6xx series in this way? If they think it would kill their own Tesla series cards, why do they still sell Teslas when they perform that badly compared to the modern desktop cards? It would be much cheaper for us, and NVidia would sell many more of their desktop cards to grid computing... or they could put, say, 8 uncapped GTX 680 chips on one Tesla card for the price a Tesla costs. DSKAG Austria Research Team: http://www.research.dskag.at
Joined: 18 Sep 08 · Posts: 65 · Credit: 3,037,414 · RAC: 0
> Why does NVidia cap its 6xx series in this way? If they think it would kill their own Tesla series cards...

Plain and simple: they wanted gaming performance and sacrificed computing capabilities that are not needed there.

> ...why do they still sell Teslas when they perform that badly compared to the modern desktop cards? It would be much cheaper for us, and NVidia would sell many more of their desktop cards to grid computing... or they could put, say, 8 uncapped GTX 680 chips on one Tesla card for the price a Tesla costs.

GK104 is not censored! It is quite simply a mostly pure 32-bit design. I bet they will come up with something completely different for the compute-oriented Kepler cards.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
Some of us expected this divergence in the GeForce line. GK104 is a gaming card, and we will see a compute card (GK110 or whatever) probably towards the end of the year (maybe August, but more likely December). Although it's not what some wanted, it's still a good card; it matches a GTX 580 but uses less power (making it about 25% more efficient). GPUGrid does not rely on OpenCL or FP64, so these weaknesses are not an issue here. Stripping down FP64 and OpenCL functionality helps efficiency in games and probably CUDA to some extent. With app development, performance will likely increase; even a readily achievable 10% improvement would mean a theoretical 37% performance-per-Watt improvement over the GTX 580. If performance can be improved by 20% over the GTX 580, the GTX 680 would be 50% more efficient here. There is a good chance this will be attained, but when is down to the dev tools. FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
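Those percentages follow directly from the board power ratio. A quick sketch, assuming the official TDPs (GTX 580: 244 W, GTX 680: 195 W; the TDPs are an assumption, the post only states the resulting percentages):

```python
# Perf/W of a GTX 680 relative to a GTX 580, for a few performance gains.
tdp_580, tdp_680 = 244.0, 195.0   # official board powers (assumed)

for perf in (1.00, 1.10, 1.20):   # GTX 680 performance relative to GTX 580
    perf_per_watt = perf * (tdp_580 / tdp_680)
    print(f"+{(perf - 1) * 100:2.0f}% perf -> "
          f"+{(perf_per_watt - 1) * 100:2.0f}% perf/W vs GTX 580")

# equal performance -> ~25% better perf/W; +10% -> ~37-38%; +20% -> ~50%
```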
dskagcommunity · Joined: 28 Apr 11 · Posts: 463 · Credit: 958,266,958 · RAC: 31
OK, I read both your answers and understood. I had only read somewhere that it was cut down in performance so as not to match their Tesla line. Seems that was a wrong article then ^^ (don't ask where I read it, I don't remember). So I believe the GTX 680 is still a good card then ;) DSKAG Austria Research Team: http://www.research.dskag.at
Joined: 18 Sep 08 · Posts: 65 · Credit: 3,037,414 · RAC: 0
> So I believe the GTX 680 is still a good card then ;)

Well, it is - if you know what you're getting. Taken from the CUDA C guide in the CUDA 4.2.6 beta, operations per clock cycle per SM/SMX:

| Operation | CC 2.0 (SM) | CC 3.0 (SMX) |
|---|---|---|
| 32-bit floating-point | 32 | 192 |
| 64-bit floating-point | 16 | 8 |
| 32-bit integer add | 32 | 168 |
| 32-bit integer shift, compare | 16 | 8 |
| Logical operations | 32 | 136 |
| 32-bit integer multiply | 16 | 32 |

Plus, the optimal warp size seems to have moved up from 32 to 64 now! It's totally different, and the apps need to be optimized to take advantage of that.
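To put those per-SM/SMX rates in per-chip terms, a minimal sketch assuming the reference unit counts (16 SMs on a GTX 580, 8 SMXs on a GTX 680; the counts are an assumption, the table only gives per-unit rates):

```python
# Per-chip ops per clock implied by the throughput table above.
fp32_580 = 16 * 32    # 512 FP32 ops/clock on a GTX 580
fp32_680 = 8 * 192    # 1536 FP32 ops/clock on a GTX 680 (3x per clock)
fp64_680 = 8 * 8      # 64 FP64 ops/clock on a GTX 680

# The FP64 row is the architectural CC 2.0 rate; shipping GeForce Fermi
# boards are capped to 1/8 of FP32 (4 per SM, per the Anandtech quote
# earlier in the thread), i.e. also 64 ops/clock, but the GTX 580's much
# higher shader clock keeps its FP64 throughput ahead of the GTX 680's.
fp64_580_geforce = 16 * 4

print(fp32_580, fp32_680, fp64_680, fp64_580_geforce)
```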
MrS · Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Another bit to add regarding FP64 performance: apparently GK104 uses 8 dedicated hardware units for this, in addition to the regular 192 shaders per SMX. So they actually spent more transistors to provide a little FP64 capability (for development or sparse usage). MrS Scanning for our furry friends since Jan 2002