Tests on GTX680 will start early next week [testing has started]

GDF (project administrator)
Message 24586 - Posted: 25 Apr 2012, 17:51:11 UTC - in response to Message 24574.  

Sorry guys, big changes over here in the lab and we are a bit busy, so I could not find the time to upload the new application.

One of the changes is my machine: we were compiling on Fedora 10 before, and now we will be compiling on Fedora 14. If you are running an earlier release, it could be a problem.

Also, I am having problems with the driver for the GTX680 on Linux.

gdf
5pot
Message 24587 - Posted: 25 Apr 2012, 19:39:50 UTC

Thanks for the update, always appreciated.

I've read the Linux drivers are pretty shoddy as well. The Windows drivers aren't too bad, but they're still not great, unfortunately.

Wishing you the best of luck; I know you guys want to get it out.

One question when you get the time: if I'm correct, this app will be CUDA 4.2, but in another thread you (or one of you) mentioned CUDA 5. Are there any big changes that will affect this project down the road?
GDF (project administrator)
Message 24588 - Posted: 26 Apr 2012, 7:31:45 UTC - in response to Message 24587.  

This will be CUDA 4.2; if I mentioned CUDA 5, it was by mistake.
Later on we will probably also drop CUDA 3.1 in favor of CUDA 4, to make sure that people don't need the latest driver version.

gdf
wiyosaya
Message 24605 - Posted: 28 Apr 2012, 3:13:16 UTC - in response to Message 24588.  

This will be CUDA 4.2; if I mentioned CUDA 5, it was by mistake.
Later on we will probably also drop CUDA 3.1 in favor of CUDA 4, to make sure that people don't need the latest driver version.

gdf

If CUDA 3.1 is dropped, will this affect those of us with older cards, such as an 8800 GT or a GTX 460?

Thanks.
skgiven (volunteer moderator)
Message 24619 - Posted: 28 Apr 2012, 16:55:01 UTC - in response to Message 24605.  
Last modified: 28 Apr 2012, 17:02:49 UTC

CUDA 4.2 comes with drivers that support cards as far back as the GeForce 6 series. Of course, GeForce 6 and 7 cards are not capable of contributing to GPUGrid, so the real question is whether GeForce 8 series cards will still be able to contribute.
I think these and other CC1.1 cards are overdue for retirement from this project, and I suspect that CUDA 4.2 tasks will run worse on CC1.1 cards than the current tasks do, which makes retirement even more likely. While CC1.1 cards will perform less well, Fermi and Kepler cards will perform significantly better.

There isn't much information on CUDA 4.2 yet, but CUDA 4.1 requires driver 286.19 on Windows and 285.05.33 on Linux. I think CUDA 4.2 support arrived with the non-recommended 295.x drivers; on one of my GTX 470s (295 driver) BOINC says it supports CUDA 4.2, while the other (Linux, 280.13) says 4.0.

For CUDA 4.2 development, NVidia presently recommends the 301.32 developer drivers for Windows, the 295.41 drivers for Linux, and the 4.2.9 toolkit (the Fedora14_x64 build, for example).

I would expect the high-end GTX 200 series cards (CC1.3) to still be supported by GPUGrid, but I don't know what the performance would be and it's not my decision. I would also expect support for CC1.1 cards to be dropped, but we will have to wait and see.
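
For anyone curious how the reported CUDA level is actually detected, here is a minimal sketch using the standard CUDA runtime calls (just an illustration, not GPUGrid or BOINC code):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVersion = 0, runtimeVersion = 0;
    cudaDriverGetVersion(&driverVersion);    // highest CUDA version the installed driver supports, e.g. 4020 for 4.2
    cudaRuntimeGetVersion(&runtimeVersion);  // CUDA version of the runtime the program was built against

    printf("Driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
           driverVersion / 1000, (driverVersion % 100) / 10,
           runtimeVersion / 1000, (runtimeVersion % 100) / 10);

    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    for (int i = 0; i < deviceCount; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, compute capability %d.%d\n", i, prop.name, prop.major, prop.minor);
    }
    return 0;
}

The driver reports the highest CUDA version it can run (which is what BOINC shows), while the compute capability (1.1, 1.3, 2.0, 3.0, ...) is a property of the card itself.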
5pot
Message 24636 - Posted: 29 Apr 2012, 14:50:07 UTC

Someone's running betas.......
Butuz
Message 24637 - Posted: 29 Apr 2012, 14:56:12 UTC

http://www.gpugrid.net/show_host_detail.php?hostid=108890

:-)

Butuz
5pot
Message 24638 - Posted: 29 Apr 2012, 15:47:05 UTC
Last modified: 29 Apr 2012, 15:51:35 UTC

I'm not entirely sure why you posted that individual's host ID; the apps page still says nothing new is out for beta testing. I'm hoping this means they finally got their Linux drivers working properly and are testing in-house.

Maybe tomorrow?

EDIT: Tried to grab some on Windows, and still none available. Someone is definitely grabbing and returning results, though.
Retvari Zoltan
Message 24656 - Posted: 30 Apr 2012, 17:44:08 UTC - in response to Message 24460.  

I think it means the new app will be 17% faster on a GTX580, and a GTX680 on the new app will be 50% faster than a GTX580 on the present app.
That would make the GTX680 ~28% faster than a GTX580 on the new app.

new app on gtx 580 115 ns/day
new app on gtx 680 150 ns/day

Actually, this is the worst news we could have regarding the GTX 680's shader utilization. My bad feeling about the GTX 680 has come true.
150ns/115ns = 1.30434, so there is around a 30.4% performance improvement over the GTX 580. But this improvement comes only from the higher GPU clock of the GTX 680, because the clock speed of the GTX 680 is 30.3% higher than the GTX 580's (1006MHz/772MHz = 1.3031).
All in all, only a third of the GTX 680's shaders (the same number as the GTX 580 has) can be utilized by the GPUGrid client at the moment.
It would be nice to know what is limiting the performance. As far as I know, the GPU architecture is to blame, so the second piece of bad news is that the shader utilization will not improve in the future.
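
For anyone who wants to check the arithmetic, here is a trivial sketch (plain host code; the figures are simply the ones quoted above):

#include <cstdio>

int main() {
    // Figures quoted in this thread: ns/day with the new app, and the reference core clocks.
    const double gtx580_ns_per_day = 115.0, gtx680_ns_per_day = 150.0;
    const double gtx580_core_mhz   = 772.0, gtx680_core_mhz   = 1006.0;

    printf("Throughput ratio 680/580: %.4f\n", gtx680_ns_per_day / gtx580_ns_per_day); // ~1.3043
    printf("Core clock ratio 680/580: %.4f\n", gtx680_core_mhz / gtx580_core_mhz);     // ~1.3031
    return 0;
}

The two ratios are essentially identical, which is why I say the gain appears to come from the clock alone.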
5pot
Message 24659 - Posted: 30 Apr 2012, 18:00:09 UTC
Last modified: 30 Apr 2012, 18:35:59 UTC

I would like to know what limits performance as well, but the shader clock speed is actually lower. Remember, you have to double the older cards' core clock to get the shader clock, so the 680 runs at ~1.1 GHz on boost, while a stock 580 is 772 MHz x 2 for the shader clock. It's also more efficient: running a 3820 @ 4.3 GHz on WCG, and Einstein with the GPU at 80% utilization, this system currently only uses 300 W.
Butuz
Message 24663 - Posted: 1 May 2012, 0:14:41 UTC - in response to Message 24656.  

Actually, this is the worst news we could have regarding the GTX 680's shader utilization. My bad feeling about the GTX 680 has come true.
150ns/115ns = 1.30434, so there is around a 30.4% performance improvement over the GTX 580. But this improvement comes only from the higher GPU clock of the GTX 680, because the clock speed of the GTX 680 is 30.3% higher than the GTX 580's (1006MHz/772MHz = 1.3031).
All in all, only a third of the GTX 680's shaders (the same number as the GTX 580 has) can be utilized by the GPUGrid client at the moment.
It would be nice to know what is limiting the performance. As far as I know, the GPU architecture is to blame, so the second piece of bad news is that the shader utilization will not improve in the future.


I think you are wrong. You are looking at it totally the wrong way, concentrating on the negatives rather than the positives.

1. The card is purposefully designed not to excel at compute applications. This is a design goal for NVidia: they designed it to play games, NOT to crunch numbers, and 95% of people buy these cards to play games. The fact that there is any improvement at all over the 5xx series cards in GPUGRID is a TOTAL BONUS for us - and I think a testament to the hard work of the GPUGRID developers and testers rather than to anything NVidia have done.

2. It looks like we are going to get a 30.4% performance increase at GPUGRID and, at the same time, a 47% drop in power usage (and thus a drop in heat and noise) on a card that is purposefully designed to be awful at scientific computing. And you are not happy with that?

I think you should count your lucky stars that we are seeing any improvements at all, let alone MASSIVE improvements in crunching per watt.

My 2p anyway.

Butuz
Zydor
Message 24664 - Posted: 1 May 2012, 0:36:49 UTC - in response to Message 24663.  
Last modified: 1 May 2012, 0:48:50 UTC

I think you should count your lucky stars that we are seeing any improvements at all, let alone MASSIVE improvements in crunching per watt.


For $1000 a card, I would expect to see a very significant increase, bordering on, if not actually, massive - no luck about it. The power reduction comes with the territory for 28nm, so that's out of the equation. What is left on the compute side is a 30% improvement, achieved by the 30% increase in GPU clocks.

From a compute angle, is it worth dropping £1000 on a card that - essentially - has only increased its clocks compared to the 580? I very much doubt it. In any case, NVidia's supply of 28nm parts is barely adequate at best, so a high-priced 690 goes along with that, and it's likely to stay that way for a good while until 28nm supply improves.

There is little doubt that they have produced a winner for gaming - it's a beast for sure, and it's going to "win" this round. I doubt, though, that there will be many gamers, even the hard-core "I just want the fastest" players, who will drop the money for this. $1000 is a step too far, and I believe it will over time result in a real push-back on price - it's way too much when the mid-range cards will nail any game going, let alone in SLI.

Fingers crossed the project team can pull a rabbit out of the hat as far as GPUGRID is concerned - but it's not looking great at present, at least not for $1000 it isn't.

Regards
Zy
5pot
Message 24665 - Posted: 1 May 2012, 0:53:24 UTC
Last modified: 1 May 2012, 1:09:23 UTC

The only "issue" I have with the new series, is that it will be on boost 100% of the time, with no way to change it. The card uses 1.175 v and runs at 1105 Mhz in boost (specific to each card) with the amount of stress we put these things through, and that Maxwell will not be out til 2014 I actually paid EVGA $25 to.extend the 3 year to 5. Plan on having these at LEAST til 2015, since i will have both cards be 600 series, bought one and step uped a 570. Whenever Maxwell or 7xx series comes out ill buy more, but these will be in one system or another for quite some time. Even though temps at 80% utilization are 48-50, I'm not taking any chances with that high of a voltage 24/7/365

EDIT why does everyone keep saying the clock is.faster? The core and shader clock.is the same. Since we use shader clock its actually slower at 1.1 Ghz compares to what 1.5Ghz on.the 580. And the 680 is 500, the 690 is $1000

EDIT AGAIN: If you already own say 5 580's or whatever, AND live in a place with high electricity, considering used cards can still get roughly $250, you MAY actually be able to recover the costs of electricity alone, let alone the increased throughput. AGAIN, the SHADER CLOCK is 39.% SLOWER, not faster. 1.1Ghz Shader on 680 vs 1.544Ghz on the 580 (core 2x). CORE clock is irrelevant to us. Am I missing something here?
skgiven (volunteer moderator)
Message 24666 - Posted: 1 May 2012, 10:18:00 UTC - in response to Message 24665.  

With the Fermi series, the shaders ran at twice the GPU core clock. I guess people presume this is still the case with Kepler.

It's possible that some software will turn up that enables you to turn the turbo off, though I expect many would want it to stay on.
Can the voltage not be lowered using MSI Afterburner or similar?
1.175 V seems way too high to me; my GTX 470 @ 680 MHz is sitting at 1.025 V (73 degC at 98% GPU load).

I think the scientific research methods would need to change in order to increase utilization of the shaders. I'm not sure that is feasible, or worth the effort.
While it would demonstrate adaptability by the group, it might not increase scientific accuracy, or it might require so much effort that it proves to be too much of a distraction. It might not even work, or could be counterproductive. Still, given that these cards are going to be the mainstream GPUs for the next couple of years, a methodology rethink might be worth investigating.

Not having a Kepler or tasks for one, I could only speculate on where the calculations are taking place. It might be the case that a bit more is now done on the GPU core and somewhat less on the shaders.

Anyway, it's up to the developers and researchers to get as much out of the card as they can. It's certainly in their interests.
Retvari Zoltan
Message 24667 - Posted: 1 May 2012, 10:36:43 UTC - in response to Message 24665.  

Why does everyone keep saying the clock is faster? The core and shader clocks are the same. Since we use the shader clock, it's actually slower: 1.1 GHz compared to roughly 1.5 GHz on the 580.
....
AGAIN, the SHADER CLOCK is lower, not higher: 1.1 GHz on the 680 vs 1.544 GHz on the 580 (core x2). The CORE clock is irrelevant to us. Am I missing something here?

As a consequence of the architectural changes (improvements, one might say), the new shaders in the Kepler chip can do the same amount of work at the core clock as the Fermi's shaders did at double the core clock. That's why Kepler can be more power efficient than Fermi (and because of the 28nm lithography, of course).
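
To put rough numbers on that, here is a back-of-the-envelope sketch using the published shader counts and clocks, and assuming each shader retires one single-precision FMA (two flops) per clock:

#include <cstdio>

int main() {
    // Published specs: GTX 580 has 512 CUDA cores at a 1544 MHz shader clock,
    // the GTX 680 has 1536 CUDA cores at a 1006 MHz (base) clock.
    const double gtx580_gflops = 2.0 * 512  * 1544e6 / 1e9;  // ~1581 GFLOPS
    const double gtx680_gflops = 2.0 * 1536 * 1006e6 / 1e9;  // ~3090 GFLOPS

    printf("GTX 580 peak SP: %.0f GFLOPS\n", gtx580_gflops);
    printf("GTX 680 peak SP: %.0f GFLOPS\n", gtx680_gflops);
    printf("Peak ratio 680/580: %.2f\n", gtx680_gflops / gtx580_gflops);
    return 0;
}

On paper the GTX 680 has roughly twice the peak single-precision throughput of the GTX 580, so the measured ~30% gain suggests only about a third of its shaders are being kept busy - which is exactly the concern above.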
5pot
Message 24668 - Posted: 1 May 2012, 12:43:34 UTC

No, voltage adjustment does not affect this card whatsoever. Some say you can limit it by turning down the power target, but since we put a different kind of load on the chip, at 80% utilization mine stays on boost with only a 60% power load. I've tried offsetting down to the base clock (-110), but the voltage was still at 1.175.

It bothers me a lot too. I mean, my temps are around 50 C, but as I said before, this is why I paid EVGA another $25 to extend the warranty to 5 years. If it does eventually bust, it won't be my fault.
skgiven (volunteer moderator)
Message 24669 - Posted: 1 May 2012, 17:08:35 UTC - in response to Message 24668.  

Perhaps in a month or so EVGA Precision will include an update that allows you to change the voltage, or EVGA will release a separate tool that does.
]{LiK`Rangers`
Message 24670 - Posted: 1 May 2012, 19:01:17 UTC - in response to Message 24669.  

Just skimming this, I'm getting a lot of mixed signals. I read that there's a 50% increase on the 680, and also that the coding for the 680 is almost not worth it. While I know it's only just come out, should I be waiting for a 600-series card or not?
Zydor
Message 24671 - Posted: 1 May 2012, 19:25:45 UTC - in response to Message 24670.  
Last modified: 1 May 2012, 19:31:33 UTC

..... should I be waiting for a 600-series card or not?


That's the $64,000 question :)

It's built as a gamer's card, not a compute card, and that's the big change from previous NVidia iterations, where comparable gaming and compute performance increases were almost a given - not on this one, nor, it seems likely, on the 690. The card also has abysmal to appalling double-precision capability, and whilst that's not required here, it does cut off some BOINC projects.

If it's gaming, it's almost a no-brainer if you are prepared to suck up the high price; it's a gaming winner for sure.

If it's compute usage, there hangs the question mark. It seems unlikely that it will perform well compared with older offerings, given the asking price and the fact that the architecture does not lend itself to compute applications. The project team have been beavering away to see what they can come up with. The 580 was built on 40nm and the 680 is built on 28nm, yet early indications point to only a 50% increase over the 580 - which, like for like, given the 40nm to 28nm switch, reflects the design change and the concentration on gaming rather than compute.

Don't take it all as doom and gloom, but approach 680/690 compute with healthy caution until real-world testing comes out, so your expectations can be tested and the real-world results compared with what you want.

Not a straight answer, because it's new territory - an NVidia card built for gaming that appears to "ignore" compute. Personally, I am waiting to see the project team's results, because if these guys can't get it to deliver compute at a level commensurate with the asking price and the change from 40nm to 28nm, no one can. I suggest you wait for the test and development results from the project team, then decide.

Regards
Zy
5pot
Message 24672 - Posted: 1 May 2012, 20:30:30 UTC

Don't know if this is relevant for what we do, but someone just posted this on NVIDIA forums:

It seems that integer multiply-add (IMAD) on GTX 680 runs 6x slower than in single precision floating point (FFMA). Apparently, only 32 cores out of 192 on each SM can do it.

A power user from Berkeley wrote this. AGAIN, I DON'T KNOW IF IT'S CORRECT OR RELEVANT FOR WHAT WE DO, BUT CONSIDERING THE TOPIC IS COMPUTE CAPABILITY, I FIGURED I WOULD POST IT.
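
If anyone wants to sanity-check that claim, a rough microbenchmark along these lines would do it. To be clear, this is just an illustrative sketch of my own (not the Berkeley user's code), and the launch shape and iteration count are arbitrary:

#include <cstdio>
#include <cuda_runtime.h>

// Dependent chains of integer multiply-adds and float fused multiply-adds;
// the loop body is what the compiler should turn into IMAD / FFMA instructions.
__global__ void imad_kernel(int *out, int a, int b, int iters) {
    int x = threadIdx.x;
    for (int i = 0; i < iters; ++i) x = x * a + b;   // integer multiply-add
    out[blockIdx.x * blockDim.x + threadIdx.x] = x;
}

__global__ void ffma_kernel(float *out, float a, float b, int iters) {
    float x = (float)threadIdx.x;
    for (int i = 0; i < iters; ++i) x = x * a + b;   // single-precision fused multiply-add
    out[blockIdx.x * blockDim.x + threadIdx.x] = x;
}

int main() {
    const int blocks = 256, threads = 256, iters = 1 << 20;
    int   *d_i; cudaMalloc(&d_i, blocks * threads * sizeof(int));
    float *d_f; cudaMalloc(&d_f, blocks * threads * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start); cudaEventCreate(&stop);
    float ms_imad = 0.0f, ms_ffma = 0.0f;

    cudaEventRecord(start);
    imad_kernel<<<blocks, threads>>>(d_i, 3, 7, iters);
    cudaEventRecord(stop); cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms_imad, start, stop);

    cudaEventRecord(start);
    ffma_kernel<<<blocks, threads>>>(d_f, 3.0f, 7.0f, iters);
    cudaEventRecord(stop); cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms_ffma, start, stop);

    printf("IMAD: %.1f ms, FFMA: %.1f ms, ratio %.1fx\n", ms_imad, ms_ffma, ms_imad / ms_ffma);
    cudaFree(d_i); cudaFree(d_f);
    return 0;
}

On a card where only a subset of the cores can issue integer multiply-adds, the IMAD kernel should take several times longer than the FFMA one.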