Message boards : News : Tests on GTX680 will start early next week [testing has started]

GDF (Joined: 14 Mar 07, Posts: 1958, Credit: 629,356, RAC: 0)
Sorry guys, big changes over here in the lab and we are a bit busy, so I could not find the time to upload the new application. One of the changes is my machine: first we were compiling on Fedora 10, now we will be compiling on Fedora 14. If you are running an earlier release it could be a problem. Also, I am having problems with the driver for the GTX680 on Linux.

gdf

(Joined: 8 Mar 12, Posts: 411, Credit: 2,083,882,218, RAC: 0)
Thanks for the update, always appreciated. I've read the Linux drivers are pretty shoddy as well; the Windows ones aren't too bad, but still not great unfortunately. Wish you the best of luck, I know you guys want to get it out.

One question when you get the time: if I'm correct this app will be CUDA 4.2, but in another thread you (or one of you) mentioned CUDA 5. Any big changes that will affect this project down the road?

GDF (Joined: 14 Mar 07, Posts: 1958, Credit: 629,356, RAC: 0)
This will be CUDA 4.2; if I mentioned CUDA 5, it was by mistake. Later on we will also probably drop CUDA 3.1 in favor of CUDA 4, to make sure that people don't need the latest driver version.

gdf

(Joined: 22 Nov 09, Posts: 114, Credit: 589,114,683, RAC: 0)
> This will be CUDA 4.2; if I mentioned CUDA 5, it was by mistake.

If CUDA 3.1 is dropped, will this affect those of us with older cards, such as an 8800 GT and a GTX 460? Thanks.

skgiven (Joined: 23 Apr 09, Posts: 3968, Credit: 1,995,359,260, RAC: 0)
CUDA 4.2 comes in drivers which support cards as far back as the GeForce 6 series; of course, GeForce 6 and 7 are not capable of contributing to GPUGrid. So the question might be: will GeForce 8 series cards still be able to contribute? I think these and other CC1.1 cards are overdue for retirement from this project, and I suspect that CUDA 4.2 tasks will run worse on CC1.1 cards than the current tasks do, increasing the likelihood of retirement. While CC1.1 cards will perform less well, Fermi and Kepler cards will perform significantly better.

There isn't much information on CUDA 4.2, but CUDA 4.1 requires driver 286.19 on Windows and 285.05.33 on Linux. I think CUDA 4.2 support arrived with the non-recommended 295.x drivers; on one of my GTX 470s (295 driver) BOINC says it supports CUDA 4.2, while the other (Linux, 280.13) says 4.0. For CUDA 4.2 development, NVidia presently recommends the 301.32 developer drivers for Windows and the 295.41 drivers for Linux, plus the 4.2.9 toolkit (Fedora14_x64, for example).

I would expect the high-end GTX 200 series cards (CC1.3) to still be supported by GPUGrid, but I don't know what the performance would be and it's not my decision. I would also expect support for CC1.1 cards to be dropped, but we will have to wait and see.

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
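[Editor's note] For anyone unsure which CUDA level their card and driver actually report, here is a minimal sketch, not part of the GPUGrid app: it assumes a local CUDA toolkit install and uses only standard runtime calls to print each GPU's compute capability and the driver/runtime CUDA versions.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVersion = 0, runtimeVersion = 0;
    cudaDriverGetVersion(&driverVersion);    // highest CUDA version the installed driver supports
    cudaRuntimeGetVersion(&runtimeVersion);  // CUDA version of the runtime this binary was built with
    printf("Driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
           driverVersion / 1000, (driverVersion % 100) / 10,
           runtimeVersion / 1000, (runtimeVersion % 100) / 10);

    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // Compute capability is what app requirements refer to:
        // CC 1.1 = most GeForce 8/9 cards, CC 1.3 = high-end GTX 200,
        // CC 2.x = Fermi, CC 3.0 = Kepler (GTX 680).
        printf("Device %d: %s, compute capability %d.%d\n",
               dev, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```

Build with `nvcc cudainfo.cu -o cudainfo` (file name is arbitrary) and run it; BOINC's startup messages report the same driver-side CUDA level, which is what the 4.2 vs 4.0 readings above reflect.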

(Joined: 8 Mar 12, Posts: 411, Credit: 2,083,882,218, RAC: 0)
Someone's running betas...

(Joined: 13 Sep 10, Posts: 5, Credit: 17,517,835, RAC: 0)
http://www.gpugrid.net/show_host_detail.php?hostid=108890 :-)

Butuz

(Joined: 8 Mar 12, Posts: 411, Credit: 2,083,882,218, RAC: 0)
Not entirely sure why you posted that individual's host ID; the app page still says nothing new is out for beta testing. Hoping this means they finally got their Linux drivers working properly and are finally testing in-house. Maybe tomorrow?

EDIT: Tried to grab some on Windows, and still none available. Someone is definitely grabbing and returning results, though.

Retvari Zoltan (Joined: 20 Jan 09, Posts: 2380, Credit: 16,897,957,044, RAC: 0)
> I think it means the new app will be 17% faster on a GTX580, and a GTX680 on the new app will be 50% faster than a GTX580 on the present app. New app on GTX 580: 115 ns/day.

Actually this is the worst news we could have regarding the GTX 680's shader utilization. My bad feeling about the GTX 680 has come true.

150ns/115ns = 1.30434, so there is around a 30.4% performance improvement over the GTX 580. But this improvement comes only from the higher GPU clock of the GTX 680, because the clock speed of the GTX 680 is 30.3% higher than the GTX 580's (1006MHz/772MHz = 1.3031).

All in all, only 1/3 of the GTX 680's shaders (the same number as the GTX 580 has) can be utilized by the GPUGrid client at the moment. It would be nice to know what is limiting the performance. As far as I know, the GPU architecture is to blame, so the second bad news is that the shader utilization will not improve in the future.
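[Editor's note] Laying out the arithmetic above in one comparison (the ns/day and clock figures are those quoted in the post):

```latex
\[
\frac{150~\text{ns/day}}{115~\text{ns/day}} \approx 1.304
\qquad\text{vs.}\qquad
\frac{1006~\text{MHz}}{772~\text{MHz}} \approx 1.303
\]
```

That is, the measured speed-up tracks the core-clock ratio almost exactly, which is what drives the under-utilization argument here.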

(Joined: 8 Mar 12, Posts: 411, Credit: 2,083,882,218, RAC: 0)
I would like to know what limits performance as well, but the shader clock speed is actually lower. Remember you have to double the older cards' core clock to get the shader clock, so the 680 is at about 1.1 GHz on boost, while a stock 580 is 772 MHz x 2 for the shader clock. It's also more efficient: running a 3820 @ 4.3 GHz on WCG, plus Einstein with the GPU at 80% utilization, this system currently only uses 300 W.

(Joined: 13 Sep 10, Posts: 5, Credit: 17,517,835, RAC: 0)
> Actually this is the worst news we could have regarding the GTX 680's shader utilization. My bad feeling about the GTX 680 has come true.

I think you are wrong. You are looking at it totally the wrong way, concentrating on the negatives rather than the positives.

1. The card is purposefully designed not to excel at compute applications. This is a design goal for NVidia: they designed it to play games, NOT crunch numbers. 95% of people buy these cards to play games. The fact that there is any improvement at all over the 5xx series cards on GPUGRID is a TOTAL BONUS for us, and I think a testament to the hard work of the GPUGRID developers and testers rather than anything NVidia has done.

2. It looks like we are going to get a 30.4% performance increase at GPUGRID and at the same time a 47% drop in power usage (and thus a drop in heat and noise) on a card that is purposefully designed to be awful at scientific computing. And you are not happy with that?

I think you should count your lucky stars we are seeing any improvements at all, let alone MASSIVE improvements in crunch per watt.

My 2p anyway.

Butuz
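[Editor's note] Taking the figures quoted in this post at face value (the 47% power reduction is the poster's number, not an official TDP comparison), the implied improvement in work per watt would be roughly:

```latex
\[
\frac{\text{throughput ratio}}{\text{power ratio}} = \frac{1.304}{1 - 0.47} \approx 2.5\times
\]
```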

Zydor (Joined: 8 Feb 09, Posts: 252, Credit: 1,309,451, RAC: 0)
> I think you should count your lucky stars we are seeing any improvements at all, let alone MASSIVE improvements in crunch per watt.

For $1000 a card, I would expect to see a very significant increase, bordering on, if not actually, massive; no luck about it. The power reduction comes with the territory for 28nm, so that's out of the equation. What is left on the compute side is a 30% improvement, achieved by the 30% increase in GPU clock. From a compute angle, is it worth dropping £1000 on a card that, essentially, has only increased its clocks compared to the 580? I very much doubt it. In any case NVidia's supply of 28nm is barely adequate at best, so a high-priced 690 goes along with that, and it's likely to stay that way for a good while until 28nm supply improves.

There is little doubt that they have produced a winner for gaming; it's a beast for sure, and is going to "win" this round. I doubt, though, that many gamers, even the hard-core "I just want the fastest" players, will drop the money for this. $1000 is a step too far, and I believe it will over time result in a real push-back on price; it's way too much when the mid-range cards will nail any game going, let alone in SLI.

Fingers crossed the project team can pull the cat out of the bag as far as GPUGRID is concerned, but it's not looking great at present, at least not for $1000.

Regards
Zy

(Joined: 8 Mar 12, Posts: 411, Credit: 2,083,882,218, RAC: 0)
The only "issue" I have with the new series, is that it will be on boost 100% of the time, with no way to change it. The card uses 1.175 v and runs at 1105 Mhz in boost (specific to each card) with the amount of stress we put these things through, and that Maxwell will not be out til 2014 I actually paid EVGA $25 to.extend the 3 year to 5. Plan on having these at LEAST til 2015, since i will have both cards be 600 series, bought one and step uped a 570. Whenever Maxwell or 7xx series comes out ill buy more, but these will be in one system or another for quite some time. Even though temps at 80% utilization are 48-50, I'm not taking any chances with that high of a voltage 24/7/365 EDIT why does everyone keep saying the clock is.faster? The core and shader clock.is the same. Since we use shader clock its actually slower at 1.1 Ghz compares to what 1.5Ghz on.the 580. And the 680 is 500, the 690 is $1000 EDIT AGAIN: If you already own say 5 580's or whatever, AND live in a place with high electricity, considering used cards can still get roughly $250, you MAY actually be able to recover the costs of electricity alone, let alone the increased throughput. AGAIN, the SHADER CLOCK is 39.% SLOWER, not faster. 1.1Ghz Shader on 680 vs 1.544Ghz on the 580 (core 2x). CORE clock is irrelevant to us. Am I missing something here? |

skgiven (Joined: 23 Apr 09, Posts: 3968, Credit: 1,995,359,260, RAC: 0)
With the Fermi series the shaders were twice as fast as the GPU core; I guess people presume this is still the case with Kepler.

It's possible that some software will turn up that lets you turn turbo off, though I expect many would want it to stay on. Can the voltage not be lowered using MSI Afterburner or similar? 1.175 V seems way too high to me; my GTX 470 @ 680 MHz is sitting at 1.025 V (73°C at 98% GPU load).

I think the scientific research methods would need to change in order to increase utilization of the shaders. I'm not sure that is feasible, or worthwhile. While it would demonstrate adaptability by the group, it might not increase scientific accuracy, or it might require so much effort that it proves to be too much of a distraction. It might not even work, or could be counterproductive. Given that these cards are going to be the mainstream GPUs for the next couple of years, though, a methodology rethink might be worth investigating.

Not having a Kepler or tasks for one, I can only speculate on where the calculations are taking place. It might be the case that a bit more is now done on the GPU core and somewhat less on the shaders. Anyway, it's up to the developers and researchers to get as much out of the card as they can. It's certainly in their interests.

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help

Retvari Zoltan (Joined: 20 Jan 09, Posts: 2380, Credit: 16,897,957,044, RAC: 0)
> Why does everyone keep saying the clock is faster? The core and shader clock are the same. Since we use the shader clock, it's actually slower: about 1.1 GHz compared to roughly 1.5 GHz on the 580.

As a consequence of the architectural changes (improvements, let's say), the new shaders in the Kepler chip can do the same amount of work per core clock as the Fermi shaders did at the doubled (hot) clock. That's why Kepler can be more power efficient than Fermi (and because of the 28nm lithography, of course).
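[Editor's note] For context, the published shader counts and clocks (GTX 580: 512 shaders at a 1544 MHz hot clock; GTX 680: 1536 shaders at a 1006 MHz base clock, no hot clock) give a paper throughput ratio of about:

```latex
\[
\frac{1536 \times 1006~\text{MHz}}{512 \times 1544~\text{MHz}} \approx 1.95
\]
```

So a measured ~1.30x on the new app would correspond to roughly two thirds of that theoretical gain being realised, which is the gap this exchange is about.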

(Joined: 8 Mar 12, Posts: 411, Credit: 2,083,882,218, RAC: 0)
No, the voltage control does not affect this card whatsoever. Some say you can limit it by lowering the power target, but since we put a different kind of load on the chip, at 80% utilization mine is on boost with only a 60% power load. I've tried offsetting down to the base clock (-110) but the voltage was still at 1.175 V. It bothers me a lot too; I mean, my temps are around 50°C, but as I said before, this is why I paid EVGA another $25 to extend the warranty to 5 years. If it does eventually bust, it wouldn't be my fault.

skgiven (Joined: 23 Apr 09, Posts: 3968, Credit: 1,995,359,260, RAC: 0)
Perhaps in a month or so EVGA Precision will include an update that allows you to change the voltage, or EVGA will release a separate tool that does.

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help

(Joined: 5 Jan 12, Posts: 117, Credit: 77,256,014, RAC: 0)
Just skimming this, I'm getting a lot of mixed signals. I read that there's a 50% increase on the 680, and also that the coding on the 680 is almost not worth it. While I know it's only just come out, should I be waiting for a 600 or not?

Zydor (Joined: 8 Feb 09, Posts: 252, Credit: 1,309,451, RAC: 0)
> ... should I be waiting for a 600 or not?

That's a $64,000 question :)

It's built as a gamers' card, not a compute card, and that's the big change from previous NVidia iterations, where comparable gaming and compute performance increases were almost a given. Not on this one, nor, it seems likely, on the 690. The card also has abysmal-to-appalling double precision capability, and while that's not required here, it does cut off some BOINC projects.

If it's for gaming, it's almost a no-brainer if you are prepared to suck up the high price; it's a gaming winner for sure. If it's for compute usage, there hangs the question mark. It seems unlikely that it will perform well in a comparative sense to older offerings given the asking price, and the fact that the architecture does not lend itself to compute applications. The project team have been beavering away to see what they can come up with. The 580 was built on 40nm, the 680 is built on 28nm, but early indications only show a 50% increase over the 580; that, like for like, given the 40nm to 28nm switch, points to the design change and the concentration on gaming rather than compute.

Don't take it all as doom and gloom, but approach 680/690 compute with healthy caution until real-world testing comes out, so your expectations can be checked against real-world results. Not a straight answer, because it's new territory: an NVidia card built for gaming that appears to "ignore" compute. Personally I am waiting to see the project team's results, because if these guys can't get it to deliver compute at a level commensurate with the asking price and the change from 40nm to 28nm, no one can.

Suggest you wait for the test and development results from the project team, then decide.

Regards
Zy

(Joined: 8 Mar 12, Posts: 411, Credit: 2,083,882,218, RAC: 0)
Don't know if this is relevant for what we do, but someone just posted this on the NVIDIA forums:

> It seems that integer multiply-add (IMAD) on the GTX 680 runs 6x slower than single precision floating point (FFMA). Apparently, only 32 cores out of 192 on each SM can do it.

A power user from Berkeley wrote this. AGAIN, I DON'T KNOW IF IT'S CORRECT OR RELEVANT FOR WHAT WE DO, BUT CONSIDERING THE TOPIC IS ABOUT COMPUTE CAPABILITIES, I FIGURED I WOULD POST IT.
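[Editor's note] For anyone who wants to check the quoted claim themselves, here is a rough microbenchmark sketch, not a definitive test: it times dependent chains of integer multiply-adds against float fused multiply-adds using CUDA events. The kernel names, iteration count and launch sizes are arbitrary, and whether the integer loop really lowers to IMAD instructions depends on the compiler, so treat the ratio as indicative only.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Dependent chain of unsigned integer multiply-adds (intended to exercise IMAD throughput).
__global__ void imad_kernel(unsigned *out, int iters) {
    unsigned a = threadIdx.x + 1u, b = blockIdx.x + 3u, c = 7u;
    for (int i = 0; i < iters; ++i)
        c = a * c + b;
    if (c == 42u) *out = c;            // guard so the loop is not optimised away
}

// Dependent chain of single-precision fused multiply-adds (FFMA).
__global__ void ffma_kernel(float *out, int iters) {
    float a = threadIdx.x + 1.0f, b = blockIdx.x + 3.0f, c = 7.0f;
    for (int i = 0; i < iters; ++i)
        c = a * c + b;
    if (c == 42.0f) *out = c;
}

static float timed_ms(cudaEvent_t start, cudaEvent_t stop) {
    float ms = 0.0f;
    cudaEventSynchronize(stop);        // wait for the timed kernel to finish
    cudaEventElapsedTime(&ms, start, stop);
    return ms;
}

int main() {
    const int iters = 1 << 20, blocks = 64, threads = 256;
    unsigned *di; cudaMalloc(&di, sizeof(unsigned));
    float    *df; cudaMalloc(&df, sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    imad_kernel<<<blocks, threads>>>(di, iters);   // warm-up launch
    cudaDeviceSynchronize();

    cudaEventRecord(start);
    imad_kernel<<<blocks, threads>>>(di, iters);
    cudaEventRecord(stop);
    float ms_imad = timed_ms(start, stop);

    cudaEventRecord(start);
    ffma_kernel<<<blocks, threads>>>(df, iters);
    cudaEventRecord(stop);
    float ms_ffma = timed_ms(start, stop);

    printf("IMAD: %.2f ms  FFMA: %.2f ms  ratio: %.2fx\n",
           ms_imad, ms_ffma, ms_imad / ms_ffma);

    cudaFree(di); cudaFree(df);
    cudaEventDestroy(start); cudaEventDestroy(stop);
    return 0;
}
```

Compile for the card under test (for a GTX 680, `nvcc -arch=sm_30 imad_vs_ffma.cu`) and compare the two timings.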