Message boards : News : Tests on GTX680 will start early next week [testing has started]
We are looking forward to testing the new NVIDIA architecture. We will report on its performance soon, and we really thank an anonymous cruncher for the donation. | |
ID: 24095 | Rating: 0 | rate: / | |
As I am going to do a new PC build later this year, I'll be interested in these results. I have seen several "gamer" oriented reviews that also tested "compute" capabilities, and I was not impressed by the compute results. All compute tests except for one were slower than previous gen cards. Still, I hold out hope that the GTX 680 will perform better than the previous gen cards. | |
ID: 24124 | Rating: 0 | rate: / | |
... I'll be interested in these results. Just like we all are. | |
ID: 24133 | Rating: 0 | rate: / | |
I have a bad feeling | |
ID: 24149 | Rating: 0 | rate: / | |
I have a bad feeling That's on an LLR app, which heavily relies on 64-bit capabilities. It was pretty obvious right from the start that the GK104 design was not planned for that. | |
ID: 24150 | Rating: 0 | rate: / | |
The GTX680 has 100% more flops and 30% more transistors than the GTX580. We would be happy to have something in between those numbers. | |
ID: 24151 | Rating: 0 | rate: / | |
The GTX680 has 100% more flops and 30% more transistors than the GTX580. We would be happy to have something in between those numbers. ??? PG sieve apps, which are still CUDA 2.3, do work. Maybe not using the full potential, but they produce valid results. | |
ID: 24152 | Rating: 0 | rate: / | |
With CUDA 4.2 I think that we will be within the window of performance I indicated before. | |
ID: 24154 | Rating: 0 | rate: / | |
With CUDA 4.2 I think that we will be within the window of performance I indicated before. I thought CUDA 4.2 was still under NDA. Would a CUDA 4.1 app be a reasonable compromise? At least 4.1 is publicly released, so we can beta test. Besides, given all the driver sleep-bug issues, people are not likely to have a 4.2-capable driver installed at the moment. ____________ BOINC blog | |
ID: 24160 | Rating: 0 | rate: / | |
CUDA 4.2 is publicly available from the NVIDIA forums, but not widely advertised. | |
ID: 24162 | Rating: 0 | rate: / | |
We now have two GTX 680s installed locally and are testing the BOINC app. | |
ID: 24210 | Rating: 0 | rate: / | |
Any results? | |
ID: 24234 | Rating: 0 | rate: / | |
Due to the Easter holidays we have paused for now. The BOINC CUDA 4.2 application is being tested, though. | |
ID: 24260 | Rating: 0 | rate: / | |
Just got my 680 in. Unfortunately I have to keep it on Windows for gaming, and I would like to know whether or not I should attach it to the project yet. I don't want to be returning a whole bunch of invalid or errored results. | |
ID: 24284 | Rating: 0 | rate: / | |
I'm not sure what the situation is. Gianni indicated that he might release a Beta app, and the server status shows 17 beta tasks waiting to be sent. It's been like that for a few days. These might be for Linux only though? | |
ID: 24286 | Rating: 0 | rate: / | |
It appears that they are Linux only. If I wasn't running out of drive space, I would set this rig up to dual-boot, since I now know how to configure it. I don't feel like using a USB install, since I'm running WCG currently. I might go pick up a larger SSD this weekend to accommodate it. | |
ID: 24288 | Rating: 0 | rate: / | |
Just keep Windows. We use Linux because it is easier for us, but we should have a Windows application soon after. | |
ID: 24298 | Rating: 0 | rate: / | |
First off, let me apologize for the "tone" of my written voice, but after spending 6 hours last night trying to install Ubuntu, I can say I HATE the "disgruntled" GRUB. The Windows 7 install refused to play nice with it. I kept getting a "grub-efi failed to install on /target" error, as well as MANY others. I even went to the trouble of disconnecting my Windows SATA connection, but still kept getting the same error on a fresh drive. Due to the fact that it is Easter weekend (and my habit of wanting betas, à la WCG), I have decided to uninstall Windows 7 in order to accomplish my goal. Since this is mainly a crunching rig (0 files stored internally, I keep everything on external encrypted HDDs), and besides the one game I play (which was ruined by a recent "patch"), having Windows does nothing for me ATM. I should have it uninstalled shortly, and hopefully with it gone GRUB will not be so grumpy (and neither will I). | |
ID: 24300 | Rating: 0 | rate: / | |
I'm not sure what the situation is. Gianni indicated that he might release a Beta app, and the server status shows 17 beta tasks waiting to be sent. It's been like that for a few days. These might be for Linux only though? In order to crunch anything at all - beta or otherwise - you need both a supply of tasks and an application to run them with. The standard BOINC applications page still seems to work, even if like me you can't find a link from the redesigned front page. No sign of a Beta application yet for either platform, which may take some time pressure off the OS (re-)installs. | |
ID: 24301 | Rating: 0 | rate: / | |
I was literally getting ready to uninstall before you posted that... Is that because those beta WUs can't be sent to anyone on the designated platforms unless they have a 680, meaning they don't want them going to people who have the other apps but not the proper GPU to run them? | |
ID: 24302 | Rating: 0 | rate: / | |
The beta WUs are from before; they don't go out because there is no beta app at the moment. | |
ID: 24304 | Rating: 0 | rate: / | |
Thank you for the detailed post. MUCH appreciated. | |
ID: 24305 | Rating: 0 | rate: / | |
Any progress? | |
ID: 24398 | Rating: 0 | rate: / | |
Besides that fact, Zoltan, from what I can tell the CPU is basically what's crippling the 680 across the board for every project. | |
ID: 24399 | Rating: 0 | rate: / | |
The Kepler-optimized application is 25% faster than a GTX 580 for a typical WU, regardless of the processor. I don't see why the CPU should have any different impact compared to Fermi. Any progress? | |
ID: 24400 | Rating: 0 | rate: / | |
: ) If they had kept the ROP count at 48, as with the 580, it would have been 50% faster, but 25% sounds good to me. Keep up the good work, guys; can't wait till it's released. | |
ID: 24402 | Rating: 0 | rate: / | |
Compared to the 580, it has 1/3 more cores (1536 vs 1024), but 1/3 fewer ROPs. Actually, a GTX 580 has 512 CUDA cores and a GTX 680 has 1536. CUDA is different from OpenCL; on several OpenCL projects a high CPU requirement appears to be the norm. I would expect a small improvement when using PCIe3 with one GPU. If you have two GTX 680s in a PCIe2 system that drops from PCIe2 x16 to PCIe2 x8, then the difference would be much more noticeable compared to a board supporting two PCIe3 x16 lanes. If you're going to get 3 or 4 PCIe3-capable GPUs, then it would be wise to build a system that properly supports PCIe3. The difference would be around 35% of one card, on a PCIe3 x16/x16/x8 system compared to a PCIe2 x8/x8/x4 system. For one card it's not really worth the investment. If we are talking 25% faster at 20% less power, then in terms of performance per Watt the GTX 680 is ~50% better than a GTX 580. However, that doesn't consider the rest of the system. Of the 300W a GTX 680 system might use, for example, ~146W is down to the GPU; similarly, for a GTX 580 it would be ~183W. The difference is ~37W, so the overall system would use ~11% less power. If the card can do ~25% more work, then the overall system improvement is ~39% in terms of performance per Watt. Add a second or third card to a new 22nm CPU system and factor in the PCIe improvements, and the new system's performance per Watt would be more significant, perhaps up to ~60% more efficient. ____________ FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help | |
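For anyone who wants to sanity-check that system-level arithmetic, here is a minimal sketch; the wattage split and the ~25% speedup are the rough estimates quoted above, not measurements:

```python
# Rough system-level performance-per-Watt estimate. The GPU draws (~146 W / ~183 W),
# the ~154 W for the rest of the box and the ~25% speedup are the estimates from the
# post above, not measured values.

def system_perf_per_watt(gpu_watts, rest_watts, relative_speed):
    """Return total system draw and work done per Watt (relative units)."""
    total = gpu_watts + rest_watts
    return total, relative_speed / total

gtx580_total, gtx580_ppw = system_perf_per_watt(183, 154, 1.00)  # baseline GTX 580 box
gtx680_total, gtx680_ppw = system_perf_per_watt(146, 154, 1.25)  # GTX 680 box, ~25% faster

print(f"GTX 580 system: {gtx580_total} W, GTX 680 system: {gtx680_total} W")
print(f"Power saving: {1 - gtx680_total / gtx580_total:.0%}")        # ~11%
print(f"System perf/Watt gain: {gtx680_ppw / gtx580_ppw - 1:.0%}")   # ~40% (rounded to ~39% above)
```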
ID: 24410 | Rating: 0 | rate: / | |
The Kepler-optimized application is 25% faster than a GTX 580 for a typical WU, regardless of the processor. It sounds promising, and a little disappointing at the same time (as expected). I don't see why the CPU should have any different impact compared to Fermi. Because there is already a 25-30% variation in GPU usage between different types of workunits on my GTX 580. For example, NATHAN_CB1 runs at 99% GPU usage while NATHAN_FAX4 runs at only 71-72%. I wonder how well the GPUGrid client could feed a GPU with as many CUDA cores as the GTX 680 has, when it can feed a GTX 580 to run at only 71-72% (and the GPU usage drops as I raise the GPU clock, so the performance is CPU and/or PCIe limited). To be more specific, I'm interested in what the GPU usage of a NATHAN_CB1 and a NATHAN_FAX4 is on a GTX 680 (and on a GTX 580 with the new client). | |
ID: 24411 | Rating: 0 | rate: / | |
I brought the core count up to 1024, instead of 512, since I kept trying to figure out the math for what the improvement was going to be. Meaning, if I doubled the core count, I could do away with the shader clock, as they did in Kepler (I know Kepler was quadrupled, but in terms of performance it was just doubled). The math SEEMED to work out OK. So I was working with 1024 cores at a core clock of 772, which meant 1/3 more cores on the 680 than the 580 (adjusted for the doubled shader frequency). This led to a difference in shader clock of 23.2%, faster for Kepler (772/1005). Which meant (to me and my zero engineering knowledge) a benefit of 56.6% (increase in the number of cores * increase in adjusted frequency). However, since there are 1/3 fewer ROPs, that got me down to 23.4% (but if I'm not mistaken the ROP frequency is calculated off the core clock, and I learned this after adjusting for the 570, 480 and 470; once I learned the ROP frequency I quit trying). | |
ID: 24414 | Rating: 0 | rate: / | |
One more thing. I'm assuming Zoltan meant, as he already explained in relation to GPUGrid WUs, that like the Einstein apps we may have hit a "wall" where the CPU matters more than the GPU once you reach a certain point. As per his description, some tasks are dependent on a fast CPU; someone in another forum is failing tasks because he has a 470 or a 480 (can't remember which) in a Xeon @ 2.5, which is currently causing him issues. | |
ID: 24415 | Rating: 0 | rate: / | |
I have a 550 Ti and my CPU is an AMD Athlon X4 underclocked to 800 MHz, and GPUGrid uses only 10% of my CPU (running in Linux). | |
ID: 24417 | Rating: 0 | rate: / | |
FYI. Half of your tasks error out. | |
ID: 24419 | Rating: 0 | rate: / | |
Oh, and it's not about whether or not they'll finish; it's about whether or not the CPU will bottleneck the GPU. I reference Einstein because, as mentioned, anything above a 560 Ti 448 will finish the task in roughly the same GPU time, and what makes the difference in how fast you finish the WU is how fast your CPU is. This can SEVERELY cripple performance. | |
ID: 24420 | Rating: 0 | rate: / | |
CUDA 4.2 is publicly available from the NVIDIA forums, but not widely advertised. Could you mention which of the new drivers allow using it without the sleep bug? | |
ID: 24421 | Rating: 0 | rate: / | |
The newest R300 series doesn't have the sleep bug, but it is a beta. CUDA 4.2 came out with 295, so it's either the beta or wait till a WHQL driver is released. The beta version is 301.24. Or, if possible in your situation, you can tell Windows to never turn off the display. This prevents the sleep bug, and you can do whatever you want. | |
ID: 24423 | Rating: 0 | rate: / | |
Thanks. Now I'll go look for GTX680 specs to see if it will fit the power limits for my computer room, and the length limits for my computers. | |
ID: 24424 | Rating: 0 | rate: / | |
Oh, if you're possibly getting a 680, use 301.10 | |
ID: 24430 | Rating: 0 | rate: / | |
Thanks. Now I'll go look for GTX680 specs to see if it will fit the power limits for my computer room, and the length limits for my computers. The GTX 680 exceeds both the power limit and the length limit for my computers. I'll have to look for later members of the GTX6nn family instead. | |
ID: 24433 | Rating: 0 | rate: / | |
Just a friendly reminder about what you're getting with anything less than a 680/670: the 660 Ti will be based off of the 550 Ti's board. Depending on each user's power requirements, I would HIGHLY recommend waiting for results from said boards, or would recommend the 500 series. Since a 660 Ti will most likely have half the cores and a 15% decrease in clock compared to the 580, this could severely cripple the other 600-series cards as far as crunching is concerned. Meaning a 560 Ti 448 and above will, IMO (I can't stress this enough), probably be able to beat a 660 Ti when it's released. Again, IMHO, and as far as speed is concerned. Performance/watt may be a different story, but a 660 Ti will be based off of the 550 Ti's specs (keep that in mind). | |
ID: 24436 | Rating: 0 | rate: / | |
Sorry, meant to say half the cores of the 680 in prior statement. Again, this new design is not meant for crunching, and all boards are effectively "1" off, so 660ti = 550ti BOARD. | |
ID: 24437 | Rating: 0 | rate: / | |
My GTX 680 didn't compute any work from GPUGrid. | |
ID: 24450 | Rating: 0 | rate: / | |
My GTX 680 didn't compute any work from GPUGrid. Please be patient. The current GPUGrid application doesn't support the GTX 680. A new version, which will support it, is under construction. | |
ID: 24451 | Rating: 0 | rate: / | |
Compared to the current production application running on a GTX 580, the new app is 17% faster on the same GTX 580 and 50% faster on a GTX 680. | |
ID: 24457 | Rating: 0 | rate: / | |
Want NOW | |
ID: 24458 | Rating: 0 | rate: / | |
50%!!!!!!!!!!!!!! WOW!!!!!! Great work guys!!!! Waiting "patiently"..... :) | |
ID: 24459 | Rating: 0 | rate: / | |
Compared to the current production application running on a GTX 580, the new app is 17% faster on the same GTX 580 and 50% faster on a GTX 680. I don't think that means the GTX 680 is 50% faster than a GTX 580! I think it means the new app will be 17% faster on a GTX 580, and a GTX 680 on the new app will be 50% faster than a GTX 580 on the present app. That would make the GTX 680 ~28% faster than a GTX 580 on the new app. In terms of performance per Watt, that would push it to ~160% compared to the GTX 580, or twice the performance per Watt of a GTX 480 ;) ____________ FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help | |
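A quick back-of-the-envelope check of those ratios; the 244 W / 195 W figures are the published board powers and are my assumption here, not numbers from this thread:

```python
# Both quoted speedups are relative to the *old* app on a GTX 580, so the
# card-vs-card gain on the new app is their ratio. The 244 W / 195 W TDPs
# are assumed reference board powers, not measurements from this thread.

old_app_580 = 1.00   # baseline: production app on a GTX 580
new_app_580 = 1.17   # new app on the same GTX 580 (+17%)
new_app_680 = 1.50   # new app on a GTX 680 (+50% vs the baseline)

card_gain = new_app_680 / new_app_580   # ~1.28 -> GTX 680 ~28% faster, like for like
ppw_vs_580 = card_gain * (244 / 195)    # scale by TDP ratio -> ~1.60, i.e. ~160%

print(f"GTX 680 vs GTX 580 on the new app: {card_gain:.2f}x")
print(f"Approximate performance per Watt vs the GTX 580: {ppw_vs_580:.0%}")
```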
ID: 24460 | Rating: 0 | rate: / | |
just to give some numbers for clarity: | |
ID: 24462 | Rating: 0 | rate: / | |
Lol. I knew that...... Either way, a lot faster than my 570 that's currently attached! And ~160% of the performance per Watt is amazing. Again, great work guys!!! Still need to find a mobo for Ivy Bridge that supports PCIe 3.0 at 2 x16. | |
ID: 24463 | Rating: 0 | rate: / | |
Wow, that is excellent news. I had been waiting for news on this before committing to my GFX card purchase. You guys rock. Your dedication and speed of reaction is outstanding! | |
ID: 24464 | Rating: 0 | rate: / | |
Also, I have one more question for those in the know. If I run a GTX 680 on a PCIe2 motherboard, will it take a performance hit on that 150% figure? Could this be tested if you have time, GDF? I know it's not a high priority, but it may help people like me who don't have a next-gen motherboard make an informed decision. | |
ID: 24465 | Rating: 0 | rate: / | |
All I can say on the note about the performance hit is that I THINK it won't: PCIe 3.0 allows for 16 GB/s in each direction, and for what we do this is A LOT of bandwidth. From the results that I've seen, which are based on games, the performance increase seems to be only 5-7%; if this is the case, I would ASSUME that there wouldn't be that big of a performance hit. | |
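To put some (made-up) numbers on why the bus rarely dominates for this kind of load, here is a minimal sketch; the 20 MB per step and 50 ms of compute per step are invented placeholders, not figures from the GPUGrid application:

```python
# Illustration only: per-step PCIe traffic vs. compute time. The payload and step
# time below are hypothetical; the point is just that transfers are a small slice
# of each step, so doubling bus bandwidth changes little.

payload_gb   = 20 / 1024   # hypothetical host<->device traffic per step, in GB
step_time_ms = 50.0        # hypothetical GPU compute time per step

for name, gb_per_s in [("PCIe 2.0 x16", 8.0), ("PCIe 3.0 x16", 16.0)]:
    transfer_ms = payload_gb / gb_per_s * 1000
    share = transfer_ms / (transfer_ms + step_time_ms)
    print(f"{name}: ~{transfer_ms:.1f} ms transfer per step, ~{share:.1%} of the step")
```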
ID: 24468 | Rating: 0 | rate: / | |
I don't know whether PCIe3 will make a small difference or not. We are trying it on a PCIe3 motherboard. | |
ID: 24475 | Rating: 0 | rate: / | |
This post, in this thread, speculatively discusses PCIe3 vs PCIe2. | |
ID: 24478 | Rating: 0 | rate: / | |
In regards to PCIe 3.0 running 2 x16: if I am reading this correctly, am I gonna be "forced" to get an SB-E now? I would most likely get the 3930K, since the 3820 isn't unlocked, and that's what I would prefer. http://www.anandtech.com/show/4830/intels-ivy-bridge-architecture-exposed | |
ID: 24482 | Rating: 0 | rate: / | |
Hi, an interesting comparison of NVIDIA vs AMD: GTX 680/580 versus HD 6970/7970. | |
ID: 24485 | Rating: 0 | rate: / | |
Glad we do FP32 | |
ID: 24487 | Rating: 0 | rate: / | |
EDIT. Well, it appears the 3820 can OC to 4.3, which would be plenty for what I need. Wouldn't mind having a 6-core though; 4 extra threads for WUs would be nice, but not mandatory. At $250 at MicroCenter it's quite a nice deal. I've been looking at the 3820 myself. In my opinion, that is the only SB-E to get. Techspot got the 3820 up to 4.625 GHz, and at that speed it performs pretty much as well as a 3960X at 4.4 GHz. To me, it's a no-brainer: a $1000 3960X, a $600 3930K, or a $250 3820 that performs as well as the $1K chip. According to the Microcenter web site, that price is in-store only. Where SB-E will really excel is in applications that are memory intensive, such as FEA and solid modelling, a conclusion I came to as a result of the Techspot review, which tested the 3820 in a real-world usage scenario of SolidWorks. Anyway, IB is releasing on Monday, and it might be worth the wait. Personally, I do not think IB will beat SB-E in memory-intensive applications; however, I'll be looking very closely at the IB reviews. ____________ | |
ID: 24525 | Rating: 0 | rate: / | |
IB is not going to beat ANY SB-E. Its slight performance improvement and energy savings may very well be negated by its ability to OC less than Sandy (from what I've read, anyway). | |
ID: 24526 | Rating: 0 | rate: / | |
I know patience is a virtue, and I REALLY hate to ask, GDF, but........... how's the progress on the beta app coming? | |
ID: 24563 | Rating: 0 | rate: / | |
Better than the ATi version, probably. | |
ID: 24574 | Rating: 0 | rate: / | |
Sorry guys, big changes over here in the lab and we are a bit busy, so I could not find the time to upload the new application. | |
ID: 24586 | Rating: 0 | rate: / | |
Thanks for the update, always appreciated. | |
ID: 24587 | Rating: 0 | rate: / | |
This will be CUDA 4.2. If I mentioned CUDA 5, it was by mistake. | |
ID: 24588 | Rating: 0 | rate: / | |
This will be CUDA 4.2. If I mentioned CUDA 5, it was by mistake. If CUDA 3.1 is dropped, will this affect those of us with older cards, such as an 8800 GT and a GTX 460? Thanks. ____________ | |
ID: 24605 | Rating: 0 | rate: / | |
CUDA 4.2 comes in drivers which support cards as far back as the GeForce 6 series. Of course, GeForce 6 and 7 cards are not capable of contributing to GPUGrid. So the question might be: will GeForce 8 series cards still be able to contribute? | |
ID: 24619 | Rating: 0 | rate: / | |
Someone's running betas....... | |
ID: 24636 | Rating: 0 | rate: / | |
http://www.gpugrid.net/show_host_detail.php?hostid=108890 | |
ID: 24637 | Rating: 0 | rate: / | |
Not entirely sure why you posted that individual's host ID; the app page still says nothing new is out for beta testing. Hoping this means they finally got their Linux drivers working properly and are testing in house. | |
ID: 24638 | Rating: 0 | rate: / | |
I think it means the new app will be 17% faster on a GTX 580, and a GTX 680 on the new app will be 50% faster than a GTX 580 on the present app. new app on GTX 580: 115 ns/day Actually, this is the worst news we could have regarding the GTX 680's shader utilization. My bad feeling about the GTX 680 has come true. 150ns/115ns = 1.30434, so there is around a 30.4% performance improvement over the GTX 580. But this improvement comes only from the higher GPU clock of the GTX 680, because the clock speed of the GTX 680 is 30.3% higher than the GTX 580's (1006MHz/772MHz = 1.3031). All in all, only 1/3 of the GTX 680's shaders (the same number as the GTX 580 has) can be utilized by the GPUGrid client at the moment. It would be nice to know what is limiting the performance. As far as I know, the GPU architecture is to blame, so the second piece of bad news is that the shader utilization will not improve in the future. | |
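Here is the same estimate written out, using only the figures quoted in this thread (115 and 150 ns/day, 772 and 1006 MHz reference clocks, 512 vs 1536 cores):

```python
# If the speedup matches the clock ratio, the extra shaders are adding roughly
# nothing. All inputs are the numbers quoted above, not new measurements.

ns_day_580 = 115.0                     # new app on a GTX 580
ns_day_680 = 150.0                     # new app on a GTX 680
clock_580, clock_680 = 772.0, 1006.0   # MHz, reference clocks
cores_580, cores_680 = 512, 1536

speedup     = ns_day_680 / ns_day_580   # ~1.304
clock_ratio = clock_680 / clock_580     # ~1.303

# Throughput per core per clock, relative to the GTX 580:
per_core_per_clock = speedup / (clock_ratio * cores_680 / cores_580)

print(f"Speedup {speedup:.3f} vs clock ratio {clock_ratio:.3f}")
print(f"Per-core, per-clock throughput vs GTX 580: {per_core_per_clock:.2f}x")  # ~0.33x
```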
ID: 24656 | Rating: 0 | rate: / | |
I would like to know what limits performance as well, but the shader clock speed is actually lower. Remember, you have to double the other cards' core clock to get the shader clock, so the 680 = 1.1 GHz on boost, while the 580 at stock is 772*2 for the shader clock. It's also more efficient: running a 3820 @ 4.3 on WCG, and Einstein with the GPU at 80% utilization, this system currently only uses 300 W. | |
ID: 24659 | Rating: 0 | rate: / | |
Actually, this is the worst news we could have regarding the GTX 680's shader utilization. My bad feeling about the GTX 680 has come true. I think you are wrong. You are looking at it totally the wrong way, concentrating on the negatives rather than the positives. 1. The card is purposefully designed not to excel at compute applications. This is a design goal for NVidia. They designed it to play games, NOT crunch numbers. 95% of people buy these cards to play games. The fact that there is any improvement at all over the 5xx series cards in GPUGRID is a TOTAL BONUS for us - and I think testament to the hard work of the GPUGRID developers and testers rather than anything else NVidia have done. 2. It looks like we are going to get a 30.4% performance increase at GPUGRID and at the same time a 47% drop in power usage (and thus a drop in heat and noise) on a card that is purposefully designed to be awful at scientific computing. And you are not happy with that? I think you should count your lucky stars we are seeing any improvements at all, let alone MASSIVE improvements in crunch per watt used. My 2p anyway. Butuz | |
ID: 24663 | Rating: 0 | rate: / | |
I think you should count your lucky stars we are seeing any improvements at all, let alone MASSIVE improvements in crunch per watt used. For $1000 a card, I would expect to see a very significant increase, bordering on, if not actually, massive - no luck about it. The power reduction comes with the territory for 28nm, so that's out of the equation. What is left on the compute side is a 30% improvement achieved by the 30% improvement in GPU clocks. From a compute angle, is it worth dropping £1000 on a card that - essentially - has only increased its clocks compared to the 580? I very much doubt it. In any case NVidia's supply of 28nm is barely adequate at best, so a high-priced 690 goes along with that, and it's likely to stay that way for a good while until 28nm supply improves. There is little doubt that they have produced a winner for gaming; it's a beast for sure, and is going to "win" this round. I doubt, though, that there will be many gamers, even the hard-core "I just want the fastest" players, who will drop the money for this. $1000 is a step too far, and I believe it will over time result in a real push back on price - it's way too much when the mid-range cards will nail any game going, let alone in SLI. Fingers crossed the project team can pull the cat out of the bag as far as GPUGRID is concerned - but it's not looking great at present, at least not for $1000 it isn't. Regards Zy | |
ID: 24664 | Rating: 0 | rate: / | |
The only "issue" I have with the new series, is that it will be on boost 100% of the time, with no way to change it. The card uses 1.175 v and runs at 1105 Mhz in boost (specific to each card) with the amount of stress we put these things through, and that Maxwell will not be out til 2014 I actually paid EVGA $25 to.extend the 3 year to 5. Plan on having these at LEAST til 2015, since i will have both cards be 600 series, bought one and step uped a 570. Whenever Maxwell or 7xx series comes out ill buy more, but these will be in one system or another for quite some time. Even though temps at 80% utilization are 48-50, I'm not taking any chances with that high of a voltage 24/7/365 | |
ID: 24665 | Rating: 0 | rate: / | |
With the Fermi series the shaders ran at twice the GPU core clock. I guess people presume this is still the case with Kepler. | |
ID: 24666 | Rating: 0 | rate: / | |
Why does everyone keep saying the clock is faster? The core and shader clocks are the same. Since we use the shader clock, it's actually slower at 1.1 GHz compared to roughly 1.5 GHz on the 580. As a consequence of the architectural changes (say, improvements), the new shaders in the Kepler chip can do the same amount of work as the shaders in the Fermi at doubled core clock. That's why Kepler can be more power efficient than Fermi (and because of the 28nm lithography, of course). | |
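To put that architectural point in numbers, a minimal sketch of the usual theoretical FP32 estimate (reference clocks and core counts as quoted in this thread, counting 2 FLOPs per core per cycle for an FMA):

```python
# Fermi shaders ran at twice the core clock; Kepler shaders run at the core clock.
# Raw single-precision throughput is 2 FLOPs per core per cycle times the shader clock.

def fp32_gflops(cores, shader_mhz):
    return 2 * cores * shader_mhz / 1000.0

gtx580 = fp32_gflops(512, 2 * 772)   # Fermi: shader clock = 2 x 772 MHz
gtx680 = fp32_gflops(1536, 1006)     # Kepler: shader clock = core clock

print(f"GTX 580: ~{gtx580:.0f} GFLOPS, GTX 680: ~{gtx680:.0f} GFLOPS")
print(f"Ratio: ~{gtx680 / gtx580:.2f}x")   # ~1.95x, i.e. the '100% more flops' mentioned earlier
```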
ID: 24667 | Rating: 0 | rate: / | |
No, the voltage monitor does not affect this card whatsoever. Some say you can limit it by limiting the power usage, but since we put a different kind of load on the chip, at 80% utilization mine is on boost with only a 60% power load. I've tried offsetting down to the base clock (-110), but the voltage was still at 1.175. | |
ID: 24668 | Rating: 0 | rate: / | |
Perhaps in a month or so EVGA Precision will include an update that allows you to change the voltage, or EVGA will release a separate tool that does. | |
ID: 24669 | Rating: 0 | rate: / | |
Just skimming this I'm getting a lot of mixed signals. I read that there's a 50% increase on the 680, and also that coding for the 680 is almost not worth it. While I know it's just come out, should I be waiting for a 600 or not? | |
ID: 24670 | Rating: 0 | rate: / | |
..... should I be waiting for a 600 or not? That's a $64,000 question :) It's built as a gamer's card, not a compute card, and that's the big change from previous NVidia iterations, where comparable gaming and compute performance increases were previously almost a given - not on this one, nor, it seems likely, on the 690. The card also has abysmal to appalling double precision capability, and whilst that's not required here, it does cut off some BOINC projects. If it's for gaming, it's almost a no-brainer if you are prepared to suck up the high price; it's a gaming winner for sure. If it's for compute usage, there hangs the question mark. It seems unlikely that it will perform well in a comparative sense to older offerings given the asking price, and the fact that the architecture does not lend itself to compute applications. The project team have been beavering away to see what they can come up with. The 580 was built on 40nm, the 680 is built on 28nm, but early indications only indicate a 50% increase over the 580 - that, like for like, given the 40nm to 28nm switch, indicates the design change and concentration on gaming, not compute. Don't take it all as doom and gloom, but approach 680/690 compute with healthy caution until real-world testing comes out, so your expectations can be tested and the real-world result compared with what you want. Not a straight answer, because it's new territory - an NVidia card built for gaming that appears to "ignore" compute. Personally I am waiting to see the project team's results, because if these guys can't get it to deliver compute at a level commensurate with the asking price and the change from 40nm to 28nm, no one can. Suggest you wait for the test and development results from the project team, then decide. Regards Zy | |
ID: 24671 | Rating: 0 | rate: / | |
Don't know if this is relevant for what we do, but someone just posted this on NVIDIA forums: | |
ID: 24672 | Rating: 0 | rate: / | |
I wish the latest Intel processors were only 50% faster! | |
ID: 24673 | Rating: 0 | rate: / | |
Well, a 50% increase in compute speed sounds good to me, especially since NVIDIA had (not sure if they still do) a 620 driver link on their site, as someone here noted. But if it comes down to it, I guess a new 570 probably won't be a bad deal. | |
ID: 24675 | Rating: 0 | rate: / | |
I wish the latest Intel processors were only 50% faster! Unfortunately, both Nvidia and AMD are now locking out reasonable BOINC upgrades for users like me who are limited by how much extra heating the computer room can stand, and therefore cannot handle the power requirements of any of the new high-end cards. | |
ID: 24677 | Rating: 0 | rate: / | |
I posted a question on NVIDIA's forums in regards to GPU Boost, the high voltage given to the card (1.175 V) and my concerns about this running 24/7, asking (pleading) that we should be allowed to turn Boost off. | |
ID: 24678 | Rating: 0 | rate: / | |
Hi Robert, Unfortunately, both Nvidia and AMD are now locking out reasonable BOINC upgrades for users like me who are limited by how much extra heating the computer room can stand, and therefore cannot handle the power requirements of any of the new high-end cards. ____________ FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help | |
ID: 24686 | Rating: 0 | rate: / | |
Hi Robert, you will definitely have to have a close look at what you get there: http://www.geforce.com/hardware/desktop-gpus/geforce-gt-640-oem/specifications Six different versions under the same label! Mixed up, mangled up, fraudulent - at least potentially. :( | |
ID: 24687 | Rating: 0 | rate: / | |
Hi Robert, I see only three versions there, but definitely mixed up. However, a GT 645 is also listed now, and it is short enough that I might find some brand that will fit in my computer that now has a GTS 450. I may have to look at that one some more, while waiting for the GPUGRID software to be updated enough to tell whether the results make it worth upgrading. | |
ID: 24688 | Rating: 0 | rate: / | |
I see only three versions there, but definitely mixed up. Take a closer look! It's 2 Keplers and 1 Fermi. It's 1 or 2 - or 1.5 or 3 - GB of RAM, plus DDR3 vs. GDDR5. And that's only the suggested specs - OEMs are free to do whatever they want with the clock rates.. However, a GT 645 is also listed now, and short enough that I might find some brand that will fit in my computer that now has a GTS 450. If you want a rebranded GTX 560 SE.. | |
ID: 24689 | Rating: 0 | rate: / | |
I see only three versions there, but definitely mixed up. I see what you mean about RAM sizes. However, a GT 645 is also listed now, and short enough that I might find some brand that will fit in my computer that now has a GTS 450. I see nothing about it that says Fermi or Kepler. But if that's correct, I'll probably wait longer before replacing the GTS 450, and check whether one of the Kepler GT 640 versions is a good replacement for the GT 440 in my other desktop. | |
ID: 24690 | Rating: 0 | rate: / | |
I see nothing about it that says Fermi or Kepler. But if that's correct, I'll probably wait longer before replacing the GTS 450, and check whether one of the Kepler GT 640 versions is a good replacement for the GT 440 in my other desktop. Look there: http://en.wikipedia.org/wiki/GeForce_600_Series Probably best for you to wait for a GT(X) 650 to show up.. | |
ID: 24691 | Rating: 0 | rate: / | |
These have already been released as OEM cards. Doesn't mean you can get them yet, and I would still expect retail versions to turn up, but exactly when I don’t know. | |
ID: 24692 | Rating: 0 | rate: / | |
These have already been released as OEM cards. Doesn't mean you can get them yet, and I would still expect retail versions to turn up, but exactly when I don’t know. Probably that's the clue we will have. Only one thing left on the bright side: the low-TDP Kepler version of the GT 640 will most likely show up even fanless. | |
ID: 24693 | Rating: 0 | rate: / | |
Just skimming this I'm getting a lot of mixed signals. I read that there's a 50% increase on the 680, and also that coding for the 680 is almost not worth it. While I know it's just come out, should I be waiting for a 600 or not? This 50% increase is actually around 30%. The answer depends on what you prefer. The GTX 680, and especially the GTX 690, is an expensive card, and they will stay expensive at least until Xmas. However, considering the running costs, it could be worth the investment in the long term. My personal opinion is that nVidia won't release the BigKepler as a GeForce card, so there is no point in waiting for a better cruncher card from nVidia this time. In a few months we'll see if I was right in this matter. Even if nVidia releases the BigKepler as a GeForce card, its price will be between the price of the GTX 680 and 690. On the other hand, there will be a lot of cheap Fermi-based (CC2.0) cards, either second-hand ones or some "brand new" from a stuck stockpile, so one could buy approximately 30% less computing power at half (or maybe less than half) the price. | |
ID: 24694 | Rating: 0 | rate: / | |
Until the GF600 app gets released there's not much point buying any GF600. | |
ID: 24695 | Rating: 0 | rate: / | |
Unfortunately, both Nvidia and AMD are now locking out reasonable BOINC upgrades for users like me who are limited by how much extra heating the computer room can stand, and therefore cannot handle the power requirements of any of the new high-end cards. The solution is easy, don't vent the hot exhaust from your GPU into the room. Two ways to do that: 1) Get a fan you can mount in the window. If the window is square/rectangular then get a fan with a square/rectangular body as opposed to a round body. Mount the fan in the window then put the computer on a stand high enough to allow the air that blows out of the video card to blow directly into the fan intake. Plug the open space not occupied by the fan with whatever cheap plastic material you can find in a building supply store, a painted piece of 1/4" plywood, kitchen counter covering (arborite) or whatever. 2) I got tired of all the fan noise so I attached a shelf outside the window and put both machines out there. An awning over my window keeps the rain off but you don't have to have an awning, there are other ways to keep the rain off. Sometimes the wind blows snow into the cases in the winter but it just sits there until spring thaw. Sometimes I need to pop a DVD in the tray so I just open the window. I don't use DVD's much anymore so it's not a problem. I screwed both cases to the shelf so they can't be stolen. It never gets much colder than -30 C here and that doesn't seem to bother them. Now I'm finally back to peaceful computing, the way it was before computers needed cooling fans. | |
ID: 24705 | Rating: 0 | rate: / | |
Any news about GPUGrid support for the GTX 680 (under Linux)? | |
ID: 24707 | Rating: 0 | rate: / | |
CUDA 4.2 comes in drivers which support cards as far back as the GeForce 6 series. Of course, GeForce 6 and 7 cards are not capable of contributing to GPUGrid. So the question might be: will GeForce 8 series cards still be able to contribute? At this point, I run the short-queue tasks on my 8800 GT. It simply cannot complete long-queue tasks in a reasonable time. If tasks in the short queue start taking longer than 24 hours to complete, I will probably retire it from this project. That said, if CUDA 4.2 brings significant performance improvements to Fermi, I'll be looking forward to it. As to the discussion of what card to buy, I found a new GTX 580 for $370 after rebate. Until I complete my new system, which should be in the next two weeks or so, I have been and will be running it in the machine where the 8800 GT was. It is about 2.5x faster than my GTX 460 on GPUGrid tasks. As I see it, there are great deals on 580s out there, considering that about a year ago these were the "top end" cards in the $500+ range. ____________ | |
ID: 24780 | Rating: 0 | rate: / | |
670s are looking to perform AT LEAST at 580 levels, if not better, and with a GIANT decrease in power consumption. They come out Thursday. | |
ID: 24782 | Rating: 0 | rate: / | |
How is DP performance on 670s? Given DP performance on 680s, I would expect that DP performance on the 670 would be worse than the 680. | |
ID: 24785 | Rating: 0 | rate: / | |
Agreed. Do not even consider the 6xx series if you're looking for DP. | |
ID: 24786 | Rating: 0 | rate: / | |
Wow, you guys are attentive. I missed the appearance of the 670. Must have blinked. Looks like it'll be 80% the speed of a 680.
Coming soon. There were big problems with recent (295.4x) Linux drivers that rather nixed things for us for a while. MJH | |
ID: 24793 | Rating: 0 | rate: / | |
I might have suggested the release of a Windows app and worry about the Linux app when new drivers turn up, if it wasn't for the fact that NVidia are not supporting WinXP for their GeForce 600 cards. | |
ID: 24794 | Rating: 0 | rate: / | |
Ok guys, | |
ID: 24815 | Rating: 0 | rate: / | |
Glad to hear it. Windows as well? | |
ID: 24816 | Rating: 0 | rate: / | |
It's out for Linux now. | |
ID: 24839 | Rating: 0 | rate: / | |
I'm glad you guys were able to get it out for Linux. I know it's been hard with the driver issues. Is there a timeframe for a Windows beta app yet? I've got another 680 on the way, and a 670 being purchased soon. I would love to be able to bring them over here. | |
ID: 24840 | Rating: 0 | rate: / | |
Failed again.. | |
ID: 24847 | Rating: 0 | rate: / | |
Failed again.. The failed workunits are 'ordinary' long tasks, which use the old application, so no wonder they are failing on your GTX 680. You should set up your profile to accept only beta work for a separate 'location', and assign your host with the GTX 680 to that 'location'. | |
ID: 24853 | Rating: 0 | rate: / | |
Ok thank you for the information. | |
ID: 24865 | Rating: 0 | rate: / | |
Please use the "New beta application for Kepler is out" thread for beta testing. | |
ID: 24882 | Rating: 0 | rate: / | |
Message boards : News : Tests on GTX680 will start early next week [testing has started]