Message boards : News : Tests on GTX680 will start early next week [testing has started]

(Joined: 13 Nov 10 · Posts: 3 · Credit: 105,044,879 · RAC: 0)
My GTX 680 didn't compute any work from GPUGrid. I tested it on S@H and it works there without problems. I bought it from the US, then sent it to Mongolia, and from Mongolia to Korea. FML.

Retvari Zoltan (Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0)
> My GTX 680 didn't compute any work from GPUGrid

Please be patient. The current GPUGrid application doesn't support the GTX 680; a new version that will support it is under construction.

GDF (Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0)
Compared to the current production application running on a GTX 580, the new app is 17% faster on the same GTX 580 and 50% faster on a GTX 680. It will come out first as a beta and stay a separate application for now. We will try to get it out quickly, as it makes a big difference. It should come out within this week for Linux and Windows.
gdf

Stoneageman (Joined: 25 May 09 · Posts: 224 · Credit: 34,057,374,498 · RAC: 0)
Want NOW

(Joined: 8 Mar 12 · Posts: 411 · Credit: 2,083,882,218 · RAC: 0)
50%! Wow, great work guys! Waiting "patiently"... :) I've changed the profile on that rig to accept betas only. Again, 50%! Sorry Einstein, but your apps have nothing on this jump in performance, and that doesn't even account for performance per watt. My EVGA Step-Up position in the queue had better move faster; my 570 is still at #501!

skgiven (Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0)
> Compared to the current production application running on a GTX 580, the new app is 17% faster on the same GTX 580 and 50% faster on a GTX 680.

I don't think that means the GTX 680 is 50% faster than a GTX 580! I think it means the new app will be 17% faster on a GTX 580, and a GTX 680 on the new app will be 50% faster than a GTX 580 on the present app. That would make the GTX 680 roughly 28% faster than a GTX 580 on the new app. In terms of performance per watt, that would push it to about 160% of the GTX 580, or twice the performance per watt of a GTX 480 ;)

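As a quick check of that arithmetic, here is a small sketch that reproduces the ~28% and ~160% figures from GDF's stated speedups. The TDP values are the cards' nominal board-power ratings and are an assumption on my part; they aren't quoted anywhere in this thread.

```python
# Rough check of the performance and performance-per-watt estimates above.
speedup_580_newapp = 1.17   # new app vs. production app, both on a GTX 580
speedup_680_newapp = 1.50   # GTX 680 on new app vs. GTX 580 on production app

tdp_580 = 244.0             # watts, nominal GTX 580 TDP (assumed)
tdp_680 = 195.0             # watts, nominal GTX 680 TDP (assumed)

# GTX 680 vs. GTX 580 when both run the new app
rel_perf = speedup_680_newapp / speedup_580_newapp
print(f"GTX 680 vs GTX 580, both on new app: {rel_perf:.2f}x (~{(rel_perf - 1) * 100:.0f}% faster)")

# Relative performance per watt
rel_ppw = rel_perf * (tdp_580 / tdp_680)
print(f"Performance per watt vs GTX 580:     {rel_ppw:.2f}x (~{rel_ppw * 100:.0f}%)")
```

Running it gives roughly 1.28x the performance and 1.60x the performance per watt, in line with the figures above.
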
GDF (Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0)
Just to give some numbers for clarity:
- production app on GTX 580: 98 ns/day
- new app on GTX 580: 115 ns/day
- new app on GTX 680: 150 ns/day
gdf

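Taking those ns/day figures at face value, the implied ratios work out as below (just a sketch of the arithmetic; the numbers are the ones GDF posted):

```python
# Implied speedups from the ns/day figures above.
prod_580 = 98.0    # ns/day, production app on GTX 580
new_580  = 115.0   # ns/day, new app on GTX 580
new_680  = 150.0   # ns/day, new app on GTX 680

print(f"New app on GTX 580 vs production app: {new_580 / prod_580 - 1:+.0%}")
print(f"GTX 680 (new app) vs GTX 580 (prod):  {new_680 / prod_580 - 1:+.0%}")
print(f"GTX 680 vs GTX 580, both on new app:  {new_680 / new_580 - 1:+.0%}")
```

That is roughly +17%, +53% and +30%, consistent with the 17%/50% quoted earlier and with the ~28-30% card-to-card estimate above.
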
(Joined: 8 Mar 12 · Posts: 411 · Credit: 2,083,882,218 · RAC: 0)
Lol, I knew that... Either way, a lot faster than my 570 that's currently attached! And roughly 160% of the 580's performance per watt is amazing. Again, great work guys! Still need to find a motherboard for Ivy Bridge that supports PCIe 3.0 at 2 x16.

(Joined: 13 Sep 10 · Posts: 5 · Credit: 17,517,835 · RAC: 0)
Wow, that is excellent news. I had been waiting for news on this before committing to my graphics card purchase. You guys rock; your dedication and speed of reaction are outstanding! Will the new app also give speed boosts on GTX 570 and 560 cards? Cheers

(Joined: 13 Sep 10 · Posts: 5 · Credit: 17,517,835 · RAC: 0)
Also, one more question for those in the know: if I run a GTX 680 on a PCIe 2 motherboard, will it take a performance hit on that 150 ns/day figure? Could this be tested if you have time, GDF? I know it's not a high priority, but it may help people like me who don't have a next-gen motherboard make an informed decision. Cheers

(Joined: 8 Mar 12 · Posts: 411 · Credit: 2,083,882,218 · RAC: 0)
On the question of a performance hit: I'm going to think that there won't be one. PCIe 3.0 allows about 16 GB/s in each direction, and for what we do that is a lot of bandwidth. From the results I've seen, which are based on games, the increase from PCIe 3.0 seems to be only 5-7%, so if that holds, I would assume there wouldn't be much of a hit on PCIe 2.0.

The only reasons I want a PCIe 3.0 motherboard that can run two cards at x16 each are, one, that I play games, and two, that it's just a mental thing for me (running at full capacity) even if the difference isn't noticeable. I also don't plan on building another rig for some time, and I would like this one to be top notch ;). It will most likely only make a difference for those who run either a) huge monitors or b) multiple monitors with NVIDIA Surround, which I plan on doing with a 3+1 monitor setup.

Think of it like this: even the biggest GPUGrid tasks only use a little over 1 GB of memory, if I'm not mistaken, so needing 16 GB/s seems like overkill. I'll let you know how my 680 runs once the beta is out (it's on a PCIe 2.0 motherboard currently).

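To put the bandwidth argument in numbers, here is a rough sketch of how long a GPUGrid-sized working set would take to cross each bus generation one way. The ~1 GB footprint is the figure mentioned above; the per-direction bandwidths are the nominal PCIe x16 rates, and real workloads move far less data far more often, so treat this as an upper bound on what bandwidth alone can explain.

```python
# Rough one-way transfer-time comparison, PCIe 2.0 x16 vs PCIe 3.0 x16.
task_size_gb = 1.0        # approximate GPUGrid task footprint (assumed from the post above)

buses = {
    "PCIe 2.0 x16": 8.0,    # GB/s, ~500 MB/s per lane x 16 lanes
    "PCIe 3.0 x16": 15.75,  # GB/s, ~985 MB/s per lane x 16 lanes
}

for name, bw_gb_s in buses.items():
    ms = task_size_gb / bw_gb_s * 1000
    print(f"{name}: ~{ms:.0f} ms to move {task_size_gb:.0f} GB one way")
```

Either way it is a fraction of a second, which suggests raw bandwidth is rarely the limit here; transfer frequency and latency matter more, as GDF notes in the next post.
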
GDF (Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0)
I don't know whether PCIe 3 will make a small difference or not; we are trying it on a PCIe 3 motherboard. The fact that the PCIe controller is now inside the CPU might make some difference through lower latency.
gdf

skgiven (Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0)
This post, in this thread, speculatively discusses PCIe 3 vs PCIe 2. Basically, for a single card it's probably not worth the investment; for two cards it depends on what you want from the system; and for three or four cards it's worth it. As you are looking to get a new system, it may be worth it. Obviously we won't know for sure until someone posts actual results for both PCIe 2 and PCIe 3 setups with multiple cards.

(Joined: 8 Mar 12 · Posts: 411 · Credit: 2,083,882,218 · RAC: 0)
Regarding PCIe 3.0 running 2 x16: if I am reading this correctly, am I going to be "forced" to get Sandy Bridge-E now? If so, I would most likely get the 3930K, since the 3820 isn't unlocked and an unlocked chip is what I would prefer. http://www.anandtech.com/show/4830/intels-ivy-bridge-architecture-exposed

Further, Ivy Bridge-E won't be released until maybe Q3-Q4 (probably towards Christmas would be my guess) and won't really offer any benefit besides a die shrink. I guess this explains why I was having a hard time finding a PCIe 3.0 2 x16 motherboard. Wow, my idea of 100% GPU functionality just increased the price by about another $250. Hmmmm...

Oh, and I found this on AnandTech (though it's for an AMD GPU):
> Simply enabling PCIe 3.0 on our EVGA X79 SLI motherboard (EVGA provided us with a BIOS that allowed us to toggle PCIe 3.0 mode on/off) resulted in a 9% increase in performance on the Radeon HD 7970. This tells us two things: 1) You can indeed get PCIe 3.0 working on SNB-E/X79, at least with a Radeon HD 7970, and 2) PCIe 3.0 will likely be useful for GPU compute applications, although not so much for gaming anytime soon.

It doesn't list what they ran or any specs, though.

EDIT: Well, it appears the 3820 can overclock to 4.3 GHz, which would be plenty for what I need. Wouldn't mind having a six-core, though; four extra threads for WUs would be nice but not mandatory. At $250 at Micro Center, quite a nice deal.

Carlesa25 (Joined: 13 Nov 10 · Posts: 328 · Credit: 72,619,453 · RAC: 0)
Hi, an interesting NVIDIA vs AMD comparison: GTX 680/580 versus HD 6970/7970. It makes quite clear the poor FP64 performance of the GTX 680 and the better showing of the HD 6970/7970 at MilkyWay. Greetings.
http://muropaketti.com/artikkelit/naytonohjaimet/gpgpu-suorituskyky-amd-vs-nvidia
Less is better.

(Joined: 8 Mar 12 · Posts: 411 · Credit: 2,083,882,218 · RAC: 0)
Glad we do FP32. Further, the 680 has only 8 dedicated FP64 units per SMX, which aren't included in its core count and which run at full speed, compared with the reduced-rate FP64 of previous generations.

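For context, a tiny sketch of where GK104's FP64 rate comes from, using the per-SMX unit counts from NVIDIA's published GK104 specifications (8 dedicated FP64 units and 192 FP32 cores per SMX, 8 SMXs on a GTX 680):

```python
# GK104 (GTX 680) FP64 throughput relative to FP32, from published per-SMX unit counts.
fp32_cores_per_smx = 192
fp64_units_per_smx = 8      # dedicated FP64 units, not counted in the 1536 "CUDA cores"
smx_count = 8

print(f"FP64 rate: 1/{fp32_cores_per_smx // fp64_units_per_smx} of FP32")      # 1/24
print(f"Dedicated FP64 units on the chip: {fp64_units_per_smx * smx_count}")   # 64
```

That 1/24 rate is why the card looks weak in FP64 benchmarks while still being a big step forward for FP32 work like GPUGrid.
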
(Joined: 22 Nov 09 · Posts: 114 · Credit: 589,114,683 · RAC: 0)
> EDIT: Well, it appears the 3820 can overclock to 4.3 GHz, which would be plenty for what I need. Wouldn't mind having a six-core, though; four extra threads for WUs would be nice but not mandatory. At $250 at Micro Center, quite a nice deal.

I've been looking at the 3820 myself. In my opinion, that is the only SB-E to get. TechSpot got the 3820 up to 4.625 GHz, and at that speed it performs pretty much as well as a 3960X at 4.4 GHz. To me, it's a no-brainer: a $1000 3960X, a $600 3930K, or a $250 3820 that performs as well as the $1K chip. According to the Micro Center web site, that price is in-store only.

Where SB-E will really excel is in applications that are memory intensive, such as FEA and solid modelling, which is a conclusion I came to from the TechSpot review that tested the 3820 in a real-world SolidWorks usage scenario.

Anyway, IB releases on Monday, and it might be worth the wait. Personally, I do not think IB will beat SB-E in memory-intensive applications; however, I'll be looking very closely at the IB reviews.

(Joined: 8 Mar 12 · Posts: 411 · Credit: 2,083,882,218 · RAC: 0)
IB is not going to beat ANY SB-E. Its slight performance improvement and energy savings may very well be negated by its overclocking less than Sandy (from what I've read, anyway). The real advantage of Ivy will come from its PCIe 3 support, but SB-E already has this, plus the ability to natively support 40 lanes instead of the 16 on LGA 1155 CPUs, which is my main reason for it. We power users probably won't see much improvement, if any, until Haswell, the next "tock" phase, is released.

(Joined: 8 Mar 12 · Posts: 411 · Credit: 2,083,882,218 · RAC: 0)
I know patience is a virtue, and I really hate to ask, GDF, but... how's the progress on the beta app coming? Truly itching to bring my 680 over to you guys. :) As always, you do a great job, and I can't wait to hear how the experiment with the crystallographers works out! Keep it up!

(Joined: 15 Jan 10 · Posts: 42 · Credit: 18,255,462 · RAC: 0)
Better than the ATi version, probably.
