gtx680

Message boards : Graphics cards (GPUs) : gtx680

GDF
Message 23441 - Posted: 13 Feb 2012, 8:58:25 UTC - in response to Message 23427.  

I wouldn't be surprised if they process each of the 32 threads/warps/pixels/whatever in a wavefront in one clock, rather than 2 times 16 in 2 clocks.
MrS


That's what it looks like from the diagram; they have 32 load/store units now.

ExtraTerrestrial Apes
Message 23572 - Posted: 20 Feb 2012, 19:54:04 UTC - in response to Message 23509.  

They've got the basic parameters of the HD 7970 totally wrong, even though it was officially introduced 2 months ago. The performance figure is also wrong: it should be ~30% faster than the HD 6970 in games, but they're saying 10%. They could argue that their benchmark is not what you'd typically get in games... but then what else is it?

I'm not going to trust their data on unreleased hardware ;)

MrS
Scanning for our furry friends since Jan 2002

GDF
Message 23781 - Posted: 5 Mar 2012, 19:50:54 UTC - in response to Message 23572.  

It seems that we are close
http://semiaccurate.com/2012/03/05/nvidia-will-launch-gk104keplergtx680-in-a-week/

gdf

Damaraland
Message 23835 - Posted: 8 Mar 2012, 22:06:49 UTC - in response to Message 23781.  

More news:
March 8: Nvidia briefs the press
March 12: Nvidia paper-launches the cards
March 23-26: cards go on sale

http://semiaccurate.com/2012/03/08/the-semiaccurate-guide-to-nvidia-keplergk104gtx680-launch-activities/
HOW TO - Full installation Ubuntu 11.10

Zydor
Message 23914 - Posted: 12 Mar 2012, 10:28:17 UTC

More rumours ... Guru3D article:

http://www.guru3d.com/news/nvidia-geforce-gtx-680-up-to-4gb-gddr5/

Regards
Zy

Zydor
Message 23961 - Posted: 14 Mar 2012, 18:37:04 UTC


GDF
Message 23973 - Posted: 15 Mar 2012, 21:04:22 UTC - in response to Message 23961.  

Better pictures, benchmarks and specifications.
http://www.tomshardware.com/news/Nvidia-Kepler-GeForce-GTX680-gpu,15012.html

It should be out on 23rd March, but by the time it gets to Barcelona it's going to be May or June.
If somebody can give one to the project, we can start porting the code earlier. This seems to be an even bigger change than the Fermi cards were.


Retvari Zoltan
Message 23974 - Posted: 16 Mar 2012, 0:46:47 UTC

I still have a bad feeling about the 1536 CUDA cores....

JLConawayII
Message 23976 - Posted: 16 Mar 2012, 1:24:44 UTC

What sort of "bad feeling"?

skgiven
Message 23977 - Posted: 16 Mar 2012, 1:59:29 UTC - in response to Message 23976.  

I had the same sort of "bad feeling" - these CUDA cores are not what they used to be, and the route to using them is different. Some things could be much faster if PhysX can be used, but if not, who knows.
http://www.tomshardware.com/news/hpc-tesla-nvidia-GPU-compute,15001.html Might be worth a look.
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

JLConawayII
Message 23979 - Posted: 16 Mar 2012, 2:59:16 UTC

I wouldn't worry about it. I'm pretty sure the 6xx cards will be great. If they're not, you can always buy more 5xx cards at plummeting prices. There's really no way to lose here, I think.

GDF
Message 23980 - Posted: 16 Mar 2012, 7:03:44 UTC - in response to Message 23979.  

Well, at the very least they seem to be like the Fermi GPUs with 48 cores per multiprocessor, which we know have comparatively poor performance.

I hope that they have figured it out; otherwise, without code changes it might well be only on par with a GTX 580.

ExtraTerrestrial Apes
Message 23982 - Posted: 16 Mar 2012, 9:29:54 UTC
Last modified: 16 Mar 2012, 9:30:25 UTC

There's a price to be paid for increasing the shader count by a factor of 3 while even lowering the TDP; 28 nm alone is by far not enough for this.

Seems like Kepler is more in line with AMD's vision: provide plenty of raw horsepower and make it merely "OK to use", not as bad as with VLIW, but not as easy as before. Could be the two teams are converging towards rather similar architectures with Kepler and GCN. The devil's just in the details and the software.
(I haven't seen anything but rumors on Kepler, though)

MrS
Scanning for our furry friends since Jan 2002

skgiven
Message 23988 - Posted: 16 Mar 2012, 13:44:06 UTC - in response to Message 23982.  

The suggested price is $549, and the suggested 'paper' launch date is 22nd March.

With the 1536 shaders being thinner than before, similar to AMD's approach, getting enough work to the GPU and actually reaching all the shaders might be the challenge this time.

The proposed ~195W TDP sits nicely between an HD 7950 and an HD 7970, and is noticeably lower than the 244W of the GTX 580 (which is 25% higher), so even if it can only match a GTX 580 the energy savings are not insignificant. The price, however, is a bit daunting, and until a working app is developed (which might take some time) we will have no idea of its performance compared to the GTX 500s.
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
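
To put the energy argument above in numbers, here is a minimal sketch in plain Python, using only the figures quoted in this thread (the rumoured ~195 W TDP for the GTX 680 and the 244 W TDP of the GTX 580):

# Back-of-the-envelope check of the TDP comparison quoted above.
gtx680_tdp_w = 195.0   # rumoured GTX 680 TDP (watts)
gtx580_tdp_w = 244.0   # GTX 580 TDP (watts)

# How much higher the GTX 580's TDP is, relative to the GTX 680's.
increase = (gtx580_tdp_w - gtx680_tdp_w) / gtx680_tdp_w
print(f"GTX 580 TDP is {increase:.0%} higher")        # -> 25%

# Energy saved per day if both cards crunched 24/7 at their TDP.
saved_kwh_per_day = (gtx580_tdp_w - gtx680_tdp_w) * 24 / 1000
print(f"~{saved_kwh_per_day:.2f} kWh saved per day")  # -> ~1.18 kWh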

Retvari Zoltan
Message 23994 - Posted: 16 Mar 2012, 17:03:14 UTC - in response to Message 23976.  

What sort of "bad feeling"?

I have two things on my mind:
1. The GTX 680 looks to me more like an improved GTX 560 than an improved GTX 580. If the GTX 560's bottleneck is present in the GTX 680, then GPUGrid could utilize only 2/3 of its shaders (i.e. 1024 out of 1536; see the sketch below).
2. It could mean that the Tesla and Quadro series will be improved GTX 580s, and we won't have an improved GTX 580 in the GeForce product line.
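
A minimal sketch of the scenario in point 1, in plain Python; the 2/3 ratio is just the GTX 560-style limitation described above, not a confirmed property of the GTX 680:

# Hypothetical shader utilization if Kepler inherits the GTX 560-style limit
# where only 2 of every 3 CUDA cores per multiprocessor can be kept busy.
def effective_shaders(total_shaders):
    """Shaders the application could actually keep busy under that limit."""
    return total_shaders * 2 // 3

print(effective_shaders(384))    # GTX 560 Ti: 384 cores -> 256 usable
print(effective_shaders(1536))   # GTX 680, if the same limit applies -> 1024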

Carlesa25
Message 23996 - Posted: 16 Mar 2012, 19:23:15 UTC
Last modified: 16 Mar 2012, 19:44:44 UTC

Hi. What I find most strange is the following relationship:

GTX 580: 3,000 million transistors, 512 cores (GF110)
GTX 680: 3,540 million transistors, 1536 cores (GK104)

I do not understand how the core count can triple with so few additional transistors.

GTX 285: 1,400 million transistors, 240 cores, 470 mm² die
GTX 580: 3,000 million transistors, 512 cores, 520 mm² die
GTX 680: 3,540 million transistors, 1536 cores, 294 mm² die

These numbers do not add up for me; the relationship between these values for the GTX 200 and GTX 500 series does not fit the GTX 600 evolution.
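
To make the "numbers do not add up" point explicit, here is a minimal back-of-the-envelope sketch in plain Python, using only the figures quoted above (cores are of course not directly comparable across architectures):

# Transistor budget per CUDA core and per mm^2, from the figures quoted above.
cards = {
    "GTX 285": (1400, 240, 470),    # (million transistors, cores, die size in mm^2)
    "GTX 580": (3000, 512, 520),
    "GTX 680": (3540, 1536, 294),
}

for name, (transistors_m, cores, die_mm2) in cards.items():
    per_core = transistors_m / cores      # million transistors per core
    density = transistors_m / die_mm2     # million transistors per mm^2
    print(f"{name}: {per_core:.1f} M/core, {density:.1f} M/mm^2")

# The GTX 285 and GTX 580 both spend roughly 5.8-5.9 M transistors per core;
# the GTX 680 drops to ~2.3 M per core. That only works if Kepler's "cores"
# are much simpler units than Fermi's, on top of the density gain from 28 nm.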

ExtraTerrestrial Apes
Message 23999 - Posted: 16 Mar 2012, 21:07:01 UTC - in response to Message 23996.  
Last modified: 16 Mar 2012, 21:08:52 UTC

That's because Kepler fundamentally changes the shader design. How, exactly, is not yet clear.
@Retvari: and that's why comparisons to the GTX 560 are not relevant here. I'm saying it's going to be great, just that it'll be very different.

BTW: in the past nVidia chips got rather close to their TDP under "typical" loads, i.e. games. There, an HD 7970 hovers around the 200 W mark; 250 W is just the PowerTune limit.

Edit: further information for the brave... the original is Chinese.

MrS
Scanning for our furry friends since Jan 2002

Retvari Zoltan
Message 24002 - Posted: 16 Mar 2012, 22:48:47 UTC - in response to Message 23999.  

That's because Kepler fundamentally changes the shader design. How, exactly, is not yet clear.
@Retvari: and that's why comparisons to the GTX 560 are not relevant here. I'm saying it's going to be great, just that it'll be very different.

I know, but none of the rumors comfort me. I remember how much was expected of the GTX 460-560 line, and they are actually great for games, but not so good at GPUGrid. I'm afraid that nVidia wants to separate its gaming product line from the professional product line even more than before.
I'd like to upgrade my GTX 590, because it's too noisy, but I'm not sure it will be worth it.
We'll see in a few months.

GDF
Message 24003 - Posted: 17 Mar 2012, 7:12:53 UTC - in response to Message 24002.  

They cannot afford to separate gaming and computing. The chips will still need to be the same for economies of scale, and there is more and more interest in computing within games.

Changes are good; after all, there are more shaders, we just have to learn how to use them. As it is the flagship product, we are prepared to invest a lot in it.

gdf

ExtraTerrestrial Apes
Message 24007 - Posted: 17 Mar 2012, 12:43:38 UTC - in response to Message 24002.  

Well... even if they perform "only" like a Fermi CC 2.0 part with 1024 or even 768 shaders, that would still be great, considering they accomplish it with just ~3.5 billion transistors instead of the 3.0 billion needed for 512 CC 2.0 shaders. That's significant progress anyway.

MrS
Scanning for our furry friends since Jan 2002
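
A minimal sketch of that transistor-efficiency argument in plain Python; the 1024 and 768 figures are just the hypothetical Fermi-equivalent shader counts from the post above, and the transistor counts are the ones quoted earlier in the thread:

# Fermi-equivalent shaders per billion transistors, for the hypothetical
# Kepler scenarios above versus the real GTX 580 (GF110).
GF110_SHADERS, GF110_TRANSISTORS_B = 512, 3.0
GK104_TRANSISTORS_B = 3.54

fermi_ratio = GF110_SHADERS / GF110_TRANSISTORS_B      # ~171 per billion
for label, shaders in (("optimistic", 1024), ("pessimistic", 768)):
    ratio = shaders / GK104_TRANSISTORS_B
    print(f"{label}: {ratio:.0f} per billion, {ratio / fermi_ratio:.2f}x Fermi")
# -> optimistic: 289 per billion, 1.69x Fermi
# -> pessimistic: 217 per billion, 1.27x Fermi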