gtx680

Message boards : Graphics cards (GPUs) : gtx680

GDF
Message 22760 - Posted: 20 Dec 2011, 16:48:44 UTC
Last modified: 20 Dec 2011, 16:52:47 UTC

http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units#GeForce_600_Series

I don't think that table is correct. The FLOPS figures are too high.

gdf

skgiven
Message 22774 - Posted: 20 Dec 2011, 19:36:29 UTC - in response to Message 22760.  

Far too high; unless the 256 CUDA cores of a $179 GTX 650 really will outperform the 1024 CUDA cores of a one-year-old GTX 590 ($669). No chance; that would kill their existing market, and you know how NVidia likes to use the letter S.

My calculated guess is that a GTX 680 will have a peak of around 3000 to 3200 GFLOPS - just over twice that of a GTX 580 - assuming most of the rest of the info is reasonably accurate.
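
As a rough sanity check on numbers like these, peak single-precision throughput is usually estimated as CUDA cores × 2 FLOPs (one fused multiply-add) × shader clock. The sketch below reproduces the published GTX 580 figure and shows what a ~1536-core, ~1 GHz part (the configuration later rumours in this thread mention) would give; the Kepler numbers are assumptions, not specs.

// Back-of-envelope peak FP32 estimate: cores * 2 (FMA = 2 FLOPs) * shader clock.
// The GTX 580 line uses published specs; the "GTX 680?" line uses the rumoured
// 1536 cores at ~1 GHz mentioned later in this thread - an assumption, not a spec.
#include <cstdio>

static double peak_gflops(int cuda_cores, double shader_clock_ghz) {
    return cuda_cores * 2.0 * shader_clock_ghz;
}

int main() {
    printf("GTX 580 : %.0f GFLOPS\n", peak_gflops(512, 1.544));   // ~1581 GFLOPS
    printf("GTX 680?: %.0f GFLOPS\n", peak_gflops(1536, 1.006));  // ~3090 GFLOPS if the rumours hold
    return 0;
}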

When it comes to crunching, a doubling of the 500 generation performance would be a reasonable expectation, but 4.8 times seems too high.

I don't see how XDR2 would in itself double performance, and I doubt that architectural enhancements will squeeze out massive performance gains given that the die has shrunk from 520 to 334 mm²; the transistor count will apparently remain the same.
Perhaps for some enhanced application that fully uses the performance of XDR2 you might see such silly numbers, but for crunching I wouldn't expect anything more than a 2.0 to 2.5 times increase in performance (generation on generation).

ExtraTerrestrial Apes
Message 22775 - Posted: 20 Dec 2011, 19:39:20 UTC - in response to Message 22760.  

Wow, that looks totally stupid! It looks like a boy's Christmas wish list. Well, before every new GPU generation you'll find a rumor for practically every possible (and impossible) configuration floating around.

- XDR2 memory: seems like rumor guys have fallen in love with it. It's not on GCN, though. And if I were nVidia I'd consider it too risky to transition the entire new product lineup at once. There'd need to be serious production capacity ready by now.

- Traditionally nVidia goes with wider memory buses rather than higher clocks, which matches well with their huge chips. I don't see any reason for this to change.

- The core clocks are much higher than even AMD's HD7970 (925 MHz on pre-release slides). Traditionally nVidia's core clocks have been lower, and I see no reason why this should change now.

- The shader clocks are totally through the roof. They hit 2.1 GHz on heavily overclocked G92s, but the stock clocks have been hovering around 1.5 GHz for a long time. Going any higher hurts power efficiency.

- They introduced a fixed factor of 2 between base and shader clock with Fermi. Why do that if they were going to change it again with Kepler? I'd expect this to stay, for some time at least.

- 3.0 billion transistors for the flagship would actually be lower than GF100 and GF110 at ~3.2 billion. At the same time the shader count is said to increase to 640, and the shaders support more advanced features (i.e. they must become bigger). Unless Fermi was a totally inefficient architecture (I'm talking about the design, not the GF100 chip!), I don't expect this to be possible.

- Just 190 W TDP for their flagship? They've been designing power-constrained monster chips for some time now. If these specs were true, rest assured that Kepler would have gotten a lot more shaders.

- The proposed die size of 334 mm² actually looks reasonable for a 3.0 billion transistor chip at 28 nm (a quick scaling check is sketched after this post).

- The astronomical FLOPS are a direct result of the insane clock speeds. Not going to happen.

Overall the proposed data looks more like a traditional ATI "mean & lean" design than an nVidia design.

They may be able to push clock speeds much higher if they used more hand-crafted logic rather than synthesized logic (as in a CPU). Count me in for a pleasant surprise if they actually pull that off (it requires tons and megatons of work).

MrS
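
On the die-size point above: ideal area scaling from 40 nm to 28 nm is roughly (28/40)² ≈ 0.49, so a GF110-sized transistor budget would fit in ~255 mm² in the best case; real designs scale worse than that, which is why ~334 mm² for ~3 billion transistors looks plausible. A minimal sketch of that arithmetic, using the publicly quoted GF110 figures:

// Ideal-scaling sanity check for the rumoured 334 mm^2 Kepler die size.
// GF110 reference figures (known): ~3.0 billion transistors on ~520 mm^2 at 40 nm.
#include <cstdio>

int main() {
    const double gf110_area_mm2 = 520.0;
    const double shrink         = (28.0 / 40.0) * (28.0 / 40.0);  // ideal area scaling 40 nm -> 28 nm
    const double ideal_area     = gf110_area_mm2 * shrink;        // ~255 mm^2, best case
    const double rumoured_area  = 334.0;
    printf("Ideal shrink of GF110 : %.0f mm^2\n", ideal_area);
    printf("Rumoured Kepler die   : %.0f mm^2 (%.0f%% above ideal scaling)\n",
           rumoured_area, 100.0 * (rumoured_area / ideal_area - 1.0));
    return 0;
}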

ExtraTerrestrial Apes
Message 23109 - Posted: 23 Jan 2012, 20:26:01 UTC

Rumors say there's a serious bug in the PCIe 3.0 part of the Kepler A1 stepping, which means the introduction will have to wait for another stepping -> maybe April.

MrS

Damaraland
Message 23247 - Posted: 4 Feb 2012, 15:04:16 UTC

Now there are rumours that they will be released this month.

tomshardware "Rumor: Nvidia Prepping to Launch Kepler in February"

Evil Penguin
Message 23249 - Posted: 4 Feb 2012, 16:48:29 UTC
Last modified: 4 Feb 2012, 16:48:45 UTC

I don't expect anything from nVidia until April.
In any case, I hope AMD and nVidia continue to compete vigorously.

MarkJ
Message 23280 - Posted: 7 Feb 2012, 11:40:01 UTC

Another article with a table of cards/chip types:

here

It's a bit blurry. I can't claim credit for finding this one; it was posted by one of the guys over at Seti. Interesting spec sheets, though.

ExtraTerrestrial Apes
Message 23283 - Posted: 7 Feb 2012, 12:42:24 UTC - in response to Message 23280.  

That's the same as what's being posted here. Looks credible to me, for sure. A soft evolution of the current design, with no more CC 2.1-style superscalar shaders (all 32 shaders per SM). Even the expected performance compared to AMD fits.

However, in the comments people seem very sure that there's no "hot shader clock" in Kepler. That's strange and would represent a decisive redesign. I'd go as far as to say nVidia needs the "2x performance per shader" from the hot clock. If they removed it they'd either have to increase the whole chip clock (unlikely) or perform a serious redesign of the shaders: make them more power efficient (easy at lower clocks) and either greatly improve their performance (not easy) or make them much smaller (which was not done here, according to these specs).

So overall.. let's wait for April then :D

MrS

Zydor
Message 23291 - Posted: 7 Feb 2012, 15:50:25 UTC

Charlie's always good for a read on this stuff - he seems to have mellowed in his old age just lately :)

GK104:
http://semiaccurate.com/2012/02/01/physics-hardware-makes-keplergk104-fast/

GK110:
http://semiaccurate.com/2012/02/07/gk110-tapes-out-at-last/

I hope the increasing rumours about performance are true - whether it's real raw power or sleight of hand aimed at gamers, either way it's a win for consumers, as prices will trend down with competition, something that's been sorely lacking in the last 3 years.

2012 shaping up to be a fun year :)

Regards
Zy

ExtraTerrestrial Apes
Message 23293 - Posted: 7 Feb 2012, 17:20:56 UTC - in response to Message 23291.  

GK110 release in Q3 2012... painful for nVidia, but quite possible given they don't want to repeat Fermi and it's a huge chip, which needs another stepping before final tests can be made (per some other news from 1 or 2 weeks ago).

And the other article: very interesting read. If Charlie is right (and he has been right in the past) Kepler is indeed a dramatic departure from the current designs.

MrS

skgiven
Message 23294 - Posted: 7 Feb 2012, 17:29:02 UTC - in response to Message 23291.  
Last modified: 7 Feb 2012, 22:25:20 UTC

No more CC 2.1-like issues would mean that choosing a GF600 NVidia GPU to contribute to GPUGrid will be easier; basically it comes down to what you can afford.

ExtraTerrestrial Apes
Message 23295 - Posted: 7 Feb 2012, 19:39:42 UTC - in response to Message 23294.  

They are probably going to be CC 3.0. Whatever that will mean ;)

MrS
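
For reference, an application can branch on compute capability at runtime with the standard CUDA runtime API. A minimal sketch of a generic device query (not GPUGrid's actual code):

// List each CUDA device with its compute capability (CC), SM count and core clock.
// Uses only standard CUDA runtime calls; compile with nvcc.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA devices found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // prop.major/minor is the compute capability, e.g. 3.0 on a Kepler GK104
        printf("%s: CC %d.%d, %d multiprocessors, %.0f MHz\n",
               prop.name, prop.major, prop.minor,
               prop.multiProcessorCount, prop.clockRate / 1000.0);
    }
    return 0;
}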

skgiven
Message 23298 - Posted: 8 Feb 2012, 1:16:17 UTC - in response to Message 23295.  

I'm concerned about the 256-bit memory path and comments such as "The net result is that shader utilization is likely to fall dramatically". The suggestion is that unless your app uses physics 'for Kepler', performance will be poor, but if it does, performance will be good. Of course only games sponsored by NVidia will be physics-enhanced 'for Kepler', not research apps.

With NVidia (and AMD) going out of their way to have patches coded for games that tend to be used for benchmarking, the Internet's dubious information on technology will need an even bigger pinch of salt. So wait for a Kepler app and then see the performance before buying.
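
On the 256-bit concern: raw memory bandwidth is just bus width / 8 × effective data rate. A back-of-envelope comparison using the GTX 580's published memory configuration and an assumed 256-bit, ~6 GT/s effective GDDR5 setup for Kepler (the latter is a guess, not a spec):

// Memory bandwidth = (bus width in bits / 8) bytes per transfer * effective data rate.
// The GTX 580 line uses published specs; the Kepler line assumes a 256-bit bus
// with ~6 GT/s effective GDDR5 - an assumption, not a confirmed spec.
#include <cstdio>

static double bandwidth_gbs(int bus_bits, double effective_gts) {
    return (bus_bits / 8.0) * effective_gts;   // bytes per transfer * GT/s = GB/s
}

int main() {
    printf("GTX 580 (384-bit @ 4.008 GT/s): %.0f GB/s\n", bandwidth_gbs(384, 4.008));
    printf("Kepler? (256-bit @ 6.000 GT/s): %.0f GB/s\n", bandwidth_gbs(256, 6.000));
    return 0;
}

So a narrower bus at a higher data rate can end up roughly bandwidth-neutral; whether that matters depends on how bandwidth-hungry the code is.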

GDF
Message 23304 - Posted: 8 Feb 2012, 9:17:32 UTC - in response to Message 23298.  

If the added speed is due to some new instructions, then we may or may not be able to take advantage of them - we have no idea. Memory bandwidth should not be a big problem.

gdf

ExtraTerrestrial Apes
Message 23319 - Posted: 8 Feb 2012, 19:07:00 UTC

GK104 is supposed to be a small chip. With a 256-bit memory bus you can easily get HD6970 performance in games without running into limitations. Try to push performance considerably higher and your shaders will run dry. That's what Charlie suggests.

This is totally unrelated to GP-GPU performance: just take a look at how little bandwidth MW requires. It "depends on the code".. as always ;)

And, as GDF said, if nVidia made the shaders more flexible (they probably did) and more efficient for game physics, this could easily benefit real physics (the equations will be different, the general scheme rather similar).

MrS

GDF
Message 23386 - Posted: 10 Feb 2012, 23:48:45 UTC - in response to Message 23319.  

Some interesting info:
http://wccftech.com/alleged-nvidia-kepler-gk104-specs-exposed-gpu-feature-1536-cuda-cores-hotclocks-variants/

GDF
Message 23410 - Posted: 12 Feb 2012, 8:44:04 UTC - in response to Message 23386.  

http://www.brightsideofnews.com/news/2012/2/10/real-nvidia-kepler2c-gk1042c-geforce-gtx-670680-specs-leak-out.aspx

If this is real, it seems that the Kepler multiprocessors are doubled GF104 MPs. I hope they work better than GF104 for compute.

gdf

skgiven
Message 23416 - Posted: 12 Feb 2012, 13:47:56 UTC - in response to Message 23410.  

If you can only use 32 of the 48 CUDA cores on GF104, then you could be looking at 32 of 96 with Kepler, which would make them no better than existing hardware. Obviously they might have made changes that allow for easier access, so we don't know that will be the case, but the ~2.4 times performance over GF104 should be read as 'maximum' performance, as in 'up to'. My impression is that the Kepler cards will generally be OK, with some exceptional performances here and there where physics can be used to enhance performance. I think you will have some development to do before you get much out of the card, but hey, that's what you do!

GDF
Message 23419 - Posted: 12 Feb 2012, 14:35:38 UTC - in response to Message 23416.  
Last modified: 13 Feb 2012, 8:55:34 UTC

No, it should be at least 64 of 96, but I still hope they have improved the scheduling.
Anyway, with such changes there will be time for optimizations.
gdf

ExtraTerrestrial Apes
Message 23427 - Posted: 12 Feb 2012, 21:46:50 UTC

32 from 96 would mean going 3-way superscalar. They may be green, but they're not mad ;)
As GDF said, 64 of 96 would retain the current 1.5-way superscalar ratio. And seeing how this did OK, but not terribly well, I'd say they would rather increase the number of wavefronts in flight than this ratio. I wouldn't be surprised if they processed each of the 32 threads/warps/pixels/whatever in a wavefront in one clock, rather than 2 times 16 in 2 clocks.

And don't forget that shader clock speeds are down, so don't expect a linear speed increase with shader number. Anyway, it's getting interesting!

MrS
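
To put the 32-of-48 and 64-of-96 ratios discussed above into numbers, here is a toy estimate of usable peak throughput under different issue-efficiency assumptions. The GF104 figures are the GTX 460's published specs; the Kepler SM layout, clock and 64-of-96 utilisation are the rumoured values and guesses from this thread, not confirmed data.

// Toy estimate of "usable" FP32 throughput when only a fraction of the cores
// per SM can be kept busy (the superscalar scheduling limitation discussed above).
#include <cstdio>

static double gflops(int sms, int active_cores_per_sm, double clock_ghz) {
    return sms * active_cores_per_sm * 2.0 * clock_ghz;   // FMA = 2 FLOPs per core per clock
}

int main() {
    // GF104 (GTX 460): 7 SMs x 48 cores, 1.35 GHz shader clock (published specs)
    printf("GF104  : %.0f usable of %.0f peak GFLOPS (32 of 48 per SM)\n",
           gflops(7, 32, 1.35), gflops(7, 48, 1.35));
    // Rumoured Kepler GK104: 16 SMs x 96 cores at ~1 GHz, assuming 64 of 96 usable
    printf("Kepler?: %.0f usable of %.0f peak GFLOPS (64 of 96 per SM)\n",
           gflops(16, 64, 1.0), gflops(16, 96, 1.0));
    return 0;
}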