NVIDIA BigKepler

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 24547 - Posted: 22 Apr 2012, 22:04:47 UTC - in response to Message 24546.  

GF100 and GF110 were about as wasteful, compared to GK104, as GF104 was. It's actually the other way around: Kepler is a huge step forward in efficiency, both power-consumption and transistor-wise.

Rest assured that "big Kepler" will borrow a lot of these tricks from "small Kepler". It wouldn't make sense to develop 2 different architectures. What they need to change, though, is the balance of execution units. GK104 is heavily tilted towards gaming, with OK FP32 compute and FP64 just for development. By changing the shader design somewhat (the internals; externally they'll behave pretty much the same way, so the same scheduling hardware etc. can be used) and maybe making other tweaks, they can provide good FP32 and massive FP64 (1/2 rate) compute power.
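
To put rough numbers on that balance, here is a back-of-the-envelope peak-throughput comparison. It's only a sketch: the GTX 680 figures are its published specs, while the big-Kepler core count, clock and the 1/2 DP rate mentioned above are assumptions from this thread, not announced values.

```
#include <cstdio>

// Peak GFLOPS = cores * clock (GHz) * 2 (one fused multiply-add per core per cycle).
static double peak_gflops(int cores, double clock_ghz) { return cores * clock_ghz * 2.0; }

int main() {
    // GTX 680 (GK104): 1536 CUDA cores at ~1.006 GHz, FP64 at 1/24 of the FP32 rate.
    double gk104_sp = peak_gflops(1536, 1.006);
    double gk104_dp = gk104_sp / 24.0;

    // Hypothetical "big Kepler": 2880 cores at an assumed 0.85 GHz, FP64 at 1/2 rate.
    // Illustrative guesses only, not announced specs.
    double gk110_sp = peak_gflops(2880, 0.85);
    double gk110_dp = gk110_sp / 2.0;

    printf("GK104 : %.0f SP / %.0f DP GFLOPS\n", gk104_sp, gk104_dp);
    printf("GK110?: %.0f SP / %.0f DP GFLOPS\n", gk110_sp, gk110_dp);
    return 0;
}
```

Even with a conservative clock the DP gap would be huge, which is exactly the professional-market tilt described above.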

You can't draw a straight line through these designs and generations. Both Fermis, the compute and the gamer versions, have been developed in parallel. They're just as much different incarnations of the same basic architecture as the Keplers will be.

MrS
Scanning for our furry friends since Jan 2002
Profile Zarck

Joined: 16 Aug 08
Posts: 145
Credit: 328,473,995
RAC: 0
Message 24575 - Posted: 24 Apr 2012, 10:56:25 UTC - in response to Message 24547.  
Last modified: 24 Apr 2012, 10:56:56 UTC

http://www.hardware.fr/news/12254/nvidia-gk110-7-milliards-transistors-gtc.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+hardware%2Ffr%2Fnews+%28HardWare.fr+-+News%29

"As we noted in our review of the GeForce GTX 680, it is not based on the largest GPU family Kepler, which was delayed or replaced for unknown reasons, probably related to the problem of making the 28 nanometer process.

While Nvidia decided to introduce its premium GeForce GTX 600 cards on the "small" GK104, originally intended for a lower segment, this does not mean that the largest GPU of the family has been cancelled. Even if it is not strictly necessary in terms of performance, Nvidia cannot do without it for the professional market, especially high-performance computing. The GK104 makes many compromises at that level: while it can offer excellent performance in video games, that is no longer the case when it is used as a massively parallel accelerator.

Just as the GK104 (GTX 680) is really the successor of the GF104/114 (GTX 460/560 Ti), the GK110 is the true successor of the GF100/110 (GTX 480/580). Its task will be to push raw performance, which will also benefit graphics, and to continue the evolution towards greater flexibility for massively parallel computing.

Beyond its obvious purpose, little information about this GPU has circulated to date. As a teaser, however, Nvidia has slipped some details into the session descriptions for the GPU Technology Conference (GTC), to be held in San José from 14 to 17 May:

In this talk, individuals from the GPU architecture and CUDA software groups will dive into the features of the compute architecture for "Kepler" - NVIDIA's new 7-billion transistor GPU. From the reorganized processing cores with new instructions and processing capabilities, to an improved memory system with faster atomic processing and low-overhead ECC, we will explore how the Kepler GPU achieves world-leading performance and efficiency, and how it enables wholly new types of parallel problems to be solved.
Although the GK110 is not named directly, Nvidia is talking about a new Kepler GPU with no less than 7 billion transistors! As a reminder, the GK104 makes do with 3.5 billion, while the largest AMD GPU, Tahiti, shows "only" 4.3 billion on the counter.

Compared to the GK104, many features strictly related to professional computing will be introduced. Nvidia mentions an improved memory subsystem with more efficient atomic operations and a lower overhead for ECC, which already existed on the GF100/110 but at a relatively high cost. The new GPU will also offer very high double-precision compute capability, whereas on the GK104 it is anecdotal, with performance equivalent to 1/24th of single precision. Nvidia has previously also highlighted a tripling (x3) of energy efficiency in double-precision computation when positioning Kepler relative to Fermi.

Is the GTX 680 already being replaced after 2 months? Not really: barring a huge surprise, the GeForce GTX 690 to be presented shortly will be a dual-GPU card based on the GK104. GK110-based GeForce cards will undoubtedly emerge (GTX 685? GTX 780?), but probably not before the autumn. For now, Nvidia above all wants to prepare the professional market for its arrival, since development cycles there are very long, and to convince it of the benefits of GPU computing, at a time when Intel's answer, Knights Corner and the MIC architecture, is not far off.

In any case, we will be at GTC and we will certainly bring you all the information about the new GPUs and the CUDA 5 platform!"
5pot

Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Message 24594 - Posted: 26 Apr 2012, 13:47:46 UTC
Last modified: 26 Apr 2012, 13:55:39 UTC

I MAY have found something interesting, unless this is old news... But what do you notice that's interesting about this pic from Microcenter (probably old, and they forgot to change it)?


http://www.microcenter.com/storefronts/nvidia/GTX_680%20promo/index.html

EDIT: It's either a 680 4GB reference version or............

2nd EDIT: http://www.techspot.com/news/47898-nvidia-reclaims-performance-crown-with-geforce-gtx-680.html

Same pic, but can you spot the difference?
Profile Retvari Zoltan
Joined: 20 Jan 09
Posts: 2380
Credit: 16,897,957,044
RAC: 0
Message 24596 - Posted: 26 Apr 2012, 16:10:24 UTC - in response to Message 24594.  

There is an 8+6 pin PCIe power connector on the first image, while there is only a 6+6 pin one on the second.
Neither of these pictures is an actual photograph of a real product; they are just rendered images to tease your mind :)
5pot

Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Message 24599 - Posted: 27 Apr 2012, 3:44:11 UTC

And tease they did... I knew it was a 680, but with all the talk of them switching the 670 to the 680, or whatever the rumour mill was saying when it first came out, I still found it interesting nonetheless. Still looking forward to the release date though. :)
5pot

Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Message 24616 - Posted: 28 Apr 2012, 14:03:44 UTC

In 12 hours, it appears they will be releasing (a paper release at the 680 rate) the lower-end Keplers. Could be the 690 but I doubt it. Should be interesting to see the specs.
Profile Zarck

Joined: 16 Aug 08
Posts: 145
Credit: 328,473,995
RAC: 0
Message 25100 - Posted: 15 May 2012, 22:21:20 UTC

5pot

Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Message 25104 - Posted: 16 May 2012, 4:29:32 UTC

Well, here's the GK110, Tesla-style: http://www.brightsideofnews.com/news/2012/5/15/nvidia-tesla-k20-ie-gk110-is-71-billion-transistors2c-300w-tdp2c-384-bit-interface.aspx

For whoever is good with math and figuring out specs based on the known Kepler config: I would love to know how something like this might be configured, its "guessed" specs, etc. It won't be coming out until 4Q 2012, so a 7xx-series GK110 is possible WAY WAY down the road (next spring maybe), I would ASSUME. Enjoy.
5pot

Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Message 25121 - Posted: 17 May 2012, 3:46:43 UTC

And here's the whitepaper. Would love to know if any of those interesting new features (doubt we'll get them) would be beneficial to this project. Also, what do you guys think: could this be what they release for us as a 780 next year?
http://www.nvidia.com/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf
Profile Retvari Zoltan
Joined: 20 Jan 09
Posts: 2380
Credit: 16,897,957,044
RAC: 0
Message 25123 - Posted: 17 May 2012, 7:09:22 UTC - in response to Message 25121.  

...what do you guys think, could this be what they release for us as a 780 next year?

They released a modified GTX 690 as a compute card, so I don't think they will do the opposite with GK110.
BTW, looking at the GK110 architecture, I can see that it is superscalar (in single precision) just like the GK104, so only 960 of its 2880 shaders could be utilized by GPUGrid.
Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist

Joined: 14 Mar 07
Posts: 1958
Credit: 629,356
RAC: 0
Message 25124 - Posted: 17 May 2012, 7:09:26 UTC - in response to Message 25121.  

I would not be surprised if we got a 2x speedup compared to a GTX 680.

gdf
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 25125 - Posted: 17 May 2012, 7:33:56 UTC - in response to Message 25121.  
Last modified: 17 May 2012, 7:50:57 UTC

GK110 looks like it's basically GK104 but with an increase in registers/thread, Hyper-Q and Dynamic Parallelism.
I think this GPU will offer a lot to researchers generally, but my fear is that it might not turn up in a GeForce GTX card, or if it does too much will be trimmed from it.
Anyone for 'Big Data'?
Massive scalability through network clustering is arriving. Let's hope the rest of the technology can keep up. Of course this would be aimed more at data centers and studios than research centers, but for those with the resources, still very useful.

To more fully utilize its resources (and GK104's), GPUGrid would need to redesign the apps/research methods (around Hyper-Q for the GK110).
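
For those wondering what such a redesign might look like in code: below is a minimal sketch of Dynamic Parallelism, i.e. a kernel launching another kernel on the device, as described in the GK110 whitepaper. The kernel names and launch sizes are made up purely for illustration. Hyper-Q, by contrast, needs no new syntax; it is the hardware's 32 work queues letting independent streams (or MPI processes) genuinely run concurrently instead of serializing.

```
#include <cstdio>

// Hypothetical child kernel: refine one region the parent found interesting.
__global__ void refineRegion(int region)
{
    printf("refining region %d, thread %d\n", region, threadIdx.x);
}

// Parent kernel: on GK110 (compute capability 3.5) a kernel can launch further
// kernels itself, so data-dependent work needs no round trip to the CPU.
__global__ void coarseSearch()
{
    if (threadIdx.x == 0)                  // one device-side launch per block
        refineRegion<<<1, 64>>>(blockIdx.x);
}

int main()
{
    // Build with: nvcc -arch=sm_35 -rdc=true example.cu -lcudadevrt
    coarseSearch<<<4, 128>>>();
    cudaDeviceSynchronize();
    return 0;
}
```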

I don't see any reason, other than supply, that these could not be released this year, and I think the Autumn suggestions sound reasonable.
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 25126 - Posted: 17 May 2012, 9:53:48 UTC - in response to Message 25123.  

If I were nVidia I would surely put this chip on a Geforce. Even if this wouldn't make much sense, some people would buy it to have the latest and greatest. Castrate DP, as usual, leave out ECC memory, but give it all the rest. Make it expensive, if you must.

BTW looking at the GK110 architecture, I can see that it is superscalar (in single precision) as well as the GK104, so only 960 of its 2880 shaders could be utilized by GPUGrid.

Isn't that "1920 out of 2880"? Assuming the superscalar capabilities still can't be used (newer architecture, newer software... not sure this still holds true for Kepler).
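
For what it's worth, here's where 1920 comes from. This is only a sketch, assuming the full-GK110 layout from the whitepaper (15 SMX units, 192 FP32 cores and 4 warp schedulers each) and assuming the code exposes no instruction-level parallelism for the schedulers' second dispatch ports:

```
#include <cstdio>

int main()
{
    const int smx        = 15;   // SMX units on a full GK110 (per the whitepaper)
    const int cores_smx  = 192;  // FP32 cores per SMX
    const int schedulers = 4;    // warp schedulers per SMX
    const int warp       = 32;   // threads per warp

    int total  = smx * cores_smx;          // 2880 shaders in total
    // Without ILP each scheduler feeds only one warp's worth of cores per cycle.
    int usable = smx * schedulers * warp;  // 1920 shaders kept busy

    printf("%d of %d shaders busy (%.0f%%)\n", usable, total, 100.0 * usable / total);
    return 0;
}
```

Whether the GPUGrid kernels expose enough instruction-level parallelism to use the remaining third is the open question.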

MrS
Scanning for our furry friends since Jan 2002
5pot

Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Message 25165 - Posted: 19 May 2012, 0:18:38 UTC
Last modified: 19 May 2012, 0:24:54 UTC

From Amorphous@NVIDIA: he saw us discussing the K10 and K20 with regard to how this "confirmed" that GK104 was originally the 660 Ti (680) and 660 (670).

His response, "Pssst. I've been saying that the GK104 has always been intended as our flagship GPU since launch. :thumbup: You're only willing to accept insider information when it confirms your belief."

So with this in mind, I personally still do not see a 685 coming out this year. Maybe next year (780), but not 2012. They're going to be making WAY too much money off selling K20s.
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 25166 - Posted: 19 May 2012, 7:31:29 UTC - in response to Message 25165.  
Last modified: 19 May 2012, 7:41:33 UTC

Well, maybe the cloud will gobble up all the K20s, leaving us research enthusiasts floundering. It will be interesting to see how they perform, with 1/3 (I think) FP64/DP rate, against the HD7970s and probably some dual-GPU AMD version.
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 25178 - Posted: 20 May 2012, 17:08:00 UTC
Last modified: 20 May 2012, 17:08:18 UTC

K20s will not be for us. They may be a lot more efficient at Milkyway than other nVidia chips... but they're still way too expensive to consider over AMDs.

However, GK110 based Geforce should arrive in 2013.

MrS
Scanning for our furry friends since Jan 2002
Profile Retvari Zoltan
Joined: 20 Jan 09
Posts: 2380
Credit: 16,897,957,044
RAC: 0
Message 25180 - Posted: 20 May 2012, 23:15:11 UTC - in response to Message 25178.  

GK110 based Geforce should arrive in 2013.

I wouldn't consider this source trustworthy. AMD could perhaps force nVidia to release Big Kepler as a GeForce card by releasing a new GPU series, but it would only be a prestige card (to keep the "fastest single-chip GPU card manufacturer" title at nVidia), because it wouldn't be much faster than a GTX 690 while a GK110 chip costs more to produce than two GK104 chips. I think there is no sense in releasing a card which is not faster than the GTX 690 while costing more to produce (and, last but not least, shrinking the supply of Teslas and Quadros). There is only one way for nVidia to top the GTX 690: releasing a GeForce card with two GK110s on it. Now that would be a miracle.
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 25183 - Posted: 21 May 2012, 11:09:15 UTC - in response to Message 25180.  

You can trust that source as far as it says "someone at the conference (presumably nVidia) said they're planning to introduce GK110-based GeForces in 2013". Obviously nVidia could change their mind any day.

Whether a GK110-based Geforce makes sense is difficult to tell now. Performance depends on yield (= number of functional units active) and final clock speed, neither of which nVidia can know exactly yet. The raw gaming power will not be higher than that of 2 GK104 chips. However, dual-chip SLI doesn't yield a 100% benefit, let alone 3- and 4-chip configurations. If you compare 4 GK104 to 2 GK110, such a card should start to make sense: you get better performance (due to the scaling issue with 4 smaller chips), reduce the amount of micro stutter and can use 50% of the installed memory rather than 25%.
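
A quick illustration of that memory point, assuming alternate-frame rendering where every GPU holds a full copy of the working set (the per-GPU memory sizes below are hypothetical, just to show the ratio):

```
#include <cstdio>

int main()
{
    // With AFR the data is duplicated on every GPU, so usable memory equals
    // the memory of a single GPU, regardless of how many are installed.
    const double gb_per_gk104 = 2.0;   // hypothetical
    const double gb_per_gk110 = 4.0;   // hypothetical

    printf("4x GK104: %.0f GB usable of %.0f GB installed (%.0f%%)\n",
           gb_per_gk104, 4 * gb_per_gk104, 100.0 / 4);
    printf("2x GK110: %.0f GB usable of %.0f GB installed (%.0f%%)\n",
           gb_per_gk110, 2 * gb_per_gk110, 100.0 / 2);
    return 0;
}
```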

If history has told us anything: if they introduce such cards there will be people buying them.

Regarding "taking chips away from the Tesla and Quadro supply": that's why we're talking about some time in 2013. The Geforce won't arrive before they can satisfy demand for the more expensive versions. Or until they need to paper-launch a new flagship product ;)

MrS
Scanning for our furry friends since Jan 2002
Profile Carlesa25
Joined: 13 Nov 10
Posts: 328
Credit: 72,619,453
RAC: 0
Message 25187 - Posted: 21 May 2012, 17:58:32 UTC

5pot

Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Message 25188 - Posted: 21 May 2012, 18:55:14 UTC

It's well known that the Radeon is faster. That's not the issue; the issue is coding in OpenCL. AMD does not help the researchers code their work the way NVIDIA (CUDA) does. From what I've read and heard, you're on your own. This is one reason why many prefer CUDA: they get help when they ask.

Again, from what I've heard.