Message boards : Graphics cards (GPUs) : Fermi
Joined: 4 Apr 09 · Posts: 450 · Credit: 539,316,349 · RAC: 0
The GPUs that NVidia produces are an entirely different product from their system chipsets. There is nothing Intel is doing to stop dedicated GPU cards made by NVidia or any other company from working in our PCs. While some Intel CPUs do have integrated graphics, so you don't need a dedicated graphics card, that really doesn't matter to us. Larrabee failed, and Intel currently has no public plans for a dedicated graphics card.

Thanks - Steve
Zydor · Joined: 8 Feb 09 · Posts: 252 · Credit: 1,309,451 · RAC: 0
That was an interesting week's discussion ..... :)

Putting aside the technical possibilities that may or may not present themselves to NVidia for averting the impending Fermi disaster - and that's still my personal view - there are the very real and more pertinent commercials that have to be taken into account. ATI is already one step ahead of NVidia; they were mid last year, let alone now. By the end of this year they will be two generations ahead, and the Fermi hassles will not be resolved until early 2011 (aka Fermi2). Even if we assume NVidia has found a hidden bag of gold to fund their way out of this, and that they will not repeat the crass technical directions they have taken in terms of architecture - a huge leap of faith given their track record over the last three years - by mid 2011 ATI will be bringing out cards three generations ahead.

Only the 28nm process will give NVidia breathing space, not just architecturally but also in terms of implementing DX11. At present they are nowhere near DX11 capability, and will not be for most of 2010. So come 2011, why should anyone buy a product that is three steps behind the competition, has immature DX11 abilities, and still has no prospect - at least none we know of - of adjusting its core architecture direction fast enough, and in one huge bound, to be competitive? Then to cap it all, NVidia will be strapped for cash; 2010 will be a disaster year for income from GPU cards.

Technically there may be an outside chance of them playing catch-up - personally I doubt it - but that aside, they are in a mega investment black hole and I can't see them reversing out of it. With such a huge technology gap between ATI and NVidia about to unfold, I really can't see mass take-up of a poorly performing Fermi card, even if they could afford to subsidise them all year. There is the age-old saying "When you are in a hole, stop digging". NVidia is not in a hole, it's in a commercial ravine. Technology possibilities are one thing, especially given their abysmal track record in recent years; commercial reality is quite another.

Regards
Zy
liveonc · Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
Indeed it's been interesting. But most of the things discussed were things that nVidia can choose to go with in the future. That Fermi is in trouble because it requires lots of power, produces lots of heat, & that only 4% of their 40nm wafers are usable with such high requirements makes me think that maybe the 4% is of "extreme edition" grade, & that the real problem might be the power requirement & therefore also the heat produced. The only workaround is placing the GPU outside, so that space won't be an issue, & therefore neither will power or heat dissipation. If nVidia can only use & sell their purest chips (at a loss), they will empty out their coffers, & unless they find a leprechaun at the end of the rainbow with his pot of gold, they will be in trouble. If nVidia takes these 4% & uses them for workstation GPUs, & salvages as much of the rest as possible to make external mainstream GPUs, they might be able to cut losses until they get to 28nm. I like nVidia, but I also like ATI pressuring nVidia.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
The anticipated supply limitations for Fermis will mean that the cards will not be available in Europe for some time. I expect the cards will first be available in the USA, Canada, Japan and a few hotspots in Asia. They may well make their way to Australia before Europe. No doubt they will arrive in the more affluent parts of Europe first too, so Germany, France and the UK may see them a few weeks before Spain, Portugal and the more easterly countries, if they even make it that far.

As the numbers are expected to be so few, 5K to 8K, very few cards will make it to GPUGrid crunchers in the near future. So if anyone in the USA or Japan gets one early on, please post the specs, performance and observations, as it will help other crunchers decide whether to get one or not (should they ever become available in other areas). If they do turn up and you want one, grab it quickly (as long as it gets a good review)!

Let us all hope they get really bad reviews, or the gamers will buy them up and very few will make it to GPUGrid!
Beyond · Joined: 23 Nov 08 · Posts: 1112 · Credit: 6,162,416,256 · RAC: 0
> Let us all hope they get really bad reviews, or the gamers will buy them up and very few will make it to GPUGrid!

While I'm sure you're just kidding, if this does happen it will simply be another big nail in NVidia's coffin. We need 2 viable GPU companies to keep advances coming and prices down.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
We need 2 high-end competitors! Only that way will new technologies advance progressively. For the time being ATI do not have a competitor! So will an average review change anything? A bad review might be just what they need to jolt them back on track. Let's face it, Fermi is a loss leader that will depend on a gimmick sales pitch that it outperforms ATI in some irrelevant areas of gaming.

The problem is that anyone can compete at the low end, even Intel! But if there are several low-end manufacturers eating up the profit margins of the high-end GPU manufacturers, the latter start to look over their shoulder and perhaps panic, rather than looking to the future.

By the way, nice link RobertMiles.
http://www.brightsideofnews.com/news/2010/2/26/asus-to-introduce-dual-fermi-on-computex-2010.aspx
Asus are going to limit their super Republic of Gamers HD5970 Ares dual-GPU card to 1,000 units. If they try a similar trick with Fermis they may find it difficult to get their hands on enough GPUs!
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Guys, I don't get it: why are (some of) you talking as if shrinking GF100 to 28 nm was nVidia's only option? Just because Charlie said so?

Like I said before: IMO all they need is a Fermi design of a sane size, a little over 2 billion transistors and with high-speed 256-bit GDDR5. Take a look at the GT200 on the GTX 285: it's one full technology node behind Cypress and has 50% fewer transistors, so it should be at least 50% slower. How often do you see Cypress with more than a 50% advantage? It happens, but it's not the average. I'd say that's a good starting point for a Fermi-based mainstream chip of about Cypress' size. nVidia said they'd have something mainstream in summer. I'm still confident they won't screw this one up.

BTW: the 4% yield is probably for fully functional chips. And it is probably already outdated. Yields change weekly, especially when the process is not yet mature.

MrS
Scanning for our furry friends since Jan 2002
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
Some time ago I read 9% yield. Shrink the transistors and the yield should go up by the square. Fermi's low yield is the result of 3 main issues: a change to a larger die design, a jump to 40nm, and 3 billion transistors. If they made a smaller die and reduced the transistor count, mainstream cards would flow out of the factories.

My argument was that if they had done this, even with the GTX 285 chip design (6 months ago), they would have been producing something competitive. Obviously they should not do this now, because they have Fermi technologies, but I agree they have to shrink the transistor count, now! In 18 months or 2 years they may be able to use 28nm, but by then even Fermi will be dated (mind you, that never stopped NVidia from using dated designs in the past).
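A rough back-of-the-envelope way to see the "yield goes up by the square" intuition: under a simple Poisson defect model, per-die yield is about exp(-defect density x die area), so halving the die area turns the yield into its square root - equivalently, the big die's yield is the square of the half-size die's. The defect density and die sizes in this sketch are illustrative assumptions only, not TSMC or NVidia figures.

```cpp
#include <cmath>
#include <cstdio>

// Toy Poisson yield model: yield = exp(-defect_density * die_area).
// Defect density and die areas are illustrative assumptions only.
int main() {
    const double defects_per_cm2 = 0.5;
    const double big_die_cm2  = 5.3;    // a GF100-sized die, roughly 530 mm^2
    const double half_die_cm2 = 2.65;   // the same design at half the area

    const double y_big  = std::exp(-defects_per_cm2 * big_die_cm2);   // ~7%
    const double y_half = std::exp(-defects_per_cm2 * half_die_cm2);  // ~27%

    std::printf("big die yield:  %4.1f%%\n", 100.0 * y_big);
    std::printf("half die yield: %4.1f%%\n", 100.0 * y_half);
    // Note y_big == y_half * y_half: under this model the big die's yield
    // is the square of the half-size die's yield.
    return 0;
}
```

A full node shrink fits this picture: going from 40nm to 28nm scales linear dimensions by 28/40, so die area drops to roughly (28/40)^2, which is about half.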
liveonc · Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
> Guys, I don't get it: why are (some of) you talking as if shrinking GF100 to 28 nm was nVidia's only option? Just because Charlie said so? Like I said before: IMO all they need is a Fermi design of a sane size, a little over 2 billion transistors and with high speed 256 bit GDDR5.

But if they've already gone into the production phase, it's too late, isn't it, to modify the design one way or the other? The only thing they can do now is make chips & put them on bigger boards. 28nm is the future, but isn't changing the design of Fermi also in the future? I don't have the knowledge that you guys have, & you can read in my wording that I don't. But I do know that when a product goes into the production phase, you can't go back to the drawing board & redesign everything. The only thing you can do is find a way to make what you've got work. They're already doing the same thing with Fermi as they did with the GTX 260 - which was a less powerful GTX 280. If nVidia wants to launch the low-mid range before they can make a workable high end, it's just stalling, & they can't stall for too long.

But if redesigning the PCB is easier & faster, then IMO it's the only way to get the original Fermi they wanted from the very start. Here I'm thinking about 3 things. The first is to place the GPU outside the PC casing. The second is to go with water & bundle Fermi cards with "easy" water cooling kits that function like the Corsair Hydro: http://www.corsair.com/products/h50/default.aspx Or thirdly, make a huge 3-4 slot GPU card with 80mm fans blowing cool air from the front of the casing out the back, instead of the blower they're using now, & using a tower heatsink (but it's upside down in most casings, isn't it?).
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
NVidia design chips, it's what they do, and they are good at it. Unfortunately they overstretched themselves and started to pursue factory mods to manufacture larger numbers of 40nm GPUs. They tried to kill 2 birds with one stone, and missed, lol. NVidia can, and I think will, produce a modified version of Fermi with a reduced transistor count (it's what they do), but in my opinion they should have done this earlier.

NVidia have already produced low to mid range products: GT 210 & Ion (low and very low), GT 220 (low to mid), GT 240 (mid range), but then there is a gap before you reach the end-of-line GTX 260s (excluding any older G92-based GPUs that are still being pushed out). By the way, I have tried a GTS 240; it uses G92b (like the GTS 250) and basically does not work well on GPUGrid, ditto for several other mid to high end G92-based cards. I'm not a gamer.

When you see Asus redesign PCBs and add a bigger, and in this case better, heatsink and fan, it is no wonder they get much better results. So there is more to be had! They have made 2 vastly more powerful GPUs and both create less noise! Well done, twice. Tackle Fermi too, please. NVidia need that sort of help.
robertmiles · Joined: 16 Apr 09 · Posts: 503 · Credit: 769,991,668 · RAC: 0
> That was an interesting week's discussion ..... :)

I'd say it's more like ATI is one step ahead in making GPU chips for graphics applications, but at least one step behind in making GPU chips for computation purposes and in offering the software needed to use them well for computation. This could lead to a splitting of the GPU market, though, with commercial consequences almost as bad as you predict. It could also mean a major setback for GPU BOINC projects that aren't already using ATI graphics cards at least as well as Nvidia graphics cards, or well on the path to getting there; and the same for all commercial operations depending on the move to GPU computing.

Zy, I suppose you'd automatically agree with all of ATI's rather biased opinions of the Fermi, if you haven't done so already:
http://www.brightsideofnews.com/news/2010/2/10/amd-community-has-29-tough-gf100-questions-for-nvidia.aspx
robertmiles · Joined: 16 Apr 09 · Posts: 503 · Credit: 769,991,668 · RAC: 0
> But if they've already gone into the production phase, it's too late, isn't it, to modify the design one way or the other? The only thing they can do now is make chips & put them on bigger boards. 28nm is the future, but isn't changing the design of Fermi also in the future? I don't have the knowledge that you guys have, & you can read in my wording that I don't. But I do know that when a product goes into the production phase, you can't go back to the drawing board & redesign everything. The only thing you can do is find a way to make what you've got work. They're already doing the same thing with Fermi as they did with the GTX 260 - which was a less powerful GTX 280. If nVidia wants to launch the low-mid range before they can make a workable high end, it's just stalling, & they can't stall for too long. But if redesigning the PCB is easier & faster, then IMO it's the only way to get the original Fermi they wanted from the very start.

It's the production phase of THAT CHIP, and too late to change the design of THAT CHIP in time to help. But are you sure they aren't already working on another chip design that cuts down the number of GPU cores enough to use perhaps half as many transistors, but is similar otherwise?

For many products I'd think more along the lines of putting the GPU outside the computer's main cabinet, but also adding a second CPU and second main memory close to it to provide a fast way of interfacing with it. This second cabinet should not even try to follow the ATX guidelines. For some others, don't even try to make the boards plug into all the card slots they block other cards from using; this should leave more space for a fan and a heatsink. Or, alternatively, make some of the cards small enough that they don't even try to use the card slots they plug into for more than a way to get power from the backplane. These products could still plug into normal motherboards, even if they don't fully follow the guidelines for cards for those motherboards. Your plans sound good otherwise.

As for using Fermi chips that don't have all the GPU cores usable, are you sure they don't have a plan to disable the unusable cores in a way that will stop them from producing heat, then use them in a board family similar to the GTX 260 boards, which also used chips with many of the GPU cores disabled? That's likely to mean a problem for some software written assuming that all GPU cores are usable, though, and therefore difficult to use in creating programs for GPU BOINC projects.
robertmiles · Joined: 16 Apr 09 · Posts: 503 · Credit: 769,991,668 · RAC: 0
An idea for Nvidia to try for any newer boards meant to use chips with some GPU cores disabled: add a ROM to the board design that holds information about which GPU cores are disabled (if the Fermi chip design doesn't already include such a ROM), then make their software read this ROM just before making a last-minute decision about which GPU cores to distribute the GPU program among on that board. Require any other providers of software for the new boards to state, in their public offerings, whether their software can also handle a last-minute decision of which GPU cores to use. This could mean a new Nvidia application program specifically for taking GPU programs compiled in a way that does not specify which GPU cores are usable, and distributing them among the parts of the GPU chip that actually are usable.
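For what it's worth, something along these lines already exists on the software side: the CUDA runtime reports how many multiprocessors a given board actually has enabled, so an application can size its work at launch time instead of assuming a full chip. A minimal sketch of that query (illustrative only - it says nothing about how NVidia fuses cores off internally):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Queries how many multiprocessors each installed NVidia board reports as
// enabled, using the standard CUDA runtime API (compile with nvcc).
int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("no CUDA device found\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // A full GF100 would report 16 multiprocessors (512 shaders);
        // a salvaged part with two blocks fused off would report 14 (448).
        std::printf("%s: %d multiprocessors enabled\n",
                    prop.name, prop.multiProcessorCount);
        // A project application could size its work against this number
        // instead of assuming the full complement of cores is present.
    }
    return 0;
}
```

In practice the hardware's block scheduler already spreads a kernel's work over however many multiprocessors are present, which is why the GTX 260-style salvage parts could run the same binaries as the full chips.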
robertmiles · Joined: 16 Apr 09 · Posts: 503 · Credit: 769,991,668 · RAC: 0
Do you suppose that Intel's efforts against Nvidia are at least partly because Intel plans to buy Nvidia and convert them into a GPU division of Intel, but wants to drive down the price first?
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Of course they're disabling non-functional shaders, just as in all previous high-end chips. The GTX 470 is rumored to have 448 shaders instead of 512 (2 of its 16 blocks of 32 shaders disabled).

And as mighty as Intel is, I don't think they've got anything to do with the current Fermi problems. Denying the QPI license, yes, but nothing else.

MrS
Scanning for our furry friends since Jan 2002
Joined: 30 Jul 09 · Posts: 21 · Credit: 7,081,544 · RAC: 0
News from CeBIT: heise.de (in German)

In short:
- one 6-pin and one 8-pin power connector
- GTX 480 might have fewer than 512 shaders
- problems with drivers
- availability: end of April

Michael
Team Linux Users Everywhere
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Thanks! That means:
- 225 - 300 W power consumption, as 2 x 6-pin isn't enough (not very surprising..); see the sum below
- the early performance numbers are to be taken with a grain of salt, as the drivers are not yet where they should be

MrS
Scanning for our furry friends since Jan 2002
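For anyone wondering where the 225 - 300 W bracket comes from, it falls straight out of the PCI Express power limits: up to 75 W from the slot, 75 W per 6-pin connector and 150 W per 8-pin connector. A quick sum:

```cpp
#include <cstdio>

// PCI Express board power ceilings: up to 75 W from the slot,
// 75 W per 6-pin plug and 150 W per 8-pin plug.
int main() {
    const int slot = 75, six_pin = 75, eight_pin = 150;
    std::printf("2 x 6-pin ceiling:     %d W\n", slot + 2 * six_pin);          // 225 W
    std::printf("6-pin + 8-pin ceiling: %d W\n", slot + six_pin + eight_pin);  // 300 W
    return 0;
}
```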
Zydor · Joined: 8 Feb 09 · Posts: 252 · Credit: 1,309,451 · RAC: 0
> ...... Zy, I suppose that you'd automatically agree with all of ATI's rather biased opinions of the Fermi ......

I would expect ATI Marketing to go on a feeding frenzy over this; it would be surprising if they did not. No, I don't automatically go with the ATI viewpoint, mainly because I have been an NVidia fan for years - since they first broke into the market. I am the last one to want them to back out now. However, NVidia fan or not, they have to come up with the goods and be competitive; it's not a charity out there.

As for ATI articles etc, I treat those the same as NVidia articles - all pre-cleared by marketing and not worth the paper they are printed on until reality appears in the marketplace. Corporate press releases and articles these days are comparable to political statements released by mainstream political parties - the truth is only coincidental to the main objective of misleading the less well informed. In other words, I now view both genres as automatic lies until corroborated by other sources or methods.

As GDF wisely stated above, we will know in a month or so what's true ..... If NVidia come up with a good design on general release (not a short-term subsidised PR stunt), they get my £s; if they don't, ATI will, for the first time since NVidia appeared in the consumer/gamer end of the GPU market.

Regards
Zy
liveonc · Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
For all those curious, here's a look at the goodies: http://www.theinquirer.net/inquirer/blog-post/1594686/nvidia-nda-broken-topless-gtx480-pics
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
They should be ashamed of stripping this poor little GTX naked and revealing its private parts to the public!

MrS
Scanning for our furry friends since Jan 2002