Message boards : Graphics cards (GPUs) : Fermi
Beyond · Joined: 23 Nov 08 · Posts: 1112 · Credit: 6,162,416,256 · RAC: 0
Here's a provocative yet interesting article concerning the GF104 - GF108 chips. Take it with a grain of salt as SemiAccurate sometimes lives up to its name: http://www.semiaccurate.com/2010/06/21/what-are-nvidias-gf104-gf106-and-gf108/
liveonc · Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
I beg to differ. Becoming general purpose does lose out on specific tasks, but just as PCI-E Cell processors are on the way out, general-purpose GPUs are on the way in. Skgiven even posted a PCI-E 1X 220GT with plans for a PCI-E 1X 240GT. As CUDA matures & more people use it, they'd have a market for more than just graphics. & BTW, where is ATI???

Sooner or later there might not even be a point in having that VGA, DVI, or HDMI port on the back, because nobody buying these cards uses them.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
That report is a bit crazy, even for SemiAccurate. There is little point starting a GF104 report by comparing the GF100 architecture to ATI's (it's been done, repeatedly). "Cutting down an inefficient architecture into smaller chunks does not change the fundamental problem of it being inefficient compared to the competition" - how could they know, when they don't have one to compare? They don't even know the GF104 architecture! I think NVidia did a little more than trim the transistor count. Smaller chips, faster frequencies, less leakage; the GF104 should do well, as will the lesser cards in the mid-range market. Saying things like "the GF104/106/108 chips are a desperate stopgap that simply won't work" is just wrong. So, as suggested, I will take that report with a pinch of salt.

I'm expecting a competitive variety of smaller Fermi cards with higher frequencies at lower prices. We are soon to see the second phase of Fermi. No doubt over time there will be more revisions (improvements) and increased variety. In the very long run (perhaps a year or two), a modified Fermi design will move to 28 or 32nm. This will inherently overcome many of the present off-putting aspects of the architecture, should yields be sufficient.

Anyway, I look forward to the first test review of a GF104 card.
MrS · Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
OMG... every time I read an article about nVidia by Charlie I get an urge to punch him in the face. Real hard. Afterwards I want him to write "I will use my brain before I post articles on the internet" on every blackboard I can find. A hundred times, at least.

But seriously.. this report is so bad and apparently so full of blind hatred that it's not even worth going into detail about where he's wrong. I'm not sure anyone had the patience to read it through anyway. He does point out something worthy, though: things are not looking too good for the smaller Fermis. By now we should have seen more of them at Computex. And if nVidia really didn't fix the via issue.. well, they're (apparently) pretty dumb, since ATI showed them how to do it a long time ago.

> That report is a bit crazy, even for SemiAccurate.

Yeah.. what he actually reports is fine, but all the "blubb blubb" and "blah blah" he adds around it is so full of misunderstanding, or maybe deliberate misinterpretation.. it's disgusting.

Edit: read it again. Well.. maybe it's not as bad as I said. Some statements are actually more careful than they appear at first sight - if one takes a closer look.

MrS (disclaimer: my main workhorse is an ATI)

Scanning for our furry friends since Jan 2002
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
I stopped reading before I got halfway through the assault. It does improve, a bit, but he also contradicts himself several times; so he is only wrong half the time :) You just have to work out which half! You might as well do the report yourself, without shouting crazy hatred. I guess NVidia didn't send him a free Fermi, so now he does not want a cut-down version, and he is telling everyone, loudly!

We can only speculate on how right he is about NVidia's finances and the yields, but both are generally considered to be not great. That has been known for some time.

There is an 'alleged' pic of the GF104 chip here: http://forums.techpowerup.com/showthread.php?t=124943

Several sites are now reporting 336 shaders, higher clocks (675MHz is about 11% over the 608MHz of the GTX470 and GTX465) and potential design improvements. Should they manage to get the TDP down to 130-150W, then a dual card would make perfect sense (though I think those might come a bit later). http://www.tomshardware.co.uk/gtx-460-fermi-gf104,news-33758.html

I expect these cards will be excellent for GPUGrid, especially if the shaders overclock to anywhere near the reported 1660MHz. Two of them should outperform a GTX480 for games and probably cost about the same. For most people the existing Fermi cards are too expensive to purchase, and to run, so these cards would be welcomed by many people wanting to crunch here. If a GTX460 can match the performance of a GTX285 (TDP 204W) and does turn up with a TDP of 130-150W, then that is still a big improvement for NVidia. Compared to ATI cards they might not be very competitive, but some competition is better than none!
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
This looks interesting: http://www.gnd-tech.com/main/content.php/238-ASUS-GTX-465-SoftMod
liveonc · Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
Looks very interesting. Nvidia can't shoot themselves in the foot, but Asus can shoot Nvidia in the foot. Not legally, but since Asus sells both Nvidia & Ati, the bottom line being to sell Asus cards, as long as Nvidia can't sue Asus for shooting them in the foot, then it's OK??? ;-) It's good for consumers, if it's true, & I hope other manufacturers follow with creative ways of shooting both Nvidia & Ati in the feet.

> So the SoftMod increased the amount of CUDA cores from 352 to 448. This caused an increase in pixel fillrate and texture fillrate. Normally doing such a thing would be impossible since the additional clusters of CUDA cores/texture units/raster operators are filled with silicon, rendering them unusable. This is clearly not the case with the ASUS GTX 465 and many other GTX 465s.

Just call me paranoid & a nut, I don't care. I even said that the only buildings that fall nicely are those you plan to take down, & that poppies are worth much more than oil. The only problem with shooting everybody in the feet is if Intel is the one doing all the shooting of Nvidia & Ati, so often & so much, that only Intel survives. Then everybody wins, only so that everybody loses in the end.
MrS · Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Liveonc, you really don't like feet, do you? :D

Regarding the unlocking: very interesting that this can be done via software. I wonder if nVidia allows it deliberately to boost sales and the perception of the GTX465 and Fermi in general? And if Asus went full force, they'd actually give us a tool where you can switch each shader cluster on/off individually. If the first 96 don't work, just try the other ones. Or just 64 - better than none at all. There might be technical reasons forbidding this, though.

@SK: there's also the rumor that the 336 shaders are an artefact of the identification tool not knowing the chip, and that it's actually 384. And that chip should definitely be able to hit 150 W and a little below that, though probably not for the fully fledged, "high"-clocked (1.35 GHz) version. We should see in due time! And a dual-chip card should definitely be possible. The only question is clocks and therefore performance, as the TDP will surely scratch 300 W.

MrS

Scanning for our furry friends since Jan 2002
liveonc · Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
But I do beg to differ. If Asus officially supported this, they'd have to guarantee at least a possibility of success, & would Nvidia be interested in GTX470s being sold as GTX465s? If Asus guarantees anything & it doesn't deliver, there'll be RMA hell. If Nvidia does it unofficially, what's the point in keeping it shady, & where do they save or gain from using 10 memory chips instead of 8, when only 8 are used? Of course they didn't put silicon in to block the unused clusters, but is it even up to Nvidia to do it, if some do it while others don't? Where would they gain from increased sales of the GTX465 if it decreases sales of the GTX470/480, unless they have too many GTX465s & too few GTX470/480s?

Of course it could all be just a lie designed to promote sales of the Asus GTX465. I don't own an Asus GTX465, so I can't know if it's possible or not, but just reading that someone says it's possible might motivate me to buy one, if I can save money, & then I'll know if it's true or BS...
MrS · Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
I understood the article to mean that the soft mod was provided by Asus, but obviously without guarantee. That would be the only way that putting 2 more memory chips on there made any sense. But the article does not actually state this and is suspiciously vague about the origin of the soft mod. They could have just edited the screenshot, made up the benchmarks and shown a photo of a naked GTX470 - I couldn't tell the difference.

MrS

Scanning for our furry friends since Jan 2002
liveonc · Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
I guess you're right that it was Asus. I didn't bother to follow the Chinese link: http://translate.google.com/translate?js=y&prev=_t&hl=da&ie=UTF-8&layout=1&eotf=1&u=http%3A%2F%2Fwww.chiphell.com%2Fportal.php%3Fmod%3Dview%26aid%3D57%26page%3D7&sl=zh-CN&tl=en

It was Asus itself that offered two different versions of the GTX465, so the extra memory is paid for, & so is the card with the potential. The last time I heard of a SoftMod being used was to change an 8800GT into a Quadro FX 3700, or an 8800GT into a 9800GT. I didn't understand why they said you wouldn't lose your warranty, as it's riskier to SoftMod than to flash. But it's only the software part of unlocking that the Chinese site talks about, & not flashing a GTX470 BIOS onto the GTX465.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
I think a software mod is normally much less risky than a flash; if a flash goes wrong your card may not work at all! But to use all the memory chips you would need to flash, and they did not report any findings from a flashed card, rather suggesting it would be the same as a GTX470.

I expect there are many GPUs that don't quite make the cut as GTX470s and end up as GTX465s. Intel have underclocked CPUs in the past just to sell them as lesser CPUs for market segmentation reasons, so I guess that is a possibility (to create a range of cards), as is boosting the reputation of the GTX465 as modifiable. This also reminds me of some AMD CPUs that could have their disabled cores switched back on (X3 to X4).

A while back I read reports of 384 cores for the GTX460, but a GTX465 only has 352 cores! Even for NVidia it would be awkward to have a GTX460 that completely outperforms a GTX465. If the GF104 does have 384 shaders, perhaps they will disable it down to 336 CUDA cores for the GTX460 and keep the best chips for a dual card, to square up to ATI's 5970 dragon (it also breathes fire), or whatever turns up in the near future. We are of course still missing a GTX475, but I expect that might not turn up until late next year (similar to the 65nm to 55nm move).

With only 2 weeks to go before the release starts, we will not have to wait long for confirmation. Should be rather interesting...
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
WRT the GTX 460 (release date 12th July) and crunching here at GPUGrid: how much difference would the higher memory bandwidth of the 256-bit GDDR5 version make over the 192-bit version? If the price difference is not too much, I would say get the wider version, as it will have more potential to be sold on at a later date. If the price difference is significant, say 25%, and the extra RAM makes little difference when crunching here, then it would probably be best to keep the extra money in your pocket.

It will be interesting to find out why the 192-bit version actually exists. I hope it is not just to have a 150W version, rather than the 'now speculated' 160W of the wider card. When you think that a GT240 with only 96 shaders has at least half a gig of RAM to support it, a Fermi with 336 cores only being supported by 768MB seems rather weak.
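As a rough sanity check on that bus-width question, here is a minimal sketch of the theoretical numbers. It assumes both variants ship with the same 900 MHz (3600 MT/s effective) GDDR5 quoted later in the thread; if the 256-bit card gets faster memory the gap only widens. It is host-only C++ and compiles with nvcc or g++:

```cuda
// Hypothetical bandwidth comparison for the two rumoured GTX 460 variants.
// GDDR5 moves (bus width / 8) bytes per transfer at the effective rate.
#include <cstdio>

double bandwidth_gb_s(int bus_width_bits, double effective_mt_per_s) {
    // bytes per transfer * million transfers per second = MB/s; /1000 -> GB/s
    return (bus_width_bits / 8.0) * effective_mt_per_s / 1000.0;
}

int main() {
    const double mem = 3600.0;                 // assumed 4 x 900 MHz GDDR5
    double narrow = bandwidth_gb_s(192, mem);  //  86.4 GB/s
    double wide   = bandwidth_gb_s(256, mem);  // 115.2 GB/s
    printf("192-bit: %5.1f GB/s\n256-bit: %5.1f GB/s (+%.0f%%)\n",
           narrow, wide, (wide / narrow - 1.0) * 100.0);
    return 0;
}
```

At equal memory clocks the wider bus gives roughly a third more bandwidth (equivalently, the 192-bit card has 25% less); how much of that GPUGrid can actually use is the open question.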
MrS · Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Giving it 384 cores and disabling at least one shader cluster probably makes sense to improve yields. And it's hard to predict the impact of the 256 vs. 192 bit memory bus. They probably did it because some chips have defects within the back end containing the memory controllers. It's odd, though, that both the 192 and 256 bit versions are said to feature the same high-end version of the GF104 chip. Normally you'd want to balance raw horsepower and memory bandwidth, so you'd pair the 192 bit bus with either lower clocked or cut-down chips.

And the amount of memory doesn't really matter, as long as it's high enough. Low end chips have traditionally had lots of RAM for simple reasons:
- they use slow RAM, which is cheaper
- the target audience is more likely to be impressed by big numbers

The high end cards need more memory to push higher resolutions and, to some extent, more demanding games. But more shaders does not automatically require more RAM, luckily - otherwise we'd be screwed ;)

The thing is that anything GPUs can do well has to be massively parallel, i.e. you can cut the task into many independent pieces, throw them all at the shaders and collect the results at some point. And for memory usage it does not really matter how many of these pieces you can compute at any single time, i.e. how many shaders you have (it matters for the caches, though), as all results have to be collected in memory at some point anyway. This argument loses validity if you start to execute different programs simultaneously - something Fermi can do, but which is so far completely unknown in the BOINC world.

MrS

Scanning for our furry friends since Jan 2002
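To illustrate that last point about memory usage being independent of shader count, here is a minimal CUDA sketch (the kernel and sizes are made up for illustration, not anything GPUGrid actually runs): the buffers are sized by the problem, and the same launch covers the whole array whether the card has 96 or 336 shaders.

```cuda
// Device memory is set by the problem size n, not by how many shaders
// (or blocks) happen to execute the kernel.
#include <cuda_runtime.h>

__global__ void scale(const float* in, float* out, int n) {
    // Grid-stride loop: any number of threads walks the whole array,
    // so the launch size can match the GPU without changing memory use.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        out[i] = 2.0f * in[i];
}

int main() {
    const int n = 1 << 20;                     // problem size fixes the buffers
    float *in, *out;
    cudaMalloc((void**)&in,  n * sizeof(float));
    cudaMalloc((void**)&out, n * sizeof(float));
    scale<<<64, 256>>>(in, out, n);            // 64 blocks or 640: same buffers
    cudaDeviceSynchronize();
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```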
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
On my GTX470 I ran a Folding WU and a GPUGrid WU at the same time! Just one, mind you :) Then I realised GPUGrid was back online. The Folding WU was a beta app run from a command console. So it's not just theory; both tasks finished OK.

I read that there are two types of GDDR5: one that is only guaranteed up to 4000MHz and one that goes up to 5000MHz (effective rates; quarter them for the actual clock). I wonder if NVidia will use the 4000 parts with the 192-bit version and the 5000 parts with the 256-bit version of the GTX460 (the overclockers' card). We will know in about 8 days.
MrS · Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Oops, you're right: I only had in mind that Fermi can split shader clusters between kernels / programs. That might at some point lead to much better overall GPU utilization and the inclusion of several of the "low utilization" projects like Einstein alongside heavy hitters like GPU-Grid. In that case you really do need to keep them all in memory simultaneously.

However, you can currently also run several programs. They share time slices between each other and in sum are no faster (if not slower) than if run one after the other. You don't have to run them like that.. but it doesn't reduce memory requirements if you do :p

MrS

Scanning for our furry friends since Jan 2002
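The hardware mechanism behind that idea is Fermi's concurrent kernel execution: within a single application, kernels launched into different CUDA streams can overlap on compute capability 2.x devices, instead of time-slicing. A minimal sketch (the kernel is a made-up placeholder, not a real BOINC workload):

```cuda
// Two independent kernels in separate streams; on Fermi they can share
// the GPU's shader clusters instead of running strictly back to back.
#include <cuda_runtime.h>

__global__ void busy_work(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float x = data[i];
        for (int k = 0; k < 1000; ++k)   // keep the SMs busy for a while
            x = x * 1.0001f + 0.0001f;
        data[i] = x;
    }
}

int main() {
    const int n = 1 << 16;
    float *a, *b;
    cudaMalloc((void**)&a, n * sizeof(float));
    cudaMalloc((void**)&b, n * sizeof(float));

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    // Small grids leave shader clusters free, so the second launch can
    // overlap the first on Fermi; older GPUs simply serialize them.
    busy_work<<<16, 256, 0, s1>>>(a, n);
    busy_work<<<16, 256, 0, s2>>>(b, n);

    cudaDeviceSynchronize();
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
```

This only works within one process, which is why separate BOINC applications today still fall back to the time-slicing (and full memory footprint) described above.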
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
The GTX460 is due out in 2 days and some sneak peeks are starting to appear:
http://www.overclockers.com/wp-content/uploads/2010/07/slide2.jpg
http://www.overclockers.com/wp-content/uploads/2010/07/msi_gtx460.jpg

Chip clock: 675MHz, Memory clock: 900MHz, Shader clock: 1350MHz
- Chip: GF104
- Memory interface: 192-bit
- Stream processors: 336
- Texture units: 42
- Manufacturing process: 40nm
- Max. power consumption: 150W
- DirectX: 11
- Shader model: 5.0
- Construction: dual-slot
- Special features: HDCP support, SLI

If the prices are correct (from 175 Euro inc.) then it looks like a much better card than the GTX465 in terms of value for money. The performance is about 3% shy of the GTX465, but it uses 50W less power at stock and should clock much higher. I would still prefer the 1GB version, and I am interested in seeing if these will actually work here at GPUGrid straight out of the box (given the architecture changes).
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
So much for reference speeds:

Palit NE5X460HF1102 GeForce GTX 460
- Chipset manufacturer: NVIDIA
- GPU: GeForce GTX 460 (Fermi)
- Core clock: 800MHz
- Shader clock: 1600MHz
- Stream processors: 336 processor cores

That factory OC'd card is 18.5% faster than the reference design, which would of course make it about 15% faster than a GTX465 (going by some reviews). This card design reminds me of some of the better GT240 designs.
MrS · Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Haha - that clock sure sounds nice! I'd expect it to exceed 150 W at these settings, but it's probably well worth it. It's also nice that the card is the 1 GB 256-bit version. I suppose this is really the one to get. And I'm curious about the number of TMUs. I'd expect there to be 50+ rather than 42, but it all depends on what nVidia decided some months ago.

MrS

Scanning for our furry friends since Jan 2002
MrS · Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
The GTX460 review is up! The abstract sounds positive for a $200 card. Too bad I don't have time to read it right now, gotta go to work *doh*

And it's got 56 TMUs :)

MrS

Scanning for our furry friends since Jan 2002