Message boards :
Graphics cards (GPUs) :
Fermi
| Author | Message |
|---|---|
GDF · Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0 |
In terms of RAC, I would expect a dual-GPU GTX295 to be on the same level as a single-GPU GTX480. gdf |
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0 |
That's along the same lines as my babbling: "a GTX480 could get 90K" and "a GTX295 could get 85K". As I also explained, the GTX480 is a lab card, so a Fermi using CUDA 3.0 would be about 11% slower (82K). I don't think it is babbling to point out that CUDA 3.1 will improve things by about 11%, and that it will be released next month. As for that GTX295 you say "is outproducing any GTX 480 to date": http://www.gpugrid.net/results.php?hostid=56900 — all of the following tasks (Full-atom molecular dynamics v6.72 (cuda), 21 May 2010) ended in "Error while computing":

| Task | Workunit | Sent (UTC) | Returned (UTC) | Run time (s) | CPU time (s) | Claimed credit |
|---|---|---|---|---|---|---|
| 2368139 | 1498742 | 3:53:39 | 8:41:48 | 10.95 | 8.80 | 0.07 |
| 2368121 | 1498726 | 3:53:39 | 3:56:37 | 3.09 | 3.00 | 0.02 |
| 2368111 | 1498720 | 3:53:39 | 3:56:37 | 9.92 | 8.72 | 0.06 |
| 2368109 | 1498719 | 3:49:36 | 3:53:39 | 11.02 | 8.83 | 0.07 |
| 2368098 | 1498712 | 3:49:36 | 3:53:39 | 10.06 | 8.78 | 0.07 |
| 2368090 | 1498706 | 3:49:36 | 3:53:39 | 10.06 | 8.69 | 0.06 |
| 2368050 | 1498678 | 3:35:32 | 3:38:56 | 10.63 | 8.84 | 0.07 |
| 2368049 | 1498677 | 3:35:32 | 3:38:56 | 9.94 | 8.73 | 0.06 |
| 2368043 | 1498674 | 3:34:54 | 3:38:56 | 10.81 | 8.75 | 0.06 |
| 2368020 | 1498659 | 3:30:04 | 3:33:14 | 10.55 | 8.75 | 0.06 |
| 2368017 | 1498656 | 3:30:04 | 3:33:14 | 10.97 | 8.84 | 0.07 |
| 2368003 | 1498649 | 3:23:13 | 3:26:44 | 10.94 | 8.80 | 0.07 |
| 2367976 | 1498629 | 3:22:34 | 3:24:36 | 9.98 | 8.67 | 0.06 |
| 2367960 | 1498617 | 3:23:13 | 3:26:44 | 9.95 | 8.77 | 0.06 |
| 2367933 | 1498599 | 3:16:32 | 3:20:26 | 10.11 | 8.86 | 0.07 |
| 2367905 | 1498576 | 3:20:26 | 3:22:34 | 9.94 | 8.77 | 0.06 |
| 2367895 | 1498567 | 3:16:32 | 3:20:26 | 10.78 | 8.69 | 0.06 |
| 2367865 | 1498546 | 3:16:32 | 3:20:26 | 11.00 | 8.63 | 0.06 |
| 2367814 | 1498513 | 2:31:10 | 2:34:10 | 10.28 | 8.81 | 0.07 |
| 2367795 | 1498502 | 2:31:10 | 2:34:10 | 11.00 | 8.84 | 0.07 |

Unfortunately, this demonstrates my point quite well: reliability is a significant factor in RAC, and that 75K RAC won't last long at that error rate. http://www.gpugrid.net/results.php?hostid=71363 — no errors here! As for my low RAC: again, you need to be running the project for weeks for RAC to average out. My Fermi's RAC is still under 40K, as it has only been in that box for a week, and as I only have a GTX470 it could only reach about 70K anyway. |
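The "weeks to average out" point follows from how BOINC computes RAC: recent average credit is an exponentially weighted average with a one-week half-life, so a newly installed card shows only half its steady-state RAC after the first week. A minimal sketch (the update rule is a simplified form of BOINC's documented averaging; the 70K credits/day rate is just the steady figure discussed above):

```python
import math

HALF_LIFE = 7 * 86400  # BOINC's RAC half-life: one week, in seconds

def update_rac(rac, credit, dt):
    """Decay the old RAC over dt seconds, then fold in the new credit.

    Simplified form of BOINC's exponentially weighted average: work
    done a week ago counts for half as much as work done today.
    """
    decay = math.exp(-math.log(2) * dt / HALF_LIFE)
    return rac * decay + (1 - decay) * credit * (86400 / dt)

# A card steadily earning 70,000 credits/day, reporting once a day:
rac = 0.0
for day in range(1, 22):
    rac = update_rac(rac, 70_000, 86400)
    if day in (7, 14, 21):
        print(f"day {day:2d}: RAC = {rac:,.0f}")
```

With these assumptions the RAC reads 35,000 after one week, 52,500 after two, and 61,250 after three — it never quite reaches 70,000, which is why a week-old Fermi sitting "under 40K" is entirely expected.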
|
Joined: 11 May 10 · Posts: 68 · Credit: 12,355,003,875 · RAC: 5,388,046
|
Wow, my GTX 260 is averaging 16,200 seconds for a 6,755.61-credit TONI_HERGunb WU, but under Vista 64. What are the best clock settings for optimized yet stable output from a GTX 260 (216)? |
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0 |
The reference shader clock for a GTX260sp216 is 1.242GHz. The card you are referring to has shaders at 1.55GHz, making it about 25% faster than reference. I can't see what your shaders are at; perhaps your card is slightly factory overclocked? There is also a difference between using Vista/Win7 and using XP (especially XP SP2). Basically, XP is slightly faster, which would at least account for the extra 4% being achieved by that card. |
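The speedup quoted above comes straight from the shader-clock ratio — a trivial check (both clock figures are the ones quoted in the post):

```python
# Shader clocks in GHz: stock GTX 260 (216 sp) vs. the card in question
reference_shader = 1.242
observed_shader = 1.55

speedup = observed_shader / reference_shader - 1
print(f"shader-clock advantage over reference: {speedup:.1%}")  # → 24.8%
```

Any remaining gap between this ratio and the observed task times is then down to OS overhead, memory clocks, and task mix.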
|
Joined: 11 May 10 · Posts: 68 · Credit: 12,355,003,875 · RAC: 5,388,046
|
My card was clocked at 694/1215(memory)/1512(shader) MHz. It ran absolutely stable. I just tried 718/1548/1550 MHz: after I applied the settings for GPU clock and shaders everything looked normal, but after the new setting for the memory the application crashed. Factory clocking was 576/1000/1242MHz. What settings should I select? Was it a mistake to overclock the memory? |
Beyond · Joined: 23 Nov 08 · Posts: 1112 · Credit: 6,162,416,256 · RAC: 0
|
My card was clocked at 694/1215(memory)/1512(shader) MHz. It ran absolutely stable. I just tried 718/1548/1550 MHz: after I applied the settings for GPU clock and shaders everything looked normal, but after the new setting for the memory the application crashed. Factory clocking was 576/1000/1242MHz. That's my card. It's an MSI, factory OCed to 655 core. All I did was increase the shaders to 1550, so it's at 655/1550/1050. A big difference is that it's running XP. Vista and Win7 are slower on GPUGRID, since XP runs the GPU at a higher usage percentage. Strangely, I've found no other projects that seem to be affected by this slowdown. |
|
Joined: 4 Apr 09 · Posts: 450 · Credit: 539,316,349 · RAC: 0
|
@skgiven ... The numerous errors you posted are a very small view of that card's performance. It crunched 6.03 without error for a very long time. Since switching to 6.72 it was returning 80% successfully, until last night when those errors occurred. That leads me to believe we need more guidance from the project staff on what REALLY works for their application, because the only thing I have done to that card recently is turn the clocks down. Please consider taking more time to do a thorough and impartial analysis instead of posting incomplete data. @GDF - Can you please tell us which card series, with which driver + OS combinations, you have successfully tested the non-Fermi version on in the lab? I know you are all working very hard at testing new technologies, optimizing the applications, trying to raise funds and publish papers, along with running a public project, but I imagine you will see a substantial drop in throughput now that the new version has become the primary app. <sigh> ... I guess I'll have to stop posting for a while ... </sigh> Thanks - Steve |
GDF · Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0 |
The new application uses more GPU RAM, and your card seems to run out of it for some reason. A GTX295 should have sufficient RAM to crunch everything. Are you doing anything special while crunching? gdf @skgiven ... The numerous errors you posted are a very small view of that card's performance. It crunched 6.03 without error for a very long time. Since switching to 6.72 it was returning 80% successfully, until last night when those errors occurred. That leads me to believe we need more guidance from the project staff on what REALLY works for their application, because the only thing I have done to that card recently is turn the clocks down. Please consider taking more time to do a thorough and impartial analysis instead of posting incomplete data. |
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0 |
I am still having problems with the KASHIF_HIVPR tasks, so I would second that request for a list of working drivers. I get this a lot: ERROR: file ntnbrlist.cpp line 63: Insufficent memory available for pairlists. Set pairlistdist to match the cutoff. called boinc_finish Steve is getting the same error, and I have seen it on several different cards and from several different users. For me, 6.10.56, Vista x64, driver 19062 works for most tasks, just not KASHIF_HIVPR. I have tried many versions of BOINC and many drivers. Success is not consistent across platforms and cards, which suggests that the choice of driver will be platform- and card-dependent. It could come down to firmware. For example, I have an XP x86 system with a GT240, using driver 19745 and BOINC 6.10.51 (yes, the one that is supposed to stop crunching GPU tasks randomly, but it works fine for me on one system). No failures of any type, ever! With the 19745 driver I could not crunch any 6.72 tasks on a GT240 Vista x64 system, or any 6.72 tasks on a GTX260 Win7 x64 system. It is also failing on Steve's system (XP SP3) with a GTX295 that is a known good card. A confusing picture that needs to be cleared up. PS: the error message should read "insufficient". |
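For context on that error: molecular dynamics codes build a neighbour (pair) list of all atom pairs within some pairlistdist, which must be at least the interaction cutoff, and the list's memory footprint grows roughly with the cube of that distance — which is why the message suggests shrinking pairlistdist toward the cutoff. A back-of-the-envelope sketch (the ~100 atoms/nm³ density is an illustrative, water-like figure, not an ACEMD internal):

```python
import math

def pairs_per_atom(pairlist_dist_nm, density_per_nm3=100.0):
    """Rough neighbour count per atom within the pairlist sphere.

    The default density is an illustrative assumption; the cubic
    growth of the list with pairlist distance is the point.
    """
    volume = 4.0 / 3.0 * math.pi * pairlist_dist_nm ** 3
    return density_per_nm3 * volume

# Shrinking pairlistdist toward the cutoff shrinks pairlist memory:
for dist_nm in (0.9, 1.0, 1.1):
    print(f"pairlistdist {dist_nm} nm -> ~{pairs_per_atom(dist_nm):.0f} neighbours/atom")
```

Even a small reduction in pairlist distance cuts the per-atom neighbour count noticeably, which is why the suggested workaround can get a memory-starved card over the line.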
|
Joined: 4 Apr 09 · Posts: 450 · Credit: 539,316,349 · RAC: 0
|
The new application uses more GPU RAM and your card seems to run out of it for some reason. A GTX295 should have sufficient RAM to crunch everything. Sounds like something is leaking memory, or is just not releasing it quickly/properly before the next WU starts executing. Maybe introduce a small delay at the beginning of the app, or do something at the end of the app to try to force it to release/recover memory better? The machine in question is dedicated to GPUGrid and WCG. I don't even have a monitor or mouse hooked up to it, so rebooting regularly (which usually helps memory issues) is a pain because the motherboard (x58 EVGA SLI LE) will not boot without a monitor. With that in mind I have moved my GTX480 (which never had problems on Win7 64) onto that machine (WinXP 32) and will monitor it for "insufficient memory" issues. On my second machine, due to the summertime heat coming soon and the added electrical cost of air conditioning, I am now only going to run a GTX285. Thanks - Steve |
Bikermatt · Joined: 8 Apr 10 · Posts: 37 · Credit: 4,428,457,619 · RAC: 508,956
|
So I just noticed a few GTX 465s on Newegg for US $280. I thought I was going to be really tempted by this card, but now I am reading reports of a 200W TDP and I am suddenly very unimpressed. If I am going to have a 200W TDP I would rather go for one of the 470s with a better heatsink so it runs cooler. I was hoping for <150W, so I guess I will wait for the 450s. |
|
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0 |
The card is in spirit rather similar to the 5830: you get the fat chip, but with quite a few units disabled so as to recycle some of the bad chips. And since it's a recycling exercise, they don't bin the chips for low voltage requirements / high clock speed / low leakage; hence the high power consumption. And they don't make them too cheap, so they don't generate too much demand for something which is essentially an unwanted byproduct. MrS Scanning for our furry friends since Jan 2002 |
liveonc · Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
|
Reminds me of my first Celeron (I call it Sillyron), but they dumped those; they don't dump these. Even though I'm a great fan of the GTX260(216), the original one didn't impress me. But isn't it just a matter of time before they mount better (more expensive) heatsinks/fans & drop the prices on these "byproducts"? The original hardcore OC'ers used Celerons & got them to do quite well despite being handicapped, & the Intel chips that give the best bang for the buck are at the lower end of the price spectrum. Maybe a GTX465 with a Coolermaster V10, or a Corsair H50 GPU alternative, might get that sick kitten up & running?
|
|
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0 |
You can switch the heatsink and clock the card up, but you can't make up for the disabled shaders and/or the voltage and leakage. And before you put an expensive heatsink onto a GTX465, it might be better to go for a GTX470 with stock cooling ;) For CPUs it's different, because you can't disable parts of them at as fine a granularity as on GPUs, and because they're not as power-consumption-bound as high-end GPUs. MrS Scanning for our furry friends since Jan 2002 |
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0 |
Good point about the original GTX260 being a poor product and the later GTX260s (55nm) being much better. My last task with my GTX260sp216 (now on XP) suggests that I could get about 40K a day with that card (with modestly overclocked shaders): task 2419426 / workunit 1523141, sent 30 May 2010 10:48:28 UTC, returned 30 May 2010 19:29:12 UTC, Completed and validated, run time 14,236.77 s, CPU time 472.39 s, claimed 4,428.01, granted 6,642.02 credits, ACEMD2: GPU molecular dynamics v6.05 (cuda). (86400/14236)*6642 ≈ 40,300. The reported GTX465 TDP of 200W might (or might not) be slightly higher than what is observed; they will no doubt have some GTX465s that are better than others. So it will be hit and miss whether you get a good GPU or not: lots of leakage, unlucky, high TDP; not so much, lucky, lower TDP and more reasonable value for money. When you consider that a GTX260sp216 (55nm) has a TDP of 171W and a GTX285 (240 shaders) has a TDP of 204W, the GTX465 with 352 shaders at 200W is not that bad! Basically, it should perform 25 to 30% slower than a GTX470, as it has 352 shaders, 96 fewer than the GTX470. We will have to wait and see if the RAM is the same, or if any difference in the RAM affects performance here. The thermal threshold for this 40nm chip is 105C, the same as the GTX470 and GTX480 (they are all Fermis). As I mentioned before, the release retail price is likely to fall over a couple of months. It is in NVidia's interest to have as few of these cards as possible, as they are basically reject 512, 480 and 470 cards. Frankly, I don't see why NVidia did not just release 512, 480, 448, 416, 384, 352, 320, 288 and 256 cards (with the numbers being the shader count). Perhaps they are trying to defy reality, or perhaps the other numbers will appear within the evils of OEM systems? I'm guessing an OEM Fermi will be one to especially avoid! 
As with the original GTX260, I expect many of these cards will be viewed with disdain in the future, but such hindsight might be some way off; I can't see a GTX465 Ver 2 being released for many months, perhaps a year! So if you are considering one now, don't let the prospect of a Ver 2 card deter you. Given that my GTX470 (overclocked to 704MHz) could get 70K per day (or about 60K stock), a GTX465 could only get about 50 to 55K per day if similarly overclocked (or 45 to 50K stock). When CUDA 3.1 is released in a couple of weeks, that will rise by about 11%, which would make it 55 to 60K (overclocked) – about 45% faster than my overclocked GTX260sp216. In terms of the shader count, I think this performance is a bit shy! Hopefully Global Foundries will improve yields, prices will drop, there will be slight core mods, and card design restrictions will be lifted to encourage competition and performance. For now, I know these generic cards have to use the (presently) relatively slower CUDA 3.0, and the apps are only beginning to be honed for these new cards, so it is way too early to judge this or any other Fermi card in terms of its crunching ability here; it would be like judging the GTX200 range by looking at the original GTX260's performance on the original ACEMD app! The present question should be: is the GTX465 worth the money compared to the GTX470? Well, if it is 33% less in terms of price and performs 25 to 30% less, then there is not much in it (a slight positive). The slight negative is that it might use close to 200W, while a GTX470 only has a TDP of 215W. My general (ball-park) advice is that if you are going to spend £300 on a base unit, then £200 on a GPU is about right. If you are going to spend more on the base unit, then get a GTX470 or 480 if you want to crunch here, now. If you buy a base unit for around £200, a GT240 would be the best option, or possibly a cheap second-hand GTX260sp216 (55nm) if you come across one. 
Remember, a high-frequency dual core is much better than a sluggish quad (a 3.3GHz Core 2 Duo is better than a 2.1GHz quad Opteron, for example), as long as you keep a core/thread free for the GPU and optimise (swan_sync=0)! I would really like NVidia to produce some other non-Fermi 40nm cards, but it seems they have put all their eggs into the same basket and intend to just push out more shader-reduced cards such as the GTS430. Unless that uses significantly less power (under 120W – and no more than one power connector), it will be a flop if managed well, and if not, a disaster (the loss of the lower and mid-range market sector); hence my wish for an alternative! As MrS just said, get a GTX470 before you try to pimp up a GTX465 with a specialized heatsink and fan; you will not be able to make up for the missing 96 shaders! |
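The 40K/day estimate in the post above is just a single task's credit rate extrapolated to 24 hours. A small helper makes the arithmetic explicit (run time and granted credit are the task values quoted above; real output will be lower whenever tasks queue, fail, or vary):

```python
def credits_per_day(runtime_s, credit):
    """Extrapolate one task's granted credit to a full 24-hour day,
    assuming back-to-back tasks of the same type."""
    return 86400 / runtime_s * credit

# Run time and granted credit from the GTX260sp216 task quoted above:
est = credits_per_day(14_236.77, 6_642.02)
print(f"estimated output: {est:,.0f} credits/day")
```

This lands at roughly 40,300 credits/day, matching the in-post calculation, and the same helper reproduces the 70K/day GTX470 figure for a task of proportionally shorter run time.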
liveonc · Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
|
HOT, HOT, HOT! http://www.hexus.net/content/item.php?item=24299&page=8 I once thought there was hope for a dual Fermi using one of these: http://www.coolitsystems.com/index.php/en/omni.html but it was struggling with a 16% OC'ed GTX480. Still sub-90s (degrees C), but I can't see how it could cool a dual card even at stock.
|
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0 |
Zotac are about to launch their own design versions of the GTX480 and GTX470: GTX480 – 756MHz GPU, 1512MHz shaders; GTX470 – 656MHz GPU, 1312MHz shaders. Hexus were also able to show that the actual observed power draw of the GTX465 is about 35W less than that of the GTX470 (a bigger gap than the 20W TDP difference suggests). |
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0 |
This looks like the best GTX470 deal in the UK at £285. The Palit's dual-fan system keeps it 10 degrees C cooler and 4dB quieter, and it's also about £14 less expensive than the next cheapest card, which uses the original NVidia design! My Asus card was, and still is, £37 more expensive :( This Inno3D GTX 470 Hawk is set to take temps down by as much as 20 deg C and 8dB. http://wccftech.com/2010/05/30/gainward-releases-geforce-gtx-470-gs-golden-sample/ |
|
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0 |
The GTX465 is an interesting beast: Anand reports that nVidia can use a range of voltages for their chips. They did this before, but it's just becoming more important with a huge chip like Fermi. To illustrate the point: Anand's 470 and 480 run at 0.96 V, whereas their 465 gets 1.025 V delivered. It may not sound like much, but it would result in a 14% power consumption increase (assuming the usual P ~ V^2 scaling). The result is that their 465 draws even a little more power than their 470! Other reviews show more of a difference, so I assume they got cards which are able to hit the target clock speed at lower voltages. -> If you're in for a GTX465 you could save yourself some money (15 - 30W) if you could get a card which runs at a lower voltage. Not sure how to find that out, though, except if you're buying used / from a private person. @SK: they didn't go for 512 shaders because they're still stocking up on them, until they get enough to finally launch such a card. And they probably didn't go for the smaller steps because they didn't want to confuse the consumer. Well, not confuse him with him noticing ;) And regarding smaller chips based on the Fermi architecture: the usual birds are singing that we should see the first one in about a month. They've been pretty accurate regarding the 465. Regarding a better 465: I could imagine them enabling one more shader cluster (as they did with the GTX260), but instead of a major redesign I'd rather expect the high-end model of the next smaller chip to take over this spot. It's probably "only" got 256 shaders, but should be able to clock them quite a bit higher (I'd expect 1.5+ GHz), and it should be more balanced for games, i.e. have more texturing units. MrS Scanning for our furry friends since Jan 2002 |
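The 14% figure follows from the usual dynamic-power rule of thumb P ∝ f·V²: at a fixed clock, power grows with the square of the core voltage. A quick check with the voltages Anand reported:

```python
def power_increase(v_new, v_old):
    """Relative dynamic-power increase at fixed clock, assuming P ~ V^2.

    This ignores leakage, which on a chip like Fermi adds further
    (superlinear) voltage dependence on top of the V^2 term.
    """
    return (v_new / v_old) ** 2 - 1

# Core voltages from the review cited above: 0.96 V (470/480) vs. 1.025 V (465)
inc = power_increase(1.025, 0.96)
print(f"expected power increase: {inc:.0%}")  # → 14%
```

Since leakage is ignored here, the real-world penalty of a high-voltage sample can be even larger, which fits the 15 - 30W spread between reviewed cards.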
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0 |
I think it may be an open design with this one: the builders will do what they like. The 200W TDP could be well out for any given card; green versions and black editions? Best to read the small print on the box for all the info. The GF100 with half the GPU disabled would not be competitive; too much leakage – better off in the bin! There are too many rumours about the GF104. I heard 384 shaders, 750MHz and 130W, which would do nicely doubled up for the fall, but then I also saw a Photoshopped image of a stretched GF100! |
©2025 Universitat Pompeu Fabra