Fermi

liveonc
Message 15905 - Posted: 22 Mar 2010, 11:33:20 UTC - in response to Message 15904.  
Last modified: 22 Mar 2010, 12:01:31 UTC



The Adventure is a PCIe expansion board based on a derivative of the HYDRA 100 ASIC (LT12102). The system allows easy, high-speed connection to any PC or Workstation platform using the PCIe bus. The system is optimized for GPUs and provides superb connectivity to different PCIe devices such as SSDs, HDs and other peripherals. The board has four PCIe slots and is powered by a standard PC power supply. The Adventure is best for multi-display, broadcasting, digital signage and storage solutions. The Adventure board is typically deployed within a 4U rack-mount case, together with a standard power supply, and offers up to four slots.

That's why I'm interested in a Lucid Adventure! A 1000W PSU and x2 GTX480s for every PCIe slot I've got in my PC. GPUGRID runs WUs that use 1 core per GPU, so that would require a Core i9, and then we're crunching!

MJH
Message 15906 - Posted: 22 Mar 2010, 11:43:00 UTC - in response to Message 15904.  

It's actually a rebranded Tyan FT72B7015 http://www.tyan.com/product_SKU_spec.aspx?ProductType=BB&pid=412&SKU=600000150. Difficult to source though.

For you keen GPUGRID crunchers, it might be better to find the minimum-cost host system for a GPU. CPU performance isn't an issue for our apps, so a very cheap motherboard-processor combination would do, perhaps in a wee box like this one: http://www.scan.co.uk/Product.aspx?WebProductId=982116

MJH

robertmiles
Message 15907 - Posted: 22 Mar 2010, 12:21:09 UTC - in response to Message 15884.  

I "guess", that it won't just be Workstation & Mainstream GPU's, Compute deserves it's own line. Not now, but soon...

And if ATI tried to build a Fermi-1-killer (i.e. be significantly faster, not "just" more economical at a comparable performance level) they'd run into the same problems Fermi faces. They'd have more experience with the 40 nm process, but they couldn't avoid the cost / heat / clock speed problems. The trick is not to try to build the biggest chip.


If Nvidia concentrates on Compute with Fermi & redoes Mainstream/Workstation... they're already ahead with Compute...

I'm also "curious" to when a GPU can do without the CPU. Is an Nvidia a RISC or a CISC? Would Linux be able to run a GPU only PC?


Is the Compute market big enough for Nvidia to earn enough on it without a very large reduction in their sales?

If I understand the GPU architectures correctly, they do NOT include the capability of reaching memory or peripherals beyond the graphics board. Therefore, they cannot reach any BOINC projects on their own.

robertmiles
Message 15909 - Posted: 22 Mar 2010, 12:33:47 UTC - in response to Message 15886.  

I finally found what I was looking for! http://www.lucidlogix.com/product-adventure2000.html One of these babies can put that nasty Fermi outside the case & give me 2 PCIe x16 or 4 PCIe x8 for every one PCIe slot on my mobo. If a cheap Atom is enough, one of these on the mobo would make it possible to use 2-4 high-end CUDA GPUs to play crunchbox: http://www.lucidlogix.com/products_hydra200.html But I don't make PCs, I buy them. So maybe if the price ain't bad, & I can find out where I can get an Adventure 2000, I'd be able to run an external multi-Fermi GPU box...

Maybe if VIA supplies the CPU, Nvidia can do the rest, with or w/o Lucid.


Looks like a good idea, if GPUGRID decides to rewrite their application to require less communication between the CPU section and the GPU section.

liveonc
Message 15911 - Posted: 22 Mar 2010, 13:50:57 UTC - in response to Message 15907.  
Last modified: 22 Mar 2010, 14:45:17 UTC

Are you 100% sure about this? Lucid claims that:

The Adventure is a powerful PCIe Gen2.0 expansion board for heavy graphics computing environments. The platform allows connection of multiple PCIe based devices to a standard PC or server rack. The Adventure 2000 series is based on different derivatives of the HYDRA 200 series, targeting a wide range of performance market segments. It provides a solution for a wide range of applications such as gaming (driver required), GPGPU, high performance computing, mass storage, multi-display, digital and medical imaging.

Adventure 2500
The Adventure 2500 platform is based on Lucid’s LT24102 SoC, connecting up to 4 heterogeneous PCIe devices to a standard PC or Workstation using a standard PCIe extension cable. The Adventure 2500 overcomes the limitations of hosting high-end PCIe devices inside the Workstation/PC itself. Those limitations are usually associated with space, power, number of PCIe ports and cooling.
The solution is seamless to the application and GPU vendor, in order to meet the needs of various computing and storage markets.


That said, what would be required is:

x1 Lucid Adventure 2000 http://www.lucidlogix.com/product-adventure2000.html

x1 4U rack mount

x1 850W-1000W PSU

x2 GTX480

With that in place, you're supposed to just connect the thing to one of your PCIe slots within your PC. That's "maybe" $2000 a pop...

I'm also thinking that if x8 PCIe 2.0 doesn't affect performance, x4 GTX480s & maybe a 1500W PSU might be possible for "maybe" $3000 a pop...

skgiven
Message 15917 - Posted: 22 Mar 2010, 15:26:22 UTC - in response to Message 15905.  
Last modified: 22 Mar 2010, 15:27:25 UTC

GPUGRID runs WUs that use 1 core per GPU, so that would require a Core i9, and then we're crunching!


That’s not the case:
GPUGrid tasks do not require the CPU component of a process to be explicitly associated with one CPU core (or logical core in the case of the i7). So a low-cost single-core CPU could support two (present) high-end GPUs!

As quad-core i7’s use Hyperthreading, they have 8 logical cores. So even if each GPUGrid task were exclusively associated with one core, an i7 could support 8 GPUs! Remember: the faster the CPU, the less CPU time required!
I normally crunch CPU tasks on my i7-920, leaving one logical core free for my two GT240s. Overall my system only uses about 91% of the CPU, so bigger cards would be fine!

As my two GT240s (equivalent to one GTX260 sp216) only use 3.5% of my CPU, a GTX295 would use about 7% and two GTX295s about 14%.
Therefore two GTX480s would use about 33%, so an i7-920 could support 6 Fermi cards crunching on GPUGrid!
Most people would be better off with a highly clocked dual-core CPU (3.33GHz) than, say, a Q6600 (at only 2.4GHz), or they could just overclock and leave a core or two free.

PS. There is no i9; Intel ended up calling it the i7-980X. But fortunately you don’t need it, as it costs £855. Better to build a dual-GTX470 system for that.
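
To make the scaling explicit, here is the same arithmetic as a quick Python sketch; the relative speeds are the estimates above (the ~4.7x figure for a GTX480 versus the GT240 pair is implied by the 33% figure, not measured):

[code]
# Scaling from one measured data point: two GT240s (~ one GTX260 sp216)
# use ~3.5% of an i7-920's CPU. Relative speeds below are estimates.
BASELINE_CPU_PCT = 3.5        # measured: two GT240s on an i7-920

relative_speed = {            # vs. the GT240 pair (~ GTX260 sp216)
    "GTX295": 2.0,            # -> ~7% CPU per card
    "GTX480": 4.7,            # -> ~16.5% CPU per card (implied above)
}

def cpu_pct(card, count=1):
    """Estimated CPU % consumed by `count` cards of this model."""
    return BASELINE_CPU_PCT * relative_speed[card] * count

for card, n in [("GTX295", 2), ("GTX480", 2), ("GTX480", 6)]:
    print(f"{n} x {card}: ~{cpu_pct(card, n):.0f}% of an i7-920")
# -> 2 x GTX295: ~14%, 2 x GTX480: ~33%, 6 x GTX480: ~99%
[/code]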

liveonc
Message 15918 - Posted: 22 Mar 2010, 15:38:28 UTC - in response to Message 15917.  
Last modified: 22 Mar 2010, 15:47:36 UTC

But can you answer whether it's robertmiles or Lucid who's right? Is it possible to build an "affordable" supercomputer consisting of 4x 4U rack mounts, one for a single- or dual-CPU i7 & 3 holding 2-4 GTX GPUs each? If my "guess" is right, such a system would cost $8,000-12,000, & nobody says you have to use it for crunching GPUGRID.net projects...

Do you know if there's still prize money going to whoever finds the largest prime number? If so, could CUDA help find it, if putting together that "affordable" supercomputer is possible?

Snow Crash
Message 15919 - Posted: 22 Mar 2010, 16:01:44 UTC - in response to Message 15918.  

On the list of things to investigate to get your supercomputer project off the ground, I would suggest looking into how many GPUs in one system the NVidia drivers will properly support. Are there OS limits? How about motherboard BIOS limits?

Talk to the people with multiple GTX295 cards and you will see they had to do unconventional things regarding drivers and BIOS.

skgiven ... I think the current Linux version of the GPUGRID app uses a full CPU core per GPU. That's probably the single biggest reason I have not tried Linux yet. I have seen some fast runtimes, which always interest me, but I am just not willing to take that much away from WCG.
Thanks - Steve

liveonc
Message 15920 - Posted: 22 Mar 2010, 16:34:46 UTC - in response to Message 15919.  
Last modified: 22 Mar 2010, 16:36:43 UTC

It's still pretty raw, but this is an article I found from PC Perspective: http://www.pcper.com/article.php?aid=815&type=expert&pid=1

They showed & tested the Hydra 200 & the Adventure. There was a mention about folding@home.

skgiven
Message 15923 - Posted: 22 Mar 2010, 20:14:11 UTC - in response to Message 15919.  

Snow Crash, you're right. 6.04 is still using over 90% CPU time on Linux, but it saves 50min per task, which means 5h rather than 5h 50min, or about 16.7% better in terms of results/points.
My GTX 260 brings back about 24500 points per day, so under Linux I would get about 28500 per day (~4000 more). The quad CPU on my system only gets around 1300 BOINC points (my GTX 260 does almost 20 times the work). So using Linux I would get about 2600 more points per day on that one system, assuming I could not run any other CPU tasks.
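
For anyone checking the arithmetic, here it is spelled out in Python; all figures are the ones quoted above, and the worst case assumes the CPU points are lost entirely:

[code]
# Linux vs. Windows trade-off, using the figures quoted above.
win_task_h = 5 + 50/60        # 5h 50min per task on Windows
lin_task_h = 5.0              # 5h per task on Linux
gpu_day_win = 24500           # GTX260 points/day under Windows
cpu_day = 1300                # points/day from CPU tasks on the quad

speedup = win_task_h / lin_task_h      # ~1.167 -> ~16.7% more results
gpu_day_lin = gpu_day_win * speedup    # ~28,600 points/day

# Worst case: the Linux app eats a full core, so all CPU points are lost.
net_gain = gpu_day_lin - gpu_day_win - cpu_day
print(f"throughput +{speedup - 1:.1%}, net gain ~{net_gain:.0f} points/day")
# -> throughput +16.7%, net gain ~2783 points/day (close to the ~2600
#    above, which used a rounded 28,500 for the Linux rate)
[/code]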

Liveonc, I would say you really have to talk to someone who has one to know what it can do, if the Lucid Hydra is even available yet. You would also, as Snow Crash said, need to look into the GPU drivers, other software (the application), and especially the motherboard's limitations. You would also need to be very sure what you want it for: crunching, gaming, rendering or display.

I guess a direct PCIe cable would allow the whole GPU box to function 'basically' like a single GPU device; the Hydra is essentially an unmanaged Layer 1&2 GPU switch. The techs here might be able to tell you if it could at least theoretically work.
Although I design and build bespoke systems and servers, this is all new and rather expensive kit. It is a bit of a niche technology, so it will be expensive and of limited use. That said, if it could be used for GPUGrid, I am sure the techs would be very interested in it, as it would allow them to run their own experiments internally and develop new research techniques.

For most, one Fermi in a system will be plenty! Two will be for the real hard-core enthusiast or gamer. For the rare motherboards that might actually support 3 Fermis, you are really looking at a 1200W PSU (or 2 PSUs) in a very well ventilated tower system (£2000+).

liveonc
Message 15924 - Posted: 22 Mar 2010, 20:50:03 UTC - in response to Message 15923.  
Last modified: 22 Mar 2010, 21:02:15 UTC

Sorry for asking too many questions that can't really be answered unless you actually had the thing; the potential of the Hydra just fascinates me, and not only for use with GPUGRID.net, since I myself "sometimes" like to play games on my PC. The facts that most of the GPUs used on GPUGRID.net are mainstream GPUs, and that Hydra allows mixing Nvidia with ATI cards and Nvidia & ATI GPUs of different types, brought to mind someone in another thread who looked at a GTX275 with a GTS250 physics card. Hydra "might" allow mixing different GPU chips on the same card, and "might" also enable dual-GPU-chip cards with, for example, a Cypress & a Fermi, instead of putting the Hydra on the mainboard or going external.

Also, Nvidia abandoned the idea of Hybrid SLI (GeForce® Boost) & decided to go with NVIDIA Optimus instead. If notebooks had a Hydra, they "might" be able to use both integrated & discrete graphics, instead of just switching between the two.

skgiven
Message 15926 - Posted: 22 Mar 2010, 21:50:24 UTC - in response to Message 15924.  
Last modified: 22 Mar 2010, 21:53:55 UTC

Hydra is very interesting; it has unknown potential ;) I am sure many event organisers (DJs) would love to see it in a laptop, or a box that could be attached to a laptop, for performances! It could even become a must-have for ATI and NVidia alike when demonstrating new GPUs to audiences.


Back to Fermi.

I was thinking about the difference between DDR3 and GDDR5. In itself it should mean about a 20 to 25% difference. So with improved Fermi architecture (20%) and slightly better clocks (~10%), I think the GTX480 will be at least 2.6 times as fast as a GTX295 but perhaps 2.8 times as fast.

I think they are keeping any chips with 512 working shaders aside for monster dual cards. Although the clocks are not likely to be as high, a card that could do 4.5 times the work of a GTX295 would be a turn-up for the books.
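
To show where these numbers come from, here is the estimate as a Python sketch; the 2x raw-throughput baseline over a GTX295 and the ~80% clocks for a dual card are assumptions, the other factors are the ones above:

[code]
# Back-of-the-envelope estimate. RAW_BASELINE is an assumption, not a
# measurement: GTX480 vs GTX295 raw shader throughput before the
# architecture/clock/memory factors quoted above are applied.
RAW_BASELINE = 2.0
ARCH = 1.20                    # "improved Fermi architecture (20%)"
CLOCK = 1.10                   # "slightly better clocks (~10%)"

low = RAW_BASELINE * ARCH * CLOCK        # ~2.64x -> "at least 2.6"
# Using the upper end of the 20-25% GDDR5-vs-DDR3 figure instead:
high = RAW_BASELINE * 1.25 * CLOCK       # ~2.75x -> "perhaps 2.8"
print(f"GTX480 vs GTX295: {low:.2f}x to {high:.2f}x")

# Hypothetical dual card: two 512-shader chips at ~80% of the clocks.
dual = low * 2 * (512 / 480) * 0.80      # ~4.5x a GTX295
print(f"monster dual card: ~{dual:.1f}x")
[/code]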

Should any lesser version of Fermi turn up with DDR3, for whatever reason, avoid it at all costs!

ExtraTerrestrial Apes
Message 15927 - Posted: 22 Mar 2010, 22:36:53 UTC

Hydra: it's not there yet for games; they still have to sort out the software side. Load balancing similar GPUs is difficult enough (see SLI and Crossfire), but with very different GPUs it becomes even more challenging. They'd also have to teach their software how crunching with Hydra should work.

Back to Fermi: there won't be any DDR3 GF100 chips; that would be totally insane. Really bad publicity, and still a very expensive board which hardly anyone would buy. Manufacturers play these tricks a lot with low-end cards (which are mostly bought by people who don't know / care about performance), and sometimes mid-range cards.

I was thinking about the difference between DDR3 and GDDR5. In itself it should mean about a 20 to 25% difference.


No, it doesn't. If you take a well-balanced chip (anything else doesn't get out the door at the mid-to-high-end range anyway) and increase its raw power by a factor of 2, you'll get anything between 0 and 100% speedup, depending on the application. In games, probably more like 30 - 70%.
If you double raw power and double memory bandwidth (and keep relative latency constant), then you'll see a 100% speedup across all applications. The point is: higher memory speed doesn't make you faster by definition, because memory bandwidth requirements scale with performance.
So if, for example, Fermi got 3 times as much raw crunching power as GT200 but only 2 times the bandwidth, bandwidth-limited code is not going to see the full speedup, regardless of whether the memory is GDDR5 or anything else.

However, Fermi is more than just an increase in raw power: there's also the new caching system, which should alleviate the need for memory bandwidth somewhat but doesn't change the fundamental issue.
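
Here is a toy Python model of that argument; the linear blend between the compute-bound and bandwidth-bound limits is a deliberate simplification, not how real kernels behave:

[code]
# Toy model: extra raw power only helps to the extent the memory
# subsystem keeps up, so the bandwidth ratio caps memory-bound code.
def effective_speedup(compute_ratio, bandwidth_ratio, mem_bound_frac):
    """Blend a compute-bound limit with a bandwidth-bound cap.
    mem_bound_frac = 0.0 means a purely compute-bound workload."""
    capped = min(compute_ratio, bandwidth_ratio)
    return (1 - mem_bound_frac) * compute_ratio + mem_bound_frac * capped

# The hypothetical case above: 3x raw power, only 2x bandwidth.
for frac in (0.0, 0.5, 1.0):
    s = effective_speedup(3.0, 2.0, frac)
    print(f"memory-bound fraction {frac:.0%}: {s:.2f}x speedup")
# -> 3.00x, 2.50x, 2.00x: faster memory only matters where it moves the cap.
[/code]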

MrS
Scanning for our furry friends since Jan 2002

CTAPbIi
Message 15928 - Posted: 23 Mar 2010, 1:54:38 UTC - in response to Message 15926.  
Last modified: 23 Mar 2010, 1:59:42 UTC

I think the GTX480 will be at least 2.6 times as fast as a GTX295 but perhaps 2.8 times as fast.

I think they are keeping any chips with 512 working shaders aside for monster dual cards. Although the clocks are not likely to be as high, a card that could do 4.5 times the work of a GTX295 would be a turn-up for the books.


Where did you get this from? AFAIK, from various forums, here are some "facts":
- the GTX480 and GTX470 are faster than the GTX285 by 40% and 25% respectively
- the GTX480 is slower than the 5870 in Vantage
- overclocking headroom is really poor, and at the same time it makes the card "critically hot"
- VERY noisy
- if there are enough good 512-core chips, they will be for Tesla only
- Fermi 2 will come no earlier than summer 2011, most probably fall 2011, even if they start redesigning today
- there are plans for a GTX495. BUT: it will be based on the 275 chip (retail name GTX470, with 448 cores), not on the 375 (GTX480, 480 cores). Another problem is PCI certification (300W), which will be really hard to meet. Power consumption is going to be "mind blowing".

Sure, let's wait a week and see whether all of this is true or not.

robertmiles
Message 15929 - Posted: 23 Mar 2010, 7:07:24 UTC - in response to Message 15911.  
Last modified: 23 Mar 2010, 7:18:31 UTC

Are you 100% sure about this? Lucid claims that:

The Adventure is a powerful PCIe Gen2.0 expansion board for heavy graphics computing environments. The platform allows connection of multiple PCIe based devices to a standard PC or server rack. The Adventure 2000 series is based on different derivatives of the HYDRA 200 series, targeting a wide range of performance market segments. It provides a solution for a wide range of applications such as gaming (driver required), GPGPU, high performance computing, mass storage, multi-display, digital and medical imaging.

Adventure 2500
The Adventure 2500 platform is based on Lucid’s LT24102 SoC, connecting up to 4 heterogeneous PCIe devices to a standard PC or Workstation using a standard PCIe extension cable. The Adventure 2500 overcomes the limitations of hosting high-end PCIe devices inside the Workstation/PC itself. Those limitations are usually associated with space, power, number of PCIe ports and cooling.
The solution is seamless to the application and GPU vendor, in order to meet the needs of various computing and storage markets.


That said, what would be required is:

x1 Lucid Adventure 2000 http://www.lucidlogix.com/product-adventure2000.html

x1 4U rack mount

x1 850W-1000W PSU
x2 GTX480

With that in place, you're supposed to just connect the thing to one of your PCIe slots within your PC. That's "maybe" $2000 a pop...

I'm also thinking that if x8 PCIe 2.0 doesn't affect performance, x4 GTX480s & maybe a 1500W PSU might be possible for "maybe" $3000 a pop...


Is who 100% sure about what?

Looks like the GPUGRID project scientists and programmers need to say more about just how fast the CPU-GPU communication needs to be, and how powerful a CPU is needed to keep up with two or four GTX480s.

GDF (project scientist)
Message 15930 - Posted: 23 Mar 2010, 8:43:29 UTC - in response to Message 15929.  



Are you 100% sure about this? Lucid claims that:


Looks like the GPUGRID project scientists and programmers need to say more about just how fast the CPU-GPU communication needs to be, and how powerful a CPU is needed to keep up with two or four GTX480s.


It does not matter at all. The code always runs on the GPU; only the I/O involves the CPU.
gdf

liveonc
Message 15932 - Posted: 23 Mar 2010, 8:56:15 UTC - in response to Message 15930.  
Last modified: 23 Mar 2010, 9:07:50 UTC

Well, I "guess" that maybe after Lucid sorts itself out, it "might" just be the One Chip to rule them all, One Chip to find them, One Chip to bring them all and in the darkness bind them In the Land of Mordor where the Shadows lie. ;-) With that said, I hope the Orcs get it before the Hobits destroys it. BTW, I read somewhere that Intel has a stake in Lucid.

skgiven
Message 15935 - Posted: 23 Mar 2010, 11:12:06 UTC - in response to Message 15932.  

I bet it's a pointy stake with a lawyer behind it.
ATI and NVidia will hardly be bending over backwards to make their kit work with Hydra, then!

ExtraTerrestrial Apes
Message 15948 - Posted: 23 Mar 2010, 21:53:33 UTC - in response to Message 15928.  

AFAIK, from various forums, here are some "facts":
- the GTX480 and GTX470 are faster than the GTX285 by 40% and 25% respectively
...
- if there are enough good 512-core chips, they will be for Tesla only


I second that: reserving fully functional chips for ultra-expensive Teslas is a smart move (if you don't have many of them ;)

Regarding the performance estimate: I don't doubt someone measured this with some immature hardware and driver, but I don't think it's going to be the final result. Consider this:

When ATI went from DX10 to DX11, they required 1.10 times as many transistors per shader (2200/1600 : 1000/800) and got 11% less performance per shader per clock (see 4870 1 GB vs. 5830).

If the 40% for GTX480 versus GTX285 were true, that would mean nVidia spent 1.07 times as many transistors per shader (3200/512 : 1400/240) and got 0.74 times as much performance per shader per clock, i.e. about 26% less [140%/(480*1400) : 100%/(240*1476)]. nVidia's strength so far has been design, so I don't think they'd screw this one up so badly, especially since they have already screwed up manufacturing and dimensioning.
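
For readers puzzling over the bracketed ratios, here is the same arithmetic in Python, using only the numbers quoted above:

[code]
# ATI DX10 -> DX11: transistors per shader (2200M/1600 vs 1000M/800).
ati_transistors = (2200 / 1600) / (1000 / 800)   # ~1.10x

# nVidia GT200 -> GF100, taking the rumoured +40% at face value.
nv_transistors = (3200 / 512) / (1400 / 240)     # ~1.07x
# Performance per (shader x clock): 140% over 480 shaders @ 1400 MHz
# vs. 100% over 240 shaders @ 1476 MHz.
nv_perf = (1.40 / (480 * 1400)) / (1.00 / (240 * 1476))

print(f"ATI transistors/shader: {ati_transistors:.2f}x")
print(f"nVidia transistors/shader: {nv_transistors:.2f}x, "
      f"perf/shader/clock: {nv_perf:.2f}x")      # ~0.74x, i.e. ~26% less
[/code]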

MrS
Scanning for our furry friends since Jan 2002

CTAPbIi
Message 15957 - Posted: 24 Mar 2010, 20:22:14 UTC - in response to Message 15948.  


Regarding the performance estimate: I don't doubt someone measured this with some immature hardware and driver, but I don't think it's going to be the final result.
MrS


I understand what you're talking about. But remember: this time Nvidia changed the "dispatching scheme", and now they are sending cards and drivers out directly themselves, rather than through the manufacturers as before. In fact, Nvidia has already sent cards and the latest driver to some reporters close to them, and these results came from one of them.

Frankly, I personally expected, and Nvidia promised, much better performance and frequencies, so if this is true I'm going to stay with my GTX275; maybe I'll try to get a second one cheap. But no Fermi for me until Fermi 2 is available, sorry…

