Fermi

Message boards : Graphics cards (GPUs) : Fermi
Message board moderation

To post messages, you must log in.

Previous · 1 · 2 · 3 · 4 · 5 · 6 · 7 · 8 . . . 16 · Next

AuthorMessage
Profile liveonc
Avatar

Send message
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwat
Message 15591 - Posted: 3 Mar 2010, 23:11:15 UTC - in response to Message 15588.  

The Internet was meant to show everybody naked, including nVidia. Don't blame people for being curious, or those with sight for not being blind. A German site I checked out yesterday used inches instead of cm and got me really confused. Either that or Google Translate didn't work... Isn't that worse? I sat in silence when I read that the Fermi would be 4.2 x 4.2 inches!
ID: 15591 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
Profile skgiven
Volunteer moderator
Volunteer tester
Avatar

Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 15593 - Posted: 3 Mar 2010, 23:21:25 UTC - in response to Message 15591.  

Is that 12 x 256MB RAM I see?
ID: 15593 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
Snow Crash

Send message
Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 15596 - Posted: 4 Mar 2010, 1:20:18 UTC - in response to Message 15593.  

I think it is 12 * 128 = 1536 (480) and 10 * 128 = 1280 (470)
Thanks - Steve
ID: 15596 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar

Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 15598 - Posted: 4 Mar 2010, 8:59:27 UTC - in response to Message 15591.  

Well, that's true... but didn't anyone consider the feelings of this poor chip? If he's not an exhibitionist he'll be seriously ashamed by now and will probably ask himself the same question over and over again: "why, oh why did they do it?" Like the Rumpelwichte in Ronja Räubertochter. No wonder he's too distracted to run any world records now!

MrS
Scanning for our furry friends since Jan 2002
ID: 15598 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
Profile liveonc
Avatar

Send message
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwat
Message 15599 - Posted: 4 Mar 2010, 9:15:59 UTC - in response to Message 15598.  

Did I see what I think I saw? On youtube?! Somehow I don't think it was a coincidence that the scene ended where it ended. I'm more a fan of stupid humor than sick humor. This is more me http://www.youtube.com/watch?v=VpZXhR1ibj8 Just pretend not to speak German, & it'll be like reading an Arabic translation on CNN ;-)
ID: 15599 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
Profile skgiven
Volunteer moderator
Volunteer tester
Avatar

Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 15607 - Posted: 4 Mar 2010, 19:17:14 UTC - in response to Message 15596.  
Last modified: 4 Mar 2010, 19:39:57 UTC

I think it is 12 * 128 = 1536 (480) and 10 * 128 = 1280 (470)


You may be talking about the shaders, which are on the main GPU (under the tin), or saying that there is 1.5GB DDR for the GeForce versions?
Anyway, I was referring to the 12 small dark chips surrounding the core; I think they are GDDR5 ECC RAM chips.

I just checked and Fermi will have up to 6GB GDDR5, so the 12 RAM chips could each be up to 512MB. If there are 3GB & 1.5GB versions as well, then they could use 256MB (or 128MB, as you said) chips. Mind you, if they are going to have 3GB or 1.5GB versions, perhaps they could leave chips off (6x512MB, or 3x512MB). The PCB would have to change a bit for that, but in turn it would reduce power consumption slightly.
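As a rough sketch of how those capacities could break down (the chip counts and per-chip densities below are guesses based on the numbers in this thread, not confirmed specs):

```python
# Hypothetical Fermi GDDR5 layouts: each chip has a 32-bit interface, so the
# chip count fixes the bus width. Densities here are speculative, not specs.
configs = [(12, 512), (12, 256), (12, 128), (6, 512), (3, 512)]
for chips, mb in configs:
    print(f"{chips} x {mb}MB = {chips * mb / 1024:g}GB, {chips * 32}-bit bus")
```

So a full 6GB card and a 1.5GB card could share the same 384-bit bus (12 chips), while dropping to 6 or 3 chips would narrow the bus as well.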


http://www.reghardware.co.uk/2009/10/01/nvidia_intros_fermi/
Explains why it will be good for double-precision.
ID: 15607 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
Snow Crash

Send message
Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 15609 - Posted: 4 Mar 2010, 20:12:23 UTC - in response to Message 15607.  

I was referring to memory ...
http://www.xtremesystems.org/forums/showthread.php?t=244211&page=66
http://en.wikipedia.org/wiki/Comparison_of_NVIDIA_graphics_processors#GeForce_400_Series
Thanks - Steve
ID: 15609 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
Profile skgiven
Volunteer moderator
Volunteer tester
Avatar

Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 15611 - Posted: 4 Mar 2010, 22:12:02 UTC - in response to Message 15609.  

Thanks Steve,
So the GTX 470 will have 448 shaders @ perhaps 1296MHz(?), the GPU will be at 625MHz and the GDDR RAM @ 1600MHz (3200MHz). It will use 2555MB RAM in total, and because it has 1280MB onboard, that tells us that the shaders depend directly on the RAM (and therefore bus width too) – disable a shader and you disable some RAM as well. I expect this means it has to use 12 RAM chips or disable accordingly?

Asus - an opportunity beckons!

The 220W TDP is a lot more attractive than the 300W!

Not too many will be able to accommodate two 512-shader cards, but two 448-shader cards is do-able (a good 750W PSU should do the trick).
ID: 15611 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar

Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 15625 - Posted: 5 Mar 2010, 23:59:19 UTC - in response to Message 15611.  

So the GTX 470 will have 448 shaders @ perhaps 1296MHz(?), the GPU will be at 625MHz


The "slow core" will run at 1/2 the shader clock.

It will use 2555MB RAM total and because it uses 1280MB onboard


Where is the other half of that 2.5 GB if it's not on board?

that tells us that the shaders depend directly on the RAM (and therefore bus width too) – disable a shader and you disable some RAM as well.


No - correlation does not imply causality ;)

I expect this means it has to use 12 RAM chips or disable accordingly?


For a full 384 bit bus - yes. Unless someone makes memory chips with a 64 bit interface rather than 32 bit.
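Working that relationship out explicitly (assuming standard 32-bit GDDR5 chips, and that the rumored GTX 470 specs hold):

```python
# GDDR5 chips expose a 32-bit interface, so the memory bus width is simply
# the chip count times 32 - which is why chip count and bus width move together.
def bus_width(num_chips: int, bits_per_chip: int = 32) -> int:
    return num_chips * bits_per_chip

assert bus_width(12) == 384  # full bus, 12 chips (GTX 480)
assert bus_width(10) == 320  # 10 chips, two positions unpopulated (rumored GTX 470)
```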

MrS
Scanning for our furry friends since Jan 2002
ID: 15625 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
Profile skgiven
Volunteer moderator
Volunteer tester
Avatar

Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 15639 - Posted: 7 Mar 2010, 15:31:12 UTC - in response to Message 15625.  

If a GTX480 will ship with 1.5GB RAM onboard the card, and can use 2.5GB, then it would need to be using 1GB system RAM to get to 2.5GB.

OK, so for a full 384 bit bus all 12 RAM chips are needed.
This would suggest that the bus for a GTX470 will actually be 320-bit, as two RAM chips will be missing (unless they are going to ship cards with RAM attached for no good reason).
ID: 15639 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar

Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 15644 - Posted: 7 Mar 2010, 19:13:58 UTC - in response to Message 15639.  

If a GTX480 will ship with 1.5GB RAM onboard the card, and can use 2.5GB, then it would need to be using 1GB system RAM to get to 2.5GB.


Adding the system RAM available to the card to the actual GPU memory is a bad habit OEMs have of posting bigger numbers. Any modern card can use lots of system memory, but it does not really matter how much, as it's too slow anyway. It's like saying your PC has 2 TB of RAM because you just plugged in that 2 TB Green HDD.

This would suggest that the bus for a GTX470 will actually be 320bit, as two RAM chips will be missing.


Yes. Technically it's the other way around: it has 10 chips because of the 320 bit bus, but never mind. (disclaimer: if the specs are correct)

MrS
Scanning for our furry friends since Jan 2002
ID: 15644 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
Profile skgiven
Volunteer moderator
Volunteer tester
Avatar

Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 15645 - Posted: 8 Mar 2010, 9:00:49 UTC - in response to Message 15644.  



Adding the system RAM available to the card to the actual GPU memory is a bad habit of the OEMs to post bigger numbers. Any modern card can use lots of system memory, but it does not really matter how much as it's too slow anyway. It's like saying your PC has 2 TB of RAM because you just plugged in that 2 TB Green HDD.

MrS


I know it is more of an advertising gimmick than a reality, but I think games still tend to get loaded into system RAM (not that I play games), rather than sit on a DVD or the hard drive.

The Green HDDs are better as a second drive ;) or if your system stays on most of the time. The last one I installed had 64MB of onboard cache.
ID: 15645 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar

Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 15650 - Posted: 8 Mar 2010, 22:34:20 UTC - in response to Message 15645.  

Sure, the game gets loaded into system memory - otherwise the CPU couldn't process anything. However, if the GPU has to swap to system memory, that's entirely different. It's a mapping of the private address space of the GPU into system memory, so logically it can access this memory just as if it were local. What it stores there is different from what's on the HDD and different from what the CPU processes, though. And as soon as the GPU has to use system memory, your frame rate / performance takes a huge hit on everything but the lowest of the low-end cards... that's why using system memory for the GPU is "practically forbidden". Consider this: on high-end GPUs we've got 100 - 150 GB/s of bandwidth, whereas an i7 with 3 DDR3 channels can deliver ~15 GB/s - and even that is more than would be available over PCIe x16.
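To put rough numbers on that gap (the GPU and DDR3 figures are from this post; the ~8 GB/s nominal one-way rate for PCIe 2.0 x16 is my addition):

```python
# Rough bandwidth comparison showing why GPU swapping to system memory hurts.
gpu_local = 120.0   # GB/s, midpoint of the 100-150 GB/s quoted for high-end GPUs
ddr3_triple = 15.0  # GB/s, i7 with three DDR3 channels
pcie2_x16 = 8.0     # GB/s, nominal PCIe 2.0 x16 in one direction (my assumption)

print(f"Local VRAM is ~{gpu_local / pcie2_x16:.0f}x faster than the PCIe link")
print(f"Even system RAM itself only offers ~{gpu_local / ddr3_triple:.0f}x less than VRAM")
```

Either way the GPU would be starved by an order of magnitude, which is the point made above.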

MrS
Scanning for our furry friends since Jan 2002
ID: 15650 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
Profile skgiven
Volunteer moderator
Volunteer tester
Avatar

Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 15651 - Posted: 8 Mar 2010, 23:42:52 UTC - in response to Message 15650.  

I see your point - that's roughly 3 times what present PCIe (2.0 & 2.1) offers. I don't think PCIe 3 would make too much difference either!
ID: 15651 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
Profile liveonc
Avatar

Send message
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwat
Message 15749 - Posted: 14 Mar 2010, 4:38:19 UTC
Last modified: 14 Mar 2010, 4:52:12 UTC

ID: 15749 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar

Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 15753 - Posted: 14 Mar 2010, 11:35:19 UTC

Looks like the fan on the 470 could use a diameter boost.

MrS
Scanning for our furry friends since Jan 2002
ID: 15753 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
Profile liveonc
Avatar

Send message
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwat
Message 15841 - Posted: 19 Mar 2010, 22:25:11 UTC
Last modified: 19 Mar 2010, 22:26:31 UTC

Might not be such a flop. http://www.fudzilla.com/content/view/18147/65/ The GeForce GTX 480 has 480 stream processors, runs at 700MHz for the core and 1401MHz for the shaders, and features 1536MB of memory that works at 1848MHz, paired up with a 384-bit memory interface.
If they can manage an approx. 80% improvement in performance, e.g. 9800GTX+ vs GTX280 http://www.tomshardware.com/charts/gaming-graphics-cards-charts-2009-high-quality-update-3/Sum-of-FPS-Benchmarks-1920x1200,1702.html Sum of FPS Benchmarks 1920x1200
then the GTX480 "could" score 315FPS, which is what the ATI Radeon HD 5970 scores. It's too rich for me though; I'm going to wait for a GTX460-SP432 (whenever that comes out)...
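As a back-of-envelope check of that projection (the 175 FPS single-GPU baseline is back-calculated from the quoted 315 FPS target, not a figure taken from the Tom's Hardware charts):

```python
# Apply the ~80% generational gain (9800GTX+ -> GTX280) to a current
# single-GPU baseline; the baseline FPS here is inferred, not measured.
baseline_fps = 175.0  # assumed previous-generation score at 1920x1200
gain = 1.80           # ~80% generational improvement
projected = baseline_fps * gain
print(f"Projected GTX 480: {projected:.0f} FPS")  # ~315, the HD 5970's score
```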
ID: 15841 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
Profile Zydor

Send message
Joined: 8 Feb 09
Posts: 252
Credit: 1,309,451
RAC: 0
Level
Ala
Scientific publications
watwatwatwat
Message 15851 - Posted: 20 Mar 2010, 10:09:41 UTC - in response to Message 15841.  
Last modified: 20 Mar 2010, 10:13:47 UTC

It's already clear that a 5970 will outperform a 480 in its current incarnation, and that's without letting the 5970 loose to its full legs - the latter has another 25-30% performance in it above stock. One of the 480's problems is power draw, and it could be interesting to see how they get round that issue when going for a Fermi x 2 next year.

For now though, it's looking more like a question of whether or not they have produced a card that gives sufficient real-world advantage over a 295 to be commercially viable. The pricing will be constrained by the clear real-world lead of the 5970 (a lead it will have for at least 12 months, and that's without thinking about a 5970 successor), the de facto 480 floor price set by 295s and 5870s, and Fermi's voracious power needs. The real battle will be next year with a re-engineered Fermi 2 and whatever the 5970 successor looks like.

Meanwhile I personally find myself caught waiting; I am in the market to replace my stalwart 9800GTX+. The only reason I am still waiting on what the reality of Fermi turns out to be is the need to run CUDA apps. In my case it's the new GPGrid project about to start up; for that I need CUDA, and I am reluctant to go down the road of a 295. I tend to buy a card and use it for years, jumping generations. Without that need for CUDA, that box would be ATI already. So in a sense, another saving grace for NVidia is the CUDA dimension. Many people/organisations out there in the real commercial world will have far more serious CUDA needs than me - outside of benchmarks and ego trips - but the motivation will be similar: the need to run CUDA (for now).

NVidia have no chance of overtaking ATI on this release, but I hope they have got a sufficiently enticing real-world performance/price improvement for Fermi 1 to be seen as a worthy 295 successor.

Regards
Zy
ID: 15851 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist

Send message
Joined: 14 Mar 07
Posts: 1958
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 15852 - Posted: 20 Mar 2010, 10:45:07 UTC - in response to Message 15851.  
Last modified: 20 Mar 2010, 10:45:34 UTC

I don't think that it is as clear as you depict it right now. It is likely that Fermi will be a factor of two faster than a GTX 285 as far as GPUGRID is concerned. For gaming, I would think that the 480 will be the fastest single-GPU card out there. Let's wait another week and see.

gdf

It's already clear that a 5970 will outperform a 480 in its current incarnation, and that's without letting the 5970 loose to its full legs - the latter has another 25-30% performance in it above stock. One of the 480's problems is power draw, and it could be interesting to see how they get round that issue when going for a Fermi x 2 next year.

For now though, it's looking more like a question of whether or not they have produced a card that gives sufficient real-world advantage over a 295 to be commercially viable. The pricing will be constrained by the clear real-world lead of the 5970 (a lead it will have for at least 12 months, and that's without thinking about a 5970 successor), the de facto 480 floor price set by 295s and 5870s, and Fermi's voracious power needs. The real battle will be next year with a re-engineered Fermi 2 and whatever the 5970 successor looks like.

Meanwhile I personally find myself caught waiting; I am in the market to replace my stalwart 9800GTX+. The only reason I am still waiting on what the reality of Fermi turns out to be is the need to run CUDA apps. In my case it's the new GPGrid project about to start up; for that I need CUDA, and I am reluctant to go down the road of a 295. I tend to buy a card and use it for years, jumping generations. Without that need for CUDA, that box would be ATI already. So in a sense, another saving grace for NVidia is the CUDA dimension. Many people/organisations out there in the real commercial world will have far more serious CUDA needs than me - outside of benchmarks and ego trips - but the motivation will be similar: the need to run CUDA (for now).

NVidia have no chance of overtaking ATI on this release, but I hope they have got a sufficiently enticing real-world performance/price improvement for Fermi 1 to be seen as a worthy 295 successor.

Regards
Zy
ID: 15852 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar

Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 15853 - Posted: 20 Mar 2010, 10:55:13 UTC - in response to Message 15841.  

That's certainly an improvement compared to the earlier rumors! And especially the price of the GTX470. That could bring the HD5870 down to the $300 it was supposed to cost half a year ago :p

MrS
Scanning for our furry friends since Jan 2002
ID: 15853 · Rating: 0 · rate: Rate + / Rate - Report as offensive     Reply Quote
Previous · 1 · 2 · 3 · 4 · 5 · 6 · 7 · 8 . . . 16 · Next

Message boards : Graphics cards (GPUs) : Fermi

©2025 Universitat Pompeu Fabra