Message boards : Graphics cards (GPUs) : GTX580 specifications
GDF | Joined: 14 Mar 07 | Posts: 1958 | Credit: 629,356 | RAC: 0
Not confirmed. http://vr-zone.com/articles/nvidia-geforce-gtx-580-specifications-leaked/10184.html

gdf
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
This may just be a paper launch to deflect attention from ATI's 6000 series cards, due out later in November. Even the change to the 500 series suggests they want people to think they have released a new series of card. I think it will be a while (January) until we see how these perform on the new ACEMD 3.2 app.

If the reportedly leaked specs (512 cores, 1544MHz shaders and 4Gbps data rate) are correct, it should be about 18% faster than a reference GTX480. It is also reported to be cooler and quieter than the original GTX480, and I will buy that story, as I noticed my second GTX470 was slightly cooler and quieter due to using lower voltages. That may have stemmed from improved crystallization, silicon refinement, engraving methods, energy mapping... They have had 9 months to improve manufacturing methods.

The suggestion seems to be that some factory OC versions may eventually appear, pushing those GTX580s to about 25% faster than the present GTX480. However, with limited or no availability and high prices these are well out of most people's reach. Fortunately there are other Fermis to choose from, and their prices are falling. A GTS455 should also appear on the Fermi list soon.
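For what it's worth, the ~18% figure can be sanity-checked from the leaked numbers alone, treating CUDA throughput as roughly shader count times shader clock. This is only a back-of-envelope sketch: the GTX480 reference figures (480 shaders at 1401MHz) are assumed, and memory bandwidth and architectural tweaks are ignored.

```python
# Rough throughput estimate from the leaked specs: shader count x shader clock.
# GTX480 reference figures (480 shaders @ 1401 MHz) are assumptions here;
# memory bandwidth and architectural changes are ignored.
def relative_speedup(shaders_new, clock_new_mhz, shaders_old, clock_old_mhz):
    """Fractional speedup predicted by raw shader throughput alone."""
    return (shaders_new * clock_new_mhz) / (shaders_old * clock_old_mhz) - 1.0

speedup = relative_speedup(512, 1544, 480, 1401)
print(f"Estimated GTX580 vs reference GTX480: +{speedup:.1%}")
```

That comes out just under 18%, which lines up with the quoted estimate.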
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
So far it looks like the same architecture (which is fine) with a general overhaul regarding manufacturing, improving speed and/or leakage (that's how they manage slightly higher clocks at slightly reduced power consumption) and reliability (otherwise they still couldn't provide a 512 SP part). TSMC's 40 nm process should be almost mature by now, and nVidia should have had enough experience to handle it. In my opinion these rumors look quite plausible.

MrS

Scanning for our furry friends since Jan 2002
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
GTX 580 specs:

- Full complement of GPU cores @ 700MHz
- 512 CUDA cores (shaders) @ 1400MHz
- 1.5GB of GDDR5 @ 4GHz
- 128 TMUs (needed)
- GF110
- 244W
- 290mm (11.5in) long
- An 8-pin and a 6-pin power connector

Release date 8th or 9th Nov; not that you will be able to get your hands on one, even if you do have a spare £500+.
http://vr-zone.com/articles/report-nvidia-geforce-gtx-580-tdp-is-244w-includes-128-tmu-benchmarks-leaked/10202.html

Also noticed this: ENGTX470/2DI/1280MD5/V2. I expect they will also release a GF110 based GTX470. That might explain why you can pick up a GTX470 for £190, and it makes sense; not every chip is going to make the GTX580 cut.
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
That looks like everything the GTX480 should have been :) And if nVidia wanted at least to try a little bit to put some consistency into their naming scheme, they'd call a GF110 based "GTX470" a GTX570. The savings in power consumption alone should justify it, let alone the improved gaming performance due to more TMUs. But then consistent naming is not exactly nVidia's strength.

MrS

Scanning for our furry friends since Jan 2002
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Techpowerup are giving these preliminary specs: 512 shaders, 48 ROPs, GF110, 1536MB, 384-bit, 3000M transistors (is this down from 3200M?), a 772MHz core (that's a bit more like it), pushing the shaders up to a modest 1544MHz, and a 1002MHz memory clock (not bad). I hope Techpowerup are closer to the mark.
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
They also said "about 3 billion" transistors for GF100 prior to the launch, so I wouldn't read more than "between 3.2 and 3.49 billion" into this. Apart from that and the clock speeds there's not much difference between these sources, or did I miss anything?

MrS

Scanning for our furry friends since Jan 2002
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
The "gone" pre-review did say 3.0 billion transistors, and that they trimmed some leakage. It uses vapour-chamber heatplate technology, and GDDR5 that can run at up to 5000MHz. The card's power draw is limited to 300W, so Furmark and OCCT cannot take it past this, which might be an issue for high overclockers - hopefully not for crunching on GPUGrid.
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
The GTX580 is 15.4% faster for crunching CUDA tasks than the GTX480, as-is. £380 to £430.

Review by Ryan Smith: http://www.anandtech.com/show/4008/nvidias-geforce-gtx-580/16

There are overclocked versions available too.
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
The memory chips are probably at least as high-quality as the previously used ones. But the memory controller is the same, and that's what's limiting the OC here.

The throttling seems to be based purely on software and to be non-transparent, i.e. the card still reports normal clocks and voltages. A software implementation means that it's not going to work for unknown workloads (= BOINC), which is probably good for us. Judging by Anand's power consumption measurements, the throttling actually forces it below the power draw of heavy games (Crysis). Is it throttled down to its TDP of 250W while the game causes a higher load? I don't know yet, and I really dislike that nVidia is not doing this transparently or implementing a hardware protection for the VRMs like ATI did. That would actually benefit us, whereas the current solution only benefits nVidia's marketing.

Regarding transistor count: they improved z-culling and enhanced the TMU capabilities, and otherwise didn't change any logic that I know of. So transistor count should usually have gone up a little, not down. And I'm not going to give them the benefit of the doubt here: rest assured that if the transistor count was actually 3.0 billion, they'd say so rather than "3 billion" ;)

MrS

Scanning for our furry friends since Jan 2002
Joined: 18 Sep 08 | Posts: 368 | Credit: 4,174,624,885 | RAC: 0
I just started running a couple of the new GTX 580's here; they haven't finished any WUs yet though. The WUs already finished on that host are from a couple of GTX 460's that were in the box. They're EVGA GTX 580 Black Ops Editions with a stock clocking of 1.088 volts, 797 core, 1594 shader & 2025 memory. I have them running now @ 1.088 volts, 875 core, 1750 shader & 2150 memory. I could go higher with a voltage tweak, but they run hot enough already @ 80c-88c with about a 70%-75% fan speed on AUTO. I don't want to up the voltage or the fan speeds, so if they finish the WUs at those clock speeds that's fine with me. They're in an i7 920 box that's clocked to 3.80GHz on air, Enermax 850 85+ PSU... :)
Retvari Zoltan | Joined: 20 Jan 09 | Posts: 2380 | Credit: 16,897,957,044 | RAC: 0
It's great to have someone crunching with the new top GPU here so soon! A piece of advice: you should dedicate one CPU core to each GPU (using the SWAN_SYNC=0 environment setting) to achieve maximum performance from the GTX580s. Right now your GTX580s are performing a little worse than my overclocked GTX480s. Note that their temperatures will rise if you apply the settings I've suggested, and you may have to raise the fan speed to keep the GPUs stable.
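For anyone trying this: SWAN_SYNC has to be present in the environment of the BOINC client before it starts, so the acemd processes it spawns inherit it. A minimal sketch, assuming a client binary reachable as `boinc` (that name/path is an assumption; on Windows the variable is usually set system-wide instead):

```python
# Start a command with SWAN_SYNC=0 added to its environment, so child
# processes (e.g. GPUGrid's acemd) inherit the setting and dedicate one
# busy-waiting CPU core per GPU task. The "boinc" binary is an assumption.
import os
import subprocess

def launch_with_swan_sync(cmd=("boinc",)):
    """Launch `cmd` with SWAN_SYNC=0 in its environment; returns the Popen."""
    env = dict(os.environ)
    env["SWAN_SYNC"] = "0"
    return subprocess.Popen(list(cmd), env=env)
```

Usage would just be `launch_with_swan_sync()` in place of starting the client directly; only the environment differs.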
Joined: 18 Sep 08 | Posts: 368 | Credit: 4,174,624,885 | RAC: 0
I tried them but didn't like what I was seeing; my CPU temps jumped by 10c-13c, so that's a no-no. I turned HT back off but am still running SWAN_SYNC=0 to see what happens. Looks like each GPU is using 12%-25% of a CPU core when it needs to that way...
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Run the GPUs at reference frequencies (stock), and keep the fans high.
Joined: 18 Sep 08 | Posts: 368 | Credit: 4,174,624,885 | RAC: 0
> Run the GPUs at reference frequencies (stock), and keep the fans high.

The temps on the GPU didn't go up any; it was the CPU that took a jump. The i7 920's don't like anything over 4 cores, I've found, at least not at 3.80GHz anyway on air.
Beyond | Joined: 23 Nov 08 | Posts: 1112 | Credit: 6,162,416,256 | RAC: 0
> I tried them but didn't like what I was seeing; my CPU temps jumped by 10c-13c, so that's a no-no. I turned HT back off but am still running SWAN_SYNC=0 to see what happens. Looks like each GPU is using 12%-25% of a CPU core when it needs to that way...

This is interesting: a test of SWAN_SYNC on the very fastest NVidia GPU. Looking at your results, there are 2 WU types that completed with SWAN_SYNC both on and off. On one type the speedup was 8% and on the other 21%. It would be interesting to see results with SWAN_SYNC off and the priority boosted to High, or even just to Above Normal. The GPUGRID process can be boosted automatically with a program such as eFMer Priority 64: http://www.efmer.eu/boinc/download.html
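On a Unix host the same experiment could be approximated without a separate tool by renicing the running GPUGrid process. A rough sketch only (eFMer Priority is a Windows tool, and you'd have to find the acemd PID yourself; boosting priority, i.e. lowering the nice value, normally requires root):

```python
# Shift the scheduling priority (nice value) of a running process by PID.
# Lowering the nice value (a boost) normally needs root; raising it does not.
# Rough Unix-only analogue of what eFMer Priority does on Windows.
import os

def renice(pid, delta):
    """Adjust a process's nice value by `delta`; returns the new value."""
    current = os.getpriority(os.PRIO_PROCESS, pid)
    os.setpriority(os.PRIO_PROCESS, pid, current + delta)
    return os.getpriority(os.PRIO_PROCESS, pid)
```

Something like `renice(acemd_pid, -5)` (run as root, with `acemd_pid` found via `ps`) would roughly mirror an "Above Normal" boost.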
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
> The i7 920's don't like anything over 4 cores, I've found

Well... I'd say that's because HT actually increases hardware utilization. There's no free lunch here, though: throughput at the same clock speed increases, but you have to pay for it in terms of energy. Running GPU-Grid it's a little different, as the CPU is not directly doing any crunching; it's just asking the GPU "are you done yet?" all the time.

MrS

Scanning for our furry friends since Jan 2002
GDF | Joined: 14 Mar 07 | Posts: 1958 | Credit: 629,356 | RAC: 0
This is the fastest result ever seen on GPUGRID: 3.9 ms/step on the DHFR workunit: http://www.gpugrid.net/result.php?resultid=3326415

However, it should be able to get to 3.3 ms/step (just guessing, as we don't have any GTX580 yet). Maybe by boosting the priority as suggested.

gdf
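Treating time-per-step as inversely proportional to throughput, the gap between those two figures is easy to quantify (a back-of-envelope sketch, not a measurement):

```python
# How much speedup is still available if a lower ms/step target is reached.
# Assumes throughput is simply the inverse of time-per-step.
def headroom(current_ms_per_step, target_ms_per_step):
    """Fractional speedup implied by going from current to target step time."""
    return current_ms_per_step / target_ms_per_step - 1.0

print(f"3.9 -> 3.3 ms/step leaves about {headroom(3.9, 3.3):.0%} on the table")
```

So the guessed 3.3 ms/step would amount to roughly another 18% over the current best result.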
Joined: 18 Sep 08 | Posts: 368 | Credit: 4,174,624,885 | RAC: 0
Okay, I'll try some of the ideas out when I bring the 580 back over here to GPUGrid; right now I'm running the PrimeGrid Sieve WUs with it...
Joined: 26 Nov 10 | Posts: 9 | Credit: 13,246,151 | RAC: 0
Hello - I am new here with GPUGRID.net. I just finished my first WU via the 580; run time 9176 sec. I am not sure how good that is. The card is an EVGA SC. I have it clocked at 797/1594 currently, but the card runs stable at 900MHz at stock voltage with a bit warmer temps.
©2025 Universitat Pompeu Fabra