NVidia GPU Card comparisons in GFLOPS peak
skgiven (Joined: 23 Apr 09, Posts: 3968, Credit: 1,995,359,260, RAC: 0)
For those interested in buying a CUDA card, or in adding one to a GPU project, I collected some reported BOINC GPU ratings, added some I tested myself, and created a BOINC GFLOPS performance list. Note: these should hopefully ALL be natively clocked scores!

CUDA card list with BOINC ratings in GFLOPS

The following are mostly compute capability 1.1:

GeForce 8400 GS PCI 256MB, est. 4GFLOPS
GeForce 8400 GS PCIe 256MB, est. 5GFLOPS
GeForce 8500 GT 512MB, est. 5GFLOPS
Quadro NVS 290 256MB, est. 5GFLOPS
GeForce 8600M GS 256MB, est. 5GFLOPS
GeForce 8600M GS 512MB, est. 6GFLOPS
GeForce 8500 GT 512MB PCIe, 6GFLOPS
GeForce 9600M GT 512MB, est. 14GFLOPS
GeForce 8600 GT 256MB, est. 14GFLOPS
GeForce 8600 GT 512MB, est. 15GFLOPS
GeForce 9500 GT 512MB, est. 15GFLOPS
GeForce 8600 GTS 256MB, est. 18GFLOPS
GeForce 9600 GT 512MB, est. 34GFLOPS
GeForce 9600 GT 512MB, est. 37GFLOPS
GeForce 8800 GTS 640MB, est. 41GFLOPS [compute capability 1.0]
GeForce 9600 GSO 768MB (DDR2), 46GFLOPS
GeForce 9600 GSO 384MB (DDR3), 48GFLOPS
GeForce 8800 GT 512MB, est. 60GFLOPS
GeForce 8800 GTX 768MB, est. 62GFLOPS [compute capability 1.0] (OC?)
GeForce 9800 GT 1024MB, est. 60GFLOPS
GeForce 9800 GX2 512MB, est. 69GFLOPS
GeForce 8800 GTS 512MB, est. 77GFLOPS
GeForce 9800 GTX 512MB, est. 77GFLOPS
GeForce 9800 GTX+ 512MB, est. 84GFLOPS
GeForce GTX 250 1024MB, est. 84GFLOPS

Compute capability 1.3:

GeForce GTX 260 896MB (192sp), est. 85GFLOPS
Tesla C1060 1024MB, est. 93GFLOPS (only?)
GeForce GTX 260 896MB, est. 100GFLOPS
GeForce GTX 260 896MB, est. 104GFLOPS (OC?)
GeForce GTX 260 896MB, est. 111GFLOPS (OC?)
GeForce GTX 275 896MB, est. 123GFLOPS
GeForce GTX 285 1024MB, est. 127GFLOPS
GeForce GTX 280 1024MB, est. 130GFLOPS
GeForce GTX 295 896MB, est. 106GFLOPS (x2 = 212?)

You should also note the following if you're buying a new card or thinking about attaching it to a CUDA project:

- Different cards have different numbers of shaders (the more the better).
- Shader and RAM speeds affect performance (cards are sometimes factory overclocked, and different manufacturers using the same GPU chipset and clock speeds can tweak out slightly different performance).
- Some older cards use DDR2 while newer cards predominantly use DDR3 (DDR3 is roughly 20% to 50% faster, though it varies; faster is better).
- The amount of RAM (typically 256MB, 384MB, 512MB, 768MB, 896MB or 1GB) will significantly affect performance (more is better).
- Some older cards are PCI, not PCI-E (PCI-E is faster).
- Mismatched pairs of PCI-E cards will likely underperform.
- If you overclock your graphics card you will probably get more performance, but you might get more errors and you will reduce the life expectancy of the card, motherboard and PSU. You probably know this already ;)

If you have a slower card (say under 10GFLOPS), don't attach it to GPU-Grid; you are unlikely to finish any tasks in time, so you will not produce any results or earn any points. You may wish to attach it to a project with a longer return deadline (Aqua-GPU, for example). With a 20GFLOPS card most tasks will probably time out. Even with a 9600 GT (about 35GFLOPS) your computer would need to be on most of the time to get a good success/failure ratio.

Please post your NATIVELY CLOCKED BOINC GFLOPS ratings here, or any errors, to help build a more complete list. You can find yours as follows: open BOINC (Advanced View), select the Messages tab, and about the 12th line down it will say "CUDA device..." (or "No CUDA devices found"). Include the card name, compute capability (1.0, 1.1 or 1.3, for example), RAM and est. GFLOPS.
Even if your card is already on the list, your post will confirm the rating and help other people decide what graphics card to get.

PS. If you want more details about an NVIDIA card, look here: http://en.wikipedia.org/wiki/Comparison_of_Nvidia_Graphics_Processing_Units

Thanks.
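If you would rather script the lookup than eyeball the Messages tab, here is a minimal sketch (not an official BOINC tool) that pulls the fields out of the "CUDA device" line. The sample line is copied from a report later in this thread; both "CUDA version" and "compute capability" wordings appear in logs quoted below, so the pattern accepts either.

```python
# Minimal sketch, not an official BOINC tool: parse the "CUDA device"
# line BOINC prints in its Messages tab. Both "CUDA version" and
# "compute capability" variants appear in logs quoted in this thread.
import re

line = ("CUDA device: GeForce GTX 260 (driver version 18585, "
        "CUDA version 1.3, 896MB, est. 121GFLOPS)")

m = re.search(
    r"CUDA device: (?P<name>.+?) \(driver version (?P<driver>\d+), "
    r"(?:CUDA version|compute capability) (?P<cc>[\d.]+), "
    r"(?P<ram>\d+)MB, est\. (?P<gflops>\d+)GFLOPS\)",
    line,
)
if m:
    print(f"{m['name']}: CC {m['cc']}, {m['ram']}MB, est. {m['gflops']} GFLOPS")
```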
MrS (Joined: 17 Aug 08, Posts: 2705, Credit: 1,311,122,549, RAC: 0)
Hi, that's quite some work you've put into collecting this. Let me add a few points/comments:

- We have a comparable list here, including prices (somewhat outdated) and some power consumption numbers.
- We found GT200 to be 41% faster per GFLOP than G9x, so the BOINC benchmark underestimates this influence (it could not possibly reflect it correctly unless it used the actual GPU-Grid code).
- That 8800 GTX is probably not OC'd, as it has more raw power than a 9800 GT.
- The 9800 GX2 should also get that value times 2 (two chips).
- The Tesla C1060 is just a GT200 in a slightly different configuration, so that score is reasonable.
- GPU-Grid is not terribly limited by GPU memory speed, so DDR2 vs GDDR3 doesn't matter much, and any card with DDR2 is likely too slow anyway.
- The amount of GPU memory does not affect GPU-Grid performance, and likely won't for a long time (currently 70 to 100 MB is used). See e.g. the 9600 GSO (384MB vs 768MB) or the 9800 GTX+ / GTS 250 (512MB vs 1024MB).
- PCIe speed does not matter as long as it isn't extremely slow, and any card on plain PCI is likely too slow for GPU-Grid anyway.
- "Mismatched" pairs (I'd call them mixed ;)) of PCIe cards do not underperform. Folding@home has this problem, but it has not been reported here, even with G9x / GT200 mixes.
- If you overclock, you will get more performance. Just increasing clock speed does not decrease GPU lifetime much; temperature is much more important, so if you increase the fan speed slightly you'll easily offset any lifetime loss from higher frequencies. Just don't increase the GPU voltage!

MrS
skgiven (Joined: 23 Apr 09, Posts: 3968, Credit: 1,995,359,260, RAC: 0)
Thanks for your input, MrS. You made many good points, and well spotted with the mistakes:

- The GeForce 9800 GX2 has 2 x 69GFLOPS = 138GFLOPS.
- GPU-Grid performance will not improve with more RAM (GPU-Grid uses 70 to 100MB).
- Different card pairings do not impair GPU-Grid performance.

The lower rated CUDA-capable cards are listed for reference; I did not mean to suggest anyone should use a DDR2 or PCI card (5GFLOPS) on GPU-Grid. Don't do it!

Can I ask you to clarify something regarding the G200 GPU core range? The NVIDIA GeForce GTX 250, despite appearing to be part of the 200 range, actually uses a G92 core (it's almost identical to the GeForce 9800 GTX+), so am I correct in thinking that BOINC rates this correctly as 85GFLOPS, and that the card's name is just an oddity/misnomer?

The NVIDIA GeForce GTX 260 (192sp), on the other hand, does use a G200 core (as do the denser 260 and the 275, 280, 285 and 295 cards). So does BOINC under-rate this GTX 260 (192sp) as an 85GFLOPS card? Would it be more accurate for BOINC to rate this card as 85 x 1.41 = 120GFLOPS?
MrS (Joined: 17 Aug 08, Posts: 2705, Credit: 1,311,122,549, RAC: 0)
Hi,

> The NVIDIA GeForce GTX 250, despite appearing to be part of the 200 range, actually uses a G92 core (it's almost identical to the GeForce 9800 GTX+), so am I correct in thinking that BOINC rates this correctly as 85GFLOPS, and that the card's name is just an oddity/misnomer?

It's actually the GTS 250, not GTX 250. NVidia apparently thinks this single unassuming letter is enough for people to realise that what they are buying is, performance-wise, identical to the 9800 GTX+. Or they just want to mislead customers into thinking the GTS 250 is more than it actually is.

> The NVIDIA GeForce GTX 260 (192sp), on the other hand, does use a G200 core. So does BOINC under-rate this GTX 260 (192sp) as an 85GFLOPS card?

In short: yes :D

You can see in the post I linked to that the GTS 250 is theoretically capable of 705 GFLOPS, whereas the "GTX 260 Core 192" is rated at 715 GFLOPS, so the BOINC benchmark is quite accurate in reproducing this. However, due to advanced functionality in the G200 design, GPU-Grid can extract more real-world GFLOPS from G200 than from G92 (those 41%). You could say the GTX 260 and all other G200-based cards deserve to have their ratings multiplied by 1.41. And since the BOINC benchmark uses different code, it cannot reproduce this accurately, or at all. If it were changed to include this effect, it might become inaccurate for SETI or Aqua, as in their case G92 and G200 may be equally fast on a "per theoretical GFLOP" basis. That's why I think a single benchmark number is not any more useful than the theoretical values.

MrS
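For reference, the theoretical figures MrS quotes can be reproduced from shader count and shader clock. A quick sketch, assuming the usual peak formula for these chips (3 FLOPS per shader per clock, from the dual-issued MAD + MUL) and the reference shader clocks of each card:

```python
# Sketch of the standard theoretical-peak formula for G92/G200 cards:
# shaders * shader clock * 3 ops (dual-issue MAD + MUL per clock).
def peak_gflops(shaders, shader_clock_mhz):
    return shaders * shader_clock_mhz * 3 / 1000.0

print(peak_gflops(128, 1836))  # GTS 250 (G92, 1836 MHz shaders):   ~705 GFLOPS
print(peak_gflops(192, 1242))  # GTX 260 Core 192 (G200, 1242 MHz): ~715 GFLOPS
```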
popandbob (Joined: 18 Jul 07, Posts: 67, Credit: 43,351,724, RAC: 0)
I think I found the reading BOINC uses for its GFLOPS count.

From CUDA-Z: 32-bit Integer: 120753 Miop/s

From BOINC: 6/20/2009 12:55:51 CUDA device: GeForce GTX 260 (driver version 18585, CUDA version 1.3, 896MB, est. 121GFLOPS)

Bob
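If that is indeed the source, the arithmetic is just a unit conversion, as a one-line check shows:

```python
# Bob's observation as arithmetic: BOINC's "est. GFLOPS" matches CUDA-Z's
# 32-bit integer rate converted from Miop/s to Giop/s and rounded.
cudaz_miops = 120753
print(round(cudaz_miops / 1000))  # 121, matching "est. 121GFLOPS"
```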
skgiven (Joined: 23 Apr 09, Posts: 3968, Credit: 1,995,359,260, RAC: 0)
Perhaps you are correct? I only have a GeForce GTX 260 (192sp). BOINC rates it as 85GFLOPS; CUDA-Z's 32-bit Integer performance test rates it at about 86000 Miop/s (though it fluctuates). Is your card overclocked? The other GTX 260 cards I listed were between 100 and 111GFLOPS.
Ross (Joined: 6 May 09, Posts: 34, Credit: 443,507,669, RAC: 0)
The influence of the CPU

If you look at the top computers crunching GPUGRID, the CPU times are low: between 1,000 and 2,000 for 4,000 to 5,000 credits. Most are i7 CPUs using a couple of 295s. So has anyone done some research into what CPU setup is doing best? While the GPU cards are a known factor ("we found GT200 to be 41% faster per GFLOP than G9x, so the BOINC benchmark underestimates this influence (it could not possibly reflect it correctly unless it used the actual GPU-Grid code)"), there are some huge differences in the amount of CPU time taken to do WUs. Assuming the WU is done within 24 hours, I have had differences ranging from 1,000 CPU time for 4,500 credits to 6,000 for 4,500 credits.

Ross
popandbob (Joined: 18 Jul 07, Posts: 67, Credit: 43,351,724, RAC: 0)
> Is your card overclocked, as the other GTX 260 cards I listed were between 100 and 111GFLOPS?

Very much so... current clocks are 702/1566/1107. I'm tempted to use the voltage tuner to up the speed more, though :) Temps are at 67C (fan @ 70%), so lots of room there...

Bob
skgiven (Joined: 23 Apr 09, Posts: 3968, Credit: 1,995,359,260, RAC: 0)
Below is an updated CUDA performance table for cards on GPU-Grid, with reported BOINC GPU ratings, and amended ratings for G200 cores (in brackets) for compute capability 1.3 cards only. (MrS calculated that G200-core CUDA cards operate at 141% efficiency relative to the reported BOINC GFLOPS.) This is a guide to natively clocked card performance on GPU-Grid only (not for other projects)!

The following are mostly compute capability (CC) 1.1:

Don't use with GPU-Grid; they won't finish in time!

GeForce 8400 GS PCI 256MB, est. 4GFLOPS
GeForce 8400 GS PCIe 256MB, est. 5GFLOPS
GeForce 8500 GT 512MB, est. 5GFLOPS
Quadro NVS 290 256MB, est. 5GFLOPS
GeForce 8600M GS 256MB, est. 5GFLOPS
GeForce 8600M GS 512MB, est. 6GFLOPS
GeForce 8500 GT 512MB PCIe, 6GFLOPS

Not recommended for GPU-Grid unless on 24/7:

GeForce 9600M GT 512MB, est. 14GFLOPS
GeForce 8600 GT 256MB, est. 14GFLOPS
GeForce 8600 GT 512MB, est. 15GFLOPS
GeForce 9500 GT 512MB, est. 15GFLOPS
GeForce 8600 GTS 256MB, est. 18GFLOPS

Entry performance cards for GPU-Grid:

GeForce 9600 GT 512MB, est. 34GFLOPS
GeForce 9600 GT 512MB, est. 37GFLOPS
GeForce 8800 GTS 640MB, est. 41GFLOPS [CC 1.0]
GeForce 9600 GSO 768MB (DDR2), 46GFLOPS
GeForce 9600 GSO 384MB (DDR3), 48GFLOPS

Average performance cards for GPU-Grid:

GeForce 8800 GT 512MB, est. 60GFLOPS
GeForce 8800 GTX 768MB, est. 62GFLOPS [CC 1.0]
GeForce 9800 GT 1024MB, est. 60GFLOPS

Good performance cards for GPU-Grid:

GeForce 8800 GTS 512MB, est. 77GFLOPS
GeForce 9800 GTX 512MB, est. 77GFLOPS
GeForce 9800 GTX+ 512MB, est. 84GFLOPS
GeForce GTS 250 1024MB, est. 84GFLOPS

Compute capability 1.3 [mostly]: high-end performance cards for GPU-Grid:

GeForce GTX 260 896MB (192sp), est. 85GFLOPS (120)
Tesla C1060 1024MB, est. 93GFLOPS (131)
GeForce GTX 260 896MB, est. 100GFLOPS (141)
GeForce GTX 275 896MB, est. 123GFLOPS (173)
GeForce GTX 285 1024MB, est. 127GFLOPS (179)
GeForce GTX 280 1024MB, est. 130GFLOPS (183)
GeForce 9800 GX2 512MB, est. 138GFLOPS [CC 1.1]
GeForce GTX 295 896MB, est. 212GFLOPS (299)

I would speculate that, given the 41% advantage of compute capability 1.3 (G200) cards, GPU-Grid is likely to continue supporting these cards' advantageous instruction sets. For those who have both compute capability 1.0/1.1 cards and 1.3 cards and participate in other GPU projects, it would make sense to allocate your 1.3 cards to GPU-Grid.
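The bracketed figures are simply the reported BOINC rating multiplied by MrS's 1.41 factor; a throwaway sketch that reproduces the adjusted column of the 1.3 tier:

```python
# Sketch: reproduce the bracketed GPU-Grid-adjusted ratings above by
# applying MrS's ~41% G200 efficiency factor to the BOINC estimates.
G200_FACTOR = 1.41

boinc_ratings = {          # card: reported BOINC est. GFLOPS
    "GTX 260 (192sp)": 85,
    "Tesla C1060": 93,
    "GTX 260": 100,
    "GTX 275": 123,
    "GTX 285": 127,
    "GTX 280": 130,
    "GTX 295": 212,
}

for card, gflops in boinc_ratings.items():
    print(f"{card}: est. {gflops} -> ~{round(gflops * G200_FACTOR)} GFLOPS")
```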
skgiven (Joined: 23 Apr 09, Posts: 3968, Credit: 1,995,359,260, RAC: 0)
> current clocks 702/1566/1107

I would be happy enough with that performance; it's about the same as a natively clocked GeForce GTX 275! If you are going to up the voltage, select No New Tasks and finish your existing work units first ;)
MrS (Joined: 17 Aug 08, Posts: 2705, Credit: 1,311,122,549, RAC: 0)
Ross, please don't start the same discussion in two different threads!

popandbob wrote:
> I'm tempted to use the voltage tuner to up the speed more though :) Temps at 67c (fan @ 70%) so lots of room there...

You may want to take a look here.

MrS
(Joined: 1 Apr 09, Posts: 2, Credit: 17,884,639, RAC: 0)
CUDA device: GeForce GTX 285 (driver version 18585, compute capability 1.3, 2048MB, est. 136GFLOPS)

This is an EVGA "FTW Edition", factory overclocked to:
Core: 702MHz
Shader: 1584MHz
Memory: 2448MHz
skgiven (Joined: 23 Apr 09, Posts: 3968, Credit: 1,995,359,260, RAC: 0)
> The influence of the CPU. If you look at the top computers that are crunching GPUGRID the CPU times are low, between 1,000 and 2,000 for 4,000 to 5,000 credits. Most are i7 CPUs and are using a couple of 295s.

Yes, I did a bit of research into this and found some interesting results!

Ultimately, any given work unit requires a set amount of CPU processing, and the overall time to complete that CPU processing will vary with CPU performance (or even an overclocked CPU). So, on the face of it, the faster the CPU, the faster you will complete a CUDA work unit (everything else being equal).

However, typical WU completion times on systems with fast CPUs vs slow CPUs are not massively different. This is because the typical CPU usage (running GPU-Grid) is only about 0.12 for a good CPU (the CPU runs 12% of the time), and because most systems are reasonably well balanced in terms of hardware. Even if a slow CPU (a Celeron 440) ran GPU-Grid 40% of the time, there would still be plenty of unused CPU time. It wouldn't quite be the bottleneck you might think, because the CPU is continuously doing small amounts of processing, waiting nanoseconds, and then doing more small amounts. It does not have to run all the CPU calculations from start to finish before running the GPU CUDA calculations, or vice versa, so there is not a massive bottleneck with slower CPUs; the CPU's entire architecture is not being exploited/stressed 100% of the time. My guess is that the differences (in terms of getting through a single GPU-Grid work unit on an average card) between an i7 and a Celeron 440 would be mainly down to FSB speeds, cache and instruction sets rather than CPU frequency or having four cores, and they would not be much!

If you take an extreme example of a Celeron 440 with a GTX 295, the video card is obviously going to fly through its calculations and ask more of the CPU, over any given time, than a GeForce 9600 GT would. Obviously not many people are going to have such an unbalanced system, so it would be difficult to compare the above to a Q9650 (same socket) with a GTX 295. Add another GTX 295 and the Celeron 440 would probably struggle to compute for 4 GPU tasks; a Q9650, on the other hand, would do just fine. If you had a Q9650 and just the 9600 GT, the impact on the CPU of running GPU-Grid would be negligible, but again this would be an imbalanced system (just like having an i7 with 512MB of RAM)!

Moving back to the world of common-sense systems: most people with quad-core (or better) CPU systems that crunch GPU-Grid WUs also crunch CPU tasks for projects such as WCG, Climate Change and so on. So the research I did was to work out whether it was overall more beneficial to use one less CPU core when crunching such tasks alongside GPU-Grid tasks. I actually looked at a more extreme example than GPU-Grid tasks: Aqua was running tasks that required 0.46 CPUs + 1 CUDA. As I was using a quad core, this meant Aqua would use 46% of one core plus the graphics card. After comparing the credit I would get for an Aqua WU against the credit for a WCG work unit taking approximately the same time to complete, I found that it would be beneficial to manually configure BOINC to use 3 cores, and basically leave one for Aqua. I also found that doing this slightly improved the throughput of the other 3 cores! So Aqua sped up noticeably (on a card with either 60 or 77GFLOPS) and the other 3 WCG tasks sped up slightly, offsetting some of the loss of the 4th core.
Given the variety of CUDA cards, CPUs, projects and work units, you would probably have to do the analysis yourself, on your own system. I would guess that if you had a low-end quad CPU and a GTX 295, you would be better off using no more than 3 CPU cores for crunching other projects, leaving one CPU core free to facilitate the CPU processing of the GPU-Grid WUs. At the minute you would probably need to do this manually, by suspending and resuming BOINC CPU tasks. But if you had a GeForce 9600 GT and a Q9650, disabling a CPU core would reduce your overall contribution.
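Whether reserving a core pays off comes down to a simple throughput comparison. A back-of-envelope sketch follows; all the credit numbers in it are made-up placeholders, not measurements, so plug in your own figures as suggested above:

```python
# Back-of-envelope sketch with made-up placeholder numbers: compare
# daily credit from 4 CPU tasks plus a starved GPU task against
# 3 CPU tasks plus a fully fed GPU task. Measure your own values.
CPU_CREDIT_PER_CORE_DAY = 300   # hypothetical CPU-project credit per core per day
GPU_CREDIT_DAY_STARVED = 3500   # hypothetical GPU-Grid credit/day, all 4 cores busy
GPU_CREDIT_DAY_FED = 4200       # hypothetical credit/day with one core kept free

four_cores = 4 * CPU_CREDIT_PER_CORE_DAY + GPU_CREDIT_DAY_STARVED
three_cores = 3 * CPU_CREDIT_PER_CORE_DAY + GPU_CREDIT_DAY_FED

print(four_cores, three_cores)  # 4700 vs 5100: freeing a core wins here
```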
(Joined: 24 Dec 08, Posts: 738, Credit: 200,909,904, RAC: 0)
I've used (so far) 3 cards with BOINC:

9800 GT 512MB: 60 GFLOPS reported by BOINC (as you'd expect, same as the 1Mb card)
GTS 250 512MB: 84 GFLOPS reported by BOINC
GTX 260 896MB (216 shaders): 96 GFLOPS reported by BOINC

All cards are at stock speeds. The only one that appears to differ from the list above is the GTX 260.

BOINC blog
MrS (Joined: 17 Aug 08, Posts: 2705, Credit: 1,311,122,549, RAC: 0)
> GTX 260 896MB (216 shaders): 96 GFLOPS reported by BOINC

We established that 85 GFLOPS is about right for the GTX 260 Core 192. Scaling up just the number of shaders should give 85 * 216 / 192 = 95.6 GFLOPS, which is just what you're getting. The 100 GFLOPS figure was likely obtained from a mildly factory-overclocked card.

MrS
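That scaling argument in one line:

```python
# MrS's check: BOINC's estimate tracks shader count, so scaling the
# 192-shader rating by 216/192 predicts the 216-shader card's rating.
rating_192sp = 85
print(round(rating_192sp * 216 / 192, 1))  # 95.6, matching the reported 96
```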
skgiven (Joined: 23 Apr 09, Posts: 3968, Credit: 1,995,359,260, RAC: 0)
> 9800 GT 512MB: 60 GFLOPS reported by BOINC (as you'd expect, same as the 1Mb card)

I know you meant 1GB, and that you know GPU-Grid uses between 70MB and 100MB [MrS], so for anyone else reading this: whether you have 256MB or 1GB, it should make no difference for GPU-Grid.

Thanks for your confirmations, especially the 260 (216) :)

Someone's bound to have a Quadro. Come on, own up!
skgiven (Joined: 23 Apr 09, Posts: 3968, Credit: 1,995,359,260, RAC: 0)
GeForce 8800 GT 512MB, est. 60GFLOPS
GeForce 9800 GT 512MB, est. 60GFLOPS
GeForce 9800 GT 1GB, est. 60GFLOPS

Reasons: the 8800 GT and 9800 GT are almost identical, and 512MB vs 1GB makes no difference when crunching for GPU-Grid.
(Joined: 16 Aug 08, Posts: 87, Credit: 1,248,879,715, RAC: 0)
> GeForce 8800 GT 512MB, est. 60GFLOPS

The only expected difference would be power consumption, due to the transition from 65nm to 55nm.
vitalidze (Joined: 15 May 09, Posts: 20, Credit: 239,712,351, RAC: 0)
I get 141GFLOPS on my GTX 275 with the shader domain overclocked to 1700MHz.
MrS (Joined: 17 Aug 08, Posts: 2705, Credit: 1,311,122,549, RAC: 0)
The 8800 GT and 9800 GT both never officially moved to 55nm; it's the same G92 chip. Really, the only difference is that the 9800 GT supports Hybrid Power whereas the 8800 GT doesn't. Oh, and a 9 sells better than an 8, of course. There's got to be progress, after all!

MrS