Message boards : Graphics cards (GPUs) : NVidia GPU Card comparisons in GFLOPS peak
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Thanks, Cheech Wizard. I think the GTX 275 cards are the best value for money at the minute, so good purchase. The 1.41 improvement factor was all Extra Terrestrial Apes' work. I added it into the list because it allows people to compare Compute Capable 1.1 and CC 1.3 cards and see how they match up where it matters: crunching here on GPUGRID. I doubt that anyone reading this thread would want to buy a high-spec CC1.1 card now, and people know to avoid the old CC1.0 cards. Your data is good stuff; confirmations like that make it clear how much better the 1.3 cards are.

I would guess that prices might drop a bit just before Christmas or for the sales. You might see some 300s on sale before then, but who knows, it could be next year. I wonder if there is a CC1.4 or CC1.5 on the horizon?

I checked the Boinc rating of an ION recently. It is only 6 GFlops! Your card is about 30 times as fast. Mind you, it is better than an 8600M at 5 GFlops and good enough for HD even on an Atom 330 system. But it's just not a cruncher. The Corsair 550s are good kit too, well worth the money.
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
This is nicked from another GPUGrid thread, but it is relevant here too. tomba reported GT 220 specs:

GPU 1336 MHz, GPU RAM 789 MHz, GPU Graphics 618 MHz, with 48 shaders, 1024 MB, est. 23 GFLOPS.

Wait for it: Compute Capable 1.2. I did not even think that existed! First there was 1.0, then there was 1.1, then 1.3 and now there is 1.2. What a strange number system you have, Mr. Wolf - sort of like your GPU names. Aaahhh, bite me.

Funnies aside, it appears to respond to present work units similarly to CC 1.3 cards, getting through work units relatively faster than CC1.1 cards. In this case I guess it is about 30% faster than its rating suggests (roughly 23 × 1.3), so it has an effective Boinc GFlops value of about 30 GFlops – not bad for a low-end card, and it just about scrapes in as a card worth adding to GPUGrid, if the system is on quite frequently and you don't get lumbered with a 120h task! The letter box was open, but this one was snuck under the door all the same!
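As a quick illustration of that arithmetic (a sketch only; the 1.3 factor here is the poster's guess for how a CC 1.2 card behaves on current work units, not a confirmed figure):

```python
# Effective-throughput guess for the GT 220 described above.
est_gflops = 23   # old-style BOINC "est." rating reported for the GT 220
cc_factor = 1.3   # guessed CC 1.2 adjustment (the CC 1.3 factor cited in this thread is ~1.41)

effective = est_gflops * cc_factor
print(f"Effective Boinc GFlops: ~{effective:.0f}")   # ~30
```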
Joined: 24 Dec 08 | Posts: 738 | Credit: 200,909,904 | RAC: 0
I thought I'd throw the following in now that BOINC has standardized the formula between the different brands of cards. You'll notice that they are now shown as "GFLOPS peak". The startup info is from BOINC 6.10.17 and none of the cards listed are overclocked.

Sulu:
31/10/2009 9:38:55 PM NVIDIA GPU 0: GeForce GTX 295 (driver version 19062, CUDA version 2030, compute capability 1.3, 896MB, 596 GFLOPS peak)
31/10/2009 9:38:55 PM NVIDIA GPU 1: GeForce GTX 295 (driver version 19062, CUDA version 2030, compute capability 1.3, 896MB, 596 GFLOPS peak)

Chekov:
31/10/2009 9:30:14 PM ATI GPU 0: ATI Radeon HD 4700/4800 (RV740/RV770) (CAL version 1.3.145, 1024MB, 1000 GFLOPS peak)

Maul:
31/10/2009 2:53:28 AM NVIDIA GPU 0: GeForce GTX 260 (driver version 19062, CUDA version 2030, compute capability 1.3, 896MB, 537 GFLOPS peak)
31/10/2009 2:53:28 AM NVIDIA GPU 1: GeForce GTX 260 (driver version 19062, CUDA version 2030, compute capability 1.3, 896MB, 537 GFLOPS peak)

Spock:
31/10/2009 9:58:51 PM NVIDIA GPU 0: GeForce GTX 275 (driver version 19062, CUDA version 2030, compute capability 1.3, 896MB, 674 GFLOPS peak)

BOINC blog
Joined: 11 Jul 09 | Posts: 1639 | Credit: 10,159,968,649 | RAC: 318
It's interesting that those figures don't correspond with the ones in koschi's FAQ. I had hoped that the new BOINC detection mechanism would eliminate the confusion between what I've called 'BOINC GFlops' and 'marketing GFlops', but it seems we now have a third unit of measurement. Anyone got a formula to consolidate them? |
Joined: 7 Aug 09 | Posts: 16 | Credit: 346,450,067 | RAC: 0
@Richard: well, I can report one data point for a possible conversion factor. The same GTX 275 (referenced above) was reported as 125 GFlops by my BOINC 6.6.36 client, would be 176 GFlops using the 1.41x factor for CC 1.3 (per above in this thread), and is reported by 6.10.17 as 700 GFLOPS peak. So does the 6.6.x number × 5.6 = the 6.10.17 number? Does the 6.10.17 number / 3.98 = the old number adjusted for CC 1.3 (in other words, does the new rating system discern between CC 1.1 and CC 1.3 and adjust its rating accordingly)? It will take others reporting old numbers vs 6.10.x numbers to establish consistency/linearity (or lack thereof). Jeez... just when it looked like we had this straight!
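A quick check of those two candidate factors against the numbers quoted above (a sketch only; at this point in the thread neither the 5.6 nor the 3.98 figure is confirmed):

```python
# Cross-check the proposed conversion factors for the same GTX 275.
old_est  = 125    # GFlops reported by BOINC 6.6.36
new_peak = 700    # "GFLOPS peak" reported by BOINC 6.10.17
cc13     = 1.41   # CC 1.3 adjustment factor from earlier in the thread

print(old_est * 5.6)      # 700.0  -> reproduces the 6.10.17 figure
print(new_peak / 3.98)    # ~175.9 -> close to old_est * 1.41
print(old_est * cc13)     # 176.25
```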
Joined: 7 Aug 09 | Posts: 16 | Credit: 346,450,067 | RAC: 0
@MarkJ: just to clarify, and correct me if I'm wrong, in your sample data above, user Sulu reports 596 GFlops peak per core for his GTX 295 (1192 total for one card), whereas user Maul's system has a pair of GTX 260s, and is reporting 537 GFlops each. |
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
MarkJ, thanks for the update info. I updated two of my Boinc clients from 6.10.6 to 6.10.17. I noted that my 64-bit Vista Ultimate version was not detected on the web site and it tried to give me the x86 client! On the other hand it spotted my Windows 7 64-bit system and allocated the correct x64 client.

My GTS 250 used to be reported as 84 GFlops; now it is reported as 473 GFlops.
My GTX 260 used to be reported as 104 GFlops; now it is reported as 582 GFlops.

Obviously the 1.41 factor for Compute Capable 1.3 cards has not been fully applied here, though it has to a smaller extent, and the GFlops are still Boinc GFlops, as they do not match the industry-standard figures. Koschi has a GTS 250 at 705 and a GTX 260 at 804.
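Incidentally, both before/after pairs above work out to roughly the same ratio, which would suggest a single scaling factor rather than a CC-dependent one (a quick check, not an official figure):

```python
# New (6.10.17) vs old (6.10.6) BOINC GFlops ratings for the two cards above.
cards = {"GTS 250": (84, 473), "GTX 260": (104, 582)}

for name, (old, new) in cards.items():
    print(f"{name}: {new / old:.2f}x")   # ~5.63x and ~5.60x
```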
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
I think the new rating system is to allow for the comparison of NVidia and ATI cards. Several new ATI cards have been released recently, along with entry-level NVidia cards such as the GT220 with its CC1.2 GPU. So the new system is there to prevent the picture becoming cloudier, especially when the G300 range hits the shelves.

With the new Boinc GFlops rating system, it would seem that a CC 1.2 card will have a factor of about 1.2 and a CC1.3 card will have a factor of about 1.3. If so, that would make good sense, but it does need to be verified. That is an open invitation.
Joined: 21 Dec 08 | Posts: 8 | Credit: 110,281,581 | RAC: 0
Here is my data:

11/3/2009 11:33:39 Starting BOINC client version 6.10.17 for windows_intelx86
11/3/2009 11:33:39 log flags: file_xfer, sched_ops, task
11/3/2009 11:33:39 Libraries: libcurl/7.19.4 OpenSSL/0.9.8k zlib/1.2.3
11/3/2009 11:33:39 Data directory: C:\Documents and Settings\All Users\Application Data\BOINC
11/3/2009 11:33:39 Running under account User
11/3/2009 11:33:39 Processor: 2 GenuineIntel Intel(R) Core(TM)2 Duo CPU E6750 @ 2.66GHz [x86 Family 6 Model 15 Stepping 11]
11/3/2009 11:33:39 Processor: 4.00 MB cache
11/3/2009 11:33:39 Processor features: fpu tsc sse sse2 mmx
11/3/2009 11:33:39 OS: Microsoft Windows XP: Professional x86 Edition, Service Pack 3, (05.01.2600.00)
11/3/2009 11:33:39 Memory: 2.00 GB physical, 4.85 GB virtual
11/3/2009 11:33:39 Disk: 232.88 GB total, 196.46 GB free
11/3/2009 11:33:39 Local time is UTC -6 hours
11/3/2009 11:33:40 NVIDIA GPU 0: GeForce GTX 260 (driver version 19107, CUDA version 2030, compute capability 1.3, 896MB, 510 GFLOPS peak)
Joined: 24 Dec 08 | Posts: 738 | Credit: 200,909,904 | RAC: 0
> @MarkJ: just to clarify, and correct me if I'm wrong, in your sample data above, user Sulu reports 596 GFlops peak per core for his GTX 295 (1192 total for one card), whereas user Maul's system has a pair of GTX 260s, and is reporting 537 GFlops each.

Yep, that's correct. The GTX 260s are almost a year old now, so probably not the most recent design. The GTX 295s are the single-PCB version, so may be different to the dual-PCB version.

BOINC blog
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
My ATI HD 4850 is now rated. Boinc says this 800-shader device with a core @ 625 MHz and 512 MB GDDR3 offers up 1000 GFlops! That's within 20% of a GTX 295. Excellent value for money, if it can be hooked up here. It fairly zips through the Folding@home tasks, but as for GPUGRID, the proof will be in the pudding.

I guess the HD 5970 (when released) will weigh in at around 5000 GFlops, with its two 40nm cores @ 725 MHz, 2 × 1600 shaders and 2 GB GDDR5 @ 4 GHz!
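The 1000 GFlops figure is consistent with a simple shaders × clock × 2 flops calculation from the specs quoted, and the same arithmetic puts the rumoured HD 5970 close to that 5000 GFlops guess (a sketch; the 5970 numbers are the pre-release specs quoted above, and the 2-flops-per-shader-per-clock assumption is inferred from the 4850 figure, not taken from BOINC's source):

```python
def ati_peak_gflops(shaders, clock_ghz, flops_per_clock=2):
    """Rough peak estimate: shaders x clock (GHz) x flops issued per clock."""
    return shaders * clock_ghz * flops_per_clock

print(ati_peak_gflops(800, 0.625))        # 1000.0 -> HD 4850, as BOINC reports
print(2 * ati_peak_gflops(1600, 0.725))   # 4640.0 -> dual-GPU HD 5970 estimate
```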
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Well, so much for the comparison list of cards and performance. It now looks like the only cards capable of consistently completing tasks are the G200-based cards. Given the increase in task length, this really narrows the range to 5 expensive top-end cards: GTX260 216sp, GTX275, GTX280, GTX285 and the GTX295.

Task failure rates for the G92 cards are now so high that for many it is not worth bothering. I have retired one card from the project, an 8800 GTS 512MB. That only leaves my GTX260 and GTS250. The GTX260 is running very well indeed, thank you, but as for the GTS250? It lost a total of 45h run time last week due to task failures, so 25% of the time it was running was wasted! That is a top-of-the-range G92 card, the GPU sits at 65 degrees C, and it is backed by a stable Q9400 @ 3.5GHz.

Perhaps some people with lower-end G200 cards (GeForce 210/220) might still white-knuckle it for several days to get through the odd task, and the mid-range GeForce GT 240 and GeForce GTS 240 cards can still contribute, but they are a bit tame! If the G300 series does not turn up, it's ATI or bye-bye project time!
Joined: 21 Feb 09 | Posts: 497 | Credit: 700,690,702 | RAC: 0
> This is nicked from another GPUGrid thread, but it is relevant here too.

This is tomba. I've been running my GT 220 24/7 for three months. BOINC sees:

05/12/2009 11:19:05 NVIDIA GPU 0: GeForce GT 220 (driver version 19107, CUDA version 2030, compute capability 1.2, 1024MB, 128 GFLOPS peak)
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Thanks for posting these details. The discrepancy between both cards is due to the Boinc version. The older Boinc versions used a different system to calculate GPU performance. So the old value of 28 GFlops equates to the new value of 127 GFlops for the GeForce GT 220 cards.
Joined: 11 Jul 09 | Posts: 1639 | Credit: 10,159,968,649 | RAC: 318
Thanks for posting these details.

The old, lower GFlops figure is always shown as "est. nn GFLOPS". The new, higher figure will always be shown as "nnn GFLOPS peak". Other things (clock rate etc.) being equal, the 'peak' figure will always be 5.6 times the 'est.' figure.

(source: [trac]changeset:19310[/trac], coproc.h)
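Putting that together with the startup messages earlier in the thread, the 'peak' figure for these CC 1.x NVIDIA cards looks like shaders × shader clock × 2 flops, with the old 'est.' figure being that value divided by 5.6. A minimal sketch (the 2-flops-per-shader-per-clock assumption is inferred from the reported numbers, not quoted from coproc.h):

```python
def nvidia_peak_gflops(shaders, shader_clock_ghz):
    """'GFLOPS peak' as reported by BOINC 6.10.x (assumed 2 flops per shader per clock)."""
    return shaders * shader_clock_ghz * 2

def old_est_gflops(peak_gflops):
    """Pre-6.10 'est.' figure, using the 5.6 factor cited above."""
    return peak_gflops / 5.6

peak = nvidia_peak_gflops(240, 1.404)   # GTX 275 at its reference shader clock
print(round(peak))                      # 674 -> matches the 6.10.17 startup line above
print(round(old_est_gflops(peak)))      # 120 -> in line with the ~125 reported by 6.6.36
```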
robertmiles | Joined: 16 Apr 09 | Posts: 503 | Credit: 769,991,668 | RAC: 0
Below is an updated CUDA Performance Table for cards on GPU-GRID, with reported Boinc GPU ratings, and amended ratings for G200 cores (in brackets) - only for compute capable 1.3 cards. (MrS calculated that G200-core CUDA cards operate at 141% efficiency compared to the reported Boinc GFLOPS.)

[snip]

I would speculate that, given the 41% advantage in using compute capable 1.3 (G200) cards, GPU-GRID would be likely to continue to support these cards' advantageous instruction sets.

But what other GPU projects are currently capable of using 1.1 cards? My search for them has not found any I'm interested in that will use the G105M card on my laptop, and I've tried just about all of those suggested except SETI@home. Due to driver availability problems, it's currently limited to CUDA 2.2; the Nvidia site says the 190.* series is NOT suitable for this card.
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
You could try Einstein, but don't expect too much from that project. In a way your card would be suited to that project; it won't stress the GPU too much!
robertmiles | Joined: 16 Apr 09 | Posts: 503 | Credit: 769,991,668 | RAC: 0
I recently found that the 195.62 driver solves the CUDA-level problem for that card on MOST laptops, including this one. It now has both Collatz and Einstein workunits in its queue. For Einstein, it currently looks more like helping them develop their CUDA software than getting faster workunits any time soon.
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
The following GeForce cards are the present mainstream choice:

| Card | GPU | Process | Compute Capable | Boinc GFlops peak |
|---|---|---|---|---|
| GT 240 | GT215 | 40nm | 1.2 | 257 |
| GTX 260 Core 216 | GT200b | 55nm | 1.3 | 582 |
| GTX 275 | GT200b | 55nm | 1.3 | 674 |
| GTX 285 | GT200b | 55nm | 1.3 | 695 |
| GTX 295 | GT200b | 55nm | 1.3 | 1192 |
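For reference, most of those peak figures line up closely with typical shader counts and reference shader clocks under the same shaders × clock × 2 arithmetic; where a figure differs (e.g. the GTX 260 Core 216 at 582), it likely reflects a particular card's factory clocks rather than reference ones. A sketch, with the shader counts and clocks below being assumed reference values rather than figures from this thread:

```python
# Assumed reference specs: (shaders, shader clock in GHz).
cards = {
    "GT 240":            (96,  1.340),
    "GTX 260 Core 216":  (216, 1.242),
    "GTX 275":           (240, 1.404),
    "GTX 285":           (240, 1.476),
    "GTX 295 (2 GPUs)":  (480, 1.242),
}

for name, (shaders, clock_ghz) in cards.items():
    print(f"{name}: ~{shaders * clock_ghz * 2:.0f} GFLOPS peak")
```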
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Details of a GT240 GV-N240D3-1GI made by GIGABYTE: PCIE 2.0, 1GB DDR3/128-bit, Dual Link DVI-I/D-Sub/HDMI, Compute Capable 1.2, 280 Boinc GFlops.

Full specs:
GPU: GT215
Revision: A2
Technology: 40 nm
Die Size: 727 mm²
BIOS Version: 70.15.1E.00.00
Device ID: 10DE - 0CA3
Bus Interface: PCI-E x16 @ x16
Subvendor: Gigabyte (1458)
ROPs: 8
Shaders: 96 (DX 10.1)
Pixel Fillrate: 4.8 GPixel/s
Texture Fillrate: 19.2 GTexel/s
Memory Type: DDR3
Bus Width: 128 bit
Memory Size: 1024 MB
Bandwidth: 25.6 GB/s
Driver: nvlddmkm 8.17.11.9562 (ForceWare 195.62) / 2008 R2
GPU Clock: 600 MHz 800 MHz 1460 MHz
Default Clock: 600 MHz 800 MHz 1460 MHz

Comments: WRT crunching for GPUGrid, this offers up just under half the GFlops power of a GTX260 216sp card. In terms of power consumption it is much more efficient and does not require any special connectors. It should therefore appeal to people who don't want to buy an expensive PSU at the same time as forking out for a new GPU, or a completely new computer. This particular card is also short and should fit many more computers as a result.

The one I am testing benefits from a large fan blowing directly onto it from the front bottom of the case, and from having the 3 blanking plates beneath it at the rear removed. The result: it is running GPUGrid at an amazingly cool 37 degrees C, with GPU load @ 69% and memory controller @ 34% running an m3-IBUCH_min_TRYP task. With its low power requirements and 40nm core it runs cool and would no doubt be very quiet.

As for the power consumption, my system's total power consumption when crunching on 4 CPU cores @ 100% moved from 135W to 161W when I started running GPUGrid on top of that. So it only added another 26W. The card also drops its power usage considerably when not in use; I guess it uses about 10W when idle.

As for real-world crunching performance (completed vs crashed tasks), only time will tell, but it seems to have a lot going for it, despite lacking the performance capabilities of the top GTX GT200b cards.
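As a cross-check, the 280 Boinc GFlops quoted for this GV-N240D3-1GI falls out of the same arithmetic using the card's listed 96 shaders and the 1460 MHz figure from its clock readout (assuming that third figure is the shader clock), and the extra power draw is straight from the wall readings above:

```python
shaders = 96
shader_clock_ghz = 1.460   # assumed to be the shader clock in the readout above

print(round(shaders * shader_clock_ghz * 2))   # 280 GFLOPS peak, as reported

# Additional draw at the wall when GPUGrid runs on top of four loaded CPU cores
print(161 - 135)                               # 26 W
```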