Message boards : Graphics cards (GPUs) : What card?
Henri (Joined: 1 Nov 07, Posts: 38, Credit: 6,365,573, RAC: 0)
Hi! a) What is the cheapest possible graphics card for crunching PS3grid.net workunits? b) What is the best/fastest possible graphics card for crunching PS3grid.net workunits? Thanks!
koschi (Joined: 14 Aug 08, Posts: 127, Credit: 913,858,161, RAC: 15)
Hi! a) GeForce 8400GS, but better to run it in a single-core PC; otherwise the other units won't finish before the deadline, because the card is so slow. It has only a few shaders at a low clock speed. b) GeForce GTX280, if you can afford it.
Henri (Joined: 1 Nov 07, Posts: 38, Credit: 6,365,573, RAC: 0)
Hi! Thanks. OK. c) What is the best compromise (not too slow, not too expensive) graphics card for crunching PS3grid.net workunits? Henri.
GDF (Joined: 14 Mar 07, Posts: 1958, Credit: 629,356, RAC: 0)
Hi! 8800GT 512MB. gdf
Henri (Joined: 1 Nov 07, Posts: 38, Credit: 6,365,573, RAC: 0)
Thanks! :) So, is this the correct model? Does this card support CUDA 2? Will it work fast enough in a PCI Express 1.1 slot? I only have a PCI Express 1.1 motherboard. Where can I get the latest drivers for that card? I have no experience using NVIDIA cards (I have an ATI card at the moment). Henri.
Joined: 18 Aug 08, Posts: 121, Credit: 59,836,411, RAC: 0
I recommend the 8800 GTS 512 with 128 processors. It is PCI-E 2.0, but it should work without problems on PCI-E 1.0. Drivers: http://www.nvidia.com/object/cuda_get.html
Krunchin-Keith [USA] (Joined: 17 May 07, Posts: 512, Credit: 111,288,061, RAC: 0)
I run three 8800GTs, one in each computer. They work very well and I see little slowdown during normal Windows operation. There is some, depending on what I'm running, but mostly it is not too bad. Some people have reported they can't even use their computer when it runs here. My two at work I use all day, heavily, with little noticeable slowdown, and they are both P4-HT machines running full BOINC, using both CPU and GPU. These were well priced for me, not too expensive like the high-end cards.

Mine are XFX brand and are a single slot wide, an important thing to consider, as some computers, especially all of mine, cannot take double-wide cards without sacrificing another PCI slot, which I could not do as I have other boards and no empty slots to move them to. These are PCIe x16 2.0 cards, but that is backward compatible with PCIe x16 1.1 slots; they worked fine in my PCIe x16 1.1 slots. Mine came with a double lifetime warranty. I guess that means when I die, I get to take it with me to the afterlife ;)

The number of stream processors (shown as cores) is important; fewer processors means the application takes longer. 512MB memory would be good.
```
There is 1 device supporting CUDA

Device 0: "GeForce 8800 GT" (640MHz version)
  Major revision number:                         1
  Minor revision number:                         1
  Total amount of global memory:                 536543232 bytes
  Number of multiprocessors:                     14
  Number of cores:                               112
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       16384 bytes
  Total number of registers available per block: 8192
  Warp size:                                     32
  Maximum number of threads per block:           512
  Maximum sizes of each dimension of a block:    512 x 512 x 64
  Maximum sizes of each dimension of a grid:     65535 x 65535 x 1
  Maximum memory pitch:                          262144 bytes
  Texture alignment:                             256 bytes
  Clock rate:                                    1.62 GHz
  Concurrent copy and execution:                 Yes

Test PASSED
```

```
There is 1 device supporting CUDA

Device 0: "GeForce 8800 GT" (600MHz version x 2)
  Major revision number:                         1
  Minor revision number:                         1
  Total amount of global memory:                 536543232 bytes
  Number of multiprocessors:                     14
  Number of cores:                               112
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       16384 bytes
  Total number of registers available per block: 8192
  Warp size:                                     32
  Maximum number of threads per block:           512
  Maximum sizes of each dimension of a block:    512 x 512 x 64
  Maximum sizes of each dimension of a grid:     65535 x 65535 x 1
  Maximum memory pitch:                          262144 bytes
  Texture alignment:                             256 bytes
  Clock rate:                                    1.51 GHz
  Concurrent copy and execution:                 Yes

Test PASSED
```

See some of the other reports by users in other threads. Links to downloads are on the front page. NVIDIA supports their product well on their website. Visit the CUDA Zone section for the CUDA drivers. See the FAQ section for a list of cards that are supported here.
Joined: 24 Aug 08, Posts: 45, Credit: 3,431,862, RAC: 0
I think the 9800GTX+ 512MB is a good (or better) solution. You have to pay 10 euro more, but you get more power. Isn't it? Or am I wrong?
Joined: 18 Aug 08, Posts: 121, Credit: 59,836,411, RAC: 0
To be honest, you can buy the 8800GTS 512 much cheaper than the 9800GTX+. Both cards use the G92; the real difference between these two is that the 9800GTX+ has a smaller 55nm chip while the 8800 is 65nm, and the 9800GTX+ is slightly faster due to additional MHz on core and memory. So on price/performance the 8800GTS is better, but on thermals the 9800GTX+ wins.
Henri (Joined: 1 Nov 07, Posts: 38, Credit: 6,365,573, RAC: 0)
Thanks again! One more thing: I want the card to be as silent as possible. It would be very annoying to crunch ~24/7 if the GPU fan screams all the time. So, what exact make and model is quiet enough? Henri.
MJH (Joined: 12 Nov 07, Posts: 696, Credit: 27,266,655, RAC: 0)
> So, what exact make and model is quiet enough?

There do exist 8800-series cards with passive cooling systems (i.e. no fan), but expect to pay a premium for these! As long as the devices conform to NVIDIA's reference designs, we'd expect them to be fine for GPUGRID. MJH
MrS (Joined: 17 Aug 08, Posts: 2705, Credit: 1,311,122,549, RAC: 0)
Regarding noise I'll refer to the thread I just created. MrS

Scanning for our furry friends since Jan 2002
MrS (Joined: 17 Aug 08, Posts: 2705, Credit: 1,311,122,549, RAC: 0)
And let's get serious about the "most effective card" question. Buying anything smaller than a G92-based card is not the way to go: they are not that much cheaper, but much slower. I'd like to know how fast a GTX260 is in the real world, because it's considerably more expensive than the 9800GTX+, the most expensive G92 card, but has about the same maximum GFlops.

Let's consider these cards: 8800GT, 8800GTS 512, 9800GTX+ and GTX280. I collected current pricing for Germany and the GFlops figures from the wiki. Doing that, the

- 8800GT has 504 GFlops for 110€ -> 4.58 GFlops/€
- 8800GTS 512 has 624 GFlops for 130€ -> 4.80 GFlops/€
- 9800GTX+ has 705 GFlops for 155€ -> 4.55 GFlops/€
- GTX280 has 933 GFlops for 350€ -> 2.67 GFlops/€

So the 8800GTS 512 seems to be the most efficient card of these. However, you have to take into account that you also need a PC to run the card in, plus one CPU core. I'll use my main rig to give an example of what I mean by that. On average my 9800GTX+ needs 44,569 s/WU with the 6.41 client. That means running 24/7 it earns 3,850 credits/day, or 6,265 with the 6.43 client. Since I sacrifice one CPU core I lose about 1,000 credits/day (Q6600@3GHz running QMC). Therefore the net gain from running GPU-Grid is 2,850 or 5,265 credits/day.

Assuming linear performance scaling with the GFlops rating, an 8800GTS 512 would earn 5,545 credits/day, which is a net win of 4,545 credits/day. Therefore the 8800GTS 512 gives you 35 credits/day/€ and the 9800GTX+ 34 credits/day/€, so the 8800GTS 512 is the efficiency winner. However, are 700 credits/day worth a one-time investment of 25€ to you? Your choice.. I certainly made mine ;)

Of course you could always overclock either card.. but I don't think the software is that stable yet. I'd rather have the additional speed guaranteed. And going with a 55 nm chip doesn't help much, but doesn't hurt either.

MrS

Scanning for our furry friends since Jan 2002
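The arithmetic above can be sketched in a few lines of Python. This is a minimal sketch, not the project's methodology: the prices, GFlops figures, the measured 6,265 credits/day for the 9800GTX+, and the ~1,000 credits/day CPU cost are the assumptions quoted in the post.

```python
# Price/performance arithmetic from the post above.
# Prices in EUR (Germany), GFlops from the Wikipedia spec tables cited there.
cards = {
    "8800GT":      (504, 110),
    "8800GTS 512": (624, 130),
    "9800GTX+":    (705, 155),
    "GTX280":      (933, 350),
}

# Raw GFlops per euro, ignoring the rest of the system.
gflops_per_euro = {name: gf / price for name, (gf, price) in cards.items()}

def net_credits_per_day(gflops, measured=6265, measured_gflops=705, cpu_cost=1000):
    """Scale the measured 9800GTX+ rate (6.43 client) linearly with GFlops,
    then subtract the credits lost by dedicating one CPU core to the GPU."""
    return gflops / measured_gflops * measured - cpu_cost

for name, (gf, price) in cards.items():
    print(f"{name}: {gflops_per_euro[name]:.2f} GFlops/EUR, "
          f"{net_credits_per_day(gf):.0f} net credits/day")
```

Running this reproduces the post's conclusion: the 8800GTS 512 leads on raw GFlops/€, and its net credit advantage over the 9800GTX+ shrinks to roughly 700 credits/day once the fixed CPU cost is counted.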
Joined: 24 Aug 08, Posts: 45, Credit: 3,431,862, RAC: 0
> And let's get serious about the "most effective card" question.

What you wrote is very interesting. I also have a Q6600 overclocked to 3 GHz and will buy a 9800GTX+ tomorrow. The quota of 1 WU per CPU per day seems very small to me. By your calculation it should be 2 WUs.
MrS (Joined: 17 Aug 08, Posts: 2705, Credit: 1,311,122,549, RAC: 0)
Yes, actually I have a hard time establishing a 2-day cache.. but the GPU has not run dry yet :) MrS

Scanning for our furry friends since Jan 2002
[FVG] bax (Joined: 18 Jun 08, Posts: 29, Credit: 17,772,874, RAC: 0)
Once upon a time... 2 weeks ago... we were all happy owners of:

# GeForce 8800 GTS - 320/640MB - 96 shader units
# GeForce 8800 GTX - 768MB - 128 shader units
# GeForce 8800 Ultra - 768MB - 128 shader units

crunching the 6.25 application on a happy Linux OS... Do you think it is possible to make us happy again in the future? Right now we can't help the project, but we want to!!! Sorry, but... I was so happy 2 weeks ago :-))
Stefan Ledwina (Joined: 16 Jul 07, Posts: 464, Credit: 298,573,998, RAC: 0)
> ...I'd like to know how fast a GTX260 is "in real world", because it's considerably more expensive than the 9800GTX+, the most expensive G92 card, but has about the same maximum GFlops.

Well, I was able to run a few tasks on my GTX 260 with an earlier app version in the first tests under Linux64, but couldn't crunch more than one WU in a row because of driver problems, so I switched it to the Vista box...

But as for the speed comparison: my EVGA GTX 260 was as fast as my EVGA 9800 GTX SC (super clocked), actually a little bit slower!

pixelicious.at - my little photoblog
MrS (Joined: 17 Aug 08, Posts: 2705, Credit: 1,311,122,549, RAC: 0)
> But as for the speed comparison: my EVGA GTX 260 was as fast as my EVGA 9800 GTX SC (super clocked), actually a little bit slower!

Thx! So the architectural fine-tuning (more registers etc.) of the GT200 doesn't yield any benefit (yet) for GPU-Grid, and these cards thus offer rather poor performance for the money. If I put the numbers in for the GTX280, I get 2,000 credits/day more than a 9800GTX+ for 200€ more. Not a terrible deal, but I wouldn't recommend it.

And I forgot the 9800GX2! 1 TFlops for 260€ -> 8,900 credits/day, which is 1,600 credits/day more than the 9800GTX+ (assuming 1,000 cr/day for both CPU cores) for 100€ more. The downside of this card is that it needs a 6-pin and an 8-pin power plug, and aftermarket cooling solutions likely won't work due to the two-chip design.

MrS

Scanning for our furry friends since Jan 2002
Joined: 26 Aug 08, Posts: 55, Credit: 1,475,857, RAC: 0
> And let's get serious about the "most effective card" question.

I just sold my 8800 GS and went for a 280. I don't regret the upgrade, despite many complaining about the affordability! Real-world benchmarks with Folding@home, PS3grid and FutureMark showed me a 3x gain since the upgrade. I sold my 8800 GS for a third of the price of the 280. "Flops" are misleading; I think the number of stream processors plays a bigger role, and frankly, I was never a big fan of SLI.
MrS (Joined: 17 Aug 08, Posts: 2705, Credit: 1,311,122,549, RAC: 0)
"Flops" are misleading, I think the number of stream processors plays a bigger role, and frankly, I was never a big fan of SLIs. Well.. no. Flops are calculated as "number of shaders" * "shader clock" * "instructions per clock per shader". The latter one could be 2 (one MADD) or 3 (one MADD + one MUL), but it's constant for all G80/90/GT200 chips. So Flop are a much better performance measure than "number of shaders", because they also take the frequency into account. And SLI.. yeah, just forget it for games. And for folding you'd have to disable it anyway. MrS Scanning for our furry friends since Jan 2002 |
©2025 Universitat Pompeu Fabra