Message boards : Graphics cards (GPUs) : What card?
KyleFL | Joined: 28 Aug 08 | Posts: 33 | Credit: 786,046 | RAC: 0

Hello, I just got a 9800GT for 110 € (the same price as the 8800GT, with the same G92 chip and clock speed). Running time for one WU is ~21 h on app 6.43 on a Core 2 Duo at ~2.1 GHz (E6300). I have it running together with a SETI WU on the other core. (Last night I stopped the SETI project to see whether it has an impact on the GPUGRID WU time, but it doesn't seem to.)

Regards, Thorsten "KyleFL"

Kokomiko | Joined: 18 Jul 08 | Posts: 190 | Credit: 24,093,690 | RAC: 0

I've got one 8800GT and one GTX280 running. The 8800GT needs 11:39 h for one WU and I got 1,987.41 credits, that's ca. 170 cr/h. The card runs in an AMD Phenom 9850 BE system. The GTX280 needs only 7:50 h for one WU and I got 3,232.06 credits for it, that's ca. 415 cr/h. Are these different WUs, or why are the credits higher?

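For reference, the credit rates quoted above follow directly from credits divided by run time; the short Python sketch below (an illustration only, not anything from the BOINC client or the project) reproduces them:

```python
# Rough check of the cr/h figures quoted above (illustration only).

def credits_per_hour(credits, hours, minutes):
    """Credits divided by the run time given as hours and minutes."""
    return credits / (hours + minutes / 60)

print(f"8800GT: {credits_per_hour(1987.41, 11, 39):.0f} cr/h")  # ~171 cr/h
print(f"GTX280: {credits_per_hour(3232.06, 7, 50):.0f} cr/h")   # ~413 cr/h
```
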
Joined: 12 Jul 07 | Posts: 100 | Credit: 21,848,502 | RAC: 0

> I've got one 8800GT and one GTX280 running. The 8800GT needs 11:39 h for one WU and I got 1,987.41 credits, that's ca. 170 cr/h. The card runs in an AMD Phenom 9850 BE system. The GTX280 needs only 7:50 h for one WU and I got 3,232.06 credits for it, that's ca. 415 cr/h. Are these different WUs, or why are the credits higher?

It's the new credit award with app v6.42; your 8800GT will also start to get 3232/WU once it runs v6.42 (or higher) ;-)

Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0

Kokomiko, both of your cards are rather fast. Are they overclocked? What are the shader clocks on both? Are you running Windows or Linux? Using my values as a reference, a stock GTX280 would need ~9:20 h.

MrS
Scanning for our furry friends since Jan 2002

Kokomiko | Joined: 18 Jul 08 | Posts: 190 | Credit: 24,093,690 | RAC: 0

> It's the new credit award with app v6.42; your 8800GT will also start to get 3232/WU once it runs v6.42 (or higher) ;-)

Both machines run v6.43, what's wrong?

Kokomiko | Joined: 18 Jul 08 | Posts: 190 | Credit: 24,093,690 | RAC: 0

> Kokomiko, both of your cards are rather fast. Are they overclocked?

Neither is overclocked. The 8800GT (Gigabyte, 112 shaders, 1728 MHz) runs under Vista 64-bit on a Phenom 9850 BE at 2.5 GHz; the GTX280 (XFX, 240 shaders, 1296 MHz) also runs under Vista 64-bit, on a Phenom 9950 at 2.6 GHz.

Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0

The last WU finished by your 8800GT was still using 6.41, hence the lower credits. So your 8800GT is not overclocked by you, but it is clocked well above the stock 1500 MHz. Interesting, though: it's clearly faster than my 9800GTX+ despite having fewer shaders and a lower shader clock. Which driver are you using?

MrS
Scanning for our furry friends since Jan 2002

Kokomiko | Joined: 18 Jul 08 | Posts: 190 | Credit: 24,093,690 | RAC: 0

> Which driver are you using?

The newest, 177.84.

Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0

Same for me. So the only differences left are that you use Vista 64 versus my XP 32, and my Q6600 @ 3 GHz on a P35 board versus your Phenom. But that shouldn't have such a strong effect.

MrS
Scanning for our furry friends since Jan 2002

Joined: 18 Aug 08 | Posts: 121 | Credit: 59,836,411 | RAC: 0
> The last WU finished by your 8800GT was still using 6.41, hence the lower credits.

It is well known that some card manufacturers ship cards with higher clocks than the reference design...

Joined: 1 Nov 07 | Posts: 38 | Credit: 6,365,573 | RAC: 0
> So the 8800GTS 512 seems to be the most efficient card of these. However, you have to take into account that you also need a PC to run the card in and one CPU core. I'll use my main rig to give an example of what I mean by that.

I do not understand. Can't I run two SETI@home WUs plus GPUGRID all at once on my dual-core Pentium D 920 (and NVIDIA)?

Henri.

Joined: 18 Aug 08 | Posts: 121 | Credit: 59,836,411 | RAC: 0
> So the 8800GTS 512 seems to be the most efficient card of these. However, you have to take into account that you also need a PC to run the card in and one CPU core. I'll use my main rig to give an example of what I mean by that.

No, you can't. For example, I have a Q6600 and an 8800GTS, and I crunch Rosetta@home and PS3Grid in TomaszPawelTeam :) So Rosetta runs on 3 cores and PS3Grid runs on 1 core. On your computer, SETI@home will run on one core and PS3Grid on the other. I know, it seems strange to me too, and it shows that the GPU is very powerful, but it needs help from the CPU to crunch... So one core is always given up to each GPU...

P.S. If I had more $$$ I would buy a GTX280, but I don't, so I crunch on an 8800GTS 512... If you have the $$$ :) you should buy a GTX280 :)

Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0

> It is well known that some card manufacturers ship cards with higher clocks than the reference design...

Sure. The point is that he's about 30 min faster than me with 112 shaders at 1.73 GHz, whereas I have 128 shaders at 1.83 GHz. That's a difference worth investigating. My prime candidate would be the Vista / Vista 64 driver.

And yes, currently you need one CPU core per GPU WU. It's not doing any actual work, just keeping the GPU busy (sort of).

MrS
Scanning for our furry friends since Jan 2002

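To put that gap in numbers: if performance scaled purely with shaders × shader clock (an assumption the result below contradicts), the 9800GTX+ should be ahead rather than ~30 min behind. A minimal sketch using the clocks quoted in this thread:

```python
# Expected speed ratio if throughput scaled purely with shaders x shader clock.
# Figures taken from the posts above: 8800GT 112 @ 1728 MHz, 9800GTX+ 128 @ 1836 MHz.

gt_8800   = 112 * 1728
gtx_9800p = 128 * 1836

print(f"9800GTX+ vs 8800GT raw shader throughput: {gtx_9800p / gt_8800:.2f}x")
# ~1.21x in favour of the 9800GTX+, yet the 8800GT finishes ~30 min sooner,
# so something other than raw shader throughput (OS or driver) must explain the gap.
```
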
Kokomiko | Joined: 18 Jul 08 | Posts: 190 | Credit: 24,093,690 | RAC: 0

> It is well known that some card manufacturers ship cards with higher clocks than the reference design...

My wife has an MSI 8800GT running under XP 32-bit on a Phenom 9850 BE; its shader clock is 1674 MHz and she needs 13:40 h for one WU. That also seems to be faster than the stock frequency, but much slower than my card under Vista 64.

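Scaling the XP machine's run time by the shader-clock difference alone (a back-of-the-envelope assumption, nothing more) suggests the clock cannot explain the gap to the Vista 64 machine either:

```python
# If run time scaled inversely with shader clock, the XP card at 1728 MHz
# would still take roughly 13:14 h, well above the 11:39 h seen under Vista 64.

xp_minutes = 13 * 60 + 40                 # 13:40 h under XP 32-bit at 1674 MHz
scaled     = xp_minutes * 1674 / 1728     # hypothetical run time at 1728 MHz

print(f"clock-adjusted estimate: {scaled // 60:.0f}:{scaled % 60:02.0f} h")  # ~13:14 h
```
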
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0

GDF said that moving to the CUDA 2.0 compilers caused a 20% performance hit under Win XP, which could be recovered by future drivers. The Vista driver is different from the XP one, so it seems the Vista driver suffers less than a 20% performance hit.

MrS
Scanning for our furry friends since Jan 2002

Krunchin-Keith [USA] | Joined: 17 May 07 | Posts: 512 | Credit: 111,288,061 | RAC: 0

"Flops" are misleading. I think the number of stream processors plays a bigger role, and frankly, I was never a big fan of SLI. Remember that the flops formula is the best the GPU can do (its peak), but very few real-world applications can issue the maximum number of instructions every cycle, unless you just have an application adding and multiplying useless numbers to stay at the maximum.

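For context, the "flops formula" referred to above is presumably the usual peak estimate of shaders × shader clock × operations per clock. The sketch below assumes the commonly quoted 3 FLOPS per shader per clock (MAD + MUL) for these NVIDIA chips, which is exactly the theoretical ceiling that real workloads never reach:

```python
# Theoretical peak single-precision GFLOPS, assuming 3 FLOPS per shader per clock
# (MAD + MUL). This is the never-reached marketing peak discussed above.

def peak_gflops(shaders, shader_clock_mhz, flops_per_clock=3):
    return shaders * shader_clock_mhz * flops_per_clock / 1000

print(f"8800GT (112 @ 1500 MHz): {peak_gflops(112, 1500):.0f} GFLOPS")  # ~504
print(f"GTX280 (240 @ 1296 MHz): {peak_gflops(240, 1296):.0f} GFLOPS")  # ~933
```
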
Joined: 1 Nov 07 | Posts: 38 | Credit: 6,365,573 | RAC: 0

Thanks for the info once again, guys! It's a bit sad that one CPU core is wasted even if the GPU is used. Can they change this someday?

Henri.

Stefan Ledwina | Joined: 16 Jul 07 | Posts: 464 | Credit: 298,573,998 | RAC: 0

It is not wasted. If I understood it right, the CPU is needed to feed the GPU with data... It's the same with Folding@home on the GPU, but they only need about 5% of one core, and they are planning to distribute an application that uses only the GPU, without needing the CPU, in the future. I don't know if this would also be possible with the application here on PS3GRID...

pixelicious.at - my little photoblog

Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0

> Remember that the flops formula is the best the GPU can do (its peak), but very few real-world applications can issue the maximum number of instructions every cycle.

Yes, we're only calculating theoretical maximum flops here; the real performance is going to be lower. This "lower" is basically the same factor for all G8x / G9x chips, but the GT200 received a tweaked shader core and could therefore show higher real-world GPU-Grid performance at the same flops rating. That's why I asked for the GTX260 :)

Edit: and regarding CPU usage, F@H also needed 100% of one core in GPU1. The current GPU2 client seems tremendously improved. Maybe whatever F@H did could also help GPU-Grid?

MrS
Scanning for our furry friends since Jan 2002

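As a generic illustration of why the feeder thread shows up as a fully loaded core (this is not how the GPUGRID or F@H clients are actually written, just the underlying idea): a thread that busy-polls until the GPU finishes burns a whole core, while one that blocks on an event uses almost none.

```python
# Busy-polling vs. blocking while waiting for a device to finish (generic sketch,
# with a sleeping thread standing in for the GPU kernel).
import threading
import time

done = threading.Event()

def fake_gpu_kernel():
    time.sleep(2)          # pretend the GPU is crunching for 2 seconds
    done.set()

threading.Thread(target=fake_gpu_kernel).start()

# Variant 1: busy-poll, which pins one CPU core at ~100% the whole time:
#   while not done.is_set():
#       pass
# Variant 2: blocking wait, which leaves the core idle until the event fires:
done.wait()
print("kernel finished")
```
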
Joined: 2 Jun 08 | Posts: 25 | Credit: 0 | RAC: 0

> Remember that the flops formula is the best the GPU can do (its peak), but very few real-world applications can issue the maximum number of instructions every cycle.

I really hope improvements can be made in the future so that more and more GPU computing becomes available. I also hope more projects will try to build GPU applications, so we can use the hardware's full potential for calculations.
