What card?

Message boards : Graphics cards (GPUs) : What card?
Profile KyleFL

Message 1933 - Posted: 30 Aug 2008, 9:49:19 UTC - in response to Message 1932.  
Last modified: 30 Aug 2008, 9:50:54 UTC

Hello

I just got a 9800GT for 110€ (the same price as the 8800GT, with the same G92 chip and clock speeds).
Running time for one WU is ~21 h with app 6.43 on a Core 2 Duo at ~2.1 GHz (E6300).
I have it running together with a SETI WU on the other core.
(Last night I stopped the SETI project to see if it had an impact on the GPUGRID WU time, but it doesn't seem to.)


Regards, Thorsten "KyleFL"
Profile Kokomiko
Message 1937 - Posted: 30 Aug 2008, 10:50:31 UTC

I've got one 8800GT and one GTX280 running. The 8800GT needs 11:39 h for one WU and I got 1987.41 credits; that's about 170 cr/h. The card runs on an AMD Phenom 9850 BE. The GTX280 needs only 7:50 h for one WU and I got 3232.06 for it; that's about 415 cr/h. Are these different WUs, or why are the credits higher?
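As a quick sanity check of those rates, here is a minimal Python sketch using only the numbers quoted in this post (runtimes converted to decimal hours):

```python
# Credits per hour for the two cards mentioned above.
runs = {
    "8800GT": (1987.41, 11 + 39 / 60),  # credits, runtime 11:39 h
    "GTX280": (3232.06, 7 + 50 / 60),   # credits, runtime  7:50 h
}

for card, (credits, hours) in runs.items():
    print(f"{card}: {credits / hours:.0f} cr/h")
# 8800GT: ~171 cr/h, GTX280: ~413 cr/h -- close to the ~170 and ~415 quoted above.
```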
Temujin
Message 1939 - Posted: 30 Aug 2008, 11:01:50 UTC - in response to Message 1937.  
Last modified: 30 Aug 2008, 11:02:46 UTC

I've got one 8800GT and one GTX280 running. The 8800GT needs 11:39 h for one WU and I got 1987.41 credits; that's about 170 cr/h. The card runs on an AMD Phenom 9850 BE. The GTX280 needs only 7:50 h for one WU and I got 3232.06 for it; that's about 415 cr/h. Are these different WUs, or why are the credits higher?
It's the new credit award with app v6.42; your 8800GT will also start getting 3232/WU once it runs v6.42 (or higher) ;-)
ExtraTerrestrial Apes
Message 1940 - Posted: 30 Aug 2008, 11:11:34 UTC

Kokomiko,

Both of your cards are rather fast. Are they overclocked? What are the shader clocks on both? Are you running Windows or Linux? Using my values as a reference, a stock GTX280 would need ~9:20 h.

MrS
Scanning for our furry friends since Jan 2002
Profile Kokomiko
Message 1941 - Posted: 30 Aug 2008, 11:11:51 UTC - in response to Message 1939.  

It's the new credit award with app v6.42; your 8800GT will also start getting 3232/WU once it runs v6.42 (or higher) ;-)


Both machines run v6.43, so what's wrong?
Profile Kokomiko
Message 1943 - Posted: 30 Aug 2008, 11:17:23 UTC - in response to Message 1940.  
Last modified: 30 Aug 2008, 11:18:57 UTC

Kokomiko,

Both of your cards are rather fast. Are they overclocked? What are the shader clocks on both? Are you running Windows or Linux? Using my values as a reference, a stock GTX280 would need ~9:20 h.

MrS


Neither card is overclocked. The 8800GT (Gigabyte, 112 shaders, 1728 MHz) runs under Vista 64-bit on a Phenom 9850 BE at 2.5 GHz; the GTX280 (XFX, 240 shaders, 1296 MHz) also runs under Vista 64-bit, on a Phenom 9950 at 2.6 GHz.
ExtraTerrestrial Apes
Message 1944 - Posted: 30 Aug 2008, 11:21:01 UTC

The last WU finished by your 8800GT was still using 6.41, hence the lower credits.

So your 8800GT is not overclocked by you, but it is clocked well above the stock 1500 MHz. Interestingly, though, it's clearly faster than my 9800GTX+ despite having fewer shaders and a lower shader clock. Which driver are you using?

MrS
Scanning for our furry friends since Jan 2002
Profile Kokomiko
Message 1945 - Posted: 30 Aug 2008, 11:25:59 UTC - in response to Message 1944.  

Which driver are you using?

MrS


The newest, 177.84.

ExtraTerrestrial Apes
Message 1946 - Posted: 30 Aug 2008, 11:33:16 UTC

Same here. The only remaining differences are Vista 64 (you) versus XP 32 (me), and my Q6600 @ 3 GHz on a P35 board versus your Phenom. But those shouldn't have such a strong effect.

MrS
Scanning for our furry friends since Jan 2002
TomaszPawel
Message 1952 - Posted: 30 Aug 2008, 14:03:43 UTC - in response to Message 1944.  

The last WU finished by your 8800GT was still using 6.41, hence the lower credits.

So your 8800GT is not overclocked by you, but it is clocked well above the stock 1500 MHz. Interestingly, though, it's clearly faster than my 9800GTX+ despite having fewer shaders and a lower shader clock. Which driver are you using?

MrS


It is well known that some manufacturers ship 3D cards with higher clocks than the reference design...
HTH
Message 1954 - Posted: 30 Aug 2008, 15:31:40 UTC - in response to Message 1921.  

So the 8800GTS 512 seems to be the most efficient card of these. However, you have to take into account that you also need a PC to run the card in and one CPU core. I'll use my main rig to give an example of what I mean by that.


I do not understand. Can't I run two SETI@home WUs plus a GPUGRID WU all at once on my dual-core Pentium D 920 (and NVIDIA card)?

Henri.
TomaszPawel
Message 1955 - Posted: 30 Aug 2008, 16:25:46 UTC - in response to Message 1954.  
Last modified: 30 Aug 2008, 16:28:36 UTC

So the 8800GTS 512 seems to be the most efficient card of these. However, you have to take into account that you also need a PC to run the card in and one CPU core. I'll use my main rig to give an example of what I mean by that.


I do not understand. Can't I run two SETI@home WUs plus a GPUGRID WU all at once on my dual-core Pentium D 920 (and NVIDIA card)?

Henri.


No, you can't.

For example, I have a Q6600 and an 8800GTS, and I crunch Rosetta@home and PS3GRID for TomaszPawelTeam :)

So Rosetta runs on 3 cores and PS3GRID on 1 core.

On your computer, SETI@home would run on one core and PS3GRID on the other.

I know it seems strange to me too, and it shows that the GPU is very powerful, but it needs help from the CPU to crunch... So one core is always given up to each GPU...
P.S.
If I had more $$$ I would buy a GTX280... but I don't, so I crunch on my 8800GTS 512... If you have the $$$ :) you should buy a GTX280 :)
ExtraTerrestrial Apes
Message 1956 - Posted: 30 Aug 2008, 16:36:40 UTC - in response to Message 1952.  

It is well known that some manufacturers ship 3D cards with higher clocks than the reference design...


Sure. The point is that he's about 30 minutes faster than me with 112 shaders at 1.73 GHz, whereas I have 128 shaders at 1.83 GHz. That's a difference worth investigating. My prime candidate would be the Vista / Vista 64 driver.

And, yes, currently you need one CPU core per GPU-WU. It's not doing any actual work, just keeping the GPU busy (sort of).

MrS
Scanning for our furry friends since Jan 2002
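To illustrate what MrS describes here, that the core isn't really computing, just keeping the GPU fed, the following is a rough, purely illustrative Python sketch. The gpu and workunit objects and their methods are hypothetical stand-ins, not the actual GPUGRID or CUDA API:

```python
def feed_gpu(gpu, workunit):
    """Illustrative only: why a GPU WU appears to 'use' a full CPU core.

    The host thread launches each kernel and then spin-waits (polls in a
    tight loop) until the GPU finishes, instead of sleeping. The OS therefore
    reports the core as 100% busy even though it does no real work.
    The gpu/workunit objects are hypothetical stand-ins.
    """
    for step in workunit.steps():
        gpu.launch_kernel(step)            # hand the next chunk of work to the GPU
        while not gpu.kernel_finished():   # busy-wait: burns the CPU core ...
            pass                           # ... but keeps GPU latency minimal
```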
Profile Kokomiko
Message 1957 - Posted: 30 Aug 2008, 16:49:08 UTC - in response to Message 1956.  

It is well known that some manufacturers ship 3D cards with higher clocks than the reference design...


Sure. The point is that he's about 30 minutes faster than me with 112 shaders at 1.73 GHz, whereas I have 128 shaders at 1.83 GHz. That's a difference worth investigating. My prime candidate would be the Vista / Vista 64 driver.


My wife has an MSI 8800GT running under XP 32-bit on a Phenom 9850 BE; its shaders run at 1674 MHz and she needs 13:40 h for one WU. That is also faster than the stock frequency, but much slower than my card under Vista 64.
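Shader clock alone doesn't account for that gap. A small Python sketch using only the numbers in this thread, and assuming runtime scales inversely with shader clock:

```python
# Same GPU model (8800GT), different OS: how much does shader clock alone explain?
vista_hours, vista_mhz = 11 + 39 / 60, 1728  # Kokomiko's card under Vista 64
xp_hours, xp_mhz = 13 + 40 / 60, 1674        # the MSI card above, under XP 32

expected_xp = vista_hours * vista_mhz / xp_mhz  # assuming runtime ~ 1 / shader clock
print(f"expected from clock alone: {expected_xp:.2f} h, actual: {xp_hours:.2f} h")
print(f"slowdown not explained by clock: {xp_hours / expected_xp - 1:.0%}")  # ~14%
```

The leftover ~14% is consistent with the XP-versus-Vista driver difference suspected above.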

ExtraTerrestrial Apes
Message 1959 - Posted: 30 Aug 2008, 17:31:48 UTC

GDF said that moving to the CUDA 2.0 compilers caused a 20% performance hit under Win XP, which may be improved by future drivers. The Vista driver is different from the XP one, so it seems the Vista driver took less than a 20% hit.

MrS
Scanning for our furry friends since Jan 2002
Profile Krunchin-Keith [USA]
Message 1960 - Posted: 30 Aug 2008, 18:36:08 UTC - in response to Message 1932.  

"Flops" are misleading, I think the number of stream processors plays a bigger role, and frankly, I was never a big fan of SLIs.



Well.. no. Flops are calculated as "number of shaders" * "shader clock" * "instructions per clock per shader". The latter could be 2 (one MADD) or 3 (one MADD + one MUL), but it's constant across all G80/G90/GT200 chips. So Flops are a much better performance measure than "number of shaders", because they also take the frequency into account.

And SLI.. yeah, just forget it for games. And for folding you'd have to disable it anyway.

MrS

Remember, the Flops formula gives the best the GPU can do (its peak), but very few real-world applications can issue the maximum number of instructions every cycle, unless you just have an application adding and multiplying useless numbers to maintain the maximum.
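To make the formula concrete, here is a small Python sketch computing the theoretical peaks for the cards discussed in this thread. The 9800GTX+ entry uses the 1.83 GHz MrS mentions, and, as noted above, real-world throughput will be lower than these peaks:

```python
# Peak Flops = number of shaders * shader clock * instructions per clock per shader.
OPS_PER_CLOCK = 3  # one MADD + one MUL, as in the quote above

cards = {
    "8800GT (stock, 1500 MHz)":    (112, 1.500),
    "8800GT (Gigabyte, 1728 MHz)": (112, 1.728),
    "9800GTX+ (1.83 GHz)":         (128, 1.830),
    "GTX280 (1296 MHz)":           (240, 1.296),
}

for name, (shaders, clock_ghz) in cards.items():
    peak = shaders * clock_ghz * OPS_PER_CLOCK
    print(f"{name}: ~{peak:.0f} GFLOPS theoretical peak")
```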
HTH
Message 1963 - Posted: 30 Aug 2008, 19:04:21 UTC

Thanks for the info once again, guys!

It's a bit sad that one CPU core is tied up whenever the GPU is used. Can they change this someday?

Henri.
Profile Stefan Ledwina
Message 1965 - Posted: 30 Aug 2008, 19:17:26 UTC
Last modified: 30 Aug 2008, 19:18:21 UTC

It is not wasted. If I understood it right, the CPU is needed to feed the GPU with data...
It's the same with Folding@home on the GPU, but they only need about 5% of one core, and they are planning to distribute an application that uses only the GPU, without needing the CPU, in the future.

Don't know if this would also be possible with the application here on PS3GRID...

pixelicious.at - my little photoblog
ExtraTerrestrial Apes
Message 1967 - Posted: 30 Aug 2008, 19:58:11 UTC - in response to Message 1960.  
Last modified: 30 Aug 2008, 19:59:59 UTC

Remember, the Flops formula gives the best the GPU can do (its peak), but very few real-world applications can issue the maximum number of instructions every cycle


Yes, we're only calculating the theoretical maximum Flops here; the real performance is going to be lower. This "lower" is basically the same factor for all G8x/G9x chips, but GT200 received a tweaked shader core and could therefore show higher real-world GPU-Grid performance at the same Flops rating. That's why I asked about the GTX260 :)

Edit: and regarding CPU usage, F@H also needed 100% of one core in GPU1. The current GPU2 client seems tremendously improved. Maybe whatever F@H did could also help GPU-Grid?

MrS
Scanning for our furry friends since Jan 2002
Robinski
Message 1969 - Posted: 1 Sep 2008, 11:58:12 UTC - in response to Message 1967.  

Remember, the Flops formula gives the best the GPU can do (its peak), but very few real-world applications can issue the maximum number of instructions every cycle


Yes, we're only calculating the theoretical maximum Flops here; the real performance is going to be lower. This "lower" is basically the same factor for all G8x/G9x chips, but GT200 received a tweaked shader core and could therefore show higher real-world GPU-Grid performance at the same Flops rating. That's why I asked about the GTX260 :)

Edit: and regarding CPU usage, F@H also needed 100% of one core in GPU1. The current GPU2 client seems tremendously improved. Maybe whatever F@H did could also help GPU-Grid?

MrS


I really hope improvements can be made in the future so that more and more GPU computing becomes available. I also hope more projects will try to build GPU applications so we can use the hardware's full potential for calculations.