Message boards :
Graphics cards (GPUs) :
Various GPU's Performance
| Author | Message |
|---|---|
|
Send message Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0 |
"Note that I do run 25/7" *watching you enviously* "... of course I suspended GPU Grid there because of the high CPU use from the 6.55 Application." I wouldn't say "of course". As we concluded somewhere here in this thread, a 9800GT delivers almost 5k credits/day; with the recent drop due to the 1888 credit WUs, maybe 4k. I don't know about your Q9300, but my Q6600@3GHz can certainly not make up for such an amount with just half a core ;) "New thought ... I wonder if the high pull of the 6.55 application is affecting overall pull because the CPU, though running full speed, is not heavily loaded." Yes, it does affect power draw. Although the load is 100%, it's less stressful than real calculations (GPU-Grid currently just polls the GPU). When I switched from 4+0 to 4+1 I saw a slight drop in my CPU temperature. MrS Scanning for our furry friends since Jan 2002 |
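The 4+0 vs. 4+1 trade-off above can be put into rough numbers. A quick sketch: the GPU figure (~4000 credits/day for a 9800GT) is the thread's estimate, while the per-core CPU rate is a hypothetical placeholder, since CPU credit rates vary widely by project.

```python
# Is it worth sacrificing half a CPU core to feed the GPU?
# gpu_credits_per_day comes from the thread's estimate; the per-core
# CPU rate below is a hypothetical placeholder, not a measured value.

gpu_credits_per_day = 4000        # 9800GT on GPU-Grid (thread estimate)
cpu_credits_per_core_day = 500    # hypothetical per-core CPU rate
half_core_cost = 0.5 * cpu_credits_per_core_day

net_gain = gpu_credits_per_day - half_core_cost
print(f"net gain of 4+1 over 4+0: {net_gain:.0f} credits/day")  # 3750
```

Even with a generous guess for the CPU rate, the half core lost is small next to what the GPU brings in.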
Paul D. Buck Send message Joined: 9 Jun 08 Posts: 1050 Credit: 37,321,185 RAC: 0
|
I wouldn't say "of course". As we concluded somewhere here in this thread a 9800GT delivers almost 5k credits/day, with the recent drop due to the 1888 credit WUs maybe 4k. I don't know about your Q9300, but my Q6600@3GHz can certainly not make up for such an amount with just a half core If I were all about credits and credits alone ... well, then my project mix would be a lot different and I would of course (to steal your words) be running all GPUs 24/7 ... I cannot display my signature here because of the problem with the preference page I noted in the appropriate forum lo these many days ago ... but, as you can see were you to click through, I have a few projects undergoing work ... if you look at this you can see that I dump some work into almost all live projects going ... My only restriction is that I tend not to do pure Alpha test projects that will never get out of alpha state. That is a TEND ... as is obvious ... Anyway, my year's goals include getting the bulk of the production projects and a few Beta projects above the totals I built up for SaH when there were few projects and it seemed like the thing to do. Now I have choices, and I tend to lean towards projects that are actually doing SCIENCE ... not that the exploration of the universe and looking for others is not something that needs to be done, it does, I just don't think it deserves so much computing power ... just me ... SO, I have this long term goal I have been working on for YEARS and have only managed 3 projects to this point ... so ... it is painful to surrender some power when I could be using it to do other things ... plenty of time to rev up GPU Grid when the next generation application comes out ... so ... WCG, Cosmology, LHC, ABC, Genetic Life, Malaria Control, SIMAP, Prime Grid, Mind Modeling, GPU Grid and AI are all targets to get above 388,000 CS ... or as many as possible ... besides, the 9800 only does one, at best two (of the short tasks) a day ...
and with so much trouble getting them queued up ... I only have to babysit one system instead of two ... And, as I have said before, Bio stuff does not really lift my skirts ... something about cutting up frogs biased me forever ... so, I lean towards Physics as my PREFERENCE ... Not that I am not willing to support frog killer types when that is the choice ... |
|
Send message Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0 |
I guess I dropped out of seti a long time ago for similar reasons as you did. I also like to support other projects, preferably from the realm of physics. And I also like to contribute as much as I can, without spending too much. That means I want to make good use of my hardware, i.e. only machines with a 64 Bit OS run ABC etc. And this means I'd always want to use that 400 GFlops coprocessor, even if that means losing 10 GFlops somewhere else. And about the babysitting.. well, if the GPU runs dry I wouldn't lose anything compared to not running GPU-Grid in the first place. But that's just me (who, by the way, absolutely doesn't care about credit goals). So if what you're doing is what you want, by all means go for it :) MrS Scanning for our furry friends since Jan 2002 |
Paul D. Buck Send message Joined: 9 Jun 08 Posts: 1050 Credit: 37,321,185 RAC: 0
|
"I guess I dropped out of seti a long time ago for similar reasons as you did. I also like to support other projects, preferably from the realm of physics. And I also like to contribute as much as I can, without spending too much. That means I want to make good use of my hardware, i.e. only machines with a 64 Bit OS run ABC etc. And this means I'd always want to use that 400 GFlops coprocessor, even if that means losing 10 GFlops somewhere else. And about the babysitting.. well, if the GPU runs dry I wouldn't lose anything compared to not running GPU-Grid in the first place." I knew you understood all along ... :) But it is nice to "talk" about it ... I kinda do want to keep that GPU busy ... but, it has been idle so long that, well, what does a couple more days mean ... :) As to the rest, well, home bound as I am, I have to find something to amuse myself ... were there another project that used the GPU, be sure that it would be turning and burning as we speak ... and when that next project comes out, be assured that even the 8500 card I have will likely be mounted again until I can afford to go out and buy again ... :) That aside, my GTX 280 is more than pulling in the credits to make me smile ... Even better, I took a nap (another side effect of my illness is I seem to only get to sleep ~3 hours at a pop) and was down to my last task ... lo and behold, it had finished that one and had sneaked out another and was working on it when I woke. When I reported the one in work when I went to bed I got two more ... almost as if it was working right ... :) |
|
Send message Joined: 21 Oct 08 Posts: 144 Credit: 2,973,555 RAC: 0
|
Sorry for a delayed response ... have been stuck at the relatives' for the holidays with poor internet access and just got back home. Anyway, I'd generally agree with what you have said, but since I did say "really" underpowered systems I thought I'd give an example of mine that may make some sense. Specifically, I have a second system (a freebie from my father's company, which upgraded to a quad-core) which is a woefully underpowered stock HP desktop (Athlon X2 3800+ with a 250W PSU). I use it as a machine for my kids (educational software mostly) and to web browse, etc. when I can't get my wife off our main PC. Upgrading the PSU (or just about any other component) makes no sense given the "budget construction" of such a desktop. With the 250W PSU there isn't much headroom, so the extra 30-35 watts that a 9600 GSO could use very well might exceed what such a system can handle. A similar scenario might also be evident with some of the media/home theater PC builds. Anyway, I just installed the OC'd 9500GT and will report back with some run times (I'd estimate around 2 days based on the few other 32 shader cards I've noticed running here). As for the power of the OC, I think I have some evidence that suggests shader clock may be more important than you are suggesting. Specifically, my 9600 GSO (96 shaders - GPUGRID reported shader speed of 1674000) is flatly faster than an FX3700 Quadro that I got to test in December (112 shaders - GPUGRID reported shader speed of 1242000), with about a 2 hour per workunit advantage. I wonder if anyone has been able to do comparisons on their cards at different shader speeds to see how much of a difference can be obtained? Last, the "new" 9600 GSO (with 512mb rather than 384mb or 768mb) is actually a scaled back 48 shader card. Performance should be just shy of the 9600GT. I would guess that the card is lower powered, but I have not seen any hard figures yet on the generalized wattage.
If comparable to the power usage of the 9500GT, then this would of course nullify my arguments regarding the slower card. |
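The 9600 GSO vs. FX3700 comparison above can be sketched with a simple throughput proxy (shader count times shader clock). This is a rough model that ignores memory bandwidth and architectural differences; the clocks are the GPUGRID-reported values quoted above.

```python
def shader_throughput(shaders, shader_clock_mhz):
    """Relative throughput proxy: shader count times shader clock in GHz.
    A simplification -- ignores memory bandwidth and architecture."""
    return shaders * shader_clock_mhz / 1000.0

gso_9600 = shader_throughput(96, 1674)   # 9600 GSO, factory OC
fx3700 = shader_throughput(112, 1242)    # Quadro FX3700

print(f"9600 GSO: {gso_9600:.1f}")                    # 160.7
print(f"FX3700:   {fx3700:.1f}")                      # 139.1
print(f"GSO advantage: {gso_9600 / fx3700 - 1:.0%}")  # 16%
```

The proxy agrees with the observation: despite having fewer shaders, the higher-clocked GSO comes out roughly 16% ahead.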
K1atOdessa Send message Joined: 25 Feb 08 Posts: 249 Credit: 444,646,963 RAC: 0
|
"Anyway, I just installed the OC'd 9500GT and will report back with some run times (I'd estimate around 2 days based on the few other 32 shader cards I've noticed running here)." I have a factory OC'd 9500GT (700 core, 1750 shader, 800 memory <no OC>). If just that card works on a WU, it will take about 2 days 6 hours or so to work on the big 3232 WU's. The 24xx WU's take < 2 days. I do have two 8800GT's as well, so if I close BOINC they can take over working on the WU the 9500 was previously working on. This will lower the overall time of a single WU, but my numbers above were for a WU processed strictly with the 9500. |
|
Send message Joined: 21 Oct 08 Posts: 144 Credit: 2,973,555 RAC: 0
|
11 hours into its first workunit with a bit over 27% done. That equates to about 40 - 41 hours for the full unit (I'd guess it is a 24xx one). It will be an interesting comparison to your card since the only difference between the two (assuming like me that you are not heavily gaming, etc.) is the memory clock (mine is 700 core, 1750 shader, 2000 memory...all factory OC). |
|
Send message Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0 |
Hi Scott, it's not that shader clock wouldn't matter, but the 9600GT is clocked quite high already and the GSO is.. at least not slow. The 9500GT is produced in the same 65 nm process, so you can't expect any miracles from it. Maybe 1.9 GHz is possible, but I doubt much more. That gives you an advantage of 1.9/1.67 = 1.14 -> 14%. To make up for the lack of shaders you'd need 100 to 200%. There's no way an OC can give you this; both numbers are in completely different leagues. In your example the difference in the number of shaders is rather small (112/96 = 1.17 -> 17%). That's why the clock speed difference of 1.67/1.24 = 1.34 -> 34% does matter. And to consider your example: an X2 3800+ with a 250W PSU. The CPU is specified at 95W and runs at unnecessarily high voltages. If you lower the voltage (any of them will do 1.2V, most 1.1V) you should get the entire system below 100W. If you're unlucky and got one which runs at the highest stock voltage and really draws 95W, then your system should draw something in the range of 130 - 150W without a GPU. If we assume the worst (150W) and add an 80W GPU we're at 230W - certainly not a comfortable load, even if the PSU is a quality unit and can handle it. We don't want to run our PSUs at more than 80% load, as this decreases the life span. Ideally the load should sit around 50%, not much more than 60%. 80% load is just what we'd get in this example by going for a 9500GT instead of a 9600GT / old GSO (250W * 0.8 = 200W). Still.. I'm not really convinced that this is a good idea. A system which can't run an 80W card will still be pushed to its limits by a 50W card. A difference of 30W is just 12% of 250W.. that's reducing PSU load from 92 to 80%.. I can see your point, but still I wouldn't want to give up that much performance by going with 32 shaders instead of 96. I'd rather use the somewhat bigger card and would not run any CPU projects. That should get CPU power consumption below 30W and the PSU would be fine.
Of course the PSU would have to be able to handle an occasional spike in CPU usage. Oh, and don't forget: the cards draw less power in GPU-Grid than in games, so the 80 / 50W figures are too high.. which makes the difference between them smaller ;) MrS Scanning for our furry friends since Jan 2002 |
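The PSU headroom argument above, as a minimal sketch. The wattages are the thread's estimates (worst-case 150 W base system, ~80 W for a 9600 GSO/GT class card, ~50 W for a 9500GT), not measurements.

```python
def psu_load(system_w, gpu_w, psu_rating_w):
    """Fraction of the PSU's rating drawn by the base system plus the GPU."""
    return (system_w + gpu_w) / psu_rating_w

# Worst case from the discussion: 150 W base system on a 250 W PSU.
load_big_card = psu_load(150, 80, 250)  # 0.92 -> well past the 80% comfort limit
load_9500gt = psu_load(150, 50, 250)    # 0.80 -> right at the limit

print(f"with 80 W card: {load_big_card:.0%}")  # 92%
print(f"with 50 W card: {load_9500gt:.0%}")    # 80%
```

Either card pushes a 250 W unit past the comfortable 50-60% zone, which is the point being made: the 30 W saved by the slower card buys only 12 percentage points of headroom.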
|
Send message Joined: 21 Oct 08 Posts: 144 Credit: 2,973,555 RAC: 0
|
Thanks for the reply MrS...these types of conversations are the ones that I really enjoy on the forum. :)
Not running any CPU projects is a good point for power savings. Still, it is those spikes that worry me most, especially on the stock power supply (I am really doubtful that HP in particular went with any real quality on the PS).
Unfortunately, even those educational games for the kids have some rather taxing video activity at times, so when they are on the GPU wattage can spike fairly high. BTW, you keep referring to the 64 shader 9600GT, but I thought that the max wattage for it was 100 - 105? |
|
Send message Joined: 21 Oct 08 Posts: 144 Credit: 2,973,555 RAC: 0
|
Just to update...the 9500GT completed the first workunit (an 18xx one) in about 40.72 hours. |
K1atOdessa Send message Joined: 25 Feb 08 Posts: 249 Credit: 444,646,963 RAC: 0
|
That sounds pretty much dead on with my experience. My OC'd 9500GT is about 3.3x slower than my 8800GT's (based on several WU's completed solely with the 9500). I've also seen about 40 hr runtimes on the 18xx credit WU's on the 9500GT, which equates to just over 12 hours on an 8800GT (just like I have experienced). This is good because it shows you can process two of these (small) WU's in 48 hours. However, while the 24xx WU's only took me a couple hours longer, if you get a 29xx or 3232 size WU, both will not be able to finish in 96 hours (4-day deadline). I advise you keep an eye on this and abort any that are nowhere close to finishing on time. If it will be within 6-8 hours or so late based on your estimates, then you might as well let it finish, because only a 260 or 280 card will receive it (after the 96 hour deadline passes) and be able to process / return it that fast. |
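The speed ratio above is easy to sanity-check against the runtimes quoted in the thread: ~40.7 h for an 18xx WU on the 9500GT versus just over 12 h on an 8800GT.

```python
def slowdown(slow_hours, fast_hours):
    """How many times slower the first card is than the second,
    given per-WU runtimes in hours."""
    return slow_hours / fast_hours

# 40.72 h is the 9500GT runtime reported earlier in the thread;
# 12.3 h is an assumed value for "just over 12 hours" on the 8800GT.
ratio = slowdown(40.72, 12.3)
print(f"9500GT is ~{ratio:.1f}x slower than an 8800GT")  # ~3.3x
```

The ~3.3x figure falls straight out of the two runtimes, so the two observations are consistent.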
|
Send message Joined: 21 Oct 08 Posts: 144 Credit: 2,973,555 RAC: 0
|
Since the run times are nearly identical, it also looks like memory clock is irrelevant for the GPUGRID apps... |
Paul D. Buck Send message Joined: 9 Jun 08 Posts: 1050 Credit: 37,321,185 RAC: 0
|
You may want to keep an eye on those, and if they get re-issued send a note to your wingman to monitor the situation so that they don't process it halfway through and have it canceled by the server ... |
K1atOdessa Send message Joined: 25 Feb 08 Posts: 249 Credit: 444,646,963 RAC: 0
|
I've read/heard that the memory clock does not make much difference as you said. I have overclocked the memory slightly (4%) on my 8800GT's, but given the very small increase any benefit might not be apparent anyhow. |
|
Send message Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0 |
Since the run times are nearly identical, it also looks like memory clock is irrelevant for the GPUGRID apps... Just a quick reply.. see here plus the next couple of posts. MrS Scanning for our furry friends since Jan 2002 |
|
Send message Joined: 26 Oct 08 Posts: 6 Credit: 443,525 RAC: 0
|
Well, after deciding a 260GTX Core 216 would be my best bet (price to performance wise), a recent post on sale prices (thx JAMC) led to a rather hasty purchase of an overclocked EVGA 280GTX (at only $211)... I don't think I'll be able to beat that price to performance ratio for a while. |
|
Send message Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0 |
Hi Scott, sorry for the late response, work got me again.. "Unfortunately, even those educational games for the kids have some rather taxing video activity at times, so when they are on the GPU wattage can spike fairly high." Ah, now that's a different point. Games have the bad habit of using 100% CPU (at least of one core) and 100% GPU, no matter if they need them or not. So in that case you'd be drawing considerably more than during GPU-Grid crunching, especially if you're kind to the CPU. When I still had the stock cooler on my 9800GTX+ I could hear the air movement due to the fan when GPU-Grid was running, but when I launched 3DMark the fan started to scream.. so that's a different kind of load. "BTW, you keep referring to the 64 shader 9600GT, but I thought that the max wattage for it was 100 - 105?" Not sure where I got my numbers from, I think it was in the beginning of this thread. I use 9600GT and 9600GSO (old version), and 64 and 96 shaders, almost synonymously for the faster cards. That goes along with "somewhat more than 80W". Taking a look here I find a TDP of 84W for the 9600GSO and 95W for the 9600GT. So, yes, the 9600GT draws a bit more (and is a bit slower) than the old 9600GSO, but not that much. MrS Scanning for our furry friends since Jan 2002 |
©2025 Universitat Pompeu Fabra