Message boards : Graphics cards (GPUs) : Anyone tried the superclocked GT240?
Joined: 24 Dec 08 | Posts: 738 | Credit: 200,909,904 | RAC: 0
I notice EVGA has a superclocked version of the GT240 out. About the only difference seems to be the memory clocks and the price tag. Has anybody tried one of these? I was looking at replacing a couple of old GTX260s (65nm) with, say, 3 of these. Not as fast, I know, with only 96 CUDA cores compared to the 216 of the GTX260, but the GTX260 is getting rather hard to find these days. Also, the NVIDIA drivers seem to have issues with more than 3 GPUs under Win7.

BOINC blog
Joined: 4 Sep 08 | Posts: 44 | Credit: 3,685,033 | RAC: 0
I don't know more about the GT240 than what I've read in articles, but I think it must be much slower than my old 8800 GT, which is the companion to my GTX260 (55nm). Crunching a "normal" WU with the 6.03 app takes about 13 to 14 hours on the 8800 GT, compared to 5.5 to 6.5 hours on my GTX260. It's up to you whether that speed is enough.
Beyond | Joined: 23 Nov 08 | Posts: 1112 | Credit: 6,162,416,256 | RAC: 0
Here are my 2 GT 240 cards that are OCed a bit:

625MHz in Win7-64: http://www.gpugrid.net/results.php?hostid=55407&offset=0&show_names=1&state=0
600MHz in WinXP-32: http://www.gpugrid.net/results.php?hostid=63615&offset=0&show_names=1&state=0

That might give you an idea of the speed. Both are the GDDR5 version, which is faster than GDDR3. Notice that WinXP is faster even though the card is clocked lower. Win7 is just slower in GPUGRID than WinXP. Here's one of the threads about the problem: http://www.gpugrid.net/forum_thread.php?id=1729#14527

One thing is that Win7 runs the GPU at a lower percentage of utilization for some reason.
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
The only thing EVGA overclocked was the memory: http://www.legitreviews.com/article/1144/1/ I would not call 1800MHz superclocked! 1700MHz is normal, and all my GT240 cards with GDDR5 are clocked at 1800MHz or more. My GPUs are clocked to between 610MHz and 640MHz (depending on the card) and the shaders are linked (about 1490 to 1500MHz). At these speeds they operate slightly better than half the speed of a GTX260, so three would do about 55% more work than one GTX260 and use about the same power, though they might cost slightly more.

The setup is also important. A good CPU backing the GPU makes some difference. On my i7 I only found a 3 or 4% difference between GDDR5 and GDDR3 (when both are overclocked), but on other systems the difference was bigger. I think a good CPU hides some of the shortcomings of GDDR3. By the way, there is DDR3 as well as GDDR3! If anyone gets the DDR3 card it will be noticeably slower than a GDDR5 card.

Two GT240s in an i7 (W7 x64) (GPU 612MHz, GDDR5 1800MHz, Shaders 1491MHz): RAC 28K.
GT240 (GPU 615, GDDR5 1824, Shaders 1498) on a quad Opteron @ 2.1GHz: tasks take around 11h, but a few finish in around 9h 30min.
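A minimal sketch of that arithmetic, for illustration only: the per-card ratio (slightly better than half a GTX260) is read off the post above, and the wattages are assumed typical board powers, not measurements of these specific cards.

```python
# Back-of-the-envelope check of "three GT240s ~ 55% more work than one GTX260".
# The 0.517 per-card ratio is taken from the post above; the wattages are
# assumed typical board powers, not measured values.

GT240_RELATIVE = 0.517    # one overclocked GT240, relative to a GTX260
GT240_WATTS    = 69       # assumed board power of a GDDR5 GT240
GTX260_WATTS   = 182      # assumed board power of a 65nm GTX260

cards = 3
throughput = cards * GT240_RELATIVE    # work relative to one GTX260
power      = cards * GT240_WATTS       # total board power in watts

print(f"{cards} x GT240 ~ {throughput:.2f}x a GTX260 "
      f"({(throughput - 1) * 100:.0f}% more work)")
print(f"Estimated power: {power} W vs {GTX260_WATTS} W for one GTX260")
```

With those assumptions the three cards land at roughly 1.55x the throughput for broadly similar total power, which is the comparison being made above.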
Joined: 16 Aug 08 | Posts: 87 | Credit: 1,248,879,715 | RAC: 0
The MSI GT240 I picked up for $60 after rebate a few weeks ago is also factory overclocked at the same 550/1800. I have not tried to overclock it further. The GT240 is compute capability 1.2, so it gets the 40% architecture bonus over compute capability 1.1 cards like the 8800 GT. They are good little cards, probably the best value for GPUGRID. As has been mentioned, make sure you get the GDDR5 version and avoid the GDDR3 version.

Also, the cards lack a PCIe power plug. This is both a plus and a minus. The plus is you don't need to wire them for power; the bad news is they get all their power from the motherboard. Be careful that your motherboard can feed multiple PCIe slots at near full load. I seem to recall the PCIe spec saying you can draw 75 watts from a single PCIe slot and 200 W in total. You should be OK with three if your board is built to spec, your power supply has enough capacity on +12V1, and my memory is correct.
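A quick sketch of that slot-power reasoning, purely as an illustration: the 75 W per-slot and ~200 W aggregate figures are the ones recalled above, and the per-card draw is an assumed worst-case (TDP-like) number rather than a measurement; real crunching draw is usually lower.

```python
# Sanity check of the slot-power budget described above. The 75 W per-slot
# and ~200 W aggregate figures are those recalled in the post; the 69 W per
# card is an assumed worst-case draw for a GDDR5 GT240.

PCIE_SLOT_LIMIT_W = 75     # per-slot limit recalled from the PCIe spec
BOARD_TOTAL_W     = 200    # aggregate slot power recalled from the spec
GT240_DRAW_W      = 69     # assumed worst-case draw per GT240

cards = 3
print(f"Per slot:  {GT240_DRAW_W} W of {PCIE_SLOT_LIMIT_W} W allowed")
print(f"Aggregate: {cards * GT240_DRAW_W} W against ~{BOARD_TOTAL_W} W "
      f"(actual draw while crunching is usually well below worst case)")
```

Each slot stays comfortably inside the 75 W limit; the aggregate worst case is borderline against the recalled 200 W figure, which is why the board-built-to-spec caveat matters.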
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
> but I think it must be much slower than my old 8800 GT

It's much faster, due to the newer architecture (CUDA compute capability 1.2).

MrS
Scanning for our furry friends since Jan 2002
Joined: 5 Jan 09 | Posts: 670 | Credit: 2,498,095,550 | RAC: 0
> As has been mentioned, make sure you get the GDDR5 version and avoid the GDDR3 version.

Why is everyone against the DDR3 version of this card for GPUGRID? Take a look at my machines: 3 are DDR3 and 1 is GDDR5 (quad CPU). 2 are OCed: one DDR3 at 630 core, 740 memory and 1580 shaders per GPU-Z (6300 Core2), and the same for the quad with GDDR5 except the memory is 2000 per GPU-Z. The other 2 are stock and DDR3. Don't take their RAC as proof of anything, as they're all used differently.

Radio Caroline, the world's most famous offshore pirate radio station. Great music since April 1964.
Support Radio Caroline Team - Radio Caroline
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Any GT240 is a good card for crunching here, but it comes down to their relative value. If you can get a GT240 DDR3 for 20 or 30% less than a GT240 GDDR5 then it's a good deal. If there is not much price difference then the GDDR5 card would be the one to go for. Some of the cards are better designed than others, so that has to be considered too. The best designed card I have seen uses DDR3; it is a Gigabyte GBG016. It has a large fan compared to many cards and the GPU is steady at 640MHz. It shipped with a factory overclock of 600MHz (up from 550). I have not seen another GT240 reach that GPU speed, but I am sure a few people will have managed it. The Gigabyte's factory overclock offsets much of the performance loss it would have from not using GDDR5, especially when compared to a natively clocked GDDR5 card. That said, it is slightly slower than my overclocked GT240s with GDDR5.

It is presently very difficult to compare cards when they are in different systems crunching one of many different tasks, and I think performance varies from WU to WU anyway. All in, I think we are only talking about a 15% performance difference between native DDR3 and GDDR5 cards. Even a highly overclocked GDDR5 card might only be 30% faster than a natively clocked DDR3 card. In my opinion the top GDDR5 cards, which are 30% more expensive, are not worth the money. I picked up my last GT240 for £50 - a real bargain given that my first card was £65 and used DDR3. It has GDDR5 and overclocks reasonably well (GPU 615MHz, GDDR5 1800MHz, Shaders 1498MHz).

Perhaps someday when I am really bored I will strip a GT240 and put silly heatsinks and fans on it (rather than the cheap tin that comes with most of them) just to see what can be got out of the card. I think that just using better adhesive and putting small heatsinks on the RAM could make a difference. In the past I found that positioning a 12cm case fan to blow directly onto a card made a lot of difference; temps dropped by over 10°C. When I am overclocking in this way I try to get the temps down and then raise the clocks until I come close to (but stay under) the previous temps, then tweak.

For GPUGrid a good margin of error is required. There is no point overclocking a card by 15% only to find that 10% of tasks fail; you would be better off overclocking by only 10% and getting no failures.
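To illustrate that last trade-off, here is a minimal sketch; the clock gains and failure rates are the hypothetical figures from the post, not measured data.

```python
# Effective throughput when an overclock causes some tasks to fail.
# The 15%/10% and 10%/0% figures are the hypothetical ones quoted above.

def effective_speedup(overclock_pct: float, failure_rate: float) -> float:
    """Throughput of valid results relative to stock clocks."""
    return (1 + overclock_pct / 100) * (1 - failure_rate)

print(f"+15% clock, 10% failures -> {effective_speedup(15, 0.10):.3f}x")
print(f"+10% clock,  0% failures -> {effective_speedup(10, 0.00):.3f}x")
```

With those figures the gentler overclock comes out ahead (about 1.10x against 1.04x), even before counting the time wasted computing the failed tasks.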
Beyond | Joined: 23 Nov 08 | Posts: 1112 | Credit: 6,162,416,256 | RAC: 0
> Why is everyone against the DDR3 version of this card for GPUGRID?

It's just that the GDDR5 version is faster. Simply look at similar WUs on your machines and you can see that your GDDR5 version is noticeably quicker at completing them.
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
http://www.gigabyte.com.tw/Products/VGA/Products_Overview.aspx?ProductID=3183 - a superclocked GTX260 with a 96MHz faster GPU core; it performs as well as a GTX275! I guess they have stopped manufacturing GTX275 and GTX285 GPU cores, so there will be no superclocked versions of those? If it uses about the same power as a standard GTX260, then in terms of power usage versus performance it would be roughly on a par with a GT240 (but crunch around 2.5 times as many tasks)! Pity none of the manufacturers managed to use GDDR5 (to reduce the heat and power usage) - GDDR5 prices are no longer too high. Such cards might have been useful over the last 6 months, as would GTX275 and GTX285 versions!
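As a hedged illustration of that power-versus-performance remark: the 2.5x throughput ratio is taken from the post, while both wattages are assumed typical board powers rather than figures for these particular cards.

```python
# Rough performance-per-watt comparison. The 2.5x throughput ratio comes from
# the post above; both wattages are assumed typical board powers.

GT240_WATTS      = 69
GTX260_WATTS     = 182      # 65nm GTX260; a superclocked 55nm card may differ
THROUGHPUT_RATIO = 2.5      # GTX260 work rate relative to a GT240

gt240_per_watt  = 1.0 / GT240_WATTS
gtx260_per_watt = THROUGHPUT_RATIO / GTX260_WATTS

print(f"GT240:  {gt240_per_watt:.4f} (normalised work per watt)")
print(f"GTX260: {gtx260_per_watt:.4f} (normalised work per watt)")
```

Under those assumptions the two come out roughly equal (within a few percent), which is what "on a par" suggests here.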
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
GT200 doesn't know what GDDR5 is ;)

MrS
Scanning for our furry friends since Jan 2002
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
I thought GDDR5 might have been usable if it could mimic GDDR3 with 25% higher clocks, so the card would benefit from reduced heat and power usage. But on second thought, even if GT200 could be introduced to GDDR5 (perhaps via GT215), the limited heat and power benefits might not be worth the effort - the higher GDDR5 latency at only 125% of GDDR3 rates would actually reduce performance. The latency is only outweighed at much higher rates.
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
NVIDIA would have to redesign the memory controller to support GDDR5. That's not something GPU manufacturers generally do, since they normally update their chips quite frequently anyway (a refresh every 6 months, a new design every year... slowing down recently). If they update the silicon and card design, they may as well update the architecture, which, for NVIDIA, is Fermi. And remember: Fermi was not intended to be hot, slow and late ;)

MrS
Scanning for our furry friends since Jan 2002
Joined: 24 Dec 08 | Posts: 738 | Credit: 200,909,904 | RAC: 0
Well, I ordered a few today. Not the overclocked version, but the normal (supposedly GDDR5) version. Hopefully the motherboard can cope with 3 in the one box.

BOINC blog
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
What did you order and which board is it? I'm guessing you mean 3 GT240 cards. I hope you've read the motherboard manual, and that you actually have three PCIe x16 slots.
Joined: 24 Dec 08 | Posts: 738 | Credit: 200,909,904 | RAC: 0
> What did you order and which board is it?

Yes, 3 x GT240s, EVGA part number 512-P3-1240-LR. It's an ASUS P6T motherboard. It has 3 slots, each double-spaced. I gather they drop to x8 speed when there are more than 2 devices. The power supply is a 1000W Corsair.

BOINC blog
Beyond | Joined: 23 Nov 08 | Posts: 1112 | Credit: 6,162,416,256 | RAC: 0
> Yes, 3 x GT240s, EVGA part number 512-P3-1240-LR.

The drop to x8 speed has no consequence for DC. The 1000W power supply is WAY more than you need, so you're good there. The 3 cards together won't draw much more than 200 watts.

My problem in trying to put 3 GPUs on a board such as this (PCIe slots double-spaced) is that the fans are so close to the adjacent cards that airflow is dramatically reduced and the temps skyrocket. Of course waterblock cooling would solve the problem, but that has its own set of troubles. A few MBs have triple-spaced slots, but they're unusual. If anyone has a solution I'd appreciate knowing about it.
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
You should be OK with that setup. I tried a board with 3, but the third PCIe slot dropped to x1 and prevented any of them from working. Two ran on the same board, but it beeped on every restart and occasionally restarted itself (it was an MSI K9A12 Plat board, 790FX & SB600)! I have two on a different system working well.

Good luck.
skgiven | Joined: 23 Apr 09 | Posts: 3968 | Credit: 1,995,359,260 | RAC: 0
Beyond, I noticed that the top card I have, which sits very close to a chipset heatsink, is about 10 degrees warmer than the lower card. I confirmed it was the heatsink/airflow by swapping the cards. Essentially, the chipset heatsink is radiating heat onto the back of the card (GT240). As the GT240 fan is on the front, it cannot cool the back down, so the chipset is heating up the GPU from the rear. So in one respect, having a fan at the back of the GPU is preferable, even if the fan is that of another GPU!
Beyond | Joined: 23 Nov 08 | Posts: 1112 | Credit: 6,162,416,256 | RAC: 0
> Beyond,

Have you actually tried 3 cards in a system where the cards are double-spaced? I have, and the temps of the middle and upper cards increased dramatically. Removing the middle card dropped the temps to normal. I tried different cards of varied lengths and fan configurations to try to alleviate the problem, to no avail. What you're talking about is entirely different; it has nothing to do with my post/question.