Message boards : Number crunching : Credit per € / $
liveonc · Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
Is the GT240 still the best bang for the buck? I'm looking at a 9800GT "Green", which has 112 stream processors, a 550 MHz core clock and no PCIe power connector (like the GT240). But I'm also looking at the GTS240, which I suspect is just a rebranded non-Green 8800GT/9800GT; it does have the PCIe power connector that the "Green" lacks, and a 675 MHz core clock instead of the "Green" model's 550 MHz. BTW, which makes more sense: the GT240, with 96 stream processors at 550 MHz and potentially more overclocking headroom, or the 9800GT "Green", with 112 stream processors but possibly less overclocking potential?
[AF>Libristes] Dudumomo · Joined: 30 Jan 09 · Posts: 45 · Credit: 425,620,748 · RAC: 0
I would say the number of SPs matters more than the frequency (obviously you have to check both), but given that difference I would prefer a 9800GT. That said, Liveonc, I really recommend not buying a G92 anymore; go for a GT200 instead. A GTX260 is around 90-100€, and the best GPU, IMO, is the GTX275 (~150€). I bought mine at that price back in September, I think, and it's still around 150€, so the GTX260 may now be the better value.
liveonc · Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
> I would say the number of SPs matters more than the frequency (obviously you have to check both), but given that difference I would prefer a 9800GT.

I do like the GT200, but I can't find a single one that would fit in a single-slot nano-BTX case, nor in my micro-ATX HTPC case. I've got three GTX260-216s and I do love them! Even more than the GTX275, maybe because I'm so cheap.
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Guys, neither SPs nor frequency is more important on its own; it's the product of both that decides performance (everything else being equal). BTW, it's mainly the shader clock that matters, not the core clock. And the GT240 is CUDA compute capability 1.2, so it is >40% faster per flop (flops = number of shaders × frequency × operations per clock per shader) than the old compute capability 1.1 chips like G92. Completely forget about buying a GTS240/250 or 9800GT Green for GPUGrid! And, sure, GT200 cards are faster than a single GT240, but the latter is more efficient thanks to its 40 nm process compared to the older 65 and 55 nm chips.

MrS
Scanning for our furry friends since Jan 2002
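A minimal sketch of that comparison, assuming typical shader clocks of roughly 1340 MHz for a GDDR5 GT240 and 1500 MHz for a 9800GT, and treating the ">40% faster per flop" remark as a flat architecture factor; all three numbers are illustrative assumptions, not figures quoted in this thread:

```python
# Rough comparison of theoretical shader throughput, following the formula above:
# flops = number_of_shaders * shader_frequency * operations_per_clock_per_shader.
# Shader clocks and the 1.4x compute-capability-1.2 factor are assumptions for
# illustration only.

def gflops(shaders, shader_clock_mhz, ops_per_clock=2):
    """Theoretical single-precision throughput in GFLOPS."""
    return shaders * shader_clock_mhz * ops_per_clock / 1000.0

gt240_raw = gflops(96, 1340)    # assumed GT240 shader clock
g92_raw   = gflops(112, 1500)   # assumed 9800GT shader clock

# Per flop, a compute 1.2 chip is >40% faster here than a 1.1 chip, so raw
# flops alone understate the GT240's effective GPUGrid throughput.
cc12_factor = 1.4               # assumed architecture factor
print(f"GT240  raw {gt240_raw:6.0f} GFLOPS, effective ~{gt240_raw * cc12_factor:6.0f}")
print(f"9800GT raw {g92_raw:6.0f} GFLOPS, effective ~{g92_raw:6.0f}")
```

Under these assumed clocks the GT240's effective throughput edges out the 9800GT despite fewer raw flops, which is the thrust of the advice above.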
liveonc · Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
Guys, now that Nvidia is so good at rebranding, reusing and slightly changing old chips, would there be any sense in a 40nm version of the Asus Mars 295 Limited Edition? http://www.techpowerup.com/95445/ASUS_Designs_Own_Monster_Dual-GTX_285_4_GB_Graphics_Card.html It's old tech, yes, but so is the GTS250/9800GTX/8800GTX. Maybe a 32nm flavor with GDDR5? If 32nm is something Nvidia "might" want to do, would the die be small enough to "glue" two together, like what Intel did with the Core 2 Quad? Two steps forward and one step back, just like Intel with the PIII and P4...
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
They should be shipping the improved 40 nm version of GT200 in less than a month. It's called GF104 and is even better than a shrink of GT200: it's a significantly improved, more flexible and more efficient design thanks to its Fermi heritage. Remember: GF100 is not very attractive because it's insanely large and because it doesn't have enough TMUs. A half GF100 with double the number of TMUs would fix both. See the Fermi thread in the other sub-forum :)

MrS
Scanning for our furry friends since Jan 2002
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
This is worth a quick look: http://www.bit-tech.net/hardware/graphics/2010/08/05/what-is-the-best-graphics-card-for-folding/4
Joined: 16 Apr 09 · Posts: 163 · Credit: 921,733,849 · RAC: 0
One of the factors that needs to be included in a credit per dollar table is the frequency of medium-to-long-run computation errors, which GPUGrid work units are very much prone to. This has been something of a long-running problem (months, perhaps years). After a hiatus of a couple of months, I tried GPUGrid again, this time with a 9800GT installed in a Windows 7 64-bit system with the latest drivers, no other GPU project running and no overclocking. Things looked OK for the first three work units (though I was a bit surprised by the 24-hour-plus run times), then the fourth work unit failed with a computation error after over 8 hours. I don't see this sort of problem running the same cards on SETI, Collatz or Dnetc (though their run times are a lot shorter). For that matter, long CPU run times don't kick up computation errors either (say on Climate or Aqua), and when they do on Climate, the trickle credit approach there provides interim credit anyway. In any event, the medium/long-run computation errors that plague GPUGrid seem to me to be either application or work unit specific, and they act as a major disincentive for me to run GPUGrid.
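A minimal sketch of the kind of adjustment being asked for, folding an error rate into a credit-per-dollar figure; the credit rate, error rate and card price below are hypothetical placeholders, not measurements from this thread:

```python
# Effective credit per dollar once failed work units are accounted for.
# A failed task consumes GPU time but returns no credit, so the effective rate
# scales with the fraction of runtime that actually succeeds.
# All inputs are hypothetical, for illustration only.

def effective_credit_per_dollar(credit_per_hour, error_rate, card_price_usd,
                                hours_of_use=8760):
    """Credit earned per dollar of card cost over `hours_of_use` hours,
    assuming a fraction `error_rate` of runtime is wasted on failed tasks."""
    productive_hours = hours_of_use * (1.0 - error_rate)
    return credit_per_hour * productive_hours / card_price_usd

# Hypothetical 9800GT-class card: $100, 1500 credits/hour when tasks succeed.
print(effective_credit_per_dollar(1500, 0.00, 100))  # no failures
print(effective_credit_per_dollar(1500, 0.25, 100))  # one run in four wasted
```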
Joined: 23 Feb 09 · Posts: 39 · Credit: 144,654,294 · RAC: 0
I was running 4x 9800GT (G92 rev A2) on four different systems with WinXP 32-bit, Vista 32-bit/64-bit and Win7 64-bit. I had to "micro-manage" the work cache most of the time. During the last few weeks I wasn't able to crunch a single *-KASHIF_HIVPR_*_bound* (or *_unbound*) task without error. A few days ago I decided to pull my G92 cards from GPUGrid; my 2x GTX 260 and 1x GTX 295 will remain.
Joined: 20 May 11 · Posts: 16 · Credit: 86,798,974 · RAC: 0
Well, I finished a couple of those LONG tasks and good grief they are REALLY long. I'm burning in two brand new EVGA GTX 560 Ti cards in my little 4.1GHz crunch machine before I install water blocks on them. They are clocked at 951MHz and running the latest 275.27 beta Nvidia drivers. The first long task took 42,511 seconds (11.8 hrs) and the second took 69,849 seconds (19.4 hrs). The first run gave me 3,722 points per hour and the second, longer run gave me 2,721 points per hour on the same boards. Something seems fishy here... one would think the points would be more consistent and scale with runtime when the same kind of task is run on the same hardware... 8-)

Tex1954
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
Different task types take different amounts of time to complete, for example p5-IBUCH_6_mutEGFR_110419-17-20-RND6259_2 versus A39-TONI_AGG1-20-100-RND3414_1. Claimed 42,234.77 and Granted 52,793.46 suggests you did not return the task within 24h for the 50% bonus and instead received the 25% bonus for completing it inside 2 days. If you overclock too much, tasks will fail, and even before they fail completion times can be longer due to recoverable failures. Another problem can be the drivers forcing the card to downclock.
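A quick check of the arithmetic being described; the bonus tiers (50% within 24 hours, 25% within 2 days) are taken from skgiven's post, the runtimes from Tex1954's, and the first task's credit is back-calculated from his points-per-hour figure, so treat it as approximate:

```python
# Granted credit under the return-time bonus tiers described above:
# claimed credit +50% if returned within 24 hours, +25% within 2 days.

def granted_credit(claimed, hours_to_return):
    if hours_to_return <= 24:
        return claimed * 1.50
    if hours_to_return <= 48:
        return claimed * 1.25
    return claimed

claimed = 42_234.77
print(granted_credit(claimed, 30))   # 52,793.46 -> matches the Granted figure quoted
print(granted_credit(claimed, 20))   # 63,352.16 -> what the 24h bonus would have paid

# This also explains the points-per-hour gap seen on identical cards:
for seconds, credit in [(42_511, 43_955), (69_849, 52_793)]:   # credits approximate
    print(f"{credit / (seconds / 3600):.0f} points/hour")
```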
Joined: 20 May 11 · Posts: 16 · Credit: 86,798,974 · RAC: 0
The EVGA GTX 560 Ti versions I have are the Super Clocked variant; they run at 900 MHz normally, and that's where they sit burning in before I install waterblocks on them. They pass all the hardest 3D and memory tests I can throw at them, so no problems there, I think. I'm new to GPUGRID and didn't know about the bonus. My little crunching computer gets turned off and on a lot lately for hardware changes, software changes, water loop updates and such. I especially have problems with the GPU clocks switching to lower frequencies and never coming back! I've reported it to the forum and tech support at Nvidia: http://forums.nvidia.com/index.php?s=9f29a996e0ac9d6ea44a506f6631f805&showtopic=200414&pid=1237460&st=0&#entry1237460 For now, it seems something I did worked and the clocks haven't shifted all night, but it's an ongoing problem. That last slow work unit is the result of the clocks shifting to slow speed all night without me noticing. The latest beta drivers are the same... major problems keeping the clocks at full speed. Sigh... such is life... :D

Tex1954
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
This under-clocking is becoming one of the biggest self-inflicted problems I have seen in GPU computing - an untreated plague, equaled only by the unusable shaders (2 warp schedulers per 3 shader groups) and the heat/fan issues of the recent past - all avoidable design/driver flaws. Back on topic: the GTX570 is better here than the GTX560 Ti, and that will remain the case unless ACEMD can use superscalar execution to access the presently unusable shaders (33%).
Mad Matt · Joined: 29 Aug 09 · Posts: 28 · Credit: 101,584,171 · RAC: 0
> This under-clocking is becoming one of the biggest self-inflicted problems I have seen in GPU computing - an untreated plague, equaled only by the unusable shaders (2 warp schedulers per 3 shader groups) and the heat/fan issues of the recent past - all avoidable design/driver flaws.

A quite reliable solution I found is setting power management from Adaptive to Maximum Performance. That way, after a computation error the clocks won't fall back to the 2D level, and you can raise them again with Afterburner without a reboot. The most frequent cause of this downclocking that I've found - at least on PrimeGrid - is insufficient voltage causing the computation error in the first place. In my few attempts here, GPUGRID was even more sensitive and could hardly run the same clocks.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
Unfortunately, under XP you can no longer set the GPU to Maximum Performance; it's stuck at Adaptive. |
Joined: 17 Mar 11 · Posts: 2 · Credit: 691,286 · RAC: 0
I hope this is the right thread - I'm mostly trying to optimise credit/Watt on the next machine I'm building. Initial price is a minor issue. Obviously an efficient PSU and an SSD will play a big role for the system. My question here: is there any info yet on support and performance of the new APUs (Sandy Bridge / Llano)? How do they work with additional GPUs? Manufactured on the newer processes, they should at least be more efficient than an old CPU/GPU combo, right? Or not (yet), because...? Any answers appreciated!
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
Sandy Bridge processors are more energy efficient than Intel's previous generations; a stock i7-2600 uses around 65W when crunching 8 CPU tasks, while an i7-920 (previous generation) uses over 100W and does less work. In terms of CPU crunching per Watt, an i7-2600 is about 50% more efficient than an i7-920.

The Llano has a TDP of either 65W or 100W, including the on-die GPU (an A8-3850 has an integrated 6550D). CPU performance is similar to an Athlon II X4 @ 3.1GHz. AMD GPUs are not supported here at GPUGrid, so the GPU part of the Llano APU cannot be used either; I think it might be usable at MilkyWay. Llano GPU performance is around that of the HD 6450 or 5570.

Intel's GPU cores cannot be used to crunch here (or at any other project that I am aware of). While this might change in the future, at present Intel's graphics cores are basically a waste of die space for crunching. Sandy Bridge is still more efficient than Intel's previous CPUs and can do more CPU crunching than the Llano.

For crunching here I think a Sandy Bridge is a good match for a Fermi 500-series GPU; it's efficient and can do lots of CPU work as well as support the GPU with just one CPU thread. Which CPU to get depends on what you want to use the system for and what you want to crunch. All that said, it's much more important to get a good GPU.
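A minimal sketch of how the per-Watt comparison above works out; the power figures are the ones quoted in the post, but the daily credit numbers are hypothetical placeholders used only to illustrate the roughly 50% claim:

```python
# Crunching efficiency expressed as work per Watt, using the quoted power draws
# (i7-2600 ~65W, i7-920 ~100W). Daily credit figures are hypothetical.

def credit_per_watt(credit_per_day, watts):
    return credit_per_day / watts

i7_2600 = credit_per_watt(4_000, 65)    # hypothetical daily credit
i7_920  = credit_per_watt(4_000, 100)   # same assumed output at higher power

print(f"i7-2600: {i7_2600:.1f} credit/day per Watt")
print(f"i7-920:  {i7_920:.1f} credit/day per Watt")
print(f"advantage: {i7_2600 / i7_920 - 1:.0%}")
```

With equal output, the power ratio alone already gives roughly 50%; since the i7-920 actually does less work, the real advantage only grows from there.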
Joined: 17 Mar 11 · Posts: 2 · Credit: 691,286 · RAC: 0
Thanks, skgiven. Very informative answer! I love your term "a waste of die space"; that's just what I thought it would be. So in CPU terms I'll wait for 32nm CPUs without graphics (Bulldozer and ?) and, like you said, care more about a good graphics card. However, power consumption is an even bigger issue there... Just to be perfectly clear: for any given graphics card, the CPU driving it makes no difference? Background: I want to do "something useful", which rules out checking for primes or doing yet another 10k tries at "3x+1" (Collatz). I loved Virtual Prairie, but that's CPU only. Then again, those "less world-improving" projects score the big points for each kWh spent, at least with my current hardware, a GT430. In the end I want to GPU-crunch useful projects (GPUGrid, MilkyWay, SETI) and still keep my position in the top 10,000... and since the useful projects are less generous with points, I'm considering improving my hardware - but not at the cost of wasting more electricity!
Retvari Zoltan · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0
The Intel Core i7-970, i7-980, i7-980X and i7-990X six-core socket 1366 CPUs are also made on 32nm technology, without an on-die GPU. You should consider them as well, especially if you want to crunch with two GPUs. While the X58 chipset for socket 1366 CPUs has two native x16 PCIe 2.0 connections, socket 1156 and socket 1155 CPUs have only one x16 PCIe 2.0 bus integrated into the CPU. So if someone uses two GPUs with these CPUs, each GPU gets only x8 PCIe 2.0, which lowers the performance of the GPUGrid client. However, ASUS made a special motherboard for socket 1156 CPUs with two native x16 PCIe 2.0 connections, the P7P55D WS Supercomputer, and one for socket 1155 (Sandy Bridge) CPUs, the P8P67 WS Revolution. Maybe other manufacturers have similar motherboard designs.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
I still don't think x8 makes all that much difference, at least not in itself for two cards; for two GTX590s, perhaps. There are other factors, such as the 1366 boards having triple-channel memory and 36 PCIe lanes, while 1156/1155 boards only have dual-channel memory. The chipset that controls the PCIe lanes also controls USB, PCI, SATA and LAN, so whatever else is going on might influence performance. So it really comes down to the motherboard's implementation and what the system is being used for. Not sure about that P8P67 WS Revolution LGA 1155 motherboard (2 x PCIe 2.0 x16 (x16, x8)), but the GA-Z68X-UD7-B3 is a bespoke LGA 1155 motherboard that offers two full x16 PCIe slots (x16, x16). I don't know of a monitoring tool that measures PCIe use, but if the memory controller load of ~20% on my GTX470 is anything to go by, it's not exactly overtaxed. Anyway, when the Sandy Bridge E CPUs turn up, their boards will support quad-channel memory, PCIe 3.0 and multiple x16 PCIe lanes. Obviously those will become the CPU/board combination to have.