Message boards : Number crunching : Hardware questions

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30643 - Posted: 2 Jun 2013 | 22:04:19 UTC
Last modified: 2 Jun 2013 | 22:07:10 UTC

It is me again..

I thought I'd make a new thread, as it is only sideways related to GPUs.
It has to do with the new GTX660 not working at maximum load in my PC when I replaced the GTX285 with it.
This is the thread where the problems are discussed: http://www.gpugrid.net/forum_thread.php?id=3373

I found a refurbished T7400 with a 980 Watt PSU; 2 PCIe x16 Gen2, 1 PCIe x8, 3 PCI-X 64-bit and 1 PCI 32-bit slots, so room for some GPUs. It has two Xeon processors, 2 years warranty, 2 Quadro FX4600 graphics cards, and Win7 Professional. The GTX660 should fit easily.

Or I could build a new one from components, but that will cost a lot more.
What is suggested: an ASRock, EVGA or Intel MOBO?
I live in the Netherlands and things are not as cheap as in the US. I often buy hardware in Germany.
I am interested in ideas for a system primarily for crunching GPUGRID on the GTX660 (two later) and Rosetta and/or Docking on the CPU.

Thanks.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 30672 - Posted: 5 Jun 2013 | 18:56:50 UTC - in response to Message 30643.

I wouldn't buy that old workstation - the running costs will be far too high! The T7400 can take up to Penryn 45 nm Core 2 Quads (not sure what's inside now, could still be 65 nm C2Qs). The performance and efficiency increases have been massive since then. But maybe worst is the mainboard and chipset: I expect 200 - 300 W power draw at idle.

And the GPUs are basically 8800GTS or 8800GS, based on the old G80 chip with CUDA compute capability 1.0, i.e. the very first CUDA capable chip. They can hardly be used in any project at all.

You might get the system for cheap, but a newer system should pay for itself quickly.
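To put rough numbers on "pay for itself", here is a minimal payback sketch in Python; the idle-draw figures, electricity price and build cost are illustrative assumptions, not measured values:

    # Rough payback estimate for replacing an old 24/7 cruncher.
    HOURS_PER_YEAR = 24 * 365

    old_idle_w = 250      # assumed idle draw of the old workstation (W)
    new_idle_w = 60       # assumed idle draw of a small Sandy/Ivy build (W)
    price_per_kwh = 0.20  # assumed electricity price (EUR/kWh)

    saved_kwh = (old_idle_w - new_idle_w) * HOURS_PER_YEAR / 1000.0
    saved_eur = saved_kwh * price_per_kwh
    print(f"Saved per year: {saved_kwh:.0f} kWh -> {saved_eur:.0f} EUR")

    new_build_cost = 300.0  # assumed budget CPU + board + RAM (EUR)
    print(f"Payback time: {new_build_cost / saved_eur:.1f} years")

With these assumptions the saving is roughly 330 EUR per year, so a budget platform would indeed pay for itself in under a year.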

Suggestion: get a small Ivy or Sandy, either 2 cores (Celeron, Pentium) or maybe with HT (i3) for 2 more threads. The smallest ones are dirt-cheap (35€), could be replaced with something faster later on if needed, and might be available used. This gives you two x8 PCIe 2.0 slots, which are sufficient for GPUGrid, on readily available mainboards. From an Ivy-based i5 upwards it's even PCIe 3.0. This would be a very energy- and cost-effective GPU driver. And with 2 GPUs, PSU prices and cooling are still fine. The integrated GPU has been able to crunch Collatz for a few days now and would bring in a few more credits, if it's Ivy-based (not totally sure about the smallest models, though).
Alternative: AMD with Trinity or Richland 65 W. Still plenty of performance to drive your GPUs, always with 4 threads, OK power efficiency in the 65 W models and an integrated GPU which can actually get some work done (about HD6650 level).

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30709 - Posted: 7 Jun 2013 | 13:27:24 UTC - in response to Message 30672.

I guess you are very right ETA, but too late, I already ordered it.
Well, it is a big case with a big PSU, a lot of space and quiet fans.
I can put a new MOBO in it with a six-core i7.

These are the graphics cards:
6/7/2013 3:11:35 PM | | CUDA: NVIDIA GPU 0: Quadro FX 4600 (driver version 320.00, CUDA version 5.50, compute capability 1.0, 768MB, 633MB available, 346 GFLOPS peak)
6/7/2013 3:11:35 PM | | OpenCL: NVIDIA GPU 0: Quadro FX 4600 (driver version 320.00, device version OpenCL 1.0 CUDA, 768MB, 633MB available, 346 GFLOPS peak)

I am not getting new work for GPUGRID, so you were right about that as well.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 30715 - Posted: 7 Jun 2013 | 18:46:45 UTC - in response to Message 30709.

Well.. then have fun with your new toy :)

I'd see if I could lower the CPU voltages in the BIOS. They should have plenty of headroom and this would help with power consumption. The old GPUs: not sure if you could sell them on eBay for anything. They might run POEM, and relatively efficiently so, but there's not much work available. Collatz and PG would probably also run, but I'm not sure it would be worth the electricity.

Yes, you could put in some large mainboards. However, I wouldn't touch a 6-core for BOINC. As Intel it's too expensive and not energy-efficient enough (still 32 nm Sandy Bridge) and as AMD.. well, no need to discuss that :p

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30718 - Posted: 7 Jun 2013 | 19:26:43 UTC - in response to Message 30715.

Yeah thanks, well it runs quietly: 270 Watt when idle, 385 Watt with 8 cores crunching Rosetta. I think the case is nice and big, that is usable; the rest is... Well, you warned me (as did Beyond), but a little too late, the order was already on its way. Never mind, it was really cheap, so no worries.

I thought a six-core, and then with HT 12 threads: keep 2 for GPUGRID and then 10 for Rosetta or Einstein.
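That split maps onto BOINC's "On multiprocessors, use at most N% of the processors" preference. A small sketch of the arithmetic, assuming BOINC rounds the resulting task count down:

    import math

    threads = 12        # six-core i7 with Hyper-Threading
    gpu_reserved = 2    # threads kept free to feed GPU tasks
    cpu_tasks = threads - gpu_reserved

    # Find the smallest whole percentage that runs exactly 10 CPU tasks,
    # assuming BOINC computes floor(threads * pct / 100).
    pct = next(p for p in range(1, 101)
               if math.floor(threads * p / 100) == cpu_tasks)
    print(f"Use at most {pct}% of the processors "
          f"-> {cpu_tasks} CPU tasks, {gpu_reserved} threads for GPUs")

With 12 threads this prints 84%, which leaves free the two threads reserved above for GPUGRID.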

What CPU would you suggest? I'd like a MOBO with room for two GTX660's, 1 SSD and 1 hard disk for the data (BOINC), a simple DVD player and a gold PSU.
And it would be nice if you could give some info about a good CPU cooler/fan. I like Zalman.
____________
Greetings from TJ

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30731 - Posted: 8 Jun 2013 | 10:54:36 UTC - in response to Message 30718.

You could just pull the Quadro FX4600's and add two GTX660's (presuming they would work on that motherboard). You would be drawing around 600W though!

My i7-3770K with a GTX660Ti and a GTX660 uses ~375W running GPUGrid WU's and WCG HPFp2 tasks. A stock 3770K and two GTX660's would use ~350W, with a good PSU.
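As a rough sanity check on those wall figures, a ballpark sum in Python; the per-component numbers are assumptions, not measurements from this thread:

    # Ballpark wall-power estimate for an i7-3770K + GTX660Ti + GTX660 rig.
    components_w = {
        "i7-3770K crunching": 85,        # assumed package power under load
        "GTX 660 Ti on GPUGrid": 120,    # assumed, below the card's TDP
        "GTX 660 on GPUGrid": 105,       # assumed
        "board, RAM, drives, fans": 30,  # assumed
    }
    dc_load = sum(components_w.values())  # 340 W on the DC side
    efficiency = 0.90                     # a good 80+ Gold PSU near mid-load
    print(f"Estimated wall draw: {dc_load / efficiency:.0f} W")

That lands at roughly 378 W at the wall, consistent with the measurement above.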

Don't know which Xeons you have, as your systems are hidden again and you didn't say, but some are quite powerful and two such processors might outperform my CPU in terms of crunching - @2.66GHz [stock] my quad LGA775 Xeon can do more than half the work of my 3770K @4.2GHz. If I upped the clocks to 3.1GHz it could do 80% of a stock 3770K (at least at some CPU projects). For you the problem is the high power usage, and the expense of running such a system.

If you just want to concentrate on GPU crunching then two GTX660's in a LGA1155 board with a basic CPU ($50 Intel Celeron G1610 Ivy Bridge) is sufficient.
The purchase price rises sharply when you go to something like an i3-3220 (~£130), and the Haswells start at ~$190.

____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30737 - Posted: 8 Jun 2013 | 19:00:42 UTC - in response to Message 30731.

The Xeons are E5430 @2.66GHz and not too fast. It seems indeed that old stuff uses more power. The system is from May 2008.

I would like a new system to replace my i7 with the "heater". I want two GPUs and an i7 (or Xeon) again, with Win7. It will run GPUGRID and Rosetta on the CPU.
But when the system is up and I have to work at home for a few hours, or even a whole day, then I will use that system as well. So it has to be quite decent.
I already have some parts and don't mind if the total price would be 1500-2000 euro. I have several systems more than 6 years old that still run fine (but not 24/7); the new PC won't run 24/7 either.
I need a MOBO, a processor, new memory, a good PSU (80+ Gold, I guess those are good ones) and a good processor cooler.
Perhaps a new case, but I have a Cooler Master with 5 fans (2× 150 mm and 3× 120 mm).

Suggestions are welcome, I will not buy the parts tomorrow and will highly consider your thoughts this time...
____________
Greetings from TJ

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30739 - Posted: 8 Jun 2013 | 20:33:33 UTC - in response to Message 30737.

The Xeons are E5430 @2.66GHz and not too fast. It seems indeed that old stuff uses more power. The system is from May 2008.

That's better than my Xeon - same frequency, but 45nm, 1333MHz FSB, 12MB Cache, 80W TDP.
http://ark.intel.com/products/33081/Intel-Xeon-Processor-E5430-12M-Cache-2_66-GHz-1333-MHz-FSB
Is the system RAM DDR2 or DDR3?

Why not pull the CC1.0 GPU's and test a different GPU?

I would like a new system to replace my i7 with the "heater". I want two GPUs and an i7 (or Xeon) again, with Win7. It will run GPUGRID and Rosetta on the CPU.
But when the system is up and I have to work at home for a few hours, or even a whole day, then I will use that system as well. So it has to be quite decent.
I already have some parts and don't mind if the total price would be 1500-2000 euro. I have several systems more than 6 years old that still run fine (but not 24/7); the new PC won't run 24/7 either. I need a MOBO, a processor, new memory, a good PSU (80+ Gold, I guess those are good ones) and a good processor cooler.
Perhaps a new case, but I have a Cooler Master with 5 fans (2× 150 mm and 3× 120 mm).

Suggestions are welcome, I will not buy the parts tomorrow and will highly consider your thoughts this time...

What parts do you have?
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30740 - Posted: 8 Jun 2013 | 23:13:59 UTC - in response to Message 30739.

I ran Rosetta on the Xeons and saw that they took approx. 600 seconds more than an i7 (960@3.20GHz). I know that is not a good comparison.
I checked Intel indeed before I ordered it and saw that they use 80 Watt; that's less than the i7 (960) at 144.33 Watt. How is it then possible that it draws almost 300 Watt when idle (doing nothing)? Even with just the plug in the mains it draws 3 Watt.
The system has 8× 1GB Hyundai DDR2.

The only GPUs I have that can do BOINC projects are two AMD HD 5870s; they were in the system where the GTX660 is now running.

The T7400 has a 1000 Watt PSU but only two 6-pin power connectors for GPUs, so only one AMD card can be powered.
There are 4 SATA connectors free and 5 SATA power plugs (small, long, black), and only 1 white (large) 4-pin plug. Weird.

I have an SSD, 2 HDs, several fans and Win7 Professional, all new, and a case with 5 fans.
____________
Greetings from TJ

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30744 - Posted: 9 Jun 2013 | 12:20:31 UTC - in response to Message 30740.
Last modified: 9 Jun 2013 | 13:00:11 UTC

I ran Rosetta on the Xeons and saw that they took approx. 600 seconds more than an i7 (960@3.20GHz). I know that is not a good comparison.

It would be a reasonably good comparison if you mentioned how long the i7 takes; if it's 30 min then the Xeons are not great, but if it's 10 h then the 10 min difference is negligible.

I checked Intel indeed before I ordered it and saw that they use 80 Watt; that's less than the i7 (960) at 144.33 Watt. How is it then possible that it draws almost 300 Watt when idle (doing nothing)? Even with just the plug in the mains it draws 3 Watt.
The system has 8× 1GB Hyundai DDR2.

That's 80W for each CPU. The rest is mainly the motherboard, the two GPUs, eight sticks of DDR2 and the PSU, but also the drives. The DDR2 may be forcing the FSB to operate at 800MHz when it could be 1333MHz. This would make the processors slower for computation. I saw a fairly large difference when I moved my Xeon from a DDR2 board to a DDR3 board (not saying this is the way forward though; I think the E5430 isn't DDR3 compatible).

The only GPUs I have that can do BOINC projects are two AMD HD 5870s; they were in the system where the GTX660 is now running.
The T7400 has a 1000 Watt PSU but only two 6-pin power connectors for GPUs, so only one AMD card can be powered.

I would put one back in then - it could do way more work than both CC1.0 cards combined.

There are 4 SATA connectors free and 5 SATA power plugs (small, long, black), and only 1 white (large) 4-pin plug. Weird.

Yeah, a bit of an odd PSU design; it only accommodates one big GPU, yet is 1000W and has lots of SATA connectors (newer than the 4-pin IDE power connectors). This also prevents you from using two 4-pin IDE power connectors to hook up another GPU!

I have an SSD, 2 HDs, several fans and Win7 Professional, all new, and a case with 5 fans.

Is that to be used for a new build?
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 30747 - Posted: 9 Jun 2013 | 13:02:18 UTC

Careful with run-time comparisons at Rosetta: you decide how long your WUs shall run (approximately) and your hardware decides how much work is being accomplished during this time, which will be reflected in the amount of credit given for the WUs.

There is really only one 4-pin molex connector for older devices? Seems weird. These could be used to power GPUs, but you need 2 of them for one 6-pin GPU connector (ideally originating from different cables).

How efficient is that 1 kW PSU? Chances are it's not all that bad, if it's been in a high end workstation.

And I second crunching on that HD5870. It's still a very decent card for a few projects, as it packs lots of raw horsepower into a moderately efficient 40 nm chip. Examples: Milkyway, POEM, Einstein (not many credits, though) and probably some more.

If I'd buy a new cruncher now I'd go for a Haswell. By now it won't be much faster than Ivy per clock (until AVX2 and FMA are being used somewhere), but even "just" 10% faster per clock would require 300 - 500 MHz more on an Ivy Bridge core.. which makes the Haswell suddenly look rather nice.

Then the question would be "which model"? If you don't want to OC, the 4770 would do the trick (all cores @ 3.9 GHz is possible), or a slightly cheaper and a bit lower clocked Xeon V3 with HT. A slight BCLK-OC could take the 4770 even to 4.0 - 4.1 GHz, but will require some testing and fiddling. The 4770K on the other hand can be OC'ed easily, but won't reach much further than 4.3 +/- 0.1 GHz for 24/7 crunching anyway.
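The clock arithmetic behind those estimates, as a quick sketch; the 3-5% BCLK headroom is an assumption, and actual stability varies per chip:

    # Haswell core clock = base clock (BCLK) x multiplier.
    bclk = 100.0  # stock base clock, MHz
    mult = 39     # i7-4770 with all cores held at maximum turbo, as above

    print(f"Stock: {bclk * mult / 1000:.2f} GHz")
    for bump in (1.03, 1.05):  # assumed usable BCLK headroom
        print(f"BCLK {bclk * bump:.0f} MHz -> "
              f"{bclk * bump * mult / 1000:.2f} GHz")

That yields 3.90, 4.02 and 4.10 GHz, matching the 4.0 - 4.1 GHz range mentioned.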

Personally I find the 4770R interesting: it's got the biggest "Iris Pro" GPU (can crunch Collatz now) and that 128 MB eDRAM cache, which probably helps BOINC, and should also be able to reach 3.9 GHz on all cores (maximum Turbo) plus the slight bus-OC. However, it's an OEM-only product to be soldered onto mainboards. Not sure if someone's going to put such a CPU onto a regular OC-Mainboard and sell it to retail.

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30748 - Posted: 9 Jun 2013 | 13:18:18 UTC - in response to Message 30744.

I ran Rosetta on the Xeons and saw that they took approx. 600 seconds more than an i7 (960@3.20GHz). I know that is not a good comparison.

It would be a reasonably good comparison if you mentioned how long the i7 takes; if it's 30 min then the Xeons are not great, but if it's 10 h then the 10 min difference is negligible.


The i7:
586361826 532500775 9 Jun 2013 9:20:22 UTC 9 Jun 2013 12:12:14 UTC Over Success Done 10,118.13 59.06 61.76

The Xeon: (same type of job):
586009637 1619478 7 Jun 2013 13:49:39 UTC 7 Jun 2013 16:45:54 UTC Over Success Done 10,100.83 62.60 76.99
586010293 1619478 7 Jun 2013 13:53:43 UTC 7 Jun 2013 17:14:56 UTC Over Success Done 10,696.68 66.29 80.25
Thus one faster and one slower. But yeah, the Xeon is not too bad.
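Normalising those results to credits per hour makes the comparison easier to read. A small script using the numbers pasted above (the last column is read as granted credit, and Rosetta credit varies per WU, so this is only a rough gauge):

    i7_960 = [(10118.13, 61.76)]                         # (seconds, credit)
    xeon_e5430 = [(10100.83, 76.99), (10696.68, 80.25)]

    def credits_per_hour(results):
        seconds = sum(r for r, _ in results)
        credit = sum(c for _, c in results)
        return credit * 3600.0 / seconds

    print(f"i7-960:     {credits_per_hour(i7_960):.1f} credits/hour")
    print(f"Xeon E5430: {credits_per_hour(xeon_e5430):.1f} credits/hour")

This gives roughly 22 credits/hour for the i7 task and 27 credits/hour for the two Xeon tasks.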

The only GPUs I have that can do BOINC projects are two AMD HD 5870s; they were in the system where the GTX660 is now running.
The T7400 has a 1000 Watt PSU but only two 6-pin power connectors for GPUs, so only one AMD card can be powered.

I would put one back in then - it could do way more work than both CC1.0 cards combined.


Will do, but this system will only run when the cheaper power rate is active, and not very often. The i7 with the GTX285 at 90% load and 6 Rosies running only uses 315 Watt! I could almost run two of these; I should have listened to you all.

This also prevents you from using two 4-pin IDE power connectors to hook up another GPU!

Indeed that was my plan, but nope.

I have an SSD, 2 HDs, several fans and Win7 Professional, all new, and a case with 5 fans.

Is that to be used for a new build?

Yes, all new; I will not use it for an old system.

____________
Greetings from TJ

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30749 - Posted: 9 Jun 2013 | 13:34:52 UTC - in response to Message 30747.

Thanks ETA.

There is really only one 4-pin molex connector for older devices? Seems weird. These could be used to power GPUs, but you need 2 of them for one 6-pin GPU connector (ideally originating from different cables).

Two; one is used by the DVD drive.

How efficient is that 1 kW PSU? Chances are it's not all that bad, if it's been in a high end workstation.

I don't know. Can I find that out?

And I second crunching on that HD5870. It's still a very decent card for a few projects, as it packs lots of raw horsepower into a moderately efficient 40 nm chip. Examples: Milkyway, POEM, Einstein (not many credits, though) and probably some more.


Yes the two of them did Einstein, Albert and Milkyway nicely. I don't care about the credits that much. The science I find useful/important is what I crunch for.

I will not OC things. I like EVGA and saw nice new MOBOs from them. It would be nice to have a MOBO, PSU and 2 GPUs from EVGA; they would work well together. However, they are not easy to find in the Netherlands.
Zalman has nice cases, not to be found here. And the nVidia Tesla case seems very nice, but I can't find that thing even in the US.
However, I know from Beyond (cruncher) that a good case with good airflow is important as well.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 30751 - Posted: 9 Jun 2013 | 14:41:11 UTC - in response to Message 30749.

Screw that DVD drive, you can access another one over the network ;)
To find out about the efficiency: look for the model number and ask Big G for some review / test, or at least specifications. The 80+ label should already be a rough guideline.
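For reference, the 80+ tiers translate into at-the-wall draw roughly like this (approximate 50%-load efficiency figures; exact numbers depend on load and line voltage):

    # Approximate 80 Plus efficiency at 50% load.
    tiers = {"80+": 0.80, "Bronze": 0.85, "Silver": 0.88,
             "Gold": 0.90, "Platinum": 0.92}

    dc_load_w = 350  # assumed DC load of a two-GPU cruncher
    for tier, eff in tiers.items():
        print(f"{tier:9}: {dc_load_w / eff:.0f} W at the wall")

So between a plain 80+ unit and a Gold one, a 350 W load costs about 49 W less at the wall.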

Are you still looking for a case? I thought you'd use the one from the "big box". Anyway, for 2 GPUs you really want some airflow!

MrS
____________
Scanning for our furry friends since Jan 2002

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30753 - Posted: 9 Jun 2013 | 15:54:09 UTC - in response to Message 30715.

Yes, you could put in some large mainboards. However, I wouldn't touch a 6-core for BOINC. As Intel it's too expensive and not energy-efficient enough (still 32 nm Sandy Bridge) and as AMD.. well, no need to discuss that :p MrS

Why are you always making AMD snipes? The X6 processors are decent and still offer good crunching bang for the buck. At one project I run they are faster than even the fastest Intels that are MUCH more expensive. We all better hope AMD sticks around or we'll be mortgaging our houses to buy CPUs.

Intel on the desktop: Ivy Bridge runs hotter than Sandy Bridge, and the new Haswell runs hotter and uses more energy than Ivy Bridge. Aren't they going in the wrong direction?

Here's part of the review from Xbit Labs:

"The Haswell CPU core temperatures are seriously higher than those of the previous generation processors. And although most every-day tasks do not cause the CPU to heat up so dramatically, we should base our conclusions primarily on specialized stability tests, which create heavy but nevertheless quite realistic load.
So, it turns out that overclocking the new CPUs calls for much better coolers than those we could use for Ivy Bridge processors. In other words, it is harder to reach the same results when overclocking Core i7-4770K as we did with the overclocker-friendly Sandy Bridge and Ivy Bridge products in LGA1155 form-factor."

http://www.xbitlabs.com/articles/cpu/display/core-i7-4770k_12.html

"And frankly speaking, this product is not that impressive at all, especially in the eyes of computer enthusiasts. We tested the top of the line desktop Haswell, Core i7-4770K, and drew a number of bitter conclusions. First, Core i7-4770K is just a little bit faster than the flagship Ivy Bridge processor. Microarchitectural improvements only provide a 5-15 % performance boost, and the clock frequency hasn’t changed at all. Second, Core i7-4770K processor turned out a significantly hotter processor than the CPUs based on previous microarchitecture. Even though Haswell allows engineering energy-efficient processors with impressively low heat dissipation, its performance-per-watt has worsened a lot when they adjusted its characteristics to meet the desktop requirements. This resulted into the third item on this list: without extreme cooling Core i7-4770K overclocks less effectively than the previous generation overclocker processors. The specific CPU sample we tested this time allows us to conclude that these processors may get overheated at 4.4-4.5 GHz clock speeds even with high-performance air coolers. And fourth: Haswell processors require new LGA 1150 platform, which doesn’t boast any unique advantages, but merely offers more USB 3.0 and SATA 6 Gbps ports. But currently this platform seems quite raw and awaits a new chipset stepping, which will fix some issues with the USB 3.0 controller."

http://www.xbitlabs.com/articles/cpu/display/core-i7-4770k_13.html

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,222,865,968
RAC: 1,764,666
Message 30754 - Posted: 9 Jun 2013 | 17:58:25 UTC - in response to Message 30753.

Intel on the desktop: Ivy Bridge runs hotter than Sandy Bridge, and the new Haswell runs hotter and uses more energy than Ivy Bridge. Aren't they going in the wrong direction?

The chips of the CPU series prior to Ivy Bridge (namely Sandy Bridge, Sandy Bridge-E, Gulftown, Bloomfield) were actually soldered to the IHS (Integrated Heat Spreader, the metal housing of the chip), resulting in good thermal transfer to the IHS and low CPU temperatures. But Intel is using some "cheap" thermal interface material (TIM) on the new series, so if you want lower CPU temperatures to overclock more, you should remove the IHS (voiding the warranty) and put the CPU cooler directly onto the chip (very risky), and/or use better and/or thinner TIM. See this video.

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30755 - Posted: 9 Jun 2013 | 19:01:04 UTC

Two more questions.
1. I read that the GTX660 is not very good at double precision. Suppose GPUGRID has no work for some days, or there are other issues; then I would like to crunch for MilkyWay or Einstein. For MW we need DP, so which cards are good at DP? In other words, which specs do I have to look for?

2. Heat. I have RealTemp 3.70 and CPUID Hardware Monitor running, and the CPU temperatures RealTemp shows are 10 degrees lower (colder): 59 57 58 57 versus 69 67 68 67, all in Celsius. The CPU is doing Rosetta, with 1 core for GPUGRID and 1 idle.
I also have an Alienware with liquid cooling and the temperatures there are even worse: 72 70 71 69. RealTemp and CPUID show the same values there though.
This liquid cooling thing is making a lot of noise by the way, with a big fan for the radiator. And not cool. Six cores are running Rosetta and the GTX660 is doing GPUGRID with 1 core, and 1 is idle.
The question: what are acceptable CPU temperatures?

I also have good experience with AMD CPUs.

@ETA: A brand new computer in a brand new case (if not too expensive); otherwise I have two, a Cooler Master, and the "big box" indeed.
____________
Greetings from TJ

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,222,865,968
RAC: 1,764,666
Message 30756 - Posted: 9 Jun 2013 | 20:11:50 UTC - in response to Message 30755.
Last modified: 9 Jun 2013 | 20:58:32 UTC

Two more questions.
1. I read that the GTX660 is not very good at double precision. Suppose GPUGRID has no work for some days, or there are other issues; then I would like to crunch for MilkyWay or Einstein. For MW we need DP, so which cards are good at DP? In other words, which specs do I have to look for?

Only the GTX Titan and the GTX 780 are good at DP.
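The spec to look for is the DP:SP ratio times the single-precision peak. A rough calculation with approximate board figures (if I recall correctly, the GeForce driver caps the 780's DP at 1/24, leaving the Titan with its DP switch enabled, or a high-end AMD card, as the stronger DP options):

    # Peak SP GFLOPS and the DP:SP ratio decide double-precision throughput.
    cards = {
        "GTX 660 (GK106)":            (1880, 1 / 24),
        "GTX 780 (GK110, GeForce)":   (3980, 1 / 24),
        "GTX Titan (GK110, DP mode)": (4500, 1 / 3),
        "HD 5870 (Cypress)":          (2720, 1 / 5),
    }
    for name, (sp_gflops, dp_frac) in cards.items():
        print(f"{name:29} ~{sp_gflops * dp_frac:5.0f} DP GFLOPS")

That is ~78 DP GFLOPS for the GTX 660 versus ~1500 for a Titan in DP mode, and ~544 for the HD 5870 TJ already owns.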

2. Heat. I have RealTemp 3.70 and CPUID Hardware Monitor running, and the CPU temperatures RealTemp shows are 10 degrees lower (colder): 59 57 58 57 versus 69 67 68 67, all in Celsius. The CPU is doing Rosetta, with 1 core for GPUGRID and 1 idle.

This is very strange. You should try your motherboard's original monitoring software (or CoreTemp, 32-bit or 64-bit).

I also have an Alienware with liquid cooling and the temperatures there are even worse: 72 70 71 69. RealTemp and CPUID show the same values there though.
This liquid cooling thing is making a lot of noise by the way, with a big fan for the radiator. And not cool. Six cores are running Rosetta and the GTX660 is doing GPUGRID with 1 core, and 1 is idle.

This is 10-15 degrees higher than a liquid cooler should be able to provide. If it's noisy, perhaps its pump is about to fail, or the level of its coolant is low; these are the worst things that can happen to a liquid cooler, and to the part which is cooled by it.

The question: what are acceptable CPU temperatures?

Around 70°C is acceptable with air cooling. The lower the better, even more so for overclocking.
There are a couple of ways of lowering the CPU temperature; some of them void the warranty (removing or polishing the IHS).

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 30759 - Posted: 9 Jun 2013 | 21:05:10 UTC
Last modified: 9 Jun 2013 | 21:10:00 UTC

@Beyond: because even if you can find a project where AMD CPUs provide good performance (you can, as you said), the energy efficiency is far too bad for general 24/7 crunching, compared to Intel. I know there are places where electricity is much cheaper than where I live, but I don't think the Netherlands belongs to these. Otherwise I like some of what AMD is doing and wish they could do it even better (sore points, to be more specific: single-threaded integer performance of Bulldozer and its children, power efficiency, smarter turbo modes).

Generally I respect XBit Labs as one of the better review sites. But their Haswell test just left me shaking my head. They're using LinX-AVX to test power consumption and load temperatures. But this is the perfect showcase of AVX2 performance enhancements. Here Haswell was shown to be ~70% faster than Ivy per clock. And guess what.. there's no free lunch in computing. That additional crunching power has to come from somewhere. Yet they write "the desktop Haswell is no good in terms of performance per watt. Not all Haswell-based CPUs are energy efficient, as we can see." ... and disregard the performance aspect completely. Seriously, WTF? At this point I have to stop and dismiss further judgement of Haswell by XBit. And while Haswell is not a perfect product, the numbers I've seen tell me that it's a far better product than XBit writes.

@TJ: And speaking of AMD.. for DP you're really better off with a (high end) AMD GPU. Far better bang for the buck there. With nVidia you have so many other projects to choose from, you don't need to run projects which are better suited to AMD hardware.

70°C measured by CoreTemp on an Intel is fine (they report the hottest spot); for AMDs this is generally equivalent to approximately 50°C (they measure elsewhere, sometimes also just plain wrong). But 70°C seems way too high for water cooling. Is the water warm? Sounds like some contact problem.

MrS
____________
Scanning for our furry friends since Jan 2002

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30760 - Posted: 9 Jun 2013 | 21:34:17 UTC - in response to Message 30754.
Last modified: 9 Jun 2013 | 21:45:14 UTC

Tetchy thread, but I'll dip my toes again.

Since the i7-980X there has been little improvement in 'out and out' processing power from either AMD or Intel. Lots of different generations, revisions, sockets, packaging, hype and waffle, but limited performance gain.

Intel and AMD have largely ignored CPU improvement wants from the high end workstation, gaming and crunching markets. Instead they have sought gains in other areas; more done on the processor, better performance/watt. At the same time a lot of what we are now working with is the product of marketing strategy; on-die controllers, Intel only chipsets, push towards laptops and portable devices, fewer PCIE lanes, and the 'shroud of the cloud' (server processors) - all at the expense of high end desktop improvements. But the same can be said of NVidia and OpenCL - it's typical business maneuvering.

When SB arrived we reached the point where a laptop processor existed that was capable of handling >99% of office performance requirements.

Since Gulftown, Sandy Bridge-E has been Intel's only attempt at a genuine high end workstation/gaming/crunching system, but it failed to deliver PCIE3 and thus failed almost completely - you don't do CAD on the CPU, don't game on the CPU and don't crunch on the CPU (relatively speaking).

I'm not sure if the i7-4930MX is Intel's new best processor for the laptop market, if you want to spend $1000 for Intel graphics, or if it's some Iris?!? Neither is good value for money when it comes to crunching and gaming on laptops. Ditto for the desktop processors and most ix-4000's.

For crunching the i7-4770K has nothing over i7-3770K. In fact I would say it's just an i7-3770K done all wrong! Seriously, we don't want lots of USB3 ports and another crap iGPU (which hiked the TDP from 77W to 84W).

Maybe the pick of the i7-4000 bunch is the i7-4765T - 35W TDP, or the i7-4770T - 45W.

I measured the actual crunching difference at 4.2GHz of an i7-2600K and an i7-3770K for ~14 CPU projects and there was nothing between them. Only one app from one project was significantly faster (7%), and many were slightly (1 to 3%) faster on the 2600K.

Maybe apps need to be recompiled before there is any 'performance per MHz' gain? If that's the case, note that many projects use the same apps for a long time (a year or more). Anyway, an i7-4770K shouldn't be considered as an upgrade option to an i7-2700K or i7-3770K.

Has AMD any plans to go to PCIE3.0 boards?

TJ, you might want to try Speccy.

While I suspect some of the pre-reviews of the i7-4000 series might have been done on unlocked 35W samples, there are a few i7-4000 series processors that operate reasonably fast, but at very low TDP's. The problem is their price is not far off the top models! You can't win.

I would not want to spend time lapping a CPU hood (been there), and wouldn't flip the lid unless I could gold coat, had liquid nitrogen coolant, some special thermal conductive (snap) resin and wanted to break some meaningless record - not quite GPU crunching.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30761 - Posted: 9 Jun 2013 | 21:49:11 UTC
Last modified: 9 Jun 2013 | 22:22:47 UTC

Thanks guys, this is good info.
Well, this is my first and only liquid cooling. The water doesn't feel that warm.
There were more problems with the Alienware. It is a nice system and my dad bought it for me.
Unfortunately my dad died before the system arrived. I am not into dark magic, but the system had problems from the start. Eventually they sent a complete new one and that ran fine. But the CPU temperatures are too high, already 42°C when idle. According to Dell this (high temperatures) is normal...

I will power it down tomorrow and replace the thermal paste. I suppose I can safely remove the whole thing from the processor without dismantling the water unit and such? I have never done that before.

Nice program Zoltan, thanks.
____________
Greetings from TJ

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30762 - Posted: 9 Jun 2013 | 22:15:47 UTC

There is more information coming; this is good, but it also makes choosing right more difficult for me.
I found this MOBO: EVGA Classified SR-2 - Mainboard - HPTX, 270-WS-W555-E1.
Okay, a bit expensive, but it can fit two CPUs and easily 3 or 4 GPUs.
But isn't it a bit "over the top" for crunching?

I will also do CPU crunching for Rosetta, Docking or Fightmalaria. I have chosen these projects for their cause. So it needs to be rather fast, but not too expensive and not use too much energy. (I have a rather high energy bill, but need way less heating in the winter :) But this is my way to help science to eventually cure some highly affecting diseases.) That was why I was thinking of a six-core i7: with HT, 8 threads for CPU crunching, 4 for GPU crunching and 1 or 2 idle.

For a crunching system I need:
MOBO,
CPU,
Memory,
Hard disk,
PSU,
Case with fans
(CD-ROM/DVD-ROM to install drivers)
And that's it, am I correct?

It would be nice to compare if several people give some ideas about combinations of MOBO, CPU and PSU. (The GPU will be a GTX660, or better if prices fall in the coming weeks/months.)
____________
Greetings from TJ

nanoprobe
Joined: 26 Feb 12
Posts: 184
Credit: 222,376,233
RAC: 0
Message 30763 - Posted: 10 Jun 2013 | 0:13:22 UTC - in response to Message 30754.
Last modified: 10 Jun 2013 | 0:15:10 UTC


The chips of the CPU series prior to Ivy Bridge (namely Sandy Bridge, Sandy Bridge-E, Gulftown, Bloomfield) were actually soldered to the IHS (Integrated Heat Spreader, the metal housing of the chip), resulting in good thermal transfer to the IHS and low CPU temperatures. But Intel is using some "cheap" thermal interface material (TIM) on the new series, so if you want lower CPU temperatures to overclock more, you should remove the IHS (voiding the warranty) and put the CPU cooler directly onto the chip (very risky), and/or use better and/or thinner TIM. See this video.

The reason Intel switched from solder to TIM on IB was because the conductivity of the solder was not compatible with the new tri-gate transistor technology.

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30780 - Posted: 11 Jun 2013 | 18:37:32 UTC - in response to Message 30760.

For crunching the i7-4770K has nothing over i7-3770K. In fact I would say it's just an i7-3770K done all wrong! Seriously, we don't want lots of USB3 ports and another crap iGPU (which hiked the TDP from 77W to 84W).

Two major updates with no improvement on desktop boxes.

I measured the actual crunching difference at 4.2GHz of an i7-2600K and an i7-3770K for ~14 CPU projects and there was nothing between them. Only one app from one project was significantly faster (7%), and many were slightly (1 to 3%) faster on the 2600K.

Yep, both Intel and AMD have had little performance increase on the desktop. The big difference is price. AMD has made significant strides in onboard graphics performance though. Intel is trying to catch up in that department.

Has AMD any plans to go to PCIE3.0 boards?

PCI Express 3.0 - The Latest Graphics Standard Now on AMD Boards:

http://www.asus.com/Motherboards/SABERTOOTH_990FXGEN3_R20

http://www.maximumpc.com/article/news/ces_2013_look_ma_amds_990fx_does_have_pcie_30_support_video

AMD is one of the major players in PCIe 4.0. I've heard rumors that some of their latest processors already include support, so perhaps they're not too concerned with wholeheartedly supporting version 3, although 4 is still a ways off AFAIK.

PCI Express 4 in the Works: Set to Achieve 16Gb/s per Lane.:

http://www.xbitlabs.com/news/other/display/20110624231122_PCI_Express_4_in_the_Works_Set_to_Achieve_16Gb_s_per_Lane.html

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30781 - Posted: 11 Jun 2013 | 18:47:02 UTC - in response to Message 30759.

@Beyond: because even if you can find a project where AMD CPUs provide good performance (you can, as you said), the energy efficiency is far too bad for general 24/7 crunching, compared to Intel. I know there are places where electricity is much cheaper than where I live, but I don't think the Netherlands belongs to these. Otherwise I like some of what AMD is doing and wish they could do it even better (sore points, to be more specific: single-threaded integer performance of Bulldozer and its children, power efficiency, smarter turbo modes). MrS

Happens to be my favorite CPU project: Yoyo. Energy efficiency? The current Phenom X6 has a TDP of 95 W, Haswell is 84 W. Whoopie, 11 watts. What does the Netherlands have to do with anything? I was thinking of trying a Haswell, but after reading many reviews I'd probably go with a used Sandy Bridge at this point and save some bucks. I do like to support AMD as they've historically been the only company pushing Intel and keeping Intel from robbing us blind like they used to. For crunching on AMD the X6 is still the best, unless you're also using the built-in GPUs on some of the latest parts.

FoldingNator
Joined: 1 Dec 12
Posts: 24
Credit: 60,122,950
RAC: 0
Message 30782 - Posted: 11 Jun 2013 | 21:50:34 UTC - in response to Message 30781.
Last modified: 11 Jun 2013 | 21:51:53 UTC

For 1 kWh of electricity you have to pay €0.23 ($0.31/kWh) in the Netherlands. 11 Watts is something like 96 kWh for 1 year of (theoretical) full load. And that's more expensive compared to other countries... (I've seen a topic about this??? hmm)
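Spelled out, using the figures just quoted:

    delta_w = 11         # TDP difference quoted above (W)
    hours = 24 * 365     # one year of (theoretical) full load
    price_eur = 0.23     # EUR per kWh in the Netherlands, as quoted

    kwh = delta_w * hours / 1000.0
    print(f"{kwh:.0f} kWh/year -> {kwh * price_eur:.2f} EUR/year")

That is about 96 kWh, or roughly 22 EUR per year for an 11 W difference.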

But the TDP number isn't a guide to what the CPU really uses. The Intel TDP number is a long-term figure for maximum power consumption and has little to do with the real maximum power.
Only real-life benchmarks under full load on a similar system (with the same PSU, RAM, GPU, HDD) will give you real, useful numbers for your comparison. I guess you don't buy only a CPU! All elements of your total system matter for (full-load or any) power consumption (power use from chipsets, VRMs and so on).

Intel will give you not only a more efficient CPU with Haswell, but also built-in VRMs, a built-in north bridge/IMC and an efficient southbridge (32nm vs 65nm).

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30785 - Posted: 12 Jun 2013 | 9:10:10 UTC - in response to Message 30782.
Last modified: 12 Jun 2013 | 9:10:31 UTC

I pay only 11 eurocent per kWh during the day and 5 cent at night, in the Netherlands. At the end of the year when the bill arrives there is a tax surcharge for the total amount of electricity used. I use a lot indeed, and after the final calculation this means, in my case, that I pay 16.44 eurocent per kWh. (I draw around 1 kW, depending on whether I am free and have several rigs running.)

But I don't care about the power consumption. This thread is to get a lot of information about a good rig for GPUGRID and Rosetta (and Einstein, Docking and Malaria as back-up projects).

So I am still waiting for some reply to some of my messages below. But I am patient.
____________
Greetings from TJ

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30792 - Posted: 12 Jun 2013 | 13:30:51 UTC - in response to Message 30785.
Last modified: 12 Jun 2013 | 13:34:59 UTC

I pay only 11 eurocent per kWh during the day and 5 cent at night, in the Netherlands. At the end of the year when the bill arrives there is a tax surcharge for the total amount of electricity used. I use a lot indeed, and after the final calculation this means, in my case, that I pay 16.44 eurocent per kWh. (I draw around 1 kW, depending on whether I am free and have several rigs running.)

Electricity is $0.09/kWh here (kind of off-peak, fixed rate, but it allows the power company to cycle the heating and air conditioning). We don't use air conditioning more than 1 or 2 days a year and don't use heating at all (the computers provide more than enough heat [Minnesota]).

So I am still waiting for some reply to some of my messages below. But I am patient.

You've seen more than enough jabbering about CPUs from various fanboys (me included). XFX, Antec, Corsair, Seasonic and Sparkle all make good power supplies. Google some reviews of the particular model you're looking at, although the gold and platinum models from any of these are most likely very good. The Rosewill platinum line seems to be good too. I mostly use low-cost Antec 300 cases. Very good airflow. You'll have to add at least a 120mm side fan and a front fan or two. The last case I bought was the NZXT Source 210 and it's very impressive for the cost:

http://benchmarkreviews.com/index.php?option=com_content&task=view&id=804&Itemid=99999999&limit=1&limitstart=4

In my experience ASUS, ASRock, Gigabyte, MSI, Foxconn and Biostar all make some good and bad motherboards. I'd stay away from ECS completely. Read the reviews (including Newegg comments) and make sure the PCIe configuration (preferably 2 x16 lanes) and spacing are good for 2 GPUs.

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30807 - Posted: 12 Jun 2013 | 22:46:48 UTC - in response to Message 30792.

With ECS you mean Elitegroup Computer Systems? I don't think it is available in the Netherlands. There is not one large store here that has everything. There are a few on the net, but with limited stock/availability.
I have a PSU from Corsair and that one is indeed very good (with a lot of plugs).

I found an 80+ Gold PSU from EVGA, and a MOBO from EVGA with good space for three GPUs, and of course EVGA GPUs, all to order directly from EVGA in Germany.
And as I like EVGA, I will go for them.
For the CPU I am still thinking, but I guess an i7 (6 or 4 cores) which doesn't use too many watts, and a Zalman cooler.
____________
Greetings from TJ

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30809 - Posted: 13 Jun 2013 | 0:47:47 UTC - in response to Message 30807.

With ECS you mean Elitegroup Computer Systems?

You got it.

I found an 80+ Gold PSU from EVGA, and a MOBO from EVGA with good space for three GPUs, and of course EVGA GPUs, all to order directly from EVGA in Germany.
And as I like EVGA, I will go for them.
For the CPU I am still thinking, but I guess an i7 (6 or 4 cores) which doesn't use too many watts, and a Zalman cooler.

Sounds like a nice machine.

John C MacAlister
Joined: 17 Feb 13
Posts: 181
Credit: 144,871,276
RAC: 0
Message 30816 - Posted: 13 Jun 2013 | 13:01:27 UTC
Last modified: 13 Jun 2013 | 13:01:48 UTC

Later today (13 June, or 14 June UTC) I will bring my new PC online.

CPU = AMD FX-8350;
MOBO = ASUS Sabertooth 990FX/GEN3 R2.0;
GPU = ASUS GTX 650 Ti (X2).

The plan for June and July is that one of these will be dedicated to GPUGrid, one to Folding@home and the CPUs to rosetta@home and Drug Search for Leishmaniasis.

My AMD Phenom II 1090T machine is also crunching rosetta@home and Drug Search for Leishmaniasis as well as Folding@home.

Can't wait to get the 8 core working!!

John

flashawk
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Message 30818 - Posted: 13 Jun 2013 | 13:46:55 UTC - in response to Message 30816.

That motherboard should be able to handle the just released FX-9590 CPU, I'd like to see how well it does at CPDN.

FoldingNator
Joined: 1 Dec 12
Posts: 24
Credit: 60,122,950
RAC: 0
Message 30840 - Posted: 14 Jun 2013 | 6:37:57 UTC - in response to Message 30785.

@TJ

But I don't care about the power consumption. This thread is to get a lot of information about a good rig for GPUGRID and Rosetta (and Einstein, Docking and Malaria as back-up projects).

Yes I know. But my reaction had to do with what Beyond wrote:

from Beyond:
Happens to be my favorite CPU project: Yoyo. Energy efficiency? The current Phenom X6 has a TDP of 95 W, Haswell is 84 W. Whoopie, 11 watts. What does the Netherlands have to do with anything?


Your 16.44 ct/kWh is minus energy tax? (heffingskorting in Dutch)

About your hardware I can't give you good advice. What I would do is buy maximum performance for as little money as possible... but you know that too, you aren't stupid! haha ;)
Quadro sounds very good, but the two FX4600 cards are only renamed 8800GTX cards (or sort of). I would get a second-hand 660Ti or 670 now at Tweakers.net or something.

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30844 - Posted: 14 Jun 2013 | 11:36:07 UTC - in response to Message 30840.

@TJ
Your 16.44 ct/kWh is minus energy tax? (heffingskorting in Dutch)

Yes indeed. I have added all costs, like meter costs, network costs and energy tax, then subtracted the benefits (a few euro because I pay automatically, and the part of the energy tax we get back). Then the total is divided by the number of kWh I have used over the last year. As my prices per kWh vary per year, the mean price also varies per year.
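As a sketch of that method (the component amounts below are placeholders; only the 16.44 eurocent result and the procedure come from the post):

    # Effective rate = (sum of all costs - refunds) / kWh used in the year.
    costs = {"day usage": 800.0, "night usage": 250.0, "meter": 30.0,
             "network": 220.0, "energy tax": 350.0}           # EUR, assumed
    refunds = {"autopay discount": 5.0, "tax rebate": 320.0}  # EUR, assumed
    kwh_used = 8000.0                                         # assumed

    rate = (sum(costs.values()) - sum(refunds.values())) / kwh_used
    print(f"Effective rate: {rate * 100:.2f} eurocent/kWh")

With these placeholder amounts the effective rate comes out near the 16.44 eurocent TJ calculated.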
But I use gas only for cooking and showering, not heating :)

____________
Greetings from TJ

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30960 - Posted: 24 Jun 2013 | 8:34:17 UTC

I have a quad with 2 nVidia cards with CUDA compute capability 1.0, so they are obsolete. I have two AMD HD5870 cards left; I can use these for Einstein. But there are only two 6-pin power plugs and I need 4. Now the PSU (700 Watt) has a very short 8-pin connector free (female connector). I guess I can use an adapter here from one 8-pin to two 6-pin PCIe? What is such a cable called? I need one quite long, or perhaps with an extra extension cable. However, I cannot find one, but I guess that's because I don't know what it is called.
Does anyone know what I am talking about? Thanks.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 30977 - Posted: 24 Jun 2013 | 17:43:36 UTC - in response to Message 30960.

If that 8-pin connector is very short, I suppose it's the 8-pin EPS/ATX12V CPU power for the mainboard. I wouldn't try to repurpose this for powering GPUs, as I don't know what side effects that might have.

What I'd do: plug one of the 6-pin PCIe plugs into each GPU. The HD5870 hardly exceeds 150 W (mainly with Furmark etc.), so this should cover the basic power use. For the remaining 2 power ports I'd use Y-adapters from 4-pin molex to 6-pin PCIe. Connect each arm of the Y to 2 different strands from the PSU (there are probably 2 of them available). I forgot how many molex connectors your PSU has.. I hope it's not the one with just 2. There are also Y-adapters from 1 molex to 2.. but at this point it might actually become dangerous (possibly drawing 75 W over a line made for 40 W).
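The danger is easy to quantify with a quick current check on the 12 V line (the ~40 W per-line figure is the rough rating mentioned above):

    # Current on the 12 V line for each wiring option.
    pcie6_w, volts = 75.0, 12.0  # a 6-pin PCIe plug may deliver up to 75 W

    print(f"Full 6-pin draw:             {pcie6_w / volts:.2f} A")
    print(f"Per molex with a 2-to-1 Y:   {pcie6_w / 2 / volts:.2f} A")
    print(f"Single molex feeding it all: {pcie6_w / volts:.2f} A "
          f"(vs ~{40 / volts:.1f} A for a ~40 W line)")

With a proper two-molex Y-adapter each line carries about 3.1 A; a single-molex adapter would push 6.25 A down a line comfortable with roughly half that.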

BTW: I was ill for a few days and stopped the previous discussion. I disagree with quite a few things mentioned here.. but this doesn't really match the topic anyway. Anyone interested in further Haswell and "AMD vs. Intel for BOINC" discussions?

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30979 - Posted: 24 Jun 2013 | 18:29:17 UTC - in response to Message 30977.

Thanks ETA, glad you are better again; it was way too warm to be ill a few days ago in our region.

No, it is not the same, but this is a Dell too. The other one (1000W) is indeed 80+ Gold, but I don't know its exact efficiency. I will replace this PSU with a 750W one from EVGA; then I have enough power connectors and 90% efficiency. I'll put the 2 HD5870s in for Einstein and Albert.

Glad to read your advice; I had never seen such a plug before. Short wire; a long but rather small 8-hole connector. This PSU has only 2 molex free, but I need 4, so I can only put two GPUs in there with one 6-pin each (2 GTX660s).

____________
Greetings from TJ

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30981 - Posted: 24 Jun 2013 | 18:43:00 UTC

Another question.
The Alienware with the liquid cooling seems to stay pretty hot, around 70°C.
So I would like to replace it with a Zalman processor cooler and put 2 GTX770s in.
Then I do not need to build a new rig just now (and if I can save more, I can buy better stuff in the winter at lower prices by then, I hope).
I guess it is safe to take the entire liquid cooler out at once? I have inspected the system carefully, but other than a pump on the processor and a radiator in front of a fan there is nothing else. I don't see any reservoir with coolant. Has anyone experience with an Alienware?
____________
Greetings from TJ

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30987 - Posted: 24 Jun 2013 | 23:17:55 UTC - in response to Message 30977.


BTW: I was ill for a few days and stopped the previous discussion. I disagree with quite a few things mentioned here.. but this doesn't really match the topic anyway. Anyone interested in further Haswell and "AMD vs. Intel for BOINC" discussions?

MrS

Good to see you back up and posting.
There are lots of views, slants and takes on AMD and Intel processors, which are mostly multi-threaded now. Maybe it's worth using the Multicore CPUs thread to present any further facts or opinions.
I would be keen to hear any opinions relating to the cheapest/best setup to support a GPU or multiple GPUs.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,222,865,968
RAC: 1,764,666
Message 31129 - Posted: 29 Jun 2013 | 19:05:29 UTC - in response to Message 30760.

Since the i7-980X there has been little improvement in 'out and out' processing power from either AMD or Intel. Lots of different generations, revisions, sockets, packaging, hype and waffle, but limited performance gain.

It was about time to change the course of the improvement of processing power. The result of chasing more and more gigahertz was the Pentium D processors with 95-130W TDP. It was impossible to dissipate that much heat with the supplied coolers. (I don't like them since then, because they can collect a nice blanket of dust on top of the fins of the heatsink, blocking nearly all the airflow and causing overheating, or the CPU underclocking itself.)

Intel and AMD have largely ignored CPU improvement wants from the high end workstation, gaming and crunching markets.

Intel's profit from those marginal markets is insignificant. Just like AMD's, but AMD bought ATi to increase their market coverage.

Instead they have sought gains in other areas; more done on the processor, better performance/watt.

That is what the market has needed since then. That is what GPUs are better at than CPUs. That is why supercomputers are no longer built on CPUs alone. That is why Intel (and AMD) haven't been able to sell as many CPUs since then. If someone wants a high-end workstation with more CPU power (it would be unusual today), it could have two CPUs in it without overheating. Crunching is a long-term activity, so it's better to minimize the cost of the energy it consumes.

At the same time a lot of what we are now working with is the product of marketing strategy; on-die controllers, Intel only chipsets, push towards laptops and portable devices, ...

Nowadays the computing power of mobile devices (smartphones, tablets) is enough for everyday use (office tasks, browsing, social networking), so the office PC/laptop business is in trouble. They have to be more like the mobile devices, or they will become extinct, because the mobile computing devices are much more power efficient.

... fewer PCIE lanes, and the 'shroud of the cloud' (server processors) - all at the expense of high end desktop improvements.

After years of chasing CPU speed to prove the needlessness of 3D accelerators, Intel lost this battle when NVidia presented their G80 architecture. So now they focus on what is left for them, and in the meantime try to catch up with NVidia, AMD (ATi), and ARM. They are 3-5 years behind them, which could be deadly in the computing market.

But the same can be said of NVidia and OpenCL - it's typical business maneuvering.

Without that business maneuvering Intel, AMD and NVidia would be busted, and we couldn't have their devices to crunch on.

When SB arrived we reached the point where a laptop processor existed that was capable of handling >99% of office performance requirements.

Sure. There is no need for faster PCs in the office. But they could still be more power efficient, to eliminate active cooling (and the noise and the dust pileup).
I started my computing experience with passively cooled CPUs (like the Zilog Z80 and MOS 6510, and later PCs up through the Pentium processor), and I (like most consumers) would like to have passive cooling back on the CPUs of modern office PCs and laptops.

Since Gulftown, Sandy Bridge-E has been Intel's only attempt at a genuine high end workstation/gaming/crunching system, but it failed to deliver PCIE3 and thus failed almost completely - you don't do CAD on the CPU, don't game on the CPU and don't crunch on the CPU (relatively speaking).

In other words: there is improvement in the high end desktops, but the majority of that comes from NVidia and ATi (AMD), not from Intel.

For crunching the i7-4770K has nothing over i7-3770K.

We'll see, as I now have one. Maybe some projects will gain from the doubled L1 and L2 cache bandwidth and the other architectural improvements. (I don't expect the scientific applications to be able to utilize AVX2.)

In fact I would say it's just an i7-3770K done all wrong! Seriously, we don't want lots of USB3 ports and another crap iGPU (which hiked the TDP from 77W to 84W).

If you don't use the iGPU, it won't increase the CPU's power consumption, as the 4xxx series have even more advanced power gating features than the 3xxx series.
The USB3 ports are on the 8x series chipset, which is also more power efficient than the 7x series.
It has nothing to do with your statement above, but I want to share the results of my two systems' power consumption measurements:
1. Core2 Duo E6600 (2x2.4GHz), 4x512MB DDR2 800MHz RAM, Intel DQ965GF motherboard
2. Core i7-4770K (8x3.7GHz), 2x2GB DDR3 1333MHz RAM, Gigabyte GA-Z87X-OC motherboard
The PSU and the HDD are the same.
Both systems consumed around 90-96W under full load (Core2 Duo: 2 threads, Core i7: 8 threads).

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31133 - Posted: 30 Jun 2013 | 10:41:35 UTC
Last modified: 30 Jun 2013 | 10:46:29 UTC

A few questions about processors.
If I search Intel for info I see the following:
Number of cores: 6
Box: no
Bus type: DMI
etc. What does "Box" mean?

How do I know which chipset (and thus which type of MOBO) will fit? I notice the chipset isn't mentioned for all processors. Do I only have to look for a matching socket?

This one is available in the Netherlands: Intel Core i7-4770K - 3.5GHz - Socket 1150 - Unlocked. What does that "Unlocked" mean? It is not explained on the site.

Finally, I see that in the Netherlands AMD processors are very cheap compared with Intel; it could be a factor of 3 or 4. Am I right that AMD does not have HT?
Will AMD processors work flawlessly with nVidia GPUs?
Thanks.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31134 - Posted: 30 Jun 2013 | 10:52:50 UTC - in response to Message 31133.
Last modified: 30 Jun 2013 | 11:02:04 UTC

Box means it's in a box (retail), rather than on a tray (OEM product). Box usually means the CPU comes in a box with a heatsink and fan.

Unlocked means you can overclock the CPU.

AMD does not use HT.

Generally speaking, AMD processors do not perform as well as Intel CPUs, but AMDs tend to be less expensive. While you could pay around 400 Euros for an Intel CPU, you could also pay 60 Euros for a lesser Intel CPU or get an AMD processor.

You will have to research each processor to make sure it fits the board you want. NVidia GPUs run fine on systems with AMD or Intel processors.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31135 - Posted: 30 Jun 2013 | 11:22:54 UTC

Well, I have one AMD system with an AMD CPU and an AMD GPU. I use it only for my study, and I must say it runs nicely, is quiet and boots fast. I haven't found any problems with it.

I didn't know that it is so important to say that it is in a box :)
I will use a Zalman CPU cooler.

I found an AMD 8-core Black Edition 4GHz (125W) for €179, while a comparable (however much they cannot really be compared) Intel 3.5GHz (77W) would cost €305.

I will go for Intel then: first pick a processor and then the MOBO.
However, my question about the socket and the chipset remains unanswered.
If a processor needs an 1150 socket and the MOBO has an 1150 socket, then it will fit. But does the chipset play any role? On several retail sites, and on Intel's, I do not always find info about the chipset. That is why I am asking, as it is easy to buy parts that don't work together.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31136 - Posted: 30 Jun 2013 | 11:49:01 UTC - in response to Message 31135.
Last modified: 30 Jun 2013 | 12:25:14 UTC

Any LGA1150 CPU will work on any LGA1150 motherboard.

Note that the CPUs bring limitations of their own: an i3 won't support PCIE3, but an i5 or i7 will. If a CPU only clocks to 2GHz it will reduce GPU performance here, but only slightly.

My next system will be another Linux rig with 2 GPUs. It will replace an existing (more power hungry) system (another 17% electricity price hike is just about to kick in here thanks to privatization) and will be built for purpose (GPU crunching). As I don't intend to use it to crunch for any CPU projects, I don't want to spend £350 on a CPU, so I will get the cheapest dual-core Intel CPU I can (£50). I know I will lose some performance running at PCIE2.0 @ x8 and supported by a lesser CPU, but it won't be much, and it's certainly not worth paying an extra £300 plus running costs for a couple of percent improvement.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31137 - Posted: 30 Jun 2013 | 12:33:59 UTC - in response to Message 31133.

Finally, I see that in the Netherlands AMD processors are very cheap compared with Intel; it could be a factor of 3 or 4. Am I right that AMD does not have HT?
Will AMD processors work flawlessly with nVidia GPUs?

AMD processors work perfectly with NVidia GPUs. I'm running 9 machines with one NVidia GPU and one ATI/AMD GPU in each. Intel is a little faster in most CPU projects but AMD is faster in some. Much of the reason you're seeing faster Intel benchmarks in CPU reviews is that the top AMD processors have more cores and most single programs don't use that many cores at once. From the Guru3D review:

Concluding then. I'll keep saying this: personally I would have preferred a faster per-core performing AMD quad-core processor rather than an eight-core processor with reduced per-core performance. However, we do have to be clear here: we have been working with the FX 8350 processor for a while now and it simply is a great experience overall. Your system is very fast, feels snazzy and responsive. The Achilles heel simply remains single-threaded applications. The problem here is that it affects game performance quite a bit, especially with high-end dedicated graphics cards, and that's why in its current form the FX series simply is not that popular amongst the gaming community.

However, when multi-threading kicks in, whether that is in a game or an application ... that loss is turned around into a gain.

http://www.guru3d.com/articles_pages/amd_fx_8350_processor_review,21.html

Of course DC crunching uses all the cores so multicore usage is not a problem. As a couple reviews mention: with the money you save on the AMD processor you can afford a better GPU and thus end up overall with a faster system.

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 31139 - Posted: 30 Jun 2013 | 16:29:42 UTC
Last modified: 30 Jun 2013 | 17:25:41 UTC

Everybody seems to forget one big glaring development: Jim Keller is back at AMD, and there are going to be 1, maybe 2, more CPU upgrades for socket AM3+.
Ya, ya, ya, just another AMD fanboy spouting off, but I've already seen a leaked, partially redrawn Steamroller core die shot (socket AM3+), so he's hard at work. The one and only reason that sold me on AMD years ago is the upgrade path with their motherboards; they stick with the same socket for a long time (even AM3 CPUs work in AM3+ boards).

I'm not trying to start anything here, I just get tired of AMD being treated like it's a dirty word. I think after what Intel just released, it's too big of a carrot for Jim not to jump on and make up all the lost ground.

Edit: I mean Jim Keller, sorry about that.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31140 - Posted: 30 Jun 2013 | 19:06:31 UTC - in response to Message 31139.

The Haswell paradox: The best CPU in the world… unless you’re a PC enthusiast, By Sebastian Anthony

"Where the Core i7-3770K is happy to sit at 3.7GHz under full load at 90C, the Core i7-4770K throttles back to 3.5GHz within moments of starting Prime95."
K = a pig in a poke!
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

klepel
Send message
Joined: 23 Dec 09
Posts: 189
Credit: 4,720,592,446
RAC: 1,894,958
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31142 - Posted: 1 Jul 2013 | 3:01:55 UTC

Honestly, I do not know much about computer science, and if it were not for BOINC I still would not read about it.

But I do know that I use all cores or threads (HT) for BOINC on all my computers all the time, and if I have a GPU in the computer, I keep one thread/core reserved for feeding it.

So in the end I am not very concerned about the single-thread capabilities of either CPU maker; in my case all cores or threads are always 100% occupied, and one maker is a lot cheaper, helps keep the price of CPUs in check, and uses the same motherboard across various generations. Or am I missing something? Although I still have one question mark: the TDP thing. It seems to me that there is a slight difference in its definition between the two competitors, which does not help when comparing this important specification of each CPU. But on the other hand it is quite similar for more or less the same throughput, isn't it?

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 31143 - Posted: 1 Jul 2013 | 14:04:04 UTC - in response to Message 31142.

Or do I miss something?

What you're missing is that higher per-thread performance means higher overall performance. Especially for distributed computing projects like BOINC, which utilize all available computing power, this means you'll get significantly more work done.

For crunching, I think AMD just can't beat Intel right now. The only AMDs I'd pick would be the 8-cores, but then again they're not real 8-cores: they are 4-cores with doubled integer units *, making for a half-8-core, if such a term is valid. So I don't really know how they would fare against Intel 4-cores at similar frequencies.

* maybe other core parts as well
____________

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31144 - Posted: 1 Jul 2013 | 14:52:24 UTC - in response to Message 31143.

Or do I miss something?

What you're missing is that higher per-thread performance means higher overall performance. Especially for distributed computing projects like BOINC, which utilize all available computing power, this means you'll get significantly more work done.

For crunching, I think AMD just can't beat Intel right now. The only AMDs I'd pick would be the 8-cores, but then again they're not real 8-cores: they are 4-cores with doubled integer units *, making for a half-8-core, if such a term is valid. So I don't really know how they would fare against Intel 4-cores at similar frequencies.

* maybe other core parts as well

An AMD 8-core will do more work than a 4-core Intel at the same clock speed.
Also keep in mind that the Intel quad-cores with HT run 8 threads but actually have 4 real cores. And the 8-core AMDs have higher clock speeds than "comparable" Intels. I have a 4-core AMD and it does not underperform against Intel, and its temperature is lower. So that is a plus, as well as the lower price.
The only minus point is that AMDs have a higher TDP and theoretically use more power. This is of course an issue when running 24/7.
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31145 - Posted: 1 Jul 2013 | 16:35:26 UTC - in response to Message 31143.

For crunching, I think AMD just can't beat Intel right now. The only AMDs I'd pick would be 8-cores, but again they're not real 8-core, they are 4-core with double integer units *, making for half-8-core, if such a term is valid. So, don't really know how they would fare against Intel 4-cores at similar frequencies.

Oversimplified explanation: let's call it 8 integer cores and 4 floating point cores. AMD decided to focus on the much more common integer tasks and rely more on extensions to bolster floating point. This was a change from the Phenom X6, which had 6 powerful hardware floating point cores. In fact, a case could be made that the Phenom X6 is still the best bang-for-the-buck processor available. At some projects it's faster than ANY Intel i7 (my favorite CPU project Yoyo, for instance). The 95W 1045T can be had for $80 from Microcenter or $90 from TigerDirect and works on the latest AM3+ motherboards. Think of all the extra cash you could use to buy a better GPU. For instance, the price difference would more than move you up from a 650 Ti to a 660 Ti and still leave enough money for a nice dinner, another 8GB of RAM or a better power supply.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31146 - Posted: 1 Jul 2013 | 17:13:55 UTC - in response to Message 31145.

The Phenom X6 is also slightly faster on Einstein, my main project.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31147 - Posted: 1 Jul 2013 | 17:25:35 UTC - in response to Message 31145.

"8 integer cores and 4 floating point cores" - Now there's a good description, and explanation as to why the Phenom X6 processors outperform the latest 8-core AMD processors for floating point apps.

Which of the Yoyo apps does the 1045T perform best at?

"The Phenom X6 is also slightly faster on Einstein, my main project".
- Faster than what, an 8core AMD or an i7 (and which one)?

Some CPU projects are very dependent on memory speeds.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31148 - Posted: 1 Jul 2013 | 18:41:56 UTC - in response to Message 31147.

"8 integer cores and 4 floating point cores" - Now there's a good description, and explanation as to why the Phenom X6 processors outperform the latest 8-core AMD processors for floating point apps.

Which of the Yoyo apps does the 1045T perform best at?

"The Phenom X6 is also slightly faster on Einstein, my main project".

I run yoyo ecm but even projectwide the X6 is fastest:

http://www.rechenkraft.net/yoyo//top_hosts.php

The 2 Opterons at the top are multi-CPU servers and the Intel in 3rd place has multiple CPUs (24 cores). 4th (and the fastest single-CPU machine) is my lowly 1035T running 5 cores on Yoyo. Next is a 6-core/12-thread HT 3930K that just popped up a few places. From there down on the first page (discounting the multi-CPU servers) it's mostly AMD, even though there are far more Intels running the project. Even the 8120 does well. The Sandy and Ivy Bridges have done far better than the earlier Intels and some are hovering near the top, but consider the cost. If you look at the all-time list it's even more telling: 14 of the top 20 single-CPU machines are Phenoms. It will be interesting, though, when a few of the new 8350 8-cores work their way up the RAC list. I would guess they might be the new Yoyo speed champs.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31149 - Posted: 1 Jul 2013 | 22:06:55 UTC - in response to Message 31147.

Both.
____________
Greetings from TJ

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 31151 - Posted: 2 Jul 2013 | 10:16:38 UTC

OK, I spent some time looking at BOINCstats CPU break-downs to get as clear a picture of CPU performance as I can. First of all, I looked at not only Yoyo, but also WCG and SETI, as Yoyo is really a niche project with only ~3700 hosts vs ~216000 of WCG and ~207000 of SETI. Yoyo and WCG run apps of various kinds, so they should reflect overall CPU performance adequately well. SETI is more specific, but should also help.

Then I had some difficulty choosing the column(s) to sort CPUs by, as I wanted to take out historical, populative and usage (how many hours per day of BOINC execution) factors. Total credit is populative, historical and usage-dependent; Average credit is populative and usage-dependent; Credit per CPU is historical and usage-dependent; Average credit per CPU is usage-dependent. That leaves us with Average credit per CPU second, which sounds independent of all three factors.
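
To make those dependencies concrete, here is a toy sketch (all numbers and the multiplicative breakdown are my own invention for illustration, not BOINCstats data) of how each column scales with the three factors:

# Toy model of how the BOINCstats-style columns scale with the three factors.
# Every number here is invented purely for illustration.
per_cpu_second = 0.02   # the underlying per-CPU speed we actually want
hosts = 50              # "populative" factor: hosts using this CPU
days = 400              # "historical" factor: days of crunching
hours_per_day = 12      # "usage" factor: BOINC hours per day

cpu_seconds_per_day = hours_per_day * 3600

total_credit = per_cpu_second * cpu_seconds_per_day * days * hosts   # all three factors
average_credit = per_cpu_second * cpu_seconds_per_day * hosts        # populative + usage
credit_per_cpu = per_cpu_second * cpu_seconds_per_day * days         # historical + usage
average_credit_per_cpu = per_cpu_second * cpu_seconds_per_day        # usage only
average_credit_per_cpu_second = per_cpu_second                       # none of the three

# Only the last metric stays the same if you vary hosts, days or hours_per_day.
print(average_credit_per_cpu_second)   # 0.02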

For Yoyo, here are the first 20 CPUs (link):

1 Cell Broadband Engine [Model 0 ]
2 Cell Broadband Engine
3 AMD Athlon(tm) II X2 265 Processor
4 PS3Cell Broadband Engine
5 Intel(R) Core(tm) i5-2405S CPU @ 2.50GHz
6 AMD Phenom(tm) II X2 521 Processor
7 Intel(R) Core(tm) i5-2550K CPU @ 3.40GHz
8 Genuine Intel(R) CPU @ 2.90GHz
9 Intel(R) Core(tm)2 Extreme CPU X9775 @ 3.20GHz
10 Intel(R) Core(tm) i5-3570K CPU @ 3.40GHz
11 Intel(R) Xeon(R) CPU E3-1275 V2 @ 3.50GHz
12 Intel(R) Core(tm) i5-2500K CPU @ 3.30GHz
13 AMD Phenom(tm) II X4 B50 Processor
14 Intel(R) Xeon(R) CPU E5205 @ 1.86GHz
15 Intel(R) Core(tm) i5-2500 CPU @ 3.30GHz
16 Intel(R) Xeon(R) CPU E31220 @ 3.10GHz
17 AMD Phenom(tm) II N620 Dual-Core Processor
18 AMD Phenom(tm) II X4 B95 Processor
19 Intel(R) Core(tm) i5-3470S CPU @ 2.90GHz
20 Intel(R) Core(tm) i5-3570 CPU @ 3.40GHz

Here are the top 20 CPUs for WCG (link):

1 IntelQEMU Virtual CPU version 0.9.0
2 Intel(R) Core(tm)2 Quad CPU @ 3.20GHz
3 Intel(R) Core(tm) i5-2300 CPU @ 2.80GHz
4 AMDQEMU Virtual CPU version 0.9.1
5 Genuine Intel(R) CPU 0 @ 2.70GHz
6 Intel(R) Xeon(R) CPU X5270 @ 3.50GHz
7 Intel(R) Xeon(R) CPU X5492 @ 3.40GHz
8 Intel(R) Core(tm)2 Extreme CPU X9750 @ 3.16GHz
9 Intel(R) Core(tm) i7 CPU X 995 @ 3.60GHz
10 Intel(R) Xeon(R) CPU X3380 @ 3.16GHz
11 Intel(R) Xeon(R) CPU E3113 @ 3.00GHz
12 Intel(R) Xeon(R) CPU X5677 @ 3.47GHz
13 Intel(R) Core(tm)2 CPU T6600 @ 2.20GHz
14 Intel(R) Core(tm) i5 CPU S 750 @ 2.40GHz
15 Intel(R) Xeon(R) CPU X3370 @ 3.00GHz
16 Intel(R) Core(tm)2 Duo CPU E8435 @ 3.06GHz
17 Intel(R) Xeon(R) CPU X5272 @ 3.40GHz
18 Intel(R) Core(tm) CPU 750 @ 2.67GHz
19 Intel(R) Xeon(R) CPU X5482 @ 3.20GHz
20 Intel(R) Xeon(R) CPU X5470 @ 3.33GHz

Here's the list for SETI (link):

1 Intel(R) Core(tm) CPU 975 @ 3.33GHz
2 Intel(R) Core(tm) CPU 960 @ 3.20GHz
3 AMDAthlon 64 Dual Core 4200+
4 AMD Athlon(tm) X2 Dual Core Processor 6850e
5 Intel[EM64T Family 6 Model 23 Stepping 10]
6 [Intel64 Family 6 Model 23 Stepping 10]
7 AMD Athlon(tm) 64 X2 Dual Core CPU 4400+
8 Intel(R) Xeon(R) CPU E5240 @ 3.00GHz
9 AMD Sempron(tm) Dual Core Processor 4700
10 AMD Phenom(tm) Ultra X4 22000 Processor
11 Intel(R) Core(tm) CPU 860 @ 2.80GHz
12 Intel(R) Xeon(R) CPU L3110 @ 3.00GHz
13 Genuine Intel(R) CPU 000 @ 2.93GHz
14 AMD[x86 Family 15 Model 107 Stepping 2]
15 Quad-Core AMD Opteron(tm) Processor 2358 SE
16 Intel(R) Core(tm)2 CPU E8500 @ 3.16GHz
17 Intel(R) Xeon(R) CPU X5272 @ 3.40GHz
18 Genuine Intel(R) CPU 000 @ 3.20GHz
19 Intel(R) Core(tm) i7 CPU 965 @ 3.20GHz
20 Intel(R) Core(tm)2 Extreme CPU X9000 @ 2.80GHz

I haven't included the credit numbers, as credit is dependent on the project, so the values themselves wouldn't contribute to valid comparisons. Besides, the numbers are there in the BOINCstats site for everybody to see. Obviously, there are other factors that shape the lists and that we can't control: overclocking, memory speeds, heat-induced clock throttling, etc.

Here's what I make of these lists: SETI appears to utilize AMD CPUs well, although the top AMD, the ancient X2 4200+ (which I had myself before replacing it with my i7-870!), comes in third place. WCG, with its massive population and wide application range, clearly shows Intel as the winner. Finally, Yoyo gives the win to the PS3 with its Cell 9(?)-core processor and seems to favor AMDs slightly.

So I say that my conclusion holds still: for crunching in general, Intel is better than AMD.
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31152 - Posted: 2 Jul 2013 | 11:52:44 UTC - in response to Message 31151.

A few additional issues with those results.
For Yoyo, there are several apps and BOINCstats doesn't reveal which were being crunched by which processor.
The Cell processors are basically a cross between a CPU and a GPU, so they don't make for good comparisons. You certainly can't plug an NVidia GPU into one, so it's irrelevant for here.
The credit/h at WCG can vary by >10% depending on what CPU you have and what project you crunch for. Historically WCG has usually had 6 or 7 active projects. At present it's closer to two. Their apps and relative contributions to projects vary significantly.
The results are for 'logical cores'. So an i7 will be seen as having 8 processors, while an Athlon X2 265 will be seen as two processors. Each core of the Athlon may get 0.03 credits per second, but the processor gets 0.06 credits per second as a whole. An i7-3970X gets 0.019585 credits per thread, so 0.15668 credits/second for the entire processor (~2.6 times the X2 265).
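
A minimal sketch of that per-thread to whole-processor conversion, reusing the rates quoted above (the thread counts follow this post's assumption that the i7 counts as 8 logical cores):

# Whole-processor rate = per-thread rate x number of logical cores.
cpus = {
    "AMD Athlon II X2 265": (0.029980, 2),   # rate quoted above, 2 cores
    "Intel i7-3970X":       (0.019585, 8),   # rate quoted above, 8 threads assumed
}

for name, (per_thread, threads) in cpus.items():
    print(f"{name}: {per_thread * threads:.5f} credits/s for the whole processor")

# AMD Athlon II X2 265: 0.05996 credits/s
# Intel i7-3970X:       0.15668 credits/s  (~2.6x the X2 265)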

The only accurate way to measure performance is to base it on run time at reference/stock speeds per project. Then compile a list of results to show the relative performance of each CPU (not core/thread). After that you can look at performance/Watt, and then performance/system-Watt.

Ultimately if you want to crunch on a GPU, it's better to spend the money on the GPU(s) than the CPU.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 31153 - Posted: 2 Jul 2013 | 12:23:04 UTC - in response to Message 31152.

For Yoyo, there are several apps and BOINCstats doesn't reveal which were being crunched by which processor.

We don't care about apps though, just the general performance.

The Cell processors are basically a cross between a CPU and a GPU, so they don't make for good comparisons. You certainly can't plug an NVidia GPU into one, so it's irrelevant for here.

It may be irrelevant for GPUGRID, but it's certainly relevant for judging processing performance.

The credit/h at WCG can vary by >10% depending on what CPU you have and what project you crunch for. Historically WCG has usually had 6 or 7 active projects. At present it's closer to two projects. Their apps and relative contributions to projects varies significantly.

Agreed, but the statistics do have historical significance: they are derived from processing all kinds of tasks that have ever existed in WCG, by all types of CPU that have come and gone in the lifetime of WCG. Obviously, not all combinations are recorded, since some CPUs didn't exist when some tasks were available and vice-versa, but that's why I've only considered the top 20, to isolate the best performers (which should include newer chips), or the most efficient task-CPU combinations.

The results are for 'logical cores'. So an i7 will be seen as having 8 processors, while an Athlon X2 265 will be seen as two processors. Each core of the Athlon may get 0.03 credits per second, but the processor gets 0.06 credits per second as a whole. An i7-3970X gets 0.019585 credits per thread. So 0.15668 credits/second for the entire processor (~2.5times more than the X2 265).

Are you sure that the reported Average credit per CPU second is per thread? If it is, then the lists have to change to reflect the actual whole-CPU performance.

The only accurate way to measure performance is to base it on run time at reference/stock speeds per project. Then compile a list of results to show what the relative performances of each CPU (not core/thread) is like. After that you can look at performance/Watt, and then performance/system-Watt.

In an ideal world, yes!

Ultimately if you want to crunch on a GPU, it's better to spend the money on the GPU(s) than the CPU.

Agreed, but I guess we're almost all CPU crunchers as well, so it's not an out-of-scope discussion.
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31154 - Posted: 2 Jul 2013 | 13:21:18 UTC - in response to Message 31153.

Are you sure that the reported Average credit per CPU second is per thread? If it is, then the lists have to change to reflect the actual whole-CPU performance.

It looks fairly obvious to me; otherwise the chart is complete nonsense.

For example,
3 AMD Athlon(tm) II X2 265 Processor 11 985,829.86 8.03 89,620.90 0.73 0.029980
...
21 AMD Athlon(tm) II X4 645 Processor 27 2,258,548.79 3,311.85 83,649.96 122.66 0.021578

We are probably looking at an X4 645 being 44% faster than an overclocked X2 265 (0.021578 × 4 = 0.086312 credits/s for the whole X4, against 0.029980 × 2 = 0.059960 credits/s for the whole X2; a ratio of ~1.44).
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31163 - Posted: 2 Jul 2013 | 16:13:34 UTC - in response to Message 31151.

Vagelis, sorry you went to all this work, but these charts are obviously useless. They contain info from various apps over the years that are no longer used, from Cell and GPU clients, and from apps that once had much higher credit awards. In some cases in the past, cheating was commonplace and rampant. For instance, a Core2 host in SETI or WCG running a GPU would score much higher than an i7 without a GPU. These charts include those cases. There is no useful information to be gained from these. Even a cursory glance reveals that they make no sense. Sorry...

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31166 - Posted: 2 Jul 2013 | 17:02:39 UTC
Last modified: 2 Jul 2013 | 17:06:09 UTC

I thought I would replace the liquid cooler in the Alienware, but the 3 coolers I have don't fit. Height is not the problem, it is the width. There is little room, so a round pump fits nicely; it seems they thought about that :) Well, then new thermal paste; the old stuff seems to have been applied sparingly. It is now cool when running only the GPUs (2), but quickly rises to 83°C when crunching on 6 CPU cores. I now run one task at a time (at 75°C), and when it finishes I will once more apply a different paste (the best I can get from where I work).

But I need (want) a new system soon, and perhaps I can make a deal with the lady. However, a question, I guess for Beyond (he will be smiling): I found an Asus Sabertooth 990FX R2 and an FX8350, together costing less than a 3.5GHz i7!
Enough space for 1 GTX690 or 2 GTX660s (with a slot free). I found Kingston blue memory, 2 x 8GB at 1600MHz. It works on the MOBO, but I cannot find on AMD's site whether it works with the CPU. Will it?

Edit: I also see that the kernel times (red) are higher on the Alienware than this morning, when it ran with one GTX660 and the old thermal paste.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31167 - Posted: 2 Jul 2013 | 17:23:54 UTC - in response to Message 31163.
Last modified: 2 Jul 2013 | 17:33:16 UTC

WUProp@home has more meaningful results for both CPU projects and GPU projects:

WCG FA@H CPU comparison by app runtime

I suggest people buying a CPU look at their preferred project and the performance of different processors.


An example of some of the GPU data available:

Charts for the GTX650Ti, GTX660Ti and GTX670, for example:

http://wuprop.boinc-af.org/results/gpu.py?fabricant=NVIDIA&type=GeForce+600+Series&modele=GeForce+GTX+670&tri=projet&sort=asc
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31168 - Posted: 2 Jul 2013 | 17:42:14 UTC - in response to Message 31166.
Last modified: 2 Jul 2013 | 17:59:22 UTC

But I need (want) a new system soon, and perhaps I can make a deal with the lady. However, a question, I guess for Beyond (he will be smiling): I found an Asus Sabertooth 990FX R2 and an FX8350, together costing less than a 3.5GHz i7!
Enough space for 1 GTX690 or 2 GTX660s (with a slot free). I found Kingston blue memory, 2 x 8GB at 1600MHz. It works on the MOBO, but I cannot find on AMD's site whether it works with the CPU. Will it?

I would think that CPU & MB would work with most brand name DDR3 modules. Here's some info that might help:

http://www.tomshardware.com/answers/id-1658052/memory-asus-sabertooth-990fx-crosshair-formula.html

http://www.tomshardware.com/forum/361359-28-best-32gb-ddr3-8350-asus-sabertooth-990fx

http://forums.amd.com/game/messageview.cfm?catid=446&threadid=163493&forumid=11

http://support.amd.com/us/kbarticles/Pages/ddr3memoryfrequencyguide.aspx

I'd look into low-voltage 1.35V DDR3 to lower power usage and temps a bit. From what I understand, the FX8350 memory controller can handle memory voltages down to 1.2V.

BTW, nice system!

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 31171 - Posted: 2 Jul 2013 | 20:00:15 UTC - in response to Message 31166.

I found Kingston blue memory, 2 x 8GB at 1600MHz. It works on the MOBO, but I cannot find on AMD's site whether it works with the CPU. Will it?


Yes, it works perfectly. I have all 990FX chipsets with Kingston HyperX 1600 memory and no issues. If you get that system, PM me or Beyond and we can help you with the BIOS settings (hope you don't mind me volunteering you, Beyond ;). The FX8350 will work natively with memory up to 1866, and maybe higher with certain BIOS updates, and if you can locate a Sabertooth Gen3 R2.0 you get PCIe 3.0 (not much use yet).

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31172 - Posted: 2 Jul 2013 | 21:34:22 UTC - in response to Message 31171.

Thanks, I will do that, but I think it will take a few months before the new system is ready to build.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31174 - Posted: 2 Jul 2013 | 21:56:37 UTC

While checking a few things I saw that the T7400 has 7 slots, but only 2 of them are the PCIe slots I would need, and with no space in between. Thus this large case is small after all, and my mis-buy of the year. It is pointless to install a new PSU in it, so I will consider it a loss.

Looking for a new case: they are all small, or can only fit a cooler with a height slightly under 15cm. The Cosmos II seems large enough, but it will not fit under my desk and is quite expensive.

Then the MOBO, with space for two GPUs and space in between. Also tricky, or they become expensive.
@Zoltan, as I see you have rigs with 2 GPUs, and I suppose you want a free slot in between: which MOBOs do you have that provide this? For a new Intel build.
I will go for the AMD with the Sabertooth first, but just in case.

@Beyond, all I can find is Kingston 1.6 or 1.65V memory. It is possible that not everything can be obtained in the Netherlands. We have dozens of shops, and they each carry a little from a lot of brands, with a large price span. Sometimes I find stuff in Germany, but not all shops ship abroad. Ordering from the US is possible, but very expensive due to transport and customs, as I have experienced in the past. Perhaps I can add a memory cooler as well. I saw some things from Zalman to adjust all the fans from a front panel.
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31176 - Posted: 3 Jul 2013 | 0:54:33 UTC - in response to Message 31174.

@Beyond, all I can find is Kingston 1.6 or 1.65V memory. It is possible that not everything can be obtained in the Netherlands. We have dozens of shops, and they each carry a little from a lot of brands, with a large price span. Sometimes I find stuff in Germany, but not all shops ship abroad. Ordering from the US is possible, but very expensive due to transport and customs, as I have experienced in the past. Perhaps I can add a memory cooler as well. I saw some things from Zalman to adjust all the fans from a front panel.

I would really try to find at least standard 1.5V memory. The 1.35V stuff is getting more common, but maybe not there yet. As far as GPU spacing goes, you need two empty slots between the GPUs; that will give you a decent air space when using double-slot cards. The Sabertooth you mention has proper spacing for 2 cards.

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 31177 - Posted: 3 Jul 2013 | 10:13:17 UTC - in response to Message 31163.

Vagelis, sorry you went to all this work, but these charts are obviously useless. They contain info from various apps over the years that are no longer used, from Cell and GPU clients, and from apps that once had much higher credit awards. In some cases in the past, cheating was commonplace and rampant. For instance, a Core2 host in SETI or WCG running a GPU would score much higher than an i7 without a GPU. These charts include those cases. There is no useful information to be gained from these. Even a cursory glance reveals that they make no sense. Sorry...

Don't be sorry, it wasn't much work really. Besides, I believe you're wrong. What if credit rates have changed or some people were cheating? I believe it's pretty easy to single out dubious results, like that Core2 or the old Athlon X2, and still come up with valid conclusions. That's why I chose the top 20, so we have enough CPUs to work with while filtering out the "noise".

I went through the Yoyo list again, multiplying the single-thread Credit per CPU second (indicated by skgiven, thanks!) by the number of cores/threads in each CPU; a sketch of the computation follows the list below. I also removed the PS3/Cell entries.

So, here's the list for Yoyo:
Intel(R) Xeon(R) CPU L5638 @ 2.00GHz
Intel(R) Xeon(R) CPU E3-1275 V2 @ 3.50GHz
Intel(R) Core(tm) i5-2405S CPU @ 2.50GHz
Intel(R) Core(tm) i5-2550K CPU @ 3.40GHz
Intel(R) Core(tm) i5-3570K CPU @ 3.40GHz
Intel(R) Core(tm)2 Extreme CPU X9775 @ 3.20GHz
Intel(R) Core(tm) i5-2500K CPU @ 3.30GHz
AMD Phenom(tm) II X4 B50 Processor
Intel(R) Xeon(R) CPU E31220 @ 3.10GHz
Intel(R) Core(tm) i5-2500 CPU @ 3.30GHz
AMD Phenom(tm) II X4 B95 Processor
Intel(R) Core(tm) i5-3470S CPU @ 2.90GHz
AMD Athlon(tm) II X2 265 Processor
AMD Phenom(tm) II X2 521 Processor
Intel(R) Xeon(R) CPU E5205 @ 1.86GHz
AMD Phenom(tm) II N620 Dual-Core Processor
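
And here is a sketch of the re-ranking computation itself. The per-thread rates below are placeholders (the real values are on BOINCstats); the thread counts are the usual specs for these models:

# Re-rank CPUs by whole-processor rate: per-thread credit/s x logical cores.
cpus = [
    ("Intel Xeon L5638",      0.0130, 12),  # 6 cores / 12 threads; placeholder rate
    ("Intel Xeon E3-1275 V2", 0.0180,  8),  # 4 cores / 8 threads; placeholder rate
    ("AMD Phenom II X4 B50",  0.0220,  4),  # 4 cores, no SMT; placeholder rate
]

for name, per_thread, threads in sorted(cpus, key=lambda c: c[1] * c[2], reverse=True):
    print(f"{name}: {per_thread * threads:.4f} credits/s for the whole CPU")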

I didn't go through the WCG and SETI lists, since those projects can or could utilize GPUs, so the CPU numbers are not correct, as indicated by Beyond (thanks!). Besides, I didn't want Beyond to think I did so much work again! :P

Before you take a look at the Yoyo Top hosts list and dismiss the list above, please take into account this: This is statistical data for the whole population of Yoyo@home. Statistics is all about averages, means, medians and all that. By definition, single cases can and do exist outside the statistical domain. Also, we're discussing about CPUs here, not hosts. What would happen to Yoyo's Top hosts list if somebody with a dual Xeon L5638 system crunched for it?

The results are pretty clear, to me at least: Intel dominates Yoyo! Yes, if you own Yoyo's top or second top host, you may have a different opinion, but try to think about the general case: in general, Intel crunches better!

Finally, maybe some CPUs are overclocked, maybe credit awarding rates have changed over time, maybe some people have cheated, whatever. These discrepancies apply to both Intel and AMD CPUs, and therefore the final results shouldn't be affected, at least not much.
____________

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31178 - Posted: 3 Jul 2013 | 10:20:14 UTC - in response to Message 31176.

I found this at Amazon: Kingston KHX16LC9X3K4/16X memory, 16GB (1600MHz, CL9, 4 x 4GB, DIMM, 1.35V) DDR3 RAM kit, so that would be okay.
Indeed, the Sabertooth is nicely spaced; I will order it today, as I see that their stock is going down.

Last night my Alienware shut down (I set Core Temp to shut it down when it reaches 82°C), so the Arctic Silver is not working well with the liquid cooling.
I will replace it later, when the last Nathan is finished; CPU crunching is not possible now. A shame that I can't find a small CPU cooler with a fan that fits in the space provided. The MOBO is mounted quite high in the case, so the cooler would reach the top of it.
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31180 - Posted: 3 Jul 2013 | 14:00:02 UTC - in response to Message 31177.
Last modified: 3 Jul 2013 | 14:17:54 UTC

Vagelis, your list has this Xeon L5638 @ 2.00GHz as the top CPU, yet it's really very slow:

http://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+L5638+%40+2.00GHz

Your list has the i5-2405S @ 2.50GHz running faster than the i5-2550K @ 3.40GHz or the i5-3570K @ 3.40GHz. No i7 CPUs at all? What? Did you even look at your list? Seems not.

Pretty silly list, isn't it?

Vagelis, sorry you went to all this work, but these charts are obviously useless. They contain info from various apps over the years that are no longer used, from Cell and GPU clients, and from apps that once had much higher credit awards. In some cases in the past, cheating was commonplace and rampant. For instance, a Core2 host in SETI or WCG running a GPU would score much higher than an i7 without a GPU. These charts include those cases. There is no useful information to be gained from these. Even a cursory glance reveals that they make no sense. Sorry...

The results are pretty clear, to me at least: Intel dominates Yoyo! Yes, if you own Yoyo's top or second top host, you may have a different opinion, but try to think about the general case: in general, Intel crunches better!

This is ridiculous. You can go through all the phony statistical gyrations you want, but that doesn't make it true. Those of us who've run Yoyo for years know which CPUs produce the most. My teammate, who was running all Sandy Bridge Intels, saw how fast my AMDs were running and started switching over to AMD. He's the highest Yoyo producer ever. I'm number two. He's almost completely converted to AMD now. I admit to being an AMD fan. The cost/performance is better. In my experience they've been much more reliable (I've built hundreds of PCs for local companies and individuals). AMD has been the only thing that has kept Intel honest, in both performance and pricing. Intel has played dirty pool against competitors throughout its history with its FUD, anti-competitive practices, rigged benchmarks, payouts to PC makers not to use AMD, etc. At least post a disclaimer that you're an Intel fanboy.

Here are the all-time leading Yoyo machines. AMD dominates Yoyo (14 of the top 20 single-CPU computers), yet as you say there are MANY more Intels being used. How do you explain (twist) that?

http://www.rechenkraft.net/yoyo//top_hosts.php?sort_by=total_credit

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 31181 - Posted: 3 Jul 2013 | 14:08:40 UTC

http://www.agner.org/optimize/blog/read.php?i=49&v=t

http://semiaccurate.com/2011/06/20/nvidia-amd-and-via-quit-bapco-over-sysmark-2012/

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31182 - Posted: 3 Jul 2013 | 15:23:38 UTC
Last modified: 3 Jul 2013 | 15:26:08 UTC

Hi Guys, can we please keep this thread about hardware? That was the reason I started it. Okay CPU is also hardware, but please start a new thread about CPU comparison. Thank you.

And on to the hardware. I have used another heat paste on the Alienware. It now has one GTX660, running at 93% load, at 74°C and 74% fan speed (the maximum). Perhaps the layer is too thick? The stuff did flow out nice and smooth by itself.

But worse: it is now crunching one Nathan and 2 Rosies on the CPU, and the CPU is between 69-80°C. So I guess the Alienware will soon die. It will kill itself, or shut down when 82°C is reached.

The other GTX660 is in the other box (the one with all the problems with the first card) and is doing a short run now. It seems to have the same problems as before: kernel times are almost the same as CPU usage. So when it is finished, the GTX660 goes in its box and I will put the GTX285 back in to do some MilkyWay.
____________
Greetings from TJ

Profile dskagcommunity
Avatar
Send message
Joined: 28 Apr 11
Posts: 456
Credit: 817,865,789
RAC: 0
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31183 - Posted: 3 Jul 2013 | 15:53:24 UTC

Then the thread should be renamed to GPU Hardware Questions. So it is legitimate to talk about CPUs in a HARDWARE QUESTIONS thread, in my opinion :P
____________
DSKAG Austria Research Team: http://www.research.dskag.at



Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31184 - Posted: 3 Jul 2013 | 16:07:09 UTC - in response to Message 31182.

Hi Guys, can we please keep this thread about hardware? That was the reason I started it. Okay CPU is also hardware, but please start a new thread about CPU comparison. Thank you.

TJ, when you asked the magical question: "What CPU would you suggest?" all heck broke loose :-)

And on to the hardware. I have used another heat paste on the Alienware. It now has one GTX660, running at 93% load, at 74°C and 74% fan speed (the maximum). Perhaps the layer is too thick? The stuff did flow out nice and smooth by itself.

But worse: it is now crunching one Nathan and 2 Rosies on the CPU, and the CPU is between 69-80°C. So I guess the Alienware will soon die. It will kill itself, or shut down when 82°C is reached.

Something is wrong, is the HS/fan seated properly? I'd drop the CPU WUs until you get it worked out. You could also try opening the case and aim a fan at it for now.

The other GTX660 is in the other box (the one with all the problems with the first card) and is doing a short run now. It seems to have the same problems as before: kernel times are almost the same as CPU usage. So when it is finished, the GTX660 goes in its box and I will put the GTX285 back in to do some MilkyWay.

With NV 6xx series GPUs the CPU time is supposed to be almost the same as the GPU time. It's hard to say more when your computers are hidden...

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31185 - Posted: 3 Jul 2013 | 16:16:12 UTC - in response to Message 31184.

TJ, when you asked the magical question: "What CPU would you suggest?" all heck broke loose :-)

I know, I did, but that is sorted out now. The sabertooth is on its way so...
But a long discussion about statistics, comparing things with each other that are not comparable, makes the thread long. Mercedes and Audi aren't comparable either.

And on to the hardware. I have used another heat paste on the Alienware. It now has one GTX660, running at 93% load, at 74°C and 74% fan speed (the maximum). Perhaps the layer is too thick? The stuff did flow out nice and smooth by itself.

Something is wrong, is the HS/fan seated properly? I'd drop the CPU WUs until you get it worked out. You could also try opening the case and aim a fan at it for now.

Yes, it is seated well, and I have already set a fan beside it. And I found a small cooler; I will order it and hope it fits.


____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31193 - Posted: 3 Jul 2013 | 17:59:47 UTC - in response to Message 31182.

Hi Guys, can we please keep this thread about hardware? That was the reason I started it. Okay CPU is also hardware, but please start a new thread about CPU comparison. Thank you.

I agree, there is probably a bit too much about CPUs in this thread already, and I get the feeling plenty of people want a better place to discuss CPUs further, so I started a new thread, CPU Comparisons - general open discussion :)
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31216 - Posted: 4 Jul 2013 | 17:25:31 UTC

A question for system builders.
I have read some manuals about applying thermal grease to the CPU. A big drop in the middle and then spread out with an old bank card. Is that indeed what you do?

In the past I have spread it very evenly with the syringe it comes in. The more expensive stuff then automatically flowed out into one smooth surface, but the layer could be thick.
In a video I saw that after spreading it out with the bank card you could see the metal again. Is that okay?

I ask this because I am getting some coolers tomorrow, and I hope one will fit in the Alienware to replace its non-functioning liquid cooler.

Thanks for the info; I am reading it with great interest.

More questions will follow in the coming days...
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31221 - Posted: 4 Jul 2013 | 19:54:13 UTC - in response to Message 31216.

A question for system builders.
I have read some manuals about applying thermal grease to the CPU. A big drop in the middle and then spread out with an old bank card. Is that indeed what you do?

I've read the same stuff and tried a lot of ways myself. What works for me is to apply a thin layer to the whole surface of the CPU, then put a rice-sized drop in the middle before attaching the HS/fan. Sometimes it takes a few days before optimum temps are reached, but it should be close to target from the start. If not, the HS/fan is probably not seated well.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31224 - Posted: 4 Jul 2013 | 20:20:06 UTC - in response to Message 31221.
Last modified: 4 Jul 2013 | 20:21:18 UTC

A question for system builders.
I have read some manuals about applying thermal grease to the CPU. A big drop in the middle and then spread out with an old bank card. Is that indeed what you do?

I've read the same stuff and tried a lot of ways myself. What works for me is to apply a thin layer to the whole surface of the CPU, then put a rice-sized drop in the middle before attaching the HS/fan. Sometimes it takes a few days before optimum temps are reached, but it should be close to target from the start. If not, the HS/fan is probably not seated well.

Except for the rice-sized drop, we do it the same. I got stuff from the cleanroom at the Uni, where they use it on chips they make themselves.
However, I forgot a liquid to clean off the old grease. I have used white spirit in the past, but that is not good, if I have read correctly. Personally I don't believe in all the special products that are sold for this; they are expensive versions of "normal" chemicals found around the house.
What do you use Beyond?

P.S. The Sabertooth arrived; nice board.
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31227 - Posted: 4 Jul 2013 | 21:16:25 UTC - in response to Message 31224.

What do you use Beyond?

99% isopropyl alcohol if I can find it, 91% if I can't.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31229 - Posted: 4 Jul 2013 | 21:20:51 UTC

Almost sounds like you've run out of water in that liquid cooler :D

And don't worry too much about the thermal paste. Apply a thin layer, maybe with a small additional spot in the middle, or not. Then, once you place the heat sink, press and twist it lightly back and forth. This will distribute the paste just fine, as long as it's not super-viscous.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31230 - Posted: 4 Jul 2013 | 21:32:08 UTC - in response to Message 31229.

Then, once you place the heat sink, press and twist it lightly back and forth. This will distribute the paste just fine

Hey, I was going to say that too but I forgot :-)

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31237 - Posted: 4 Jul 2013 | 22:42:58 UTC - in response to Message 31230.
Last modified: 4 Jul 2013 | 22:49:23 UTC

The biggest mistake people make when applying heatsink compound is using too much. The compound layer needs to be as thin as possible, to conduct the heat from one solid to the other, but it should cover as much of the CPU and heatsink surfaces as possible (ideally all of them). Obviously, heatsinks that don't cover all of the processor are of lesser design.
I prefer the thinner (less viscous) spreads, as you can get a more even layer and you know it will spread itself out, even if you don't spread it perfectly or have applied a bit too much :)
When using the less viscous compounds I tend to spread it on the processor and the heatsink (very thinly). I push down and twist the heatsink around a bit before tightening at opposite angles, bit by bit.

After all the pro tips, if you can't apply resin now, you have no chance.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31239 - Posted: 4 Jul 2013 | 22:54:15 UTC
Last modified: 4 Jul 2013 | 22:54:25 UTC

Thank you all guys, this is very useful information.
I have checked, and I have a lot of types of alcohol in the house, but not isopropyl; it is a bit hard to get, but I will try the Uni again.

I have taken the liquid cooler out completely and did not find any leaks, nor any openings to check or fill the radiator. It's rather small all in all; there is no tank like in a car. According to Dell it is not water but a special liquid.
The tubes are stiff and seem to have been mounted and then heated or so to let them shrink. As far as I can judge, it is not possible to remove a tube without damaging it. There are no clamps used, as other systems have.

At first I thought this would be my first and only water cooler. But I have looked at a lot of systems today, in preparation for the new system I am going to build, and saw a lot of water coolers, even on the graphics cards. So perhaps again in the future, once I have heard or read more about others' experiences with them.
____________
Greetings from TJ

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 31243 - Posted: 5 Jul 2013 | 9:26:18 UTC
Last modified: 5 Jul 2013 | 9:28:06 UTC

Here's a video I found VERY helpful with applying thermal paste: https://www.youtube.com/watch?v=EyXLu1Ms-q4

In the video, you can see that the layer method is actually worse than the other ones. From what I understand, because of air trapped between the top surface and the not perfectly flat paste layer.

I like to use the line method, especially with heatsinks whose heat pipes directly touch the CPU: I apply a line of paste perpendicular to the pipes.

I've also tried the X / cross method a couple times. Never had a problem, really.

As skgiven advises above, the key is to NOT apply too much.
____________

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31256 - Posted: 5 Jul 2013 | 15:15:36 UTC

I don't know which engineer invented the push pins for mounting a Shuriken B, but they absolutely do not work. The pins sit under the cooling body, with hardly any space!

The whole thing keeps moving and moving over the CPU while trying to push one pin in. (When taking of it is seen that only half of it was smeared with paste.) Then the other is not in its place, as they move very loose in the holder and I can not get it installed. I tried long enough, almost out of past and it is Friday evening so I won't be able to get it.
The cooling body is larger then the area with the push pins, so there is no room for fingers to push and my fingers are quite small. Also tools won't get applied to push.

Terrible they make to cases so small that a normal cooler with screws wont fit.
Well then the Intel cooler must be used. It has push pins as well, but they sit next to the cooler aluminum body. So I can replace them with the screws from the liquid cooler.
I am not enjoining this. All I want was to replace the GTX285 with an GTX660 in the i7 with the XFX MOBO, and then problems start with me installing and deinstalling cards and stuff. Buying an old T7400 (which was advised no to, but the thing is crunching without problems and cool with an GTX660 in it and 5 Docking tasks on CPU for 370W/h.
I take a rest for 15 minutes and hope for some inspiration...
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31258 - Posted: 5 Jul 2013 | 17:04:56 UTC - in response to Message 31243.

Here's a video I found VERY helpful with applying thermal paste: https://www.youtube.com/watch?v=EyXLu1Ms-q4

In the video, you can see that the layer method is actually worse than the other ones. From what I understand, because of air trapped between the top surface and the not perfectly flat paste layer.

Nice video, thanks. That's why I always put a rice sized drop in the middle along with the thin spread. Have also tried the cross and line methods as well as the dab in the middle only method. I seem to get most consistent results with the thin layer plus dab in the middle, but my testing has been pretty subjective. The sheet of glass test is an interesting idea.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31260 - Posted: 5 Jul 2013 | 19:10:40 UTC

Well skgiven, you are right again, I am going mad...
I have been installing a cooler since 3 o'clock this afternoon, haven't eaten yet and probably won't anymore.
The push pins never fit, and to get them out of the Intel cooler I had to cut them.
I drilled the holes wider so that bolts could fit, but then there was too much play. The bolt and the white part of the push pin had 6 mm of space to move in the holder that fits around the aluminum element. To solve this I first used springs, but that did not help enough. Eventually I cut the white parts shorter and used two rings. Now the screws go 3 mm into the holder and the back plate of the Alienware (what a terrible case). If this was not the last thing I got from my dad, I would have smashed it against the wall.
And if the temperature of the CPU goes to 85°C, I still might.
But that is not all; Murphy is around, of course... a piece of the black fitting that holds the aluminum element broke, so I had to glue that and wait for it to dry. And last but not least, the aluminum element fell out of my hand and isn't perfectly round anymore. As said way earlier in this thread, dark forces reign over the Alienware. It's my first and only one. I guess Dell is out of the picture as well. Not because of bad experience, but there are almost no options anymore in the Netherlands to have a system built the way one likes.
I will build one myself in a roomy case, the largest case I can find, with a minimum of devices and no crap that I don't use.
Or I could buy one GTX690 and put that in the old T7400, but then no new system(s) this year anymore.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31261 - Posted: 5 Jul 2013 | 19:17:05 UTC - in response to Message 31258.

Here's a video I found VERY helpful with applying thermal paste: https://www.youtube.com/watch?v=EyXLu1Ms-q4

In the video, you can see that the layer method is actually worse than the other ones. From what I understand, because of air trapped between the top surface and the not perfectly flat paste layer.

Nice video, thanks. That's why I always put a rice sized drop in the middle along with the thin spread. Have also tried the cross and line methods as well as the dab in the middle only method. I seem to get most consistent results with the thin layer plus dab in the middle, but my testing has been pretty subjective. The sheet of glass test is an interesting idea.


Yeah, great catch Vagelis. The glass plate indeed shows what happened. I had it smeared like that, and perhaps there were air bubbles between the CPU and the water cooler.
I will try three lines, then push the cooler down and take it off again to see how evenly it spread.
I have paste for two attempts, so it will just have to work, or not...
____________
Greetings from TJ

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 31262 - Posted: 5 Jul 2013 | 20:01:34 UTC - in response to Message 31261.

TJ, I really do hope your adventure has a good ending!

All too often, computer parts are of too low quality, making our lives miserable...
____________

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 31263 - Posted: 5 Jul 2013 | 20:09:37 UTC - in response to Message 31258.

Nice video, thanks. That's why I always put a rice sized drop in the middle along with the thin spread. Have also tried the cross and line methods as well as the dab in the middle only method. I seem to get most consistent results with the thin layer plus dab in the middle, but my testing has been pretty subjective. The sheet of glass test is an interesting idea.

The sheet of glass thing is a GREAT idea that made me wonder how I never thought of it!
____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31267 - Posted: 5 Jul 2013 | 20:38:06 UTC

Oh dear TJ, that sounds like a very unhappy exercise!

While it's done now.. did you take the mainboard out to try to mount the new cooler with push pins? It's kind of a pain to mount / unmount a mainboard, mainly due to the LED connectors.. but then you should be able to apply pressure onto the pins even under the cooler. I always mount the cooler before mounting the mainboard, if I have the chance to do so (i.e. not just exchanging it). Also gives me a nice mechanical handle for the board :D

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31271 - Posted: 5 Jul 2013 | 22:58:48 UTC
Last modified: 5 Jul 2013 | 22:59:08 UTC

Eventually I got it working. Mom taught me to always keep calm, whatever happens: let it rest for a while, do something else, and then carry on until you get it done.

The Alienware is running, but not for long I guess. BOINC still had 7 Docking WU's in its manager when I powered the system up. The temperature went quickly to 84°C; when I quickly suspended BOINC it cooled down to 57°C. With one WU running, temperatures are around 65°C. I know the paste needs some time to settle, but perhaps there are air bubbles; that I will never know.

Now with one Nathan LR and one Docking WU the temperature fluctuates strongly between 58-72°C. It has a nice temperature scheme, where I have set all 5 fans to increased speed; in fact I increased all the curves. But this was my big cruncher, and now it cannot even crunch 4 WU's. This is not what I expected.
The MOBO sits 3 cm from the top of the case, and around 4 cm below the CPU there is another aluminum heat block with a 3x3 cm fan mounted. So a big cooler will never fit; it has to be a liquid thing.
A new case, which was my first thought, will not help either, due to that cooling block 4 cm beneath the CPU.

Now, it is not necessary to crunch full time; it is a choice. The idea was that when a user has computer power to spare, BOINC could use it. But we have made a sort of hobby of it. PC's and parts have cost me quite a bit of money, and the energy bill is higher than that of a colleague with 4 kids. There are only two of us, and besides computers we do not use a lot of electricity.
But I especially feel a need to crunch for projects working on medicine for brain diseases and cancer. Mom died of a brain disease, and dad of cancer (it was treated well and under control, but he later got several brain tumors and then it went quickly). I can never get them back, and I sometimes still struggle with it. So if these diseases can eventually be treated (or completely cured), that would be good for other people. Please don't feel sad, that is not my intention.
But you all know now that I will keep asking a lot of questions about computers.
So I will build a new rig.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31272 - Posted: 5 Jul 2013 | 23:08:45 UTC

The GTX 550Ti did run Milkyway at a 974MHz clock speed. When I let it run GG, the Nathan LR failed quickly. So I thought, let's try an SR. The Santi keeps running, but at a 405MHz clock speed at 98% load. So something happened with the card, as it down-clocked again. Is driver 314.22 the cause? 8% in 2 hours for an SR.

I also noticed something: when the i7, the Alienware, the quad and the T7400 are idle, or crunch on the CPU only, the kernel times are all very low, sometimes not visible. But when GPU crunching starts, for GG or MW, the kernel times increase to about half of the CPU usage. Is this normal, or an indication that something is wrong?


GG = GPUGRID
LR = long run
SR = short run
T7400 = cheap refurbished Dell high-end workstation with a 1000W PSU, but with few power connectors.
MW = Milkyway@home
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31274 - Posted: 6 Jul 2013 | 7:56:59 UTC

Before I went to bed I installed driver 314.07 on the quad again and rebooted it (Vista x86). The GPU clock ran at 974MHz, but GG LR and SR tasks failed quickly. I turned to Einstein@home, and that seems to run; expected time 5 hours.
However, when I checked this morning, Einstein was not finished yet and the GPU clock was running at 405MHz again.
So the driver is not good, or it is something else. Two days ago this system ran fine. The only thing I did was put a GTX660 in it for a day.
So I absolutely need advice here on what to do.

The Alienware kept running overnight and will be finished in about 2 hours. I will power it down then. The old T7400 is running fine and I will let it run.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31275 - Posted: 6 Jul 2013 | 8:01:42 UTC - in response to Message 31267.

Oh dear TJ, that sounds like a very unhappy exercise!

While it's done now.. did you take the mainboard out to try to mount the new cooler with push pins? It's kind of a pain to mount / unmount a mainboard, mainly due to the LED connectors.. but then you should be able to apply pressure onto the pins even under the cooler. I always mount the cooler before mounting the mainboard, if I have the chance to do so (i.e. not just exchanging it). Also gives me a nice mechanical handle for the board :D

MrS

Hello ETA, no I didn't take it out. There is so little space that I would have had to take everything out, and there are a lot of cables and sensors attached. And it was not possible to remove the other side panel, which would have made it easier.

It is mounted now, but I think it does not cool well enough; it is an Intel boxed cooler. I can try a new water cooler later this year.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31276 - Posted: 6 Jul 2013 | 9:04:48 UTC

Regarding your GTX550Ti: MW can take quite a beating, GPU-clock-wise. And especially on nVidias it's using only a tiny fraction of their hardware (hence it's quite slow on most nVidias), generating not much heat. I'm not surprised the card can take higher clocks at MW than at GG.

BTW: whenever a driver reset happens in newer nVidia drivers the GPU stays in fail-safe mode (the 400 MHz in your case). I have not yet found a way around this other than rebooting.

If the GPU was running fine at these clock speeds before, it might be the summer finally showing its head over here. And chips gradually degrade over time, so it is expected that a GPU will reach 10 - 20 MHz less than last summer after a year of sustained crunching. You won't notice this if you didn't push it to the border of stability, though.

Regarding your CPU: so your liquid cooler was actually not that bad and probably not broken. I don't think it's the thermal paste which is responsible for these temperatures. I lost track of that machine a bit.. which CPU is it? The i7 960? Is the case open during these tests? Is it overclocked by Alienware? What's the CPU speed under BOINC load?

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31277 - Posted: 6 Jul 2013 | 10:01:54 UTC - in response to Message 31276.

Indeed, I rebooted the quad with the 550Ti (VENUS) and it is at 974MHz again, doing Einstein. I will try GG when those have finished. And yes, MW ran cool on it; it seems about 150 seconds faster than on the 660? But that could always be a slight change in the type of WU.

The Alienware is SATURN (I have them named after the planets, and when the solar system is complete I will stop buying/building rigs :) ). All are visible now.
It's an i7 960 Bloomfield, not OC'd as far as I know; its frequency is 3408.45MHz (133.66x25.2). I don't know what you mean by CPU speed under load. Where can I see that?
The case is closed, because the side panels have special airflow areas fed by a fan in the lower front. The GPU's have a funnel with its own fan, so cooling is better when it is closed. It is also 31°C in the attic, so the ambient temperature is not helping; therefore I will power it down when the Nathan has finished.

While typing this on VENUS, I see that the running Einstein GPU WU stopped and it switched to another. Weird. And this results in the clock going down to 405MHz.

So at this point I am getting dejected and must admit the rigs have beaten me. I will let it run and go outside into the sun.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31278 - Posted: 6 Jul 2013 | 10:10:36 UTC

One more thing (for now): the Einstein WU stopped; hovering over it with the mouse shows this message: "Not enough free CPU/GPU memory available! Waiting for 15 minutes."
And so it went with the next task.
What can the cause be? Vista x86 has almost 3.4GB that it can use, of which 1.88GB is currently used, with two CPU Einstein WU's running.
The 550Ti has 1GB and uses 30% of it at the moment.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31283 - Posted: 6 Jul 2013 | 12:46:08 UTC

There's a BOINC setting "use at most xx% of memory", which might be set too low by default (I think you recently had some problem with the settings being reset, didn't you?). And if your GPU is idle (i.e. not enough memory to run any of them) then it's actually OK that it's clocked down. But then the clock speed should go up as soon as there's a load again.
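For reference, a minimal sketch of what that setting looks like on disk, assuming a standard BOINC installation: the manager stores local computing preferences in global_prefs_override.xml in the BOINC data directory, and you can edit the file directly and then re-read it from the manager. The tag names below are the real BOINC preference names; the 90% values are only illustrative:

<global_preferences>
   <ram_max_used_busy_pct>90</ram_max_used_busy_pct>   <!-- RAM % BOINC may use while you use the PC -->
   <ram_max_used_idle_pct>90</ram_max_used_idle_pct>   <!-- RAM % BOINC may use while the PC is idle -->
</global_preferences>

If these values are set too low, tasks that need more memory than allowed will sit waiting, which matches the message Einstein showed.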

Tools like CPU-Z or HWInfo64 can tell you what your CPU really runs at. Windows did get better at this job, but sometimes even Win 8 shows garbage values for me.

What you could do to immediately reduce temperatures and improve power efficiency of your 960 is to disable turbo mode in the BIOS. On mobile chips turbo is great, but in Bloomfield it was very weak anyway (+1 or 2 bins) and yielded significantly higher power consumption (Intel being very cautious with their first implementation of it).

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31284 - Posted: 6 Jul 2013 | 13:09:00 UTC

Thanks ETA, that is great info, I didn't know that.
The memory use is set to 90%, both in use and idle.

I will leave the Alienware at rest for a week. Then I will put its original liquid cooler and the two AMD 5870's back in, and make the BIOS change you mentioned.
Thanks, I am really happy with this information!
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31294 - Posted: 6 Jul 2013 | 18:47:14 UTC - in response to Message 31276.

Regarding your GTX550Ti: MW can take quite a beating, GPU-clock-wise. And especially on nVidias it's using only a tiny fraction of their hardware (hence it's quite slow on most nVidias), generating not much heat. I'm not surprised the card can take higher clocks at MW than at GG.

BTW: whenever a driver reset happens in newer nVidia drivers the GPU stays in fail-safe mode (the 400 MHz in your case). I have not yet found a way around this other than rebooting.

Not long ago I was down to 4 NVidias and was seriously considering dumping those. At most GPU projects the ATI/AMD cards are simply much more powerful and efficient. Then I came back to GPUGrid to give it another try. Now I'm working back up to a 50/50 ATI(AMD)/NVidia ratio again. The programming ability here is questionable (IMO) but the scientific results are compelling. I just wish they would ask for some help in getting the OpenCL app working. People can't be experts at everything. I've been seeing horror stories about the recent NV drivers and it's kind of strange: it used to be that NV drivers were solid and ATI drivers questionable. Lately I've been having luck installing the latest ATI and holding back on NV. To make a long ramble a bit shorter: I've stuck with 310.90 NV and am having no particular problems.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31296 - Posted: 6 Jul 2013 | 20:22:38 UTC - in response to Message 31294.

Instead of trying to run the 550Ti @974MHz I would set it manually to reference values of 900MHz for the GPU and 4104MHz for the GDDR5.

Boinc's values sometimes reset after a bad system crash, at which point it might be an idea to reinstall Boinc and have a serious look at temps.

Piriform's Speccy allows you to see GPU, CPU and other component temperatures all from one app.

I've had one system crash since editing the registry and no driver restarts - it works for me.
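For context, skgiven doesn't say which keys he edited, but the usual candidate for taming driver restarts on Windows is the documented TDR (Timeout Detection and Recovery) setting. A sketch of a .reg file that raises the GPU watchdog timeout from the default 2 seconds to 10 - purely an illustration of the mechanism, and registry edits are at your own risk:

Windows Registry Editor Version 5.00

; Seconds Windows waits for the GPU before resetting the display driver
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrDelay"=dword:0000000a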


____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31303 - Posted: 6 Jul 2013 | 21:02:26 UTC - in response to Message 31294.

Regarding your GTX550Ti: MW can take quite a beating, GPU-clock-wise. And especially on nVidias it's using only a tiny fraction of their hardware (hence it's quite slow on most nVidias), generating not much heat. I'm not surprised the card can take higher clocks at MW than at GG.

BTW: whenever a driver reset happens in newer nVidia drivers the GPU stays in fail-safe mode (the 400 MHz in your case). I have not yet found a way around this other than rebooting.

Not long ago I was down to 4 NVidias and was seriously considering dumping those. At most GPU projects the ATI/AMD cards are simply much more powerful and efficient. Then I came back to GPUGrid to give it another try. Now I'm working back up to a 50/50 ATI(AMD)/NVidia ratio again. The programming ability here is questionable (IMO) but the scientific results are compelling. I just wish they would ask for some help in getting the OpenCL app working. People can't be experts at everything. I've been seeing horror stories about the recent NV drivers and it's kind of strange: it used to be that NV drivers were solid and ATI drivers questionable. Lately I've been having luck installing the latest ATI and holding back on NV. To make a long ramble a bit shorter: I've stuck with 310.90 NV and am having no particular problems.


I had the opposite with Milkyway. Two AMD HD5870's were running with one failure in every 30 WU's, after Doc T. made a change to the app. But according to the forums it was at my end. Then I set the old but powerful GTX285 on the project, thus not the OpenCL app, and got 600 WU's in a row without failures. With Einstein and Albert there are also a few OpenCL tasks that don't validate; with CUDA only one failure, and that was my own fault: I did not suspend before a restart.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31304 - Posted: 6 Jul 2013 | 21:04:16 UTC - in response to Message 31296.

Instead of trying to run the 550Ti @974MHz I would set it manually to reference values of 900MHz for the GPU and 4104MHz for the GDDR5.

Boinc's values sometimes reset after a bad system crash, at which point it might be an idea to reinstall Boinc and have a serious look at temps.

Piriform's Speccy allows you to see GPU, CPU and other component temperatures all from one app.

I've had one system crash since editing the registry and no driver restarts - it works for me.


Thanks, I will try that right away. Up to the warm attic.
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31308 - Posted: 6 Jul 2013 | 21:54:23 UTC - in response to Message 31303.

I had the opposite with Milkyway. Two AMD HD5870's were running with one failure in every 30 WU's, after Doc T. made a change to the app. But according to the forums it was at my end. Then I set the old but powerful GTX285 on the project, thus not the OpenCL app, and got 600 WU's in a row without failures. With Einstein and Albert there are also a few OpenCL tasks that don't validate; with CUDA only one failure, and that was my own fault: I did not suspend before a restart.

If you had better success on MW with the GTX 285 than with your HD 5870, you had a major setup problem. I had virtually no failures on MW. I don't run it anymore since hitting my 500,000,000 credit target. The HD 5870 is (at least it was when I was running it) so much faster and more efficient than the GTX 285 at DP, it's not even a comparison.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31314 - Posted: 6 Jul 2013 | 22:45:34 UTC - in response to Message 31308.
Last modified: 6 Jul 2013 | 22:46:57 UTC

I had the opposite with Milkyway. Two AMD HD5870's were running with one failure in every 30 WU's, after Doc T. made a change to the app. But according to the forums it was at my end. Then I set the old but powerful GTX285 on the project, thus not the OpenCL app, and got 600 WU's in a row without failures. With Einstein and Albert there are also a few OpenCL tasks that don't validate; with CUDA only one failure, and that was my own fault: I did not suspend before a restart.

If you had better success on MW with the GTX 285 than with your HD 5870, you had a major setup problem. I had virtually no failures on MW. I don't run it anymore since hitting my 500,000,000 credit target. The HD 5870 is (at least it was when I was running it) so much faster and more efficient than the GTX 285 at DP, it's not even a comparison.

Yes, it still is almost 10 times faster at Milkyway. Due to the short runtimes it is a great project to experiment with.
By the way, it is not only me; I have checked and saw a lot of validation errors, even on 6xxx and 7xxx AMD cards.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31315 - Posted: 6 Jul 2013 | 22:55:34 UTC - in response to Message 31304.

Instead of trying to run the 550Ti @974MHz I would set it manually to reference values of 900MHz for the GPU and 4104MHz for the GDDR5.

Boinc's values sometimes reset after a bad system crash, at which point it might be an idea to reinstall Boinc and have a serious look at temps.

Piriform's Speccy allows you to see GPU, CPU and other component temperatures all from one app.

I've had one system crash since editing the registry and no driver restarts - it works for me.


Thanks, I will try that right away. Up to the warm attic.


Well, I removed everything from EVGA and nVidia and installed the latest drivers. Now the Einstein WU is finishing with the GPU clock steady at 951MHz and a temp of 66°C.
Kernel times are low now as well. I will see tomorrow how it went.

Speccy shows higher temperatures than Core Temp. But as I have said earlier, all these programs vary in their readings. A CPU at around 50°C is not bad.

After a few days of problems, installing and uninstalling, I must admit that the old T7400, which we had our doubts about, does well with the GTX660 at 65-66°C and a steady clock. The two Xeons stay quite cool at 50-65°C (passive coolers, with a fan a short distance from the two massive aluminum blocks), at 360-370 Watt in total. I am becoming attached to this system ;-)
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31322 - Posted: 7 Jul 2013 | 10:05:10 UTC

My three CUDA-capable GPU's are all crunching for GG, so I am happy. However, the weather is now turning hot, with rising ambient temperatures :(

One question: what does it mean when kernel times are high(er)? On most rigs they are low, near the bottom of the CPU usage graph. At times I see them climbing to around 1/4 or 1/3 of the graph, if you know what I mean.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31323 - Posted: 7 Jul 2013 | 10:22:38 UTC

If a CPU cooler fits on an AM3 socket, will it fit on an AM3+ as well?
My first idea was a Zalman, but I like the Cooler Master V8.
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31324 - Posted: 7 Jul 2013 | 12:59:03 UTC - in response to Message 31323.

If a CPU cooler fits on an AM3 socket, will it fit on an AM3+ as well?
My first idea was a Zalman, but I like the Cooler Master V8.

Yep, same mounting system.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31344 - Posted: 7 Jul 2013 | 22:05:01 UTC
Last modified: 7 Jul 2013 | 22:05:45 UTC

My last idea for today. I want to contribute to GG with more than the GTX285, so I bought a GTX660, but that didn't work out well; the story is known.
Now I need a new rig to fit two 660's, but I also bought the refurbished T7400 for less.
I could buy a GTX770 for 390€ and put it in the T7400, as it has 1 8-pin and 2 6-pin power connectors.
Then the new PC can be built in winter, when prices have probably dropped as well.
Good idea or not?

Sweet dreams.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31345 - Posted: 7 Jul 2013 | 22:34:45 UTC - in response to Message 31344.
Last modified: 7 Jul 2013 | 23:03:45 UTC

That's more of a put off than a solution. The best solution IMO is to get rid of your old power hungry hardware and replace it with a cheap system (basic CPU, motherboard with two PCIE slots) and kit it out with a good PSU and whatever GPU(s) you want. I don't think there are going to be many new NVidia cards any time soon (possibly a few revisions). While i3 4000series Intel and AMD Kaveri's are likely before the end of the year, these will be low to mid range (sitting on the fence) CPU's. You have to decide if you want to buy a fairly power hungry 8core AMD, invest heavily in an Intel CPU, or neither. If neither appeals to you, just forget about CPU crunching and concentrate on where it's at, the GPU. A basic CPU and motherboard with a good GPU will be cheap to buy, cheap to run, do lots of good work and get you lots of Boinc credits.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31357 - Posted: 8 Jul 2013 | 21:47:33 UTC - in response to Message 31345.

That makes a lot of sense. Especially since you'd "lose" 2 threads feeding the GPUs at GPU-Grid anyway, so there's less benefit in making those cores fast.

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31603 - Posted: 17 Jul 2013 | 22:36:56 UTC
Last modified: 17 Jul 2013 | 23:34:17 UTC

With temperatures rising I need to power down my rigs. But I got a GTX770 today and wanted to put it in the i7 with the (faulty) XFX MOBO. After a lot of installing and booting it worked. Better than the 660 from EVGA, which didn't work. This one is from ASUS; not my first choice of brand, but I thought it might work in the old rig. I must say I like the software that comes with it. You can see everything, change the order of it, and adjust settings for e.g. clock speed and fan.
This one seems to work with the XFX MOBO; the system is more responsive and not slowing down yet. However, windows seem to freeze at the place where they first appeared after being dragged somewhere else. Using a window as an eraser over the screen will clear things up, if you know what I mean. So, not optimal. I will try it in the old T7400 later, when it gets cooler.
The 770 is now doing Milkyway as a test, and that runs smoothly. When that is finished I will set it to GPUGRID with all applications and see how that goes overnight. Tomorrow will be even warmer, so I will have to shut it down then as well.

Edit: one thing though, the i7 with the 770 has kernel times almost as high as its CPU usage, whereas my other rigs have low kernel times. Unfortunately no one has yet explained what this means or what the issue could be.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31615 - Posted: 18 Jul 2013 | 9:00:17 UTC - in response to Message 31603.

It's not working properly either. The 770 in the i7 (with the XFX MOBO) has done 52% in 8.5 hours. That can't be right, seeing the results of other 770's. I will let it finish, then put it in the T7400, and will consider the XFX PC scrap; I can use some of its parts later.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31635 - Posted: 18 Jul 2013 | 19:55:09 UTC

Well, the 8-pin power plug in a Dell T7400 is not the same as an 8-pin power plug from an EVGA PSU. It is white and is fitted on the same wires as the 6-pin power plug for the GPU. The other 6-pin plug does not have this extra plug.
So the GTX770 won't fit in my workhorse. It's in its box again, waiting for a new rig. So my RAC will gradually decline, not because I am leaving the project, but because of hardware issues.
____________
Greetings from TJ

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 31647 - Posted: 19 Jul 2013 | 8:56:02 UTC - in response to Message 31635.

Provided the PSU can support the card, can't you use a molex-to-PCIe adaptor? There must be a spare molex or two dangling in there.
____________

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,222,865,968
RAC: 1,764,666
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31649 - Posted: 19 Jul 2013 | 10:20:30 UTC - in response to Message 31647.

Provided the PSU can support the card, can't you use a molex-to-PCIe adaptor? There must be a spare molex or two dangling in there.

Those adapters can cause more problems in the long term than they can solve straight off.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31659 - Posted: 19 Jul 2013 | 15:28:34 UTC - in response to Message 31649.

Provided the PSU can support the card, can't you use a molex-to-PCIe adaptor? There must be a spare molex or two dangling in there.

Those adapters can cause more problems in the long term than they can solve straight off.

Moreover, there is only one free. I could replace the PSU, but I am building a new rig earlier than planned, so that is not an option right away.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 32356 - Posted: 28 Aug 2013 | 9:02:15 UTC

Hello, instead of heat conduction paste I found liquid metal pads. This is the shop's description:

The Coollaboratory Liquid MetalPad is the first heat conduction pad composed of 100% metal; it melts with only slight heating (the burn-in process) and then delivers its superior heat transfer. It dissipates heat quickly and efficiently and need not hide from the best heat conduction pastes. The simple, clean and fast installation makes the Liquid MetalPad the ultimate heat conduction medium for high-end PCs and game consoles. The Liquid MetalPad can be used with all materials commercially available on the cooling market, for instance aluminum or copper! It does not age and does not have to be replaced regularly. The Coollaboratory Liquid MetalPad is RoHS conformant and absolutely nontoxic.


It should be easy to apply and always gives an even layer. Does anyone have experience with this?
Thanks.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 32412 - Posted: 28 Aug 2013 | 19:42:08 UTC - in response to Message 32356.

Regular paste can't beat liquid metal for heat conductivity. Just be careful to clean the interfaces properly before applying (otherwise it will not wet the surface properly; rubbing with acetone is enough to remove any organic surface contamination) and beware of drops floating anywhere in your PC. Happened to me once, but that was while applying the stuff directly to an insufficiently cleaned surface, not with a pad. Luckily, in the same way it did not wet my heat sink it didn't wet the mainboard either, so it formed a bubble on my mainboard and I could just turn the PC upside down to get it out again... shouldn't happen with a pad :D

And the older mixtures have the drawback of forming an alloy with the heat sink base, i.e. if you want to remove it you have to rub the heat sink base seriously with metal polish to get a flat surface again. But I heard that's not a problem any more.

Anyway, before turning to such rather extreme measures I'd make sure the low-hanging fruit are already harvested: good cooler, sufficient case cooling, adjusting voltage etc.

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 32419 - Posted: 28 Aug 2013 | 21:00:51 UTC - in response to Message 32412.

Thanks for the input ETA.
I am not planning to use it soon, but I was thinking about the trick with the glass plate and the paste, where we could see that it is not so easy to get a very thin and smooth layer. Then I came across these liquid metal pads by accident while browsing a web shop for computer parts. I will think about it.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 33256 - Posted: 29 Sep 2013 | 15:29:47 UTC

Another question from me.
One PC, a Dell XPS 480, has been running 24/7 for 5 years now. Last week when I looked at it, the log-on screen was visible, so it had rebooted itself. I logged in, and a few hours later the same thing. After a new log-in it worked for 6 days; then overnight it rebooted itself again, and about 15 minutes after I logged in there was a sharp bang and the power went off in my attic. It seems two fuses blew, and the ground fault circuit breaker tripped as well. The PC smells a bit burnt, especially in the area of the PSU.
I checked WhoCrashed and the Event Viewer, but found no indication of any sort.

How can I see or test whether the PSU is faulty? If some parts are black or burnt, it is obvious.
Could this bang with the power cut have damaged other hardware in the PC?
I will of course take the PSU out and open it to see if I can recognize anything.

One other thing: when I started the other PC again, a GPUGRID SR that was more than 50% finished started from zero again after the power cut. This does not happen after a normal re-boot, even without suspending the WU first.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 33257 - Posted: 29 Sep 2013 | 16:29:56 UTC

I got the PSU out and opened it. It smells a bit and most parts still feel a bit warm after approx. 2 hours, but there is no black blistering that looks like a burn.

The rest of the components look okay to me. I will bring this PSU to an electronics shop to have it tested for me.
____________
Greetings from TJ

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,222,865,968
RAC: 1,764,666
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 33258 - Posted: 29 Sep 2013 | 18:15:30 UTC - in response to Message 33256.

One PC, a Dell XPS 480, has been running 24/7 for 5 years now. Last week when I looked at it, the log-on screen was visible, so it had rebooted itself. I logged in, and a few hours later the same thing. After a new log-in it worked for 6 days; then overnight it rebooted itself again...

These are the typical symptoms of a faulty PSU. (or an overheating CPU/MB)

...and about 15 minutes after I logged in there was a sharp bang and the power went off in my attic. It seems two fuses blew, and the ground fault circuit breaker tripped as well.

The final sharp bang is quite a confirmation that the PSU has failed.

The PC smells a bit burnt, especially in the area of the PSU.

The smell of burn was caused by the high current going through a failing semiconductor (typically the switching FET, or the rectifier, or both). This PSU won't work until it's taken apart, and repaired. Even trying to switch it on again could be dangerous.

I checked WhoCrashed and the Event Viewer, but found no indication of any sort.

How can I see or test whether the PSU is faulty? If some parts are black or burnt, it is obvious.

It's easier to smell than to see. Semiconductors are smoke-powered: when the magic smoke comes out of a semiconductor (you can tell by its smell), it won't work anymore.

Could this bang with the power cut have damaged other hardware in the PC?

It could, but it's not typical. The OCP (Over Current Protection) feature of the PSU should prevent such damage. However if you try to turn on the failed PSU again, there is a greater risk of burning more parts in it, or in the PC.

I will of course take the PSU out and open it to see if I can recognize anything.

Sometimes you can see brown burn marks around the parts mounted on the PCB, but the rectifiers and the FETs are mounted on a heatsink, and usually you won't see anything suspicious on them; sometimes, though, their casing can be cracked, or a crater-shaped piece could be missing.
 
I got the PSU out and opened it. It smells a bit and most parts still feel a bit warm after approx. 2 hours, but there is no black blistering that looks like a burn.

The smell is the sign of the burn. It can still be smelt after days.

The rest of the components look okay to me. I will bring this PSU to an electronics shop to have it tested for me.

They should test it strictly with a dummy load. But it's wise to test it only after all of the high current semiconductors are checked, and the failed ones have been replaced. But this process could cost more than a new PSU.

One other thing: when I started the other PC again, a GPUGRID SR that was more than 50% finished started from zero again after the power cut. This does not happen after a normal re-boot, even without suspending the WU first.

It's typical when the files containing the checkpoint aren't written to the disk correctly (because of a power failure)
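To illustrate why a power cut can throw away progress: a checkpoint only survives if the data is flushed to disk and the old file is swapped for the new one atomically. A minimal generic sketch of that pattern in Python, purely as an illustration of the technique and not GPUGRID's actual code:

import os, tempfile

def write_checkpoint(path: str, data: bytes) -> None:
    # Write to a temporary file first, force it to disk, then atomically
    # replace the old checkpoint. A power cut leaves either the old or
    # the new file intact, never a half-written one.
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # push the data out of the OS cache
        os.replace(tmp, path)     # atomic rename on both POSIX and Windows
    except BaseException:
        os.remove(tmp)
        raise

An application that skips the fsync-and-rename step can be left with a truncated checkpoint after a power failure, and then it has no choice but to start from zero.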

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 33259 - Posted: 29 Sep 2013 | 18:29:08 UTC - in response to Message 33258.

Thanks Zoltan,

I will bring it to a shop tomorrow to ask. I cannot see anything, but the smell is obvious. I didn't see smoke or any damage to components. I will look again tomorrow in the sunlight.

The GTX550Ti was in a PCIe x16 slot rated for 75 Watt; that is what is printed on the MOBO. Could that be the cause of the problem, that the GPU uses more than the 75 Watt?



____________
Greetings from TJ

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,222,865,968
RAC: 1,764,666
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 33261 - Posted: 29 Sep 2013 | 18:57:56 UTC - in response to Message 33259.

I can not see anything, but the smell is obvious. I didn't see smoke or any damage to components. I will look again tomorrow with the sun light.

Sometimes it's much easier to see the damage on macro photographs (but usually you have to take the parts out of the PSU to take these photos).

The GTX550Ti was in a PCIe x16 slot rated for 75 Watt; that is what is printed on the MOBO. Could that be the cause of the problem, that the GPU uses more than the 75 Watt?

No. The GTX550Ti has a PCIe power connector, and this supplies the additional power: the slot delivers up to 75 W and a 6-pin connector another 75 W, which together comfortably cover the card's roughly 116 W TDP.
It's as simple as your PSU having reached the end of its lifetime.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 33262 - Posted: 29 Sep 2013 | 19:07:16 UTC - in response to Message 33258.
Last modified: 29 Sep 2013 | 19:09:15 UTC

Good advice from Zoltan. Sounds like a failed PSU to me too. If it smells burnt, you should replace it. I would not even consider attempting to repair a PSU; anything else is fair game, but not the PSU. If it's under warranty and worth the bother, RMA it, otherwise bin it. It's really not worth the time, money or risk to repair a failed PSU yourself. The PSU might have a lot of unseen damage that could surface later on, cause problems and take out more hardware - when a PSU fails it can cause other hardware failures (motherboard, anything in a PCIE slot, RAM, disk drives). The best PSU's just blow a fuse (and come with a spare); the worst are fire-crackers that make everything attached flare up.

BTW. Even testing damaged hardware is risky; a failed GPU, motherboard, drive, RAM module... can cause other hardware failures. If you DIY, test components on an old board with a good but not too expensive PSU, or leave it to an experienced tester. If it's not worth the risk, don't risk it!
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 33267 - Posted: 29 Sep 2013 | 20:08:48 UTC

I mostly agree with Zoltan and SK here - except for the part about throwing the broken PSU in the bin. It belongs in the electronics recycling bin! Well, at least that's what we have in Germany.. ;)

MrS
____________
Scanning for our furry friends since Jan 2002

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 33270 - Posted: 29 Sep 2013 | 21:18:12 UTC - in response to Message 33267.
Last modified: 29 Sep 2013 | 21:19:25 UTC

I mostly agree with Zoltan and SK here - except for the part of throwing the broken PSU in the bin. It belongs into the electronic bin! Well, at least that's what we have in Germany.. ;)

MrS


Same here in California; in fact, they charge us a disposal fee when we buy certain computer components, and we don't have a landfill in Tuolumne County. Our refuse is trucked to Nevada, which doesn't charge fees for computer component disposal. It's a racket (especially if you don't save your original receipt).

As for your power supply, they can be very dangerous if those large capacitors still hold a charge; they can even kill, so be very careful. I have a newer digital PSU tester I picked up for $20.00 US; not a bad idea if you're going to have several machines crunching. Here's one at Amazon; it's more expensive and I don't know why it's twice the price it goes for at Newegg.

http://www.amazon.com/Silver-APEVIA-Supply-Tester-Aluminum/dp/B009D514I0/ref=sr_1_fkmr0_1?ie=UTF8&qid=1380488686&sr=8-1-fkmr0&keywords=Apevia+Power+Supply+Tester+PST+03

Anyway, just to give you an idea what they look like and how they work, I use mine on a monthly basis.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 33288 - Posted: 30 Sep 2013 | 14:59:13 UTC

Thanks for all the information guys.

I will not try to repair a PSU myself. I do know a bit about electronics, but that is beyond my capabilities. Secondly, I do not want to have this one repaired; it has worked for 5 years, 24/7. And I will not apply power to it any more, as it switched off two fuses in the electricity meter cupboard (is that what it is called in English?) when it broke. Checking whether it is really faulty would cost me 25 euro, and with what you have all told me, I am convinced it is broken, so I will order a new one right away.

And finally, I will not throw it in the bin. We have a depot where we can bring all sorts of potentially environmentally hazardous stuff. I have a special box for it, and when it's full I bring it there.
When we buy something electric, even batteries, there is a small surcharge for disposing of it later. This means that where I buy a new PSU, I can hand in the old one and they deal with it; it has already been paid for. We have this surcharge for motor oil and tires as well.
Personally I try to harm the environment as little as possible. However, with 24/7 crunching I am using more power than I need to, so the power plant has to work extra, with extra exhaust. But it is for a very good cause!
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 33289 - Posted: 30 Sep 2013 | 15:05:01 UTC - in response to Message 33270.

One question, flashawk: testing a PSU with that device, would that involve applying power to the PSU?
____________
Greetings from TJ

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 33293 - Posted: 30 Sep 2013 | 21:28:29 UTC - in response to Message 33288.

And I will not apply power to it any more, as it switched off two fuses in the electricity meter cupboard (is that what it is called in English?) when it broke.


Here in the States we call those breaker boxes or circuit breaker boxes. They have digital meters that put out a wireless signal for the meter readers; they don't even get out of their cars.

To test the power supply with one of those testers, yes, you would need to plug it in. I remember that Dell computers had their own proprietary PSU's, and an industry-standard PSU would plug into the Dell motherboard and fry it. Those were of the old 20-pin type, if I remember right.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 33297 - Posted: 1 Oct 2013 | 5:32:46 UTC - in response to Message 33293.

I remember that Dell computers had their own proprietary PSU's, and an industry-standard PSU would plug into the Dell motherboard and fry it. Those were of the old 20-pin type, if I remember right.

That would not be nice :(
Indeed, Dell does everything differently from others; the plugs for the fans are very flat as well, so a "normal" one would not fit.
This Dell has a 24-pin connector. We'll see; I will get a PSU today, build it in, and let you know how it works. Thanks for the warning!
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 33318 - Posted: 1 Oct 2013 | 22:05:22 UTC

Well, as promised, I will let you know how it went.
It was indeed the PSU that gave the sharp bang. Another good call from Zoltan, as always.
The new PSU is a Cooler Master 600 Bronze; not my first pick, but it had to fit in the available space, and this one does. This PC will not run 24/7 for much longer, so no problem.
However, the system consumes about 50 Watt less when crunching two Rosetta WU's on the quad CPU and one Einstein on the 550Ti (too slow for here anymore).
The system runs fine and cooler as well; the CPU is 3-4°C lower, I guess due to the new thermal paste, as I needed to take the heat sink out for proper fan cleaning, inspection and PSU cabling.
This PSU also feels cooler at the back (exhaust) than the old one did, but it is rated 200 Watt higher, so there is more headroom.
And luckily the motherboard isn't frying (yet).
Thanks again for all the help and advice.

____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 33335 - Posted: 2 Oct 2013 | 21:15:15 UTC - in response to Message 33318.

50 W less? Wow, that old PSU must have been really old and / or really crappy! The new one should pay for itself quickly, depending on how long the machine still runs 24/7 :)

BTW: Einstein likes 2 WUs per GPU for increased GPU utilization and throughput.
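For anyone wanting to try that: since roughly BOINC 7.0.40 you can request two tasks per GPU with an app_config.xml file in the Einstein project directory. A minimal sketch; the app name below is only an example and must match the GPU app name your client actually reports (check client_state.xml or the task names):

<app_config>
   <app>
      <name>einsteinbinary_BRP5</name>
      <gpu_versions>
         <gpu_usage>0.5</gpu_usage>   <!-- half a GPU per task = 2 tasks per GPU -->
         <cpu_usage>0.2</cpu_usage>   <!-- CPU fraction reserved per task -->
      </gpu_versions>
   </app>
</app_config>

After saving it, re-read the config files from the manager (or just restart BOINC) to pick it up.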

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 34197 - Posted: 11 Dec 2013 | 6:35:44 UTC

Hello, another question from me.
Yesterday one of my rigs for GPUGRID suddenly powered off. I rebooted it today and checked WhoCrashed, but that found nothing, and Windows 7 started normally with no mention that it had been shut down unexpectedly.
Does anyone know where I can find a log in Windows to see what could have been the cause?

And could it be a BOINC thing, as I had GPUGRID on the first 660 and Einstein on the second 660? There was no GPUGRID work, and Einstein is set to a 1% resource share.

Finally could it be that a 600W PSU is not sufficient for two 660's?

Thanks for the input.

____________
Greetings from TJ

Dagorath
Send message
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 34198 - Posted: 11 Dec 2013 | 7:31:58 UTC - in response to Message 34197.

It depends on the other components in your system too, not just the GPUs.

Google for "psu calculator", you'll get several hits. At one of those sites, enter your system's hardware specs and it will tell you how big your power supply unit (PSU) should be. Some people say some of those sites don't provide accurate calculations but if you try three different sites and they're all fairly close then you probably have a good idea of what you need.

Could it be the machine shut down due to a power outage? I have my systems on a UPS (uninterruptible power supply) now. I wish I had started using a UPS years ago, because now I get far fewer problems: OS corruption, lost work and general hassles. Yes, one should have backups, and I do, but I'd rather just enjoy my computers than be restoring stuff.

____________
BOINC <<--- credit whores, pedants, alien hunters

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 34202 - Posted: 11 Dec 2013 | 10:44:21 UTC - in response to Message 34198.
Last modified: 11 Dec 2013 | 10:45:02 UTC

Your GTX660's each have a 140W TDP, and your AMD FX(tm)-8350 has a TDP of 125W.
That's 405W for those three parts alone. The motherboard might be using ~60W, RAM around 10W, drives around 10W and case fans around 10W. So you might be drawing up to 500W.

Some high-end (Platinum) PSU's are comfortable continuously supplying 80% of their maximum rated power, but mid-range (Bronze) PSU's are not.
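The same back-of-the-envelope sum as a tiny Python sketch you can adapt; the GPU and CPU figures are the TDPs above, the rest are the same ballpark guesses, not measurements:

components = {
    "GTX 660 #1": 140,   # W, TDP
    "GTX 660 #2": 140,   # W, TDP
    "FX-8350":    125,   # W, TDP
    "motherboard": 60,   # ballpark guess
    "RAM":         10,   # ballpark guess
    "drives":      10,   # ballpark guess
    "case fans":   10,   # ballpark guess
}
total = sum(components.values())
print(f"Estimated peak DC load: {total} W")             # 495 W
print(f"Fraction of a 600 W PSU: {total / 600:.1%}")    # 82.5%, above the ~80% comfort zone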

What is the PSU's make and model?
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 34203 - Posted: 11 Dec 2013 | 16:53:10 UTC - in response to Message 34202.

I checked the power use of the system when I put the two 660's in: it draws 350-458 W, depending on whether the two GPUs are busy and whether 4 Rosetta WUs are running on the CPU.
The PSU is a Cooler Master Silent Pro M600, 80 Plus Bronze.
So I think that is the problem?
It had run for 23h41m before it powered off yesterday. I powered it on this morning and it finished a GPUGRID SR; now it has powered down again.
I have a 750 W EVGA 80 Plus Gold PSU as a spare and will try that one.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 34258 - Posted: 12 Dec 2013 | 19:38:57 UTC - in response to Message 34203.

There might be some variation in that EVGA range, but I did see one that is 90% efficient. The EVGA is likely to be about 5% more efficient than the Cooler Master, as it is a higher-end PSU (Gold rather than Bronze, and with more power headroom).
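
To see what that difference means at the wall, a back-of-the-envelope sketch in Python (nominal 80 Plus ballpark figures; real efficiency varies with load):

    # Wall draw for a given DC-side load at assumed efficiencies.
    dc_load_w = 430.0  # hypothetical load drawn by the components
    for label, eff in (("Bronze, ~85%", 0.85), ("Gold, ~90%", 0.90)):
        print("%s: %.0f W at the wall" % (label, dc_load_w / eff))
    # Bronze, ~85%: 506 W at the wall
    # Gold,   ~90%: 478 W at the wall -> roughly 28 W saved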


____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 34259 - Posted: 12 Dec 2013 | 19:54:51 UTC - in response to Message 34258.
Last modified: 12 Dec 2013 | 19:55:33 UTC

Yes, thanks skgiven.
I put the 750 W Gold EVGA PSU in, and the system has now been running for 23h30m, almost the same length of time after which it powered itself down before. It is now running two LRs on the 660's, so more power draw than yesterday with one GPUGRID and one Einstein. So I guess the old PSU was not powerful enough. I will let you know tomorrow if it is still running. Now I also see why it is handy to have spare hardware, even older stuff.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 34266 - Posted: 13 Dec 2013 | 1:10:12 UTC

Coincidentally I was sitting at this PC.
I have Core Temp 1.0 RC running on the AMD (it gives an error when set to start at boot) and had its overheat protection set to shut the system down ten seconds after reaching 80°C (CPU).
The PC was working nicely at 48°C (52°C according to Asus's Thermal Radar 1.01.20) when a message appeared that the limit had been reached and the system would shut down in 10 seconds. I could only just cancel it.
Very likely this is what happened a few days ago, and the PSU could in fact cope (as it did for two days).

The problem now is that no program seems to measure the temperature correctly; all the ones I tested gave different readings. The one that seems most reliable, CPUID HWMonitor, has no overheat protection in it, and I would like to have that for when I am away.
Does anyone have any ideas? Should I use TThrottle for this?
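
If no tool you trust offers shutdown protection, the watchdog logic itself is simple to script. A minimal Python sketch, assuming you can supply a temperature reading from somewhere - read_cpu_temp() is a hypothetical stub - and requiring several consecutive hot readings so it doesn't act on a single spurious spike:

    # Minimal overheat watchdog sketch for Windows.
    # read_cpu_temp() is a placeholder: plug in whatever sensor source you trust.
    import os
    import time

    THRESHOLD_C = 80   # act above this temperature
    CONSECUTIVE = 3    # hot readings in a row before acting
    INTERVAL_S = 10    # seconds between polls

    def read_cpu_temp():
        raise NotImplementedError("supply a real sensor reading here")

    hot = 0
    while True:
        hot = hot + 1 if read_cpu_temp() >= THRESHOLD_C else 0
        if hot >= CONSECUTIVE:
            os.system("shutdown /s /t 10")  # Windows: shut down in 10 seconds
            break
        time.sleep(INTERVAL_S)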
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 34305 - Posted: 14 Dec 2013 | 14:18:51 UTC - in response to Message 34266.

That 80 Plus Gold PSU is a better fit for that system - it saves you money and operates closer to its peak efficiency (around 50% load). You can check with your power meter; you should see lower power draw under a comparable load now.

Regarding your CPU: you always have trouble cooling your CPUs, don't you? ;)
Anyway, that 8350 is one power-hungry beast and supports a turbo mode. That means it will keep pushing itself harder as long as temperature and power draw allow it. Getting precise temperature measurements from AMD CPUs has been a pain for quite a few years, but you can trust the internal measurement: as long as the CPU still turbos up, it's not dangerously hot. If you set a different threshold, you and the CPU will end up fighting each other.

What I suggest: deactivate that turbo mode in the BIOS. It will make the chip run more efficiently (lower voltage) and at the same time cut back on heat generation. You'll lose some CPU performance, but maybe you can make up for that by crunching on one more CPU core (now that it runs more efficiently).

This doesn't really help you with monitoring and trusting the CPU temperature, but if you're using any decent CPU cooler with a 120 mm fan and fluid bearings, it will not just suddenly fail. And even if it does, the CPU and mainboard still have internal protection against damaging temperatures. So the advice is this: find a working point where you can easily handle the CPU temperature, and then stop worrying about it.

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 34309 - Posted: 14 Dec 2013 | 16:17:44 UTC - in response to Message 34305.

Hello ETA, you have a very good memory. Indeed, I always have problems with my CPU cooling.

The problem, if it is a problem, is that I have checked CPU temperature with 4 different programs and they all give a different reading. That worries me.
I will try your advice and deactivate the turbo mode.

And it is the same even now: last Saturday I brought my new build online with an i7-4771, a Sabertooth Z87 (yes, Asus again; EVGA boards are hard to obtain here or very, very expensive) and a Zalman CNPS12X with 3 fans. According to Thermal Radar 2 (the MOBO's readings) the CPU is at 47 to 52°C when 7 cores are busy, but according to CPUID HWMonitor and Core Temp 1.0 it is 10 degrees hotter. Still not alarming, as it has never gone over 65°C, but the difference worries me. I checked a few times with an IR thermometer and all is still okay. Both programs (Asus and HWMonitor) report the same motherboard temperature.
____________
Greetings from TJ

Dagorath
Send message
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 34324 - Posted: 15 Dec 2013 | 9:21:37 UTC - in response to Message 34309.

I've noticed differences in reported temperatures from different programs too. There is no single way to read the temperature sensors; it's not like looking at a thermometer and reading a number off a scale. Different programs use different methods to read the sensors, so you get a different report from each one. It's not that the sensors are broken or inaccurate, or that the software is broken; it's that different software developers have different philosophies about how the sensors should be read and reported. Take an average of 3 or 4 readings and relax, or err on the side of caution and watch only the one that gives the highest reading. If none of them report an excessive temperature, that's a good sign: it means that no matter which standard you use, you're in the green. At least they all agree on that point.
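
Both policies in one tiny sketch (made-up readings):

    # Relaxed view vs cautious view of several tools' readings.
    readings_c = [48, 52, 58]  # hypothetical values from three monitoring tools
    print("average:    %.1f C" % (sum(readings_c) / float(len(readings_c))))
    print("worst case: %d C" % max(readings_c))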

____________
BOINC <<--- credit whores, pedants, alien hunters

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 34335 - Posted: 15 Dec 2013 | 21:29:26 UTC

There's also some variation in which sensors get reported as "CPU". For Intel you can trust CoreTemp: it reads the values measured and reported directly by the cores, the same ones Intel itself uses to set the turbo mode states. Each core has 100+ temperature sensors and reports the hottest one. Some of these sit directly at the FPUs, which are hot spots during heavy number crunching, so these are really worst-case numbers and can hence reach 70°C without trouble, with even 80°C still being OK.

Then there are temperature sensors on the mainboard, in the CPU socket, like in the "good old days" before the integrated ones. These naturally report lower values, by 10 - 20°C, and react more slowly to load changes, but they are what various tools often report.
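
One concrete reason tools disagree on Intel chips: the on-die sensors don't report degrees directly but a distance below the throttle point (TjMax), and each tool converts that to a temperature using whatever TjMax it assumes. A small sketch with illustrative numbers:

    # Intel DTS semantics: the sensor reports "degrees below TjMax".
    # Illustrative values only; the real TjMax is model-specific.
    dts_reading = 35  # sensor says: 35 degrees below TjMax
    for assumed_tjmax_c in (95, 100, 105):
        print("assumed TjMax %d C -> shown as %d C"
              % (assumed_tjmax_c, assumed_tjmax_c - dts_reading))
    # Tools assuming different TjMax values disagree by exactly that offset.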

For AMD, things are unfortunately not as clear. There are internal sensors, but they seem to be placed in some remote spots and measure far lower temperatures than Intel's. This is also reflected in the limits being far lower: typically around 63°C, depending on the CPU. The hot spots in the CPU will probably be 20 - 30°C hotter than this reading. And on the software side things are not clear either: I often see people report nice low load temperatures, but when they take a reading at idle, it's below ambient temperature. You're breaking the 2nd law of thermodynamics, AMD!

I'm not saying all of their readings are wrong or useless; it's just that what gets reported is something very different from what CoreTemp reports on Intel chips.

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 34336 - Posted: 15 Dec 2013 | 23:18:37 UTC

Thanks ETA and Dagorath, I didn't know all of that. Knowing this now, and seeing no temperature higher than 65°C on the i7-4771 with 4 Rosettas running and 2 GPUs fed (which leaves two cores for me to play with), I am happy.
The noise is quite low, even with all the fans running (10 in total).
I also want to use the iGPU for Einstein@Home, but I need to study a bit first to get that working; I will go to the excellent Einstein forums for that.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 34366 - Posted: 17 Dec 2013 | 21:08:38 UTC - in response to Message 34336.

Use driver 9.18.10.3071 (dated 19.03.2013), not the current 10.x (which could be Win 8.1 only anyway, I don't know), plus a current BOINC, and connect something to an output port of the iGPU (the VGA cable of a monitor, a VGA dummy plug, whatever). Allow "Binary Radio Pulsar Search (Perseus Arm Survey)" (these WUs don't count as GPU WUs for the server) and switch off "run CPU tasks for projects where GPU is available". If it works, run 2 WUs in parallel for a nice throughput boost and you're good to go :)
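
For the two-WUs-in-parallel part: Einstein's project preferences expose a "GPU utilization factor" for exactly this, or you can do it locally with an app_config.xml in the Einstein project folder. A minimal sketch - the app name below is an assumption for the Perseus Arm Survey application, so check the <name> fields in client_state.xml for the exact string:

    <app_config>
      <app>
        <!-- assumed app name; verify against client_state.xml -->
        <name>einsteinbinary_BRP5</name>
        <gpu_versions>
          <gpu_usage>0.5</gpu_usage>
          <cpu_usage>0.5</cpu_usage>
        </gpu_versions>
      </app>
    </app_config>

A gpu_usage of 0.5 tells the BOINC client that each task needs half a GPU, so it will schedule two per GPU.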

For problems or further discussion let's head over there.

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 34370 - Posted: 17 Dec 2013 | 23:37:27 UTC - in response to Message 34366.

Thanks for the good advice ETA. Will try it in the coming days.
____________
Greetings from TJ
