Message boards : Number crunching : Hardware questions
Author | Message |
---|---|
It is me again.. | |
ID: 30643 | Rating: 0 | rate: / Reply Quote | |
I wouldn't buy that old workstation - the running costs will be far too high! The T7400 can take up to Penryn 45 nm Core 2 Quads (not sure what's inside now, it could still be 65 nm C2Qs). The performance and efficiency increases have been massive since then. But maybe the worst part is the mainboard and chipset: I expect 200 - 300 W power draw at idle. | |
ID: 30672 | Rating: 0 | rate: / Reply Quote | |
I guess you are very right ETA, but too late: I already ordered it. | |
ID: 30709 | Rating: 0 | rate: / Reply Quote | |
Well.. then have fun with your new toy :) | |
ID: 30715 | Rating: 0 | rate: / Reply Quote | |
Yeah thanks, well it runs quietly: 270 Watt when idle, 385 Watt when 8 cores are crunching Rosetta. I think the case is nice and big, that is usable; the rest is... Well, you warned me (so did Beyond), but a little too late, the order was already on its way. Never mind, it was really cheap so no worries. | |
ID: 30718 | Rating: 0 | rate: / Reply Quote | |
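Those two wall readings make the marginal cost of the CPU load easy to separate from the idle floor; a quick sanity check on the figures quoted above:

```python
idle_w = 270.0   # measured at the wall, all cores idle
load_w = 385.0   # measured with 8 cores crunching Rosetta
cores = 8

delta_w = load_w - idle_w
print(f"{delta_w:.0f} W extra under load, ~{delta_w / cores:.1f} W per loaded core")
# -> 115 W extra, ~14.4 W per loaded core; the 270 W idle floor is the real cost.
```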
You could just pull the Quadro FX4600's and add two GTX660's (presuming they would work on that motherboard). You would be drawing around 600W though! | |
ID: 30731 | Rating: 0 | rate: / Reply Quote | |
The Xeons are E5430s @ 2.66 GHz and not too fast. It seems indeed that old stuff uses more power. The system is from May 2008. | |
ID: 30737 | Rating: 0 | rate: / Reply Quote | |
> The Xeons are E5430s @ 2.66 GHz and not too fast. It seems indeed that old stuff uses more power. The system is from May 2008.

That's better than my Xeon - same frequency, but 45 nm, 1333 MHz FSB, 12 MB cache, 80 W TDP. http://ark.intel.com/products/33081/Intel-Xeon-Processor-E5430-12M-Cache-2_66-GHz-1333-MHz-FSB

Is the system RAM DDR2 or DDR3? Why not pull the CC1.0 GPU's and test a different GPU?

I would like a new system to replace my i7 with the "heater". I want two GPU's and an i7 (or Xeon) again with Win7. It will run GPUGRID and Rosetta on the CPU. What parts do you have? | |
ID: 30739 | Rating: 0 | rate: / Reply Quote | |
I ran Rosetta on the Xeons and saw that they took approx. 600 seconds more than an i7 (960 @ 3.20GHz). I know that is not a good comparison. | |
ID: 30740 | Rating: 0 | rate: / Reply Quote | |
> I ran Rosetta on the Xeons and saw that they took approx. 600 seconds more than an i7 (960 @ 3.20GHz). I know that is not a good comparison.

It would be a reasonably good comparison if you mention how long the i7 takes; if it's 30 min then the Xeons are not great, but if it's 10 h then the 10 min difference is negligible.

> I checked Intel indeed before I ordered it and saw that they use 80 Watt, that's less than the i7 (960) at 144.33 Watt. How is it then possible that it is drawing almost 300 Watt when idle (doing nothing)? Even with just the plug in the mains it draws 3 Watt.

That's 80W for each CPU. Mainly the motherboard, the two GPU's, eight sticks of DDR2 and the PSU, but also drives. The DDR2 may be forcing the FSB to operate at 800MHz, when it could be 1333MHz. This would make the processors slower for computation. I saw a fairly large difference when I moved my Xeon from a DDR2 board to a DDR3 board (not saying this is the way forward though; I think the E5430 isn't DDR3 compatible).

> The only GPUs I have that can do BOINC projects are AMD HD 5870's (2); they were in the system where the GTX660 is now running.

I would put one back in then - it could do way more work than both CC1.0 cards combined.

> There are 4 SATA connectors free and 5 SATA power plugs (small, long, black), and only 1 white (large) 4-pin plug. Weird.

Yeah, a bit of an odd PSU design; it only accommodates one big GPU, but is 1000W and has lots of SATA connectors (newer than the 4-pin IDE power connectors). This also prevents you from using two 4-pin IDE power connectors to hook up another GPU!

> I have an SSD, 2 HD's, several fans and Win7 Professional, all new, and a case with 5 fans.

Is that to be used for a new build? | |
ID: 30744 | Rating: 0 | rate: / Reply Quote | |
Careful with run-time comparisons at Rosetta: you decide how long your WUs shall run (approximately) and your hardware decides how much work is being accomplished during this time, which will be reflected in the amount of credit given for the WUs. | |
ID: 30747 | Rating: 0 | rate: / Reply Quote | |
> I ran Rosetta on the Xeons and saw that they took approx. 600 seconds more than an i7 (960 @ 3.20GHz). I know that is not a good comparison.

The i7:
586361826 532500775 9 Jun 2013 9:20:22 UTC 9 Jun 2013 12:12:14 UTC Over Success Done 10,118.13 59.06 61.76

The Xeon (same type of job):
586009637 1619478 7 Jun 2013 13:49:39 UTC 7 Jun 2013 16:45:54 UTC Over Success Done 10,100.83 62.60 76.99
586010293 1619478 7 Jun 2013 13:53:43 UTC 7 Jun 2013 17:14:56 UTC Over Success Done 10,696.68 66.29 80.25

Thus one faster and one slower. But yeah, the Xeon is not too bad.

> The only GPUs I have that can do BOINC projects are AMD HD 5870's (2); they were in the system where the GTX660 is now running.
> I would put one back in then - it could do way more work than both CC1.0 cards combined.

Will do, but this system will only run when the cheaper power rate is active, and not very often. The i7 with the GTX285 at 90% load and 6 Rosies running is only using 315 Watt! I could almost run two of these; I should have listened to you all.

> This also prevents you from using two 4-pin IDE power connectors to hook up another GPU!

Indeed, that was my plan, but nope.

> I have an SSD, 2 HD's, several fans and Win7 Professional, all new, and a case with 5 fans.
> Is that to be used for a new build?

Yes, all new; I will not use it for an old system. | |
ID: 30748 | Rating: 0 | rate: / Reply Quote | |
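For what it's worth, the relative gap in those quoted runtimes is easy to put a number on:

```python
i7_s = 10118.13                  # i7-960 run time in seconds, from the post above
xeon_s = [10100.83, 10696.68]    # the two E5430 run times

for t in xeon_s:
    print(f"Xeon vs i7: {100 * (t - i7_s) / i7_s:+.1f}%")
# -> -0.2% and +5.7%: on a ~3 h task, ~600 s is under 6% per core.
```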
Thanks ETA.

> There is only really 1 4-pin molex connector for older devices? Seems weird. These could be used to power GPUs, but you need 2 of them for 1 6-pin GPU connector (ideally originating from different cables).

Two, but one is in use by the DVD drive.

> How efficient is that 1 kW PSU? Chances are it's not all that bad, if it's been in a high end workstation.

I don't know. Can I find that out?

> And I second crunching on that HD5870. It's still a very decent card for a few projects, as it packs lots of raw horsepower into a moderately efficient 40 nm chip. Examples: Milkyway, POEM, Einstein (not many credits, though) and probably some more.

Yes, the two of them did Einstein, Albert and Milkyway nicely. I don't care about the credits that much; the science I find useful/important is what I crunch for. I will not OC things. I like EVGA and saw nice new MOBOs from them. It would be nice to have a MOBO, PSU and 2 GPUs from EVGA; they would work well together. However, they are not easy to find in the Netherlands. Zalman has nice cases, also hard to find here. And the nVidia Tesla case seems very nice, but I can't find that thing even in the US. However, I know from Beyond (cruncher) that a good case with good airflow is important as well. | |
ID: 30749 | Rating: 0 | rate: / Reply Quote | |
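On "Can I find that out?": if the PSU carries an 80 Plus badge, the certification level gives a ballpark efficiency; otherwise you can only estimate it by comparing wall draw with a guess at the DC load. A small sketch (the 80 Plus thresholds at 115 V input are from the published spec; the DC-load figure is a made-up example):

```python
# 80 Plus thresholds at 115 V: efficiency % at 20/50/100% of rated load
eighty_plus = {
    "80 Plus":  (80, 80, 80),
    "Bronze":   (82, 85, 82),
    "Silver":   (85, 88, 85),
    "Gold":     (87, 90, 87),
    "Platinum": (90, 92, 89),
}

wall_w = 385.0         # measured at the wall while crunching
dc_guess_w = 330.0     # hypothetical estimate of the real DC load
print(f"implied efficiency ~{100 * dc_guess_w / wall_w:.0f}%")
# ~86%: between the Silver and Gold rows of the table above at mid load.
```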
Screw that DVD drive, you can access another one over the network ;) | |
ID: 30751 | Rating: 0 | rate: / Reply Quote | |
> Yes, you could put in some large mainboards. However, I wouldn't touch a 6-core for BOINC. As Intel it's too expensive and not energy-efficient enough (still 32 nm Sandy Bridge) and as AMD.. well, no need to discuss that :p
> MrS

Why are you always making AMD snipes? The X6 processors are decent and still offer good crunching bang for the buck. At one project I run they are faster than even the fastest Intels, which are MUCH more expensive. We had all better hope AMD sticks around, or we'll be mortgaging our houses to buy CPUs.

Intel on the desktop: Ivy Bridge runs hotter than Sandy Bridge, and the new Haswell runs hotter and uses more energy than Ivy Bridge. Aren't they going in the wrong direction? Here's part of the review from Xbit Labs:

"The Haswell CPU core temperatures are seriously higher than those of the previous generation processors. And although most every-day tasks do not cause the CPU to heat up so dramatically, we should base our conclusions primarily on specialized stability tests, which create heavy but nevertheless quite realistic load. So, it turns out that overclocking the new CPUs calls for much better coolers than those we could use for Ivy Bridge processors. In other words, it is harder to reach the same results when overclocking Core i7-4770K as we did with the overclocker-friendly Sandy Bridge and Ivy Bridge products in LGA1155 form-factor."
http://www.xbitlabs.com/articles/cpu/display/core-i7-4770k_12.html

"And frankly speaking, this product is not that impressive at all, especially in the eyes of computer enthusiasts. We tested the top of the line desktop Haswell, Core i7-4770K, and drew a number of bitter conclusions. First, Core i7-4770K is just a little bit faster than the flagship Ivy Bridge processor. Microarchitectural improvements only provide a 5-15 % performance boost, and the clock frequency hasn't changed at all. Second, Core i7-4770K processor turned out a significantly hotter processor than the CPUs based on previous microarchitecture. Even though Haswell allows engineering energy-efficient processors with impressively low heat dissipation, its performance-per-watt has worsened a lot when they adjusted its characteristics to meet the desktop requirements. This resulted into the third item on this list: without extreme cooling Core i7-4770K overclocks less effectively than the previous generation overclocker processors. The specific CPU sample we tested this time allows us to conclude that these processors may get overheated at 4.4-4.5 GHz clock speeds even with high-performance air coolers. And fourth: Haswell processors require new LGA 1150 platform, which doesn't boast any unique advantages, but merely offers more USB 3.0 and SATA 6 Gbps ports. But currently this platform seems quite raw and awaits a new chipset stepping, which will fix some issues with the USB 3.0 controller."
http://www.xbitlabs.com/articles/cpu/display/core-i7-4770k_13.html | |
ID: 30753 | Rating: 0 | rate: / Reply Quote | |
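Taking the review's own numbers at face value, the performance-per-watt complaint is easy to quantify (the TDPs are Intel's published figures, the 5-15% uplift comes from the quote above):

```python
tdp_3770k, tdp_4770k = 77.0, 84.0    # published TDPs
for uplift in (1.05, 1.15):          # the review's 5-15% performance gain
    perf_per_w = uplift / (tdp_4770k / tdp_3770k)
    print(f"{uplift:.0%} speed -> {perf_per_w:.2f}x perf/W vs the 3770K")
# -> 0.96x to 1.05x: at the low end of the range, Haswell at full TDP
#    is actually a perf/W regression, matching the review's conclusion.
```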
> Intel on the desktop: Ivy Bridge runs hotter than Sandy Bridge, and the new Haswell runs hotter and uses more energy than Ivy Bridge. Aren't they going in the wrong direction?

The chips of the CPU series prior to Ivy Bridge (namely Sandy Bridge, Sandy Bridge-E, Gulftown, Bloomfield) were actually soldered to the IHS (Integrated Heat Spreader, the metal housing of the chip), resulting in good thermal transfer to the IHS and low CPU temperatures. But Intel is using some "cheap" thermal interface material (TIM) on the new series, so if you want lower CPU temperatures to overclock more, you have to remove the IHS (voiding the warranty) and put the CPU cooler directly onto the chip (very risky), and/or use better and/or thinner TIM. See this video. | |
ID: 30754 | Rating: 0 | rate: / Reply Quote | |
Two more questions. | |
ID: 30755 | Rating: 0 | rate: / Reply Quote | |
> Two more questions.

Only the GTX Titan and GTX 780 are good at DP.

> 2. Heat. I have RealTemp 3.70 and CPUID Hardware Monitor running, and the temperatures RealTemp shows are 10 degrees lower (colder) for the CPU: 59 57 58 57 vs 69 67 68 67. All in Celsius. The CPU is doing Rosetta, 1 core for GPUGRID and 1 idle. This is very strange.

You should try your motherboard's original monitoring software (or CoreTemp, 32bit or 64bit).

> I also have an Alienware with liquid cooling and temperatures here are even worse: 72 70 71 69. RealTemp and CPUID have the same values though.

This is 10-15 degrees higher than a liquid cooler should be able to provide. If it's noisy, perhaps its pump is about to fail, or the level of its coolant is low; these are the worst things that can happen to a liquid cooler, and to the part which is cooled by it.

> The question: what are acceptable CPU temperatures?

Around 70°C is acceptable with air cooling. The lower the better, even more so for overclocking. There are a couple of ways of lowering the CPU temperature, some of which void its warranty (removing, or polishing, the IHS). | |
ID: 30756 | Rating: 0 | rate: / Reply Quote | |
@Beyond: because even if you can find a project where AMD CPUs provide good performance (you can, as you said), the energy efficiency is far too poor for general 24/7 crunching, compared to Intel. I know there are places where electricity is much cheaper than where I live, but I don't think the Netherlands is one of them. Otherwise I like some of what AMD is doing and wish they could do it even better (sore points, to be more specific: single threaded integer performance of Bulldozer and its children, power efficiency, smarter turbo modes). | |
ID: 30759 | Rating: 0 | rate: / Reply Quote | |
Tetchy thread, but I'll dip my toes again. | |
ID: 30760 | Rating: 0 | rate: / Reply Quote | |
Thanks guys, this is good info. | |
ID: 30761 | Rating: 0 | rate: / Reply Quote | |
There is more information coming; this is good, but it also makes it more difficult for me to choose right. | |
ID: 30762 | Rating: 0 | rate: / Reply Quote | |
The reason Intel switched from solder to TIM on IB was because the conductivity of the solder was not compatible with the new tri-gate transistor technology. | |
ID: 30763 | Rating: 0 | rate: / Reply Quote | |
> For crunching the i7-4770K has nothing over the i7-3770K. In fact I would say it's just an i7-3770K done all wrong! Seriously, we don't want lots of USB3 ports and another crap iGPU (which hiked the TDP from 77W to 84W). Two major updates with no improvement on desktop boxes. I measured the actual crunching difference at 4.2GHz of an i7-2600K and an i7-3770K for ~14 CPU projects and there was nothing between them. Only one app from one project was significantly faster (7%), and many were slightly (1 to 3%) faster on the 2600K.

Yep, both Intel and AMD have had little performance increase on the desktop. The big difference is price. AMD has made significant strides in onboard graphics performance though. Intel is trying to catch up in that department.

> Has AMD any plans to go to PCIe 3.0 boards?

PCI Express 3.0 - The Latest Graphics Standard Now on AMD Boards:
http://www.asus.com/Motherboards/SABERTOOTH_990FXGEN3_R20
http://www.maximumpc.com/article/news/ces_2013_look_ma_amds_990fx_does_have_pcie_30_support_video

AMD is one of the major players in PCIe 4.0. I've heard rumors that some of their latest processors already include support, so perhaps they're not too concerned with wholeheartedly supporting version 3, although 4 is still a ways off AFAIK. PCI Express 4 in the Works: Set to Achieve 16Gb/s per Lane:
http://www.xbitlabs.com/news/other/display/20110624231122_PCI_Express_4_in_the_Works_Set_to_Achieve_16Gb_s_per_Lane.html | |
ID: 30780 | Rating: 0 | rate: / Reply Quote | |
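To put those generations in perspective: per-lane throughput roughly doubles each revision. A back-of-the-envelope using the line rates and encodings from the PCIe specs:

```python
# (line rate in GT/s, encoding efficiency) per generation, per direction
gens = {
    "PCIe 2.0": (5.0, 8 / 10),     # 8b/10b encoding
    "PCIe 3.0": (8.0, 128 / 130),  # 128b/130b encoding
    "PCIe 4.0": (16.0, 128 / 130),
}
for name, (gts, eff) in gens.items():
    lane = gts * eff / 8   # GB/s per lane
    print(f"{name}: {lane:.2f} GB/s/lane, x16 = {16 * lane:.1f} GB/s")
# PCIe 2.0 x16 ~8 GB/s, 3.0 x16 ~15.8 GB/s, 4.0 x16 ~31.5 GB/s
```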
> @Beyond: because even if you can find a project where AMD CPUs provide good performance (you can, as you said), the energy efficiency is far too poor for general 24/7 crunching, compared to Intel. I know there are places where electricity is much cheaper than where I live, but I don't think the Netherlands is one of them. Otherwise I like some of what AMD is doing and wish they could do it even better (sore points, to be more specific: single threaded integer performance of Bulldozer and its children, power efficiency, smarter turbo modes). MrS

Happens to be my favorite CPU project: Yoyo. Energy efficiency? The current Phenom X6 has a TDP of 95w, Haswell is 84w. Whoopie, 11 watts. What does the Netherlands have to do with anything? I was thinking of trying a Haswell, but after reading many reviews I'd probably go with a used Sandy Bridge at this point and save some bucks. I do like to support AMD, as they've historically been the only company pushing Intel and keeping Intel from robbing us blind like they used to. For crunching on AMD, the X6 is still the best unless you're also using the built-in GPUs on some of the latest parts. | |
ID: 30781 | Rating: 0 | rate: / Reply Quote | |
For 1 kWh of electricity you have to pay €0.23 ($0.31) in the Netherlands. 11 Watts is something like 96 kWh over 1 year of (theoretical) full load. And that's expensive compared to other countries... (I've seen a topic about this somewhere??? Hmm.) | |
ID: 30782 | Rating: 0 | rate: / Reply Quote | |
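That 96 kWh figure checks out, and the money it implies is easy to compute from the numbers above:

```python
extra_w = 11.0                    # the TDP gap quoted above
kwh = extra_w * 24 * 365 / 1000
print(f"{kwh:.0f} kWh/year")      # ~96 kWh
for rate, cur in ((0.23, "EUR"), (0.31, "USD")):
    print(f"  -> {kwh * rate:.2f} {cur}/year at {rate} {cur}/kWh")
# ~22 EUR (~30 USD) per year of 24/7 load - real money, but hardly decisive.
```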
I pay only 11 eurocent per kWh during the day and 5 cent at night, in the Netherlands. At the end of the year, when the bill arrives, there is a tax surcharge on the total amount of electricity used. I do use a lot, and after the final calculation this means, in my case, that I pay 16.44 eurocent per kWh. (I use around 1 kWh per hour, depending on whether I am free and have several rigs running.) | |
ID: 30785 | Rating: 0 | rate: / Reply Quote | |
> I pay only 11 eurocent per kWh during the day and 5 cent at night, in the Netherlands. At the end of the year, when the bill arrives, there is a tax surcharge on the total amount of electricity used. I do use a lot, and after the final calculation this means, in my case, that I pay 16.44 eurocent per kWh.

Electricity is $0.09/kWh here (kind of off-peak: a fixed rate, but it allows the power company to cycle the heating and air conditioning). We don't use air conditioning more than 1 or 2 days a year and don't use heating at all (the computers provide more than enough heat [Minnesota]).

> So I am still waiting for some reply to some of my messages below. But I am patient.

You've seen more than enough jabbering about CPUs from various fanboys (me included). XFX, Antec, Corsair, Seasonic and Sparkle all make good power supplies. Google some reviews on the particular model you're looking at, although the gold and platinum models of any of these are most likely very good. The Rosewill platinum line seems to be good too. I mostly use low cost Antec 300 cases. Very good air flow. You'll have to add at least a 120mm side fan and a front fan or two. The last case I bought was the NZXT Source 210 and it's very impressive for the cost:
http://benchmarkreviews.com/index.php?option=com_content&task=view&id=804&Itemid=99999999&limit=1&limitstart=4

In my experience ASUS, ASRock, Gigabyte, MSI, Foxconn and Biostar all make some good and bad motherboards. I'd stay away from ECS completely. Read the reviews (including newegg comments) and make sure the PCIe configuration (preferably 2 x16 lanes) and spacing is good for 2 GPUs. | |
ID: 30792 | Rating: 0 | rate: / Reply Quote | |
With ECS you mean Elitegroup Computer Systems? I don't think that brand can be found in the Netherlands. There is not one large store that has everything. There are a few on the net, but with limited stock/availability. | |
ID: 30807 | Rating: 0 | rate: / Reply Quote | |
> With ECS you mean Elitegroup Computer Systems?

You got it.

> I found an 80 Plus Gold PSU from EVGA, and a MOBO from EVGA with good space for three GPU's, and of course EVGA GPU's, all to order directly from EVGA in Germany.

Sounds like a nice machine. | |
ID: 30809 | Rating: 0 | rate: / Reply Quote | |
Later today (13 June, or 14 June UTC) I will bring online my new PC. | |
ID: 30816 | Rating: 0 | rate: / Reply Quote | |
That motherboard should be able to handle the just-released FX-9590 CPU; I'd like to see how well it does at CPDN. | |
ID: 30818 | Rating: 0 | rate: / Reply Quote | |
@TJ Yes I know. But my reaction had to do with what Beyond wrote.

Is your 16.44 ct/kWh minus energy tax? ("heffingskorting" in Dutch.)

On your hardware I can't give you good advice. What I would do is buy maximum performance for as little money as possible... but you know that too, you aren't stupid! haha ;) Quadro sounds very good, but the two FX4600 cards are only renamed 8800GTX cards (or sort of). I would get a second-hand 660Ti or 670 now at Tweakers.net or something. | |
ID: 30840 | Rating: 0 | rate: / Reply Quote | |
@TJ Yes indeed, I have added all costs, like meter costs, network costs and energy tax, then subtracted the benefits (a few euro because I pay automatically, plus the part of the energy tax we get back). Then the total is divided by the number of kWh I have used over the last year. As my prices per kWh vary per year, the mean price also varies per year. But I use gas only for cooking and showering, not heating :) | |
ID: 30844 | Rating: 0 | rate: / Reply Quote | |
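TJ's method is easy to reproduce. A sketch with placeholder component numbers (only the 11/5 ct tariffs come from the posts above; the yearly split, fixed costs and rebates are made up):

```python
# effective rate = (variable cost + fixed costs - rebates) / kWh used
day_kwh, night_kwh = 5000.0, 3000.0  # hypothetical yearly usage split
day_rate, night_rate = 0.11, 0.05    # EUR/kWh, from the posts above
fixed = 900.0                        # hypothetical: meter + network + energy tax
rebates = 100.0                      # hypothetical: autopay discount, tax refund

total = day_kwh * day_rate + night_kwh * night_rate + fixed - rebates
print(f"effective rate: {100 * total / (day_kwh + night_kwh):.2f} ct/kWh")
# ~18.75 ct with these placeholder numbers; TJ's real bill works out to 16.44 ct.
```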
I have a quad with 2 nVidia cards with CUDA compute capability 1.0, so they are obsolete. I have two AMD HD5870 cards left; I can use these for Einstein. But there are only two 6-pin power plugs, and I need 4. Now the PSU (700 Watt) has a very short 8-pin connector free (female connector). I guess I can use an adapter here, from one 8-pin to two 6-pin PCIe? What is such a cable called? I need a quite long one, or perhaps an extra extension cable. However, I cannot find one, but I guess that's because I don't know what it is called. | |
ID: 30960 | Rating: 0 | rate: / Reply Quote | |
If that 8-pin connector is very short, I suppose it's the EPS 8-pin CPU power connector for the mainboard. I wouldn't try to repurpose this for powering GPUs, as I don't know what side effects this might have. | |
ID: 30977 | Rating: 0 | rate: / Reply Quote | |
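For reference when sizing such adapters, the PCIe spec's power budgets make the arithmetic simple: 75 W from the slot, 75 W per 6-pin and 150 W per 8-pin connector. A sketch for the HD 5870 case above:

```python
SLOT_W, SIXPIN_W, EIGHTPIN_W = 75, 75, 150   # PCIe spec power budgets

def budget(sixpins=0, eightpins=0):
    """Maximum spec-compliant board power for one card."""
    return SLOT_W + sixpins * SIXPIN_W + eightpins * EIGHTPIN_W

hd5870_tdp = 188  # W, AMD's board power for the reference HD 5870
print(budget(sixpins=2), ">=", hd5870_tdp)   # 225 W >= 188 W: two 6-pins suffice
# When building a 6-pin from molex adapters, feed it from two separate
# 4-pin lines, as noted earlier in the thread.
```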
Thanks ETA, glad you are better again; it was way too warm in our area to be ill a few days ago. | |
ID: 30979 | Rating: 0 | rate: / Reply Quote | |
Another question. | |
ID: 30981 | Rating: 0 | rate: / Reply Quote | |
Good to see you back up and posting. There are lots of views, slants and takes on AMD and Intel processors, which are mostly multi-threaded now. It's maybe worth using the Multicore CPUs thread to present any further facts or opinions. I would be keen to hear any opinions relating to the cheapest/best setup to support a GPU or multiple GPUs. | |
ID: 30987 | Rating: 0 | rate: / Reply Quote | |
> Since the i7-980X there has been little improvement in 'out and out' processing power from either AMD or Intel. Lots of different generations, revisions, sockets, packaging, hype and waffle, but limited performance gain.

It was about time to change the course of the improvement of processing power. The results of chasing more and more gigahertz were the Pentium D processors with 95-130W TDP. It was impossible to dissipate that much heat with the supplied coolers. (I haven't liked those coolers since then, because they can collect a nice tissue of dust on top of the fins of the heatsink, blocking nearly all the airflow and causing overheating, or the CPU underclocking itself.)

> Intel and AMD have largely ignored CPU improvement wants from the high end workstation, gaming and crunching markets.

Intel's profit from those marginal markets is insignificant. Just like AMD's, but AMD bought ATi to increase their market coverage.

> Instead they have sought gains in other areas; more done on the processor, better performance/watt.

That is what the market has needed since then. That is where GPUs are better than CPUs. That is why supercomputers are not built on CPUs only anymore. That is why Intel (and AMD) haven't been able to sell as many CPUs since then. If someone wants a high-end workstation with more CPU power (unusual today), it can have two CPUs in it without overheating. Crunching is a long-term activity, so it's better to minimize the cost of the energy it consumes.

> At the same time a lot of what we are now working with is the product of marketing strategy; on-die controllers, Intel only chipsets, push towards laptops and portable devices, ...

Nowadays the computing power of mobile devices (smartphones, tablets) is enough for everyday use (office tasks, browsing, social networking), so the office PC/laptop business is in trouble. They have to become more like the mobile devices, or they will become extinct, because mobile computing devices are much more power efficient.

> ... fewer PCIE lanes, and the 'shroud of the cloud' (server processors) - all at the expense of high end desktop improvements.

After years of chasing CPU speed to prove the needlessness of 3D accelerators, Intel lost that battle when NVidia presented their G80 architecture. So now they focus on what is left for them, and in the meantime try to catch up with NVidia, AMD (ATi), and ARM. They are 3-5 years behind, which could be deadly in the computing market.

> But the same can be said of NVidia and OpenCL - it's typical business maneuvering.

Without that business maneuvering Intel, AMD and NVidia would go bust, and we couldn't have their devices to crunch on.

> When SB arrived we reached the point where a laptop processor existed that was capable of handling >99% of office performance requirements.

Sure. There is no need for faster PCs in the office. But they still could be more power efficient, to eliminate active cooling (and the noise and the dust pileup). I started my computing experience with passively cooled CPUs (like the Zilog Z80 and the MOS 6510, and later PCs up through the Pentium processor), and I (and most consumers) would like to have passive cooling back on the modern CPUs of office PCs and laptops.

> Since Gulftown, Sandy Bridge-E has been Intel's only attempt at a genuine high end workstation/gaming/crunching system, but it failed to deliver PCIE3 and thus failed almost completely - you don't do CAD on the CPU, don't game on the CPU and don't crunch on the CPU (relatively speaking). In other words: there is improvement in the high end desktops, but the majority of that comes from NVidia and ATi (AMD), not from Intel.

> For crunching the i7-4770K has nothing over the i7-3770K.

We'll see, as now I have one. Maybe some projects will gain from the doubled L1 and L2 cache bandwidth and the other architectural improvements. (I don't expect that the scientific applications can utilize AVX2.)

> In fact I would say it's just an i7-3770K done all wrong! Seriously, we don't want lots of USB3 ports and another crap iGPU (which hiked the TDP from 77W to 84W).

If you don't use the iGPU, it won't increase the CPU's power consumption, as the 4xxx series has even more advanced power gating features than the 3xxx series. The USB3 ports are on the 8x series chipset, which is also more power efficient than the 7x series.

It has nothing to do with your statement above, but I want to share my power consumption measurements of two systems:
1. Core2 Duo E6600 (2x 2.4GHz), 4x 512MB DDR2 800MHz RAM, Intel DQ965GF motherboard
2. Core i7-4770K (8x 3.7GHz), 2x 2GB DDR3 1333MHz RAM, Gigabyte GA-Z87X-OC motherboard
The PSU and the HDD are the same. Both systems consumed around 90-96W under full load (Core2 Duo: 2 threads, Core i7: 8 threads). | |
ID: 31129 | Rating: 0 | rate: / Reply Quote | |
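Those last two wall measurements invite a rough efficiency comparison. Assuming, purely for illustration, that one Haswell thread does at least the work of one Core2 thread:

```python
watts = 93.0                     # midpoint of the measured 90-96 W, same PSU/HDD
threads_c2d, threads_i7 = 2, 8   # full-load thread counts from the post above

# Work/W ratio under the (generous to the Core2) equal-per-thread assumption;
# the wall power cancels because both systems drew the same.
ratio = (threads_i7 / watts) / (threads_c2d / watts)
print(f"i7-4770K: >= {ratio:.0f}x the work per watt of the E6600 here")
# -> at least 4x the throughput at the same wall power, ~7 years apart.
```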
A few questions about processors. | |
ID: 31133 | Rating: 0 | rate: / Reply Quote | |
Box means it's in a box (retail), rather than on a tray (OEM product). Box usually means the CPU comes in a box with a heatsink and fan. | |
ID: 31134 | Rating: 0 | rate: / Reply Quote | |
Well, I have one AMD system with an AMD CPU and AMD GPU. I use it only for my study and I must say it runs nicely, quietly, and boots fast. I haven't found any problems with it. | |
ID: 31135 | Rating: 0 | rate: / Reply Quote | |
Any LGA1150 CPU will work on any LGA1150 motherboard. | |
ID: 31136 | Rating: 0 | rate: / Reply Quote | |
> Finally I see that in the Netherlands AMD processors are very cheap compared with Intel, could be a factor of 3 or 4. Am I right that AMD does not have HT?

AMD processors work perfectly with NVidia GPUs. I'm running 9 machines with one NVidia GPU and one ATI/AMD GPU in each. Intel is a little faster in most CPU projects, but AMD is faster in some. Much of the reason you're seeing faster benchmarks in CPU reviews is that the top AMD processors have more cores, and most single programs don't use that many cores at once. From the Guru 3D review:

"Concluding then. I'll keep saying this, personally I would have preferred a faster per core performing AMD quad-core processor rather then an eight-core processor with reduced nice per core performance. However we do have to be clear here, we have been working with the FX 8350 processor for a while now and it simply is a great experience overall. Your system is very fast, feels snazzy and responsive. The Achilles heel simply remain single threaded applications. The problem here is that it effects game performance quite a bit, especially with high-end dedicated graphics cards and that's why in it's current form the FX series simply is not that popular amongst the gaming community."
http://www.guru3d.com/articles_pages/amd_fx_8350_processor_review,21.html

Of course DC crunching uses all the cores, so multicore usage is not a problem. As a couple of reviews mention: with the money you save on the AMD processor you can afford a better GPU, and thus end up with an overall faster system. | |
ID: 31137 | Rating: 0 | rate: / Reply Quote | |
Everybody seems to forget one big glaring development, Jim Keller is back at AMD and there's going to be 1 maybe 2 more CPU upgrades for the socket AM3+. | |
ID: 31139 | Rating: 0 | rate: / Reply Quote | |
The Haswell paradox: The best CPU in the world… unless you’re a PC enthusiast, By Sebastian Anthony | |
ID: 31140 | Rating: 0 | rate: / Reply Quote | |
Honestly, I do not know much about computer science, and if it were not for BOINC, I still would not read about it. | |
ID: 31142 | Rating: 0 | rate: / Reply Quote | |
> Or do I miss something?

What you're missing is that higher per-thread performance means higher overall performance. Especially for distributed computing projects like BOINC, which utilize all available computing power, this means you'll get significantly more work done. For crunching, I think AMD just can't beat Intel right now. The only AMDs I'd pick would be the 8-cores, but then again they're not real 8-cores; they are 4-cores with doubled integer units*, making for a half-8-core, if such a term is valid. So I don't really know how they would fare against Intel 4-cores at similar frequencies.

* maybe other core parts as well | |
ID: 31143 | Rating: 0 | rate: / Reply Quote | |
> Or do I miss something?

An AMD 8-core will do more work than a 4-core Intel at the same clock speed. Also keep in mind that the Intel quad-cores with HT, which thus run 8 threads, actually have 4 real cores. And the AMD 8-cores have higher clock speeds than "comparable" Intels. I have an AMD 4-core and it does not underperform against Intel, and its temperature is lower. So that is a plus, as well as the lower price. The only minus point I have is that AMDs have a higher TDP and theoretically use more power. This is of course an issue when running 24/7. | |
ID: 31144 | Rating: 0 | rate: / Reply Quote | |
> For crunching, I think AMD just can't beat Intel right now. The only AMDs I'd pick would be the 8-cores, but then again they're not real 8-cores; they are 4-cores with doubled integer units, making for a half-8-core, if such a term is valid. So I don't really know how they would fare against Intel 4-cores at similar frequencies.

Oversimplified explanation: let's call it 8 integer cores and 4 floating point cores. AMD decided to focus on the much more common integer tasks and rely more on extensions to bolster floating point. This was a change from the Phenom X6, which had 6 powerful hardware floating point cores. In fact, a case could be made that the Phenom X6 is still the best bang-for-the-buck processor available. At some projects it's faster than ANY Intel i7 (my favorite CPU project Yoyo, for instance). The 95w 1045T can be had for $80 from Microcenter or $90 from TigerDirect and works on the latest AM3+ motherboards. Think of all the extra cash you could use to buy a better GPU. For instance, the price difference would more than move you up from a 650 Ti to a 660 Ti and still leave enough money for a nice dinner, another 8GB of RAM or a better power supply. | |
ID: 31145 | Rating: 0 | rate: / Reply Quote | |
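The "spend the CPU savings on the GPU" argument, as arithmetic. Only the $80-90 1045T price comes from the post above; the other prices are hypothetical placeholders:

```python
budget = 400.0   # hypothetical CPU + GPU budget
cpus = {"Phenom II X6 1045T": 85.0,    # price from the post above
        "quad-core Intel i5": 200.0}   # hypothetical street price
gpu_650ti, gpu_660ti = 130.0, 250.0    # hypothetical street prices

for cpu, price in cpus.items():
    left = budget - price
    tier = "660 Ti class" if left >= gpu_660ti else "650 Ti class"
    print(f"{cpu}: ${left:.0f} left for the GPU -> {tier}")
```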
The Phenom X6 is also slightly faster on Einstein, my main project. | |
ID: 31146 | Rating: 0 | rate: / Reply Quote | |
"8 integer cores and 4 floating point cores" - Now there's a good description, and explanation as to why the Phenom X6 processors outperform the latest 8-core AMD processors for floating point apps. | |
ID: 31147 | Rating: 0 | rate: / Reply Quote | |
"8 integer cores and 4 floating point cores" - Now there's a good description, and explanation as to why the Phenom X6 processors outperform the latest 8-core AMD processors for floating point apps. I run yoyo ecm but even projectwide the X6 is fastest: http://www.rechenkraft.net/yoyo//top_hosts.php The 2 opterons at the top are multi cpu servers and the intel in 3rd place has multiple cpus (24 cores). 4th (and fastest single cpu machine) is my lowly 1035T running 5 cores on Yoyo. Next is a 6/12 core HT 3930K that just popped up a few places. From there down on the first page (discounting the multi CPU servers) it's mostly AMD even though there are far more Intels running the project.. Even the 8120 does well. The sandy & ivy bridges have done far better than the earlier Intels and some are hovering near the top, but considering the cost. If you look at the all time list it's even more telling, 14 of the top 20 single CPU machines are Phenoms. It will be interesting though when a few of the new 8350 8 cores work their way up the RAC list. I would guess they might be the new Yoyo speed champs. | |
ID: 31148 | Rating: 0 | rate: / Reply Quote | |
Both. | |
ID: 31149 | Rating: 0 | rate: / Reply Quote | |
OK, I spent some time looking at BOINCstats CPU break-downs to get as clear a picture of CPU performance as I can. First of all, I looked at not only Yoyo, but also WCG and SETI, as Yoyo is really a niche project with only ~3700 hosts vs ~216000 of WCG and ~207000 of SETI. Yoyo and WCG run apps of various kinds, so they should reflect overall CPU performance adequately well. SETI is more specific, but should also help. | |
ID: 31151 | Rating: 0 | rate: / Reply Quote | |
A few additional issues with those results. | |
ID: 31152 | Rating: 0 | rate: / Reply Quote | |
> For Yoyo, there are several apps and BOINCstats doesn't reveal which were being crunched by which processor.

We don't care about apps though, just the general performance.

> The Cell processors are basically a cross between a CPU and a GPU, so they don't make for good comparisons. You certainly can't plug an NVidia GPU into one, so it's irrelevant for here.

It may be irrelevant for GPUGRID, but it's certainly relevant for judging processing performance.

> The credit/h at WCG can vary by >10% depending on what CPU you have and what project you crunch for. Historically WCG has usually had 6 or 7 active projects. At present it's closer to two projects. Their apps and relative contributions to projects vary significantly.

Agreed, but the statistics do have historical significance: they are derived from processing all kinds of tasks that have ever existed in WCG, by all types of CPU that have come and gone in the lifetime of WCG. Obviously, not all combinations are recorded, since some CPUs didn't exist when some tasks were available and vice-versa, but that's why I've only considered the top 20: to isolate the best performers (which should include newer chips), or the most efficient task-CPU combinations.

> The results are for 'logical cores'. So an i7 will be seen as having 8 processors, while an Athlon X2 265 will be seen as two processors. Each core of the Athlon may get 0.03 credits per second, but the processor gets 0.06 credits per second as a whole. An i7-3970X gets 0.019585 credits per thread, so 0.15668 credits/second for the entire processor (~2.5 times more than the X2 265).

Are you sure that the reported average credit per CPU second is per thread? If it is, then the lists have to change to reflect the actual whole-CPU performance.

> The only accurate way to measure performance is to base it on run time at reference/stock speeds per project. Then compile a list of results to show what the relative performances of each CPU (not core/thread) are like. After that you can look at performance/Watt, and then performance/system-Watt.

In an ideal world, yes!

> Ultimately, if you want to crunch on a GPU, it's better to spend the money on the GPU(s) than the CPU.

Agreed, but I guess we're almost all CPU crunchers as well, so it's not an out-of-scope discussion. | |
ID: 31153 | Rating: 0 | rate: / Reply Quote | |
> Are you sure that the reported average credit per CPU second is per thread? If it is, then the lists have to change to reflect the actual whole-CPU performance.

It looks fairly obvious to me; otherwise the chart is complete nonsense. For example:

3 AMD Athlon(tm) II X2 265 Processor 11 985,829.86 8.03 89,620.90 0.73 0.029980
...
21 AMD Athlon(tm) II X4 645 Processor 27 2,258,548.79 3,311.85 83,649.96 122.66 0.021578

We are probably looking at an X4 645 being 44% faster than an overclocked X2 265. | |
ID: 31154 | Rating: 0 | rate: / Reply Quote | |
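That per-thread-to-whole-CPU correction is just a multiplication by the core count; checking the 44% claim against the two rows quoted above:

```python
# BOINCstats' "average credit per CPU second" appears to be per thread,
# so multiply by the core count to compare whole CPUs.
x2_265 = 0.029980 * 2   # Athlon II X2 265: 2 cores
x4_645 = 0.021578 * 4   # Athlon II X4 645: 4 cores
print(f"X4 645 / X2 265 = {x4_645 / x2_265:.2f}x")  # ~1.44x -> the quoted 44%
```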
Vagelis, sorry you went to all this work, but these charts are obviously useless. They contain info from various apps over the years that are no longer used, from Cell and GPU clients, and from apps that once had much higher credit awards. In some cases in the past, cheating was commonplace and rampant. For instance, a Core2 host in SETI or WCG running a GPU would score much higher than an i7 without a GPU. These charts include those cases. There is no useful information to be gained from them. Even a cursory glance reveals that they make no sense. Sorry... | |
ID: 31163 | Rating: 0 | rate: / Reply Quote | |
I thought I'd replace the liquid cooler in the Alienware, but the 3 I have don't fit. Height is not a problem, it is the width. There is little room, so a round pump fits nicely; it seems they have thought about that :) Well, then new thermal paste; the old stuff seems to have been applied sparingly. It is now cool when running the 2 GPUs only, but quickly rises to 83°C when crunching on 6 CPU cores. I now run one at a time (at 75°C), and when that's finished I will once more apply another paste (the best I can get from where I work). | |
ID: 31166 | Rating: 0 | rate: / Reply Quote | |
WUProp@home has more meaningful results for both CPU projects and GPU projects: | |
ID: 31167 | Rating: 0 | rate: / Reply Quote | |
> But I need (want) a new system soon and perhaps I can make a deal with the lady. However, a question, I guess for Beyond (he will be smiling). I found an Asus Sabertooth 990FX R2 and an FX8350, together less than an i7 at 3.5GHz!

I would think that CPU & MB would work with most brand-name DDR3 modules. Here's some info that might help:
http://www.tomshardware.com/answers/id-1658052/memory-asus-sabertooth-990fx-crosshair-formula.html
http://www.tomshardware.com/forum/361359-28-best-32gb-ddr3-8350-asus-sabertooth-990fx
http://forums.amd.com/game/messageview.cfm?catid=446&threadid=163493&forumid=11
http://support.amd.com/us/kbarticles/Pages/ddr3memoryfrequencyguide.aspx

I'd look into low voltage 1.35v DDR3 to lower power usage and temps a bit. From what I understand, the FX8350 memory controller can handle memory voltages down to 1.2v. BTW, nice system! | |
ID: 31168 | Rating: 0 | rate: / Reply Quote | |
> I found Kingston Blue memory, 2x 8GB at 1600MHz. That works on the MOBO, but I cannot find on AMD's site whether it works with the CPU. Will it?

Yes, it works perfectly. I have all 990FX chipsets with Kingston HyperX 1600 memory and no issues. If you get that system, PM me or Beyond and we can help you with the BIOS settings (hope you don't mind me volunteering you, Beyond ;). The FX8350 will work with memory up to 1866 natively, and maybe higher with certain BIOS updates, and if you can locate a Sabertooth Gen3 R2.0 you get PCIe 3.0 (not much use yet). | |
ID: 31171 | Rating: 0 | rate: / Reply Quote | |
Thanks, I will do that, but I think it will take a few months before the new system is ready to build. | |
ID: 31172 | Rating: 0 | rate: / Reply Quote | |
While checking a few things I saw that the T7400 has 7 slots, but only 2 of the PCIe slots that would be needed, and with no space in between. Thus this large case is small after all, and my mis-buy of the year. It is pointless to install a new PSU in it, so I will consider it a loss. | |
ID: 31174 | Rating: 0 | rate: / Reply Quote | |
> @Beyond, all I can find is Kingston 1.6 or 1.65V memory. It is possible that not everything can be obtained in the Netherlands. We have dozens of shops, and each has a little from a lot of brands, with a large price span. Sometimes I find stuff in Germany, but not all shops ship abroad. From the US it's possible, but very expensive due to transport and customs, as I have experienced in the past. Perhaps I can place a memory cooler as well. I saw some things from Zalman, to adjust all the fans from a front panel.

I would really try to find at least standard 1.5v memory. The 1.35v stuff is getting more common, but maybe it's not there yet. As far as GPU spacing goes, you need two empty slots between the GPUs. That will give you a decent air space when using double-slot cards. The Sabertooth you mention has proper spacing for 2 cards. | |
ID: 31176 | Rating: 0 | rate: / Reply Quote | |
> Vagelis, sorry you went to all this work, but these charts are obviously useless. They contain info from various apps over the years that are no longer used, from Cell and GPU clients, and from apps that once had much higher credit awards. In some cases in the past, cheating was commonplace and rampant. For instance, a Core2 host in SETI or WCG running a GPU would score much higher than an i7 without a GPU. These charts include those cases. There is no useful information to be gained from them. Even a cursory glance reveals that they make no sense. Sorry...

Don't be sorry, it wasn't much work really. Besides, I believe you're wrong. What if credit rates have changed or some people were cheating? I believe it's pretty easy to single out dubious results, like that Core2 or the old Athlon X2, and come up with valid conclusions. That's why I chose the top 20, so we have enough CPUs to work with, filtering out the "noise". I went through the Yoyo list again, multiplying the single-thread credit per CPU second (indicated by skgiven, thanks!) by the number of cores / threads in each CPU. I also removed the PS3/Cell entries. So, here's the list for Yoyo:

Intel(R) Xeon(R) CPU L5638 @ 2.00GHz
Intel(R) Xeon(R) CPU E3-1275 V2 @ 3.50GHz
Intel(R) Core(tm) i5-2405S CPU @ 2.50GHz
Intel(R) Core(tm) i5-2550K CPU @ 3.40GHz
Intel(R) Core(tm) i5-3570K CPU @ 3.40GHz
Intel(R) Core(tm)2 Extreme CPU X9775 @ 3.20GHz
Intel(R) Core(tm) i5-2500K CPU @ 3.30GHz
AMD Phenom(tm) II X4 B50 Processor
Intel(R) Xeon(R) CPU E31220 @ 3.10GHz
Intel(R) Core(tm) i5-2500 CPU @ 3.30GHz
AMD Phenom(tm) II X4 B95 Processor
Intel(R) Core(tm) i5-3470S CPU @ 2.90GHz
AMD Athlon(tm) II X2 265 Processor
AMD Phenom(tm) II X2 521 Processor
Intel(R) Xeon(R) CPU E5205 @ 1.86GHz
AMD Phenom(tm) II N620 Dual-Core Processor

I didn't go through the WCG and SETI lists, since they can or have utilized GPUs and so the CPU numbers are not correct, as indicated by Beyond (thanks!). Besides, I didn't want Beyond to think I did so much work again! :P

Before you take a look at the Yoyo top hosts list and dismiss the list above, please take this into account: this is statistical data for the whole population of Yoyo@home. Statistics is all about averages, means, medians and all that. By definition, single cases can and do exist outside the statistical domain. Also, we're discussing CPUs here, not hosts. What would happen to Yoyo's top hosts list if somebody with a dual Xeon L5638 system crunched for it?

The results are pretty clear, to me at least: Intel dominates Yoyo! Yes, if you own Yoyo's top or second top host, you may have a different opinion, but try to think about the general case: in general, Intel crunches better!

Finally, maybe some CPUs are overclocked, maybe credit awarding rates have changed over time, maybe some people have cheated, whatever. These discrepancies apply to both Intel and AMD CPUs, and therefore the final results shouldn't be affected, at least not much. | |
ID: 31177 | Rating: 0 | rate: / Reply Quote | |
I found this at Amazon: Kingston KHX16LC9X3K4/16X Arbeitsspeicher 16GB (1600MHz, CL9, 4x 4GB, DIMM 1,35V) DDR3-RAM Kit, so that would be okay. | |
ID: 31178 | Rating: 0 | rate: / Reply Quote | |
Vagelis, your list has the Xeon L5638 @ 2.00GHz as the top CPU, yet it's really very slow. As I said:

> Vagelis, sorry you went to all this work, but these charts are obviously useless. They contain info from various apps over the years that are no longer used, from Cell and GPU clients, and from apps that once had much higher credit awards. In some cases in the past, cheating was commonplace and rampant. For instance, a Core2 host in SETI or WCG running a GPU would score much higher than an i7 without a GPU. These charts include those cases. There is no useful information to be gained from them. Even a cursory glance reveals that they make no sense. Sorry...

This is ridiculous. You can go through all the phony statistical gyrations you want, but it doesn't make it true. Those of us who've run Yoyo for years know which CPUs produce the most. My teammate, who was running all Sandy Bridge Intels, saw how fast my AMDs were running and started switching over to AMD. He's the highest Yoyo producer ever. I'm number two. He's almost completely converted to AMD now.

I admit to being an AMD fan. The cost/performance is better. In my experience they've been much more reliable (I've built hundreds of PCs for local companies and individuals). AMD has been the only thing that has kept Intel honest, both in performance and pricing. Intel has played dirty pool against competitors throughout its history with its FUD, anti-competitive practices, rigged benchmarks, payouts to PC makers to not use AMD, etc, etc, etc. At least post a disclaimer that you're an Intel fanboy.

Here are the all-time leading Yoyo machines. AMD dominates Yoyo (14 of the top 20 single-CPU computers). Yet, as you say, there are MANY more Intels being used. How do you explain (twist) that?
http://www.rechenkraft.net/yoyo//top_hosts.php?sort_by=total_credit | |
ID: 31180 | Rating: 0 | rate: / Reply Quote | |
http://www.agner.org/optimize/blog/read.php?i=49&v=t | |
ID: 31181 | Rating: 0 | rate: / Reply Quote | |
Hi Guys, can we please keep this thread about hardware? That was the reason I started it. Okay CPU is also hardware, but please start a new thread about CPU comparison. Thank you. | |
ID: 31182 | Rating: 0 | rate: / Reply Quote | |
Then the thread should be renamed to "GPU Hardware Questions". So it is legitimate to talk about CPUs in a HARDWARE QUESTIONS thread, in my opinion :P | |
ID: 31183 | Rating: 0 | rate: / Reply Quote | |
> Hi Guys, can we please keep this thread about hardware? That was the reason I started it. Okay, CPUs are also hardware, but please start a new thread about CPU comparison. Thank you.

TJ, when you asked the magical question: "What CPU would you suggest?" all heck broke loose :-)

> And to the hardware. I have used other heat paste on the Alienware. It now has one GTX660, running at 93% load at 74°C and 74% fan speed (the maximum). Perhaps too thick? The stuff flowed out nice and smooth by itself.

Something is wrong; is the HS/fan seated properly? I'd drop the CPU WUs until you get it worked out. You could also try opening the case and aiming a fan at it for now.

> The other GTX660 is in the other box (the one with all the problems with the first card) and is doing a short run now. It seems to have the same problems as before. Kernel times are almost the same as CPU usage. So when it's finished, the GTX660 goes in the box and I'll put the GTX285 back in to do some MilkyWay.

With NV 6xx series GPUs the CPU time is supposed to be almost the same as the GPU time. It's hard to say more when your computers are hidden... | |
ID: 31184 | Rating: 0 | rate: / Reply Quote | |
> TJ, when you asked the magical question: "What CPU would you suggest?" all heck broke loose :-)

I know I did, but that is sorted out now. The Sabertooth is on its way, so... But a long discussion about statistics, comparing things with each other that are not comparable, makes the thread long. Mercedes and Audi aren't comparable either.

> Something is wrong; is the HS/fan seated properly?

Yes, it's well seated, and I have already set a fan beside it. And I found a small cooler; I will order it and hope it fits. | |
ID: 31185 | Rating: 0 | rate: / Reply Quote | |
> Hi Guys, can we please keep this thread about hardware? That was the reason I started it. Okay, CPUs are also hardware, but please start a new thread about CPU comparison. Thank you.

I agree, there is probably a bit too much about CPUs in this thread already, and I get the feeling plenty of people want a better place to discuss CPUs further, so I started a new thread: CPU Comparisons - general open discussion :) | |
ID: 31193 | Rating: 0 | rate: / Reply Quote | |
A question for system builders. | |
ID: 31216 | Rating: 0 | rate: / Reply Quote | |
> A question for system builders.

I've read the same stuff and tried a lot of ways myself. What works for me is to apply a thin layer to the whole surface of the CPU, then put a rice-sized drop in the middle before attaching the HS/fan. Sometimes it takes a few days before optimum temps are reached, but it should be close to target from the start. If not, the HS/fan is probably not seated well. | |
ID: 31221 | Rating: 0 | rate: / Reply Quote | |
> What works for me is to apply a thin layer to the whole surface of the CPU, then put a rice-sized drop in the middle before attaching the HS/fan.

Except for the rice-sized drop, we do it the same. I have got stuff from the cleanroom at the university, where they use it on chips they make themselves. However, I forgot a liquid to clean off the old grease. I have used white spirit in the past, but that is not good, if I have read correctly. Personally, I don't believe in all the special products that are sold for it: expensive versions of "normal" chemicals found around the house. What do you use, Beyond?

P.S. The Sabertooth arrived, nice board. | |
ID: 31224 | Rating: 0 | rate: / Reply Quote | |
> What do you use, Beyond?

99% isopropyl alcohol if I can find it, 91% if I can't. | |
ID: 31227 | Rating: 0 | rate: / Reply Quote | |
Almost sounds like you ran out of water in that liquid cooler :D | |
ID: 31229 | Rating: 0 | rate: / Reply Quote | |
> Then once you place the heat sink, press and twist it lightly back and forth. This will distribute the paste just fine.

Hey, I was going to say that too, but I forgot :-) | |
ID: 31230 | Rating: 0 | rate: / Reply Quote | |
The biggest mistake people make when applying heatsink compound is using too much. The compound layer needs to be as thin as possible, to conduct the heat from one solid to the other, but it should cover as much of the CPU and heatsink as possible (ideally all of it). Obviously, heatsinks that don't cover all of the processor are of lesser design. | |
ID: 31237 | Rating: 0 | rate: / Reply Quote | |
Thank you all guys, this is very useful information. | |
ID: 31239 | Rating: 0 | rate: / Reply Quote | |
Here's a video I found VERY helpful with applying thermal paste: https://www.youtube.com/watch?v=EyXLu1Ms-q4 | |
ID: 31243 | Rating: 0 | rate: / Reply Quote | |
I don't know which engineer invented the push pins for mounting a Shuriken B, but they absolutely do not work. The pins sit under the cooling body, with hardly any space! | |
ID: 31256 | Rating: 0 | rate: / Reply Quote | |
> Here's a video I found VERY helpful with applying thermal paste: https://www.youtube.com/watch?v=EyXLu1Ms-q4

Nice video, thanks. That's why I always put a rice-sized drop in the middle along with the thin spread. I have also tried the cross and line methods, as well as the dab-in-the-middle-only method. I seem to get the most consistent results with the thin layer plus a dab in the middle, but my testing has been pretty subjective. The sheet of glass test is an interesting idea. | |
ID: 31258 | Rating: 0 | rate: / Reply Quote | |
Well skgiven, you are right again; I am going mad... | |
ID: 31260 | Rating: 0 | rate: / Reply Quote | |
> Here's a video I found VERY helpful with applying thermal paste: https://www.youtube.com/watch?v=EyXLu1Ms-q4

Yeah, great catch Vagelis. The glass plate indeed shows what happened. I had it smeared like that, and perhaps there were air bubbles between the CPU and the water cooler. I'll try three lines, then press, and take the cooler off again to see how evenly it spread. I have paste for two attempts, so it will just work, or not... | |
ID: 31261 | Rating: 0 | rate: / Reply Quote | |
TJ, I really do hope your adventure has a good ending! | |
ID: 31262 | Rating: 0 | rate: / Reply Quote | |
> Nice video, thanks. That's why I always put a rice-sized drop in the middle along with the thin spread. I have also tried the cross and line methods, as well as the dab-in-the-middle-only method. I seem to get the most consistent results with the thin layer plus a dab in the middle, but my testing has been pretty subjective. The sheet of glass test is an interesting idea.

The sheet of glass thing is a GREAT idea; it made me wonder how come I never thought of it! | |
ID: 31263 | Rating: 0 | rate: / Reply Quote | |
Oh dear TJ, that sounds like a very unhappy exercise! | |
ID: 31267 | Rating: 0 | rate: / Reply Quote | |
Eventually I got it working. Mom taught me to always keep calm, whatever happens: let it rest for a while, do something else, and then carry on until you get it done. | |
ID: 31271 | Rating: 0 | rate: / Reply Quote | |
The GTX 550Ti ran Milkyway at a 974MHz clock speed. When I let it run GG, the Nathan LR failed quickly. So I thought, let's try an SR. The Santi keeps running, but at a 405MHz clock speed at 98% load. So something happened with the card, as it down-clocked again. Is driver 314.22 the cause? 8% in 2 hours for an SR. | |
ID: 31272 | Rating: 0 | rate: / Reply Quote | |
Before I went to bed I installed driver 314.07 on the quad again and rebooted it (Vista x86). The GPU clock ran at 974MHz, but GG LRs and SRs fail quickly. I turned to Einstein@Home, and that seems to run; expected time 5 hours. | |
ID: 31274 | Rating: 0 | rate: / Reply Quote | |
> Oh dear TJ, that sounds like a very unhappy exercise!

Hello ETA, no, I didn't take it out. There is hardly any space, so I would have had to take everything out, and there are a lot of cables and sensors attached. And it was not possible to remove the other side panel, which would have made it easier. It is mounted now, but I don't think it cools enough: an Intel box cooler. I can try a new water cooler later this year. | |
ID: 31275 | Rating: 0 | rate: / Reply Quote | |
Regarding your GTX550Ti: MW can take quite a beating, GPU-clock-wise. And especially on nVidias it's using only a tiny fraction of their hardware (hence it's quite slow on most nVidias), generating not much heat. I'm not surprised the card can take higher clocks at MW than at GG. | |
ID: 31276 | Rating: 0 | rate: / Reply Quote | |
Indeed, I rebooted the quad with the 550Ti (VENUS) and it is at 974MHz again, doing Einstein. I will try GG when those are finished. And yes, MW ran cool on it; it seems about 150 seconds faster than on the 660? But that could always be a slight change in the type of WU. | |
ID: 31277 | Rating: 0 | rate: / Reply Quote | |
One more thing (for now): the Einstein WU stopped, and hovering with the mouse shows this message: "Not enough free CPU/GPU memory available! Waiting for 15 minutes." | |
ID: 31278 | Rating: 0 | rate: / Reply Quote | |
There's a BOINC setting "use at most xx% of memory", which might be set too low by default (I think you recently had some problem with the settings being reset, didn't you?). And if your GPU is idle (i.e. not enough memory to run any of them) then it's actually OK that it's clocked down. But then the clock speed should go up as soon as there's a load again. | |
ID: 31283 | Rating: 0 | rate: / Reply Quote | |
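For reference, that preference can also be pinned locally with a global_prefs_override.xml file in the BOINC data directory, then loaded via BOINC Manager (Advanced > Read local prefs file). A sketch - the tag names below are from memory, so verify them against your client's own global_prefs.xml before relying on this:

```xml
<!-- global_prefs_override.xml: local override of the memory limits.
     Values are fractions of total RAM; tag names assumed, not verified. -->
<global_preferences>
   <ram_max_used_busy_frac>0.75</ram_max_used_busy_frac>
   <ram_max_used_idle_frac>0.90</ram_max_used_idle_frac>
</global_preferences>
```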
Thanks ETA, that is great info, I didn't know. | |
ID: 31284 | Rating: 0 | rate: / Reply Quote | |
> Regarding your GTX550Ti: MW can take quite a beating, GPU-clock-wise. And especially on nVidias it's using only a tiny fraction of their hardware (hence it's quite slow on most nVidias), generating not much heat. I'm not surprised the card can take higher clocks at MW than at GG.

Not long ago I was down to 4 NVidias and was seriously considering dumping those. At most GPU projects the ATI/AMD cards are simply much more powerful and efficient. Then I came back to GPUGRID to give it another try. Now I'm working back up to a 50/50 ATI(AMD)/NVidia ratio again. The programming ability here is questionable (IMO), but the scientific results are compelling. I just wish they would ask for some help in getting the OpenCL app working. People can't be experts at everything.

I've been seeing horror stories about the recent NV drivers and it's kind of strange: it used to be that NV drivers were solid and ATI drivers questionable. Lately I've had luck installing the latest ATI drivers and holding back on NV. To make a long ramble a bit shorter: I've stuck with 310.90 for NV and am having no particular problems. | |
ID: 31294 | Rating: 0 | rate: / Reply Quote | |
Instead of trying to run the 550Ti @974MHz I would set it manually to reference values of 900MHz for the GPU and 4104MHz for the GDDR5. | |
ID: 31296 | Rating: 0 | rate: / Reply Quote | |
Regarding your GTX550Ti: MW can take quite a beating, GPU-clock-wise. And especially on nVidias it's using only a tiny fraction of their hardware (hence it's quite slow on most nVidias), generating not much heat. I'm not surprised the card can take higher clocks at MW than at GG. I had the opposite with Milkyway: two AMD HD5870s running with one failure in every 30 WUs, after Doc T. made a change in the app. But according to the forums it was at my end. Then I set the old workhorse GTX285 on the project, thus not using the OpenCL app, and got 600 WUs in a row without failures. With Einstein and Albert there are also a few OpenCL WUs that don't validate; with CUDA only one failure, my own fault, as I did not suspend before a restart. ____________ Greetings from TJ | |
ID: 31303 | Rating: 0 | rate: / Reply Quote | |
Instead of trying to run the 550Ti @974MHz I would set it manually to reference values of 900MHz for the GPU and 4104MHz for the GDDR5. Thanks, I will try that right away. Up to the warm attic. ____________ Greetings from TJ | |
ID: 31304 | Rating: 0 | rate: / Reply Quote | |
I had the opposite with Milkyway: two AMD HD5870s running with one failure in every 30 WUs, after Doc T. made a change in the app. But according to the forums it was at my end. Then I set the old workhorse GTX285 on the project, thus not using the OpenCL app, and got 600 WUs in a row without failures. With Einstein and Albert there are also a few OpenCL WUs that don't validate; with CUDA only one failure, my own fault, as I did not suspend before a restart. If you had better success on MW with the GTX 285 than with your HD 5870, you had a major setup problem. I had virtually no failures on MW. I don't run it anymore since hitting my 500,000,000 credit target. The HD 5870 is (at least was when I was running it) so much faster and more efficient than the GTX 285 at DP that it's not even a comparison. | |
ID: 31308 | Rating: 0 | rate: / Reply Quote | |
I had the opposite with Milkyway: two AMD HD5870s running with one failure in every 30 WUs, after Doc T. made a change in the app. But according to the forums it was at my end. Then I set the old workhorse GTX285 on the project, thus not using the OpenCL app, and got 600 WUs in a row without failures. With Einstein and Albert there are also a few OpenCL WUs that don't validate; with CUDA only one failure, my own fault, as I did not suspend before a restart. Yes, it still is almost 10 times faster at Milkyway. Due to the short runtimes it's a great project to experiment with. By the way, it's not only me: I checked and saw a lot of validation errors even on 6xxx and 7xxx AMD cards. ____________ Greetings from TJ | |
ID: 31314 | Rating: 0 | rate: / Reply Quote | |
Instead of trying to run the 550Ti @974MHz I would set it manually to reference values of 900MHz for the GPU and 4104MHz for the GDDR5. Well, I removed everything from EVGA and nVidia and installed the latest drivers. Now the Einstein WU is finishing with the GPU clock steady at 951MHz and a temperature of 66°C. Kernel times are low now as well. I'll see tomorrow how it went. Speccy shows higher temperatures than Core Temp, but as I said earlier, all these programs vary in their readings. Still, a CPU at around 50°C is not bad. After a few days of problems, installing and de-installing, I must admit that the old T7400 that we had our doubts about does well with the GTX660 at 65-66°C and a steady clock. The two Xeons keep quite cool at 50-65°C (passive coolers, with a fan a little distance from the two massive aluminum blocks), at 360-370 Watt in total. I'm becoming attached to this system ;-) ____________ Greetings from TJ | |
ID: 31315 | Rating: 0 | rate: / Reply Quote | |
My three CUDA-capable GPUs are all crunching for GG, so I am happy. However, now the weather is turning hot, with rising ambient temperatures :( | |
ID: 31322 | Rating: 0 | rate: / Reply Quote | |
If a CPU cooler fits on an AM3 socket, will it fit on an AM3+ as well? | |
ID: 31323 | Rating: 0 | rate: / Reply Quote | |
If a CPU cooler fits on an AM3 socket, will it fit on an AM3+ as well? Yep, same mounting system. | |
ID: 31324 | Rating: 0 | rate: / Reply Quote | |
My last idea for today. I want to contribute to GG with more than the GTX285, so I bought a GTX660, but it didn't work out well. The story is known. | |
ID: 31344 | Rating: 0 | rate: / Reply Quote | |
That's more of a put-off than a solution. The best solution IMO is to get rid of your old power-hungry hardware and replace it with a cheap system (basic CPU, motherboard with two PCIE slots) and kit it out with a good PSU and whatever GPU(s) you want. I don't think there are going to be many new NVidia cards any time soon (possibly a few revisions). While i3 4000-series Intel and AMD Kaveris are likely before the end of the year, these will be low- to mid-range (sitting on the fence) CPUs. You have to decide if you want to buy a fairly power-hungry 8-core AMD, invest heavily in an Intel CPU, or neither. If neither appeals to you, just forget about CPU crunching and concentrate on where it's at: the GPU. A basic CPU and motherboard with a good GPU will be cheap to buy, cheap to run, do lots of good work and get you lots of BOINC credits. | |
ID: 31345 | Rating: 0 | rate: / Reply Quote | |
That makes a lot of sense. Especially since you'd "lose" 2 threads feeding the GPUs at GPU-Grid anyway, so there's less benefit in making those cores fast. | |
ID: 31357 | Rating: 0 | rate: / Reply Quote | |
With temperatures rising I need to power down my rigs. But I got a GTX770 today and wanted to put it in the i7 (with the faulty XFX MOBO). After a lot of installing and booting it worked - better than with the 660 from EVGA, which didn't work. This one is from ASUS. Not my first choice of brand, but I thought it might work in the old rig. I must say I like the software that comes with it. You can see everything, rearrange the display, and change settings, e.g. clock speed and fan. | |
ID: 31603 | Rating: 0 | rate: / Reply Quote | |
It's not working properly either. The 770 in the i7 (with the XFX MOBO) has done 52% in 8.5 hours. That can't be right, seeing the results of other 770s. I will let it finish and then put it in the T7400, and will consider the XFX PC as scrap; I can use some parts of it later. | |
ID: 31615 | Rating: 0 | rate: / Reply Quote | |
Well, the 8-pin power plug on a Dell T7400 is not the same as an 8-pin power plug from an EVGA PSU. It is white and is fitted on the same wires as the 6-pin power plug for the GPU. The other 6-pin plug does not have this extra plug. | |
ID: 31635 | Rating: 0 | rate: / Reply Quote | |
Provided the PSU can support the card, can't you use a molex-to-PCIe adaptor? There must be a spare molex or two dangling in there. | |
ID: 31647 | Rating: 0 | rate: / Reply Quote | |
Provided the PSU can support the card, can't you use a molex-to-PCIe adaptor? There must be a spare molex or two dangling in there. Those adapters can cause more problems in the long term than they can solve straight off. | |
ID: 31649 | Rating: 0 | rate: / Reply Quote | |
Provided the PSU can support the card, can't you use a molex-to-PCIe adaptor? There must be a spare molex or two dangling in there. Moreover, there is only one free. I can replace the PSU, but I am building a new system earlier than planned, so that is not an option right away. ____________ Greetings from TJ | |
ID: 31659 | Rating: 0 | rate: / Reply Quote | |
Hello, instead of heat-conducting paste I found liquid metal pads. This is their description: | |
ID: 32356 | Rating: 0 | rate: / Reply Quote | |
Regular paste can't beat liquid metal for heat conductivity. Just be careful to clean the interfaces properly before applying (otherwise it will not wet the surface properly; rubbing with acetone is enough to remove any organic surface contamination) and beware of drops floating anywhere in your PC. That happened to me once, but it was while applying the stuff directly to an insufficiently cleaned surface, not with a pad. Luckily, in the same way it did not wet my heat sink, it didn't wet the mainboard either, so it formed a bubble on my mainboard and I could just turn the PC upside down to get it out again... shouldn't happen with a pad :D | |
ID: 32412 | Rating: 0 | rate: / Reply Quote | |
Thanks for the input ETA. | |
ID: 32419 | Rating: 0 | rate: / Reply Quote | |
Another question from me. | |
ID: 33256 | Rating: 0 | rate: / Reply Quote | |
I got the PSU out and opened it. It smells a bit and most parts still feel a bit warm after approx. 2 hours, but there is no black blistering that looks like burn damage. | |
ID: 33257 | Rating: 0 | rate: / Reply Quote | |
One PC, a Dell xps480, has been running 24/7 for 5 years now. Last week when I looked at it, the log-on screen was visible, so it had rebooted itself. I logged in, and a few hours later the same. After a new log-in it worked for 6 days, then overnight it rebooted itself again... These are the typical symptoms of a faulty PSU (or an overheating CPU/MB). ...and when I logged in, after about 15 minutes a sharp bang and power off in my attic. It seems two fuses went off, and the ground fault circuit breaker as well. The final sharp bang is quite a confirmation that the PSU has failed. The PC smells a bit burnt, especially in the area of the PSU. The smell of burning was caused by the high current going through a failing semiconductor (typically the switching FET, or the rectifier, or both). This PSU won't work until it's taken apart and repaired. Even trying to switch it on again could be dangerous. I checked Who Crashed and Event Explorer but found no indication of any sort. It's easier to smell than to see. The semiconductors are smoke powered: when this magic smoke comes out of a semiconductor (you can tell by its smell), it won't work anymore. Could this bang with the power cut have damaged other hardware in the PC? It could, but it's not typical. The OCP (Over Current Protection) feature of the PSU should prevent such damage. However, if you try to turn on the failed PSU again, there is a greater risk of burning more parts in it, or in the PC. I will of course take the PSU out and open it to see if I can recognize anything. Sometimes you can see brown burn marks around the parts mounted on the PCB, but the rectifiers and the FETs are mounted on a heatsink, and usually you won't see anything suspicious on them; sometimes, though, their casing could be cracked, or a crater-shaped piece could be missing. I got the PSU out and opened it. It smells a bit and most parts still feel a bit warm after approx. 2 hours, but there is no black blistering that looks like burn damage. The smell is the sign of the burn; it can be smelt after days. The rest of the components look okay to me. I'll bring this PSU to an electrical shop to have it tested for me. They should test it strictly with a dummy load. But it's wise to test it only after all of the high-current semiconductors have been checked and the failed ones replaced, and that process could cost more than a new PSU. One other thing: when I started the other PC again, a GPUGRID SR that was more than 50% finished started from zero again. This does not normally happen after a reboot, even without suspending the WU first. That's typical when the files containing the checkpoint weren't written to the disk correctly (because of the power failure). | |
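That failure mode is easy to picture: if a checkpoint file is overwritten in place and the power dies mid-write, the file is left half-written and the task has to restart from zero. A minimal sketch of the usual defence, write-to-temp-then-rename (an illustration only, not GPUGRID's actual code):

    import json
    import os

    def save_checkpoint(state, path="checkpoint.json"):
        # Write to a temporary file first, so the old checkpoint
        # stays intact if the power dies mid-write.
        tmp = path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(state, f)
            f.flush()
            os.fsync(f.fileno())  # force the data onto the disk
        # Atomic swap: after this either the old or the new
        # checkpoint exists, never a half-written one.
        os.replace(tmp, path)

    save_checkpoint({"step": 123456, "progress": 0.52})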
ID: 33258 | Rating: 0 | rate: / Reply Quote | |
Thanks Zoltan, | |
ID: 33259 | Rating: 0 | rate: / Reply Quote | |
I cannot see anything, but the smell is obvious. I didn't see smoke or any damage to components. I will look again tomorrow in the sunlight. Sometimes it's much easier to see the damage in macro photographs (but usually you have to take the parts out of the PSU to take these photos). The GTX550Ti was in a PCIe x16 slot rated for 75 Watt; that is what is printed on the MOBO. Could that be the cause of the problem? That the GPU uses more than the 75 Watt? No. The GTX550Ti has a PCIe power connector, and this can supply the additional power. It's as simple as this: your PSU has reached the end of its lifetime. | |
ID: 33261 | Rating: 0 | rate: / Reply Quote | |
Good advice from Zoltan. Sounds like a failed PSU to me too. If it smells burnt, you should replace it. I would not even consider attempting to repair a PSU. Anything else is fair game, but not the PSU. If it's under warranty and worth the bother, RMA it; otherwise bin it. It's really not worth the time, money or risk to repair a failed PSU yourself. The PSU might have a lot of unseen damage that could surface later on, cause problems and take out more hardware - when a PSU fails it can cause other hardware failures (motherboard, anything in a PCIE slot, RAM, disk drives). The best PSUs just blow a fuse (and come with a spare); the worst are firecrackers that make everything attached flare up. | |
ID: 33262 | Rating: 0 | rate: / Reply Quote | |
I mostly agree with Zoltan and SK here - except for the part about throwing the broken PSU in the bin. It belongs in the electronics bin! Well, at least that's what we have in Germany.. ;) | |
ID: 33267 | Rating: 0 | rate: / Reply Quote | |
I mostly agree with Zoltan and SK here - except for the part about throwing the broken PSU in the bin. It belongs in the electronics bin! Well, at least that's what we have in Germany.. ;) Same here in California; in fact, they charge us a disposal fee when we buy certain computer components, and we don't have a landfill in Tuolumne County. Our refuse is trucked to Nevada, which doesn't charge fees for computer component disposal; it's a racket (especially if you don't save your original receipt). As for your power supply, they can be very dangerous if those large capacitors still hold a charge, they can even kill, so be very careful. I have a newer digital PSU tester I picked up for $20.00 US; not a bad idea if you're going to have several machines crunching. Here's one at Amazon; it's more expensive and I don't know why it's twice the price it goes for at Newegg. http://www.amazon.com/Silver-APEVIA-Supply-Tester-Aluminum/dp/B009D514I0/ref=sr_1_fkmr0_1?ie=UTF8&qid=1380488686&sr=8-1-fkmr0&keywords=Apevia+Power+Supply+Tester+PST+03 Anyway, just to give you an idea what they look like and how they work; I use mine on a monthly basis. | |
ID: 33270 | Rating: 0 | rate: / Reply Quote | |
Thanks for all the information guys. | |
ID: 33288 | Rating: 0 | rate: / Reply Quote | |
One question flashawk: testing a PSU with that device, would that involve bringing power to the PSU? | |
ID: 33289 | Rating: 0 | rate: / Reply Quote | |
And I will not bring power to it any more, as it switched off two fuses in the electricity meter cupboard (is it called that way in English?) when it broke. Here in the States we call those breaker boxes or circuit breaker boxes; they have digital meters that put out a wireless signal for the meter readers, who don't even get out of their cars. To test the power supply with one of those testers, yes, you would need to plug it in. I remember that Dell computers had their own proprietary PSUs, and an industry-standard PSU would plug into the Dell motherboard and fry it. Those were of the old 20-pin type if I remember right. | |
ID: 33293 | Rating: 0 | rate: / Reply Quote | |
I remember that Dell computers had their own proprietary PSUs, and an industry-standard PSU would plug into the Dell motherboard and fry it. Those were of the old 20-pin type if I remember right. That would not be nice :( Indeed, Dell does everything differently from others; the fan plugs are very flat as well, so a "normal" one would not fit. This Dell has a 24-pin. We'll see; I get a PSU today and will build it in, and let you know how it works. Thanks for the warning! ____________ Greetings from TJ | |
ID: 33297 | Rating: 0 | rate: / Reply Quote | |
Well, as promised, I'll let you know how it went. | |
ID: 33318 | Rating: 0 | rate: / Reply Quote | |
50 W less? Wow, that old PSU must have been really old and/or really crappy! The new one should pay for itself quickly, depending on how long the machine still runs 24/7 :) | |
ID: 33335 | Rating: 0 | rate: / Reply Quote | |
Hello, another question from me. | |
ID: 34197 | Rating: 0 | rate: / Reply Quote | |
It depends on the other components in your system too, not just the GPUs. | |
ID: 34198 | Rating: 0 | rate: / Reply Quote | |
Your GTX660's each have a 140W TDP, and your AMD FX(tm)-8350 has a TDP of 125W. | |
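For a rough idea of what that adds up to at the wall, here's a back-of-envelope sketch; the board/RAM/drives figure and the PSU efficiency are assumptions, not measurements:

    # Rough power budget for a 2 x GTX660 + FX-8350 box.
    gpu_tdp_w = 140        # per GTX 660 (vendor TDP)
    cpu_tdp_w = 125        # FX-8350 (vendor TDP)
    rest_w = 60            # board, RAM, drives, fans - a guess
    psu_efficiency = 0.85  # assumed, typical 80+ Bronze near half load

    dc_load_w = 2 * gpu_tdp_w + cpu_tdp_w + rest_w
    wall_w = dc_load_w / psu_efficiency
    print(dc_load_w, round(wall_w))  # 465 W DC, ~547 W at the wall, worst case

Crunching rarely pulls full TDP from every part at once, so the real wall draw will usually sit below that worst case.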
ID: 34202 | Rating: 0 | rate: / Reply Quote | |
I have checked the power use of the system when I put the two 660's in, and it is using 350-458W, depending on whether the two GPUs are busy and 4 Rosetta WUs are running on the CPU. | |
ID: 34203 | Rating: 0 | rate: / Reply Quote | |
Might be some variation in that EVGA range but I did see one that is 90% efficient. The EVGA is likely to be 5% more efficient than the Cooler Master, as the EVGA is a higher end PSU (Gold rather than Bronze and with more power headroom). | |
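To put a number on that 5%, a quick sketch with assumed values (400 W DC load, running 24/7, EUR 0.22 per kWh - adjust to your own tariff):

    # Yearly cost difference between an 85% and a 90% efficient PSU.
    dc_load_w = 400
    hours = 24 * 365
    price_per_kwh = 0.22  # EUR, an assumption

    def wall_kwh(efficiency):
        return dc_load_w / efficiency * hours / 1000.0

    saved = wall_kwh(0.85) - wall_kwh(0.90)
    print(round(saved), round(saved * price_per_kwh))  # ~229 kWh, ~EUR 50 a year

So on a 24/7 cruncher a better PSU can pay back a fair chunk of its price difference every year.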
ID: 34258 | Rating: 0 | rate: / Reply Quote | |
Yes thanks skgiven. | |
ID: 34259 | Rating: 0 | rate: / Reply Quote | |
Coincidentally I was at this PC. | |
ID: 34266 | Rating: 0 | rate: / Reply Quote | |
That 80+ Gold PSU is a better fit for that system - it saves you money and operates closer to peak efficiency (which is at around 50% load). You can check with your power meter; you should see less power draw under a comparable load now. | |
ID: 34305 | Rating: 0 | rate: / Reply Quote | |
Hello ETA, you have a very good memory. Indeed, I always have problems with my CPU cooling. | |
ID: 34309 | Rating: 0 | rate: / Reply Quote | |
I've noticed differences in reported temperatures from different programs too. There is no single way to read the temperature sensors. It's not like looking at a thermometer and reading a number off a scale. Different programs use different methods to read the temperature, therefore you get a different report from each one. It's not because the sensors are broken or inaccurate, or that the software is broken. It's due to different software developers having different philosophies about how the sensors should be read and reported. Take an average of 3 or 4 readings and relax. Or err on the side of caution and watch only the one that gives the highest reading. If none of them are reporting excessive temperature then that's a good sign, because it means no matter which standard you use you're in the green. At least they all agree on that point. | |
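A tiny sketch of that advice, assuming a Linux box with the psutil package (the "coretemp" label is an assumption; sensor names vary per machine, and sensors_temperatures() is not available on Windows):

    import psutil

    # Read all per-core readings, then report both views mentioned above:
    # the average, and the worst case.
    readings = [t.current for t in psutil.sensors_temperatures().get("coretemp", [])]
    if readings:
        print("average: %.1f C, hottest: %.1f C"
              % (sum(readings) / len(readings), max(readings)))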
ID: 34324 | Rating: 0 | rate: / Reply Quote | |
There's also some variation in which sensors are reported as the CPU temperature. For Intels you can trust CoreTemp: it reads the values directly measured and reported by the cores, the same ones Intel itself uses to set the turbo mode states. Each core has 100+ temperature sensors and reports the hottest one. Some of these are directly at the FPUs, which are hot spots during heavy number crunching. So these are really worst-case numbers and can hence reach 70°C without trouble, with even 80°C still being OK. | |
ID: 34335 | Rating: 0 | rate: / Reply Quote | |
Thanks ETA and Dagorath, I didn't know all of that. But knowing this now, and seeing no temperature higher than 65°C on the i7-4771 while running 4 Rosettas and feeding 2 GPUs, which leaves two threads for me to play with, I am happy. | |
ID: 34336 | Rating: 0 | rate: / Reply Quote | |
Use driver 9.18.10.3071 (dated 19.03.2013), not the current 10.x (which could be Win 8.1 only anyway, I don't know), and a current BOINC, and connect something to an output port of the iGPU (a monitor's VGA cable, a VGA dummy, whatever). Allow "Binary Radio Pulsar Search (Perseus Arm Survey)" (these WUs don't count as GPU WUs on the server) and switch off "run CPU tasks for projects where GPU is available". If it works, run 2 WUs in parallel for a nice throughput boost and you're good to go :) | |
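For the 2-WUs-in-parallel part, one way is an app_config.xml in the Einstein project folder, then "Read config files" in the BOINC Manager. A sketch - the app name below is an assumption, so check client_state.xml or your task names for the real one (Einstein's "GPU utilization factor" project preference can achieve the same):

    <app_config>
       <app>
          <name>einsteinbinary_BRP4</name>
          <gpu_versions>
             <gpu_usage>0.5</gpu_usage>  <!-- two WUs share one GPU -->
             <cpu_usage>0.5</cpu_usage>  <!-- half a core reserved per WU -->
          </gpu_versions>
       </app>
    </app_config>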
ID: 34366 | Rating: 0 | rate: / Reply Quote | |
Thanks for the good advice ETA. Will try in the coming days. | |
ID: 34370 | Rating: 0 | rate: / Reply Quote | |