Message boards : Number crunching : Advice on upgrade. Q9300 > i5-3570K
MJH · Joined: 12 Nov 07 · Posts: 696 · Credit: 27,266,655 · RAC: 0

Asus Z87-WS is good for us.
Joined: 5 Jan 09 · Posts: 670 · Credit: 2,498,095,550 · RAC: 0

> Asus Z87-WS is good for us.

I'm sure it is, but it's a lot of pennies for a MB :(
Joined: 16 Mar 11 · Posts: 509 · Credit: 179,005,236 · RAC: 0

> Asus Z87-WS is good for us.

The mobo is the foundation of any computer, so it makes sense to spend a little more, IMHO. A mobo that is already equipped with features you will likely find useful when the next generation of CPUs, GPUs, RAM, whatever, becomes affordable is a sound investment.

For example, for this project the current wisdom is that PCIe 3.0 does not benefit performance. That might be true for the current generation of GPUs and apps, but what about the future? With GPU performance increasing as quickly as it is, future models might benefit significantly from PCIe 3.0. Also, with the right power supply, you'll be able to put two high-performance cards on a PCIe 3.0 mobo without taking a performance hit from a PCIe bus bottleneck. If I were to buy a new mobo today, I would settle for nothing less than two PCIe 3.0 x16 slots that run at x16 when both are occupied.

You can buy a less capable board and replace it in the future, but if you also want to upgrade your GPU at that time you'll be looking at a big expenditure. To me it makes sense to upgrade 1 component at a time to keep purchases small. That way I can pay cash, save interest charges, and wait until what I want goes on sale. You'll never get the mobo, GPU and CPU you want all on sale at the same time, so wait until the best mobo you can get goes on sale, then pounce on it. Then wait for the other components to get marked down 1 by 1.

BOINC <<--- credit whores, pedants, alien hunters
Joined: 26 Jun 09 · Posts: 815 · Credit: 1,470,385,294 · RAC: 0

That's the way I do it too, but it takes a long time to get a new rig complete.

Greetings from TJ
Joined: 16 Mar 11 · Posts: 509 · Credit: 179,005,236 · RAC: 0

All I can say is that the bitter taste of a hasty purchase lingers long after the sweet taste of a quick, inexpensive purchase has faded. It's your money, not mine, so I'll stop preaching.

BOINC <<--- credit whores, pedants, alien hunters
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

It's important to get a balanced system. There's no point getting a 1200W PSU or a 4-PCIe-slot motherboard for one or two GTX660s, and forking out £700 for a CPU would be just as daft. If you build your own systems then you always have the option of selling parts and upgrading dynamically, but if you don't build your own systems it's better that they are future-proofed to some extent: an unused PCIe slot, additional PCIe power connectors, CPU and RAM upgrade routes, and two PCIe 3.0 x16 slots would be nice, just in case you wanted to drop in a Maxwell or two some time in the future.

Unfortunately, future-proofing a system for GPUGrid is something of a guessing game; you don't know how powerful the next generation of GPUs will be or what the bottlenecks will be. We may well see GPUs that are less reliant on the CPU, but we don't know yet. If the Maxwells are more reliant on PCIe speeds then PCIe 3.0 might be important. We also don't know what the apps will be like in a year or 18 months' time. If mid-to-high-end GPUs, such as a GTX660 or GTX670, only see a performance drop of 1% from PCIe 2.0 x8 vs PCIe 3.0 x16, it would go unnoticed as it's too difficult to measure. Consider an app change that increased this to 3%: no biggie for a mid-range GPU, but for a GTX780Ti it might translate to a 6% performance loss (per GPU), and perhaps 10% for Maxwells... When PCIe mattered more here, the impact was proportional to GPU performance, so less noticeable on lesser cards. Conversely, in 18 months' time PCIe 2.0 x8 might be just as good as it is today, but we might become more reliant on system memory and/or the CPU. Note that CPU performance has been a noticeable factor for years (again, less noticeable with lesser cards).

In my opinion, now is a good time to buy the existing architecture: the prices are reasonable, the cards are mature and the apps are mature. It's likely we won't be using Maxwells for a year, and we don't know what their performance will be like or how long app redevelopment will take. For sure they will be pricey when they turn up. Even when they do turn up, and the apps are redeveloped, uptake by crunchers will be slow. It will likely be at least another year before the Maxwells are in the majority, which means the project's focus will likely be continued support for the GK110 and GK104 architectures. For GPU crunching, 2 years of relatively high performance is about as good as it gets.

If you must have two PCIe 3.0 x16 slots now, the recent Intel i7-4820K is PCIe 3.0 compliant and has access to 40 PCIe lanes (so two slots at PCIe 3.0 x16). Ditto for the 4930K and the 4960X (but at a daft price). These are for the somewhat dated LGA2011 (but still the best) motherboards. There is also an AMD option, the ASUS Sabertooth 990FX/GEN3 R2.0, which uses a PLX-made 48-lane PCI-Express Gen 3.0 bridge chip; there might be other similar AMD options. These 4-PCIe-slot options, with support for two cards at PCIe 3.0 x16, would allow you to use up to 4 GPUs now (when PCIe 3.0 doesn't matter) and still be future-proofed should PCIe 3.0 and Maxwells offer something more in a year or two, when it might be a good time to upgrade the GPUs. Obviously this is only applicable if you don't build, are space-constrained, or want to buy reasonably high-end GPUs now; the cost of these systems is more than twice that of basic systems that can support two GPUs.

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
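The scaling argument above can be put into rough numbers. Below is a minimal back-of-envelope sketch in Python; it assumes a fixed amount of PCIe transfer time per task, so the faster the card, the larger the fraction of runtime lost to the bus. All figures are illustrative assumptions, not measurements.

```python
# Back-of-envelope model: a roughly constant PCIe transfer time per task
# costs a faster GPU a larger fraction of its runtime.
# All numbers below are illustrative assumptions, not measurements.

def pcie_penalty(compute_time_s, transfer_time_s):
    """Fraction of total task time spent on PCIe transfers."""
    return transfer_time_s / (compute_time_s + transfer_time_s)

transfer = 2.0  # assumed seconds of PCIe traffic per task, held constant

# Hypothetical per-task compute times; faster cards finish sooner.
cards = {
    "mid-range (GTX660-class)":  200.0,
    "high-end (GTX780Ti-class)":  60.0,
    "next-gen (twice as fast)":   30.0,
}

for name, compute in cards.items():
    print(f"{name}: ~{pcie_penalty(compute, transfer):.1%} lost to transfers")
```

The point is only that a penalty too small to measure on today's mid-range cards can become worth caring about on much faster ones.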
Joined: 26 Jun 09 · Posts: 815 · Credit: 1,470,385,294 · RAC: 0

That is a good piece of advice, skgiven! I would like to make one addition. The motherboards with 3 to even 6 PCIe slots all have the issue that the spacing between slots is only two slots wide, so 3 or 4 cards will sit very close to each other, which will likely result in heat build-up and thus lower performance or even errors. Moreover, most mobos have only 2 x16 slots. So in my opinion, two high-end cards in one system are enough. That also makes a system less expensive: a PSU of 1000-1100W would suffice, along with a not-too-big case, less memory and a less expensive mobo.

Greetings from TJ
Joined: 16 Mar 11 · Posts: 509 · Credit: 179,005,236 · RAC: 0

Yes, we can always rely on skgiven for good advice (not to say he's the only good advisor). I must agree with his thoughts on future-proofing within the context of crunching GPUGrid, but in the broader scheme there are other projects with GPU apps one might want to crunch, not just GPUGrid, and there will be even more in the future. Those might very well benefit from higher PCIe capacity. Or they may not. So really, maybe the only benefit of spending more on a mobo is that I would feel better knowing that if I need it, I'll have it already. Hmmmm. Money just to feel good. Like beer without the hangover.

Ram

If you intend to eventually have 4 GPUs, then it's slightly more economical (with respect to RAM) to have all 4 GPUs on 1 mobo, because then you carry the RAM overhead of only 1 OS. If you have 2 GPUs on each of 2 mobos then you have 2 OSs to provide RAM for. It's a very small difference, but I think it is slightly in favor of 1 mobo.

Power Supply

If you consider cost per watt rather than purchase price, bigger power supplies cost about the same as smaller power supplies, and in some cases even less. For example, at newegg.ca right now the cheapest (in terms of $/watt) Rosewill Capstone 80 Plus Gold PSU is the 1,000 watt model at $5.00/watt; their 550 watt model is $6.47/watt and the 750 watt model is $6.00/watt.

If 4 GPUs on 1 mobo require 2,000 watts, then I think 2 x 1,000 watt PSUs, properly connected and grounded, might work. I haven't tried it with 2 modern PSUs, but I have combined 2 old 250 watt PSUs to power 1 mobo and it worked: one PSU powered the mobo, the other powered the HDD, CD and floppy. By "worked" I mean the voltages at the outputs didn't change, nothing overheated, the computer ran fine and even crunched a few Einstein tasks error-free before I finally shut it down and tossed it all in the trash. It was an old P3 mobo. I haven't had the cojones to try it with 2 new PSUs and a new mobo, but I have heard it works fine with new active-PFC style PSUs too.

As for a 2,000 watt PSU... I haven't found a price for one, but I haven't looked very hard. Newegg lists an Athena brand 1,620 watt 80 Plus certified (not Gold) unit with active PFC for $270, which gives $6.0/watt, FWIW.

Case Size

I think 4 GPUs in 1 big case is more efficient in terms of space and expense. And who needs a case anyway? Skip da Shu sticks his mobos in milk crates; no problem, he says.

BOINC <<--- credit whores, pedants, alien hunters
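For anyone wanting to rerun that kind of PSU comparison with current prices, here is a small sketch; the wattages and prices below are placeholders to show the arithmetic, not quotes from any retailer.

```python
# Compare PSUs by cost per watt of rated capacity.
# Wattages and prices are placeholders, not real quotes.
psus = [
    ("550 W unit",   550,  85.0),   # (label, rated watts, price in $)
    ("750 W unit",   750, 110.0),
    ("1000 W unit", 1000, 150.0),
]

for label, watts, price in psus:
    print(f"{label}: ${price / watts:.3f}/W  ({watts / price:.2f} W per dollar)")
```

Either unit works for the comparison as long as it is applied consistently: a lower $/W (or a higher W per dollar) means more capacity for the money.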
Joined: 18 Jun 12 · Posts: 297 · Credit: 3,572,627,986 · RAC: 0

> Case Size

I think I'll get a case from dutchcedar: metal flake paint, pin striping and a couple of clear coats. That should make it check email like a screamin' demon.
Joined: 16 Mar 11 · Posts: 509 · Credit: 179,005,236 · RAC: 0

Flashawk, that would go fast for sure, but I've noticed the fastest cars in NASCAR have wild flames painted on them; might work on computers too?

Seriously, wood is easy to work with, but I would go with metal for better static protection, and if I were to go to the trouble of making a case (which I will be doing) I would make it big enough to hold 4 mobos plus PSUs. I would build it with an easily removed false back that the mobos would butt up against, and cut holes in the false back for the video cards to blow hot exhaust through. The space between the false back and the real back would be a duct that carries the hot air out of the box. I would mount the case below/beside/over a window and continue that duct out the window. I would also have a duct drawing air into the box from outside, with a HEPA dust filter in it. The PSUs would attach directly to the false back and exhaust directly into the duct. I would likely vent the CPUs directly into the box, because the mobos likely would not all be the same model, so the CPUs would not all be in the same positions relative to the false back; that would mean a custom duct for each CPU, which is doable but, with adequate airflow through the case, probably not necessary.

I would mount an Ethernet switch inside the box so only 1 Ethernet cable goes into it. The mobos would boot off PXE and share 1 HDD configured as a NAS.

Some of you will recall I did something like this months ago. It worked extremely well, but I abandoned it because it was essentially a huge box that I stuck cased computers into; the duct work was complicated and wasted a lot of space. It proved the concept, but I'm going to redo it as described above to shrink its size and increase "component density". That's food for a totally new thread, so enough about that in this thread. I brought it up here only to continue the idea I raised earlier about building farms slowly and inexpensively, with scalability and cooling (always the biggest problem when running multiple rigs/GPUs) in mind.

BOINC <<--- credit whores, pedants, alien hunters
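On the cooling side of that build, a rough way to size the duct airflow is the common rule of thumb CFM ≈ 3.16 × watts / ΔT(°F). The sketch below assumes a hypothetical four-GPU heat load; the wattages are assumptions for illustration, not measurements of any particular rig.

```python
# Rough duct sizing for a multi-board enclosure, using the rule of thumb
# CFM ~= 3.16 * watts / delta_T_F (sensible heat, near-sea-level air).
# The heat-load figures below are assumptions for illustration.

def required_cfm(heat_watts, delta_t_f):
    """Airflow (cubic feet per minute) needed to remove heat_watts with the
    exhaust running delta_t_f degrees Fahrenheit above the intake air."""
    return 3.16 * heat_watts / delta_t_f

gpu_watts   = 4 * 250   # four ~250 W cards (assumed)
other_watts = 400       # CPUs, mobos, PSU losses (assumed)
total_watts = gpu_watts + other_watts

for delta_t in (10, 15, 20):  # tolerable exhaust-minus-intake rise, in F
    print(f"dT {delta_t} F: ~{required_cfm(total_watts, delta_t):.0f} CFM")
```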