Message boards : Number crunching : Specs of the GPUGRID 4x GPU lab machine
**GDF** (Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0)
We are trying to build one. We would like to know what the real power consumption is, to see whether we can cope with a 1500 W PSU.

gdf
(Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0)
Even if we assume the maximum TDP draw of 300 W for each card, the system would stay below 1.5 kW. I'd estimate the per-card draw under GPU-Grid to fall between 200 and 250 W, so sustained draw should be fine. Problems may arise during peak draw and due to load distribution within the PSU. I'm not sure anyone could tell you reliably whether it will work without testing. You could swap the GTX 280s for 295s one by one and check stability, maybe monitoring the draw with a Kill A Watt. If it doesn't work you could stay with a 1+3 or 2+2 config. Not very convenient, but it may be the only way to find out, assuming no one else is going to try something as crazy as this ;)

MrS
Scanning for our furry friends since Jan 2002
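To put rough numbers on the estimate above, here is a minimal back-of-the-envelope sketch in Python. The 289 W TDP figure is the one quoted later in this thread and the 200-250 W per-card range comes from the post above; the allowance for CPU, board and drives, and the PSU efficiency used for the wall-draw line, are assumptions made purely for illustration.

```python
# Back-of-the-envelope PSU budget for the proposed 4x GTX 295 box.
# Values marked "assumed" are illustrative, not measured.

NUM_CARDS = 4
TDP_PER_CARD_W = 289        # GTX 295 TDP, as quoted later in the thread
LOAD_PER_CARD_W = 225       # midpoint of the 200-250 W per-card estimate above
CPU_AND_BOARD_W = 200       # assumed: quad-core CPU, board, RAM, disks, fans
PSU_RATING_W = 1500
PSU_EFFICIENCY = 0.85       # assumed, only used for the wall-draw estimate

def dc_load(per_card_w: float) -> float:
    """Total DC-side load the PSU has to deliver."""
    return NUM_CARDS * per_card_w + CPU_AND_BOARD_W

for label, per_card in [("worst case (TDP)", TDP_PER_CARD_W),
                        ("estimated GPUGRID load", LOAD_PER_CARD_W)]:
    load = dc_load(per_card)
    print(f"{label:24s}: {load:6.0f} W of {PSU_RATING_W} W "
          f"({PSU_RATING_W - load:+.0f} W headroom)")

# What a Kill A Watt at the wall might read at full TDP load:
print(f"wall draw at TDP load   : {dc_load(TDP_PER_CARD_W) / PSU_EFFICIENCY:6.0f} W")
```

Even at full TDP the DC-side load stays under the 1500 W rating, which matches the "below 1.5 kW" estimate; a Kill A Watt at the wall would read higher because of PSU efficiency.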
**Jack Shaftoe** (Joined: 26 Nov 08 · Posts: 27 · Credit: 1,813,606 · RAC: 0)
4 GTX 295 would be awesome.. if only for the sake of it :D

Is this correct? Does it really take a top of the line quad to keep them fed, whereas a Q6600 could not?
**Paul D. Buck** (Joined: 9 Jun 08 · Posts: 1050 · Credit: 37,321,185 · RAC: 0)
> 4 GTX 295 would be awesome.. if only for the sake of it :D

Who knows, maybe they want to run some real projects too ... In that case they'll want some real CPU power while they're playing with the GPU Grid thingie toys ... :)
(Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0)
> Is this correct? Does it really take a top of the line quad to keep them fed, whereas a Q6600 could not?

It's not because it's a top-of-the-line CPU, it's because 4 GTX 295s have a total of 8 GPUs, so ideally you'd want 8 CPU cores to keep them busy. A system with the smallest i7 should still be cheaper and more power efficient than a dual quad core.

Ahh, my bad. I was assuming the system would run Windows, which currently needs about 80% of one core for each GPU. The devs probably prefer Linux, where CPU utilization is not a problem anyway. So forget about the i7 comment!

MrS
Scanning for our furry friends since Jan 2002
**Jack Shaftoe** (Joined: 26 Nov 08 · Posts: 27 · Credit: 1,813,606 · RAC: 0)
> Ahh, my bad. I was assuming the system would run Windows, which currently needs about 80% of one core for each GPU.

I run Windows Vista x64 right now, and the 6.55 app on BOINC 6.5.0 uses about 6-7% of my available CPU, or about 28% of one core. I bet you could run 4 of these with a Q6600 (with no other projects) and not have the CPU be the bottleneck.
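For reference, the conversion being done in these figures is just multiplying by the core count; a tiny sketch, assuming a quad core and that (as Task Manager and BOINC do) the load is reported as a share of all cores:

```python
# Convert "percent of total CPU" on a multi-core machine
# into "percent of a single core".
def percent_of_one_core(total_cpu_percent: float, num_cores: int) -> float:
    return total_cpu_percent * num_cores

# Figures from the post above, assuming a quad core:
for total in (6.0, 7.0):
    print(f"{total:.0f}% of 4 cores = {percent_of_one_core(total, 4):.0f}% of one core")
```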
(Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0)
I just checked: currently I'm at 11-13% of my quad, whereas it used to be 15-20%.

Anyway, if performance on the GPUs suffers even a little bit you're going to lose thousands of credits a day.. and bite yourself in the a** for not having more cores, or Linux, or a workaround for Windows ;)

MrS
Scanning for our furry friends since Jan 2002
**Jack Shaftoe** (Joined: 26 Nov 08 · Posts: 27 · Credit: 1,813,606 · RAC: 0)
If you build a 4x GPU system, the chances of using your CPU for anything else are slim, and the i7 uses significantly more power than a C2Q (and DDR3 costs a lot more too). I just think it would be wise to save a couple hundred bucks and go C2Q - maybe a Q9450 or Q9550 instead of the Q6600.
(Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0)
> I just don't like the looks of the i7's - they eat tons of power, and I've read lots of reports that they run really hot.

I don't like the ridiculous price of the X58 boards, with tons of stuff I neither need nor want to pay for. But the processors themselves are great. Under load they don't use more power than an equally clocked Penryn quad but provide about 30% more raw number-crunching performance. And under medium load they use considerably less power than a Penryn, because they can switch individual cores off.

About running hot: I can imagine that the single die leads to higher temperatures at the same power consumption compared to a Penryn, where the heat is spread over 2 spatially separated dies. And I'd use proper cooling for any of my 24/7 CPUs anyway.

Edit: after reading your edit, maybe I should make my point more clear. The GPU crunches along until a step is finished. Once it's finished the CPU needs to do *something* ASAP, otherwise the GPU cannot continue. So if one CPU core feeds 2 GPUs there will be cases when both GPUs finish, but the CPU is still busy dealing with GPU 1 and cannot yet take care of GPU 2. The total load of that core may be less than 100%, but on average you'd still lose performance on the GPUs. That's why I started my thinking with "1 core per GPU". Later I remembered that under Linux the situation is much less critical. If each GPU only needs about 1% of a core, I can imagine that a quad is good enough for 8 GPUs.

MrS
Scanning for our furry friends since Jan 2002
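The scheduling effect described in that edit can be illustrated with a small toy simulation. This is not GPUGRID's actual scheduling code; the step time, per-step CPU service time and jitter below are invented parameters, chosen only to show how GPUs lose time waiting for a busy core even when that core's average load is under 100%.

```python
import random

def simulate(num_gpus: int, cpu_cores: int,
             gpu_step_s: float = 1.0, cpu_service_s: float = 0.25,
             steps: int = 10_000, seed: int = 1) -> float:
    """Toy model: each GPU computes a step, then needs cpu_service_s of CPU
    time before it can start the next one. Returns the fraction of GPU time
    lost waiting in the queue for a free core (service time not counted)."""
    rng = random.Random(seed)
    core_free = [0.0] * cpu_cores      # time at which each core becomes free
    gpu_ready = [0.0] * num_gpus       # time at which each GPU finishes its step
    waited = 0.0
    end = 0.0
    for _ in range(steps):
        g = min(range(num_gpus), key=lambda i: gpu_ready[i])   # next finisher
        c = min(range(cpu_cores), key=lambda i: core_free[i])  # next free core
        start = max(gpu_ready[g], core_free[c])
        waited += start - gpu_ready[g]                 # GPU sat idle this long
        core_free[c] = start + cpu_service_s
        gpu_ready[g] = core_free[c] + gpu_step_s * rng.uniform(0.9, 1.1)
        end = max(end, gpu_ready[g])
    return waited / (end * num_gpus)

for cores in (1, 2, 4, 8):
    lost = simulate(num_gpus=8, cpu_cores=cores)
    print(f"{cores} core(s) feeding 8 GPUs -> ~{lost:5.1%} of GPU time lost waiting")
```

The trend it prints - more GPU time lost the fewer cores there are - is the intuition behind the "1 core per GPU" starting point above; shrinking cpu_service_s to the roughly 1% of a core that Linux reportedly needs makes the loss vanish in the same model.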
(Joined: 21 Oct 08 · Posts: 144 · Credit: 2,973,555 · RAC: 0)
TDP for the GTX 295 is listed as 289 watts here: http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units
(Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0)
Oh, yeah.. that's what I meant by "300 W" :D

MrS
Scanning for our furry friends since Jan 2002
**[BOINC@Poland]AiDec** (Joined: 2 Sep 08 · Posts: 53 · Credit: 9,213,937 · RAC: 0)
I'm just curious how you can manage work for 4 GPUs. With a quad core you can have just 4 WUs, and all of them will be in progress (nothing in stock). When one WU has finished crunching, one GPU will be idle. Even with <report_results_immediately>1, each of your GPUs will be idle for some time once every 6 hours - that means four times per 6 hours. It seems that you are wasting at least a few hours of your GPUs' time every day. Or maybe you have some secret method of managing work for this machine?

I'm using a method (which I don't like) of crunching GPUGrid at 100% and SETI CUDA at 10%, so I always have something in stock. What's your method? Or maybe it doesn't matter to you that the GPUs are idle for so long every day? And if you can achieve 100% working time for these GPUs, then how do you do that with the daily quota of 15 WUs? Because even stock graphics cards can crunch 16 WUs...

I had a machine with 3x 280. But it wasn't possible to keep them supplied with work 100% of the time every day without babysitting - without checking the computer every few hours. I can't do that; because of my work I'm sometimes away from home for 18 hours. So I've split the GPUs up, and now this machine has just 2x 280s, and that's approximately OK. But soon, very soon, I'd like to buy 2x 295. I don't know how to manage work for these cards 100% of the time without constant babysitting. So my question is: how are YOU doing that?
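A rough sketch of the arithmetic behind that complaint, in Python. The 6-hour WU length and the 4-WU limit come from the posts; the 20 minutes of idle time per finished WU is an invented placeholder for the scheduler round trip and back-off, which the thread does not quantify.

```python
# Rough estimate of daily GPU idle time on a 4-GPU box that can hold only
# 4 WUs at once (one per CPU core), so every finished WU leaves a GPU idle
# until the scheduler hands out a replacement.

GPUS = 4
WU_HOURS = 6.0               # from the post above: one WU roughly every 6 hours

# Assumed, purely for illustration: average gap between finishing a WU and
# starting the next one (scheduler contact, back-off, download).
IDLE_PER_WU_MINUTES = 20.0

wus_per_gpu_per_day = 24.0 / (WU_HOURS + IDLE_PER_WU_MINUTES / 60.0)
total_wus_per_day = GPUS * wus_per_gpu_per_day   # compare with the daily quota of 15 WUs mentioned above
idle_hours_per_day = GPUS * wus_per_gpu_per_day * IDLE_PER_WU_MINUTES / 60.0

print(f"~{wus_per_gpu_per_day:.1f} WUs per GPU per day, ~{total_wus_per_day:.1f} in total")
print(f"~{idle_hours_per_day:.1f} GPU-hours idle per day "
      f"at {IDLE_PER_WU_MINUTES:.0f} min lost per WU")
```

With that (assumed) 20-minute gap the machine loses roughly 5 GPU-hours a day, the same ballpark as the "few hours every day" claimed above; a shorter or longer scheduler delay scales the figure directly.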
**Paul D. Buck** (Joined: 9 Jun 08 · Posts: 1050 · Credit: 37,321,185 · RAC: 0)
@AiDec

Well, I am using BOINC Manager version 6.5.0 ... It has 0.4 days of extra work, connect every 0.1 days, and I have one task in work and 3 "spare" tasks on that system. On the other system, also running BOINC Manager 6.5.0, I have a 0.1 day queue and no extra, and I have one task in work and one or two pending ...

I check every few hours and post updates as tasks are completed, but other than that I just let the two systems run ... The first one has a GTX 280, which does tasks in about 4.5 hours, and the slower system at the moment has a 9800 GT ...

To me the keys are the version of the BOINC Manager and the queue size ...

Hope this helps ...
(Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0)
> Hope this helps ...

Sorry, but I don't think so. He's asking how people are keeping 4 GPUs on a quad core fed, when you're limited to 4 WUs at one time. Buying an i7 could help, but that ruins the "good crunching power per investment" relation which makes us use the GPUs in the first place.

MrS
Scanning for our furry friends since Jan 2002
**Paul D. Buck** (Joined: 9 Jun 08 · Posts: 1050 · Credit: 37,321,185 · RAC: 0)
> Hope this helps ...

Hmm, I could have sworn that I had as many as 6 tasks locally ... my bad I guess ...

I suppose if the project is limiting the downloads to a max of 4 total, then I should wait some time before I run out and get a pair of 295s ... I suppose the problem will arise if you run more than GPU Grid on the machine. If you only run GPU Grid, then when a task is done, a 0.1-day queue (or less) should make BOINC contact the scheduler and get more work ... I guess I am still missing something ...
**[BOINC@Poland]AiDec** (Joined: 2 Sep 08 · Posts: 53 · Credit: 9,213,937 · RAC: 0)
@Paul D. Buck - my post was about the main subject of this thread, 'Specs of the GPUGRID 4x GPU lab machine', not about your computer :). But thanks for the answer anyway ;).

> Buying an i7 could help (...)

The question is not what could help; it is about what happens now. Is the owner of this machine wasting a lot of his cards' time, or does he know some tricks? The question is whether there is any sense in having 4 GPUs (because I don't see any sense in having more than 2x GTX 280, since it's impossible to keep more than 2x 280 working 100% of the time). These are just a few questions which could tell us a lot about GPUGrid...

I've been thinking about multiple GPUs in one machine for 6 months. I had a machine with 3x 280 and I couldn't get 100% work for all the GPUs. I've asked for 3x CPU tasks per computer, and I've asked for 2x CPU tasks per computer... and nothing happened. So I'm asking: what is the way to fill up 4 GPUs?
**Paul D. Buck** (Joined: 9 Jun 08 · Posts: 1050 · Credit: 37,321,185 · RAC: 0)
> @Paul D. Buck

My bad ... I guess the last thing left to me is to suggest water ... probably about 4 gallons worth ... :)

I guess I am fortunate in that I only have small GPUs, with only one core per card, so I don't see the walls ... I would guess the guy we have been working with on getting his 3 GTX 295s running is in for a disappointment too ...

Sadly, there are only two projects that use the GPU on BOINC at the moment ... with GPU Grid being the best run to this point, with the most stable application. I probably won't go more nuts until Einstein@Home or some other project comes out with a GPU version of their application. If tax season does not hit me too hard I would like to build a machine again in April/May, and maybe by then these issues will be ironed out ...

Anyhow, sorry about the confusion ...
(Joined: 21 Oct 08 · Posts: 144 · Credit: 2,973,555 · RAC: 0)
I believe that the secret is that they don't run SETI (or any other project besides this one). With no CPU-based projects, can't one effectively set up BOINC to use a '0+4' alignment by adjusting the CPU use percentages, in the same way that others use a '3+1' in place of the default '4+1'?
**Stefan Ledwina** (Joined: 16 Jul 07 · Posts: 464 · Credit: 298,573,998 · RAC: 0)
I don't think they run BOINC on their lab machine... My guess is that they manually feed it with jobs... ;)

pixelicious.at - my little photoblog
**Paul D. Buck** (Joined: 9 Jun 08 · Posts: 1050 · Credit: 37,321,185 · RAC: 0)
Perhaps we can get ETA to prevail on the people that make the decisions. We have at least one potential system out there that is going to have 6 cores available for work. If the scheduler cannot be made smarter, the simplest solution is to raise the total queue to 7 ... but I would suggest investing in making the scheduler smarter (if possible), so that the queue size is related to the productivity and speed of the system.

I mean, heck, I am almost tempted to buy a 295 pair and a better PSU to put into my top system, which would give me 6 cores in the one system here too ... but there is no point in that if I cannot get enough work to keep them busy ...

Of course, as ETA has been happy to point out so often ... I have been missing the point and saying the wrong stuff ... oh well ...