Message boards : Graphics cards (GPUs) : How to Run Only One of Two GPUs?
Joined: 30 Jul 08 | Posts: 17 | Credit: 80,343,188 | RAC: 0

I have two GPU cards, but want to run only one of them and leave the other one idle. How does one set multi-GPU preferences? Thanks!
Paul D. Buck
Joined: 9 Jun 08 | Posts: 1050 | Credit: 37,321,185 | RAC: 0

> I have two GPU cards, but want to run only one of them and leave the other one idle. How does one set multi-GPU preferences?

Sadly we do not have this capability directly. Over on the NC (Number Crunching) forum at SaH (SETI@home) they are using a configuration file that allows them to "force" the use of the GPU along with the CPU, the so-called 4+1 option ... you might try that until we get manager controls to make things work ...

The issue is likely to be that the system will then opt to use only one GPU, the same one that powers the display, because that is the primary ... but it's the best shot I can suggest ...

The file is called cc_config, and I got a link to a thread where they talk about it ...

{edit} They are trying to force use of the GPU, and you are trying to force it NOT to use the GPU ... but it's the same thing really ... trying to bash BOINC into submission ...
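For reference, a minimal cc_config.xml sketch along these lines, but for the opposite goal of hiding one GPU from BOINC, assuming a client version that supports the GPU-exclusion option (older clients spell the tag ignore_cuda_dev, newer ones ignore_nvidia_dev); the device number here is illustrative, with device 0 normally being the card that drives the display:

```xml
<!-- cc_config.xml, placed in the BOINC data directory (sketch only; check the
     documentation for your client version for the exact option name) -->
<cc_config>
  <options>
    <!-- Tell BOINC to ignore NVIDIA/CUDA device 1 so that only device 0 crunches.
         On older clients this tag is <ignore_cuda_dev> instead. -->
    <ignore_nvidia_dev>1</ignore_nvidia_dev>
  </options>
</cc_config>
```

Restart the BOINC client (or use the Manager's "Read config files" command, where available) so the change is picked up.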
Edboard
Joined: 24 Sep 08 | Posts: 72 | Credit: 12,410,275 | RAC: 0

In Vista, you can try NOT extending the desktop to the second GPU, which you can easily do in the Nvidia Control Panel.
Joined: 30 Jul 08 | Posts: 17 | Credit: 80,343,188 | RAC: 0

> In Vista, you can try NOT extending the desktop to the second GPU, which you can easily do in the Nvidia Control Panel.

Thanks, that's a great idea and I imagine that will work! I will give it a try when I get home this evening. :)
Joined: 12 Jan 09 | Posts: 36 | Credit: 1,075,543 | RAC: 0

Another thing to consider: if you're simply trying to limit how many CUDA tasks are running and how hard the GPUs work, try enabling SLI.

For those wanting each of multiple cards to run its own CUDA task, you have to disable SLI, or else CUDA thinks you only have 1 CUDA device, since SLI unifies them into a single cooperative unit. You'll see that in the Messages tab when you start BOINC: if SLI is enabled, BOINC sees only 1 CUDA device, but if you don't have SLI enabled, the messages will show all of your CUDA cards as separate devices. That's why folks trying to run the highest number of work units possible need to disable SLI before starting BOINC. That's a real annoyance, since usually for maximum performance, such as in games, you'd want SLI enabled. That's the whole idea behind SLI. For some reason nVidia hasn't updated CUDA to handle SLI yet.

But in your case, you may want to enable SLI. Then, since CUDA only sees one device, BOINC will only pull one CUDA task. Both cards will share the workload, but it will be half as much work being done, and that may help with what you're trying to accomplish. Actually, I haven't experimented to see whether the two cards would then just work half as hard, or whether the single task would simply complete twice as fast. Anyway, it might be worth trying.

You can easily enable or disable SLI in the control panel. However, for BOINC to recognize the change in CUDA device count, you need to exit BOINC before changing the SLI setting, then relaunch it. It will do a new CUDA device detection when it starts.

Hope this helps.
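To make the device-count point concrete, here is a minimal sketch of the kind of CUDA device enumeration the BOINC client performs at startup (this snippet and its output format are illustrative, not BOINC's actual code); built with nvcc, it reports how many CUDA devices the runtime exposes, which is what changes when SLI is toggled:

```cuda
// Minimal CUDA runtime device enumeration sketch (illustrative only).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("No CUDA devices detected: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("CUDA devices visible: %d\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("  Device %d: %s, %d multiprocessors\n",
                    i, prop.name, prop.multiProcessorCount);
    }
    return 0;
}
```

Running it once with SLI enabled and once with it disabled should show the difference in reported device count described above.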