Message boards : Number crunching : Monitor sometimes becomes black while crunching GPUGRID
Erich56
Message 50139 - Posted: 29 Jul 2018 | 19:56:54 UTC

On the same machine with two GTX 980 Ti cards, on which I had been crunching GPUGRID under Windows XP for 2 1/2 years, I recently installed Windows 10.
For 3 days I have been crunching LHC tasks, which don't use the GPU.

Today I started crunching GPUGRID, and every few hours the monitor suddenly goes black.
From the looks of it, GPUGRID crunching stops at that moment (the PC gives off less heat), but some warm air still comes out of the side, so I guess the LHC tasks carry on (by the way, this problem never happened during the past days of LHC-only crunching).

All I can do is push the power button and reboot.

The Windows event log (System) shows the warning "Display driver nvlddmkm stopped responding and has successfully recovered". This entry shows up between 50 and 60 times within about 4 minutes, around the time the monitor went black (probably at some point the driver could no longer be recovered, or whatever).
Under "Details" it shows Event ID 4101, and under event data "nvlddmkm".
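As a side note for anyone chasing the same symptom: those Event ID 4101 entries can be tallied after a reboot. The sketch below is mine, not from this thread; it assumes a plain-text export of the System log, e.g. one produced with the standard Windows tool `wevtutil qe System /f:text /c:200`.

```python
# Count nvlddmkm TDR warnings (Event ID 4101, "Display driver stopped
# responding and has successfully recovered") in a plain-text export of
# the Windows System event log, e.g. from:
#   wevtutil qe System /f:text /c:200 > system_events.txt

def count_driver_events(log_text: str, driver: str = "nvlddmkm") -> int:
    """Count event records in wevtutil text output that mention `driver`."""
    # wevtutil's /f:text output starts each record with "Event["
    records = log_text.split("Event[")[1:]
    return sum(1 for record in records if driver in record)

def main(path: str = "system_events.txt") -> None:
    # errors="ignore" because the export encoding varies with the console
    with open(path, errors="ignore") as f:
        print(f"nvlddmkm events found: {count_driver_events(f.read())}")
```

Counting the records over time makes it easy to see whether the resets cluster around a particular workload.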

Does anyone know about this problem and could give me advice on how to solve it?

Richard Haselgrove
Message 50140 - Posted: 29 Jul 2018 | 21:01:03 UTC - in response to Message 50139.

Driver crashes are a common problem when a GPU is overheated or overclocked.

Check ventilation and dust bunnies; if overclocked, knock it back a couple of notches.

3de64piB5uZAS6SUNt1GFDU9d...
Message 50141 - Posted: 29 Jul 2018 | 21:44:54 UTC
Last modified: 29 Jul 2018 | 21:52:02 UTC

You can install TThrottle and record the GPU temps to find out. After a crash you can reboot and check the temperature graphs for the last 24 hours. In TThrottle you can also set a maximum temperature at which the PC shuts down automatically before the GPU gets damaged; I normally set it to 85°C.

https://efmer.com/download-tthrottle/

In addition, MSI Afterburner keeps my GPU temps constantly below 70°C. Both measures together have saved my GPUs a couple of times.

If the GPU temperature is too high, you could try renewing the thermal grease; if that doesn't help, there may be a broken heat pipe. But if the GPU temperatures are OK, that unfortunately doesn't prove anything, as the temperatures of the memory chips and voltage regulators don't show up in the graphs; there simply are no sensors there. A thermogram of the board could then reveal the cause.
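The temperature logging described above can also be approximated with a few lines of script instead of a separate tool. A minimal sketch (mine, not the poster's; it relies on the documented `nvidia-smi --query-gpu` CSV interface, and the 85°C limit mirrors the TThrottle threshold suggested above):

```python
import subprocess

TEMP_LIMIT_C = 85  # same shutdown threshold as suggested for TThrottle

def parse_temps(csv_text: str) -> dict[int, int]:
    """Parse 'index, temperature' lines as emitted by
    nvidia-smi --query-gpu=index,temperature.gpu --format=csv,noheader"""
    temps = {}
    for line in csv_text.strip().splitlines():
        idx, temp = line.split(",")
        temps[int(idx)] = int(temp)
    return temps

def read_gpu_temps() -> dict[int, int]:
    """Query the driver; requires nvidia-smi on the PATH."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,temperature.gpu",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_temps(out)
```

Run from a scheduled task every few minutes and appended to a log file, this shows whether either 980 Ti was running hot just before a driver reset.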

https://www.guru3d.com/articles-pages/msi-geforce-gtx-970-gaming-review,9.html
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

jjch
Message 50142 - Posted: 30 Jul 2018 | 0:11:10 UTC

If you find that this problem is not heat-related, it could be that you are not using a good driver. You mentioned that you installed Windows 10, which implies that you did a clean or new install rather than an upgrade.

You did not state that you installed the current video drivers directly from Nvidia. If you didn't, I would suggest downloading those first and then doing a clean install of the video drivers.

Use DDU (Display Driver Uninstaller) first. It will also set Windows 10 so it won't install the Win 10 default drivers. Then install the current Nvidia drivers.

Erich56
Message 50144 - Posted: 30 Jul 2018 | 5:18:09 UTC - in response to Message 50142.

Driver crashes are a common problem when a GPU is overheated or overclocked.
Check ventilation and dust bunnies: if overclocked, knock it back a couple of notches.

Heat and/or overclocking should not be the problem. As before under Windows XP, the GPU temp is around 61-62°C, and the clock is around the default value.

I would rather guess that it has to do with what is described here:

If you find that this problem is not heat-related, it could be that you are not using a good driver. You mentioned that you installed Windows 10, which implies that you did a clean or new install rather than an upgrade.

You did not state that you installed the current video drivers directly from Nvidia. If you didn't do so I would suggest downloading those first and then doing a clean install of the video drivers.

I used the driver that originally came with the new install of Windows 10; it was version 388..

The driver I have now downloaded from NVIDIA is 398.36. Installation worked without problems, so I restarted BOINC / GPUGRID and will see what happens (even the tasks which I interrupted for the new driver installation continued normally).

Erich56
Message 50147 - Posted: 30 Jul 2018 | 9:00:25 UTC - in response to Message 50144.

... The driver I now downloaded from NVIDIA is 398.36. Installation worked without problems, so I restarted BOINC / GPUGRID and will see what happens (even the tasks which I interrupted for the new driver installation continued normally)

The problem still exists, despite the new driver :-(((

Again I looked at the Windows event log (System), and it shows the above-cited warning ("Display driver nvlddmkm stopped responding and has successfully recovered") many times from 10:19 a.m. on, at exactly 4-second intervals, until 10:52, when I pushed the power button.

Does anyone have any idea what the reason could be? What can I do to get GPUGRID working properly?

3de64piB5uZAS6SUNt1GFDU9d...
Message 50148 - Posted: 30 Jul 2018 | 12:29:40 UTC
Last modified: 30 Jul 2018 | 12:30:37 UTC

as I wrote,

in case the GPU temperatures are OK, that unfortunately does not mean anything as the memory chips or voltage regulators temps don't show up


I would take the 980 Ti GPUs out one at a time, to see if the problem is related to a particular card. If the system works properly with only one GPU installed, then you know. You may also want to move the 980 Tis into another PC to see whether the error moves along with one card or the other.
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

Jim1348
Message 50149 - Posted: 30 Jul 2018 | 12:51:31 UTC - in response to Message 50147.

The problem still exists, despite of the new driver :-(((

In Windows, the only proper way to install a new driver is to first uninstall the old driver, and it has to be a clean uninstall using Display Driver Uninstaller (DDU) to get rid of all traces of the old one (choose the option to reboot into Safe Mode).
https://www.wagnardsoft.com/forums/viewtopic.php?f=5&t=1174&sid=38069867de013db1e7c3bd469b98c82a

You might think that Nvidia would do that themselves, but they don't.

Erich56
Message 50150 - Posted: 30 Jul 2018 | 13:15:52 UTC - in response to Message 50149.

In Windows, the only proper way to install a new driver is to first uninstall the old driver. And it has to be a clean uninstall using Display Driver Uninstaller (DDU), to get rid of all the traces of the old one (chose the option to reboot into Safe Mode).

that's exactly what I did, anyway.

I am now trying various methods to narrow down the problem:

- right now I am crunching SETI@home tasks, so I'll see whether the problem occurs there as well. If so, then I might install

- Folding@home, which works with OpenCL (in contrast to GPUGRID and SETI, both of which work with CUDA).

If the problem persists in both of the above cases, then I will revert to Windows XP (on the same machine, with dual boot), which I have used for the past 2 1/2 years, and see if the problem occurs there too.

If it does, then I am afraid that JoergF may be right in assuming that there is some kind of hardware failure :-(((

flashawk
Message 50151 - Posted: 30 Jul 2018 | 13:25:24 UTC

I had the same problem, it was a power saving issue with the BIOS and Windows 10

Retvari Zoltan
Message 50152 - Posted: 30 Jul 2018 | 13:32:03 UTC - in response to Message 50151.

I had the same problem, it was a power saving issue with the BIOS and Windows 10

How did you resolve it?

Erich56
Message 50153 - Posted: 30 Jul 2018 | 13:46:33 UTC - in response to Message 50151.

I had the same problem, it was a power saving issue with the BIOS and Windows 10

The strange thing is, though, that the problem did NOT occur within the first 2-3 days after the installation of Windows 10, but only after I started GPUGRID crunching (before that, I was only crunching LHC tasks on the CPU).

Still, your reply to Zoltan's question
How did you resolve it?
would be very interesting.

tullio
Message 50154 - Posted: 30 Jul 2018 | 14:47:37 UTC - in response to Message 50150.
Last modified: 30 Jul 2018 | 15:00:43 UTC


- Folding@home, which works with OpenCL (in contrast to GPUGRID and SETI, both of which work with CUDA).

On my Windows 10 PC, SETI@home uses opencl_nvidia_SoG. On GPUGRID the app uses cuda80 and all tasks fail.
On my Linux box, SETI@home uses opencl_nvidia_sah and Einstein@Home uses FGRPopenclK-nvidia.
GPUGRID used cuda80 and it worked on Linux.
The GPU boards are a GTX 1050 Ti on Windows and a GTX 750 Ti on Linux.
Tullio

Jim1348
Message 50155 - Posted: 30 Jul 2018 | 14:48:45 UTC - in response to Message 50153.

the strange thing is, though, that the problem did NOT occur within the first 2-3 days after the installation of Windows 10; but only after I started GPUGRID crunching (before, I was only crunching LHC tasks via the CPU).

Windows was always very erratic for me when I was running it with a screensaver, for example. Have you disabled all screen savers, sleep, and power-down features? Also, in the BIOS, I would disable the various power-control modes; they are known to be problematic.

But I like JoergF's idea of removing the cards one at a time. I think you could just unplug the PCIe power cables one at a time to do a simple test.

Retvari Zoltan
Message 50156 - Posted: 30 Jul 2018 | 14:57:26 UTC - in response to Message 50155.

I think you could just unplug the PCIe power cables one at a time to do a simple test.
This is a very bad idea.

3de64piB5uZAS6SUNt1GFDU9d...
Message 50157 - Posted: 30 Jul 2018 | 15:08:00 UTC - in response to Message 50156.
Last modified: 30 Jul 2018 | 15:18:26 UTC

I think you could just unplug the PCIe power cables one at a time to do a simple test.
This is a very bad idea.


I agree. For a test you should remove the card completely, because if you leave it in the mainboard slot without the additional 6/8-pin supply, it will not be powered properly and the PC will (likely) not power up. If you are lucky you'll just hear the usual BIOS beep codes and nothing more, but it could also result in further hardware damage, as some components (e.g. logic gates) would operate in undefined states or even oscillate, resulting in local excess current. DON'T try that.
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

Erich56
Message 50158 - Posted: 30 Jul 2018 | 16:00:15 UTC - in response to Message 50154.

On my Windows 10 PC SETI@home uses opencl_nvidia_SoG. On GPUGRID the app uses cuda80 and all tasks fail.
Tullio

I just noticed that SETI has tasks with opencl_nvidia_SoG as well as with cuda42 and cuda50.

(I also tried to test Einstein, but besides GPU tasks it downloads CPU tasks as well, which fill up all 12 of my CPU cores, which I don't want to happen. I am sure this can be controlled somehow, but I haven't found out how yet.)

Jim1348
Message 50160 - Posted: 30 Jul 2018 | 17:36:49 UTC - in response to Message 50157.

For a test you should remove the card completely, because if you leave it in the mainboard slot without the additional 6/8-pin supply, it will not be powered properly and the PC will (likely) not power up. If you are lucky you'll just hear the usual BIOS beep codes and nothing more, but it could also result in further hardware damage, as some components (e.g. logic gates) would operate in undefined states or even oscillate, resulting in local excess current. DON'T try that.

My experience is that the card won't power up at all, and won't draw much power.

Richard Haselgrove
Message 50162 - Posted: 30 Jul 2018 | 17:44:44 UTC - in response to Message 50160.

My understanding is that NVidia cards are designed to power up using the 75W available from the PCIe slot, detect that the additional power cables are unconnected, and refuse to move out of a protective low-power state.

Thus, it is in a different state from total removal. Possibly safe, but not very informative.

Jim1348
Message 50163 - Posted: 30 Jul 2018 | 17:51:34 UTC

I don't think that any of the signal inputs are left "floating", if that is the concern. They will all be tied to a supply voltage, and clamped in a known state. Nvidia would not leave that situation unprotected by any means.

Erich56
Message 50165 - Posted: 30 Jul 2018 | 19:33:37 UTC - in response to Message 50150.
Last modified: 30 Jul 2018 | 19:34:22 UTC

earlier today I wrote:

I am now trying various methods to narrow down the problem:

- right now I am crunching SETI@home tasks, so I'll see whether the problem occurs there as well. If so, then I might install

- Folding@home, which works with OpenCL (in contrast to GPUGRID and SETI, both of which work with CUDA).

If the problem persists in both of the above cases, then I will revert to Windows XP (on the same machine, with dual boot), which I have used for the past 2 1/2 years, and see if the problem occurs there too.

If it does, then I am afraid that JoergF may be right in assuming that there is some kind of hardware failure :-(((


I have now run first SETI and then Einstein for several hours, and no failure occurred.

Hence, a minute ago I switched from Win10 to WinXP and am now crunching two GPUGRID tasks overnight, plus a few LHC tasks (CPU only), as I have done for a long time.

I am curious what I will see tomorrow morning.

flashawk
Message 50166 - Posted: 30 Jul 2018 | 20:10:38 UTC - in response to Message 50152.

I had the same problem, it was a power saving issue with the BIOS and Windows 10

How did you resolve it?



It had something to do with multiple cards and the UEFI BIOS: if the monitor was set to turn off after a time limit, mine wouldn't come back on. I turned off that feature until a new BIOS came out; that and the "Fall Creators Update" fixed it for me.

It seems to me that update and the April 2018 update fixed a lot of issues with Windows 10. I know they never mention everything that is fixed, because of interdependencies.

Erich56
Message 50167 - Posted: 30 Jul 2018 | 20:33:29 UTC - in response to Message 50166.
Last modified: 30 Jul 2018 | 20:38:46 UTC

I turned off that feature until a new BIOS came out, that and the "Fall Creators Update" fixed it for me.

Which means: in the Windows energy settings you switched "turn off monitor" to "never"? So your monitor was on 24 hours per day?

Also: did the problem occur only when crunching GPUGRID, or at other times too?

flashawk
Message 50168 - Posted: 30 Jul 2018 | 20:49:28 UTC - in response to Message 50167.
Last modified: 30 Jul 2018 | 21:34:15 UTC

It happened whenever I was using all the cards. I dug into it far enough to find that it had to do with GPU utilization being high and no SLI bridge being on or present.

Getting my motherboard sorted was like shaking down a battleship: all kinds of problems.

Edit: When I switched to Windows 10 I hated it: the looks, where stuff was. Then I remembered reading about a small applet called Classic Shell that gives you the option of a Windows 7 or XP start menu with desktop icons instead of those stupid boxes.

It has a ton of options, like Windows Explorer with all the menus and pathways. Once I installed it I was very happy with Windows 10. It would even create backups of the changes I made and ask me if I wanted to restore them after installing major Windows updates.

Anyway, for those people who will be forced to switch to 10, it's a pretty cool alternative.

Zalster
Message 50169 - Posted: 31 Jul 2018 | 1:48:24 UTC - in response to Message 50167.

I turned off that feature untill a new BIOS came out, that and the "Fall creators update" fixed it for me.

which means: in the Windows Energy settings you switched "turn off monitor" to "never" ? So your monitor was on 24 hours per day?

Also: did the problem occur only when crunching GPUGRID? Or anytime else?


Only if you leave the monitor on; I just turn the monitor off.

A couple of things: find the energy profiles and change to Max Performance.

I turn off the screen saver, switch "power off monitor" to never, "blank screen" to never, and disable spinning down the hard drive (if you are still using one vs. an SSD).

Make sure that nowhere have you set the machine to sleep after x minutes of no use.

Those error messages you are getting come from the drivers crashing when the computer shuts down the GPUs. You need to figure out why the GPUs are being put to sleep.

As far as keeping Einstein from using both CPU and GPU: that is in the preference settings on your account page (Preferences -> Project). Select a location, change "use CPU" to no and "use NVIDIA GPU" to yes, save, and then make sure the location of your computer is set correctly.

Erich56
Message 50170 - Posted: 31 Jul 2018 | 5:18:33 UTC - in response to Message 50168.

... and no SLI bridge being on or present.

This reminds me: after re-installing the NVIDIA driver yesterday morning, I got some kind of warning in the lower right corner of the screen that the SLI bridge is missing. I don't remember whether the warning came from NVIDIA or from Windows.
Whether this (also) has to do with my problem or not - no idea ...

Anyway, all last night I crunched GPUGRID and LHC on Windows XP with no problem at all. It's still running fine.

Conclusion:
1) there does not seem to be any defective hardware
2) the problem clearly has to do with Windows 10, and maybe (although I am not 100% sure yet at this point) occurs only when GPUGRID is running.

So today I will take a closer look at the energy-saving settings, and I'll again run either Einstein or SETI tasks for a lengthy period of time.

flashawk
Message 50171 - Posted: 31 Jul 2018 | 5:43:15 UTC - in response to Message 50170.

You don't want to run your GPUs in SLI mode while crunching WUs; it doesn't cause problems to have the bridge on, just make sure SLI is turned off while running GPUGrid.

I'm not saying this will fix your problem, but I'm pretty sure it is a power-saving issue with your monitor/GPUs.

Erich56
Message 50172 - Posted: 31 Jul 2018 | 6:37:44 UTC - in response to Message 50171.

...but I'm pretty sure it is a power saving issue with your monitor\GPU's.

As I said, I need to do more testing with SETI and/or Einstein.

But if my first impression from yesterday's testing is right, then the problem might come up only when crunching GPUGRID, and NOT when crunching the other projects.

But this is not quite certain yet; I'll probably know more during the course of today.

Erich56
Message 50196 - Posted: 4 Aug 2018 | 14:09:35 UTC - in response to Message 50172.
Last modified: 4 Aug 2018 | 14:11:24 UTC

... probably I'll know more during the course of today.

Well, by now it seems pretty certain that the problem comes up only when crunching GPUGRID, but not with other projects like SETI and Einstein.

I saw an interesting thread on the Anandtech forum:

https://forums.anandtech.com/threads/gpu-tasks-are-causing-win10-machine-to-become-unresponsive-restart-fixed-by-disabling-sli.2526566/page-2

under the headline "GPU tasks are causing Win10 machine to become unresponsive/restart (fixed by Disabling SLI)".

Well, on my machine SLI is NOT enabled anyway.
What I am trying right now is to crunch with only one GPU instead of two. So let's see what happens.
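For reference, testing with a single card doesn't require pulling one out of the case: the BOINC client's cc_config.xml supports per-project GPU exclusions. A sketch, assuming the file sits in the BOINC data directory and that device 1 is the card to idle (check the device numbers in BOINC's event log at startup):

```xml
<!-- cc_config.xml: keep GPU device 1 away from GPUGRID only -->
<cc_config>
  <options>
    <exclude_gpu>
      <url>http://www.gpugrid.net/</url>
      <device_num>1</device_num>
      <type>NVIDIA</type>
    </exclude_gpu>
  </options>
</cc_config>
```

Restart the BOINC client (or tell it to re-read the config files) for the exclusion to take effect; omitting `<device_num>` would exclude all GPUs from the project.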

Richard Haselgrove
Message 50197 - Posted: 4 Aug 2018 | 15:03:11 UTC - in response to Message 50196.

You posted earlier that "I got some kind of warning that the SLI bridge is missing."

That can only have come from the NVidia driver. It sounds like SLI support is enabled in the driver, but not in your current hardware configuration. I'd turn it off from the 3D settings page in the NVidia Control Panel.

Erich56
Message 50198 - Posted: 4 Aug 2018 | 15:35:38 UTC - in response to Message 50197.

... I'd turn it off from the 3D settings page in the NVidia Control Panel.

I have looked it up now; it's deactivated there.

What I am doing right now is crunching with only one GPU instead of two. Let's see what happens.

Keith Myers
Message 50199 - Posted: 4 Aug 2018 | 17:22:55 UTC - in response to Message 50162.

My understanding is that NVidia cards are designed to power up using the 75W available from the PCIe slot, detect that the additional power cables are unconnected, and refuse to move out of a protective low-power state.

Thus, in a different state from total removal. Possibly safe, but not very informative.

It depends on the age of the card or the family type. Don't try that trick with a Kepler card. I forgot to plug in the PCIe power connectors for my dual 670s, and when I turned on the computer, both fans raced screaming to full rpm for a few seconds and the computer promptly shut down. Thankfully no damage was done, but it scared me silly.
