Message boards : Graphics cards (GPUs) : nVidia driver 340.52
Misfit (Joined: 23 May 08, Posts: 33, Credit: 610,551,356, RAC: 0)

I don't have (EVGA) Precision-X. Should I dump (MSI) Afterburner for it?
Jacob (Joined: 11 Oct 08, Posts: 1127, Credit: 1,901,927,545, RAC: 0)

I believe you can use Afterburner, as it functions nearly identically to Precision-X. You are looking to control the "GPU Clock" to ensure maximum stability, and Kepler GPUs ramp up and down in 13 MHz intervals.
Misfit (Joined: 23 May 08, Posts: 33, Credit: 610,551,356, RAC: 0)

Back to a full-night test of Heaven. Going by the 13 MHz interval, at -39 it ran for 9 hours before the Unigine engine stopped responding. Interestingly enough, at the next interval, -52, the engine stopped responding 7 hours in. Now they both pass the 5-hour mark, but I don't know if technically it's supposed to go on like the Energizer bunny.

> I believe you can use Afterburner, as it functions nearly identically to Precision-X. You are looking to control the "GPU Clock" to ensure maximum stability, and Kepler GPUs ramp up and down in 13 MHz intervals.

The Core Clock slider changes the base speed, which is then boosted. However, the boost always seems to be 78 MHz and always seems to be on if the card is running 3D apps. What was your source for the 13 MHz interval?
Jacob (Joined: 11 Oct 08, Posts: 1127, Credit: 1,901,927,545, RAC: 0)

Source is by experience. You can slide the slider around in 1 MHz intervals and watch whether your GPU's clock goes up or down. The registered clock should only "move" in 13 MHz intervals. I believe it works much like a CPU, where the resulting clock is 13 MHz times some multiplier. Similarly, I believe boost "adds a certain number of multipliers on top of the base"; that is why adjusting the base affects boost too, in 13 MHz intervals. Watch closely as you test it, record your values, and please tell me if I'm wrong.

Finally, you should be able to run Heaven for 5 months straight with no issues if your system is fully stable. I recommended 5 hours previously, but overnight is a very good test. If it crashed at all, no matter how far into the run, it means your core clock is too high. Again, from experience.
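As a rough illustration of the multiplier model described above, here is a minimal Python sketch; it is only a reading of this post, not documented NVIDIA behavior. The 13 MHz bin size comes from the thread, while the multiplier counts are invented for the example (six boost multipliers happen to equal the constant 78 MHz boost reported later in the thread).

```python
# Hypothetical model of "clock = 13 MHz x multiplier", with boost adding extra
# multipliers on top of the base. Bin size is from the thread; counts are made up.
BIN_MHZ = 13

def effective_clocks(base_multiplier: int, boost_multipliers: int) -> tuple[int, int]:
    """Return (base clock, boosted clock) in MHz for the given multiplier counts."""
    base = BIN_MHZ * base_multiplier
    boosted = BIN_MHZ * (base_multiplier + boost_multipliers)
    return base, boosted

# Six boost multipliers = 78 MHz, and lowering the base by one multiplier moves
# both the base and the boosted clock down by the same 13 MHz step.
print(effective_clocks(76, 6))  # (988, 1066)
print(effective_clocks(75, 6))  # (975, 1053)
```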
Beyond (Joined: 23 Nov 08, Posts: 1112, Credit: 6,162,416,256, RAC: 0)

Jacob, would the similar Unigine Valley benchmark test work as well as Heaven for this purpose?
Jacob (Joined: 11 Oct 08, Posts: 1127, Credit: 1,901,927,545, RAC: 0)

Valley would work, but it has been my experience that Heaven pushes the GPU harder than Valley.
Misfit (Joined: 23 May 08, Posts: 33, Credit: 610,551,356, RAC: 0)

Per GPU-Z, it showed a 1 MHz boost change per 1 MHz clock change. So the boost is staying constant at 78 MHz above the base.
Jacob (Joined: 11 Oct 08, Posts: 1127, Credit: 1,901,927,545, RAC: 0)

You need to be clearer in your findings. For me:

- I start GPU-Z.
- I click the question mark to begin the render test, which puts the GPU under load.
- In GPU-Z, I monitor the "GPU Core Clock" value on the "Sensors" tab.
- With Precision-X's "GPU Clock Offset" set to 0, GPU-Z shows a "GPU Core Clock" of 1241 MHz, which is my max boost.
- If I change Precision-X's "GPU Clock Offset" down a single MHz, to -1, and click Apply, I see GPU-Z's "GPU Core Clock" go from 1241 to 1228 MHz.
- If I change Precision-X to any value between -1 and -13 (using the arrow keys) and click Apply, GPU-Z still reports 1228 MHz.
- If I change Precision-X to -14 and click Apply, GPU-Z now reports 1215 MHz.

Using those steps, I conclude that my GTX 660 Ti GPUs (which use Boost v1.0) are incrementing and decrementing in 13 MHz intervals. Are you saying your GPU doesn't increment like that? Can you confirm with exact steps?

Thanks,
Jacob
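For what it's worth, the quantization described in that list can be captured in a short sketch. The formula below is only an inference from the four data points in the post (a 1241 MHz max boost and 13 MHz steps), not a documented rule, so treat it as an assumption to verify against your own card.

```python
import math

STEP_MHZ = 13          # step size reported in this thread
MAX_BOOST_MHZ = 1241   # the GTX 660 Ti's max boost with a 0 offset, per the post

def reported_clock(offset_mhz: int) -> int:
    """Clock GPU-Z would report for a given "GPU Clock Offset" (inferred behavior).

    The offset appears to be snapped down to the next 13 MHz boundary
    before being applied to the max boost clock.
    """
    return MAX_BOOST_MHZ + STEP_MHZ * math.floor(offset_mhz / STEP_MHZ)

# Reproduces the observations above:
assert reported_clock(0) == 1241
assert reported_clock(-1) == 1228
assert reported_clock(-13) == 1228
assert reported_clock(-14) == 1215
```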
Misfit (Joined: 23 May 08, Posts: 33, Credit: 610,551,356, RAC: 0)

I've tried to DL your Precision-X; however, it's currently unavailable due to some sort of copyright controversy. MSI has just released v4 of its Afterburner, so I've upgraded. I've also gone back to the 340.52 nVidia drivers.

> Source is by experience.

Are you SeanPoe?

> Are you saying your GPU doesn't increment like that? Can you confirm with exact steps?

Conditionally, yes and no. When I was adjusting the GPU speed I wasn't under a full load. Many times when I've adjusted with GPUGrid running, it crashed the WU. When I was running Heaven fullscreen, there was also no way of adjusting the speed without, at a minimum, exiting the window, which would instantly change what was shown in GPU-Z. So going from your last post and the guide in the link, my max Kepler boost before things start getting dropped by temperature is 7 offset steps (91 MHz). Running the GPU-Z render test does confirm the 13 MHz drop when I manually drop it by 1, as shown in your example.
Jacob (Joined: 11 Oct 08, Posts: 1127, Credit: 1,901,927,545, RAC: 0)

Precision X v4 can be downloaded here: http://www.techspot.com/downloads/5348-evga-precision-x.html

Also, you can run Heaven in non-full-screen mode by making sure the checkbox isn't checked when you run it.

I am not SeanPoe. Did you give me that link as a hint of some sort, that I should read through it?

The goal, in my opinion, should be to set the fan curve so that it reaches max fan before the thermal MHz-limiting threshold (70C for a Kepler Boost v1.0 GPU like yours, 80C for a Kepler Boost v2.0 GPU), and then to keep decreasing the core clock in Precision-X, in 13 MHz intervals, until Heaven can run overnight with no crashes and no TDRs logged in C:\Windows\LiveKernelReports\WATCHDOG.

Regards,
Jacob
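As a small aid for the last step, here is a hypothetical Python helper that lists any watchdog dumps created in the folder named above since a given time. The folder path is taken from the post; the cutoff time is something you supply, and reading the folder may require administrator rights.

```python
from datetime import datetime, timedelta
from pathlib import Path

# Folder where Windows writes TDR watchdog dumps, per the post above.
WATCHDOG_DIR = Path(r"C:\Windows\LiveKernelReports\WATCHDOG")

def dumps_since(run_start: datetime) -> list[Path]:
    """List watchdog dump files modified after the overnight run started."""
    if not WATCHDOG_DIR.exists():
        return []
    return [f for f in WATCHDOG_DIR.iterdir()
            if f.is_file()
            and datetime.fromtimestamp(f.stat().st_mtime) > run_start]

# Example: anything logged during the last 10 hours (i.e. an overnight run)?
print(dumps_since(datetime.now() - timedelta(hours=10)))
```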
Misfit (Joined: 23 May 08, Posts: 33, Credit: 610,551,356, RAC: 0)

Heaven ran overnight at -65 with no crashes and no watchdog dumps. However, even at 80% fan speed it was holding at 71C. With the noise at that speed I pretty much have a hair dryer at arm's length, which I can't have since this is my gaming rig and not a cruncher in the garage. Max temp under Heaven has never reached 80C, even when it was 80F inside the house and the fan speed set to auto; it sometimes hits 60%, which is audible but not bothersome. I can live with the card throttling down a step. (And my max setting will be somewhere between -65 and -53.)

The link is more for my benefit since I lost it once, but it could have been written by you; I see no harm in asking.

Thanks for the Precision link. I have it now and will see how it compares to Afterburner, although too much noise is still too much.
Jacob (Joined: 11 Oct 08, Posts: 1127, Credit: 1,901,927,545, RAC: 0)

You can set up a fan curve to suit your tastes. I prefer the curve to eventually hit max fan before the thermal limit, so MHz don't get limited. We were doing that here, for testing, just to ensure that the GPU did not downclock due to the thermal limit. Hope that makes sense.
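To make the shape of such a curve concrete, here is a sketch with made-up temperature/fan-speed points, chosen so the fan reaches 100% a few degrees before the 70C limit of a Boost v1.0 card. Real curves are configured in Precision-X or Afterburner; this only illustrates the idea.

```python
# Hypothetical fan-curve points (temperature in C -> fan %), picked so the fan
# hits 100% before a 70C thermal limit. Only the shape matters here.
FAN_CURVE = [(30, 30), (50, 50), (60, 75), (67, 100)]

def fan_percent(temp_c: float) -> float:
    """Linearly interpolate the fan speed for a given GPU temperature."""
    if temp_c <= FAN_CURVE[0][0]:
        return FAN_CURVE[0][1]
    for (t0, f0), (t1, f1) in zip(FAN_CURVE, FAN_CURVE[1:]):
        if temp_c <= t1:
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)
    return FAN_CURVE[-1][1]  # at or above the last point: max fan

print(fan_percent(65))  # ~93%, already near max well before the 70C limit
```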
Misfit (Joined: 23 May 08, Posts: 33, Credit: 610,551,356, RAC: 0)

It makes sense. Thanks, Jacob.

Here is something interesting. I did a boot scan (Avast) and it turned up a couple of corrupted BOINC files:

ProgramData\boinc\symbols\Kernelbase.pdb
ProgramData\boinc\symbols\ole32.pdb

So I completely uninstalled BOINC, wiped it off the drive and out of the registry, and reinstalled. Right now it's crunching away at a cuda60 WU where normally it would've crashed. I'm hoping I'll get lucky and this was the cause of the problem, although I have no idea what these files do or why they would only have affected the GPUGrid cuda60 app.
Jacob (Joined: 11 Oct 08, Posts: 1127, Credit: 1,901,927,545, RAC: 0)

I would envision that those files wouldn't matter. They are symbol files that are downloaded, I think, whenever a BOINC debug version crashes or is debugged. Don't count on anything related to those files having any effect on how GPUGrid runs.
(Joined: 11 Jan 13, Posts: 216, Credit: 846,538,252, RAC: 0)

Just wanted to confirm the 13 MHz steps in Kepler boost. My GTX 680 and GTX 780 Ti cards both step up and down in 13 MHz increments. I have a Temp Target of 68C set on my 780 Ti cards, and they will clock up or down in 13 MHz steps until they reach a speed where they can maintain the desired temperature.
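Purely as an illustration of that stepping behavior, the loop below drops a clock in 13 MHz increments until a temperature target is held. The real logic lives in NVIDIA's driver/firmware; read_gpu_temp() is a hypothetical stand-in for a sensor reading.

```python
import time

STEP_MHZ = 13        # Kepler boost step size discussed in this thread
TEMP_TARGET_C = 68   # temp target mentioned for the 780 Ti cards

def settle_clock(clock_mhz: int, read_gpu_temp) -> int:
    """Drop the clock 13 MHz at a time until the temperature target is held."""
    while read_gpu_temp() > TEMP_TARGET_C:
        clock_mhz -= STEP_MHZ
        time.sleep(1)  # let the card settle before re-reading the temperature
    return clock_mhz

# Example with a fake, steadily cooling "sensor":
temps = iter([74, 71, 69, 67])
print(settle_clock(1014, lambda: next(temps)))  # 1014 -> 975 after three steps
```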
Misfit (Joined: 23 May 08, Posts: 33, Credit: 610,551,356, RAC: 0)

> I would envision that those files wouldn't matter. They are symbol files that are downloaded, I think, whenever a BOINC debug version crashes or is debugged.

True, but I can't help wondering whether some file (the cuda60 exe) got borked. I've turned in 4 straight valid WUs since the wipe and clean install.

Well, at least now I know more about my card than I ever thought I would need. I know where the stable point is, and the problem appears to be fixed. Overnight I'll return the card to its default values and see if the instability churns up any errors; at least that way I can determine whether the card contributed to the problem.
Misfit (Joined: 23 May 08, Posts: 33, Credit: 610,551,356, RAC: 0)

Cuda60 has been churning along fine, and I've ditched the MSI app for the one from EVGA. Thanks for the help.