Message boards : Graphics cards (GPUs) : Gigabyte GTX 780 Ti OC (Windforce 3x) problems
Retvari Zoltan · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0
> GV-N78TOC-3GD (Rev. 1.0) is the same card as I got in Thailand, which has the problems. It works OK now with the memory downclocked, but runs at the same speed as my overclocked 780, which is disappointing (and also the same speed as my Titan, which is sitting unused :/)

Oh dear, such a waste. Just send me that unused Titan, and I'll put it in one of my hosts. :)
Retvari Zoltan
> Just for information, I went to the Gigabyte site and they are now selling a card called GV-N78TOC-3GD (Rev. 1.0). Is it the same as the one you described having problems?

Yes, it's the same; my card is rev 1.0.

> Other question: does this problem also occur with the similar GV-N78TGHZ-3GD?

Good question. I hope someone will answer that, as I don't plan to buy one just to find out.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
I still think their memory voltages are wrong: http://www.gpugrid.net/forum_thread.php?id=3584&nowrap=true#34849

FAQs · HOW TO: Opt out of Beta Tests · Ask for Help
Retvari Zoltan
Gigabyte released a new firmware (ver. F3) for this card in April. Its description begins with "Release for HYNIX Memory", so I thought it could fix my memory clock issue. I didn't have the time or motivation to upgrade back then - and forgot about it - but today I did upgrade the card's BIOS to F3. The good news is that the card has been crunching fine for about 15 minutes now (at a RAM clock of 3500MHz and a GPU clock of 1137MHz). I'll report the card's crunching status when I have something important to report :)
Joined: 19 Oct 13 · Posts: 15 · Credit: 578,770,199 · RAC: 0
Thanks for the update! Oh well :/ |
Joined: 11 Oct 08 · Posts: 1127 · Credit: 1,901,927,545 · RAC: 0
Forgive me, as I'm late to this thread, but... have you tried setting the GPU fans manually, via Precision-X or MSI Afterburner, to the maximum fan % allowed for that GPU, just to see if keeping the GPU cooler will have an effect? Set it to maximum for 2 days, to test, maybe? |
Joined: 5 May 13 · Posts: 187 · Credit: 349,254,454 · RAC: 0
I sympathize with you, Retvari; having problems with a new, shiny piece of kit is a frustrating experience. I'm in a similar situation, with a 750 Ti acting in a psychotic manner. I'm refusing to RMA it just yet and am trying to make it work (with mixed results), but it won't be long now; if it keeps producing many errors I will go ahead and return it. I feel you should do the same, especially with an expensive card like yours.

Seriously, if something (anything) is wrong in the hardware, how feasible is it to repair it yourself? Maybe it would have been realistic 15-20 years ago, when many things were soldered by hand, but today's PCBs are multi-layered and full of tiny components soldered by robots to amazing precision. It's far more likely that you will break something if you try to fix it, and you would void your warranty along the way: injury and insult at the same time! I say, just RMA it and keep your peace of mind! On the other hand, you may be very experienced with such work, in which case I wish you the best of luck!
Retvari Zoltan
> Forgive me, as I'm late to this thread, but... have you tried setting the GPU fans manually, via Precision-X or MSI Afterburner, to the maximum fan % allowed for that GPU, just to see if keeping the GPU cooler will have an effect? Set it to maximum for 2 days, to test, maybe?

I had set a manual fan 'curve' in MSI Afterburner before I got this card: 20°C:40% -> 80°C:100%. I've tried every trick in the book on this card; none of them helped except reducing the RAM clock to 2700MHz. This card rarely goes above 70°C: this workunit was processed at 33°C ambient temperature, and GPU 1's max temp was 70°C.
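For reference, a two-point fan curve like the one above is just a linear ramp. This is a hypothetical sketch of the curve described in the post (how Afterburner actually interpolates between user-set points is an assumption here):

```python
def fan_percent(temp_c: float) -> float:
    """Manual fan curve from the post: 40% at 20°C, ramping linearly
    to 100% at 80°C, clamped outside that range."""
    t_lo, p_lo = 20.0, 40.0   # lower curve point: 20°C -> 40%
    t_hi, p_hi = 80.0, 100.0  # upper curve point: 80°C -> 100%
    if temp_c <= t_lo:
        return p_lo
    if temp_c >= t_hi:
        return p_hi
    # Linear interpolation between the two points.
    return p_lo + (temp_c - t_lo) * (p_hi - p_lo) / (t_hi - t_lo)
```

With these endpoints the slope works out to 1% per °C, so the 70°C the card typically peaks at corresponds to a 90% fan duty cycle.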
Retvari Zoltan
There was a successful I770-SANTI_p53final at 3300MHz, but an e2s948_e1s373f83-SANTI_marsalWTbound2 and a 2x118-NOELIA_TRPS1S4 have failed. I then set the GPU RAM to 3200MHz, but this I1072-SANTI_p53final had some "Simulation unstable" messages, so now the card is down at 3100MHz, processing this 15x30-NOELIA_BI_3.
Retvari Zoltan
My Gigabyte GTX 780Ti OC has been crunching fine (i.e. without "Simulation unstable" messages) at a 3100MHz RAM clock for more than a day now. The source of the original problem could be a memory voltage / timing (latency) issue, which was addressed by the new BIOS release (F3) but not solved completely. With the new BIOS I could achieve a 400MHz higher clock speed, yet it's still 400MHz below nominal. Is there any tool to tweak a Kepler GPU's memory settings besides the clock (voltage, latency, etc.)?
skgiven
Your 3.1GHz ties in well with what I thought the situation might be. Assuming H5GQ2H24AFR R2C, these chips require 1.6V to support 3.5GHz. My suggestion would be to stick with it at 3.1GHz, if it proves to be stable, or sell the card and get an equivalent second-hand card that does run at 3.5GHz. 288larsson, who posted in this thread, also has a Gigabyte GTX 780 Ti OC (Windforce 3x) GPU. Alas, I don't know how to change the GDDR5 voltage.
Retvari Zoltan
I wrote to Gigabyte support; they replied that I should try my card the way I did back in December (which I'm sure will end with the same results, so I haven't redone my tests yet). I'll dismount the cooler once again and check the type of every RAM chip individually, and I'll also try to measure the operating voltage on the buffering capacitors.
Retvari Zoltan
> H5GQ2H24AFA R2C or,

You were right: my card is built with 8 pieces of H5GQ2H24AFR R2C. The only excuse for my mistake is that the font they use makes it very hard to tell the "A" apart from the "R", especially when oil from the thermal pads is covering the chip's package. I've got two more failures (NOELIA_THROMBIN units) on this card, so now my card is down at 3.0GHz. I've found a capacitor near the RAM chips (on the other side of the PCB) on which the voltage measures 1.58~1.633 volts. I think I should check it with an oscilloscope, to find out whether the capacitor is undersized. But first I have to know the right spot; I didn't find any info on this board's wiring.
Retvari Zoltan
There's a new BIOS version (F4) for this card on Gigabyte's website. I've flashed it to my card, but a task immediately failed at default clocks (3.5GHz GDDR5). I have to find the right GDDR5 frequency again by iteration with this new BIOS; for now it's down by 100MHz (at 3.4GHz).
Retvari Zoltan
The task running on this card got a couple of "Simulation became unstable" messages in the stderr.txt file, so I took down the GDDR5 clock by another 100MHz (now it runs at 3.3GHz). |
Retvari Zoltan
There were "Simulation became unstable" messages, and a failed WU at 3.3GHz. Now I'm testing the card at 3.2GHz. |
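The procedure in the last few posts, dropping the memory clock 100MHz at a time until tasks stop failing, amounts to a simple downward search. A minimal sketch, where `is_stable` is a hypothetical callback (e.g. run a workunit or an overnight stress test at the given clock and report whether it passed):

```python
def find_stable_clock(start_mhz, step_mhz, floor_mhz, is_stable):
    """Walk the memory clock down from start_mhz in step_mhz
    decrements until is_stable(clock) passes, giving up below
    floor_mhz. Returns the first stable clock found, or None."""
    clock = start_mhz
    while clock >= floor_mhz:
        if is_stable(clock):
            return clock
        clock -= step_mhz
    return None

# Example: a card that (hypothetically) is only stable at 3200MHz or below.
stable = find_stable_clock(3500, 100, 2700, lambda mhz: mhz <= 3200)
```

Each stability test here is slow (hours per clock step), which is why the thread proceeds one 100MHz step at a time rather than, say, bisecting.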
Joined: 11 Oct 08 · Posts: 1127
Have you considered running Heaven, to determine how far you may need to downclock? If you can get Heaven to run at max settings overnight with no issues, then I'd consider it stable. |
Retvari Zoltan
> Have you considered running Heaven, to determine how far you may need to downclock? If you can get Heaven to run at max settings overnight with no issues, then I'd consider it stable.

I did, when I first tested the card. The only application that failed was GPUGrid's; see the first post of this thread. BTW, the card seems to be stable at 3.2GHz, but different workunit batches could use different parts of the GPU. I suspect that something is wrong with the GDDR5 voltage, or with the power supply of the memory subsystem on this card series.
Joined: 11 Oct 08 · Posts: 1127
I apologize - although I did read most of the thread, I missed the part where you said you tested with Heaven. Nevertheless... did you just run a single benchmark with it? That's often not enough; usually it takes several hours to confirm stability. I'd be curious to see if you can get through an *overnight* test, running at [DirectX 11, Ultra, Extreme, x8 AA, Full Screen, 1920x1080], with:

- no application crashes
- no TDRs (as evidenced by dmp files in C:\Windows\LiveKernelReports\WATCHDOG)
- no strange display glitching (which would indicate memory corruption, possibly due to memory being clocked too high)

Sorry if you've done this, or you feel this is an inappropriate test. But I do recommend that you try it, if you haven't already. Just trying to help.
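Checking for the TDR evidence mentioned above can be done without clicking through Explorer by listing the dump files in that directory. A minimal sketch, assuming the Windows path named in the post (reading it may require administrator rights):

```python
from pathlib import Path

def watchdog_dumps(report_dir=r"C:\Windows\LiveKernelReports\WATCHDOG"):
    """Return the names of .dmp files in the WATCHDOG live-kernel
    report directory; each one records a TDR (display driver timeout
    and recovery) event. Returns [] if the directory doesn't exist,
    e.g. when no TDR has occurred or on a non-Windows machine."""
    path = Path(report_dir)
    if not path.is_dir():
        return []
    return sorted(f.name for f in path.iterdir() if f.suffix.lower() == ".dmp")
```

An empty list after an overnight Heaven run is one piece of evidence for stability; new entries dated during the run point to the memory clock still being too high.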
©2025 Universitat Pompeu Fabra