
Message boards : Graphics cards (GPUs) : A little warning to GTX690 owners

Profile Retvari Zoltan
Message 27390 - Posted: 23 Nov 2012 | 19:24:34 UTC

You might be interested in my long story if you run this type of card with the factory cooling.

After nearly 3 months of crunching 24/7, the temperature of the inner GPU on my GTX 690 went above 90°C while I was on vacation for a week. The other one was OK. Remotely I couldn't do anything else, so I disabled this GPU in BOINC to prevent something worse from happening to it. I thought that something blown into the grille was blocking the air, so it would be fine after I cleaned the grille. To my surprise, there was nothing in the grille, and this GPU's temperature was still high after a thorough cleaning with a high pressure duster.

Then I thought that the thermal compound had dried out, so I removed the heatsinks. That's easy to say, yet very hard to do: the small hex socket screws are mostly decorative: they hold the polycarbonate windows, while the tiny ones hold the plated aluminium frame (which is in the way of the heatsink), and every screw is threadlocked. The tiny ones' hex socket is so weak that the threadlock wins: the screwdriver turns in the socket but the screw doesn't. Because their heads are recessed, I had to cut a slot in the head of 3 screws to be able to unlock them with a much larger screwdriver. NVidia definitely does not want anyone to disassemble this card. (At least from the heatsink's side. From the PCB side there are Torx TX6 screws, which might be easier to remove, but there are so many of them, and so many SMDs nearby, that I didn't want to risk accidentally knocking any of them off.) Then I had to remove the magnesium fan housing as well, but that was a piece of cake (only 4 Phillips head screws hold it).

After all this disassembling adventure I put some fresh thermal grease between the GPU and the heatsink. There was another surprise: there is no IHS (Integrated Heat Spreader) on the GPU, and the heatsink has a vapor chamber (I had forgotten that). But after I reassembled the card and put it back into my PC, I was shocked to see that the temperature of the GPU was still 92°C. I checked the power consumption of both GPUs on this card, and they are the same (while the other one runs at only 65°C). I disassembled the card again (much easier this time) to check the spread of the new thermal compound (Noctua NT-H1), and it was quite thin and smooth.

So I came to the conclusion that the vapor chamber of the heatsink has failed on the inner GPU.
I had planned to change the cooler anyway, so I've ordered an Arctic Cooling Twin Turbo 690. I'll see if I'm right about this failure, and I'll post my findings with the new cooler.

Profile dskagcommunity
Message 27394 - Posted: 24 Nov 2012 | 0:28:13 UTC

Cooling problems on graphics cards are always a pain ^^ wish ya good luck with the new one.



Profile skgiven
Message 27395 - Posted: 24 Nov 2012 | 0:39:29 UTC - in response to Message 27390.

If it's any consolation I would have done the same thing, and I've used some cryo research kit. I expect you are right, the vapour chamber leaked - it happens.
I'm sure you checked that the fan was turning, the heatsink was tight...
Although you could have RMA'd it, that's always a pain to deal with. I hope the problem isn't anything to do with the VRM. So long as it's something to do with cooling, the Arctic GPU Cooler should fix it. They make great kit, though I recently had to remove a motherboard to dismount a wide Arctic heatsink (screwed into a CPU backplate, on the back of the motherboard), just to replace RAM.

Anyway, Good Luck

Alexey Kotenev
Message 27397 - Posted: 24 Nov 2012 | 2:50:06 UTC

Interesting story, thank you. Sometimes I have surges of temptation to buy a GTX 690 (or even two). But given its price, I am not ready to risk a malfunction, and I would not like troubling myself with fixing it. On the other hand, any equipment can fail, and a crunching one is at even higher risk, I think.

Profile GDF
Message 27399 - Posted: 24 Nov 2012 | 15:09:20 UTC - in response to Message 27397.

Interesting story. Let's see what happens.

gdf

Profile AdamYusko
Message 27404 - Posted: 24 Nov 2012 | 19:10:55 UTC

Heat issues always scare me. Now that I am running multiple machines and crunching so often, I figure I will eventually have to conquer my fear of dealing with thermal paste.

Thank you for the story, and I am sorry to hear about the difficulties you have had.


Profile Retvari Zoltan
Message 27405 - Posted: 24 Nov 2012 | 19:45:11 UTC - in response to Message 27395.

I expect you are right, the vapour chamber leaked - it happens.

Sure it happens, but all of my existing coolers have vapour chambers or heat pipes, and none of them has leaked before, even after two years of operation.

I'm sure you checked the fan was turning, heatsink was tight...

I did. If the fan wasn't rotating, the other GPU would overheat as well.

Although you could have RMA'd it, that's always a pain to deal with.

I bought it used from Slovakia (it was quite cheap); it is a replacement card (so the original one was RMA'd). Although I have its invoice, so I could RMA this one as well, it would be a long and difficult process.

I hope the problem isn't anything to do with the VRM.

To rule this out, I checked the power consumption of each GPU on the GTX 690, and they are within 5%, so the hot chip doesn't dissipate significantly more heat than the normal one.

So long as it's something to do with cooling the Arctic GPU Cooler should fix it. They make great kit, though I recently had to remove a motherboard to dismount a wide Arctic heatsink (screwed into a CPU backplate; on the back of the motherboard), just to replace RAM.

That's a bad design. I use a Noctua NH-D14. It has two big screws on the upper side of the MB holding the heatsink to a mount. It's still difficult to dismount, because I have to remove the middle fan before I can access these two big screws, and on some motherboards the GPU in the first PCIe slot is so close to the heatsink that I have to remove the GPU first to access the fan's lever.

Anyway, Good Luck

I'm keeping my fingers crossed that I haven't spent another 100 euros in vain.

Profile GDF
Message 27514 - Posted: 3 Dec 2012 | 11:56:40 UTC - in response to Message 27405.

Hi,
Is it possible to remove the fan from a GTX 690?
Would it complain that there is no fan, provided that it is well ventilated?

As far as I understand, the 690 spits air out of both the front and the back, while we would like the air to flow front to back.

gdf

Profile GDF
Message 27516 - Posted: 3 Dec 2012 | 12:04:06 UTC - in response to Message 27514.
Last modified: 3 Dec 2012 | 12:04:16 UTC

Do you know the power consumed running two acemd instances?

gdf

Profile Retvari Zoltan
Message 27517 - Posted: 3 Dec 2012 | 12:18:46 UTC - in response to Message 27514.

Is it possible to remove the fan from a GTX 690?

It's possible, but what for? I'm sure that the fan (and the airflow) is good.

Would it complain that there is no fan, provided that it is well ventilated?

If the temps stay low, it'll work.

As far as I understand, the 690 spits air out of both the front and the back, while we would like the air to flow front to back.

Yes, but this card has two GPUs with separate heatsinks, and it's not a good idea to cool one GPU with the hot air coming from the other GPU.
This card is in the open air, so there's no heat buildup.

Profile Retvari Zoltan
Message 27518 - Posted: 3 Dec 2012 | 12:23:15 UTC - in response to Message 27516.

Do you know the power consumed running two acemd instances?

The power consumption went up by 260W while both GPUs were crunching (and the fan revved up).
I've just received the new cooler, so I've removed this GTX 690 from my host, but I'll put it back as soon as I've finished changing the cooler.
Stay tuned.

Profile skgiven
Message 27519 - Posted: 3 Dec 2012 | 13:23:28 UTC - in response to Message 27514.
Last modified: 3 Dec 2012 | 16:49:15 UTC

Hi,
Is it possible to remove the fan from a GTX 690?
Would it complain that there is no fan, provided that it is well ventilated?

As far as I understand, the 690 spits air out of both the front and the back, while we would like the air to flow front to back.

gdf

Is this for your own GTX690?

I would be inclined to keep the fan and try to modify the casing so that it's blowing out the back/side/top.

A couple of days ago I was playing around with trying to better cool a GTX660Ti and a GTX470 in the same case. Both cards have 2 fans and blow the air all over the place. When I put an extra fan at the back of the case their temps actually got worse. Ditto when I added a fan to the front. I then placed a fan blowing directly onto the cards and both dropped their temperatures and then their fan speeds. Blasting works best for cooling, and good case fans help.

Normally you can remove a fan, but you can't crunch without it; the card will get too hot. I've done this several times with smaller cards when the fans started rattling. That said, I once removed a fan from a Gigabyte GT240 and it prevented the system from starting.
The power draw of many GF600 cards when crunching tends to be around 95% of reference TDP, which might prove challenging for a newer, more power hungry app, though I expect the cards just won't boost as high.


Profile GDF
Message 27520 - Posted: 3 Dec 2012 | 14:11:13 UTC - in response to Message 27518.

So a 1500W power supply should be able to cope with 4 GTX 690s?

Is it easy to disassemble the standard single fan as in this picture?
http://images.anandtech.com/doci/5805/GeForce_GTX_690_3qtr.jpg
I could not find any video or photos.

We will use external fans.

gdf

Profile skgiven
Message 27523 - Posted: 3 Dec 2012 | 16:55:29 UTC - in response to Message 27520.
Last modified: 3 Dec 2012 | 17:00:39 UTC

https://www.youtube.com/watch?v=KGXBZvS6qJc

1500W might not be enough for four GTX690's. Depends on the other specs. What are they?

Profile Retvari Zoltan
Message 27524 - Posted: 3 Dec 2012 | 17:20:28 UTC - in response to Message 27520.

So a 1500W power supply should be able to cope with 4 GTX 690s?

I saw a video in which a guy was using two GTX 690s with an 800W PSU, so 1500W should be enough for 4 GTX 690s.

Is it easy to disassemble the standard single fan as in this picture?
http://images.anandtech.com/doci/5805/GeForce_GTX_690_3qtr.jpg
I could not find any video or photos.

The fan is fastened by 3 Phillips type screws, but they are threadlocked, so you have to press the screwdriver very hard while you turn it.

We will use external fans.

4 GTX 690s with external fans? That's a very bad idea. You will fry your cards. These dual GPU cards were designed so that at most two of them go in a single PC. That makes a quad-SLI, so from the gamer's point of view there is no reason to put more than two dual GPU cards in a single PC. Placing 4 dual-slot GPUs in a single PC with air cooling is very dangerous.

Profile Retvari Zoltan
Message 27525 - Posted: 3 Dec 2012 | 17:32:28 UTC

I've finished changing the cooler on my GTX 690.
GPU temps are 56°C and 59°C.
It was much easier to remove the whole cooler assembly from the card than to remove only its front side.
This cooler is quiet and huge; it's a bit tricky to install.

Profile skgiven
Message 27526 - Posted: 3 Dec 2012 | 17:36:10 UTC - in response to Message 27524.
Last modified: 3 Dec 2012 | 17:37:07 UTC

Zoltan, how did you get on with your heatsink and fan replacement? OK, slow post!

https://www.youtube.com/watch?v=nSbDSwmvxjI&NR=1&feature=endscreen
Four GTX680's (probably with an OC'ed CPU). At 56 sec it hits 1 kW. There is a 105W TDP difference per card, so I'm just saying it's pushing it. It would depend a lot on the other components. I would also be concerned about PCIE bandwidth on the 3rd and 4th slots.

Profile Retvari Zoltan
Message 27527 - Posted: 3 Dec 2012 | 18:18:18 UTC - in response to Message 27524.

So a 1500W power supply should be able to cope with 4 GTX 690s?

I saw a video in which a guy was using two GTX 690s with an 800W PSU, so 1500W should be enough for 4 GTX 690s.

I've found this video (sorry, it's in Hungarian):
Corsair 800Watt PSU + 2x GTX690 QUAD SLI
Core i7 3770K @4.4GHz
Corsair GS800
4x4GB

ExtraTerrestrial Apes
Message 27528 - Posted: 3 Dec 2012 | 18:40:11 UTC

Sounds like GDF wants to build a monster cruncher to finish very important or huge jobs faster than GPU-Grid otherwise could. That's why he's shooting for the maximum number of cards. The system might be mounted in a server rack, so there will be massive airflow. I can't tell if it's enough, though.. 4 x 260 W = 1.04 kW is extreme.

GDF, did you already do something similar with older dual GPU cards? If that worked, the current config should work as well.

Supporting the GPUs should probably be a socket 2011 system, to get 4 x16 PCIe slots. Too bad they can't run PCIe 3 yet. The smallest 6-core CPU, or maybe the quad, should be enough. Make sure to use all 4 memory channels.

For the PSU: I'd try the Enermax Platimax 1.5 kW.

MrS

Profile skgiven
Message 27529 - Posted: 3 Dec 2012 | 20:53:03 UTC - in response to Message 27527.
Last modified: 3 Dec 2012 | 21:02:00 UTC

Zoltan, good to hear your GTX690 is up and running well. 56°C and 59°C is excellent for a dual card. A good purchase.

By my calculations the power draw of a four-GTX690 system would be at least 1350W, and probably over 1400W, and that's on an efficient system similar to the ones quoted. I would definitely test it at the wall, and possibly downclock here and there.
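
As a quick sanity check on that estimate, here is a minimal Python sketch of the arithmetic (the wattage figures are assumed round numbers for illustration, not measurements):

# Rough PSU sizing for a four-GTX690 cruncher; all figures are assumptions.
CARD_TDP_W = 300    # GTX 690 reference board power
NUM_CARDS  = 4
CPU_W      = 130    # assumed high-end CPU under load
OVERHEAD_W = 75     # assumed motherboard, RAM, drives, fans

load_w = CARD_TDP_W * NUM_CARDS + CPU_W + OVERHEAD_W
print(f"estimated DC load: {load_w} W")               # 1405 W

# A PSU shouldn't run flat out 24/7; keep ~20% headroom.
print(f"suggested PSU rating: {load_w / 0.8:.0f} W")  # ~1756 W

By that rough math a single 1500W unit is borderline for 24/7 crunching, which matches the caution above.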

I would caution against any sort of 130W to 150W CPU; 4 cores/8 threads would be sufficient. Definitely check the 12V rail. I'm guessing you want to run the same task across as many GPU's as possible, so I see your need. I would make sure the RAM is 1.5V. Seemingly small things like using a SATA6 SSD drive would be important.

The PCIE sockets are key. Although 2011 isn't as good in some areas, having 40 PCIE lanes it does allow for 2x16 PCIE2.0 + 1x8 PCIE2.0. Perhaps some boards allow for four slots at PCIE2.0 x8. I don't know if that's enough, but 1366 is also natively limited to 40 lanes. If you went with 1155 you would only have 32 lanes, although you do get one or two PCIE3.0 slots.

Fortunately there is a new alternative: motherboards with two PLX chips. These basically multiplex lanes and allow for four-way PCIE3.0 x16. For me it's the only realistic solution to fully accommodate four GTX690's without losing a significant amount of GPU performance.
Ref: ASRock X79 Extreme11 Review: PCIe 3.0 x16/x16/x16/x16 and LSI 8-Way SAS/SATA by Ian Cutress, Anandtech.

I would forget the case and build it directly into a lab rack. That way you could use cable risers if you were worried about heat, and even add a second PSU.

werdwerdus
Message 27531 - Posted: 3 Dec 2012 | 21:25:42 UTC

Risers are a good idea. If I were going to build a monster 4-GPU machine I would plan on using risers to keep the GPUs apart from each other.

I know PSU calculators aren't perfect but here is something to at least consider:



mikey
Message 27541 - Posted: 4 Dec 2012 | 13:46:15 UTC - in response to Message 27404.
Last modified: 4 Dec 2012 | 13:46:49 UTC

Heat issues always scare me. Now that I am running multiple machines and crunching so often, I figure I will eventually have to conquer my fear of dealing with thermal paste.


Thermal paste is NOT a problem, just get some good paste; I personally like Arctic Silver, but there ARE others too. ALSO, it IS possible to put too big a blob on there. I have seen examples of 'the size of a pea', but it is just doing it several times that will tell you how much is enough and not too much. You want enough to spread out over the whole CPU, but NOT out over the edges! Remember heat MUST go through the paste and into the heatsink/fan, and too thick a layer is not good for heat transfer. Basically don't worry, EVERYONE gets it wrong a few times before they sort of 'figure it out'. I have NEVER burnt up a CPU by putting on too much, but when the CPU DOES run hotter than I think it should, I take it apart and redo it. I use alcohol prep swabs to clean the fan and CPU, making sure everything is completely dry before redoing it.

Profile skgiven
Message 27542 - Posted: 4 Dec 2012 | 16:33:50 UTC - in response to Message 27541.

Thermal paste should be spread as thin as possible; although it's designed to transfer heat, being a liquid phase it inevitably can't do it as well as the metal solids on either side. So use just enough to actually cover and link the CPU and heatsink. This is why some people lap both the CPU and heatsink surfaces.
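
To see why thickness matters, here is a minimal Python sketch of the conduction estimate dT = P * t / (k * A), using assumed typical values for paste conductivity, contact area and load:

# Temperature drop across the paste layer: dT = P * t / (k * A)
P = 100.0   # watts crossing the interface (assumed CPU/GPU load)
k = 5.0     # W/(m*K), assumed for a good paste
A = 4e-4    # assumed 20 mm x 20 mm contact patch, in m^2

for t_mm in (0.05, 0.5):   # a thin film vs. a thick blob
    dT = P * (t_mm / 1e3) / (k * A)
    print(f"{t_mm} mm layer -> {dT:.1f} °C across the paste")
# 0.05 mm -> 2.5 °C, 0.5 mm -> 25.0 °C: ten times the thickness, ten times the drop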

You can also clean fans with water and a cotton bud, but the best fix is prevention: filters. I have a system almost a year old with filters at the front and side; it's still pristine inside, and I just need to vacuum the filters every few weeks. The other systems are not so clean ;p

ExtraTerrestrial Apes
Message 27549 - Posted: 4 Dec 2012 | 19:44:44 UTC

Some people may go mad about thermal paste.. but it's not that critical. Personally I like to position the heatsink and then twist it a bit back and forth before I fasten it. Doing so squeezes excess paste out to the edges (it doesn't hurt there, it's just wasted), and after several repetitions I can feel a stronger resistance to my torque, which I interpret as 'sitting tightly'. I've never had thermal paste related problems.. so just give it a try (if you have to).

MrS

Profile GDF
Message 27551 - Posted: 4 Dec 2012 | 19:59:34 UTC - in response to Message 27531.

Yes, we would like to try to build an 8 GPU machine made of 690s.

What are the screws to remove (3 or 4)?

gdf

mikey
Message 27566 - Posted: 5 Dec 2012 | 14:52:20 UTC - in response to Message 27549.

Some people may go mad about thermal paste.. but it's not that critical. Personally I like to position the heatsink and then twist it a bit back and forth before I fasten it. Doing so squeezes excess paste out to the edges (it doesn't hurt there, it's just wasted), and after several repetitions I can feel a stronger resistance to my torque, which I interpret as 'sitting tightly'. I've never had thermal paste related problems.. so just give it a try (if you have to).

MrS


I give mine a couple of little twists too before clamping it down.

GDF, are you making a BitCoin machine?

Profile skgiven
Message 27572 - Posted: 5 Dec 2012 | 19:02:54 UTC - in response to Message 27566.

Definitely not!

Profile Retvari Zoltan
Message 27577 - Posted: 5 Dec 2012 | 21:07:01 UTC - in response to Message 27551.

What are the screws to remove (3 or 4)?

3 screws. Not the visible ones. (The fan blades are shrouding them.)

I didn't find any pictures of them, so I can take some if you'd like me to.

Profile Gattorantolo [Ticino]
Message 27657 - Posted: 11 Dec 2012 | 21:41:31 UTC

I'm considering buying a PC with 4 water-cooled GTX 690s. SLI will be off. Do 4 GTX 690s work well together? The 690 is a dual GPU card, so will each card process 2 WUs at the same time?

Profile Retvari Zoltan
Message 27660 - Posted: 11 Dec 2012 | 21:58:25 UTC - in response to Message 27657.

I'm considering buying a PC with 4 water-cooled GTX 690s. SLI will be off. Do 4 GTX 690s work well together? The 690 is a dual GPU card, so will each card process 2 WUs at the same time?

It's very hard to get 8 GPUs working in a single PC. You need a special BIOS, and forget about Windows.

Dylan
Message 27661 - Posted: 11 Dec 2012 | 23:39:56 UTC

So couldn't someone then just get 3 690's and then 1 680?

Profile Retvari Zoltan
Message 27662 - Posted: 11 Dec 2012 | 23:44:31 UTC - in response to Message 27661.

So couldn't someone then just get 3 690's and then 1 680?

More than 4 GPUs is usually problematic, especially with Windows.

Profile Gattorantolo [Ticino]
Message 27667 - Posted: 12 Dec 2012 | 12:21:21 UTC - in response to Message 27660.

I'm considering buying a PC with 4 water-cooled GTX 690s. SLI will be off. Do 4 GTX 690s work well together? The 690 is a dual GPU card, so will each card process 2 WUs at the same time?

It's very hard to get 8 GPUs working in a single PC. You need a special BIOS, and forget about Windows.

Is that the reason why you have "only" 3 GTX 690s? Or is it not a problem with 3 GTX 690s and Windows?


Profile skgiven
Message 27669 - Posted: 12 Dec 2012 | 14:02:10 UTC - in response to Message 27667.

[3] NVIDIA GeForce GTX 690 (2047MB) driver: 310.33

This is actually one GTX 690 and another GPU (a GTX 670 or something).
BOINC reports each individual GPU. A GTX 690 has two GPUs, so BOINC reports two GTX 690s! It doesn't report the card count, just the GPU count, and it has to call each one something, so GTX 690 it is. BOINC also reports the first (or biggest/most powerful) GPU and then the number of GPUs. So if you have a GTX 680 in there, it will be reported as another GTX 690.

Profile Gattorantolo [Ticino]
Message 27680 - Posted: 12 Dec 2012 | 22:44:48 UTC - in response to Message 27669.
Last modified: 12 Dec 2012 | 23:03:16 UTC

[3] NVIDIA GeForce GTX 690 (2047MB) driver: 310.33

So that means 1x 690 and another GTX, right?

So the maximum number of 690s is 2?

Another question... with one 690 will the PC crunch 2 WUs at the same time?

What's the better solution: 2x690 or 4x680?

Profile Retvari Zoltan
Message 27681 - Posted: 13 Dec 2012 | 0:00:18 UTC - in response to Message 27680.

[3] NVIDIA GeForce GTX 690 (2047MB) driver: 310.33

So that means 1x 690 and another GTX, right?

Exactly. This host has one GTX 690 (reported by BOINC manager as two GTX 690s) and a GTX 670.

So the maximum number of 690s is 2?

Yes.

Another question... with one 690 will the PC crunch 2 WUs at the same time?

Yes.

What's the better solution: 2x690 or 4x680?

It's hard to tell. Both configurations have pros and cons.
2x690:
Pros: the motherboard needs only two PCIe x16 slots; fewer PCIe power cables are required (four 8-pin).
Cons: less overclockability; larger heat dissipation into the case (with the factory cooler).
4x680:
Pros: higher overclockability; the heat is dissipated only through the rear grille (with the factory cooler).
Cons: the motherboard has to have four PCIe x16 slots, and the PSU has to have four 8-pin and four 6-pin PCIe power cables.
I would choose the 2x690, with larger-than-factory coolers (or water cooling), and a motherboard with 4 PCIe x16 slots (on this kind of MB the GPUs sit 1 slot farther apart, allowing better airflow).

Profile Gattorantolo [Ticino]
Message 27684 - Posted: 13 Dec 2012 | 8:43:17 UTC - in response to Message 27681.

Thank you very much Zoltan :-)

Profile Gattorantolo [Ticino]
Message 27814 - Posted: 23 Dec 2012 | 10:40:01 UTC - in response to Message 27669.
Last modified: 23 Dec 2012 | 10:40:28 UTC

[3] NVIDIA GeForce GTX 690 (2047MB) driver: 310.33

This is actually one GTX 690 and another GPU (a GTX 670 or something).
BOINC reports each individual GPU. A GTX 690 has two GPUs, so BOINC reports two GTX 690s! It doesn't report the card count, just the GPU count, and it has to call each one something, so GTX 690 it is. BOINC also reports the first (or biggest/most powerful) GPU and then the number of GPUs. So if you have a GTX 680 in there, it will be reported as another GTX 690.

[6] NVIDIA GeForce GTX 690 (2048MB) driver: 301.42... what's this? 6 GPUs?

Profile Retvari Zoltan
Message 27816 - Posted: 23 Dec 2012 | 12:28:38 UTC - in response to Message 27814.

[6] NVIDIA GeForce GTX 690 (2048MB) driver: 301.42... what's this? 6 GPUs?

The really interesting part is that this host is running Windows 7 Ultimate x64.
I guess this host has a UEFI BIOS. I'm sure that besides crunching there is no use for a 3rd dual GPU card in a PC, because you can't connect the 3rd card to the other two with an SLI cable, since dual GPU cards have only a single SLI connector.

Jorge Alberto Ramos Olive...
Message 27835 - Posted: 25 Dec 2012 | 22:12:16 UTC

I'm not new to GPUGrid, but I've recently completed my new rig. It includes 2 GTX 690s (intended for GPUGrid) and one GT 640 (intended for SETI@home). I have one problem with GPU utilization; I hope to get advice from the masters ;)

BOINC recognizes all GPU's in my system:

25/12/2012 03:21:49 p.m. | | NVIDIA GPU 0: GeForce GTX 690 (driver version 310.70, CUDA version 5.0, compute capability 3.0, 2048MB, 8382371MB available, 3132 GFLOPS peak)
25/12/2012 03:21:49 p.m. | | NVIDIA GPU 1: GeForce GT 640 (driver version 310.70, CUDA version 5.0, compute capability 3.0, 1024MB, 836MB available, 692 GFLOPS peak)
25/12/2012 03:21:49 p.m. | | NVIDIA GPU 2: GeForce GTX 690 (driver version 310.70, CUDA version 5.0, compute capability 3.0, 2048MB, 1955MB available, 3132 GFLOPS peak)
25/12/2012 03:21:49 p.m. | | NVIDIA GPU 3: GeForce GTX 690 (driver version 310.70, CUDA version 5.0, compute capability 3.0, 2048MB, 1955MB available, 3132 GFLOPS peak)
25/12/2012 03:21:49 p.m. | | NVIDIA GPU 4: GeForce GTX 690 (driver version 310.70, CUDA version 5.0, compute capability 3.0, 2048MB, 1955MB available, 3132 GFLOPS peak)
25/12/2012 03:21:49 p.m. | | OpenCL: NVIDIA GPU 0: GeForce GTX 690 (driver version 310.70, device version OpenCL 1.1 CUDA, 2048MB, 8382371MB available)
25/12/2012 03:21:49 p.m. | | OpenCL: NVIDIA GPU 1: GeForce GT 640 (driver version 310.70, device version OpenCL 1.1 CUDA, 1024MB, 836MB available)
25/12/2012 03:21:49 p.m. | | OpenCL: NVIDIA GPU 2: GeForce GTX 690 (driver version 310.70, device version OpenCL 1.1 CUDA, 2048MB, 1955MB available)
25/12/2012 03:21:49 p.m. | | OpenCL: NVIDIA GPU 3: GeForce GTX 690 (driver version 310.70, device version OpenCL 1.1 CUDA, 2048MB, 1955MB available)
25/12/2012 03:21:49 p.m. | | OpenCL: NVIDIA GPU 4: GeForce GTX 690 (driver version 310.70, device version OpenCL 1.1 CUDA, 2048MB, 1955MB available)
25/12/2012 03:21:49 p.m. | | NVIDIA library reports 5 GPUs
25/12/2012 03:21:49 p.m. | | No ATI library found.


I have (I think) correctly configured the cc_config file to use the 690s for GPUGrid and the 640 for SETI; this is my config file:

<cc_config>
  <log_flags>
    <coproc_debug>1</coproc_debug>
    <sched_op_debug>1</sched_op_debug>
  </log_flags>
  <options>
    <use_all_gpus>1</use_all_gpus>
    <ncpus>0</ncpus>
    <report_results_immediately>1</report_results_immediately>
    <!-- device 1 is the GT 640: keep it off GPUGrid -->
    <exclude_gpu>
      <url>http://www.gpugrid.net/</url>
      <device_num>1</device_num>
    </exclude_gpu>
    <!-- devices 0, 2, 3 and 4 are the GTX 690 GPUs: keep them off SETI -->
    <exclude_gpu>
      <url>http://setiathome.berkeley.edu/</url>
      <device_num>0</device_num>
    </exclude_gpu>
    <exclude_gpu>
      <url>http://setiathome.berkeley.edu/</url>
      <device_num>2</device_num>
    </exclude_gpu>
    <exclude_gpu>
      <url>http://setiathome.berkeley.edu/</url>
      <device_num>3</device_num>
    </exclude_gpu>
    <exclude_gpu>
      <url>http://setiathome.berkeley.edu/</url>
      <device_num>4</device_num>
    </exclude_gpu>
    <max_file_xfers_per_project>4</max_file_xfers_per_project>
  </options>
</cc_config>


and this is the response I get from BOINC:

25/12/2012 03:21:49 p.m. | | Config: report completed tasks immediately
25/12/2012 03:21:49 p.m. | | Config: use all coprocessors
25/12/2012 03:21:49 p.m. | GPUGRID | Config: excluded GPU. Type: all. App: all. Device: 1
25/12/2012 03:21:49 p.m. | SETI@home | Config: excluded GPU. Type: all. App: all. Device: 0
25/12/2012 03:21:49 p.m. | SETI@home | Config: excluded GPU. Type: all. App: all. Device: 2
25/12/2012 03:21:49 p.m. | SETI@home | Config: excluded GPU. Type: all. App: all. Device: 3
25/12/2012 03:21:49 p.m. | SETI@home | Config: excluded GPU. Type: all. App: all. Device: 4


The GT 640 is crunching WUs from SETI, as expected. But my problem is that I only have 2 WUs for GPUGrid. And the curious thing is that one WU is on one card (say the card in PCIe slot 1) and the other is on the other card (say the one in PCIe slot 2). This leaves 2 GPU cores idle, one on each card.

I can confirm this using MSI Afterburner; these are the temps on the cards:

GPU1 (GTX690) 41 C
GPU2 (GTX690) 82 C
GPU3 (GTX690) 81 C
GPU2 (GTX690) 35 C
GPU2 (GT 640) 43 C


and by manually feeling the exhaust air temps on the cards: the first 690 is expelling hot air only on the "inner" side (inside the case), while the other 690 is expelling hot air on the "outer" side (away from the system grille).

What am I doing wrong here?

Thanks for your help!

Jorge Alberto Ramos Olive...
Message 27836 - Posted: 25 Dec 2012 | 22:15:19 UTC - in response to Message 27835.

errata:

GPU1 (GTX690) 41 C
GPU2 (GTX690) 82 C
GPU3 (GTX690) 81 C
GPU4 (GTX690) 35 C
GPU5 (GT 640) 43 C

Profile Gattorantolo [Ticino]
Message 27837 - Posted: 25 Dec 2012 | 22:31:19 UTC

Using a cc_config file to limit the use of the GPUs, it is possible that the BOINC manager stops fetching new WUs. I had the same experience with POEM and GPUGRID... after a little while my cache was empty and the BOINC manager didn't get new WUs! Removing the cc_config file solved the problem :-)

Jorge Alberto Ramos Olive...
Message 27838 - Posted: 26 Dec 2012 | 0:35:31 UTC - in response to Message 27835.

Mystery solved. I am now on the second batch for my new rig, and all four coprocessors are now running CUDA tasks. The earlier behaviour might have been some kind of "testing" of the cards from the project side.

Operator
Message 28138 - Posted: 23 Jan 2013 | 0:21:46 UTC

Zoltan;

I read with interest your problems with the 690 card.

I have a workstation that came with a Quadro 4000 and a Tesla.

I wanted two GTX 590 cards instead but was having problems with the "reverse airflow" causing the GPU closest to the front of the case to operate at least 20C higher than the one venting out the back of the case.

I just finished rigging a dual water cooling solution using a Koolance EXOS and two Danger Den water blocks. Both GTX 590s are now mildly OC'd and running at 53-57C.

I have never understood why Nvidia thought having the airflow going in two directions (front and back) was ever going to work properly in cases with fans blowing directly into the exhaust of the front GPU (like my Dell Precision). To me water cooling is the only reasonable solution for the longevity of the card.

I'm glad you got yours working properly!

Operator

ExtraTerrestrial Apes
Message 28162 - Posted: 23 Jan 2013 | 18:48:32 UTC - in response to Message 28138.

I have never understood why Nvidia thought having the airflow going in two directions (front and back) was ever going to work properly in cases with fans blowing directly into the exhaust of the front GPU (like my Dell Precision).

Maybe they never did? Reverse the front fans (and have an intake somewhere else) and the problem is gone.

MrS

Operator
Message 28164 - Posted: 23 Jan 2013 | 20:34:52 UTC - in response to Message 28162.
Last modified: 23 Jan 2013 | 20:45:37 UTC

The problem is that on the Precision chassis (as with most, if not all, Dell tower chassis) the airflow goes in the front (usually assisted by one or more fans) and out the back.

Reversing the huge front fan in my chassis would affect more than just the GTX 590 cards; I have to consider the secondary Xeon processor as well. So reversing that fan was not a good option for me, and water cooling was the answer.

I know now that it's easier to build a "killer cruncher" by buying components and doing all the engineering yourself than it is to try and take an off the shelf workstation and adapt it to the task of devouring WUs.

Next time I'll go with something like the EVGA SR2 and whatever the current best-in-class GPUs are, etc.

Operator

Profile Beyond
Message 28441 - Posted: 7 Feb 2013 | 15:19:43 UTC - in response to Message 28164.

I know now that it's easier to build a "killer cruncher" by buying components and doing all the engineering yourself than it is to try and take an off the shelf workstation and adapt it to the task of devouring WUs.

Easier, better and less expensive...

Profile Retvari Zoltan
Message 29575 - Posted: 27 Apr 2013 | 11:14:14 UTC - in response to Message 27390.

My only GTX 690 (the one with the failed, then replaced, cooling) failed yesterday while it was crunching. There was no environmental hazard of any kind.

GPUGRID
Message 29583 - Posted: 27 Apr 2013 | 21:49:04 UTC - in response to Message 29575.

My only GTX 690 (the one with the failed, then replaced, cooling) failed yesterday while it was crunching. There was no environmental hazard of any kind.


What temperature were you running it at, mate? I never let mine pass 75°C, and my 6 have been OK for almost a year now.

Profile Retvari Zoltan
Message 29584 - Posted: 27 Apr 2013 | 22:06:59 UTC - in response to Message 29583.
Last modified: 27 Apr 2013 | 22:13:41 UTC

The temperatures of the GTX 690 were 62°C and 65°C. Everything was OK. This host had a GTX 670 beside the GTX 690; the GTX 670 kept on crunching fine after the GTX 690 had failed. I didn't overclock the GTX 690; it simply broke down in the middle of crunching. This host has an Enermax MaxRevo 1500W power supply and an ASUS Rampage III Extreme motherboard.

Simba123
Message 29585 - Posted: 28 Apr 2013 | 2:04:17 UTC

What was the cause of the error, if you can tell? Was it temperature related, a NaN, or something else?

flashawk
Message 29587 - Posted: 28 Apr 2013 | 4:29:16 UTC - in response to Message 29585.

What was the cause of the error, if you can tell? Was it temperature related, a NaN, or something else?



It wasn't an error, from what he wrote; the video card died. I had a GTX 670 die 2 months ago for no reason (water cooled). It was an eVGA video card with a 3-year warranty, and I have it as a spare now after getting the replacement back.

ExtraTerrestrial Apes
Message 29590 - Posted: 28 Apr 2013 | 10:30:55 UTC

Sometimes hardware just breaks. If there was any observable, even remotely possible reason in his case he probably would have mentioned it.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Retvari Zoltan
Message 29592 - Posted: 28 Apr 2013 | 12:41:56 UTC - in response to Message 29585.

What was the cause of the error, if you can tell? Was it temperature related, a NaN, or something else?

The workunits didn't fail, only the card. The GTX670 finished the two workunits which the GTX 690 could process only partially.

GPUGRID
Message 30767 - Posted: 10 Jun 2013 | 19:28:59 UTC
Last modified: 10 Jun 2013 | 19:29:55 UTC

It seems that one of the 690s in my 3x690 rig has a leaked vapor chamber. Luckily, I was able to catch the temperature increase at exactly the right moment and underclock it. I've just ordered an Accelero cooler for this unit.
Retvari, a question: I can't figure it out for sure from the pics, but once in place, this card/cooler will use 3 slots, right?

Profile Retvari Zoltan
Message 30805 - Posted: 12 Jun 2013 | 20:44:24 UTC - in response to Message 30767.

It seems that one of the 690s in my 3x690 rig has a leaked vapor chamber. Luckily, I was able to catch the temperature increase at exactly the right moment and underclock it. I've just ordered an Accelero cooler for this unit.
Retvari, a question: I can't figure it out for sure from the pics, but once in place, this card/cooler will use 3 slots, right?

Yes. It's HUGE in every direction :). It's 5 cm taller and 2 cm longer than the card itself.

GPUGRID
Message 31026 - Posted: 25 Jun 2013 | 20:38:19 UTC

Man, it seems a pain to assemble. When everything was ready, I figured out that I didn't have the little wrench to remove the original screws from the back of the 690. Is it a micro Allen wrench? It's so small that I can't even see it right!

Profile Retvari Zoltan
Message 31031 - Posted: 25 Jun 2013 | 22:39:05 UTC - in response to Message 31026.

Man, it seems a pain to assemble.

It takes a while. However, I made my job easier by using nail polish (enamel) instead of the insulating tapes.

When everything was ready, I figured out that I didn't have the little wrench to remove the original screws from the back of the 690. Is it a micro Allen wrench?

No, it's a Torx (a TX5, if I recall correctly, but I will check it tomorrow for you).

It's so small that I can't even see it right!

They are. I have reading glasses for that purpose :)

GPUGRID
Message 31045 - Posted: 26 Jun 2013 | 23:10:17 UTC - in response to Message 31031.
Last modified: 26 Jun 2013 | 23:37:59 UTC

LOL thanks mate. I will look for that wrench... I really need it to put my hands on the card again. In the meantime, it's crunching at 915 MHz, 0.975 V, 64°C...

Profile Retvari Zoltan
Message 31122 - Posted: 29 Jun 2013 | 8:01:19 UTC - in response to Message 31045.

LOL thanks mate. I will look for that wrench...

It's a Torx for sure; the only question is its size. It could be a TX6 or a TX5. I've found my screwdriver set and the separate screwdriver I bought for disassembling video cards (it's a TX6), but I didn't have time to check which one fits exactly in those screws. (I was busy selling one of my old MB+CPU combos and buying a new one.)

GPUGRID
Message 31128 - Posted: 29 Jun 2013 | 15:50:10 UTC
Last modified: 29 Jun 2013 | 15:53:05 UTC

I will buy an entire set, don't bother...
But I was wondering... you said that card died after you changed the cooler, right? I was looking at the cooler and reading the assembly instructions, and they barely touch on a short circuit issue on the RAM and VRM heatsink, because the assembly involves insulation tape in that area (step 6 - preparation).
Do you think that may have happened to your card? NOW I'm scared..

Profile Retvari Zoltan
Message 31130 - Posted: 29 Jun 2013 | 20:33:04 UTC - in response to Message 31128.

I will buy an entire set, don't bother...
But I was wondering... you said that card died after you changed the cooler, right? I was looking at the cooler and reading the assembly instructions, and they barely touch on a short circuit issue on the RAM and VRM heatsink, because the assembly involves insulation tape in that area (step 6 - preparation).
Do you think that may have happened to your card? NOW I'm scared..

After months of operation? I don't think so. That is where I used nail polish instead of the insulating tape, because I was afraid that the tape would peel off when the heatsink gets hot.

GPUGRID
Message 31132 - Posted: 30 Jun 2013 | 1:59:05 UTC - in response to Message 31130.

I will buy an entire set, don't bother...
But I was wondering... you said that card died after you changed the cooler, right? I was looking at the cooler and reading the assembly instructions, and they barely touch on a short circuit issue on the RAM and VRM heatsink, because the assembly involves insulation tape in that area (step 6 - preparation).
Do you think that may have happened to your card? NOW I'm scared..

After months of operation? I don't think so. That is where I used nail polish instead of the insulating tape, because I was afraid that the tape would peel off when the heatsink gets hot.

That's what I was thinking! I will make a double insulation there... ty for your thoughts!

GPUGRID
Message 31173 - Posted: 2 Jul 2013 | 21:44:26 UTC
Last modified: 2 Jul 2013 | 22:21:20 UTC

I installed the beast today. It's really huge. It was a Torx 6 tool that was needed; really necessary for messing with a 690 cooler. Once the original cooler is out, the assembly of the Accelero is not that hard. The performance, on the other hand, is somewhat worse than I expected; I don't think it's even on a par with the original cooler. I can't tell about the noise, because the machine has 2 other original 690s, but the cooling is not that good.
So if you have a leaked chamber (a kinda common flaw, it seems), it's a must have. But if not, don't mess with your 690...

Profile Retvari Zoltan
Message 31175 - Posted: 2 Jul 2013 | 23:08:45 UTC - in response to Message 31173.
Last modified: 2 Jul 2013 | 23:10:31 UTC

Once the original cooler is out, the assembly of the Accelero is not that hard. The performance, on the other hand, is somewhat worse than I expected; I don't think it's even on a par with the original cooler.

It's much better than the original cooler; however, it needs more fresh (cool) air than the original (that is how it can be better), so the more other GPUs there are in the system, the less gain in cooling performance.

I can't tell about the noise, because the machine has 2 other original 690s, but the cooling is not that good.

It's much less noisy. But its performance is best when there is at least 1 slot of space between the GPUs.

So if you have a leaked chamber (a kinda common flaw, it seems), it's a must have. But if not, don't mess with your 690...

I would say: if you have only 1 GTX 690, it's worth the mess anyway :)

Profile Retvari Zoltan
Message 33511 - Posted: 15 Oct 2013 | 22:35:01 UTC

It seems that the long story of my late GTX 690 isn't over. At first one of the vapor chambers failed, later the whole card.
This card was in an ASUS Rampage III Extreme motherboard.
This motherboard is still operational; it runs a GTX 680 and a GTX 670 at the moment.
I'm sharing the aftermath of my GTX 690's failure, as I think it reveals the source of that card's final failure, so it can be very useful for others as well. A wise man learns from others' troubles (as we say in Hungary). So, be wise.

It happened last Thursday. This host was off when I got home, while the others were running fine. This was suspicious from the first moment, but I turned this PC on to check whether it could start up normally. It booted fine, and both GPUs were working fine, but after about half an hour of operation the host suddenly turned itself off. I've seen such behavior before, when the 8-pin CPU power connector (actually only the four 12V pins) burned out on my other motherboard (also an ASUS), so I immediately turned the power switch off and began checking all the power connectors. This MB has two 8-pin CPU power connectors, so I would have been very surprised if they had burned out, and they hadn't. The GPU power connectors were also fine. This PC has a modular PSU (Enermax MaxRevo 1500W), so I had to check both ends of all the cables. Only the 24-pin ATX motherboard power connector remained unchecked, because it is the hardest to disconnect. As I expected after checking all the other power connectors, the two 12V pins on this one were burned out.
These two 12V pins power the RAM and also the PCIe slots, and therefore the GPUs (each card can draw at most 75W from its PCIe slot, that is 6.25A @ 12V).
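
As a rough illustration of how little contact resistance it takes to cook a pin at that current, here is a minimal Python sketch of the I²R heating in a single pin (the resistance values are assumptions for illustration; the load follows the figures above):

# Heating in one ATX 12V pin: P = I^2 * R
V, SLOT_W, PINS = 12.0, 75.0, 2   # rail volts, max slot draw per card, 12V pins in the 24-pin plug
CARDS = 2                         # e.g. a GTX 690 plus a GTX 670 drawing slot power

amps_per_pin = (SLOT_W * CARDS / V) / PINS    # 6.25 A per pin
for r_mohm in (5, 50):            # assumed fresh vs. oxidized contact resistance
    p = amps_per_pin**2 * r_mohm / 1000
    print(f"{r_mohm} mOhm contact -> {p:.2f} W in a single pin")
# 5 mOhm -> 0.20 W (fine); 50 mOhm -> 1.95 W, enough to char a thermoplastic housing over time
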
Why do I think that this failing connector killed my GTX 690?
Because I have an old GTX 480, and I put it into this MB after I fixed these two pins. The VGA output of this old GTX 480 becomes 'noisy' (darker horizontal lines flashing very rapidly on the screen) when running GPUGrid on it - but not in this MB, since I soldered the two 12V cables directly to their pins on the MB. So the two 12V pins could not provide enough power to the PCIe slots while they were connected in the original way, and that could have been the source of my GTX 690's demise.
To fix the connection of the 12V rail, I removed the two pins from the connector, and I cut the plastic off the connector around these two pins on the MB as well. Then I polished these two pins (they had turned black from the burning plastic), and I soldered a thick but short solid copper wire to the two pins on the MB, and the two 12V cables to its upper side.
I don't understand why the fan connectors are gold plated while the power connectors aren't, when the latter have to carry 10 times more amps. And why are the housings of the PSU's power connectors made of thermoplastic? The parts in the host in question are of the most expensive kind, so their price couldn't be an excuse for this silliness. Furthermore, there is no point in putting 5V, 3.3V, -12V and -5V rails on the MB and the PSU anymore, because every part is using lower and lower voltages. Only the +12V rail is used to power everything on the MB (through programmable DC-DC converters), so the ATX power connector standard is obsolete in its present form; the existence of the separate CPU and GPU power connectors is the proof of it. I liked the design of the old power connectors much, much more.

So the lessons from this long story are:
- Do not use any power cable converters or extenders, as they add extra contact resistance (should I say, do not use modular PSUs?).
- Regularly check all power connectors for burn marks.
- If you put more than one high-end GPU on a single MB, use the extra power connectors on the MB, if there are any. If there aren't, then don't put more than one high-end GPU on that MB.

Profile skgiven
Message 33525 - Posted: 16 Oct 2013 | 21:41:05 UTC - in response to Message 33511.
Last modified: 17 Oct 2013 | 17:13:17 UTC

Thanks for sharing your experience. It sounds like the issue was a lack of quality in the modular power cables/PSU rails, or a connection issue (loose connections cause burn marks). Did you ever OC that card?

I recently had a 700W PSU fail, and I still have the cables - they lack any quality: cheap alloy connectors and sharp-edged plastic ends (not something they advertise). It was bought as a spare, supposedly 80%+ efficient, but it wasn't worth the time, money or effort. Buy quality, buy once, no regrets...

As a general rule of thumb, the more efficient a PSU is, the better quality its components are, and you can usually tell from the weight. Power efficiency is lost with any cheap components, including cabling and connectors. Anything below 85% efficiency these days is suspicious; you can now get PSU's with 93% efficiency.

There are powerful PSU's and there are quality PSU's - they are not necessarily both. A quality modular PSU with quality cabling is a great choice for a powerful system and offers flexibility and tidiness. If it weighs next to nothing and has a sticker of a dragon on the side, expect fire, smoke and failure.

In the past I've used many PSU's that say they can supply 500 to 800W but failed within 18 months (usually within 6). I've also used a 550W Corsair PSU (of reasonable quality) to support two 215W GPU's on an overclocked i7 for over a year, 24/7, without issue. The system drew >450W continuously.

I have two GPU's which use a 6-pin (75W) and an 8-pin (150W) connector. While the reference models are rated at 170W and 230W, when crunching GPUGrid WU's it's unlikely they would need to draw any power from the PCIE slot. That makes them much more reliable. Similarly, the two 6-pin power connectors (150W) of my GTX660Ti can supply enough power to the card should the PCIE power be substandard. The 170W TDP of the reference GTX760 made me suspicious: there are models that require one 8-pin power connector and models that use one 6-pin and one 8-pin connector.

A few times I had to use a 24-pin ATX power extender (bad MB/PSU design). They are terrible - avoid them and get a proper PSU/MB. The pins can sometimes be pushed back into the connector, so that they don't make contact.

BTW. Not every PCIE slot can provide 75W!
