Video Card Longevity

mike047 (Joined: 21 Dec 08 · Posts: 47 · Credit: 7,330,049)
Message 5638 - Posted: 15 Jan 2009, 13:21:21 UTC

Does anyone have any first-hand experience with failures related to 24/7 crunching, overclocked or stock?

I ask because a posting on another forum claimed failures caused by the stress of crunching 24/7. No specifics were given, so I don't really think it was a valid claim.

Anyone?
mike

X1900AIW (Joined: 12 Sep 08 · Posts: 74 · Credit: 23,566,124)
Message 5642 - Posted: 15 Jan 2009, 14:43:16 UTC - in response to Message 5638.

Of course I have had failures with overclocking - right after the stress-test run (which should be done in any case before working on WUs). In my opinion this is not a matter of 24/7 operation but of stability, and of the time you invest in testing and adjusting your clock rates and fans (don't forget the case fans!).

I overclocked both of my GTX 260s quite aggressively with RivaTuner, and afterwards flashed the BIOS with those settings, fan settings included. If cooling and temperature are under control, only different WUs (for example the upcoming "big" WUs in Folding@home) can compromise your OC settings. Hardware issues can never be ruled out, whether you overclock or not. Nobody can guarantee 24/7 operation; crunching is always a bit risky, whatever you do, especially in beta projects.

My new 9800GX2 runs at stock for now, because I have no experience with that monster. It went straight onto GPUGRID, but cooling seems to be fine. [I swapped the GTX 260 for the 9800GX2 in the middle of a WU, and it keeps working.]

Find your best settings (stock or overclocked, with a fixed fan speed) and then reduce the clock a bit to leave some tolerance. In my opinion, don't count on the driver's automatic fan control; if you are thinking about 24/7 operation, fix the fan speed, both to keep the temperature under control and to keep the fans from wearing out by constantly spinning up and down.
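
To make the "fix the fan speed and keep an eye on temperatures" advice concrete, here is a minimal monitoring sketch. It assumes a modern NVIDIA driver whose nvidia-smi supports --query-gpu (tools of this thread's era exposed temperatures differently), and the 85 °C alert threshold is just an illustrative choice, not a vendor limit.

```python
# Minimal GPU temperature watchdog -- a sketch only, assuming a driver whose
# nvidia-smi supports --query-gpu; the alert threshold is an arbitrary example.
import subprocess
import time

ALERT_C = 85          # illustrative threshold, not a vendor specification
POLL_SECONDS = 30

def gpu_temperatures():
    """Return the core temperature (degrees C) of each NVIDIA GPU."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [int(line) for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    while True:
        for idx, temp in enumerate(gpu_temperatures()):
            warn = "  <-- check your fan settings!" if temp >= ALERT_C else ""
            print(f"GPU {idx}: {temp} C{warn}")
        time.sleep(POLL_SECONDS)
```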

Good luck.

mike047 (Joined: 21 Dec 08 · Posts: 47 · Credit: 7,330,049)
Message 5643 - Posted: 15 Jan 2009, 14:55:40 UTC

Thanks for the response; I guess I should have worded my query differently.

I am interested in complete failure of the video card from crunching.
mike

X1900AIW (Joined: 12 Sep 08 · Posts: 74 · Credit: 23,566,124)
Message 5645 - Posted: 15 Jan 2009, 15:11:29 UTC - in response to Message 5643.

Do you mean an irreparable, dead-card failure? Or a temporary malfunction? Just crunching (shader usage) failing, or the 2D output collapsing as well?

I have heard about some cases in the Folding@home forum. See:
http://foldingforum.org/viewforum.php?f=49
http://foldingforum.org/viewforum.php?f=38

mike047 (Joined: 21 Dec 08 · Posts: 47 · Credit: 7,330,049)
Message 5646 - Posted: 15 Jan 2009, 15:14:14 UTC - in response to Message 5645.

Ruin of the card to the point of it being unusable.

There are those who contend that 24/7 crunching will destroy a video card.
mike

Paul D. Buck (Joined: 9 Jun 08 · Posts: 1050 · Credit: 37,321,185)
Message 5647 - Posted: 15 Jan 2009, 16:22:55 UTC

The trouble is the lack of context.

I have run computers 24/7 for years with no problems. There is also a group that thinks that when you run computers you must run them right up to the edge of stable performance. I have long taken the stance that over-clocking is not a good thing for scientific computing. That does not mean that I think those who over-clock are evil ...

All that being said, it is certainly possible that if you take the computing equipment to the edge, and are not that skilled in maintaining machines tweaked to that performance level, you can experience machine failures due to heat (primarily) or voltage (because of mis-adjustment) ...

And some cases are simply not configured to remove the heat when you add several, or even one, high-performance GPUs and then run them at full speed 24/7 ...

Oh, well, just my thoughts ...

mike047 (Joined: 21 Dec 08 · Posts: 47 · Credit: 7,330,049)
Message 5650 - Posted: 15 Jan 2009, 19:02:30 UTC - in response to Message 5647.

Can I assume that you have no failures to discuss?
mike

Nightlord (Joined: 22 Jul 08 · Posts: 61 · Credit: 5,461,041)
Message 5651 - Posted: 15 Jan 2009, 19:03:26 UTC

If it helps, I have several cards here that have run 24/7 on GPUGrid since July last year with no failures.

I have also never lost a CPU, RAM or a hard drive to 24/7 crunching. I did damage a mobo some years ago, but that was my own stupidity, coupled with a live PSU and a screwdriver.

Your mileage may vary.


Paul D. Buck (Joined: 9 Jun 08 · Posts: 1050 · Credit: 37,321,185)
Message 5653 - Posted: 15 Jan 2009, 19:21:43 UTC - in response to Message 5650.

Can I assume that you have no failures to discuss?


No, I do not. Do you?

What I was saying is that some people report failures and blame GPUGRID, BOINC, etc., when the real issue is that these programs run the system at full speed for long periods of time, which does, in fact, stress the system they run on.

If there is a problem, or a weakness, in the system, a program such as BOINC is probably going to push the system over the brink ... is that the fault of BOINC? Not really ...

Just as race cars lose engines through explosions and other catastrophic failures because they are pushed to the edge, where any minor flaw or event will cause a failure, so it is with BOINC ...

mike047 (Joined: 21 Dec 08 · Posts: 47 · Credit: 7,330,049)
Message 5655 - Posted: 15 Jan 2009, 20:53:59 UTC - in response to Message 5651.

This is what I have found everywhere I have asked. I had assumed there would be no big issues and had said so ... but was told [without foundation] that a card's longevity would be severely shortened by crunching.

Thanks for your input.
mike

mike047 (Joined: 21 Dec 08 · Posts: 47 · Credit: 7,330,049)
Message 5656 - Posted: 15 Jan 2009, 20:55:03 UTC - in response to Message 5653.

Thank you for your input.
mike

ExtraTerrestrial Apes (Volunteer moderator · Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549)
Message 5657 - Posted: 15 Jan 2009, 21:08:17 UTC (last modified: 15 Jan 2009, 21:09:53 UTC)

Guys, this is a serious topic. I could talk a lot about this, but will try to stay focused. Feel free to ask further questions!

Basically there are three kinds of chip failures:

1. Transient errors: you push your system to its clock speed limits and it fails (or just fails occasionally). You reboot and everything is fine again. We're not concerned about these; just back off a few MHz and you're good to go.

2. Catastrophic failures: a chip suddenly fails and refuses to work; it's broken. This just happens, and short of not running your machines at all (let alone under load) there is no way to avoid it. Luckily such chip failures are very rare - I think power supply circuitry breaks much more often than the chips do.

3. Decay of chips: this is what we should be concerned about, and what I'll talk about in a bit more detail.

What does this decay look like?

At a given voltage any chip can run up to a certain frequency; if pushed higher, some transistors (actually entire data paths) cannot switch fast enough and the operation fails. This maximum frequency is determined by the slowest element. During operation, current flows through the transistors in the form of electrons, and this current causes microscopic changes in the atomic structure which gradually degrade transistor performance.

Thus, over time the transistors get worse and the chip can no longer reach as high a frequency as it did in the beginning. Or, equivalently, it needs a higher voltage to maintain a given speed.

Usually we don't notice this decay because the manufacturers build enough headroom into their chips that they will long have been retired before the effect kicks in. It's only when you push a chip to its limit that you notice the change. Ever wondered why your OC fails at the beginning of a new summer, when it worked perfectly last year? That's the decay. Usually it's not dramatic: at 24/7 load, stock voltage and adequate cooling I'd estimate 10 - 50 MHz per year.
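
To put that estimate into perspective, here is a back-of-the-envelope sketch. The 10 and 50 MHz/year figures are MrS's rough estimate from the paragraph above; the 60 MHz safety margin is an invented example, not a recommendation.

```python
# Back-of-the-envelope: how long until gradual decay eats an OC safety margin?
# Decay rates are the rough estimate quoted above; the margin is made up.
def years_until_unstable(margin_mhz, decay_mhz_per_year):
    return margin_mhz / decay_mhz_per_year

margin = 60  # e.g. the card is stable at 760 MHz but you run it at 700 MHz
for decay in (10, 50):  # MHz of headroom lost per year at 24/7 load, stock voltage
    print(f"{decay} MHz/year decay -> roughly {years_until_unstable(margin, decay):.1f} years of headroom")
```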

So what can make this decay matter?

In short: temperature and voltage. Temperature increases the "decay rate" a little - or, if you just watch components until they finally break, the failure rate. An old rule of thumb is "half the lifetime for every 10 degrees more". I'm not sure how appropriate this still is.. the laws of physics tend to be rather time-independent, but our manufacturing processes are changing.
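
For illustration, the "half the lifetime for every 10 degrees" rule of thumb is easy to turn into numbers. This minimal sketch treats the rule as exact, which real silicon certainly is not; the 70 °C reference point is an arbitrary choice.

```python
# Relative expected lifetime under the "half the lifetime per +10 C" rule of
# thumb quoted above -- an illustration of the rule, not a physical law.
def relative_lifetime(temp_c, reference_c=70.0):
    """Lifetime relative to running at reference_c, halving for every +10 C."""
    return 0.5 ** ((temp_c - reference_c) / 10.0)

for t in (60, 70, 80, 90):
    print(f"{t} C: {relative_lifetime(t):.2f}x the lifetime at 70 C")
```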

So temperature is to be avoided, but now comes the kicker: voltage is the real killer! Its effect on chip lifetime is much more severe. I can't give precise numbers, but as long as you stay within the range of stock voltages you're surely fine. Example: 65 nm C2D / C2Q chips are rated up to 1.35 V. So increasing your voltage from 1.25 V to 1.30 V does hurt your chip, but the effect is not dramatic - you'll still be able to use the chip for a very long time. But going to 1.45 V or higher.. I really wouldn't recommend it for 24/7. Personally, my OC'ed 65 nm C2Q is set to 1.31 V, which amounts to 1.22 V under load, and I'm fine with that.
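
One way to see why voltage is so much harsher than frequency: to first order, dynamic power scales as P ≈ C·V²·f, so frequency enters linearly but voltage quadratically, and the underlying wear mechanisms rise even faster with voltage. The sketch below shows only the power side; the clock/voltage pairings are invented examples in the range discussed above, and this is not a lifetime model.

```python
# First-order CMOS dynamic power scaling, P ~ f * V^2.  Electrical stress
# illustration only -- not a lifetime prediction.  Operating points are
# invented examples in the range discussed in the post above.
def relative_power(freq_ghz, volts, ref_freq_ghz=2.40, ref_volts=1.25):
    return (freq_ghz / ref_freq_ghz) * (volts / ref_volts) ** 2

cases = [
    ("stock          2.40 GHz @ 1.25 V", 2.40, 1.25),
    ("mild OC        3.00 GHz @ 1.31 V", 3.00, 1.31),
    ("aggressive OC  3.20 GHz @ 1.45 V", 3.20, 1.45),
]
for label, f_ghz, v in cases:
    print(f"{label}: {relative_power(f_ghz, v):.2f}x stock dynamic power")
```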

Some consequences:

If people push their chips to high voltages and they fail "suddenly", that is actually rapid decay due to the voltage. The time scale is different, but the mechanism is the same.

The common wisdom of "just increase voltage & clock as long as temps are fine" is not true. If it weren't for the large safety margins built into these chips, people would kill many more of them with such OC.

"Overclocking can kill your chip!" - true, in the sense that it can.. but it's very, very unlikely unless you apply high voltages or totally forget about cooling. Overclocking itself means increasing the frequency; note that this does not necessarily include raising the voltage! A higher OC is not always better.. something most people are not aware of. I overclock at reasonable voltages (and with good cooling), which doesn't give me the highest numbers but lets me run BOINC 24/7 on these systems without problems.

So what does this mean for GPU crunching?

Usually we only OC our GPUs a little, and we don't raise the voltage (for lack of an easy way to do so). That means OCing GPUs does not have a dramatic impact on their lifetime. Power consumption increases only linearly with frequency, which is not that much; temperatures rise a little and the power supply circuitry on the card is stressed a bit more.. but we're not drawing as much power as in games, so that should be fine and well within spec.

What we're not safe from, however, is temperature. Compare CPUs and GPUs and you'll see that GPUs usually run much hotter, due to the limited cooling possible in a 1- or 2-slot form factor. Stock fan settings keep most cards between 70 and 90°C. One can argue that "GPUs are designed for such temperatures". Well, they're not: TSMC cannot disable the laws of physics just because it's ATI or NV asking them to make a GPU. That's really trying to make a fortune out of a mishap (1). GPUs run so hot because it's damn inconvenient to cool them any better. It's not that they couldn't stand 90°C.. they just don't have to do it for very long. Nobody is going to game 24/7 for years.

So I sincerely think heat, not OC, is our main enemy in GPU crunching. Let me put another reference to my cooling solution here.

I can even go a bit further and argue that OC is somewhat beneficial if you're interested in longevity. Let me explain: if you push your card to its limit and back off a bit for safety, then at some point, when you see it fail, you back off a few more MHz and you're likely good again for a while. If these cycles accelerate, you know you have reached the end of the (crunching) life of your chip. You could then still give it to some gamer on a budget, who could use it at stock frequency for quite some time to come; at that point degradation slows down, as the card is no longer running 24/7 at 100% load.
So on the upside you know when to retire your GPU from active crunching; on the downside you'll have to watch things more closely, or you'll produce errors.

Some personal experience:

My OC'ed 24/7 chips | approximate time of use | degradation | comment
Celeron 600@900 | 1 year | yes | retired due to failed OC, very high voltage (1.9 V)
Athlon XP 1700+ | 1.5 years | yes | 1.53 to 1.47 GHz at slightly higher voltage
Athlon XP 2400+ | 1.5 years | yes | 2.18 to 2.12 GHz at slightly higher voltage
Athlon 64 3000+ | 0.5 years | no | 2.7 / 2.5 GHz
Athlon 64 X2 3800+ | 2 years | yes | higher voltage for 2.50 GHz
Core 2 Quad Q6600 | 1.5 years | yes | 3.00 GHz at slightly higher voltage
Radeon X1950 Pro | 4 months | yes | failed OC after 3 months, failed at stock after another; 24/7 Folding@home at ~70°C
Radeon X1950 Pro | 3 months | yes (?) | crunched at ~50°C and stopped after some errors, never really checked
GeForce 9800 GTX+ | 5 months | no | 50 - 55°C, OC'ed

That's all for now!
MrS


(1) I know there's some proper English saying for this..
Scanning for our furry friends since Jan 2002

Paul D. Buck (Joined: 9 Jun 08 · Posts: 1050 · Credit: 37,321,185)
Message 5663 - Posted: 16 Jan 2009, 7:54:56 UTC (last modified: 16 Jan 2009, 7:55:25 UTC)

Good explanation ...

And since I could not figure out what you were trying to say, I cannot come to the rescue with an appropriate saying for your note (1) ...

There is one other cause of failure that you did not mention, and that is "latent defects": a defect that is present on the chip but not significant enough to cause immediate failure of the component. The US military used to test chips to prove that they met specification ... those parts would get a part number in the 5400 series, while the same logic element manufactured to commercial specification would be labeled with a 7400 part number ... and the qualification testing was usually the only difference between them ...

The problem was that the testing ran the part up to its limits ... and the stress almost always seeded a defect that would grow over time ... so, paradoxically, the mil-spec parts were less reliable than their commercial equivalents ... I had a commander hit the roof once when he asked how we were repairing a test bench ... and I told him we were putting in parts I had bought at Radio Shack before deployment ... when he ordered me to stop, I told him I could do that and the tester would be off-line for the rest of the deployment while we shipped the card back to the States, or we could repair the tester as we had on other cards earlier in the cruise ... we never had a failure of the "less qualified" parts ...

When a new airplane comes off the assembly line, test pilots fly the beast and confirm the calculated "flight envelope" before regular pilots fly the darn thing ... before we had computers that allowed simulation and calculation of the flight envelope, these were often guesstimates, confirmed only in early flights, with some attrition of aircraft and pilots ... the P-38 had an interesting defect where, in a steep dive, parts of the control surfaces were locked in position by the air moving across them ... so the dive ended with the aircraft and pilot planted in the dirt ... a minor hidden, latent defect ... now called shock compressibility (or just compressibility), and solved with dive brakes and by moving one of the surfaces up a few inches. (See Richard Bong, the USA's highest-scoring ace, and the P-38 Lightning; for contrast, see Erich Alfred "Bubi" Hartmann, whose record is not likely to be equaled soon ...)

I only give these examples as a contrast in that they may be easier to understand ...

But, I agree with ETA that the "problem" with OC is not directly the speed, it is the heat ...

The quibble section ... :)

The components with the highest failure rate are those with mechanical parts: fans (also because they are built cheaply to keep costs down, which means their expected life is short) and disk drives (CD/DVD drives too) ... ALL OTHER THINGS BEING EQUAL ...

Failures are most common on cold starts because of "inrush" currents (the article discusses this only in some contexts, but the problem holds for all electrical devices; inside the chips we have transistors, capacitors, and resistors ...)

Which is one of the reasons some of us like to leave our PCs on at all times ... :)

The ageing problem is the balance between "infant mortality" and the natural death of devices at the end of their normal lifetime, as shown by the "bathtub curve". In our context this is relevant because running components hot pulls the end-of-life portion of the curve to the left ... See "Thermal management of electronic devices and systems" (or google "electronic failure heat" for more)
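
The bathtub curve is simply the sum of a falling infant-mortality hazard, a roughly constant random-failure hazard, and a rising wear-out hazard; running parts hot effectively moves the wear-out wall of the tub to the left. A toy model with invented parameters, purely to show the shape:

```python
# Toy bathtub curve: failure (hazard) rate = infant mortality + random + wear-out.
# All parameters are invented for illustration; only the shape matters.
def hazard(t_years, wearout_onset_years=5.0):
    infant = 0.20 * (t_years + 0.1) ** -0.5                # falling early-failure rate
    random_bg = 0.02                                       # constant background rate
    wearout = 0.01 * (t_years / wearout_onset_years) ** 4  # rising end-of-life rate
    return infant + random_bg + wearout

# Heat pulls the wear-out onset to the left, i.e. the right wall of the tub
# moves closer to year zero:
for onset in (5.0, 3.0):
    curve = ", ".join(f"{hazard(t, onset):.2f}" for t in range(9))
    print(f"wear-out onset ~{onset:.0f} years: {curve}")
```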

For those REALLY dedicated, google "gamma ray electronics failure". Most articles discuss high-altitude events, where this is a serious problem, but it is little known that most chip packaging material emits a little radioactivity - soft gamma and beta - which can impinge on the chip and cause a "soft event"; in an already stressed part that can be the straw that breaks the camel's back ...

Oh, and my mind is a very cluttered attic ... and this is as focused as I get ...

dataman (Joined: 18 Sep 08 · Posts: 36 · Credit: 100,352,867)
Message 5673 - Posted: 16 Jan 2009, 16:04:17 UTC

Thanks ETA ... that was very interesting.


ExtraTerrestrial Apes (Volunteer moderator · Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549)
Message 5690 - Posted: 16 Jan 2009, 22:23:41 UTC

Hi Paul,

I actually meant to include the latent defects in "2. Catastrophic failures" without explicitly mentioning them. Your explanation is much better than "some chips just fail at some point for some reason".

And an interesting note on transient errors caused by radiation: Intel uses a "hardened" design, and I guess all the other major players do too. I don't know how they do it, but single-bit errors due to radiation should not make the chips fail.

Regarding note (1): it means spinning something negative into something positive. Still, I have no idea which English saying I'm looking for..

And dataman, thanks for the flowers :)

MrS
Scanning for our furry friends since Jan 2002

Paul D. Buck (Joined: 9 Jun 08 · Posts: 1050 · Credit: 37,321,185)
Message 5692 - Posted: 16 Jan 2009, 22:48:53 UTC - in response to Message 5690.

I have the opposite problem: I can't include things without mentioning them ... which is why lots of my posts tend to run long ...

Hardening can be a combination of technologies, from designing the structures so that an impinging ray cannot create enough of a charge change to flip an internal state, to coatings that absorb or block the ray. But what I was trying to get at is that a ray can not only cause a soft error, it can also create a local voltage "spike" that triggers a catastrophic failure in the presence of a latent defect ... one which, without that event, would have caused the failure later on through the normal wear and tear we have been discussing.

But you are correct that I was not trying to claim that a bit flipped by a cosmic/gamma-ray soft error will, by itself, cause a failure ...

There are several sayings; the most common is "turning lemons into lemonade", or "if life hands you lemons, make lemonade" ...

Thinking about that, life usually hands me onions and I am not sure that learning how to cry really makes it as an aphorism ... but that is just me ...

Scott Brown (Joined: 21 Oct 08 · Posts: 144 · Credit: 2,973,555)
Message 5693 - Posted: 17 Jan 2009, 0:37:38 UTC - in response to Message 5692.

...life usually hands me onions and I am not sure that learning how to cry really makes it as an aphorism ... but that is just me ...


Hopefully, at least sometimes they are sweet Vidalia onions. :)

And thanks to both you and MrS for the excellent discussion of this topic. It gives me something to think about with my 9600GSO (which tends to run constantly in the low 70s Celsius)...




Paul D. Buck (Joined: 9 Jun 08 · Posts: 1050 · Credit: 37,321,185)
Message 5694 - Posted: 17 Jan 2009, 2:45:27 UTC - in response to Message 5693.

Except I hate onions ...

All kinds of onions ...

And your temperature, as I recall, is in the nominal zone as we figure these things ... mine is at 78, though I let the room get warm, so I am sure that drove it up some ...

Making me even happier, Virtual Prairie has just issued some new work!!! :)

And I am on track to have Cosmology at goal on the 25th ... and my Mac Pro is raising ABC on its own (while still doing other projects) nicely, so it looks like I should easily be able to make that goal by mid- to late February, even with the detour to SIMAP at the end of the month ... which I am going to make a real focus for that one week ...

and with new applications promised here ... things are really looking up ...

bloodrain (Joined: 11 Dec 08 · Posts: 32 · Credit: 748,159)
Message 6086 - Posted: 28 Jan 2009, 8:24:16 UTC - in response to Message 5694.

One main thing is to watch how hot it gets; heat can kill a system by overheating the parts. But really, on this topic: no, it won't happen.

There is a very, very small chance it could happen - something like one in a billion.

ExtraTerrestrial Apes (Volunteer moderator · Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549)
Message 6136 - Posted: 28 Jan 2009, 21:26:45 UTC - in response to Message 6086.

May I kindly redirect your attention to this post? What you're talking about is failure type number (2), which is indeed not our main concern.

MrS
Scanning for our furry friends since Jan 2002