Video Card Longevity

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 11479 - Posted: 29 Jul 2009, 18:45:21 UTC

Let's try not to turn this sticky thread on "Video Card Longevity" into an "I need help with WU xyz" thread.

MrS
Scanning for our furry friends since Jan 2002

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 11567 - Posted: 1 Aug 2009, 13:49:16 UTC - in response to Message 11394.  

Thanks for the on-topic reply!


Permanent errors within the logic parts of the chip are currently unrepairable. One could think about disabling certain blocks after failures, but there's not much redundancy in CPUs, so you can't take much away and still have them work. It's different for GPUs: disabling individual shader clusters should be possible via software / BIOS, maybe requiring small tweaks.

MrS


I would not like to lose 2 cores either (in some sort of mirror failover solution), but I think it might be possible for the consumer market to have a dead-core workaround - AMD do this in the factory, making their quad cores triple cores, or dual cores, when they are not quite up to scratch. We know that their approach is not quite a permanent one; people have been able to re-enable the cores on some motherboards. So whatever AMD did could in theory be used subsequent to shipping when a core fails.

For business this could be a great advantage. From experience, replacing a failed system can be a logistical nightmare, particularly for small businesses. Usually lost hours = lost income. Losses would be reduced if a CPU replacement could be planned and scheduled.
When 6 and 8 cores become more commonplace for CPUs the need to replace the CPU might not actually be so urgent, and the CPU would still hold some value; a CPU with 5 working cores is better than a similar quad core CPU with all 4 cores working!

I was also thinking that if you could set/reduce the clock speeds of cores independently it could offer some sort of fallback advantage. For example, if one of my Phenom II 940 cores struggled for reliability at its native 3GHz, and I could reduce it to 1800MHz, or even 800MHz – just by setting its multiplier separately – it would be better than having to underclock all 4 cores, or immediately having to replace the CPU.
I like the idea of a software workaround / solution for erroneous shaders.

NVidia would do us all a big favour if they developed a proper diagnostic utility, never mind the workaround!

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 11638 - Posted: 3 Aug 2009, 20:34:20 UTC - in response to Message 11567.  
Last modified: 3 Aug 2009, 20:35:04 UTC

Hi,

I would not like to lose 2 cores either (in some sort of mirror failover solution), but I think it might be possible for the consumer market to have a dead-core workaround - AMD do this in the factory, making their quad cores triple cores, or dual cores, when they are not quite up to scratch. We know that their approach is not quite a permanent one; people have been able to re-enable the cores on some motherboards. So whatever AMD did could in theory be used subsequent to shipping when a core fails.


Testing at the factory is done with external probe stations prior to packaging (so as not to waste money on defective chips). This cannot be repeated at home ;) (BTW, these tests are expensive although each one only takes a few seconds. I suppose the cost is mainly due to the time on that expensive machine.. which wouldn't matter for us.)
Therefore such a test would have to be software based. I see at least 2 major problems with that:

1. Whatever you put into the chip, you have to test it. Such software could reveal the chip architecture completely, just due to the way it does the tests. Software can be hacked and / or reverse engineered, and that's something no chip maker would want to risk. It would open up the door for all sorts of things: full or partial copies, bad press due to discovered design errors, software deliberately targeted to be slow on your hardware (hint: compiler).

2. You'd be executing code on your CPU to test your CPU. How could you know the results are reliable? It would be a shame to get the message "3 of 4 cores defective" due to a minor fault somewhere else. Possible solution: dedicate some specialized logic with self-diagnostic functions and error checking to such tests.

For business this could be a great advantage.


That's why the "big iron" servers have RAS features, hot swap of almost everything and such :)

I like the idea of a software workaround / solution for erroneous shaders.
NVidia would do us all a big favour if they developed a proper diagnostic utility, never mind the workaround!


Yes, that would be very nice. However, seeing how their software struggles with driver bugs I'm not very confident anything like that is going to happen anytime soon. The problem of "revealing the architecture" would likely be less severe in this case, as communication with the GPU is done by the driver anyway. If such a tool is released I'd imagine them to be careful, i.e. "If you get errors there's a problem [not necessarily caused by defective hardware] and you may get wrong results under CUDA. But we don't know your exact code and therefore we cannot guarantee that there is no hardware error just because we didn't find any."

I was also thinking that if you could set/reduce the clock speeds of cores independently it could offer some sort of fallback advantage. For example, if one of my Phenom II 940 cores struggled for reliability at its native 3GHz, and I could reduce it to 1800MHz, or even 800MHz – just by setting its multiplier separately – it would be better than having to underclock all 4 cores, or immediately having to replace the CPU.


Let's take this one step further: the clock speed of chips is limited by the slowest parts, or more exactly by the paths which signals must travel within one clock cycle. If they arrive too late an error is likely produced. It's really tough to guess which the slowest paths through all those hundreds of millions of transistors will be, given the vast amount of possible instruction combinations, states, error handling, interrupts etc. But the manufacturers do have some idea.

So why not design a chip with some test circuitry with deliberately long signal run times and sophisticated error detection, somewhere near the known hot spots? Now you could lower the operating voltage just to the point where you start to see errors (and raise it again to just above that threshold). That would reduce average power consumption a lot and would help to choose proper turbo modes for i7-like designs. It wouldn't help against permanent errors, but in the case of your 940 the BIOS could have raised the voltage of that core a little (within the safety margin).
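
Just to make the loop concrete, here's a rough Python sketch of that idea. The two hooks it calls (read_canary_errors() and set_core_voltage()) are purely hypothetical - nothing like them is exposed by today's BIOSes - it only illustrates the feedback I have in mind:

    # Hypothetical adaptive-voltage loop: lower Vcore until the "canary"
    # (deliberately slow-path) circuit starts to report errors, then back
    # off to just above that threshold.  read_canary_errors() and
    # set_core_voltage() are assumed hardware hooks that do not exist today.
    import time

    V_MAX, V_MIN, V_STEP = 1.35, 0.90, 0.0125   # volts
    SAFETY_MARGIN = 2 * V_STEP                  # stay this far above the error point

    def tune_core(core_id, read_canary_errors, set_core_voltage):
        voltage = V_MAX                          # last voltage known to be error-free
        set_core_voltage(core_id, voltage)
        while voltage - V_STEP >= V_MIN:
            set_core_voltage(core_id, voltage - V_STEP)
            time.sleep(0.1)                      # let the canary circuit sample
            if read_canary_errors(core_id) > 0:  # the slow path failed at this voltage
                break
            voltage -= V_STEP                    # new known-good level
        voltage = min(V_MAX, voltage + SAFETY_MARGIN)
        set_core_voltage(core_id, voltage)
        return voltage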

MrS
Scanning for our furry friends since Jan 2002

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 11650 - Posted: 4 Aug 2009, 9:37:55 UTC - in response to Message 11638.  

Therefore such a test would have to be software based.

As you said later,
So why not design a chip with some test circuitry

Perhaps an on-die instruction set for testing, and if required automatically modifying voltages, frequencies or even disabling cache banks or a core? A small program could receive reports, analyse them and calculate ideal frequencies automatically. These could be saved to the system drive or BIOS and reloaded on restart. A sort of built-in CPU optimisation kit.
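
Something like this, very roughly (read_self_test_report() and set_core_frequency() are made-up hooks the on-die test logic would have to provide; the save/reload part is just ordinary file handling):

    # Sketch of the "small program" idea: collect per-core self-test reports,
    # pick the highest frequency each core passed, persist the result and
    # reload it at the next boot.
    import json, os

    PROFILE = os.path.expanduser("~/.cpu_profile.json")

    def tune_and_save(n_cores, read_self_test_report):
        profile = {}
        for core in range(n_cores):
            report = read_self_test_report(core)          # e.g. {freq_mhz: error_count}
            good = [f for f, errors in report.items() if errors == 0]
            profile[core] = max(good) if good else None   # None = disable the core
        with open(PROFILE, "w") as fh:
            json.dump(profile, fh)
        return profile

    def restore_on_boot(set_core_frequency):
        if not os.path.exists(PROFILE):
            return
        with open(PROFILE) as fh:
            for core, freq in json.load(fh).items():
                if freq is not None:
                    set_core_frequency(int(core), freq)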

I still like the idea of independent frequencies and voltages for CPU cores.
Most of the time people don't actually use all 4 cores of a quad, so if the CPU could raise and lower the frequencies independently, or even turn one or more cores off altogether, it would save energy, and therefore the overall cost of the system during its life. Unless you are crunching, playing games or using some serious software, there are few times when you would notice the difference in a quad core at 3.3GHz or 800MHz (8MB cache). I often forget and have to check what my clock is set at – if the system gets loud, I turn it down.
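
For what it's worth, on Linux you can already play with this per core through the cpufreq sysfs files - a minimal sketch, assuming root and a cpufreq-capable driver, and bearing in mind that whether a core really runs at its own clock depends on the CPU's clock domains:

    # Minimal sketch: cap an individual core's frequency through the Linux
    # cpufreq sysfs interface.  Needs root; availability and per-core
    # independence depend on the CPU and driver.
    CPUFREQ = "/sys/devices/system/cpu/cpu{core}/cpufreq/{attr}"

    def read_attr(core, attr):
        with open(CPUFREQ.format(core=core, attr=attr)) as fh:
            return fh.read().strip()

    def cap_core(core, max_khz):
        # e.g. cap_core(3, 800000) limits core 3 to 800 MHz
        with open(CPUFREQ.format(core=core, attr="scaling_max_freq"), "w") as fh:
            fh.write(str(max_khz))

    if __name__ == "__main__":
        print("core 0 governor:", read_attr(0, "scaling_governor"))
        print("core 0 range   :", read_attr(0, "scaling_min_freq"),
              "-", read_attr(0, "scaling_max_freq"), "kHz")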

If the cores could independently rise to the occasion, even when you are using intensive CPU applications, you would be saving on electricity (temperatures would be lower, as would the noise)!
I’m not sure Intel would go for this, as their cores are paired and it might reveal some underlying limitation (until 8 or more cores are mainstream, then it would be less obvious and less of an issue).

If these ideas were applied to graphics cards, it would save a small fortune in electricity. Even GPUGrid does not always use all the processing power of the graphics cards. I think Folding@home probably comes a lot closer, but some GPU crunching clients such as Aqua often use substantially less (it seems to vary with different tasks – similar to a computer game). GPUs are far from green!

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 11693 - Posted: 6 Aug 2009, 20:36:29 UTC - in response to Message 11650.  

I'm a bit confused by your post. That's actually just what "Power Now!", Cool & Quiet, Speed Step etc. are doing. They're not perfect yet, but they do adjust clock speeds and voltages on the fly according to demand, and in the newest incarnations also independently for individual cores. Intel heavily uses the thermal headroom under single / low-threaded load for their turbo mode. So it's not perfect yet, but we're getting there. And now that almost all high performance chips (CPUs, GPUs) are power limited, these power management features are quickly becoming ever more important.

Talking about power management for GPUs: I've been complaining about this wasted power for a decade. Why can the same chips used in laptops be power efficient, downclocked and everything, whereas as soon as they're used in desktops they have to waste 10 - 60 W, even if they're doing nothing?! The answer is simply: because people don't care (as long as it doesn't hurt them too much) and because added hardware or driver features would cost more - and that's something people do care about.

A few days ago I read about some chip, I think it was the GPU integrated into the new 785 chipset. Here they adjust clock speed and voltage to target 60 fps in 3D. Really, that's the way it should have been from the beginning on!

Oh, and the problem I have with all these features: the manufacturer has to set the clock speeds and voltages regardless of chip quality, temperature (well, they could factor that in to some extent) and chip aging / degradation. So they have to use generous safety margins (which is what overclockers exploit). What I propose is to add circuitry to measure the current chip condition in a reliable way and to adjust voltage accordingly (clock speed is determined by load anyway). That way the hardware could be used in the most efficient way. I'm sure it will be coming.. just not anytime *soon*.

MrS
Scanning for our furry friends since Jan 2002

Nognlite
Joined: 9 Nov 08
Posts: 69
Credit: 25,106,923
RAC: 0
Message 11772 - Posted: 10 Aug 2009, 11:44:35 UTC

Well Ladies and Gents,

To talk about card longevity I am adding my 2 cents. I have been using two systems to run GPUGRID for about 2 years now. On one system, 2x XFX 8800GT's in SLI. On the other, 2x XFX GTX280's in SLI. While my 8800's have been rock solid since bought (other than a fan replacement, but that's another pissy story about XFX), my 280's have been replaced a total of three times, possibly a fourth coming. Thank the Lord they are XFX with double lifetime warranty, but this is ridiculous. The cards lasted a year before they had to be replaced the first time and about 6 months before the second replacement. Makes me wonder if XFX sends out refurbished cards as replacements?

I only run GPUGRID on all my cards and they use automatic fans when they get hot, controlled by the driver. What I have noticed over the two years is that when the driver doesn't load properly on startup or becomes corrupt, the thermal solution does not function correctly, and a few times I found my cards at 105 Celsius. Again, thank the Lord I was at the computer, but how many times have I not been and the system ran at 105?
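
In case anyone wants to guard against the same failure mode, a simple watchdog along these lines would catch it - assuming a driver recent enough to ship nvidia-smi and that boinccmd is on the PATH; adjust to taste:

    # Watchdog sketch: poll the GPU temperature and suspend BOINC if the
    # fan/driver ever stops doing its job.
    import subprocess, time

    LIMIT_C = 95          # panic threshold, well below the 105 C seen above
    POLL_SECONDS = 30

    def gpu_temps():
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=temperature.gpu",
             "--format=csv,noheader,nounits"], text=True)
        return [int(line) for line in out.splitlines() if line.strip()]

    while True:
        if any(t >= LIMIT_C for t in gpu_temps()):
            subprocess.call(["boinccmd", "--set_run_mode", "never"])
            print("GPU overheating - BOINC suspended")
            break
        time.sleep(POLL_SECONDS)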

I don't believe that I should have any issues with my 280's, but I don't think GPUGRID is so taxing that it should be killing GPUs. XFX says there might be power issues on my system that are killing cards, but I have a PC P&C 1200 with all voltages right on spec, and the cards were on an OCZ PSU the first time they died.

So this leaves me wondering. Should I stop GPUGRID to save the GPUs, are they faulty GPUs, or is it a faulty GPU design to start with – in which case I should just get them replaced as they break?

Just my 2 cents.

STE\/E
Joined: 18 Sep 08
Posts: 368
Credit: 4,174,624,885
RAC: 0
Message 11773 - Posted: 10 Aug 2009, 13:12:53 UTC
Last modified: 10 Aug 2009, 13:16:20 UTC

In less than a year's time I've RMA'ed 3 GTX 260's already, and right now I'm looking at RMA'ing 5 more GTX 260's (4 BFG's & 1 EVGA) plus 1, possibly 2, GTX 295's. Oh, and for good measure throw in a Sapphire 4850 X2 & Sapphire 4870 that are going to need to be RMA'ed.

From what I'm hearing about Sapphire that could be a nightmare trying to get them to RMA 1 Card let alone 2 Cards. BFG is good about it and already told me when I get ready to RMA the Cards to let them know & they would set it up. EVGA I haven't had any dealings with but I'll find out I guess.

Personally, having this number of video cards go all at once tells me they're just not made to run 24/7 @ full load, and you're going to have trouble with some of them if you do.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 11790 - Posted: 10 Aug 2009, 20:54:34 UTC

Hey Nognlite,

are you sure you've been here for 2 years? Your account says "joined 9 Nov 2008", which is just 3/4 of a year if I'm not totally mistaken ;)

Not that this makes your card failures any better. What I can tell you, though, is that it would be better to set your fan speeds manually – as high as you're still comfortable with. As I wrote somewhere up there, this increases your GPU lifetime considerably.
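
On Linux something like this works for pinning the fan - assuming X is running, "Coolbits" is enabled in xorg.conf, and noting that the exact attribute name for the target speed has varied between driver generations, so check your driver's documentation:

    # Sketch: pin the fan at a fixed speed via nvidia-settings.
    import subprocess

    def set_fan(percent, gpu=0, fan=0):
        subprocess.check_call([
            "nvidia-settings",
            "-a", f"[gpu:{gpu}]/GPUFanControlState=1",
            "-a", f"[fan:{fan}]/GPUTargetFanSpeed={percent}",
        ])

    set_fan(75)   # e.g. run the fan at a constant 75%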

Not sure if I wrote it here, but I'm convinced: current GPUs are not made for 24/7 operation. It's not that the chips are much different, it's that the tolerances and priorities are set differently. The temperatures which the manufacturers allow are OK for occasional gaming, but not really for 24/7 operation. Sure, some chips / cards can take it for quite some time and some fail anyway, regardless of temperature.. but this is a statistical process after all.

BTW: I'm sure they are sending out refurbished units as replacements (even for HDDs). Just think of all the people who have some whatever-so-nasty software or compatibility problem, RMA their product and then the hardware actually appears stable under different conditions. They wouldn't want to throw these things away ;)

MrS
Scanning for our furry friends since Jan 2002

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 11793 - Posted: 10 Aug 2009, 22:04:47 UTC - in response to Message 11693.  

I'm a bit confused by your post. That's actually just what "Power Now!", Cool & Quiet, Speed Step etc. are doing. MrS

Well yes, up to a point, but as you went on to say, they are not perfect! I was trying to be general, as there are so many energy saving variations used in different CPUs, but very few are combined sufficiently. Perhaps Intel's Enhanced Speed Step is the closest to what I am suggesting, but in itself it does not offer everything.
Many CPUs only have 2 speeds. Why not 10 or 30? If motherboards can be clocked in 1MHz steps, why not CPUs? Why develop so many different technologies separately, rather than combining, centralising, streamlining and reducing manufacturing costs? If the technology does not significantly increase production costs and is worthwhile having, have it in all the CPUs rather than in 100's of slightly different CPU types. In many areas Intel's biggest rival is Intel; they make so many chips that many are competing directly against each other. Flooding the market with every possible combination of technology is just plain thick.

Why only reduce the multiplier and voltage? Why not the FSB as well? If the CPU is built to support it the motherboard designs will follow, as there is decent competition there.
Why send power to the Cache when it's doing nothing?
Why send power to all the CPU cores when only one is in use?
Why charge a small fortune for a slightly more energy efficient CPU (SLARP, L5420 vs SLANV, E5420)? Especially when manufacturing costs are the same.
Why use one energy saving feature in one CPU but a different feature in another CPU when both could be used? In many ways it's not so much about being clever, just not being so stupid.

To be fair to both Intel and AMD, there have been excellent improvements over the last 5 years:
My Phenom II 940 offers three steps (3GHz, 1800MHz and 800MHz), which is one of the main reasons I purchased it. This was a big improvement over my previous Phenom 9750 (2.4GHz and 1.8GHz). The E2160 (and similar) only uses 8 Watts when idle, and many of the systems they inhabit typically operate at about 50 Watts – much less than top GPU cards!

Mind you, these are exceptions rather than the rule. Many speed steps were none too special – stepping down from 2.13GHz to 1.8GHz was a bit of a lame gesture by Intel!
My opinion is that if it’s not in use, it does not need power. So if it is using power that it does not need, it has been poorly designed.

they do adjust clock speeds and voltages on the fly according to demand, and in the newest incarnations also independently for individual cores.


OK, I was not aware the latest server cores could be independently stepped down in speed.
I hope the motherboard manufacturers keep up; I recently worked on several desktop systems that boasted energy efficient CPUs such as the E2160 (with C1E & EIST), only to see that the motherboard did not support speed stepping! Again this just smells of mismatched hardware or a stupid design flaw, but I do think the motherboard manufacturers need to make more of an effort - perhaps they are more to blame than AMD and Intel.

And now that almost all high performance chips (CPUs, GPUs) are power limited these power management features are quickly becoming ever more important.


I agree; server farms are using more and more of the grid's energy each year, so they must look towards energy efficiency. Hopefully many of these server advancements will become readily available to the general consumer in the near future. Some of these advances come at a shocking price though, and the new CPU designs often seem to drop existing energy efficiency systems to incorporate the new ones, rather than adding the new energy efficient technology. Presumably so they can compete against each other! Reminds me of the second wave of Intel quad cores – clocked faster, but with less cache, so there was only a slight improvement with some chips and it was difficult to choose which one was actually faster! Ditto for Hyper-Threading, which competed against faster clocked non-HT cores.

Talking about power management for GPUs: I've been complaining about this wasted power for a decade. Why can the same chips used in laptops be power efficient, downclocked and everything, whereas as soon as they're used in desktops they have to waste 10 - 60 W, even if they're doing nothing?! The answer is simply: because people don't care (as long as it doesn't hurt them too much) and because added hardware or driver features would cost more - and that's something people do care about.


The general public probably don't think about the running costs as much as IT pros do, but they really should. The lack of 'green' desktop GPUs is a serious problem. Neither ATI nor NVIDIA has bothered to produce a really green desktop GPU. It's as though there is some sort of unspoken agreement not to compete on this front!

Sooner or later ATI or NVIDIA will realise that people like me would rather go on a 2-week holiday with a new netbook than pay for two power-greedy cards that cost almost as much to run as they do to buy!

Nognlite
Joined: 9 Nov 08
Posts: 69
Credit: 25,106,923
RAC: 0
Message 11804 - Posted: 11 Aug 2009, 13:36:59 UTC - in response to Message 11790.  
Last modified: 11 Aug 2009, 13:37:43 UTC

You are in fact correct. I had to look at my records. My bad!

However, this new information compounds my statement, as it's only been 3/4 of a year and two sets of GPUs have been replaced.

I wonder if other people have had the same issue, and as bad.

Cheers

I built the systems two years ago. That's why that sticks in my head.

RalphEllis
Joined: 11 Dec 08
Posts: 43
Credit: 2,216,617
RAC: 0
Message 11819 - Posted: 12 Aug 2009, 6:10:57 UTC - in response to Message 11772.  

You may wish to set the fan speed manually, either with the EVGA utility or nTune in Windows, or NVClock-GTK in Linux. This would cut down on the heat issues.

STE\/E
Joined: 18 Sep 08
Posts: 368
Credit: 4,174,624,885
RAC: 0
Message 11821 - Posted: 12 Aug 2009, 11:11:01 UTC

Looks like I'll be RMA'ing 5 GTX 260's with the clock-down bug either today or tomorrow. I have a GTX 295 that will do the same thing off and on, but hasn't for a few days, so I'll keep it for now and see if the proposed fix GDF mentioned later this month fixes it permanently or not. As long as it doesn't get any worse I can live with it for a few days more ... :)

Just so ATI doesn't feel left out, I RMA'ed 2 of them yesterday, 1 4850 X2 & 1 4870. Both were used at the MWay project but quit working – the 4850 X2 in about a month's time, while the 4870 took about 6 months before going bad.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 11858 - Posted: 13 Aug 2009, 20:31:10 UTC - in response to Message 11793.  

Hey SKGiven,

It's really a matter of cost. First and foremost the buyer has to care about power. If buyers' decisions are not influenced by power consumption in any way, then every single cent a company spends on power saving is lost (short term). That's why they're not going to do it until the power requirements start to hurt them (laptops, Pentium 4 etc.).

And it's really not a matter of cents. First there are hardware modifications. The power management in i7 CPUs needs about 3 million transistors – that's as many as an entire Pentium 1! You couldn't possibly have implemented something like that in a Pentium 1 without ruining the company. That's why power saving features develop gradually, in an evolutionary process.

Next there's software. A lot can go wrong if you want to implement proper power saving: degraded performance or even bugs and crashes. Testing, debugging and certifying such code is expensive, and it gets more expensive the more complex the system gets. That's why manufacturers only implement small improvements at a time, as much as they feel they can still handle before product introduction.

An example: at work I've got a Phenom 9850. It eats so much power that it quickly overheats at 2.5 GHz (stock cooler, some case cooling). I usually run it at 2.1 GHz and 1.10V, which prevents it from crashing and keeps the noise acceptable. However, if I want to speed up some single-threaded Matlab simulation, switch BOINC off and allow it to go to the full 2.5 GHz... something almost funny happens. Windows keeps bouncing the task between cores, and after each move the app runs on a core which was set to 1.2 GHz by Cool & Quiet. It has to adapt and speed up. Shortly afterwards the cycle repeats. Overall the system uses more power but becomes slower than at a constant 2.1 GHz.

The reason for the slow switches is that the CPU draws so much current that the motherboard circuitry would be overloaded if the speeds were switched instantaneously, so AMD chose some delay time. All in all that's an example of a power saving feature going ridiculously wrong. Cool & Quiet on the Athlon 64, on the other hand, worked fine. They only got into trouble because they wanted to make it even better, offer more fine-grained control and make the system more complex.
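
A crude workaround for the bouncing is to pin the heavy thread to one core. This sketch uses the third-party psutil module; on Windows "start /affinity 1 app.exe" from a command prompt does much the same:

    # Pin a heavy single-threaded process to one core so it stays at full
    # clock instead of repeatedly landing on a core parked at 1.2 GHz.
    import psutil

    def pin_to_core(process_name, core=0):
        for proc in psutil.process_iter(["name"]):
            if proc.info["name"] and process_name.lower() in proc.info["name"].lower():
                proc.cpu_affinity([core])
                print(f"pinned PID {proc.pid} to core {core}")

    pin_to_core("matlab")   # pin the Matlab simulation mentioned above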

Finally you also have to consider proportionality. If you could spend 1 million in development costs to cut GPU idle power consumption from 60 W to 1 W, you'd be foolish not to do so. However, investing another 1 million to cut power even further to 0.9 W wouldn't help your company at all on the desktop.

BTW, I'm not saying this money-focussed approach is the best way to go. But that's how it works as long as money rules the world.

Oh, there's more:

Why develop so many different technologies separately

They build upon each other. Each new generation of power saving technologies generally incorporates and supersedes the previous one. It does not suddenly replace it with something different. And some companies are licensing this stuff, but the big players are basically all developing the same stuff on their own – adapted to their special needs, of course.

Why not the FSB as well?

That's being done on notebooks. You wouldn't notice the difference on a desktop.

Why send power to the Cache when it’s doing nothing?

That's been done for some time; minor savings.

Why send power to all the CPU cores when only one is in use?

The i7 is the first to really shut them off.

Why charge a small fortune for a slightly more energy efficient CPU (SLARP, L5420 vs SLANV, E5420)? Especially when manufacturing costs are the same.

Because costs are not the same. Energy efficient CPUs run at lower voltages, which not all CPUs can do. To a first approximation you can decide to sell a CPU as a normal 3 GHz chip or as a 2.5 GHz EE chip. The regular 2.5 GHz chip might not reach 3 GHz at all.

Why use one energy saving feature in one CPU but a different feature in another CPU when both could be used?

I don't think this is being done. The features mainly build upon each other. Exceptions are mobile Celerons, where Intel just removed power saving features (without including others), which I really dislike. And mobile chips generally get more refined power management. I think this is mainly due to cost.

MrS
Scanning for our furry friends since Jan 2002

Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 12525 - Posted: 17 Sep 2009, 3:31:57 UTC
Last modified: 17 Sep 2009, 3:32:47 UTC

After roughly 8 months of life my GTX280 card (EVGA) died. The good news is that I am within the 1-year warranty; the bad news is I missed the fine print. If you have EVGA cards you have to register them ON THEIR SITE to get the long-term warranty conversion (within 90 days of purchase; save the receipt, you also need that for an RMA).

Word to the wise ... you also need S/N and P/N off the card or box ... though if you got a rebate it is going to have to come off the card.

I suppose my only good takeaway is that with luck I will be in replacement mode if the other cards start to fail and are not covered ... of course, with next generation cards on the verge now, it is also possible I can get replacement cards for a whole lot less than I spent on the originals if I just want to stay at current production levels (or near enough) ...

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 12986 - Posted: 2 Oct 2009, 20:16:16 UTC - in response to Message 12525.  

The thing is, manufacturers don't want broken anythings back – they want to keep their money! So they set up so many obstacles for you and everyone else to negotiate that the majority of people will give up, or spend more time than it is worth trying to get some sort of partial refund or refurbished item (say after 3 months).

Basically, the law says you can return an item that malfunctions for up to one year. Unfortunately, dubious politicians with unclear financial interests have sought to undermine this with grey legislation. So you are left mulling through all sorts of dodgy terms and conditions – many of which are just meant to deter you; they have no legal ground, but serve to hold up the proceedings long enough for them to get away with it. By the time you (or say 20 percent of people like you) get through their many hoops there is a fair chance they will have been bought out, merged, renamed, re-launched or have gone under, and you will have another layer of it to go through.

If you buy an item in a shop, hang onto the receipt and the packaging. If it breaks within a year, take it back and get a replacement or refund. If you buy online, you may have to deal with their terms and conditions, RMAs and of course the outfit not being around too long. To me it is worth the extra 5 or 10 percent to buy an expensive item in a local store with a good reputation.

STE\/E
Joined: 18 Sep 08
Posts: 368
Credit: 4,174,624,885
RAC: 0
Message 13001 - Posted: 3 Oct 2009, 22:00:58 UTC

I've RMA'ed probably 10 GTX 200 Series Cards & 2 ATI 48xx Cards this year alone & haven't had a bit of a problem getting the Manufacturers to back their Card & send me a Replacement ASAP ... Of course as Paul said you have to read the Fine Print and Register them as soon as you get them or you may be SOL & have to eat the Costs. Most of my GTX are BFG's which have a Lifetime Warranty so those Cards are good to go for a long time if I choose to continue to run them.

The ATI Cards only have a 1 year Warranty which is due to run out soon so I'll have to eat the costs there but with the new cards coming out I'll be ready to move up anyway ... :)

zpm
Joined: 2 Mar 09
Posts: 159
Credit: 13,639,818
RAC: 0
Message 13006 - Posted: 4 Oct 2009, 2:14:43 UTC - in response to Message 13001.  

Most of my GTX are BFG's which have a Lifetime Warranty so those Cards are good to go for a long time if I choose to continue to run them.



that's why I'll get BFG from now on...


Another tip to cool the beast..
If you live in a temperature-sensitive climate, say like the south of the US, fall and spring are perfect times to bring in the cool air at night....

I've seen 10C temp drops just by letting 52F air into my room.... which is normally 80F.

I recommend Secunia PSI: http://secunia.com/vulnerability_scanning/personal/

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 13147 - Posted: 11 Oct 2009, 23:51:29 UTC - in response to Message 13006.  

You need to watch that trick.
Jnr. Frio might breeze in, and nick your computer!

Argus
Joined: 14 Mar 09
Posts: 6
Credit: 5,143,945
RAC: 0
Message 13160 - Posted: 13 Oct 2009, 12:51:54 UTC
Last modified: 13 Oct 2009, 13:46:22 UTC

I have 2 Leadtek Winfast GTX280's that died on 14 September '09, precisely after 6 months of crunching GPUGrid more or less 24/7 (less because my rigs are gaming rigs, so whenever me and my son were playing we would disable GPUGrid).

Cooling issues are out of the question, as my cases are Tt Xaser VI, with 3x120mm + 3x140mm case fans (soon to be 5x140mm + 1x120mm), plus another 120mm fan on the CPU cooler, and 1x135mm + 1x80mm in PSU (Gallaxy DXX 1000W). Not to mention A/C in every room where I have a PC.

Edit: forgot to mention, no OC. I've never OC'ed, I'm for rock solid stability. I prefer to buy components with high stock (factory) performance instead of low performance components to OC later.

Edit 2: I'm blaming crunching, specifically GPUGrid, because I sleep better knowing I have identified the culprit :))
Semper ubi sub ubi.

zpm
Joined: 2 Mar 09
Posts: 159
Credit: 13,639,818
RAC: 0
Message 13168 - Posted: 13 Oct 2009, 21:38:18 UTC - in response to Message 13160.  

I have 2 Leadtek Winfast GTX280's that died on 14 September '09, precisely after 6 months of crunching GPUGrid more or less 24/7 (less because my rigs are gaming rigs, so whenever me and my son were playing we would disable GPUGrid).

Cooling issues are out of the question, as my cases are Tt Xaser VI, with 3x120mm + 3x140mm case fans (soon to be 5x140mm + 1x120mm), plus another 120mm fan on the CPU cooler, and 1x135mm + 1x80mm in PSU (Gallaxy DXX 1000W). Not to mention A/C in every room where I have a PC.

Edit: forgot to mention, no OC. I've never OC'ed, I'm for rock solid stability. I prefer to buy components with high stock (factory) performance instead of low performance components to OC later.

Edit 2: I'm blaming crunching, specifically GPUGrid, because I sleep better knowing I have identified the culprit :))


Could it be that the manufacturers' cards just don't pass the 24/7 full-throttle test!!!! My BFG GTX260 is still going like the Energizer bunny.