Message boards : Graphics cards (GPUs) : Great temperatures with GTX-480
---

*Joined: 17 Jan 11 · Posts: 32 · Credit: 11,307,639 · RAC: 0*

Maybe this will sound peculiar, but I have noticed that when I'm running SETI calculations on both GPUs, my GPU temps are lower than when running GPUGRID. To be precise, SpeedFan shows that under SETI GPU1 is at 62°C, GPU2 at 60°C and the system at 47°C, while under GPUGRID the corresponding temps are 64°C, 65-66°C and 50-51°C. This made me start wondering whether GPUGRID work units are somehow causing my PC to restart through overheating or something similar. That suspicion is supported by a test I ran: with both GPUs undervolted to 0.925 V from the default 1.025 V, GPUGRID ran for a while, then my screen froze and I had to restart. Then again, it may be a driver issue, since I'm using the first NVIDIA driver that supported CUDA 3.1 (I don't remember the version).
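(For anyone who wants to compare per-project temperatures more systematically than by watching SpeedFan: a minimal logging sketch along these lines could help. It assumes a driver whose nvidia-smi supports the `--query-gpu` CSV mode; the file name and polling interval are arbitrary choices.)

```python
import csv
import subprocess
import time

# Minimal GPU temperature logger built on nvidia-smi's CSV query mode.
# Run it while crunching one project, then the other, and compare the logs.
LOG_FILE = "gpu_temps.csv"   # hypothetical output path
POLL_SECONDS = 30            # sampling interval

def read_temps():
    """Return a list of (gpu_index, temp_celsius) tuples."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=index,temperature.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return [tuple(int(v) for v in line.split(", "))
            for line in out.strip().splitlines()]

with open(LOG_FILE, "a", newline="") as f:
    writer = csv.writer(f)
    while True:  # stop with Ctrl-C
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        for idx, temp in read_temps():
            writer.writerow([stamp, idx, temp])
        f.flush()
        time.sleep(POLL_SECONDS)
```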
---

**skgiven** · *Joined: 23 Apr 09 · Posts: 3,968 · Credit: 1,995,359,260 · RAC: 0*

Undervolting will reduce heat, but if/when a task demands more power, tasks will fail or your system may even freeze, and in the long term this could damage your system. From 1.025 V down to 0.925 V is quite a drop (about 10%). Increasing the voltage would be more likely to prevent task failures, but then you are contending with increased heat, which could itself be the problem. I would say that if you raise the voltage slightly above reference, the tasks still fail or you still get hangs/restarts, and you observe a temperature rise, then the problem is heat related. Not using your CPU to crunch other tasks would reduce heat from the CPU and motherboard, and possibly from the RAM and disk too (depending on the tasks). I would be inclined to use NVIDIA's most up-to-date driver rather than an early Gainward version, especially as your problem seems to be with the Asus card. Make sure you remove the Gainward driver fully before installing the NVIDIA one: download the latest driver, uninstall the old one, restart pressing F8 for a safe-mode install, or try Driver Sweeper.
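(As a side note, the relative size of these voltage changes is easy to compute. A throwaway sketch of the arithmetic, using only the figures quoted in this thread:)

```python
def percent_change(default_v: float, new_v: float) -> float:
    """Relative voltage change, as a percentage of the default."""
    return (new_v - default_v) / default_v * 100.0

# Figures quoted in this thread:
print(percent_change(1.025, 0.925))  # ~ -9.8%, the undervolt tried first
print(percent_change(1.025, 1.050))  # ~ +2.4%, the later overvolt on card 1
print(percent_change(1.037, 1.050))  # ~ +1.3%, card 2's default is 1.037 V
```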
---

*Joined: 17 Jan 11 · Posts: 32 · Credit: 11,307,639 · RAC: 0*

First, I would like to stress that the problem is not with GPUGRID's WUs, since my PC froze with SETI WUs too while I was testing how each project's WUs affect my GPU temps. After that small test I installed the latest NVIDIA drivers (version 266.58) and have been trying out the undervolt (still am). Up to this morning my GPU temps were at 64°C and 59°C, with the system at 47°C. Because I don't use the CPU for other BOINC projects, or for crunching anything at all, I don't understand why I would have an overheating problem. To be honest, though, yesterday I found out by luck (using FurMark & MSI Kombustor) that one of my GPUs runs at 1.037 V by default and the other at 1.025 V. For that reason I want to see how things go over the next 2-3 days with both GPUs undervolted to 0.938 V, because I'm starting to suspect that this difference is behind all the trouble. If no problems occur, I will then raise the voltage step by step until both cards are running at the lower of the two default voltages. Till then, we'll be in touch! Thank you all!
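(The plan described above amounts to a minimum-stable-voltage search: start low, raise one step at a time until a long test run survives. A sketch of that loop, with the caveat that there is no standard API for setting GPU core voltage, so the two helper functions are hypothetical stand-ins for manual steps in a vendor tuning utility:)

```python
# Step-up voltage search, as described in the post above.
START_V = 0.938    # the undervolt being trialled in this thread
CEILING_V = 1.025  # the lower of the two cards' default voltages
STEP_V = 0.013     # one step (the gap between 1.025 V and 1.037 V)

def set_gpu_voltage(volts: float) -> None:
    # Stand-in: in practice this is done by hand in a tuning utility.
    print(f"[manual step] set both GPUs to {volts:.3f} V")

def stress_test_passes(volts: float) -> bool:
    # Stand-in: run FurMark/Kombustor or real WUs for hours and watch for
    # hangs. Here we simulate a card that needs at least 0.964 V.
    return volts >= 0.964

v = START_V
while v <= CEILING_V:
    set_gpu_voltage(v)
    if stress_test_passes(v):
        print(f"stable at {v:.3f} V; consider one extra step of headroom")
        break
    v += STEP_V
else:
    print("no stable voltage found below the default; stay at stock")
```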
---

**skgiven** · *Joined: 23 Apr 09 · Posts: 3,968 · Credit: 1,995,359,260 · RAC: 0*

I like your idealistic approach to tuning. When you find what you think is a stable voltage, I would suggest you raise it by one more step, as unknown headroom. In the long run you might have to repeat this tuning: new tasks come along fast here, and new tasks make different demands on the GPU. In the very long run GPUs deteriorate, and the original voltages might not be so solid either. I believe GPUs are volted individually (or in small batches) at the factory, so clocks will rarely be the same for different cards. At GPUGRID, with swan_sync one full CPU core/thread is used per GPUGRID task, so some heat will be generated anyway, even if you don't crunch other CPU tasks. Good luck,
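(In BOINC clients from version 7 onward, this one-full-core-per-GPU-task reservation can also be declared to the scheduler with an app_config.xml in the project directory. A sketch, assuming the app name is `acemdlong` — that name is a guess, so check the actual app names in your client's event log:)

```xml
<!-- app_config.xml in the GPUGRID project directory: a sketch.
     "acemdlong" is an assumed app name; verify it in your event log. -->
<app_config>
  <app>
    <name>acemdlong</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>   <!-- one task per GPU -->
      <cpu_usage>1.0</cpu_usage>   <!-- reserve a full CPU core/thread -->
    </gpu_versions>
  </app>
</app_config>
```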
---

*Joined: 17 Jan 11 · Posts: 32 · Credit: 11,307,639 · RAC: 0*

Hello again! I have been testing my PC for 2½ days now with both GPUs at 1.050 V (that is, an overvolt of 0.025 V and 0.013 V respectively for the two GPUs) and everything is running smoothly. Temps are at 68°C, 64°C and 50-51°C for GPU1, GPU2 and the system respectively. Those are the maximum values recorded; I have noticed that they change with different WUs. For instance, when both GPUs are running "long runs" WUs I see the figures above, but with "ACEMD" WUs the temps are lower, around 62°C for both GPUs and 49°C for the system. I'm beginning to think my problem is finally solved.
---

*Joined: 17 Jan 11 · Posts: 32 · Credit: 11,307,639 · RAC: 0*

Everything is running fine, so I believe all the problems are solved.