Message boards : Number crunching : Really long runs
Joined: 26 Jun 09 · Posts: 815 · Credit: 1,470,385,294 · RAC: 0
Yes I know, Jacob, and I certainly will test your work-around. I was just sharing the information I see on my system. The 780Ti runs at 1045.3 MHz at the moment, still at 70°C, with no other jobs on the CPU or the 770. To me it is a bit strange that when the second GPU starts to work and the CPU starts to do 5 tasks, and the temperature is not changing, the core clock of the first (primary) GPU falls. But that could easily be a lack of knowledge on my side.

Greetings from TJ
Joined: 11 Oct 08 · Posts: 1127 · Credit: 1,901,927,545 · RAC: 0
The behavior I have seen, even with the latest drivers, is that without my "Force Max Boost" workaround, even when a GPUGrid unit is being crunched at a sub-70°C temperature, GPU-Z will show the clock being downclocked, with PerfCap reason "Util". Basically, the drivers don't think the GPU is being worked hard enough to warrant Max Boost. The behavior may become more prominent as more CPU tasks "eat into" the CPU time needed by the acemd process, thus lowering GPU usage %.

skgiven believes it is a driver bug. I believe it is just silly driver design by NVIDIA. I've privately reported it to them, but my inclination is that the behavior won't be changed. Thus, I Force Max Boost. To hell with their stupid design. :)
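The details of the "Force Max Boost" workaround itself aren't given here, but if you want to log this downclocking yourself instead of watching GPU-Z, a minimal monitoring sketch like the following works, assuming the `pynvml` Python bindings (the `nvidia-ml-py` package) are installed; the device index and polling interval are just illustrative:

```python
import time
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU; change the index for a second card

try:
    while True:
        clock = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)      # MHz
        util = pynvml.nvmlDeviceGetUtilizationRates(gpu).gpu                        # %
        temp = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)    # °C
        power = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0                        # W
        print(f"{clock:4d} MHz  {util:3d}% load  {temp:2d} °C  {power:5.1f} W")
        time.sleep(5)
finally:
    pynvml.nvmlShutdown()
```

If the clock drops while the temperature stays below the target and the GPU load dips, that matches the "Util" PerfCap behaviour described above rather than thermal throttling.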
dskagcommunity · Joined: 28 Apr 11 · Posts: 463 · Credit: 979,266,958 · RAC: 69,635
Today I found a new one: hehe, yes, THESE are really long ones, 67k seconds on a GTX 570 ^^ I never saw more than 200k credits for a single workunit.

DSKAG Austria: http://www.dskag.at
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
The 780Ti runs at 1045.3 MHz at the moment, still at 70°C, with no other jobs on the CPU or the 770. To me it is a bit strange that when the second GPU starts to work and the CPU starts to do 5 tasks, and the temperature is not changing, the core clock of the first (primary) GPU falls.

TJ, this sounds like your temperature target is set to 70°C for the 780Ti, and all you're seeing is Boost working exactly as it should. If the CPU and the 770 are idle, the 780Ti boosts until it reaches 1045 MHz. At this point the GPU consumes a given amount of power, let's call it P. Let's assume an ambient temperature of 25°C in the case (the exact number doesn't matter). The power draw P warms the GPU up and the fan ramps its speed up. At 70°C an equilibrium between heating (P) and cooling via the fan is reached. Note that "at 70°C an equilibrium is reached" is not a coincidence: the GPU has adjusted P via its boost state so that this happens.

If the CPU and the 770 are loaded, they dump some amount of heat into your case. How much is irrelevant, but let's assume the GPU fan on the 780Ti now sucks in 30°C air instead of 25°C. The GPU temperature target is still 70°C, so the boost state has to be adjusted again to hit exactly that temperature. At 70°C the GPU fan will spin just as fast as it did without the other loads (it follows the same fan curve). That means the "cooling power" is the same, but now the chip can only be allowed to warm up by 70 - 30 = 40°C instead of the 45°C it had without other loads. To achieve this, either the cooling must become stronger (which we already ruled out) or the amount of heat to be removed must be reduced. Hence the power draw has to drop to approximately P * 40/45 ≈ 89% of P, i.e. a lower boost state must be chosen.

You should be able to verify this by comparing the power consumption reported by e.g. GPU-Z for both cases. The tool should also show temperature as the "PerfCap Reason".

This explanation is a bit long-winded, but I tried to give you some more background to make it clear. I hope it doesn't distract from the main point, which isn't all that complicated. :)

BTW: running Milkyway on the 770 is a bad idea, as it's really inefficient there. Leave that to the big AMDs.

MrS
Scanning for our furry friends since Jan 2002
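To make the proportionality argument above explicit, here is a tiny Python sketch of the same arithmetic; the 70/25/30 °C figures are MrS's illustrative numbers, not measurements:

```python
# At a fixed temperature target and fan speed, the heat the cooler can move is
# roughly proportional to the difference between GPU temperature and intake air.
def allowed_power_fraction(target_c, ambient_before_c, ambient_after_c):
    """Fraction of the original power the GPU can still dissipate once the
    intake air warms up, with the same temperature target and fan curve."""
    return (target_c - ambient_after_c) / (target_c - ambient_before_c)

frac = allowed_power_fraction(target_c=70, ambient_before_c=25, ambient_after_c=30)
print(f"allowed power: {frac:.0%} of P")  # ~89% of P, matching the 40/45 figure above
```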
Joined: 26 Jun 09 · Posts: 815 · Credit: 1,470,385,294 · RAC: 0
Thank you for the extended explanation, ETA. Nice to read. I thought more or less the same, after watching all the GPU-Z readings and changing settings. I see that with the latest driver it is indeed a bit faster, so skgiven is right about that. But I see a higher temperature on the 780Ti. I have now set the target temperature to 72°C and the power target to 88%. The temperature of the card is 74°C though, and the clock is 875.7 MHz, which is its base clock. GPU load is 84%; I can increase that, but then the temperature goes up as well. With the older 331.82 driver I also saw the clock at 875.7 MHz, with a temperature of 68-71°C and a GPU load of 86-91% (depending on the WU type). It could be a hasty conclusion, but the 331.82 driver seems better for my card, with lower temperature and higher usage.

Edit: I know that AMD cards are better for Milkyway, but I use it for testing with nVidia as their WUs run fast. All the nVidia cards I have that are capable of GPUGRID do only GPUGRID.

Greetings from TJ
Beyond · Joined: 23 Nov 08 · Posts: 1112 · Credit: 6,162,416,256 · RAC: 0
These are the first WUs I've seen that won't make the 24-hour bonus limit on 650 Ti cards. Is the 650 Ti reaching EOL for GPUGrid? I hope not.
Mumak · Joined: 7 Dec 12 · Posts: 92 · Credit: 225,897,225 · RAC: 0
There's no reason for the 650 Ti to be EOL. I was lucky: the fan on my 650 Ti died, so I RMA'd it and was hoping that a replacement wouldn't be possible ;-) And indeed, there wasn't one, so I got a new 750 Ti in exchange :-) I like that card: really low power, low temperature and very nice performance for such a small card.
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
EOL would be too harsh... but it's certainly not recommended to buy one now. Actually, I have been hesitant to recommend such "small" cards for some time now, considering the GTX 660 has been quite a bit faster and relatively cheap for quite a while.

MrS
Scanning for our furry friends since Jan 2002
Joined: 11 Oct 08 · Posts: 1127 · Credit: 1,901,927,545 · RAC: 0
I still have an eVGA GTX 460 OC happily crunching GPUGrid tasks, including "Long-run" tasks which occasionally give me a "Really long run". Despite being an older card, it is still providing useful results. Just because a GPU now takes longer than 24 hours to return a GPUGrid result does NOT mean it is EOL or useless... Even my GTS 240 is working away on other projects, where it is not useless.
Beyond · Joined: 23 Nov 08 · Posts: 1112 · Credit: 6,162,416,256 · RAC: 0
Mumak, ET & Jacob: saying the 650 Ti was EOL was my attempt at a bit of hyperbole. However, if WUs such as these GIANNI_lig become the norm, the 650 Ti will be at a significant disadvantage (but, as you say, still quite usable). I'd certainly opt for the 750 Ti at this point. According to my Kill-a-Watt, swapping in a 750 Ti for a GTX 460 or 560 lowered power use by 100 watts. Since electricity here is $0.09/kWh and the boxes run 24/7, that's ~$79.00/year in savings if my calculations are correct. The 750 Ti (EVGA SC) yielded more or less a 35% WU speedup compared to the 650 Ti (the 650 Ti is a bit faster than the 560 and a lot faster than the 460). On low-GPU-usage WUs (only the GERARD_A2ARNUL and ISBA_MULTISCALELAB_GERARD at this point, AFAIK) the speed advantage here is somewhere around 25% compared to the 650 Ti. It will be interesting to see what the efficiency of the 20nm Maxwells will be.
Joined: 5 May 13 · Posts: 187 · Credit: 349,254,454 · RAC: 0
I was "annoyed" at first to see the estimated time of the first GIANNI_lig3 I got (~29 hours) on my 650Ti, but the resultant credit of >180K wasn't bad at all! In total, I got two of these:

| Task | Work unit | Computer | Sent | Reported | Status | Run time (s) | CPU time (s) | Credit | Application |
|---|---|---|---|---|---|---|---|---|---|
| 9758880 | 6976288 | 171276 | 30 Apr 2014, 2:35:29 UTC | 1 May 2014, 9:42:45 UTC | Completed and validated | 105,987.00 | 5,605.40 | 181,250.00 | Long runs (8-12 hours on fastest card) v8.21 (cuda60) |

I may be missing the full credit bonus, but the half-bonus (25%?) isn't half bad either! Hey, 180K is about my average credit! So my feeling is that the 650Ti is not reaching EOL or becoming redundant just yet; it still has some juice! :)

That said, I am tempted to pair it in my box with a 750Ti, it's a sweet little card! I would be at ~80% of the Titan at a laughably low cost in comparison!
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
I would be at ~80% of the Titan at a laughably low cost in comparison!

While that's probably true (I didn't check the numbers), keep in mind that any comparison to the Titan based on anything remotely related to "value" is a default win for whoever the contender is. The only thing the Titan has going for it is strong DP performance in nVidia land for cheaper than the alternatives (Tesla & Quadro). But luckily DP is completely irrelevant for GPU-Grid.

MrS
Scanning for our furry friends since Jan 2002
Joined: 17 Feb 13 · Posts: 181 · Credit: 144,871,276 · RAC: 0
I am happy to see Vagelis' opinion of the 650Ti. I have two of them processing GPUGrid projects and cannot afford to replace them. They are a little slow compared to others, but they work.

I was "annoyed" at first to see the estimated time of the first GIANNI_lig3 I got (~29 hours) on my 650Ti, but the resultant credit of >180K wasn't bad at all! In total, I got two of these: