Message boards : Graphics cards (GPUs) : Poor times with 780 ti
TJ · Joined: 26 Jun 09 · Posts: 815 · Credit: 1,470,385,294 · RAC: 0
I have increased the voltage of my 780Ti to 1.185 V and not rebooted the system. At first this didn't help. Temperature is steady at 72°C with an ambient temperature of 27°C. Eventually the clock speed went up from 875MHz to 1032MHz. It depends on the WU and varies while crunching a WU, but the clock speed is definitely higher. The GPU load however is not changing, staying around 75% with a Santi and just over 80% with a Noelia. The times however have increased with around 2000 seconds. So even for the crunchers with post-XP systems there is hope :) (I am still not confident enough to move over to Linux.)

Greetings from TJ
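For anyone who wants to watch whether the card is actually holding its boost clock under load, a small shell sketch like the one below can flag downclocking. The sample line mimics the CSV that `nvidia-smi --query-gpu=clocks.sm,temperature.gpu,utilization.gpu --format=csv,noheader` emits; the field layout and the 1000 MHz threshold are assumptions for illustration, not project-recommended values.

```shell
# Parse one sample line of nvidia-smi CSV output and flag when the
# SM clock has dropped below an assumed boost threshold of 1000 MHz.
# A real invocation (recent driver assumed) would be:
#   nvidia-smi --query-gpu=clocks.sm,temperature.gpu,utilization.gpu \
#              --format=csv,noheader -l 5
sample="1032 MHz, 72, 75 %"

# First field is "1032 MHz"; take the numeric part.
clock=$(echo "$sample" | awk -F', ' '{print $1}' | awk '{print $1}')

if [ "$clock" -lt 1000 ]; then
  echo "downclocked: ${clock} MHz"
else
  echo "boosting at ${clock} MHz"
fi
```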
TJ · Joined: 26 Jun 09 · Posts: 815 · Credit: 1,470,385,294 · RAC: 0
> Which exact temperatures differ? Can you please post which sensor it is and which value? Also, what other tools do you use that show different values?

I don't know which sensors are the problem, but I find all different readings when checking the CPU temps. Your program has a lot of information, which is great!

But I have an Asus MOBO, and Asus ships a software package with it for temperature control and readings. However, its readings are too high according to a lot of others with the same MOBO. Then there is CPUID HWMonitor, also used by many (gives very high readings with AMD CPUs), CoreTemp32, TThrottle and RealTemp. If I check my CPU temperature with all these programs, there is a range of 13 degrees between the readings! So for me, as I don't have the technical knowledge, it is difficult to decide which program I can believe.

The main advice is that if I use the hottest reading I should be safe. That is true, but if the CPU actually runs 13 degrees colder, I could set the fan lower, which reduces noise. And I wouldn't have to shut down my rigs so often in summer when ambient temps go up to 35°C. Therefore I need to know which program I can trust.

Greetings from TJ
Mumak · Joined: 7 Dec 12 · Posts: 92 · Credit: 225,897,225 · RAC: 0
> I don't know which sensors are the problem, but I find all different readings when checking the CPU temps. Your program has a lot of information, which is great!

I'll explain this; maybe more users are interested.

I suppose you have a Core2 or similar family CPU. These families didn't have a definite marginal temperature value programmed in. It's called Tj,max, and when software reads a core temperature from the CPU, it doesn't get a final value but an offset below that Tj,max. So if the reading gives x, all tools compute "temperature = Tj,max - x".

Now the problem is that nobody knows exactly what the correct Tj,max for a particular model of those families should be! Intel tried to clarify this, but caused more mess than real explanation. So this is why these tools differ: each of them believes a different Tj,max applies to your CPU. But the reality is that nobody knows this exactly. There have been several attempts to determine the correct Tj,max for certain models, and some folks ran large tests, but they all failed. So all of us can just guess.

The other issue with core temperatures is accuracy. If you're interested to know more, I wrote a post about that here: http://www.hwinfo.com/forum/Thread-CPU-Core-temperature-measuring-via-DTS-Facts-Fictions. Basically, on certain CPU families the accuracy of the temperature sensor was very bad, especially at temperatures below 50°C. So bad that you can't use it at all. That's the truth ;-)

So in your case, you'd better rely on the temperature of the external CPU diode.
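The "temperature = Tj,max - x" arithmetic above is enough to explain the 13-degree spread TJ sees. A minimal sketch, with purely illustrative Tj,max guesses (the tool names and values below are made up, not taken from any real monitoring software):

```python
# DTS sensors report a negative offset below Tj,max, not an absolute
# temperature. Each monitoring tool picks its own Tj,max guess, so the
# same raw reading produces different displayed temperatures.
def displayed_temp(tj_max_guess: int, dts_offset: int) -> int:
    """Temperature a tool shows, given its assumed Tj,max."""
    return tj_max_guess - dts_offset

raw_offset = 40  # the CPU reports: "40 degrees below Tj,max"

# Hypothetical tools, each assuming a different Tj,max:
for tool, tj_max_guess in {"tool A": 85, "tool B": 95, "tool C": 100}.items():
    print(f"{tool}: {displayed_temp(tj_max_guess, raw_offset)} °C")
# Identical hardware reading, yet displayed values span 15 degrees.
```

The raw offset is the same for every tool; only the assumed Tj,max differs, which is exactly why no amount of cross-checking between tools settles the question.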
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
> The times however have increased with around 2000 seconds. So even for the crunchers with post XP there is hope :)

I hope you meant decreased :)

FAQ's
HOW TO: - Opt out of Beta Tests - Ask for Help
TJ · Joined: 26 Jun 09 · Posts: 815 · Credit: 1,470,385,294 · RAC: 0
Thank you for the explanation, Mumak.

Greetings from TJ
TJ · Joined: 26 Jun 09 · Posts: 815 · Credit: 1,470,385,294 · RAC: 0
> The times however have increased with around 2000 seconds. So even for the crunchers with post XP there is hope :)

Yes indeed skgiven, the times are better (faster) now.

Greetings from TJ
Matt · Joined: 11 Jan 13 · Posts: 216 · Credit: 846,538,252 · RAC: 0
TJ, glad to see you figured out a way to get better performance out of your cards. Mine are at 1187 mV on max boost, so you're probably right where you should be. Your GPU utilization for Santi and Noelia tasks also looks about the same as on my cards.
TJ · Joined: 26 Jun 09 · Posts: 815 · Credit: 1,470,385,294 · RAC: 0
Yes, thank you Matt. As I saw better times from other crunchers with the same OS, it should be possible for me too. And I like to experiment a bit, changing only a little at a time to see the results. Perhaps I'll increase by 1 or 2 mV more to see if it gets a bit better. But so far I am happy with the results.

Greetings from TJ
[VENETO] sabayonino · Joined: 4 Apr 10 · Posts: 50 · Credit: 650,142,596 · RAC: 0
Hi guys :)

I have a similar problem with a 780Ti on Gentoo Linux. A long run is very, very slow to complete. Shorts run fine if I set an app_config with 1 CPU + 1 GPU (~1h15m - 1h40m); if a short runs with the default value, the WU takes a very long time (over 6-8 hours!). Sometimes the app_config is skipped and WUs start with 0.865 CPU + 1 GPU.

Now I am running a long-run WU (0.865 CPU + 1 GPU, the default), and after ~4h30m progress shows 14%. After suspending all other projects (except GPUGRID), this WU gained about 11% in ~20 minutes and is now at 25% progress!

[edit] After a while this WU was gone :( Now I'm running a new long run with GPUGRID only.
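The "1 CPU + 1 GPU" setting sabayonino describes is done with a BOINC `app_config.xml` in the project directory. A sketch of what such a file might look like follows; the app name `acemdlong` is an assumption and must be replaced with the application name the project actually reports in the client logs, otherwise BOINC ignores the entry (which could explain the "app_config is skipped" symptom).

```xml
<!-- Sketch of a BOINC app_config.xml that reserves a full CPU core per
     GPU task instead of the 0.865 default. The app name "acemdlong" is
     an assumption; it must match the project's real application name.
     Place the file in the project directory, e.g.
     projects/www.gpugrid.net/app_config.xml, then reread config files
     from the BOINC manager. -->
<app_config>
  <app>
    <name>acemdlong</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```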
[VENETO] sabayonino · Joined: 4 Apr 10 · Posts: 50 · Credit: 650,142,596 · RAC: 0
Completed this task. Only this task was running on this host. Long run, computation time ~5h, `<core_client_version>7.2.0</core_client_version>`. SWAN_SYNC is enabled in my environment variables.

This task got only 115,650.00 credits (I see that Windows hosts get 135,000.00 credits) :|

With a new task (long run) I started other projects (CPU only) and now it is slow in its processing steps :( (only 3% after 30 minutes). The previous task was at ~10% (or more) after ~20 minutes.

My system info:

```
Portage 2.2.8-r1 (default/linux/amd64/13.0, gcc-4.8.2, glibc-2.17, 3.13.6-gentoo x86_64)
=================================================================
System uname: Linux-3.13.6-gentoo-x86_64-Intel-R-_Core-TM-_i7-4770_CPU_@_3.40GHz-with-gentoo-2.2
KiB Mem:  16314020 total, 14649060 free
KiB Swap: 0 total, 0 free
Timestamp of tree: Sun, 16 Mar 2014 11:15:01 +0000
ld GNU ld (GNU Binutils) 2.23.2
distcc 3.1 x86_64-pc-linux-gnu [disabled]
ccache version 3.1.9 [enabled]
app-shells/bash:          4.2_p45
dev-lang/python:          2.7.5-r3, 3.3.3
dev-util/ccache:          3.1.9-r3
dev-util/cmake:           2.8.11.2
dev-util/pkgconfig:       0.28
sys-apps/baselayout:      2.2
sys-apps/openrc:          0.12.4
sys-apps/sandbox:         2.6-r1
sys-devel/autoconf:       2.13, 2.69
sys-devel/automake:       1.12.6, 1.13.4
sys-devel/binutils:       2.23.2
sys-devel/gcc:            4.7.3-r1, 4.8.2
sys-devel/gcc-config:     1.7.3
sys-devel/libtool:        2.4.2
sys-devel/make:           3.82-r4
sys-kernel/linux-headers: 3.13 (virtual/os-headers)
sys-libs/glibc:           2.17
Repositories: gentoo
ACCEPT_KEYWORDS="amd64"
ACCEPT_LICENSE="*"
CBUILD="x86_64-pc-linux-gnu"
CFLAGS="-O2 -march=native -pipe"
CHOST="x86_64-pc-linux-gnu"
nvidia-drivers: 331.49
```

[edit] After 1h of computation, progress is at 4.8% :(
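Since SWAN_SYNC only helps if the BOINC client actually inherits it, a minimal sketch of setting it is shown below. Where the export belongs depends on how BOINC is started (init script, systemd unit, user shell), and the exact value the ACEMD application expects has varied between versions, so both the placement and the value `1` here are assumptions to verify against the project's own instructions.

```shell
# Export SWAN_SYNC into the environment that will launch the BOINC
# client. The value "1" and this placement are assumptions; check the
# GPUGRID documentation for the value your app version expects.
export SWAN_SYNC=1

# Confirm the variable is visible before starting the client from
# this same environment.
echo "SWAN_SYNC=${SWAN_SYNC}"
```

After setting it, the client (and thus the science application) must be restarted from an environment where the variable is present, or the setting has no effect.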
TJ · Joined: 26 Jun 09 · Posts: 815 · Credit: 1,470,385,294 · RAC: 0
I finally got a Noelia task on my GTX780Ti, and it runs smoothly with a steady GPU usage of 90%, which is way better than the 74% of Santi's and 66-72% of Gianni's. So not only is WDDM hampering the performance of the 780Ti, but also the way a GPUGRID WU is programmed. As said before: I like the Noelia WUs.

Greetings from TJ
Retvari Zoltan · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0
> So not only WDDM is hampering the performance of the 780Ti but also the way a GPUGRID WU is programmed.

From your statement it seems that these are separate factors, but actually they aren't. I would say that for those workunits which (have to) do more CPU-GPU interaction, the performance hit from WDDM is larger.
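Retvari's point, that the WDDM penalty scales with how often a workunit crosses the CPU-GPU boundary, can be illustrated with a toy model: total runtime is the pure GPU compute time plus a fixed per-interaction latency added by the driver stack. All numbers below are invented for illustration; the real per-call WDDM overhead and interaction counts of GPUGRID workunits are not published here.

```python
# Toy model (illustrative numbers only): runtime = pure GPU time plus a
# fixed latency paid on every CPU-GPU round trip under WDDM.
def runtime(gpu_seconds: float, interactions: int, latency: float) -> float:
    return gpu_seconds + interactions * latency

LATENCY = 100e-6  # assumed 100 microseconds per round trip (made up)

# Two hypothetical workunits with identical GPU work but different
# amounts of CPU-GPU interaction:
light = runtime(10_000, 1_000_000, LATENCY)   # few interactions
heavy = runtime(10_000, 50_000_000, LATENCY)  # interaction-heavy

print(f"light WU: {light:.0f} s, overhead {light / 10_000 - 1:.0%}")
print(f"heavy WU: {heavy:.0f} s, overhead {heavy / 10_000 - 1:.0%}")
```

Under these assumed numbers the light workunit loses about 1% to the driver while the heavy one loses 50%, which is the sense in which the two "factors" are really one: WDDM sets the price, and the workunit's interaction pattern decides how often it is paid.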
©2025 Universitat Pompeu Fabra