
Message boards : Graphics cards (GPUs) : General buying advice for new GPU needed!

bozz4science
Message 55305 - Posted: 18 Sep 2020 | 9:13:04 UTC
Last modified: 18 Sep 2020 | 9:13:52 UTC

Currently I am facing a few questions I don't know the answers to, as I am quite new to the GPU market. With NVIDIA's next-gen RTX 3000 series arriving, I felt it was time to consider an "upgrade" myself. At the moment I am running a pretty old rig, an HP workstation with an older Xeon processor and a GTX 750 Ti OC Edition. So far I am pretty happy with it, but I would like a more efficient card based on the Turing chip, which seems to me a great balance between performance, efficiency and price. As I am constrained by the available x16 PCIe slots on my mobo, by cooling/airflow issues and by my power supply (475 W @ 80% efficiency), I only want to consider cards with ~100 W TDP and under 250€. So I wouldn't be able to run a dual-GPU rig, but would rather have to swap the old card for the new one. Having read up on the forum, I see that the GTX 1650 seems to run rather efficiently. I really can't afford any RTX 2000 or 3000 series card. With a GTX 1650 Super sitting at ~160€ right now, I am definitely considering upgrading. Thanks for any input!


Here are my questions.

1) Considering a GTX 1650 Super, I wonder if this is a great overall investment at this time? Would you have any other recommendations for a card with similar specs within the mentioned boundaries?

2) Does anyone expect a price spillover effect to the mid-range NVIDIA GPUs anytime soon, or will they probably be unaffected by the release of the new high-end RTX 3000 series?

3) Does anyone have experience with hardware purchases around Black Friday? I've never done it, so I have no prior experience of whether this period is a better time for a GPU purchase than right now. Can you usually score better deals around BF?

4) Compared to AMD cards, which don't run on GPUGrid due to the lack of CUDA, the NVIDIA cards sometimes seem to offer less bang for the buck and often lag behind similar AMD cards in features such as memory size, etc. Would I "overpay" for a GTX 1650 Super right now?

5) A concern for me is the 4 GB of GDDR6 memory as this seems rather low compared to other mid-tier GPUs. Is this future proof in terms of running BOINC for at least a couple years? Or is this potentially a real issue anytime soon?

6) Does memory size affect speed or only the capability of running a particular WU at all?

7) Considering a specific card model, let's say for example a GTX 1650 Super, how can I differentiate between the makes of the various manufacturers? To me the technical specs seem rather indistinguishable, with only the various cooling/fan designs being marketed as particularly innovative and quiet.
Cards from manufacturers such as ASUS, MSI, Zotac, EVGA, Gigabyte, etc. all have roughly the same boost clock, sit within about 25$/20€ of each other, have the same memory and memory type, the same power connector requirement (6-pin), etc. What do I need to watch out for here, and how should I rank these points? Is there any particular brand notorious for "dud" cards or bad customer service for card replacement within the warranty period? At the moment I'm leaning towards an MSI or ASUS card...

8) Regarding the GTX 1660 with its slightly higher TDP, moderate performance improvement and larger memory, would investing in this higher-tier card make more sense performance-wise?

9) I have previously bought 2 used GPUs on eBay; one ran smoothly but the other was dead on arrival, so I tend to second-guess buying used cards, as you never know how people have run them before. What experiences have you had with used cards? Have you run into similar issues? Would you consider buying such a card used? At what price difference to a new card does buying second-hand make sense? What about warranty issues, especially when running the card 24/7 here on BOINC?

Note: Regarding the TDP, I want to run quietly and avoid extreme temperatures, and considering the ~80% efficient Bronze PSU, a 100 W TDP card could easily pull 120 W when overclocked and under full load. That's my limit for the running cost of my rig 24/7, without needing to upgrade the whole rig.


Any advice much appreciated!

Keith Myers
Message 55306 - Posted: 18 Sep 2020 | 20:47:38 UTC

Have a look at the GPUFlops Theory vs Reality chart. The GTX 1660 Ti is top of the chart for efficiency.

https://setiathome.berkeley.edu/forum_thread.php?id=81962&postid=2018703

Only 120W, and it has 6GB of memory. GPUGrid likes fast memory transactions and wide memory bandwidth, but in practice it doesn't actually use that much memory for its tasks. The task currently running on my RTX 2080 shows only 300MB of memory in use, at 97% GPU utilization.

Other projects do, though: Einstein's Gravity Wave tasks, for example, can use over 4GB of memory to crunch a task.
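(If you want to see what your own card is doing, the memory-in-use and utilization figures above can be read out with NVIDIA's nvidia-smi tool, which ships with the driver; a sample query, refreshing every 5 seconds:)

    nvidia-smi --query-gpu=name,memory.used,memory.total,utilization.gpu --format=csv -l 5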

There might be some Black Friday loss-leader sales of the older generation aimed at people who still haven't been able to snag a new Ampere card. But as for whether prices in general will be lower: in past years that hasn't been the case in generational transition years. Steve at GN actually commented on this topic.

bozz4science
Message 55307 - Posted: 18 Sep 2020 | 21:39:49 UTC
Last modified: 18 Sep 2020 | 21:41:11 UTC

Well, thanks first of all for your timely answer! I very much appreciate the pointer to the efficiency comparison table; it's impressively detailed. It kind of confirms what I already suspected: the GTX 1660 / Ti would cost more up front but be well worth the investment, even though all Turing-based cards score pretty well. That is also very similar to the data ServicEnginIC shared with me in another recent post of mine.

I was probably looking for the easy and fast way to boost performance without having to upgrade my whole rig. A GTX 1660 Ti would require an 8 pin connector but my PSU only offers a single 6 pin. Most likely I'll end up postponing the GPU upgrade until I can afford to build a whole new system from scratch, substituting my Xeon for a Ryzen chip as well. I guess that's worth a bit more patience.

Interesting to see the GTX 750 Ti still making it to the top of the list performance-wise :)

This definitely reassured me in my plan to upgrade in the near future, and that a GTX 1660 model, either Ti or Super, will be a great choice. The decision to wait is based more on the timing of setting up a new rig than on aiming for a great deal on BF, though it might still be worth keeping an eye out for a great GTX 1660 deal!

I'll keep running my 750Ti 24/7 for now. At the current rate it takes ~10 days for 1m. It's gonna be a long journey :) Thanks Keith Myers for your advice!

Keith Myers
Message 55308 - Posted: 18 Sep 2020 | 22:36:56 UTC - in response to Message 55307.

A GTX 1660 Ti would require an 8 pin connector but my PSU only offers a single 6 pin.

What about some spare Molex connectors from the power supply? A lot of cards ship with Molex to PCIe power adapters included. I'm sure adapters to go from Molex to 8-pin are available online. Might be a solution.

But a new build solves a lot of compatibility issues since you are spec'ing out everything from scratch.

I like my Ryzen CPUs as they provide enough threads to crunch CPU tasks for a lot of projects and still leave enough CPU support for GPUGrid tasks on multiple GPUs.

bozz4science
Message 55311 - Posted: 19 Sep 2020 | 19:28:37 UTC - in response to Message 55308.

Thanks for this idea. I'll investigate this possibility further, though I actually prefer to wait now and plan the new rig first. Do you know how many watts a Molex connector supplies? I have read different values online so far...

Oh man, you got some nice rigs. I'll probably score well below your Threadripper and/or Ryzen 9. At the moment I like a Ryzen 3700X for my new setup. Definitely amazing if you compare the performance of a 3700X paired with a GTX1660 Ti vs. a Xeon X5660 @95W and a GTX750 Ti and especially its efficiency. As this is gonna be my first own build, it'll take some time doing my research and looking for the right parts. It's gonna be fun though :)

Thanks again for your insights!

Erich56
Message 55312 - Posted: 19 Sep 2020 | 20:33:16 UTC - in response to Message 55307.

Interesting to see the GTX 750 Ti still making it to the top of the list performance-wise :)

I am running a GTX 750 Ti in two of my old and small machines, and I wouldn't want to be without these cards :-) They have been doing a perfect job so far, for many years.

Pop Piasa
Message 55313 - Posted: 19 Sep 2020 | 22:02:18 UTC - in response to Message 55308.

Keith is correct that adapters are available. The power connector on the 1660 Ti is what's also known as a 6+2 pin VGA power plug; that might help with your search. We're also assuming that your PSU is at least 450W.

From what I have seen the 1660 super is a better buy, and the GTX3080 at $750 USD is now a best value in direct computing power (provided you can find one for sale). Check them out here:
https://www.videocardbenchmark.net/directCompute.html

I'm going to wait and see what the market does after the 3000 series hits the streets for a while.

Pop Piasa
Message 55314 - Posted: 19 Sep 2020 | 22:28:24 UTC

Something to think about, two GTX1660 Supers on a machine can potentially keep pace with a GTX2080.
That's $460 USD vs $800 for the 2080. Both solutions work equally well for research computation.

Pop Piasa
Message 55315 - Posted: 19 Sep 2020 | 23:37:58 UTC

Oops, I mistakenly called the RTX GPUs GTX. My bad.🙄

ServicEnginIC
Message 55316 - Posted: 19 Sep 2020 | 23:38:19 UTC - in response to Message 55312.

Interesting to see the GTX 750 Ti still making it to the top of the list performance-wise :)

I am running a GTX 750 Ti in two of my old and small machines, and I wouldn't want to be without these cards :-) They have been doing a perfect job so far, for many years.

I built this system from recovered parts to put a repaired GTX 750 Ti graphics card back to work.
It is currently scoring a 50K+ RAC at GPUGrid.
A curious collateral anecdote:
I attached this system to PrimeGrid to keep its CPU working as well.
In the end it collaborated as a double-checker system in discovering the prime number 885*2^2886389+1.
Being 868,893 digits long, it entered the T5K list of largest known primes at position 971 ;-)

Keith Myers
Message 55319 - Posted: 20 Sep 2020 | 4:56:38 UTC - in response to Message 55311.

Thanks for this idea. I'll investigate this possibility further, though I actually prefer to wait now and plan the new rig first. Do you know how many watts a Molex connector supplies? I have read different values online so far...

Oh man, you got some nice rigs. I'll probably score well below your Threadripper and/or Ryzen 9. At the moment I like a Ryzen 3700X for my new setup. Definitely amazing if you compare the performance of a 3700X paired with a GTX1660 Ti vs. a Xeon X5660 @95W and a GTX750 Ti and especially its efficiency. As this is gonna be my first own build, it'll take some time doing my research and looking for the right parts. It's gonna be fun though :)

Thanks again for your insights!

The standard Molex connector can supply 11A on the 12V pin, or 132W. A quick perusal of Amazon and Newegg.com showed a Molex to 8-pin PCIe adapter for $8.

bozz4science
Message 55320 - Posted: 20 Sep 2020 | 8:22:12 UTC - in response to Message 55319.

Thank you all for your comments. It definitely seems to me that all Turing-based cards are rather efficient and good bang for your buck :)

I'll further look into GTX 1650 vs. 1660 Super vs. 1660 Ti. The last two mentioned actually seem pretty similar to me with 125 W for 5.027 TFLOPs (F32) vs. 120 W for 5.437 TFLOPs according to Techpowerup.com. Currently I tend more towards the 1660 models as they come with 6GB of GDDR6 memory and that seems more future proof to me than only 4GB.

Using a Molex to 8-pin PCIe adapter in the meantime, to bridge the gap until I build my new system (since my current PSU lacks that connector), seems like a great idea. I just wanted to verify beforehand to avoid any instant-ignition surprise with the new card :) As my PSU delivers 475W, that shouldn't be an issue.

At the moment my GTX 750 Ti with the earlier mentioned OC setting, sitting at a 1365 MHz core clock, is pushing nearly 100k credit per day, with RAC taking a hit because I recently powered down the system for a few days for maintenance. So that's decent.

I guess that when I finish the new rig, probably not before the beginning of next year, I'll first run with a dual-GPU system with my GTX 750Ti and one of the aforementioned cards. Then if budget and running costs allow, I'll consider upgrading further. Keep in mind that electricity bills here in Germany tend to be roughly three times what other users are used to, and that can definitely influence hardware decisions when running a rig 24/7. An OC'd GTX 1660 Ti rated at 120 W, allowing for the efficiency of most PSUs, might easily pull 140-150 W from the wall; running 24/7 that works out to 24 h * 150 W = 3.6 kWh per day, which at the current rate of 0.33 €/kWh is about 1.19 € per day for a single GPU alone. So efficiency is on my mind, as the CPU and other peripherals also pull significant wattage... Still, I wouldn't like to have the power bill of some of the users here with a couple of RTX 2xxx or soon 3xxx cards :)
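For anyone who wants to redo that arithmetic with their own numbers, here is a minimal Python sketch; the wall draw and electricity price are the assumed values from above:

    # Back-of-the-envelope 24/7 running cost (assumed inputs, matching the figures above)
    wall_draw_w = 150        # estimated draw at the wall for an OC'd GTX 1660 Ti, in watts
    price_per_kwh = 0.33     # electricity price in EUR/kWh (German household rate assumed)

    kwh_per_day = wall_draw_w * 24 / 1000        # 3.6 kWh
    cost_per_day = kwh_per_day * price_per_kwh   # ~1.19 EUR
    cost_per_year = cost_per_day * 365           # ~434 EUR

    print(f"{kwh_per_day:.1f} kWh/day, {cost_per_day:.2f} EUR/day, ~{cost_per_year:.0f} EUR/year")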

And by the way, congrats ServicEnginIC on finding your first prime!

Thanks again

Erich56
Message 55325 - Posted: 21 Sep 2020 | 11:46:00 UTC - in response to Message 55320.

... I'll first run with a dual-GPU system with my GTX 750Ti and one of the aforementioned cards.

I am not sure whether you can mix different GPU types (Pascal, Turing) in the same machine.
One of the specialists here might give more information on this.

rod4x4
Message 55326 - Posted: 21 Sep 2020 | 12:16:43 UTC - in response to Message 55325.
Last modified: 21 Sep 2020 | 12:39:09 UTC

... I'll first run with a dual-GPU system with my GTX 750Ti and one of the aforementioned cards.

I am not sure whether you can mix different GPU types (Pascal, Turing) in the same machine.
One of the specialists here might give more information on this.

Yes, the cards can be mixed.

The only issue is that on a PC restart (or BOINC service restart) the GPUGrid tasks must attach to the same GPU they were started on, or the tasks will fail immediately.
Refer to this post by Retvari Zoltan for more information on this issue and remedial action.
http://www.gpugrid.net/forum_thread.php?id=5023&nowrap=true#53173

Also, refer to the ACEMD3 FAQ:
http://www.gpugrid.net/forum_thread.php?id=5002

Richard Haselgrove
Message 55327 - Posted: 21 Sep 2020 | 12:32:03 UTC - in response to Message 55325.

... I'll first run with a dual-GPU system with my GTX 750Ti and one of the aforementioned cards.

I am not sure whether you can mix different GPU types (Pascal, Turing) in the same machine.
One of the specialists here might give more information on this.

Yes, you can run two different GPUs in the same computer - my host 43404 has both a GTX 1660 SUPER and a GTX 750Ti, and they both crunch for this project.

Three points to note:

i) The server shows 2x GTX 1660 SUPER - that's a reporting restriction, and not true.
ii) You have to set a configuration flag 'use_all_gpus' in the configuration file cc_config.xml - see the User manual. Otherwise, only the 'better' GPU is used.
iii) This project - unusually, if not uniquely - can't start a task on one model of GPU and finish it on a different model of GPU. You need to take great care when stopping and re-starting BOINC, to make sure the tasks restart on their previous GPUs.
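As an illustration of point ii), a minimal cc_config.xml along these lines should do it (this is the standard BOINC client configuration file, placed in the BOINC data directory; restart the client or have it re-read its config files afterwards):

    <cc_config>
       <options>
          <use_all_gpus>1</use_all_gpus>
       </options>
    </cc_config>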

rod4x4
Message 55328 - Posted: 21 Sep 2020 | 12:59:18 UTC - in response to Message 55320.

Then if budget and running costs allow, I'll consider upgrading further. Keep in mind that electricity bills here in Germany tend to be roughly three times what other users are used to, and that can definitely influence hardware decisions when running a rig 24/7. An OC'd GTX 1660 Ti rated at 120 W, allowing for the efficiency of most PSUs, might easily pull 140-150 W from the wall; running 24/7 that works out to 24 h * 150 W = 3.6 kWh per day, which at the current rate of 0.33 €/kWh is about 1.19 € per day for a single GPU alone. So efficiency is on my mind, as the CPU and other peripherals also pull significant wattage... Still, I wouldn't like to have the power bill of some of the users here with a couple of RTX 2xxx or soon 3xxx cards :)

If running costs are an important factor, then consider the observations made in this post:
https://gpugrid.net/forum_thread.php?id=5113&nowrap=true#54573

ServicEnginIC
Message 55329 - Posted: 21 Sep 2020 | 15:58:44 UTC

Whatever the decision is, I'd recommend purchasing a well-cooled graphics card.
A heat pipe or vapour chamber based heatsink is better than a plain passive one.
My most powerful card currently in production is a factory overclocked GTX 1660 Ti.
Its cooling is based on a dual-fan cooler with a plain heatsink, and I've had to fight hard to keep temperatures within sane limits when processing at full (120 Watts) power.
The last card I purchased was a GTX 1650 Super, and I'm very satisfied with its well employed 100 Watts of power.
I'm still evaluating performance on this system, but in a flash:
- GTX 750 Ti OC, ADRIA tasks mean execution time: 73741 seconds
- GTX 1650 Super, ADRIA tasks mean execution time: 18634 seconds

rod4x4
Message 55333 - Posted: 21 Sep 2020 | 23:07:20 UTC - in response to Message 55329.
Last modified: 22 Sep 2020 | 0:00:20 UTC

Whatever the decision is, I'd recommend purchasing a well-cooled graphics card.
A heat pipe or vapour chamber based heatsink is better than a plain passive one.
My most powerful card currently in production is a factory overclocked GTX 1660 Ti.
Its cooling is based on a dual-fan cooler with a plain heatsink, and I've had to fight hard to keep temperatures within sane limits when processing at full (120 Watts) power.
The last card I purchased was a GTX 1650 Super, and I'm very satisfied with its well employed 100 Watts of power.
I'm still evaluating performance on this system, but in a flash:
- GTX 750 Ti OC, ADRIA tasks mean execution time: 73741 seconds
- GTX 1650 Super, ADRIA tasks mean execution time: 18634 seconds

+1 on attention to cooling ability

(The GTX 1650 Super, when power-limited to 70W (its minimum), has an ADRIA task execution time of 20800 seconds)

EDIT: GERARD has just released some work units that will take a GTX 750 Ti over 1 day to complete. A good time to retire the venerable GTX 750 Ti.

bozz4science
Message 55335 - Posted: 22 Sep 2020 | 8:48:19 UTC

Thank you all! That thread turned out to be a treasure trove of information. I will keep referring to it in the future as it is almost like a timelier version of the FAQs.

Interesting to know that running two or more GPUs within one system is indeed possible even if the chipsets/architectures are different. However it really seems like a lot of pain to implement this in practice (just rebooting for an update...)

Purchasing a powerful 250W card and then immediately reducing its power limit seemed a bit counterintuitive at first, but I guess efficiency-wise it makes total sense. Just like driving a 100 PS car at 60mph is fine and uses the least fuel, while ramping it up to 90 or even 120mph is doable but takes a lot more fuel. So I'll give this a thought.

Dual-fan cooling is a must-have for me, but thanks anyway for the pointer. A passively cooled one, especially in a small case, must really be horrible temperature-wise.

I have just scrolled through the valid tasks of some non-hidden hosts of users in this thread, and some ADRIA tasks took nearly 10 hrs (~33000 sec) even on newer cards such as the GTX 1650 and GTX 1650 Super. I never saw a GERARD task, but would love to get either of the mentioned ones myself. These must be real monsters :)

That definitely supports my idea of considering an upgrade. Thanks again guys

Erich56
Message 55336 - Posted: 22 Sep 2020 | 9:46:27 UTC - in response to Message 55320.

bozz4science wrote:

At the moment my GTX 750 Ti with the earlier mentioned OC setting, sitting at a 1365 MHz core clock, is pushing nearly 100k credit per day...

may I ask at which temperature this card runs at a 1365 MHz core clock?

Question also to the other guys here who mentioned that they are running a GTX 750 Ti: which core clocks at which temperatures?

Thanks in advance for your replies :-)

Keith Myers
Message 55337 - Posted: 22 Sep 2020 | 19:34:14 UTC

Anybody found a RTX 3080 crunching GPUGrid tasks yet?

Found one that crunched Einstein, Milkyway and Primegrid.

27% faster on GR tasks at Einstein.
30% slower on MW separation tasks because of FP64 being halved to 1:64 from Turing's 1:32 FP64 compute.

2X faster on PPSieve CUDA app at Primegrid compared to RTX 2080 Ti.

3X faster on PPSieve CUDA app at Primegrid compared to GTX 1080 Ti.

ServicEnginIC
Message 55338 - Posted: 22 Sep 2020 | 22:29:19 UTC - in response to Message 55336.

Question also to the other guys here who mentioned that they are running a GTX 750 Ti: which core clocks at which temperatures?

On this 4-core CPU system, based on a GTX 750 Ti running at 1150 MHz, temperature peaks at 66 ºC, as seen in this Psensor screenshot.
At the moment the screenshot was taken, three Rosetta@home CPU tasks were in process, using 3 CPU cores, along with one TONI ACEMD3 GPU task using 100% of the GPU and the remaining CPU core to feed it.
Processing room temperature: 29.4 ºC

PS: Some perceptive observer might have noted that in the previous Psensor screenshot, max "CPU usage" was reported as 104%... Nobody is perfect.

bozz4science
Message 55339 - Posted: 23 Sep 2020 | 8:08:01 UTC
Last modified: 23 Sep 2020 | 8:09:11 UTC

Question also to the other guys here who mentioned that they are running a GTX 750 Ti: which core clocks at which temperatures?


For starters keep in mind that I have a factory overclocked card, an Asus dual fan gtx 750 ti OC. Thus to achieve this OC, I don't have to add much on my own. Usually I apply a 120-135 MHz overclock to both core and memory clock that seems to yield me a rather stable setup. Minor hiccups with the occasional invalid result every week or so.

This card is running in an old HP Z 400 workstation with bad to moderate airflow. See here: http://www.gpugrid.net/show_host_detail.php?hostid=555078 Thus, I adjusted the fan curve of the card slightly upwards to help with that. 11 out of 12 HT threads run, always leaving one thread overhead for the system. 10 run CPU tasks, 1 is dedicated to the GPU task.

The card usually sits between 60-63 ºC; I've never seen temps above that range. When ambient temperatures are low to moderate (18-25 ºC) the fans usually run at the 50% mark, for higher ambient temps (25-30 ºC) they run at 58%, and above 30 ºC they usually run at 66%. Running this OC setting at higher ambient temps means it is harder to maintain the boost clock, so the clock rather fluctuates around it. The card is always under 100% CUDA compute load.

I still hope that the card's fans will slow down going into the autumn/winter season, with ambient temps being great for cooling. The next lower fan setting is at 38%, and at that level you usually can't hear the card even when it's crunching away at its limit.

Hope that helps.

ServicEnginIC
Message 55340 - Posted: 23 Sep 2020 | 9:48:25 UTC - in response to Message 55339.

Thank you very much for your pleasant comments. The forums have gained an excellent explainer in you!

Retvari Zoltan
Message 55341 - Posted: 23 Sep 2020 | 20:26:17 UTC - in response to Message 55339.

I have a factory overclocked card, an Asus dual fan gtx 750 ti OC. Thus to achieve this OC, I don't have to add much on my own. Usually I apply a 120-135 MHz overclock to both core and memory clock that seems to yield me a rather stable setup. Minor hiccups with the occasional invalid result every week or so.
If there are any invalid results, you should lower the clocks of your GPU and/or its memory. An invalid result in a week could cause more loss in your RAC than the gain of overclocking. Overclocking the memory of the GPU is not recommended. Your card tolerates the overclocking because of two factors:
1. the GTX 750Ti is a relatively small chip (smaller chips tolerate more overclocking)
2. you overcommit your CPU with CPU tasks, and this hinders the performance of your GPU tasks. This has no significant effect on the points per day of a smaller GPU, but a high-end GPU will lose significant PPD under this condition. Perhaps the hiccups happen when there are not enough CPU tasks running simultaneously, making your CPU feed the GPU a bit faster than usual, and these rare circumstances reveal that it's overclocked too much.

11 out of 12 HT threads run, always leaving one thread overhead for the system. 10 run CPU tasks, 1 is dedicated to the GPU task.
I recommend running only as many CPU tasks simultaneously as your CPU has cores (6 in your case). You'll get halved processing times on CPU tasks (more or less, depending on the tasks), and probably a slight decrease in the processing time of GPUGrid tasks. If your system had a high-end GPU I would recommend running only 1 CPU task and 1 GPU task; however, these numbers depend on the CPU tasks and the GPU tasks as well. Different GPUGrid batches use different numbers of CPU cycles, so some batches suffer more from an over-committed CPU.
Different CPU tasks utilize the memory/cache subsystem to a different extent:
Running many single-threaded CPU tasks simultaneously (the most common approach in BOINC) is the worst case, as this scenario results in multiplied data sets in RAM. Operating on those simultaneously needs multiplied memory cycles, which results in increased cache misses and uses up all the available memory bandwidth. So the tasks will spend their time waiting for the data, instead of processing it. For example: Rosetta@home tasks usually need a lot of memory, so these tasks hinder not just each other's performance, but the performance of GPUGrid tasks as well.

General advice for building systems for crunching GPUGrid tasks:
A single system cannot excel in both CPU and GPU crunching at the same time, so I build systems for GPU (GPUGrid) crunching with low-end (i3) CPUs; this way I don't mind that the CPU's only job is to feed a high-end GPU unhindered (by CPU tasks).

eXaPower
Message 55342 - Posted: 24 Sep 2020 | 1:50:04 UTC - in response to Message 55341.
Last modified: 24 Sep 2020 | 2:00:07 UTC

It's too broad a statement to say that small Maxwell chips don't last as long as larger Pascal or Turing chips. In my experience they have an equal chance of dying: a GTX 750 at 1450MHz died after 18 months, while a Turing at 2.0GHz died within 24 hours. GPU lifespan is essentially random, no matter the size of the die.

Yes, I overclock, and whether a card ends up malfunctioning depends on the settings the user chooses; a bunch of different GPU generations have had problems. Turing has had the worst lifetime for me compared to the others.

bozz4science
Message 55400 - Posted: 30 Sep 2020 | 22:15:54 UTC
Last modified: 30 Sep 2020 | 22:16:36 UTC

Thank you all for getting back to me! I guess I first had to digest all this information.

Thanks as well ServiceEnginIC for your kind comment :)

Thanks Zoltán for your detailed explanations!

If there are any invalid results, you should lower the clocks of your GPU and/or its memory. An invalid result in a week could cause more loss in your RAC than the gain of overclocking.

This thought had already crossed my mind, but I never thought it through. Your answer is very logical, so I guess I will monitor my OC setting a bit more closely. I haven't had any invalid results for at least a week, so the successively lowered OC setting has finally reached a stable level. Temps and fans are at a very moderate level. But it definitely makes sense that the "hiccups", together with an overcommitted CPU, would strongly penalise the performance of a higher-end GPU.

I recommend running only as many CPU tasks simultaneously as your CPU has cores (6 in your case).

Thanks for this advice. I had read the debate about HT vs. not HT'ing your physical cores, and coming from the WCG forums, where the focus is on improving your speed towards runtime-based badges, I thought that with HT I would not only double my runtime (since virtual threads count equally there) but also see some efficiency gains. What I had seen so far was an improvement in points throughput of roughly 2-5% over the course of a week, based solely on WCG tasks. However, I hadn't considered what you outlined here so well, and I have already returned to using 6 threads only. One curious question though: Will running only 6 out of 12 HT threads while HT is enabled in the BIOS effectively give the same result as running 100% of the cores with HT turned off in the BIOS?

So the tasks will spend their time waiting for the data, instead of processing it.

This is what I could never really put my finger on so far. What I saw while HT was turned on was that some tasks more than doubled their average runtime while some stayed below the double-average mark. What I want to convey here is that there was much more variability in the tasks, which is consistent with what you describe. I guess some tasks got priority in the CPU queue while others were really waiting for data; those that skipped the line didn't quite double their runtime, while others more than doubled it by quite some margin. Also, having thought that the same WUs on 6 physical cores would generate the same strain on my system as running WUs on all 12 HT threads, I saw that CPU temps ran roughly 3-4 degrees higher (~ 4-6%) while at the same time my heatsink fan revved up about 12.5-15% to keep up with the increase in temps.

A single system cannot excel in both CPU and GPU crunching at the same time.

As I plan my new system build to be a one-for all solution, I won't be able to execute on this advice. I do plan however to keep more headroom for GPU related tasks. But I am still speccing the system as I am approaching the winter months. All I am sure of so far is that I want to base it on a 3700X.

When I read
Turing has had the worst lifetime for me compared to the others.
I questioned my initial gut feeling about going with a Turing-based GTX 1660 Ti. For me it seemed like the sweet spot in terms of TDP/power and efficiency, as seen in various benchmarks. Looking at the data I posted today from F@H, I do however wonder whether a GTX 1660 Ti will keep up with the pace of hardware innovation we currently see. I don't want to have my system basically rendered outdated in just 1 or 2 years. Keep in mind that this comes from someone running an i5 4278U as the most powerful piece of silicon at the moment. I don't mean to keep up with every gen and continually upgrade my rig, and I know that no system will be able to maintain an awesome relative benchmark against the ever-rising average compute power of volunteers' machines over the years, but I want to build a solid system that will do me good for at least some years. And now, in retrospect, a mere GTX 1660 Ti seems rather "low-end". Even an older 1080 Ti can easily outperform this card.

Something to think about, two GTX1660 Supers on a machine can potentially keep pace with a GTX2080.
That's $460 USD vs $800 for the 2080.

From what I see, 2x 1660 Ti would essentially yield the same performance as an RTX 2060. That goes in the direction of what Pop Piasa initially put up for discussion: basically the same performance, but for a much more reasonable price. While power draw and efficiency are of concern to me, I do see the GTX 16xx / Super / Ti as especially constrained on the VRAM side.

And as discussed with Erich56, rod4X4 and Richard Haselgrove, I now know that while I have to pay attention to a few preliminary steps, running a system with 2 GPUs simultaneously is possible.

Then I am coming back to the statement of Keith Myers who kindly pointed me in the direction of the GTX 1660 Ti, especially after I stressed that efficiency is a rather important issue for me.
Have a look at the GPUFlops Theory vs Reality chart. The GTX 1660 Ti is top of the chart for efficiency.

That is what supported my initial gut feeling of looking within the GTX 16xx gen for a suitable upgrade. At least I am still very much convinced that my upgrade will include a Ryzen chip! :)

Maybe wrapping this up, and apologies for this rather unconventional and long post: I have taken away a lot from this thread so far.

1) I will scale back to physical cores only and turn HT off.
2) OC'ing a card will ultimately decrease its lifetime, and there is a tradeoff between performance improvement and stability issues.
3) OC'ing a card makes absolutely no sense efficiency-wise if that is a factor you consider. Underclocking might be the way to reach the efficiency sweet spot.
4) Some consumer-grade cards have a compute penalty in place that can potentially be worked around by overclocking the memory clock to revert back to the P0 power state.
5) An adapter goes a long way toward solving any potential PSU connectivity issues.
6) Planning a system build/upgrade is a real pain, as you have so many variables to consider, hardware choices to pick from, headroom to leave for future upgrades, etc.
7) There is always someone around here who has an answer to your question :)

Thanks again for all your replies. I will continue my search for the ultimate GPU for my upgrade. I am now considering an RTX 2060 Super, which currently retails at 350€ vs. a GTX 1660 Ti at 260€. The RTX would sit at a 175 W TDP, which is about my 750 Ti and a new 1660 Ti combined. So many considerations.

Retvari Zoltan
Message 55401 - Posted: 30 Sep 2020 | 23:47:47 UTC - in response to Message 55400.
Last modified: 30 Sep 2020 | 23:49:21 UTC

Will running only 6 out of 12 HT threads while HT is enabled in the BIOS effectively give the same result as running 100% of the cores with HT turned off in the BIOS?
Of course not. The main reason for turning HT off in the BIOS is to strengthen the system against cache sideband attacks (like spectre, meltdown, etc.). I recommend to leave HT on in the BIOS, because this way the system can still use the free threads for its own purposes, or you can increase the number of running CPU tasks if your RAC increases accordingly.
See this post.

What I want to convey here is that there was much more variability in the tasks, which is consistent with what you describe.
That's true.

Also, having thought that the same WUs on 6 physical cores would generate the same strain on my system as running WUs on all 12 HT threads, I saw that CPU temps ran roughly 3-4 degrees higher (~ 4-6%) while at the same time my heatsink fan revved up about 12.5-15% to keep up with the increase in temps.
I lost you. The temps got higher with HT on? It's normal btw. Depending on the CPU and the tasks the opposite could happen also.

As I plan my new system build to be a one-for all solution, I won't be able to execute on this advice.
That's ok. My advice was a "warning". Some might get frustrated that my i3/2080Ti performs better at GPUGrid than an i9-10900k/2080Ti (because of its overcommitted CPU).

When I read
Turing has had the worst lifetime for me compared to the others.
I questioned my initial gut feeling about going with a Turing-based GTX 1660 Ti.
I have been running 4 2080 Tis for 2 years without any failures so far. However, I have a second-hand 1660 Ti which sometimes shows stripes (it needs to be "re-balled").

I do however wonder whether a GTX 1660 Ti will keep up with the pace of hardware innovation we currently see. I don't want to have my system basically rendered outdated in just 1 or 2 years.
I would wait for the mid-range Ampere cards. Perhaps there will be some without raytracing cores. Or if not, the 3060 could be the best choice for you. Second hand RTX cards (2060S) could be very cheap considering their performance.

bozz4science
Message 55421 - Posted: 4 Oct 2020 | 15:59:12 UTC
Last modified: 4 Oct 2020 | 15:59:36 UTC

Thanks Zoltán for your insights!

Of course not. ... I recommend to leave HT on in the BIOS
Great explanation. Will do!

I lost you. The temps got higher with HT on?
Yeah, that got pretty confusing. What I tried to convey was that with HT off and all 6 physical cores running at 100% system load, versus HT on and 12 virtual threads running on my 6-core system, the latter produced more heat, and the fans revved up considerably compared to the non-HT scenario.

Running only 6 of the 12 HT threads produces a comparable result to leaving HT turned off and running all 6 physical cores. Hope that makes sense.

That's ok. My advice was a "warning". Some might get frustrated that my i3/2080Ti performs better at GPUGrid than an i9-10900k/2080Ti (because of its overcommitted CPU).
Couldn't this be solved by just leaving more than 1 thread for the GPU tasks? What about the impact on a dual-/multi GPU setup? Would this effect be even more pronounced here?

I have been running 4 2080 Tis for 2 years without any failures so far.
Well, that is at least a bit reassuring. But after all, running those cards 24/7 at full load is a tremendous effort. Surely their longevity decreases as a result of hardcore crunching.

I would wait for the mid-range Ampere cards. Perhaps there will be some without raytracing cores. Or if not, the 3060 could be the best choice for you. Second hand RTX cards (2060S) could be very cheap considering their performance.
Well, thank you very much for your advice! Unfortunately, I am budget-constrained for my build at the ~1000 Euro mark and have to start from scratch, as I don't have any parts that can be reused. Factoring in all the other components (PSU, motherboard, CPU heatsink, fans, case, etc.), I will not be able to afford any GPU beyond the 300€ mark for the moment. I'll probably settle for a 1660 Ti/Super, where I currently see the sweet spot between price and performance. I hope it'll complement the rest of my system well. I will then seize the next opportunity (probably 2021/22) for a GPU upgrade. We'll see what NVIDIA delivers in the meantime, and hopefully by then I can join the big league with you :)

Cheers

Retvari Zoltan
Message 55431 - Posted: 5 Oct 2020 | 20:53:34 UTC - in response to Message 55421.
Last modified: 5 Oct 2020 | 21:34:41 UTC

Some might get frustrated that my i3/2080Ti performs better at GPUGrid than an i9-10900k/2080Ti (because of its overcommitted CPU).
Couldn't this be solved by just leaving more than 1 thread for the GPU tasks?
I'm talking about the performance of the GPUGrid app, not the performance of other projects' (or mining) GPU apps. The performance loss of the GPU depends on the memory bandwidth utilization of the given CPU (and GPU) app(s), but generally it's not enough to leave 1 thread for the GPU task(s) to achieve maximum GPU performance. There will always be some loss of GPU performance caused by the simultaneously running CPU app(s) (the more the worse). Multi-threaded apps could be less harmful for the GPU performance. Everyone should decide how much GPU performance loss he/she tolerates, and set their system accordingly. As high-end GPUs are very expensive I like to minimize their performance loss.
I've tested it with rosetta@home: when more than 1 instance of rosetta@home were running, the GPU usage dropped noticeably.
The test was done on my rather obsolete i7-4930k.
However, this obsolete CPU has almost twice as much memory bandwidth per CPU core as the i9-10900 has:
CPU        CPU cores  CPU threads  memory channels  memory type & frequency  memory bandwidth  mem. bandwidth per CPU core
i7-4930k   6          12           4                 DDR3-1866MHz             59.7 GB/s         9.95 GB/s/core
i9-10900   10         20           2                 DDR4-2933MHz             45.8 GB/s         4.58 GB/s/core
(These CPUs are not in the same league, as the i9-10920X would match the league of the i7-4930k).
My point is that it's much easier to saturate the memory bandwidth of a present-day high-end desktop CPU with dual-channel memory, because the number of CPU cores has increased more than the available memory bandwidth.
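The last column of that table is simply total memory bandwidth divided by the number of physical cores; as a quick sanity check, in Python:

    # per-core memory bandwidth = total memory bandwidth / physical core count
    cpus = {"i7-4930k": (59.7, 6), "i9-10900": (45.8, 10)}  # (GB/s, cores)
    for name, (bandwidth, cores) in cpus.items():
        print(f"{name}: {bandwidth / cores:.2f} GB/s per core")  # 9.95 and 4.58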

What about the impact on a dual-/multi GPU setup? Would this effect be even more pronounced here?
Yes. Simultaneously running GPUGrid apps hinder each other's performance as well (even if they run on different GPUs). Multiple GPUs sharing the same PCIe bandwidth is the other factor, unless you have a very expensive socket 2xxx CPU and MB; but it's cheaper to build 2 PCs with an inexpensive MB and CPU for 2 GPUs.

...running those cards 24/7 at full load is a tremendous effort. Surely their longevity decreases as a result of hardcore crunching.
It does, especially for the mechanical parts (fans, pumps). I turn down the power limit of the cards for the summer, and I also take the hosts with older GPUs offline for the hottest months (this year was an exception due to the COVID-19 research).

bozz4science
Message 55653 - Posted: 31 Oct 2020 | 11:50:51 UTC

Thanks Zoltán again for your valuable insights. The advice about the power limit led me to turn it down and set temperature priority over performance in MSI Afterburner. All GPU apps now run at night comfortably at 50-55 degrees with 35% fan speed, versus 62/63 degrees at 55% fan speed before, which means the card is no longer audibly noticeable at all. That comes with a small performance penalty, but I don't notice a radical slowdown. The power limit of this card is now set to 80 percent, which corresponds to a maximum temp of 62 degrees. This had a tremendous effect on the overall operating noise of my computer: prior to the adjustment, with the card running at 62 degrees in my badly ventilated case, the hot air continually heated up the CPU environment (the CPU itself ran at only 55-57 degrees) and the CPU heatsink fan had to spin faster to compensate for the GPU's exhaust. Now both components run at similar temps and the heatsink fan works at a lower speed, reducing the overall noise.
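(For reference, the same kind of cap can also be applied without Afterburner: NVIDIA's nvidia-smi command-line tool, which ships with the driver, accepts an absolute wattage; the 100 W below is just an illustrative value, and setting the limit needs administrator rights.)

    nvidia-smi -q -d POWER     # show the current, default, min and max power limits
    nvidia-smi -pl 100         # cap the board power at 100 W (illustrative value)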

This had me thinking once more about operating noise and airflow/cooling. With the new RTX 30xx series and the new AMD RX 6000 series, which aren't compatible here on GPUGrid but offer similar performance at a competitive price level IMO, the GTX 1660 Super/Ti arguably seems less and less attractive to me. Now that prices have reacted to the new GPU launches, an RTX 2060, sitting at a comparable price, seems like the better investment to me: looking at the current RTX 2060 line-up, there seem to be good cards for ~315€ vs. ~280€ for a GTX 1660 Ti. Now to my questions: (1) Is a dual-fan card sufficient at these higher TDPs for cooling and maintaining a comfortable operating temp? (2) Do dual-fan cards have an advantage over triple-fan cards in terms of operating noise (due to fewer fans), or do triple-fan cards run quieter because every fan runs at a lower RPM? (3) Is there any particular brand that anyone of you can recommend for superior cooling and/or low noise levels? Thx

Keith Myers
Message 55657 - Posted: 31 Oct 2020 | 21:33:36 UTC

It depends on the cooler design. Typically a 3-fan design cools better than a 2-fan design simply because the heat sink is larger: it has to accommodate 3 axial fans side by side, and thus has a larger radiating surface area than a 2-fan heat sink.

So you have more cooling capacity and don't need to ramp the fan speeds as high to get the required amount of heat dissipation. End effect is lower noise and lower temps.

Retvari Zoltan
Message 55658 - Posted: 1 Nov 2020 | 0:12:09 UTC - in response to Message 55653.
Last modified: 1 Nov 2020 | 0:23:24 UTC

Is there any particular brand that anyone of you can recommend for superior cooling and/or low noise levels?
Superior cooling is done by water, not air. You can build your own water cooled system (it's an exacting process, also expensive and needs a lot of experience), or you can buy a GPU with a water cooler integrated on it. They usually run at 10°C lower temperatures.
I have a Gigabyte Aorus Geforce RTX 2080Ti Xtreme Waterforce 11G, and I'm quite happy with it. This is my only card crunching GPUGrid tasks at the moment: it runs at 250W power limit (originally it's 300W), 1890MHz GPU 64°C, 13740MHz RAM, 1M PPD.
As for air cooling, the best cards are the MSI Lightning (Z) and MSI Gaming X Trio cards. They have 3 fans, but these are large-diameter ones, as the card itself is huge (other 3-fan cards have smaller-diameter fans, as the card is not tall and long enough for larger fans).
I have two MSI Geforce Gaming X Trio cards (a 2080 Ti and a 1080 Ti) and these have the best air cooling compared to my other cards.
If you buy a high-end (= power hungry) card, I recommend buying only the huge ones (look at their height and width as well).

bozz4science
Message 55665 - Posted: 1 Nov 2020 | 18:44:41 UTC
Last modified: 1 Nov 2020 | 18:46:52 UTC

Thank you both for your response. As always I really appreciate the constructive and kind atmosphere on this forum!

After you framed the answer in such a compact and logical statement, Keith, I feel stupid for not having figured this one out myself... I will definitely be on the lookout for 3-fan cards now! And the general notion of more fans/larger card dimensions due to the larger heatsink totally corresponds to Zoltan's advice as well.

Your water-cooled 2080Ti sounds like a real beast... And all this at only 64°C. I hope to achieve the technical expertise and confidence to build my own custom water-cooled system in the future, but as for now neither budget nor skill level allow me to go forward with this. I meant to only refer to air-cooled cards, but thanks for providing me with the full picture here as well.

Unfortunately, I only see MSI X Trio cards for RTX 20xx series models and upwards. For 1660 Ti/Super MSI only offers dual-fan designs as far as I can tell. Currently among the 1660 Ti/Super models, I like the ASUS Strix 3-fan models and the Gigabyte AORUS 3-fan model best, but everywhere I look the 3-fan ASUS cards are unavailable. I'll probably look now into the Gigabyte AORUS GTX 1660 Ti card.

What really bothers me with the MSI cards (MSI Gaming X/Z 1660 Ti) is that their RGB feature cannot be permanently disabled. Apparently, after every reboot they revert back to the default setting.

I would also like to initially run the new system as a dual-GPU setup, with my 750 Ti paired with the new 1660 Ti, and eventually retire/replace the 750 Ti with an RTX 3070, which seems rather reasonably priced, once availability isn't an issue anymore sometime next year.

Sorry that I kept annoying you all with my questions over the course of the last weeks. I feel like I have already learnt a lot about hardware in general, and even more about GPUs in particular, thanks to your replies here. That's also why I changed my mind about the hardware selection for my new rig so often and had many conflicting thoughts about it. Thanks for bearing with me :)

Keith Myers
Message 55677 - Posted: 2 Nov 2020 | 7:17:43 UTC

I have been purchasing nothing but EVGA Hybrid cards since the 1070 generation.
They have a water cooled gpu chip as well as radial fan cooled memory and VRM components. All based on a 120mm radiator.

My current EVGA 2080 Hybrid cards run at less than 55°C on the GPUGrid tasks, which are the most stressful and power-hungry of all my project tasks.

My other projects never break 50° C. and mainly stay between 40-45°C.

The hybrid cards avoid the complexity and cost of a custom cooled loop for the gpus but give you the same temperatures as a custom loop.

bozz4science
Message 55680 - Posted: 2 Nov 2020 | 9:52:26 UTC - in response to Message 55677.

That's interesting. Didn't even know that hybrid cards existed up until now. Those temps definitely look impressive and are really unmatched compared to air cooling only solutions. Seems like a worthwhile option that I'll likely consider down the road but not for now as I am still very much budget constrained.

The recent numbers that I have seen for GTX 1660 Ti cards also match what rod4x4 posted earlier today, as well as what I have seen across many hosts here on GPUGrid. Efficiency-wise this card always seems to be in the top 10%, even compared to newer cards, and this is something I also take into consideration when crunching 24/7. I'm trying to have my electricity come from sustainable sources where possible, and to increase overall efficiency by considering these factors before the final hardware purchase.

Sadly, the availability of EVGA cards here in Germany tends to be sparse at best. Only very few shops sell these cards, and they usually offer only a few models. Getting a decently priced hybrid EVGA card here is, as far as I can tell, almost impossible. From what I could read about the EVGA cards in the TechPowerUp GPU database, they usually offer great value and are priced very competitively against the competition. I might try to get an EVGA-branded RTX (hybrid) card in the future.

Keith Myers
Message 55684 - Posted: 2 Nov 2020 | 19:03:16 UTC
Last modified: 2 Nov 2020 | 19:50:17 UTC

https://www.evga.com/products/product.aspx?pn=08G-P4-3178-KR

bozz4science
Message 55738 - Posted: 14 Nov 2020 | 13:04:18 UTC
Last modified: 14 Nov 2020 | 13:07:22 UTC

Now I have narrowed it down further, to finally go with a 1660 Super. The premium you end up paying for the 1660 Ti is just not worth the marginal additional performance. The intention is to get an efficient card now that still has adequate performance (~50% of an RTX 2080 Ti) at an attractive price point, and to save up to invest later in a latest-gen low-end card (preferably a hybrid one) that will boost overall performance and stay comfortably within my 750W power limit.

I don't want to make a science out of it, but my choice is now between 2 cards (Gigabyte vs. Asus) and I would love to get your advice on some open questions that came up after further comparison of the 2 models. I get that your consensus was that the bulkier the card, the cooler it'll run and the more overclocking headroom it will offer.

Now to the following issues as the card compare as follows:
Card 1 (Asus) / ASUS ROG Strix GeForce GTX 1660 SUPER OC
- 2.7 slots (Triple) / 47mm for larger heatsink volume
- 2 massive vertical heatpipes
- 2 fans / 100mm
- 1 continuous bulky heatsink
- no dedicated metal plate contact with the VRMs
- 1875 MHz boost clock

Card 2 (Gigabyte) / Gigabyte GeForce GTX 1660 SUPER Gaming OC 6G
- 2 slots (Dual) / 40mm
- 3 horizontal heatpipes
- 3 fans / 80mm
- 3 separate heatsinks behind each fan
- dedicated heatsink plate for VRMs
- 1860 MHz boost clock

Similarities, so these points don't differentiate them from one another in my final purchase decision.
--> 3 yrs warranty / boost clock / GDDR6 6GB / price @ ~260$. And connectivity doesn't play a major role for me as I only connect it to one external monitor.

Does anyone have these cards and can share their personal experience with me? Any general advice on which way you would lean if you had to choose between these 2 air-cooled 1660 Super cards? Any pointers much appreciated.

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1358
Credit: 7,894,103,302
RAC: 7,266,669
Level
Tyr
Scientific publications
watwatwatwatwat
Message 55739 - Posted: 14 Nov 2020 | 17:30:11 UTC

The ASUS ROG Strix cards have an overall well-liked reputation for performance, features and longevity.

The Gigabyte cards tend to be more mediocre from my observations.

bozz4science
Send message
Joined: 22 May 20
Posts: 110
Credit: 114,275,746
RAC: 195,156
Level
Cys
Scientific publications
wat
Message 55740 - Posted: 14 Nov 2020 | 17:32:01 UTC

Thanks for your feedback! As always Keith, much appreciated. Definitely value your feedback highly.

Pop Piasa
Avatar
Send message
Joined: 8 Aug 19
Posts: 252
Credit: 458,054,251
RAC: 0
Level
Gln
Scientific publications
watwat
Message 55741 - Posted: 15 Nov 2020 | 0:43:39 UTC - in response to Message 55739.

I second what Keith said, and I have also found that to be true of motherboards when comparing Asus to Gigabyte.
Their price points are similar, but Asus stuff just performs better and lasts longer in my experience.

rod4x4
Send message
Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 55742 - Posted: 15 Nov 2020 | 1:05:14 UTC - in response to Message 55738.

The 1660 SUPER for price and performance, combined with the Asus ROG for its features, longevity and thermal capacity, is an excellent choice.

The only other thing to consider is how desperate you are to buy now.

The Ampere equivalent GPU to 1660 SUPER should be released by March 2021, Gpugrid should have Ampere compatibility by March as well. (I don't have a crystal ball so don't hold me to either of those claims!!)

However, if we always wait for the next best thing in technology, we would never buy anything....

bozz4science
Send message
Joined: 22 May 20
Posts: 110
Credit: 114,275,746
RAC: 195,156
Level
Cys
Scientific publications
wat
Message 55747 - Posted: 15 Nov 2020 | 14:45:07 UTC
Last modified: 15 Nov 2020 | 14:58:10 UTC

Thank you both for your replies as well. I guess consensus is that Asus might offer the superior product. That's definitely reassuring :)

@rod4x4: What would the equivalent Ampere card to the 1660 Super be? The RTX 3060/3050? I am really torn right now, as this thought has crossed my mind more than once. Waiting and saving up for a larger upgrade early next year might be worthwhile, but then again the half-life of computational performance is not very long these days. Not that a 1660 Super would become obsolete, but it already lags far behind the current gen's capacity. I feel that especially this year, CPUs and GPUs alike saw a rather steep improvement in performance ... and competition. The latter will hopefully prove to benefit all of us in the long run.

I guess it would be much clearer for me if I only had to consider upgrading my GPU. But as I am building the PC from scratch, starting with the PSU/mainboard etc., I guess the time is ripe now. And I would still be pretty much constrained by the very same ~300€ for a GPU in Q1/2021. That wouldn't get me close to a ~500$ RTX 3060 card. Hopefully I will be able to add a latest-gen RTX card later next year, once prices have come down and supply has stabilised.

@Pop Piasa: Interesting! I have been torn between the B550 Aorus Pro AC from Gigabyte/Aorus and the ROG Strix B550-F Gaming for a long time now, and I really didn't see much difference, either from comparing the specs or from reading a couple of reviews. My gut feeling told me to go with the Gigabyte board, as reviews mostly said it would run a few degrees cooler overall, but I might reconsider carefully. I'll have (another) close look! Nothing has been finalised yet.

Never anticipated the anxiety that comes with choosing the right parts. :)

I quickly drafted a comparison table of the price per TFLOP (FP16) and price per GFLOP (FP64) for a few card models for which I found prices at some of Germany's biggest PC hardware shops (a small sketch of the calculation follows the table). I averaged the performance data for the respective models from the TechPowerUp GPU database and found, once again, that at least by this measure the 1660 Super comes out on top. I hope the per-TFLOP premium over the 1660 Super will narrow in the future.

Model // Price [€] // TFLOPs (F16) // GFLOPs (F64) // €/TFLOP (F16) // €/GFLOP (F64) // ∆ F16 [%] // ∆ F64 [%]
1660 Super // 250 // 10,39 // 162,4 // 24,06 € // 1,54 € // 0,0% // 0,0%
1660 Ti // 290 // 11,52 // 180,0 // 25,17 € // 1,61 € // 4,6% // 4,6%
RTX 2060 Super // 435 // 14,62 // 228,5 // 29,75 € // 1,90 € // 23,7% // 23,6%
RTX 2070 Super // 525 // 18,12 // 283,2 // 28,97 € // 1,85 € // 20,4% // 20,4%
RTX 3070 // 699 // 21,55 // 336,7 // 32,44 € // 2,08 € // 34,8% // 34,8%
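
For anyone who wants to reproduce or extend the comparison, here is a minimal Python sketch of the calculation, using only the prices and FLOPS figures from the table above (swap in your own local prices; the helper layout is just for illustration):

# Price-per-FLOP comparison using the figures from the table above.
# price in EUR, FP16 throughput in TFLOPS, FP64 throughput in GFLOPS
cards = {
    "GTX 1660 Super": (250, 10.39, 162.4),
    "GTX 1660 Ti":    (290, 11.52, 180.0),
    "RTX 2060 Super": (435, 14.62, 228.5),
    "RTX 2070 Super": (525, 18.12, 283.2),
    "RTX 3070":       (699, 21.55, 336.7),
}

# EUR per TFLOP (F16) of the reference card, used for the delta column
base = cards["GTX 1660 Super"][0] / cards["GTX 1660 Super"][1]

for name, (price, tflops_f16, gflops_f64) in cards.items():
    eur_per_tflop_f16 = price / tflops_f16    # EUR per TFLOP (F16)
    eur_per_gflop_f64 = price / gflops_f64    # EUR per GFLOP (F64)
    delta_f16 = eur_per_tflop_f16 / base - 1  # premium vs. the GTX 1660 Super
    print(f"{name:<16} {eur_per_tflop_f16:6.2f}  {eur_per_gflop_f64:5.2f}  {delta_f16:6.1%}")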

Jim1348
Send message
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 55749 - Posted: 15 Nov 2020 | 18:33:32 UTC - in response to Message 55747.
Last modified: 15 Nov 2020 | 19:13:28 UTC

I am in the process of upgrading my GTX 750 Ti to a GTX 1650 Super on my Win7 64-bit machine, and thought I would do a little comparing first to my Ubuntu machines. The new card won't arrive until tomorrow, but this is what I have thus far. The output of each card is averaged over at least 20 work units.

I mainly compare efficiency (output per unit of energy), but collect speed along the way.

Note that the GTX 1650 Super has 1280 CUDA cores, the same as the GTX 1060, but faster memory (GDDR6 instead of GDDR5), and lower power (100 watts vs. 120 watts TDP).

Work units: GPUGRID 2.11 New version of ACEMD (cuda100) (TONI_MDAD, for example)

i7-4771 Win7 64-bit
GTX 750 Ti (446.14-CUDA10.2 driver)

Power (GPU-Z): 88.4% TDP = 53 watts
Average over 21 work units: 13340 seconds = 222 minutes
So energy is 53 x 222 = 11,784 watt-minutes per work unit

---------------------------------------------------------------------
Ryzen 2700, Ubuntu 20.04.1 GTX 1060, 450.66 driver (CUDA 11.0)
Average time per work unit: 5673 seconds
Power: 104 watts average (nvidia-smi -l)
So energy is 589,992 watt-seconds, or 9833 watt-minutes per work unit.

----------------------------------------------------------------------
Ryzen 2600, Ubuntu 20.04.1 GTX 1650 Super, 455.28 driver (CUDA 11.1)
Average time per work unit: 4468 seconds
Power: 92 watts average (nvidia-smi -l)
So energy is 411,056 watt-seconds, or 6851 watt-minutes per work unit.
=======================================================================


Conclusion: The GTX 1650 Super on Ubuntu is almost twice as efficient as the GTX 750 Ti on Win7.
But these time averages still jump around a bit when using BOINCTask estimates, so you would probably have to do more to really get a firm number.
The general trend appears to be correct though.
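
For what it's worth, the energy arithmetic above is easy to script; a minimal sketch using just the averages reported in this post (average power in watts, average runtime in seconds; the helper name is only for illustration):

# Energy per work unit = average power draw x average runtime, in watt-minutes.
def watt_minutes_per_wu(avg_power_w, avg_runtime_s):
    return avg_power_w * avg_runtime_s / 60.0

hosts = {
    "GTX 750 Ti (Win7)":       (53, 13340),
    "GTX 1060 (Ubuntu)":       (104, 5673),
    "GTX 1650 Super (Ubuntu)": (92, 4468),
}

for name, (power_w, runtime_s) in hosts.items():
    print(f"{name}: {watt_minutes_per_wu(power_w, runtime_s):,.0f} watt-minutes per work unit")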

bozz4science
Send message
Joined: 22 May 20
Posts: 110
Credit: 114,275,746
RAC: 195,156
Level
Cys
Scientific publications
wat
Message 55750 - Posted: 15 Nov 2020 | 19:58:28 UTC

It's great seeing you here again, Jim. From the discussion over at MLC I know that your top priority in GPU computing is efficiency as well. The numbers you shared seem very promising. I can only imagine the 1660 Super coming in very close to the results of a 1650 Super. I don't know whether you have already seen the resources hidden all over this thread, but I recommend taking a look if you haven't.

Let me point one out in particular that Keith shared with me. It's a very detailed GPU comparison table at Seti (performance and efficiency): https://setiathome.berkeley.edu/forum_thread.php?id=81962&postid=2018703

After all the advice I received on various topics throughout this thread, I want to express my sincere thanks to the following volunteers for their valuable contributions so far and for turning it into a lively discussion!
rod4x4 / Pop Piasa / Keith Myers / Retvari Zoltán / eXaPower / ServicEnginIC / Erich56 / Richard Haselgrove. Please note that I didn't put these names in any particular order, but rather wanted to take this as an opportunity to express my gratitude for sharing your knowledge and giving advice.


Adding to my earlier post: my gut feeling to go with the Asus 1660 Super stems from my current 750 Ti, which happens to also be an Asus card. Even though this second-hand card is already more than 6 years old, apart from a renewal of the thermal paste it has been running nearly 24/7 for the last couple of months and has always stayed below 63 °C without the fans ever going above 65%.

Jim1348
Send message
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 55751 - Posted: 15 Nov 2020 | 20:36:40 UTC - in response to Message 55750.

Let me point one out in particular that Keith shared with me. It's a very detailed GPU comparison table at Seti (performance and efficiency): https://setiathome.berkeley.edu/forum_thread.php?id=81962&postid=2018703
Thanks. The SETI figures are quite interesting, but they optimize their OpenCL stuff in ways that are probably quite different than CUDA, though the trends should be similar. I am out of the market for a while though, unless something dies on me.

Pop Piasa
Avatar
Send message
Joined: 8 Aug 19
Posts: 252
Credit: 458,054,251
RAC: 0
Level
Gln
Scientific publications
watwat
Message 55752 - Posted: 15 Nov 2020 | 21:12:45 UTC - in response to Message 55751.
Last modified: 15 Nov 2020 | 21:34:00 UTC

Incidentally all, have you seen the Geekbench CUDA benchmark chart yet?


https://browser.geekbench.com/cuda-benchmarks

Edit: I wonder if their test is relevant to ACEMD? The ratings are quite surprising to me and, at a glance, somewhat contradict what rod4x4 found using recent data here.

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1358
Credit: 7,894,103,302
RAC: 7,266,669
Level
Tyr
Scientific publications
watwatwatwatwat
Message 55753 - Posted: 16 Nov 2020 | 1:02:05 UTC

When the science app is optimized for CUDA calculations, it decimates ANY OpenCL application.

The RTX 3050 and RTX 3050 Ti are already leaked for release early next year with 2304 CUDA cores, same as the RTX 2070.

If you can maintain your patience, then that might be the sweet spot for performance/efficiency.

The Seti gpu performance/efficiency chart had NO included CUDA applications in it. Only OpenCL. The Seti special CUDA application was 3X-10X faster than the OpenCL application used in the charts.

bozz4science
Send message
Joined: 22 May 20
Posts: 110
Credit: 114,275,746
RAC: 195,156
Level
Cys
Scientific publications
wat
Message 55755 - Posted: 16 Nov 2020 | 12:16:24 UTC

Well, that thought has crossed my mind before. But I guess availability will become an issue again next year, and current retail prices still seem quite a bit higher than Nvidia's suggested launch prices. And unfortunately I will still be budget constrained. Depending on the deals I can get, I am currently looking at ~650-700$ for all my components excluding the GPU. That leaves just a little over 300$ for this component. Looking at the rumoured price predictions for the 3050/3060 cards, at the cheapest I would be looking at ~350$ for a 3050 Ti, and that is before the price inflation we'll likely see on the retailer side at launch. Small inventory and an initial lack of supply will likely make matters even worse.

For benchmarking purposes, I consider the ø current retail price for a 1660 Super at ~250€. If you were to believe the leaked specs, a GTX 1660 Super would be the equivalent of:
- 84% of a RTX 3050 Ti --> 119% vs. GTX 1660 Super
- 75% of a RTX 2060 Super --> 133%
- 73% of a RTX 3060 --> 137%
- 65% of a RTX 2070 Super --> 154%
- 64% of a RTX 3060 Ti --> 156%
- 52% of a RTX 2080 Ti --> 192%
- 51% of a RTX 3070 --> 196%
- 39% of a RTX 3080 --> 256%
- 33% of a RTX 3090 --> 303%

Looking again at the rumoured specs taken from techpowerup, I drafted the following comparison list:
RTX 3050 Ti
- on par with a RTX 2060
- 3584 vs. 1920 CUDA cores
- 12.36 vs. 12.90 TFLOPs (F16)
- 150W vs. 160W
- same memory type + size, memory bandwidth, memory bus
- price ? (300$)

RTX 3060
- on par with a RTX 2060 Super / 2070
- 3840 vs. 2176 vs. 2304 CUDA cores
- 180W vs. 175W vs. 175W
- same memory type, smaller memory size, lower memory bandwidth, lower memory bus speed
- price ? (350$)
--> apparently 2 variations with 2 different VRAM sizes and bandwidths (?)

RTX 3060 Ti
- on par with a RTX 2070 Super
- 4864 vs. 2560 CUDA cores
- 200W vs. 215W
- same memory type + size, memory bandwidth, memory bus
- price ? (399$)

Do you think that major projects will have implemented the Ampere architecture in their GPU apps by early next year?


Only components left for me to choose now are the mobo where I am still torn between the 2 mentioned ones and the GPU. I'll probably look out for the 3060 launch (17th Nov or 2nd Dec) and watch prices like a hawk. While I happily wait for next year's launches, I probably won't be able to resist a 1660 super now if I come across a sweet deal. It just offers the best value IMO. And probably will continue doing so well into 2021 until prices settle down. At worst, I'll have a cool and efficient runner with 3yrs of warranty and getting a RTX 30xx card is not off the table, but rather delayed for a bit.

Looking at the price per TFLOP (F16) again as my final benchmark, the 1660 models score at ~23-24€/TFLOP, the RTX 20xx cards score in the range of 27-32€ depending on make and model, and the latest RTX 30xx cards would land right in between, at 24-26€ (RTX 3050/3060/3070 models), if prices come down to their suggested launch prices. Needless to say, at current prices you get an efficiency boost over last-gen cards, but you end up paying nearly the same per TFLOP as for any RTX 20xx card (ø 30€/TFLOP), if not more. However, you won't get the F16 = F32 performance seen in the RTX cards... So, as of right now a new RTX card doesn't seem like a value buy to me. Whatever my final decision may be, I still plan to run at least 1 of the 2 GPUs in the 750W system on my x16 PCIe 4.0 slot. I could see the RTX 3060 Ti settling at around the 400$ mark, and as you pointed out, that might be the sweet spot for performance/efficiency that I'd like to add to the new system later next year at that price level.

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1358
Credit: 7,894,103,302
RAC: 7,266,669
Level
Tyr
Scientific publications
watwatwatwatwat
Message 55756 - Posted: 16 Nov 2020 | 17:17:36 UTC

I am only aware of the projects I run. That said, we are still waiting for Ampere compatibility here.

RTX 3000 series cards work at Einstein already. Haven't found any at Milkyway.
Think they are running at Folding@home and Collatz too.

Ian&Steve C.
Avatar
Send message
Joined: 21 Feb 20
Posts: 1078
Credit: 40,231,533,983
RAC: 27
Level
Trp
Scientific publications
wat
Message 55757 - Posted: 16 Nov 2020 | 19:15:41 UTC - in response to Message 55756.

I am only aware of the projects I run. That said, we are still waiting for Ampere compatibility here.

RTX 3000 series cards work at Einstein already. Haven't found any at Milkyway.
Think they are running at Folding@home and Collatz too.


I think the RTX3000 "should" work at MW since their apps are openCL like Einstein. But it's probably not worth it vs other cards since Nvidia nerfed FP64 so bad on the Ampere Geforce line, even worse than Turing.

____________

rod4x4
Send message
Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 55758 - Posted: 16 Nov 2020 | 23:33:06 UTC - in response to Message 55755.

- 84% of a RTX 3050 Ti --> 119% vs. GTX 1660 Super
- 73% of a RTX 3060 --> 137% (vs. GTX 1660 Super)


RTX 3050 Ti
- on par with a RTX 2060
- 3584 vs. 1920 CUDA cores
- 12.36 vs. 12.90 TFLOPs (F16)
- 150W


RTX 3060
- on par with a RTX 2060 Super / 2070
- 3840 vs. 2176 vs. 2304 CUDA cores
- 180W


Based on the stats you quoted, I would definitely hold off on Ampere until Gpugrid releases Ampere compatible apps.
GTX 1660 SUPER vs Ampere:
RTX 3050 Ti - 119% of the performance at 125% of the power.
RTX 3060 - 137% of the performance at 150% of the power.
Not great stats.

I am sure these figures are not entirely accurate. Once an optimised CUDA app is released for Gpugrid, the performance should be better (but by how much is still unknown).

To add to your considerations:
Purchase a GTX 1650 SUPER now; they are selling really cheap at the moment and it is a definite step up from your current GPU. Wait until May, by which time there should be a clearer picture of how the market and the BOINC projects are interacting with Ampere, and then purchase an Ampere card. This also gives you time to save up for the Ampere GPU.
The GTX 1650 SUPER could then be relegated to your retired rig for projects, testing etc., or sold on eBay.
Pricing on Ampere may have dropped a bit by May (due to pressure from AMD and demand for Nvidia waning after the initial launch frenzy), so the extra money to invest in a GTX 1650S now may not be that much.

bozz4science
Send message
Joined: 22 May 20
Posts: 110
Credit: 114,275,746
RAC: 195,156
Level
Cys
Scientific publications
wat
Message 55759 - Posted: 17 Nov 2020 | 0:48:46 UTC
Last modified: 17 Nov 2020 | 0:56:04 UTC

we are still waiting for Ampere compatibility here
Thanks Keith! Well, that's one of the reasons that I am hesitant still about purchasing an Ampere card yet. I'd much rather wait for the support at major projects and see from there. And that is very much along the same lines as rod4x4 has fittingly put it.
Wait until May, by then a clearer picture of how the market and BOINC projects are interacting with Ampere, and then purchase an Ampere card.
Delaying an RTX 3000 series card purchase seems, from my perspective, like a promising strategy. The initial launch frenzy you mention is driving prices up incredibly... And I predict that the 3070/3060 Ti will attract the most demand, which will inevitably drive prices up further, at least in the short term.

Purchase a GTX 1650 SUPER now, selling real cheap at the moment and a definite step up on your current GPU.
Very interesting train of thought, rod4x4! I guess the 160-180$ would still be money well spent (for me at least). Any upgrade would deliver a considerable performance boost over a 750 Ti :) I'd give up 2 GB of memory, a bit of bandwidth, and just ~20% performance, but at a 30-40% lower price... I'll think about that.
This gives you time to save up for the Ampere GPU as well.
I don't know if this is going to happen that quickly for me, but that's definitely the upgrade path I am planning on. Would an 8-core Ryzen (3700X) offer enough threads to run a future dual-GPU setup with a 1650 Super + RTX 3060 Ti/3070 while still leaving free resources to allocate to CPU projects?

Based on stats you quoted, I would definitely hold off on Ampere until Gpugrid releases Ampere compatible apps.
Well, there is definitely no shortage of rumours recently and numbers did change while I was looking up the stats from techpowerup. So, there is surely lots of uncertainty surrounding these preliminary stats, but they do offer a bit of guidance after all.

Pricing on Ampere may have dropped a bit by May, (due to pressure from AMD and demand for Nvidia waning after the initial launch frenzy) so the extra money to invest in a GTX 1650S now, may not be that much.
I fully agree with your assessment here! Thanks. Now it looks as if I am going down the same route as Jim1348 after all! :)

since Nvidia nerfed FP64 so bad on the Ampere Geforce line, even worse than Turing.
I saw that too, Ian&Steve C. Especially compared to AMD cards, their FP64 performance actually looks rather poor. Is this due to a lack of technical ability or just a product design decision to differentiate them further from their workstation/professional cards?

rod4x4
Send message
Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 55760 - Posted: 17 Nov 2020 | 1:25:12 UTC - in response to Message 55759.
Last modified: 17 Nov 2020 | 1:30:33 UTC

Would an 8-core ryzen (3700x) offer enough threads to run a future dual GPU setup with a 1650 Super + RTX 3060 Ti/3070 while still allowing free resources to be allocated to CPU projects?

I am running a Ryzen 7 1700 with dual GPU and WCG. I vary the CPU threads from 8 to 13 threads depending on the WCG sub-project. (Mainly due to the limitations of the L3 cache and memory controller on the 1700)

The GPUs will suffer a small performance drop due to the CPU thread usage, but this is easily offset by the knowledge other projects are benefiting from your contributions.

The 3700X CPU is far more capable than a 1700 CPU and does not have the same limitations, so the answer is YES, the 3700X will do the job well!

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1358
Credit: 7,894,103,302
RAC: 7,266,669
Level
Tyr
Scientific publications
watwatwatwatwat
Message 55761 - Posted: 17 Nov 2020 | 2:21:11 UTC
Last modified: 17 Nov 2020 | 2:36:55 UTC

I saw that too Ian&Steve C. Especially compared to AMD cards, their F64 performance actually looks rather poor. Is this due to a lack of technical ability or just a product design decisions to differentiate them further from their workstation/professional cards?

Yes, it is a conscious design decision since the Kepler family of cards. The change in design philosophy started with this generation. The GTX 780Ti and the Titan Black were identical cards for the most part based on the GK110 silicon.

The base clock of the Titan Black was a tad lower but the same number of CUDA cores and SM's. But you could switch the Titan Black to 1:3 FP64 mode in the video driver when the driver detected that card type while the GTX 780Ti had to run at 1:24 FP64 mode.

Nvidia has made design changes in the silicon of later GPUs to de-emphasize FP64 performance, rather than relying on just a driver change.

So it is not out of incompetence, just the focus on gaming because they think that anyone that buys their consumer cards is solely focused on gaming.

But in reality we crunchers use the cards for other than their intended purposes, because they are all we can afford; we don't have the industrial-strength pocketbooks of industry, HPC computing and higher-education entities to purchase the Quadros and the Teslas.

The generation design of the silicon is the same among the consumer cards and professional cards, but the floating point pathways and registers are implemented very differently in the professional card silicon.

So there are actual differences in the GPU dies between the two product stacks.
The professional silicon gets the "full-fat" version and the consumer dies are very cut-down versions.

Profile ServicEnginIC
Avatar
Send message
Joined: 24 Sep 10
Posts: 581
Credit: 10,265,339,066
RAC: 16,049,998
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 55762 - Posted: 17 Nov 2020 | 7:01:46 UTC

On November 16th rod4x4 wrote:

To add to your considerations.
Purchase a GTX 1650 SUPER now, selling real cheap at the moment and a definite step up on your current GPU

At this point, perhaps I can help a bit the way I like: with a real-life example.
I have both GTX 1650 SUPER and GTX 750 TI graphics cards running GPUGrid 24/7 in dedicated systems #186626 and #557889 for more than one month.
Current RAC for the GTX 1650 SUPER has settled at 350K, while for the GTX 750 Ti it is at 100K: about a 3,5 performance ratio.
Based on nvidia-smi:
GTX 1650 SUPER power consumption: 96W at 99% GPU utilization
GTX 750 Ti power consumption: 38W at 99% GPU utilization
Power consumption ratio is about 2,5. Bearing in mind that the performance ratio was 3,5, power efficiency for the GTX 1650 SUPER is clearly beating the GTX 750 Ti (roughly 3,5 / 2,5 ≈ 1,4 times the work per watt).
However, the GTX 750 Ti is winning in cooling performance: it is maintaining 60 ºC at full load, compared to 72 ºC for the GTX 1650 SUPER, at 25 ºC room temperature.
But this GTX 750 Ti is not exactly a regular graphics card, but a repaired one...

bozz4science
Send message
Joined: 22 May 20
Posts: 110
Credit: 114,275,746
RAC: 195,156
Level
Cys
Scientific publications
wat
Message 55763 - Posted: 17 Nov 2020 | 13:29:03 UTC
Last modified: 17 Nov 2020 | 13:30:01 UTC

The 3700X CPU is far more capable than a 1700 CPU and does not have the same limitations, so the answer is YES, the 3700X will do the job well!
Well, that is again tremendously reassuring! Thanks
GPUs will suffer a small performance drop due to the CPU thread usage
Well, Zoltan has elaborated on this topic very eloquently a couple of times. While I can definitely live with a small performance penalty, it was interesting to me that newer CPUs might actually see their memory bandwidth saturated sooner than earlier, lower-thread-count CPUs. And that, by comparison, the same high-end GPU running alongside a high-end desktop processor might actually perform worse than when run alongside an older CPU with more memory channels and bandwidth.

Yes, it is a conscious design decision since the Kepler family of cards.
Unfortunately, from the business perspective, it makes total sense for them to introduce this price discrimination barrier between their retail and professional product lines.
But in reality we crunchers use the cards not for their intended purposes because it is all we can afford ... industry, HPC computing and higher education entities to purchase the Quadros and the Teslas.
Still, it is a rather poor design decision if you compare it to AMD's design philosophy and the FP64 performance they deliver with their retail products...

At this point, perhaps I can help a bit the way I like: with a real-life example.
You never disappoint with your answers as I love the way you go about giving feedback. Real-life use cases are always best to illustrate arguments.
Definitely interesting. I thought that, given most card manufacturers offer very similar cooling solutions for the 1650 Super and 1660 Super cards, the 1650 Super, rated at only 100 W TDP, would run at similar temps to the 1660 Super cards, if not lower. Might I ask which model you are running? And at what percentage the GPU fans are running? Eyeing the ROG Strix 1650 Super, it seems to offer the same cooling solution as its 1660 Super counterpart, and the test reviews I read suggested that this card runs very cool (< 70 ºC), and that is at 125 W. Would be keen on your feedback.

Jim1348
Send message
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 55764 - Posted: 17 Nov 2020 | 16:28:16 UTC - in response to Message 55762.

Current RAC for GTX 1650 SUPER is settled at 350K, while for GTX 750 TI is at 100K: about 3,5 performance ratio.
Based on nvidia-smi:
GTX 1650 SUPER power consumption: 96W at 99% GPU utilization
GTX 750 Ti power consumption: 38W at 99% GPU utilization
Power consumption ratio is about 2,5. Bearing in mind that the performance ratio was 3,5, power efficiency for the GTX 1650 SUPER is clearly beating the GTX 750 Ti.
However, GTX 750 Ti is winning in cooling performance: it is maintaining 60 ºC at full load, compared to 72 ºC for GTX 1650 SUPER, at 25 ºC room temperature.
But this GTX 750 Ti is not exactly a regular graphics card, but a repaired one...

I just received my GTX 1650 Super, and have run only three work units, but it seems to be about the same gain. It is very nice.

I needed one that was quiet, so I bought the "MSI GeForce GTX 1650 Super Ventus XS". It is running at 66 C in a typical mid-ATX case with just a rear fan, in a somewhat warm room (73 F). Now it is the CPU fan I need to work on.

bozz4science
Send message
Joined: 22 May 20
Posts: 110
Credit: 114,275,746
RAC: 195,156
Level
Cys
Scientific publications
wat
Message 55765 - Posted: 17 Nov 2020 | 17:07:34 UTC - in response to Message 55764.
Last modified: 17 Nov 2020 | 17:52:55 UTC

That's great news Jim! Thanks for providing feedback on this. Definitely seems that the 1650 Super is a capable card and will accompany my 750Ti for the interim time :)

The variation seen in the operating temperature is probably mostly due to case airflow characteristics right? ... as I can't really see how such a big gap in temp could only result from the different cooling solutions on the cards. (That's ~9% difference!)

Jim1348
Send message
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 55766 - Posted: 17 Nov 2020 | 19:10:06 UTC - in response to Message 55765.

The variation seen in the operating temperature is probably mostly due to case airflow characteristics right? ... as I can't really see how such a big gap in temp could only result from the different cooling solutions on the cards. (That's ~9% difference!)

Maybe so. I can't really tell. I have one other GTX 1650 Super on GPUGrid, but that is in a Linux machine. It is a similar size case, with a little stronger rear fan, since it does not need to be so quiet. It is an EVGA dual-fan card, and is running at 61 C. But it shows only 93 watts power, so a little less than on the Windows machine.

If I were to choose for quiet though, I would go for the MSI, though they are both nice cards.

Profile ServicEnginIC
Avatar
Send message
Joined: 24 Sep 10
Posts: 581
Credit: 10,265,339,066
RAC: 16,049,998
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 55767 - Posted: 17 Nov 2020 | 20:03:53 UTC - in response to Message 55763.
Last modified: 17 Nov 2020 | 20:13:37 UTC

Might I ask what model you are running on? And at what % the GPU fans are running.

Sure,
The model I've talked about is an Asus TUF-GTX1650S-4G-GAMING: https://www.asus.com/Motherboards-Components/Graphics-Cards/TUF-Gaming/TUF-GTX1650S-4G-GAMING/
As Jim1348 confirms, I'm very satisfied with its performance, considering its power consumption.
Regarding working temperature and fan settings, I usually let my cards work at the factory preset clocks and fan curves.



As can be seen in this nvidia-smi command screenshot, this particular card seems to feel comfortable at 72 ºC (161,6 ºF), since it isn't pushing the fans beyond 47% at that temperature...
It is installed in a standard ATX tower case, with a front 120 mm fan, two rear and one upper 80 mm low-noise chassis fans.
Current room temperature is 25 ºC (77 ºF).

bozz4science
Send message
Joined: 22 May 20
Posts: 110
Credit: 114,275,746
RAC: 195,156
Level
Cys
Scientific publications
wat
Message 55769 - Posted: 17 Nov 2020 | 22:46:42 UTC - in response to Message 55767.

Interesting to hear about your personal experiences, now about 3 different models already. Seems to be a very decent card overall!

And I came to the (maybe very naive) conclusion that, at least for the combination of a low-power card and air cooling, the differences in cooling performance between the cards are negligible and it mostly comes down to fan-curve customisation and airflow in the case. While this might be oversimplified, as some cards have larger heatsinks and/or extra cooling capabilities for their VRMs, such as thermal pads or dedicated heatsinks, I reckon that the competitive advantage of one air-cooling solution over another plays a minor role in operating temps, at least at this rather low TDP rating.

And for longevity and performance, Asus and MSI often seem to outperform their peers, with longer warranties, more and higher-quality VRMs, larger overall heatsinks, larger fans, etc. While this naturally comes at a certain price premium, I reckon it is money rather well spent. So I'll look for an Asus ROG Strix 1650 Super (larger fans than the TUF) or an MSI 1660 Super Gaming X, which should fall in the 160-180€ range according to the retail prices currently available to me. Thanks for helping me bring this process to a final decision! :)

bozz4science
Send message
Joined: 22 May 20
Posts: 110
Credit: 114,275,746
RAC: 195,156
Level
Cys
Scientific publications
wat
Message 55771 - Posted: 18 Nov 2020 | 16:29:24 UTC - in response to Message 55769.
Last modified: 18 Nov 2020 | 16:38:25 UTC

As can be seen in this nvidia-smi command screenshot, this particular card seems to feel comfortable at 72 ºC (161,6 ºF), since it isn't pushing fans beyond 47% at this temperature...

Thanks again for pointing this out. As I wasn't aware of this command before, I was stoked to try it out with my card as well, and I was quite surprised by the output... Positively, because the card showed a much lower wattage than anticipated, but negatively because it obviously does not reach its full potential. Since NVIDIA specifies the 750 Ti architecture at a 60 W TDP, I expected the card to be very close to that. As it is additionally an OC edition, specifically an Asus 750 Ti OC, and it is powered by an additional 6-pin connector, in theory that command should report much more than the 60 W recommended by NVIDIA.

But to my surprise it is maxed out at 38.5 W, which I obtained by using
nvidia-smi -q -d POWER
especially surprising as the available supply should be 75 W (6-pin) + 75 W (PCIe) = 150 W. What the heck do I need the additional power connector for if the card will only ever use ~25% of the available power?

As I have heard many times by now that overclocked cards can easily go above their specified power limit, since their PCBs normally support up to 120-130% depending on make and model, I am honestly quite stumped by this. Is there a way to increase that specified power limit in the card's BIOS somehow? I feel that even by increasing it a bit, while staying close to the 60 W, I could increase performance and still operate at safe settings, as cooling looks to be sufficient. Any idea how to do this? Currently I am at 64.2% of NVIDIA's TDP rating. That seems way too cautious...

As discussed at the beginning of this thread, my card sometimes threw a compute error while it was overclocked, and Zoltan suggested this might be the cause. I can now verify this better, as the
nvidia-smi -q -d CLOCK
command shows that when tasks failed, I was operating the card at a memory clock (2,845 MHz) higher than the maximum supported clock (2,820 MHz), while according to the output there is still some headroom on the core clock (~91 MHz up to 1,466 MHz). This mainly happened because I set the memory OC equal to the core OC, which might not have been tested long enough with MSI Kombustor to get an accurate stability reading from the benchmarks that were running.
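
In case it helps anyone checking the same thing later, here is a small Python sketch that pulls the relevant power and clock readings in one call. It is only a convenience wrapper of my own (the helper name is made up): it shells out to nvidia-smi, which has to be on the PATH, and uses the standard --query-gpu field names.

# Query current power draw, power limits and clocks for every GPU via nvidia-smi.
import subprocess

FIELDS = "power.draw,power.limit,power.max_limit,clocks.sm,clocks.mem"

def gpu_power_and_clocks():
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    # one line per GPU, values in the same order as FIELDS
    return [line.split(", ") for line in out.strip().splitlines()]

for idx, (draw, limit, max_limit, sm_clock, mem_clock) in enumerate(gpu_power_and_clocks()):
    print(f"GPU {idx}: draw {draw}, limit {limit} (max {max_limit}), SM {sm_clock}, mem {mem_clock}")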

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1358
Credit: 7,894,103,302
RAC: 7,266,669
Level
Tyr
Scientific publications
watwatwatwatwat
Message 55772 - Posted: 18 Nov 2020 | 17:38:20 UTC

You can't force a card to use more power than the application requires.

It all depends on the application running. It will use as much as it needs and no more.

You can only restrict the power used by the card, with the -pl command (for example, sudo nvidia-smi -pl 35 would cap the card at 35 watts).

To increase the utilization of the card, you generally run two or more work units on it.

bozz4science
Send message
Joined: 22 May 20
Posts: 110
Credit: 114,275,746
RAC: 195,156
Level
Cys
Scientific publications
wat
Message 55773 - Posted: 18 Nov 2020 | 17:45:40 UTC - in response to Message 55772.
Last modified: 18 Nov 2020 | 17:46:17 UTC

But in this case a single GPUGrid task utilises 100% of the CUDA compute capability and runs into the power limit. It uses 38.5 W out of the specified maximum of 38.5 W that can be read from the nvidia-smi -q -d POWER output. Isn't there a way to raise the maximum power limit on this card when the application demands it, given that the NVIDIA reference design is rated at the much higher TDP of 60 W?

I just feel that is a lot of "wasted" compute potential. Or should I rather say "inaccessible", for now, with the 38.5 W power limit.

To increase the utilization of the card, you generally run two or more work units on the card.
That's what I have done so far with GPU apps whose single tasks demand less than 100% of the CUDA compute capacity, such as Milkyway (still rather poor FP64 performance on this card) or MLC, which usually use 60-70%.

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1358
Credit: 7,894,103,302
RAC: 7,266,669
Level
Tyr
Scientific publications
watwatwatwatwat
Message 55777 - Posted: 18 Nov 2020 | 23:55:35 UTC - in response to Message 55773.

To get more performance out of a card, you overclock it.

The first thing to do with the higher-stack Nvidia cards is to restore the memory clock to the P0 power state, which the Nvidia drivers deliberately downclock when a compute application is running.

There isn't much point in overclocking the base clock on Nvidia cards, because the firmware generally self-overclocks based on the thermal and power limits of the card.

To get better base overclocks, cool the card better so it can boost higher.

The lower-stack Nvidia cards like the 1050 and such don't get penalized by the drivers when running compute loads; we guess Nvidia figured nobody in their right mind would ever run compute loads on those cards.

You can overclock Nvidia cards by running the "coolbits" tweak and rebooting, then using the Nvidia X Server Settings app to control the fan speeds, overclock the base clock, and restore the memory clock to the stock P0 power state clocks, overriding the downclocking by the drivers.

That would cause the card to consume more power due to the higher clocks.

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1358
Credit: 7,894,103,302
RAC: 7,266,669
Level
Tyr
Scientific publications
watwatwatwatwat
Message 55778 - Posted: 18 Nov 2020 | 23:59:33 UTC
Last modified: 19 Nov 2020 | 0:04:16 UTC

This is what you can do with the coolbits set for the cards in the system.




Richard Haselgrove
Send message
Joined: 11 Jul 09
Posts: 1626
Credit: 9,376,466,723
RAC: 19,051,824
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 55779 - Posted: 19 Nov 2020 | 7:22:45 UTC - in response to Message 55778.

This is what you can do with the coolbits set for the cards in the system.





Need to knock the s off the https for the img tags, with this old server software.

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1358
Credit: 7,894,103,302
RAC: 7,266,669
Level
Tyr
Scientific publications
watwatwatwatwat
Message 55783 - Posted: 19 Nov 2020 | 16:36:14 UTC

Thanks for the fix Richard.

Couldn't figure out why it wasn't working.

Was confused about how to post images with the method you have to use on Einstein's old software.

Didn't realize the same applied here. Hope I remember the trick for future use.

bozz4science
Send message
Joined: 22 May 20
Posts: 110
Credit: 114,275,746
RAC: 195,156
Level
Cys
Scientific publications
wat
Message 55785 - Posted: 19 Nov 2020 | 17:07:51 UTC - in response to Message 55783.
Last modified: 19 Nov 2020 | 17:12:10 UTC

Thanks for those awesome tips, Keith! I will definitely give it a try as soon as my currently running climate prediction tasks finish and I can safely reboot into Linux. Thanks also for making it so clear by attaching the screenshots!

First thing to do with the higher stack Nvidia cards is to restore the memory clock to P0 power state that the Nvidia drivers deliberately downclock when a compute application is run.
Will take that advice to heart in the future for the next card :)

Curious about what improvement these suggested changes might bring, I made some changes in MSI Afterburner on Windows. It might be the "beginner solution", but after switching to a somewhat more aggressive fan curve, increasing the core overclock to 1.4 GHz, and maxing out the temperature limit at 100 °C, I am already starting to see some of the desired changes.

Prior:
- 1365 MHz core/2840 MHz mem clock
- occasional compute errors thrown on GPUGrid
- 60-63C
- 55% fan @20C ambient or 65% @30C ambient
- GPUGrid task load: (1) compute: 100%, (2) power: 100% (38.5W)
- boost clock not sustained for longer time periods

Now:
- 1400 MHz core/2820 MHz mem clock
- no compute errors so far
- 63C constantly (fan curve adjust. temp. point)
- 70% fan@20C ambient
- GPUGrid task load: (1) compute: 100%, (2) power: ø ~102% (39.2 W) / max 104.5% (40.2 W)
--> I got these numbers by running a task today and averaging the wattage readings obtained over a 10 min period under load with the command "nvidia-smi --query-gpu=power.draw --format=csv --loop-ms=1000" (see the small script at the end of this post). Maybe I should have done it for a shorter period but at a lower "resolution".
- boost clock now sustained at 1391 MHz for longer periods.

While this is already an easy way of improving performance slightly, I still haven't figured out how to keep the card from constantly running into its power limit, or how to raise that limit altogether. From what I have found out so far, the PCB of this card is not rated to operate safely for longer periods above stock voltages, but what I have in mind is a more subtle change to the maximum power. I would like to see how the card behaves at 110% of its current 38.5 W and go from there. The only route I have found is to modify the BIOS and then flash it to the card. I did the first step successfully and now have a modified BIOS with a 42.3 W power limit alongside a copy of the original, but unfortunately no way to flash it to the card, even just to try it, as I don't want to fry the card... Nvflash is apparently not working on Windows, and I would have to boot into DOS in order for the Nvflash software to work and deploy the new BIOS. That is way over my head, especially as I don't feel comfortable not understanding all of the technical details of the BIOS, even though I only touched the power limit. For now I am hitting a roadblock and feel much more comfortable with the current card settings. Thanks for the quick "power boost". Definitely looking forward to taking a closer look once I'm booted into Linux.
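
For reference, the manual averaging of the power.draw samples described above can also be scripted; a minimal sketch (again assuming nvidia-smi is on the PATH; the helper name, sample count and interval are just my own illustrative choices):

# Sample GPU power draw at a fixed interval and report the average and maximum,
# mirroring the manual averaging of "nvidia-smi --query-gpu=power.draw" output.
import subprocess, time

def sample_power(samples=600, interval_s=1.0, gpu=0):
    readings = []
    for _ in range(samples):
        watts = subprocess.run(
            ["nvidia-smi", "-i", str(gpu),
             "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        readings.append(float(watts))
        time.sleep(interval_s)
    return readings

watts = sample_power(samples=600)  # roughly 10 minutes at one sample per second
print(f"average {sum(watts) / len(watts):.1f} W, max {max(watts):.1f} W")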

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1358
Credit: 7,894,103,302
RAC: 7,266,669
Level
Tyr
Scientific publications
watwatwatwatwat
Message 55787 - Posted: 19 Nov 2020 | 18:52:58 UTC

Well, as I mentioned earlier, Nvidia doesn't usually hamstring the lower-stack cards like your GTX 750 Ti. So you don't need to overclock the memory to make it run at P0 clocks, because it is already doing that.

But you could try to bump it a bit above stock clocks and see if it runs without errors. You can certainly increase the fan speeds, and it will run faster because it is cooler.

To get access to the overclocking mode in Linux you need to set the coolbits with a command line entry.

sudo nvidia-xconfig --thermal-configuration-check --cool-bits=28 --enable-all-gpus


That will un-grey the fan sliders and the entry boxes for core and memory clocks.

Log out and log back in, or reboot, to enable the changes. Then open up the Nvidia X Server Settings application that the drivers install and start playing around with the settings.

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1358
Credit: 7,894,103,302
RAC: 7,266,669
Level
Tyr
Scientific publications
watwatwatwatwat
Message 55788 - Posted: 19 Nov 2020 | 18:56:06 UTC

To flash card BIOSes you normally make a DOS boot stick and put the flashing software and the ROM file on it.

Plenty of posts in various fora about how to do that. Try Google.

bozz4science
Send message
Joined: 22 May 20
Posts: 110
Credit: 114,275,746
RAC: 195,156
Level
Cys
Scientific publications
wat
Message 55789 - Posted: 19 Nov 2020 | 19:02:22 UTC - in response to Message 55788.
Last modified: 19 Nov 2020 | 19:05:04 UTC

I certainly will, I just didn't have much time today to do my research. But yes, that is as far as I have already got with my Google research. I strongly plan on researching this further, just out of curiosity, on the weekend when I have more time. :)

So you don't need to overclock the memory to make it run at P0 clocks because it already is doing that.
I will scale it back to stock speed then and see how/whether it'll impact performance.

That command line is greatly appreciated.


I just wish I understood the rationale of the card maker (Asus) for requiring a 6-pin power plug, but then restricting the card to a much lower power limit that the PCIe slot supply alone would easily have covered. I just can't wrap my head around that...

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1358
Credit: 7,894,103,302
RAC: 7,266,669
Level
Tyr
Scientific publications
watwatwatwatwat
Message 55791 - Posted: 19 Nov 2020 | 19:36:11 UTC - in response to Message 55789.

If I remember correctly, the GTX 750 Ti was a transition product that straddled the Kepler and Maxwell families.

It could very well have just inherited the PCB from a previous Kepler board that needed the 6 pin PCIe connector and they just plopped the new Maxwell silicon on it.

Or the Nvidia engineers decided that the 60W TDP of the card was cutting it too close to the 66W 12V PCIE slot power limit and decided to bolster the card with the 6 pin connector which was barely needed.

bozz4science
Send message
Joined: 22 May 20
Posts: 110
Credit: 114,275,746
RAC: 195,156
Level
Cys
Scientific publications
wat
Message 55795 - Posted: 19 Nov 2020 | 19:56:50 UTC - in response to Message 55791.

Well, thanks! Both are plausible explanations. I will let things rest now :)
Cheers

Profile tito
Send message
Joined: 21 May 09
Posts: 22
Credit: 1,916,690,043
RAC: 5,534,947
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 55796 - Posted: 19 Nov 2020 | 20:17:51 UTC

There is a nice OC tool on Linux - GreenWithEnvy. Not as powerful as MSI Afterburner on Windows, but still usable. And it has a Windows-like appearance.

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1358
Credit: 7,894,103,302
RAC: 7,266,669
Level
Tyr
Scientific publications
watwatwatwatwat
Message 55797 - Posted: 19 Nov 2020 | 20:26:51 UTC - in response to Message 55796.

There is a nice OC tool on Linux - GreenWithEnvy. Not as powerful as MSI Afterburner on Windows, but still usable. And it has a Windows-like appearance.

I tried to run it when it first came out. Was incompatible with multi-gpu setups.

I haven't revisited it since. I had to install a ton of prerequisite dependencies.

Not for a Linux newbie in my opinion.

bozz4science
Send message
Joined: 22 May 20
Posts: 110
Credit: 114,275,746
RAC: 195,156
Level
Cys
Scientific publications
wat
Message 55798 - Posted: 19 Nov 2020 | 20:28:35 UTC - in response to Message 55796.
Last modified: 19 Nov 2020 | 20:51:56 UTC

Great! Didn't know that such a software alternative existed for Linux. Will take a closer look as soon as I get the chance. Thanks for sharing!

Having just seen your comment, Keith, I might reconsider. I'll definitely check out the GitLab page/manual and skim through the linked review articles.

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1358
Credit: 7,894,103,302
RAC: 7,266,669
Level
Tyr
Scientific publications
watwatwatwatwat
Message 55802 - Posted: 20 Nov 2020 | 0:33:20 UTC

Well I just revisited the GWE Gitlab repository and read through the changelogs.
There still isn't any multi-gpu support working as of 1 month ago.

The developer says it is still broken and that he doesn't have time to fix it.

So not an option for me still. I always run multi-gpu hosts.

Would not be a problem for you bozz4science with just your single GTX 750 Ti.

Also the developer has simplified the installation greatly by dropping PyPi repositories.

Even better he has a Flatpak image installation which is even simpler.

I would just download the Flatpak installation image and run that.

Read the readme.md file on the main page for the flatpak installation instructions.

bozz4science
Send message
Joined: 22 May 20
Posts: 110
Credit: 114,275,746
RAC: 195,156
Level
Cys
Scientific publications
wat
Message 55856 - Posted: 30 Nov 2020 | 13:15:47 UTC
Last modified: 30 Nov 2020 | 13:16:27 UTC

Recently, out of curiosity, I tried running some WUs of the new Wieferich and Wall-Sun-Sun PrimeGrid subproject. Usually I don't run this project, but in my experience these math tasks tend to be very power hungry. However, I was still very surprised to see that this was the first task that pushed well beyond the power limit of this card, with a maximum wattage draw of 64.8 W and an average of 55 W under the same settings. Even GPUGrid can't push the card this far. Though this is well above the 38.5 W power limit defined for this particular card, it is now close to the reference card's TDP rating of 65 W. All is running smoothly at this wattage draw and temps are very stable, so I reckon this card could hypothetically draw much more power if it wanted to.

Interestingly, from what I could observe, the overall compute load was much lower than for GPUGrid (ø 70% vs. 100%), with similar strain on the 3D engine. What could cause the spike in wattage draw, given that I cannot see what is driving it? Any ideas? I just don't understand how a GPU can draw more power even though overall utilisation is down... What am I missing here? Any pointers appreciated.

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2356
Credit: 16,376,266,666
RAC: 3,493,740
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 55864 - Posted: 1 Dec 2020 | 0:55:45 UTC - in response to Message 55856.
Last modified: 1 Dec 2020 | 0:58:31 UTC

Interestingly, from what I could observe, the overall compute load was much lower than for GPUGrid (ø 70% vs. 100%) and similar strain on the 3D engine. What does/could cause the spike in wattage draw even though I cannot observe what is causing this? Any ideas? I just don't understand how a GPU can draw more power even though overall utilisation is down... What am I missing here? Any pointers appreciated

The GPU usage reading shows the utilization of the "task scheduler" units of the GPU, not the utilization of the "task execution" units.
The latter can be gauged by the ratio of the card's power draw to its TDP.
Different apps load the different parts of the GPU differently.
Smaller math tasks can even fit into the cache of the GPU chip itself, so they hardly interact with the GPU memory (making the "task execution" units even busier).
The GPUGrid app interacts a lot with the CPU and the GPU memory.
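
A rough way to watch both quantities side by side (a sketch only, assuming nvidia-smi is on the PATH; power.default_limit is used as a stand-in for the TDP, and the helper name is just for illustration):

# Compare scheduler-level GPU utilization with power draw as a fraction of the
# default power limit (a rough proxy for how busy the execution units are).
import subprocess

def utilization_vs_power(gpu=0):
    out = subprocess.run(
        ["nvidia-smi", "-i", str(gpu),
         "--query-gpu=utilization.gpu,power.draw,power.default_limit",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    util_pct, draw_w, default_limit_w = (float(x) for x in out.split(", "))
    return util_pct, 100.0 * draw_w / default_limit_w

util, power_pct = utilization_vs_power()
print(f"GPU utilization {util:.0f}%, power draw {power_pct:.0f}% of the default power limit")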

bozz4science
Send message
Joined: 22 May 20
Posts: 110
Credit: 114,275,746
RAC: 195,156
Level
Cys
Scientific publications
wat
Message 55923 - Posted: 9 Dec 2020 | 21:53:19 UTC - in response to Message 55864.

Thank you Zoltan for your answer. As always much appreciated.


I also want to finally share with you which components I chose for my system. I guess, after all the advice I have received from you, I owe you at least a short update on the project that this thread was initially all about (but which ended up being so much more).

My main criteria for the parts chosen for my first system build were a) longevity, b) efficiency, c) headroom for future upgrades. While I went a bit over budget, I am happy to finally have ordered all the parts. Depending on how smoothly the build goes over the holidays, I am planning to bring the new system online in early 2021.

The supply shortage of the 1600-series cards honestly took me by surprise, as I thought everyone would only be looking at the Ampere cards. While back in Sep/Oct supply still seemed decent and pricing fair, the 1650 Super and 1660 Super cards were nowhere to be found. After a couple of weeks of seeing absolutely no stock, I noticed some retailers getting the cards back in and pulled the trigger. I settled on a 1660 Super. At the time of purchase the prices for the Asus Strix 1650 Super and 1660 Super were 215€ and 255€ respectively, and the price differential was too small to justify going with the 1650 Super IMO. In the end, I believe this is still worth the money and a great addition to the new system. Adding to the difficulty, I didn't anticipate that many people upgrading to Ampere apparently also had to upgrade their PSUs at the same time, so it was just impossible to find any 750W+ PSU. This was the last missing component and it is now in transit. Thanks again for helping me out over in the hardware thread, as I wasn't sure about the 4-pin EATX vs. 8-pin EATX connectors.

Specs:
- CPU: Ryzen 3700X
- GPU: Asus Strix 1660 Super Adv.
- RAM: 16 GB DDR4 3200MHz 16-18-18-36
- PSU: Platinum 850W
- Mobo: "higher-end" Asus B550 (2 x16 PCIe gen 4 slots in dual x8 mode, 1 x16 PCIe gen 3 slot in x4 mode)

Definitely gonna be a steep upgrade from my current setup:
- CPU: Intel Xeon X5660
- GPU: GTX 750 Ti
- RAM: 12 GB DDR3 ECC 1333 MHz
- PSU: Bronze 475W

My update path I plan to execute on in the future is the following.
- CPU: upgrade from the Zen 2 8-core to a Zen 3 16-core CPU (5950X), or wait for Zen 4 and a new chipset/socket
- GPU: from single GPU GTX 1660 Super to dual GPU system by adding a RTX 3060Ti/3070
- RAM: upgrade from 16 GB to 32GB
