General buying advice for new GPU needed!
bozz4science · Joined: 22 May 20 · Posts: 110 · Credit: 115,525,136 · RAC: 0
Currently I am facing a few questions that I don't know the answers to, as I am quite new to the GPU market. With the next generation of NVIDIA cards, the RTX 3000 series, arriving, I felt it was time to consider an "upgrade" myself. At the moment I am running a fairly old rig: an HP workstation with an older Xeon processor and a GTX 750 Ti OC Edition. So far I am pretty happy with it, but I would like a more efficient card based on the Turing chip, which seems to me a great balance between performance, efficiency and price.

As I am constrained by the available x16 PCIe slots on my mobo, as well as cooling/airflow issues and a power supply bottleneck (475 W @ 80% efficiency), I only want to consider cards with ~100 W TDP and under 250€. So I couldn't run a dual-GPU rig; I'd have to swap the old card for the new one. Having read up on the forum, I see that the GTX 1650 seems to run rather efficiently, and I really can't afford any RTX 2000 or 3000 series card. With a GTX 1650 Super sitting at ~160€ right now, I am definitely considering upgrading. Thanks for any input! Here are my questions.

1) Considering a GTX 1650 Super, is this a good overall investment at this time? Would you recommend any other card with similar specs within the boundaries mentioned?

2) Does anyone expect a price spillover effect to the mid-range NVIDIA GPUs anytime soon, or will they probably be unaffected by the release of the new high-end RTX 3000 series?

3) Does anyone have experience with hardware purchases around Black Friday? I've never done it, so I don't know whether that period is a better time for a GPU purchase than right now. Can you usually score better deals around BF?

4) Looking at cards from AMD, which don't run on GPUGrid due to the lack of CUDA, the NVIDIA cards sometimes seem to offer less bang for the buck and often lag behind comparable AMD cards in features such as memory size. Would I be "overpaying" for a GTX 1650 Super right now?

5) A concern for me is the 4 GB of GDDR6 memory, which seems rather low compared to other mid-tier GPUs. Is this future-proof for running BOINC for at least a couple of years, or could it become a real issue soon?

6) Does memory size affect speed, or only whether a particular WU can run at all?

7) For a specific card model, say a GTX 1650 Super, how can I differentiate between the makes of the various producers? To me the technical specs seem rather indistinguishable, with only the various cooling/fan designs being marketed as particularly innovative and quiet. Versions of this card from ASUS, MSI, Zotac, EVGA, Gigabyte, etc. all have roughly the same boost clock, lie within about $25/20€ of each other, and have the same memory, memory type, power connector requirement (6-pin), and so on. What do I need to watch out for here, and how should I rank these points? Is any particular brand notorious for "dud" cards or bad customer service on warranty replacements? At the moment I'm leaning towards an MSI or ASUS card...

8) Regarding the GTX 1660, with its slightly higher TDP, moderate performance improvement and larger memory: would investing in this higher-end card make more sense performance-wise?
9) I have previously bought 2 used GPUs on eBay; one ran smoothly but the other was dead on arrival, so I tend to second-guess buying used cards, as you never know how people have run them before. What experiences have you had with used cards? Have you run into similar issues? Would you consider buying such a card used, and at what price difference to a new card does second-hand make sense? What about warranty issues, especially when running the card 24/7 here on BOINC?

Note: Regarding the TDP, I want to run quietly and avoid extreme temperatures, and considering the 80% Bronze PSU, a card rated at 100 W TDP could easily pull 120 W when overclocked and under full load. That's my limit for the 24/7 running cost of my rig without needing to upgrade the whole thing. Any advice much appreciated!
Keith Myers · Joined: 13 Dec 17 · Posts: 1419 · Credit: 9,119,446,190 · RAC: 891
Have a look at the GPUFlops Theory vs Reality chart; the GTX 1660 Ti is top of the chart for efficiency: https://setiathome.berkeley.edu/forum_thread.php?id=81962&postid=2018703 Only 120 W, and it has 6 GB of memory.

GPUGrid likes fast memory transactions and wide memory bandwidth, but in practice its tasks don't actually use all that much memory. My currently running task shows only 300 MB of memory in use on my RTX 2080 card, but with 97% utilization. Other projects do use more, though, such as Einstein's Gravity Wave tasks, which can take over 4 GB of memory to crunch.

There might be some Black Friday loss-leader sales of the older generation, aimed at the people who still haven't been able to snag a new Ampere card. But as for whether prices will be lower in general: in past years that hasn't been the case in generational transition years. Steve at GN actually commented on this topic.
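If you want to check memory use and utilization on your own card while a task is running, nvidia-smi can report both. A minimal sketch, assuming the NVIDIA driver's command-line tool is installed on your system:

```bash
# Report GPU name, memory in use, total memory, and core utilization
# for every NVIDIA GPU in the machine.
nvidia-smi --query-gpu=name,memory.used,memory.total,utilization.gpu --format=csv
```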
bozz4science · Joined: 22 May 20 · Posts: 110 · Credit: 115,525,136 · RAC: 0
Well, thanks first of all for your timely answer! I very much appreciate the pointer to the efficiency comparison table; it's impressively detailed. It kind of confirms what I already suspected: the GTX 1660 / Ti would cost more initially but be well worth the investment, even though all Turing-based cards score pretty well. That is also very similar to the data ServicEnginIC shared with me in another recent post of mine. I was probably looking for the easy and fast way to boost performance without having to upgrade my whole rig. A GTX 1660 Ti would require an 8-pin connector, but my PSU only offers a single 6-pin.

What I'll very likely end up doing is waiting with the GPU upgrade until I can afford to build a whole new system from scratch, substituting a Ryzen chip for my Xeon as well. I guess that's worth a bit more patience. Interesting to see the GTX 750 Ti still making it to the top of the list performance-wise :)

This definitely reassured me in my plan to upgrade in the near future, and that a GTX 1660 model, either Ti or Super, will be a great choice. The decision to wait is based more on the timing of being able to set up a new rig than on trying to aim for a great deal on BF, though it might still be worth a shot being on the lookout for a great GTX 1660 deal! I'll keep running my 750 Ti 24/7 for now. At the current rate it takes ~10 days per million credits. It's gonna be a long journey :) Thanks Keith Myers for your advice!
Keith Myers · Joined: 13 Dec 17 · Posts: 1419 · Credit: 9,119,446,190 · RAC: 891
> A GTX 1660 Ti would require an 8-pin connector, but my PSU only offers a single 6-pin.

What about some spare Molex connectors from the power supply? A lot of cards include Molex-to-PCIe power adapters in the box, and I'm sure there are adapters to go from Molex to 8-pin available online. That might be a solution.

But a new build solves a lot of compatibility issues, since you are spec'ing out everything from scratch. I like my Ryzen CPUs, as they provide enough threads to crunch CPU tasks for a lot of projects and still provide enough CPU support for GPUGrid tasks on multiple GPUs.
bozz4science · Joined: 22 May 20 · Posts: 110 · Credit: 115,525,136 · RAC: 0
Thanks for this idea. I'll investigate this possibility further, though I'd actually rather wait now and plan the new rig first. Do you know how many watts a Molex connector supplies? I've read different values online so far...

Oh man, you've got some nice rigs. I'll probably land well below your Threadripper and/or Ryzen 9; at the moment I'm eyeing a Ryzen 3700X for my new setup. It's definitely amazing to compare the performance, and especially the efficiency, of a 3700X paired with a GTX 1660 Ti vs. a Xeon X5660 @ 95 W with a GTX 750 Ti. As this is going to be my first own build, it'll take some time to do my research and look for the right parts. It's gonna be fun though :) Thanks again for your insights!
Joined: 1 Jan 15 · Posts: 1166 · Credit: 12,260,898,501 · RAC: 1
> Interesting to see the GTX 750 Ti still making it to the top of the list performance-wise :)

I am running GTX 750 Ti cards in two of my old, small machines, and I don't want to miss them :-) They have been doing a perfect job for many years now.
Joined: 8 Aug 19 · Posts: 252 · Credit: 458,054,251 · RAC: 0
Keith is correct that adapters are available. The power input on the 1660 Ti is what's also known as a 6+2-pin VGA power plug; that might help with your search. We're also assuming that your PSU is at least 450 W.

From what I have seen, the GTX 1660 Super is the better buy, and the RTX 3080 at $750 USD is now the best value in direct compute performance (provided you can find one for sale). Check them out here: https://www.videocardbenchmark.net/directCompute.html

I'm going to wait and see what the market does after the 3000 series has been on the streets for a while.
Joined: 8 Aug 19 · Posts: 252 · Credit: 458,054,251 · RAC: 0
Something to think about: two GTX 1660 Supers in one machine can potentially keep pace with an RTX 2080. That's $460 USD vs. $800 for the 2080, and both solutions work equally well for research computation.
ServicEnginIC · Joined: 24 Sep 10 · Posts: 592 · Credit: 11,972,186,510 · RAC: 1,447
> Interesting to see the GTX 750 Ti still making it to the top of the list performance-wise :)

I built this system from recovered parts to put a repaired GTX 750 Ti graphics card back to work. It is currently scoring a 50K+ RAC at GPUGrid.

A curious collateral anecdote: I attached this system to PrimeGrid to keep its CPU working as well. It ended up collaborating as the double-checker system in the discovery of the prime number 885*2^2886389+1. Being 868,893 digits long, it entered the T5K list of largest known primes at position 971 ;-)
Keith Myers · Joined: 13 Dec 17 · Posts: 1419 · Credit: 9,119,446,190 · RAC: 891
> Do you know how many watts a Molex connector supplies? I've read different values online so far...

The standard Molex connector can supply 11 A on the 12 V pin, or 132 W. A quick perusal of Amazon and Newegg.com showed Molex-to-8-pin-PCIe adapters for $8.
bozz4science · Joined: 22 May 20 · Posts: 110 · Credit: 115,525,136 · RAC: 0
Thank you all for your comments. It definitely seems to me that all Turing-based cards are rather efficient and good bang for your buck :) I'll look further into the GTX 1650 vs. 1660 Super vs. 1660 Ti. The last two actually seem pretty similar to me: 125 W for 5.027 TFLOPS (FP32) vs. 120 W for 5.437 TFLOPS according to TechPowerUp. Currently I lean more towards the 1660 models, as they come with 6 GB of GDDR6 memory, which seems more future-proof to me than only 4 GB.

The idea of using a Molex-to-8-pin-PCIe adapter in the meantime, to bridge the gap until I build my new system, seems great given that my current PSU lacks the connector. I just wanted to verify beforehand to avoid any instant-ignition surprise with the new card :) As my PSU delivers 475 W, that shouldn't be an issue.

At the moment my GTX 750 Ti, with the earlier-mentioned OC setting sitting at a 1365 MHz core clock, is pushing nearly 100k credit per day, with RAC taking a hit because I recently powered the system down for a few days of maintenance. So that's decent. When I finish the new rig, probably not before the beginning of next year, I'll first run a dual-GPU system with my GTX 750 Ti and one of the aforementioned cards. Then, if budget and running costs allow, I'll consider upgrading further.

Keep in mind that electricity bills here in Germany often tend to be threefold what other users are used to, and that can definitely influence hardware decisions when running a rig 24/7. An OC'd GTX 1660 Ti at 120 W, keeping in mind the efficiency of most PSUs, might easily pull 140-150 W from the wall; over 24/7 operation that comes to 24 h * 150 W = 3.6 kWh, which at the current rate of 0.33€/kWh is about 1.19€ per day for a single GPU alone. So efficiency is on my mind, as the CPU and other peripherals also pull significant wattage... I still wouldn't like to have the power bill of some of the users here with a couple of RTX 2xxx, or soon 3xxx, cards :)

And by the way, congrats ServicEnginIC on finding your first prime! Thanks again.
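For anyone who wants to redo this running-cost arithmetic for their own card and tariff, here is a minimal sketch of the calculation above; the function name and the default values (80% PSU efficiency, 0.33€/kWh) are just illustrative, so adjust them to your own hardware and contract:

```python
# Rough 24/7 running-cost estimate: wall draw = DC load / PSU efficiency.
def daily_cost_eur(card_watts, psu_efficiency=0.80, eur_per_kwh=0.33):
    wall_watts = card_watts / psu_efficiency   # what the PSU pulls from the wall
    kwh_per_day = wall_watts * 24 / 1000       # energy over a full day, in kWh
    return kwh_per_day * eur_per_kwh

# A 120 W card behind an 80%-efficient PSU, as in the post above:
print(round(daily_cost_eur(120), 2))  # -> 1.19 EUR/day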
Joined: 1 Jan 15 · Posts: 1166 · Credit: 12,260,898,501 · RAC: 1
> ... I'll first run a dual-GPU system with my GTX 750 Ti and one of the aforementioned cards.

I am not sure whether you can mix different GPU generations (Maxwell, Turing) in the same machine. One of the specialists here might give more information on this.
Joined: 4 Aug 14 · Posts: 266 · Credit: 2,219,935,054 · RAC: 0
> ... I'll first run a dual-GPU system with my GTX 750 Ti and one of the aforementioned cards.

Yes, the cards can be mixed. The only issue is that on a PC restart (or BOINC service restart) the GPUGrid tasks must attach to the same GPU they were started on, or both tasks will fail immediately. Refer to this post by Retvari Zoltan for more information on the issue and remedial action: http://www.gpugrid.net/forum_thread.php?id=5023&nowrap=true#53173

Also refer to the ACEMD3 FAQ: http://www.gpugrid.net/forum_thread.php?id=5002
Joined: 11 Jul 09 · Posts: 1639 · Credit: 10,159,968,649 · RAC: 428
> ... I'll first run a dual-GPU system with my GTX 750 Ti and one of the aforementioned cards.

Yes, you can run two different GPUs in the same computer - my host 43404 has both a GTX 1660 SUPER and a GTX 750 Ti, and they both crunch for this project. Three points to note:

i) The server shows 2x GTX 1660 SUPER - that's a reporting restriction, and not true.

ii) You have to set the configuration flag 'use_all_gpus' in the configuration file cc_config.xml - see the User manual. Otherwise, only the 'better' GPU is used.

iii) This project - unusually, if not uniquely - can't start a task on one model of GPU and finish it on a different model. You need to take great care when stopping and restarting BOINC to make sure the tasks restart on their previous GPUs.
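For reference, the 'use_all_gpus' flag from point ii) goes into cc_config.xml in the BOINC data directory. A minimal example of this documented BOINC option (any other options you already have in the file stay as they are):

```xml
<cc_config>
  <options>
    <!-- Use every GPU in the machine, not just the one BOINC ranks 'best'. -->
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>
```

After saving the file, restart the BOINC client, or use Options → Read config files in the BOINC Manager, for the setting to take effect.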
Joined: 4 Aug 14 · Posts: 266 · Credit: 2,219,935,054 · RAC: 0
> Keep in mind that electricity bills here in Germany often tend to be threefold what other users are used to, and that can definitely influence hardware decisions when running a rig 24/7. ... So efficiency is on my mind, as the CPU and other peripherals also pull significant wattage...

If running costs are an important factor, then consider the observations made in this post: https://gpugrid.net/forum_thread.php?id=5113&nowrap=true#54573
ServicEnginIC · Joined: 24 Sep 10 · Posts: 592 · Credit: 11,972,186,510 · RAC: 1,447
Whatever the decision, I'd recommend purchasing a well-cooled graphics card: a heat-pipe or vapour-chamber based heatsink is better than a plain one. My most powerful card currently in production is a factory-overclocked GTX 1660 Ti. Its cooling is based on a dual-fan heatsink without heat pipes, and I've had to fight hard to keep temperatures out of crazy limits when processing at full (120 Watts) power.

The last card I purchased was a GTX 1650 Super, and I'm very satisfied with its well-employed 100 Watts. I'm still evaluating performance on this system, but in a flash:

- GTX 750 Ti OC, ADRIA tasks mean execution time: 73,741 seconds
- GTX 1650 Super, ADRIA tasks mean execution time: 18,634 seconds (roughly a 4x speedup)
Joined: 4 Aug 14 · Posts: 266 · Credit: 2,219,935,054 · RAC: 0
> Whatever the decision, I'd recommend purchasing a well-cooled graphics card.

+1 on attention to cooling ability. For comparison: the GTX 1650 Super, when power-limited to 70 W (its minimum power limit), completes ADRIA tasks in about 20,800 seconds.

EDIT: GERARD has just released some work units that will take a GTX 750 Ti over 1 day to complete. A good time to retire the venerable GTX 750 Ti.
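For anyone wanting to try the 70 W power limit mentioned above, the usual route is nvidia-smi. A sketch, assuming a Linux host with root access; the allowed power-limit range differs per card, so query it first:

```bash
# Show the card's supported power-limit range.
sudo nvidia-smi -q -d POWER

# Enable persistence mode so the setting sticks between CUDA apps (Linux only).
sudo nvidia-smi -pm 1

# Set the board power limit to 70 W (must be within the range reported above).
sudo nvidia-smi -pl 70
```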
bozz4science · Joined: 22 May 20 · Posts: 110 · Credit: 115,525,136 · RAC: 0
Thank you all! That thread turned out to be a treasure trove of information; I will keep referring to it in the future, as it is almost a timelier version of the FAQs. Interesting to know that running two or more GPUs in one system is indeed possible even if the chips/architectures differ. However, it really seems like a lot of pain to implement in practice (just rebooting for an update...).

Purchasing a powerful 250 W card and then directly reducing its power limit seemed a bit counterintuitive at first, but efficiency-wise I guess it makes total sense. Just as cruising at 60 mph in a 100 PS car is fine and consumes the least fuel, while ramping it up to 90 or even 120 mph is doable but takes loads more fuel to achieve. So I'll give this a thought. Dual-fan cooling is a must-have for me, but thanks anyway for the pointer; a passively cooled card, especially in a small case, must really be horrible temperature-wise.

I have just scrolled through the valid tasks of some non-hidden hosts of users in this thread, and some ADRIA tasks took nearly 10 hrs (~33,000 sec) even on newer cards such as the GTX 1650 and GTX 1650 Super. I've never seen a GERARD task, but I'd love to get either of the mentioned ones myself. These must be real monsters :) That definitely supports my idea of considering an upgrade. Thanks again, guys.
Joined: 1 Jan 15 · Posts: 1166 · Credit: 12,260,898,501 · RAC: 1
bozz4science wrote:
> At the moment my GTX 750 Ti, with the earlier-mentioned OC setting sitting at a 1365 MHz core clock, is pushing nearly 100k credit per day...

May I ask at which temperature this card runs at a 1365 MHz core clock? The question also goes to the other guys here who mentioned that they are running a GTX 750 Ti: which core clocks at which temperatures? Thanks in advance for your replies :-)