General buying advice for new GPU needed!

Message boards : Graphics cards (GPUs) : General buying advice for new GPU needed!
Pop Piasa
Joined: 8 Aug 19
Posts: 252
Credit: 458,054,251
RAC: 0
Message 55741 - Posted: 15 Nov 2020, 0:43:39 UTC - in response to Message 55739.  

I second what Keith said, and I have also found that to be true of motherboards when comparing Asus to Gigabyte.
Their price points are similar, but in my experience Asus boards perform better and last longer.
ID: 55741
rod4x4
Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Message 55742 - Posted: 15 Nov 2020, 1:05:14 UTC - in response to Message 55738.  

The 1660 SUPER for price and performance, and the Asus ROG for its features, longevity and thermal capacity, are both excellent choices.

The only other thing to consider is how desperate are you to buy now?

The Ampere equivalent of the GTX 1660 SUPER should be released by March 2021, and Gpugrid should have Ampere compatibility by then as well. (I don't have a crystal ball, so don't hold me to either of those claims!!)

However, if we always wait for the next best thing in technology, we would never buy anything....
ID: 55742
bozz4science
Joined: 22 May 20
Posts: 110
Credit: 115,525,136
RAC: 0
Message 55747 - Posted: 15 Nov 2020, 14:45:07 UTC
Last modified: 15 Nov 2020, 14:58:10 UTC

Thank you both for your replies as well. I guess the consensus is that Asus offers the superior product. That's definitely reassuring :)

@rod4x4: What would the Ampere equivalent of the 1660 Super be? The RTX 3060/3050? I am really fighting with myself right now, as this thought has crossed my mind more than once. Waiting and saving up for a larger upgrade early next year might be worthwhile, but then again, the half-life of computational performance is not very long these days. Not that a 1660 Super would become obsolete, but it already lags far behind the current generation's capacity. I feel that especially this year, CPUs and GPUs alike saw a rather steep improvement in performance ... and competition. The latter will hopefully benefit all of us in the long run.

I guess it would be much clearer for me if I only had to consider upgrading my GPU. But as I am building the PC from scratch, starting with PSU/mainboard etc., I guess the time is ripe now. And I would still be constrained by the very same ~300€ for a GPU in Q1/2021, which wouldn't get me close to a ~500$ RTX 3060 card. Hopefully I will be able to add a latest-gen RTX card later next year, once prices have come down and supply has stabilised.

@Pop Piasa: Interesting! I have been torn between the B550 Aorus Pro AC from Gigabyte/Aorus and the ROG Strix B550-F Gaming for a long time now, and I really didn't see much difference, neither from comparing the specs nor from reading a couple of reviews. My gut feeling told me to go with the Gigabyte board, as reviews mostly said it runs a few degrees cooler overall, but I might reconsider carefully. I'll have (another) close look! Nothing's been finalised yet.

Never anticipated the anxiety that comes with choosing the right parts. :)

I quickly drafted a comparison table of the price per TFLOP (F16) and price per GFLOP (F64) for a few card models for which I found prices at some of Germany's biggest PC hardware shops. I averaged the performance data for the respective models from the TechPowerUp GPU database and found, once again, that at least by this measure the 1660 Super comes out on top. I hope the premium over the 1660 Super per TFLOP will narrow in the future.

Model           | Price [€] | TFLOPs (F16) | GFLOPs (F64) | €/TFLOP (F16) | €/GFLOP (F64) | ∆ F16 [%] | ∆ F64 [%]
1660 Super      | 250       | 10.39        | 162.4        | 24.06 €       | 1.54 €        | 0.0       | 0.0
1660 Ti         | 290       | 11.52        | 180.0        | 25.17 €       | 1.61 €        | 4.6       | 4.6
RTX 2060 Super  | 435       | 14.62        | 228.5        | 29.75 €       | 1.90 €        | 23.7      | 23.6
RTX 2070 Super  | 525       | 18.12        | 283.2        | 28.97 €       | 1.85 €        | 20.4      | 20.4
RTX 3070        | 699       | 21.55        | 336.7        | 32.44 €       | 2.08 €        | 34.8      | 34.8
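
The table above is just price divided by throughput, with the ∆ columns relative to the 1660 Super. A minimal sketch that reproduces the F16 columns (prices and TFLOP figures copied from the table; only rounding may differ):

```python
# Reproduce the €/TFLOP (F16) comparison; the 1660 Super is the baseline for ∆.
cards = {
    "1660 Super":     (250, 10.39),
    "1660 Ti":        (290, 11.52),
    "RTX 2060 Super": (435, 14.62),
    "RTX 2070 Super": (525, 18.12),
    "RTX 3070":       (699, 21.55),
}

base = cards["1660 Super"][0] / cards["1660 Super"][1]  # €/TFLOP baseline

for name, (price_eur, tflops_f16) in cards.items():
    eur_per_tflop = price_eur / tflops_f16
    delta = (eur_per_tflop / base - 1) * 100  # premium over the 1660 Super
    print(f"{name:15s} {eur_per_tflop:6.2f} €/TFLOP  ∆ {delta:+5.1f}%")
```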
ID: 55747
Jim1348
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Message 55749 - Posted: 15 Nov 2020, 18:33:32 UTC - in response to Message 55747.  
Last modified: 15 Nov 2020, 19:13:28 UTC

I am in the process of upgrading the GTX 750 Ti in my Win7 64-bit machine to a GTX 1650 Super, and thought I would first do a little comparison against my Ubuntu machines. The new card won't arrive until tomorrow, but this is what I have so far. The output of each card is averaged over at least 20 work units.

I mainly compare efficiency (output per unit of energy), but collect speed along the way.

Note that the GTX 1650 Super has 1280 CUDA cores, the same as the GTX 1060, but faster memory (GDDR6 instead of GDDR5), and lower power (100 watts vs. 120 watts TDP).

Work units: GPUGRID 2.11 New version of ACEMD (cuda100) (TONI_MDAD, for example)

i7-4771 Win7 64-bit
GTX 750 Ti (446.14-CUDA10.2 driver)

Power (GPU-Z): 88.4% TDP = 53 watts
Average over 21 work units: 13340 seconds = 222 minutes
So energy is 53 x 222 = 11,784 watt-minutes per work unit

---------------------------------------------------------------------
Ryzen 2700, Ubuntu 20.04.1 GTX 1060, 450.66 driver (CUDA 11.0)
Average time per work unit: 5673 seconds
Power: 104 watts average (nvidia-smi -l)
So energy is 589,992 watt-seconds, or 9833 watt-minutes per work unit.

----------------------------------------------------------------------
Ryzen 2600, Ubuntu 20.04.1 GTX 1650 Super, 455.28 driver (CUDA 11.1)
Average time per work unit: 4468 seconds
Power: 92 watts average (nvidia-smi -l)
So energy is 411,056 watt-seconds, or 6851 watt-minutes per work unit.
=======================================================================


Conclusion: The GTX 1650 Super on Ubuntu is almost twice as efficient as the GTX 750 Ti on Win7.
But these time averages still jump around a bit in the BoincTasks estimates, so you would probably have to do more runs to really get a firm number.
The general trend appears to be correct though.
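
Jim's efficiency figure is simply average power multiplied by average runtime. A minimal sketch of the same arithmetic, using the measurements above:

```python
def watt_minutes_per_wu(avg_power_w, avg_time_s):
    """Energy per work unit in watt-minutes: power (W) x runtime (min)."""
    return avg_power_w * avg_time_s / 60

# Figures from the three machines above.
wm_750ti = watt_minutes_per_wu(53, 13340)   # ~11,784 (matches Jim's figure)
wm_1060  = watt_minutes_per_wu(104, 5673)   # ~9,833
wm_1650s = watt_minutes_per_wu(92, 4468)    # ~6,851

# ~1.72: the basis for "almost twice as efficient"
print(wm_750ti / wm_1650s)
```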
ID: 55749
bozz4science
Joined: 22 May 20
Posts: 110
Credit: 115,525,136
RAC: 0
Message 55750 - Posted: 15 Nov 2020, 19:58:28 UTC

It's great seeing you here again, Jim. From the discussion over at MLC, I know that your top priority in GPU computing is efficiency as well. The numbers you shared look very promising. I can only imagine the 1660 Super coming very close to the results of a 1650 Super. I don't know if you've already seen the resources hidden throughout this thread, but I recommend taking a look if not.

Let me point one out in particular that Keith shared with me. It's a very detailed GPU comparison table at Seti (performance and efficiency): https://setiathome.berkeley.edu/forum_thread.php?id=81962&postid=2018703

After all the advice I received on various topics throughout this thread, I want to express my sincere thanks to the following volunteers for their valuable contributions so far and for turning it into a lively discussion!
rod4x4 / Pop Piasa / Keith Myers / Retvari Zoltán / eXaPower / ServicEnginIC / Erich56 / Richard Haselgrove – please note these names aren't in any particular order; I simply wanted to take the opportunity to express my gratitude for sharing your knowledge with me and giving advice.


Adding to my earlier post, my gut feeling to go with the Asus 1660 Super card stems from my current 750 Ti, which happens to also be an Asus card. Even though this second-hand card is already more than 6 years old, it has been running nearly 24/7 for the last couple of months (with only a renewal of the thermal paste) and has always stayed below 63 °C without the fans ever going above 65%.
ID: 55750
Jim1348
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Message 55751 - Posted: 15 Nov 2020, 20:36:40 UTC - in response to Message 55750.  

Let me point one out in particular that Keith shared with me. It's a very detailed GPU comparison table at Seti (performance and efficiency): https://setiathome.berkeley.edu/forum_thread.php?id=81962&postid=2018703
Thanks. The SETI figures are quite interesting, but they optimize their OpenCL stuff in ways that are probably quite different from CUDA, though the trends should be similar. I am out of the market for a while, though, unless something dies on me.

ID: 55751
Pop Piasa
Joined: 8 Aug 19
Posts: 252
Credit: 458,054,251
RAC: 0
Message 55752 - Posted: 15 Nov 2020, 21:12:45 UTC - in response to Message 55751.  
Last modified: 15 Nov 2020, 21:34:00 UTC

Incidentally all, have you seen the Geekbench CUDA benchmark chart yet?


https://browser.geekbench.com/cuda-benchmarks

Edit: I wonder whether their test is relevant to ACEMD. At a glance, the ratings are quite surprising to me and somewhat contradict what rod4x4 found using recent data here.
ID: 55752
Keith Myers
Joined: 13 Dec 17
Posts: 1419
Credit: 9,119,446,190
RAC: 731
Message 55753 - Posted: 16 Nov 2020, 1:02:05 UTC

When the science app is optimized for CUDA calculations, it decimates ANY OpenCL application.

The RTX 3050 and RTX 3050 Ti are already leaked for release early next year with 2304 CUDA cores, same as the RTX 2070.

If you can maintain your patience, then that might be the sweet spot for performance/efficiency.

The Seti gpu performance/efficiency chart had NO included CUDA applications in it. Only OpenCL. The Seti special CUDA application was 3X-10X faster than the OpenCL application used in the charts.
ID: 55753
bozz4science
Joined: 22 May 20
Posts: 110
Credit: 115,525,136
RAC: 0
Message 55755 - Posted: 16 Nov 2020, 12:16:24 UTC

Well, that thought has crossed my mind before. But I guess that availability will become an issue again next year, and current retail prices still seem quite a bit higher than Nvidia's suggested launch prices. And unfortunately I will still be budget-constrained. Depending on the deals I get, I'm currently looking at ~650-700$ for all my components excluding the GPU, which leaves just a little over 300$ for that component. Looking at the rumoured price predictions for the 3050/3060 cards, at the cheapest I'm looking at ~350$ for a 3050 Ti, and that is before the price inflation we'll likely see on the retailer side at launch. Small inventory and an initial lack of supply will likely make matters even worse.

For benchmarking purposes, I take the average current retail price of a 1660 Super to be ~250€. If the leaked specs are to be believed, a GTX 1660 Super would deliver the equivalent of:
- 84% of a RTX 3050 Ti --> 119% vs. GTX 1660 Super
- 75% of a RTX 2060 Super --> 133%
- 73% of a RTX 3060 --> 137%
- 65% of a RTX 2070 Super --> 154%
- 64% of a RTX 3060 Ti --> 156%
- 52% of a RTX 2080 Ti --> 192%
- 51% of a RTX 3070 --> 196%
- 39% of a RTX 3080 --> 256%
- 33% of a RTX 3090 --> 303%
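
(The two percentages in each line are reciprocals: if the 1660 Super delivers some fraction of a card's throughput, that card's throughput relative to the 1660 Super is one over that fraction. A quick sketch of the conversion:)

```python
# Fraction of each card's throughput that a GTX 1660 Super delivers
# (a few of the figures quoted above); the reciprocal gives the card's
# performance relative to the 1660 Super.
fractions = {
    "RTX 3050 Ti": 0.84,
    "RTX 3060":    0.73,
    "RTX 3070":    0.51,
    "RTX 3090":    0.33,
}

for card, frac in fractions.items():
    print(f"{card}: {1 / frac:.0%} of a GTX 1660 Super")
```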

Looking again at the rumoured specs taken from techpowerup, I drafted the following comparison list:
RTX 3050 Ti
- on par with a RTX 2060
- 3584 vs. 1920 CUDA cores
- 12.36 vs. 12.90 TFLOPs (F16)
- 150W vs. 160W
- same memory type + size, memory bandwidth, memory bus
- price ? (300$)

RTX 3060
- on par with a RTX 2060 Super / 2070
- 3840 vs. 2176 vs. 2304 CUDA cores
- 180W vs. 175W vs. 175W
- same memory type, smaller memory size, lower memory bandwidth, lower memory bus speed
- price ? (350$)
--> apparently 2 variations with 2 different VRAM sizes and bandwidths (?)

RTX 3060 Ti
- on par with a RTX 2070 Super
- 4864 vs. 2560 CUDA cores
- 200W vs. 215W
- same memory type + size, memory bandwidth, memory bus
- price ? (399$)

Do you think that major projects will have implemented the Ampere architecture in their GPU apps by early next year?


The only components left for me to choose are the mobo, where I am still torn between the 2 boards mentioned, and the GPU. I'll probably look out for the 3060 launch (17th Nov or 2nd Dec) and watch prices like a hawk. While I happily wait for next year's launches, I probably won't be able to resist a 1660 Super now if I come across a sweet deal. It just offers the best value IMO, and will probably continue to do so well into 2021 until prices settle down. At worst, I'll have a cool and efficient runner with 3 years of warranty, and getting an RTX 30xx card is not off the table, just delayed for a bit.

Looking again at the price per TFLOP (F16) as my final benchmark: the 1660 models score at ~23-24€/TFLOP, the RTX 20xx cards score in the range of 27-32€ depending on make and model, and the latest RTX 30xx cards would land right in between, at 24-26€ (RTX 3050/3060/3070 models), if prices come down to their suggested launch prices. Needless to say, at current prices you'll get an efficiency boost over last-gen cards, but you'll end up paying nearly the same per TFLOP as for any RTX 20xx card (ø 30€/TFLOP), if not more. However, you won't get the F16 = F32 performance seen in the RTX cards... So, as of right now, a new RTX card doesn't seem like a value buy to me.

Whatever my final decision may be, I still plan to run at least 1 of the 2 GPUs in the 750W system on my x16 PCIe 4.0 slot. I could see the RTX 3060 Ti settling around the 400$ mark and, as you pointed out, that might be the sweet spot for performance/efficiency that I'd like to add to the new system later next year at that price level.
ID: 55755
Keith Myers
Joined: 13 Dec 17
Posts: 1419
Credit: 9,119,446,190
RAC: 731
Message 55756 - Posted: 16 Nov 2020, 17:17:36 UTC

I am only aware of the projects I run. That said, we are still waiting for Ampere compatibility here.

RTX 3000 series cards work at Einstein already. Haven't found any at Milkyway.
Think they are running at Folding@home and Collatz too.
ID: 55756
Ian&Steve C.
Joined: 21 Feb 20
Posts: 1116
Credit: 40,839,470,595
RAC: 5,269
Message 55757 - Posted: 16 Nov 2020, 19:15:41 UTC - in response to Message 55756.  

I am only aware of the projects I run. That said, we are still waiting for Ampere compatibility here.

RTX 3000 series cards work at Einstein already. Haven't found any at Milkyway.
Think they are running at Folding@home and Collatz too.


I think the RTX 3000 cards "should" work at MW, since their apps are OpenCL like Einstein's. But they're probably not worth it vs. other cards, since Nvidia nerfed FP64 so badly on the Ampere GeForce line, even worse than Turing.

ID: 55757
rod4x4
Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Message 55758 - Posted: 16 Nov 2020, 23:33:06 UTC - in response to Message 55755.  

- 84% of a RTX 3050 Ti --> 119% vs. GTX 1660 Super
- 73% of a RTX 3060 --> 137% (vs. GTX 1660 Super)


RTX 3050 Ti
- on par with a RTX 2060
- 3584 vs. 1920 CUDA cores
- 12.36 vs. 12.90 TFLOPs (F16)
- 150W


RTX 3060
- on par with a RTX 2060 Super / 2070
- 3840 vs. 2176 vs. 2304 CUDA cores
- 180W


Based on the stats you quoted, I would definitely hold off on Ampere until Gpugrid releases Ampere-compatible apps.
GTX 1660 SUPER vs. Ampere:
RTX 3050 Ti - 119% of the performance for 125% of the power.
RTX 3060 - 137% of the performance for 150% of the power.
Not great ratios.

I am sure these figures are not accurate at all. Once an optimised CUDA app is released for Gpugrid, the performance should be better (though the size of the increase is yet unknown).

To add to your considerations:
Purchase a GTX 1650 SUPER now; they're selling really cheap at the moment and it's a definite step up from your current GPU. Wait until May, by which time there will be a clearer picture of how the market and the BOINC projects are interacting with Ampere, and then purchase an Ampere card. This also gives you time to save up for the Ampere GPU.
The GTX 1650 SUPER could then be relegated to your retired rig for projects, testing etc., or sold on eBay.
Pricing on Ampere may have dropped a bit by May (due to pressure from AMD, and demand for Nvidia waning after the initial launch frenzy), so the extra money invested in a GTX 1650S now may not be that much.
ID: 55758
bozz4science
Joined: 22 May 20
Posts: 110
Credit: 115,525,136
RAC: 0
Message 55759 - Posted: 17 Nov 2020, 0:48:46 UTC
Last modified: 17 Nov 2020, 0:56:04 UTC

we are still waiting for Ampere compatibility here
Thanks Keith! Well, that's one of the reasons I am still hesitant about purchasing an Ampere card. I'd much rather wait for support at the major projects and go from there. That is very much along the same lines as rod4x4 has fittingly put it.
Wait until May, by then a clearer picture of how the market and BOINC projects are interacting with Ampere, and then purchase an Ampere card.
Delaying an RTX 3000 series card purchase seems, from my perspective, like a promising strategy. The initial launch frenzy you mention is driving prices up incredibly... And I predict that the 3070/3060 Ti will attract the most demand, which will inevitably drive prices up further – at least in the short term.

Purchase a GTX 1650 SUPER now, selling real cheap at the moment and a definite step up on your current GPU.
Very interesting train of thought, rod4x4! I guess the 160-180$ would still be money well spent (for me at least). Any upgrade would deliver a considerable performance boost over a 750 Ti :) I'd give up 2GB of memory, a bit of bandwidth, and just ~20% performance, but at a 30-40% lower price... I'll think about that.
This gives you time to save up for the Ampere GPU as well.
Don't know if this is gonna happen this quickly for me, but that's definitely the upgrade path I am planning on. Would an 8-core ryzen (3700x) offer enough threads to run a future dual GPU setup with a 1650 Super + RTX 3060 Ti/3070 while still allowing free resources to be allocated to CPU projects?

Based on stats you quoted, I would definitely hold off on Ampere until Gpugrid releases Ampere compatible apps.
Well, there is definitely no shortage of rumours recently and numbers did change while I was looking up the stats from techpowerup. So, there is surely lots of uncertainty surrounding these preliminary stats, but they do offer a bit of guidance after all.

Pricing on Ampere may have dropped a bit by May, (due to pressure from AMD and demand for Nvidia waning after the initial launch frenzy) so the extra money to invest in a GTX 1650S now, may not be that much.
I fully agree on your assessment here! Thanks. Now it looks as I am going down the same route as Jim1348 after all! :)

since Nvidia nerfed FP64 so bad on the Ampere Geforce line, even worse than Turing.
I saw that too, Ian&Steve C. Especially compared to AMD cards, their F64 performance looks rather poor. Is this due to a lack of technical ability, or just a product design decision to differentiate them further from their workstation/professional cards?
ID: 55759
rod4x4
Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Message 55760 - Posted: 17 Nov 2020, 1:25:12 UTC - in response to Message 55759.  
Last modified: 17 Nov 2020, 1:30:33 UTC

Would an 8-core ryzen (3700x) offer enough threads to run a future dual GPU setup with a 1650 Super + RTX 3060 Ti/3070 while still allowing free resources to be allocated to CPU projects?

I am running a Ryzen 7 1700 with dual GPU and WCG. I vary the CPU threads from 8 to 13 threads depending on the WCG sub-project. (Mainly due to the limitations of the L3 cache and memory controller on the 1700)

The GPUs will suffer a small performance drop due to the CPU thread usage, but this is easily offset by the knowledge that other projects are benefiting from your contributions.

The 3700X CPU is far more capable than a 1700 CPU and does not have the same limitations, so the answer is YES, the 3700X will do the job well!
ID: 55760
Keith Myers
Joined: 13 Dec 17
Posts: 1419
Credit: 9,119,446,190
RAC: 731
Message 55761 - Posted: 17 Nov 2020, 2:21:11 UTC
Last modified: 17 Nov 2020, 2:36:55 UTC

I saw that too Ian&Steve C. Especially compared to AMD cards, their F64 performance actually looks rather poor. Is this due to a lack of technical ability or just a product design decisions to differentiate them further from their workstation/professional cards?

Yes, it is a conscious design decision, dating back to the Kepler family of cards; the change in design philosophy started with that generation. The GTX 780 Ti and the Titan Black were, for the most part, identical cards based on the GK110 silicon.

The base clock of the Titan Black was a tad lower, but it had the same number of CUDA cores and SMs. However, you could switch the Titan Black to 1:3 FP64 mode in the video driver (when the driver detected that card type), while the GTX 780 Ti had to run in 1:24 FP64 mode.

In later designs, Nvidia made changes in the silicon itself to de-emphasize FP64 performance, rather than relying on just a driver change.

So it is not out of incompetence, just a focus on gaming, because they assume that anyone who buys their consumer cards is solely focused on gaming.

But in reality, we crunchers use the cards outside their intended purpose because it is all we can afford; we don't have the industrial-strength pocketbooks of industry, HPC computing and higher-education entities to purchase the Quadros and the Teslas.

The generational design of the silicon is the same between the consumer and professional cards, but the floating-point pathways and registers are implemented very differently in the professional silicon.

So there are actual differences in the GPU dies between the two product stacks: the professional silicon gets the "full-fat" version, and the consumer dies are heavily cut down.
ID: 55761
ServicEnginIC
Joined: 24 Sep 10
Posts: 592
Credit: 11,972,186,510
RAC: 1,187
Message 55762 - Posted: 17 Nov 2020, 7:01:46 UTC

On November 16th rod4x4 wrote:
To add to your considerations.
Purchase a GTX 1650 SUPER now, selling real cheap at the moment and a definite step up on your current GPU

At this point, perhaps I can help a bit in the way I like best: with a real-life example.
I have had both GTX 1650 SUPER and GTX 750 Ti graphics cards running GPUGrid 24/7 in dedicated systems #186626 and #557889 for more than a month.
Current RAC for the GTX 1650 SUPER has settled at 350K, while the GTX 750 Ti is at 100K: about a 3.5 performance ratio.
Based on nvidia-smi:
GTX 1650 SUPER power consumption: 96W at 99% GPU utilization
GTX 750 Ti power consumption: 38W at 99% GPU utilization
The power consumption ratio is about 2.5. Bearing in mind that the performance ratio was 3.5, the GTX 1650 SUPER clearly beats the GTX 750 Ti on power efficiency.
However, the GTX 750 Ti wins on cooling performance: it maintains 60 ºC at full load, compared to 72 ºC for the GTX 1650 SUPER, at 25 ºC room temperature.
But this GTX 750 Ti is not exactly a regular graphics card; it's a repaired one...
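
(ServicEnginIC's ratios fall out directly from the RAC and power numbers; a minimal check of the arithmetic:)

```python
# Performance and power ratios from the real-life example above.
perf_ratio  = 350_000 / 100_000   # RAC: GTX 1650 SUPER vs GTX 750 Ti -> 3.5
power_ratio = 96 / 38             # watts at 99% GPU utilization -> ~2.5

# ~1.4x more work per watt in favour of the GTX 1650 SUPER
print(perf_ratio / power_ratio)
```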
ID: 55762
bozz4science
Joined: 22 May 20
Posts: 110
Credit: 115,525,136
RAC: 0
Message 55763 - Posted: 17 Nov 2020, 13:29:03 UTC
Last modified: 17 Nov 2020, 13:30:01 UTC

The 3700X CPU is far more capable than a 1700 CPU and does not have the same limitations, so the answer is YES, the 3700X will do the job well!
Well, that is again tremendously reassuring! Thanks
GPUs will suffer a small performance drop due to the CPU thread usage
Well, Zoltan elaborated on this topic very eloquently a couple of times. While I can definitely live with a small performance penalty, it was interesting to me that newer CPUs might actually see their memory bandwidth saturated faster than older, lower-thread-count CPUs, and that by comparison, the same high-end GPU running alongside a high-end desktop processor might actually perform worse than alongside an earlier CPU with more memory channels and bandwidth.

Yes, it is a conscious design decision since the Kepler family of cards.
Unfortunately, from a business perspective it makes total sense for them to introduce this price-discrimination barrier between their retail and professional product lines.
But in reality we crunchers use the cards not for their intended purposes because it is all we can afford ... industry, HPC computing and higher education entities to purchase the Quadros and the Teslas.
Still, it is a rather poor design decision if you compare it to AMD's design philosophy and the F64 performance they deliver with their retail products...

At this point, perhaps I can help a bit the way I like: with a real-life example.
You never disappoint with your answers; I love the way you go about giving feedback. Real-life use cases are always the best way to illustrate arguments.
Definitely interesting. I thought that, given most card manufacturers offer very similar cooling solutions for the 1650 Super and 1660 Super cards, a 1650 Super rated at only 100W TDP would run at similar temps to, if not lower than, the 1660 Super cards. Might I ask which model you are running? And at what % the GPU fans are running? Eyeing the ROG Strix 1650 Super, it seems to offer the same cooling solution as its 1660 Super counterpart, and the reviews I read suggested that card runs very cool (< 70 ºC), and that at 125W. Would be keen on your feedback.
ID: 55763
Jim1348
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Message 55764 - Posted: 17 Nov 2020, 16:28:16 UTC - in response to Message 55762.  

Current RAC for the GTX 1650 SUPER has settled at 350K, while the GTX 750 Ti is at 100K: about a 3.5 performance ratio.
Based on nvidia-smi:
GTX 1650 SUPER power consumption: 96W at 99% GPU utilization
GTX 750 Ti power consumption: 38W at 99% GPU utilization
The power consumption ratio is about 2.5. Bearing in mind that the performance ratio was 3.5, the GTX 1650 SUPER clearly beats the GTX 750 Ti on power efficiency.
However, the GTX 750 Ti wins on cooling performance: it maintains 60 ºC at full load, compared to 72 ºC for the GTX 1650 SUPER, at 25 ºC room temperature.
But this GTX 750 Ti is not exactly a regular graphics card; it's a repaired one...

I just received my GTX 1650 Super, and have run only three work units, but it seems to be about the same gain. It is very nice.

I needed one that was quiet, so I bought the "MSI GeForce GTX 1650 Super Ventus XS". It is running at 66 C in a typical mid-ATX case with just a rear fan, in a somewhat warm room (73 F). Now it is the CPU fan I need to work on.
ID: 55764
bozz4science
Joined: 22 May 20
Posts: 110
Credit: 115,525,136
RAC: 0
Message 55765 - Posted: 17 Nov 2020, 17:07:34 UTC - in response to Message 55764.  
Last modified: 17 Nov 2020, 17:52:55 UTC

That's great news Jim! Thanks for providing feedback on this. Definitely seems that the 1650 Super is a capable card and will accompany my 750Ti for the interim time :)

The variation seen in the operating temperature is probably mostly due to case airflow characteristics right? ... as I can't really see how such a big gap in temp could only result from the different cooling solutions on the cards. (That's ~9% difference!)
ID: 55765
Jim1348
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Message 55766 - Posted: 17 Nov 2020, 19:10:06 UTC - in response to Message 55765.  

The variation seen in the operating temperature is probably mostly due to case airflow characteristics right? ... as I can't really see how such a big gap in temp could only result from the different cooling solutions on the cards. (That's ~9% difference!)

Maybe so. I can't really tell. I have one other GTX 1650 Super on GPUGrid, but that is in a Linux machine. It is a similar size case, with a little stronger rear fan, since it does not need to be so quiet. It is an EVGA dual-fan card, and is running at 61 C. But it shows only 93 watts power, so a little less than on the Windows machine.

If I were choosing for quiet, though, I would go for the MSI; they are both nice cards.
ID: 55766

©2025 Universitat Pompeu Fabra