needed pci-e bandwidth? mining rig idea

Message boards : Graphics cards (GPUs) : needed pci-e bandwidth? mining rig idea
erik

Message 51740 - Posted: 2 May 2019, 8:16:37 UTC

Does GPUGrid really need the full x16 or x8 bandwidth of PCIe 3.0?

Or could I build a system like a mining rig: a special mining motherboard with lots of PCIe 3.0 x1 slots, an x16 riser connected to the motherboard over a USB cable for data transfer, and the GPU card in the x16 riser.
Should this work?
The limiting factor here would be the x1 link on the motherboard.
PappaLitto

Message 51741 - Posted: 2 May 2019, 12:21:06 UTC

You can certainly try it and see what GPU usage you get. Without SWAN_SYNC I see about 30% PCIe usage on an x8 link, but with SWAN_SYNC on Linux only about 2%.

It might be possible, but only with SWAN_SYNC on Linux; I have never tried it myself.

I wouldn't go out and buy mining-specific hardware until you have tested it. Keep in mind you'll want at least one free CPU thread per GPU for these science workloads. Let us know what you find!
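For anyone wanting to try this: SWAN_SYNC is just an environment variable the science app reads, and actual PCIe traffic can be watched with nvidia-smi. The paths below are assumptions for a Debian/Ubuntu-style BOINC install; adjust for your setup.

```shell
# Assumption: Linux with the distro's boinc-client service. SWAN_SYNC must be
# visible in the environment the science app inherits from the BOINC client.
echo 'SWAN_SYNC=1' | sudo tee -a /etc/environment
sudo systemctl restart boinc-client

# Watch actual PCIe traffic per GPU while a task runs (rxpci/txpci in MB/s):
nvidia-smi dmon -s t
```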
erik

Message 51742 - Posted: 2 May 2019, 13:22:38 UTC - in response to Message 51741.  
Last modified: 2 May 2019, 13:24:08 UTC

> You can certainly try it and see what GPU usage you get. Without SWAN_SYNC I see about 30% PCIe usage on an x8 link, but with SWAN_SYNC on Linux only about 2%.
>
> It might be possible, but only with SWAN_SYNC on Linux; I have never tried it myself.
>
> I wouldn't go out and buy mining-specific hardware until you have tested it. Keep in mind you'll want at least one free CPU thread per GPU for these science workloads. Let us know what you find!

It is my purpose to "mine" for GPUGrid :-)
No, just kidding... I do not want to mine any crypto at all.

My question was: if I set up a GPU machine for one purpose only (GPUGrid), the way miners do, with a lot of GPU cards in one case, each connected to a PCIe 3.0 x1 lane, is one lane of PCIe 3.0 enough for GPUGrid? Or do I need at least x8 or x16?

If my question or idea is not clear enough, please ask.
mmonnin

Message 51743 - Posted: 2 May 2019, 14:33:31 UTC

He did kind of answer it: ~30% usage at x8. Theoretically one could run three GPUs across an x8 link, but I'd wager there would be some performance loss at 90% utilization; x1 would not be enough in that case.

With SWAN_SYNC the tasks use a full CPU core, so on a mining board you'd run out of CPU threads before you ran out of PCIe x1 lanes.

Mining programs are small and fit in GDDR memory, so there isn't a lot of traffic across the PCIe links. GPUGrid computing requires more GDDR and more CPU computation.
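For a rough sense of scale (my own back-of-the-envelope numbers, not measurements from this thread): PCIe 3.0 carries roughly 0.985 GB/s per lane, so 30% utilization of an x8 link is more traffic than a single lane can move, while 2% of x8 fits comfortably:

```python
# Back-of-the-envelope check: does traffic observed on an x8 link fit on x1?
# Assumption: ~0.985 GB/s usable per PCIe 3.0 lane (8 GT/s, 128b/130b encoding).
GBPS_PER_LANE = 0.985

def traffic_gbps(lanes: int, utilization: float) -> float:
    """Absolute traffic implied by a utilization figure on an N-lane link."""
    return lanes * GBPS_PER_LANE * utilization

def fits_on_x1(lanes: int, utilization: float) -> bool:
    """True if the same traffic would fit within a single lane."""
    return traffic_gbps(lanes, utilization) <= GBPS_PER_LANE

# Without SWAN_SYNC: 30% of x8 -> ~2.36 GB/s, well beyond what x1 offers.
print(traffic_gbps(8, 0.30), fits_on_x1(8, 0.30))
# With SWAN_SYNC: 2% of x8 -> ~0.16 GB/s, comfortably inside x1.
print(traffic_gbps(8, 0.02), fits_on_x1(8, 0.02))
```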
PappaLitto

Message 51744 - Posted: 2 May 2019, 18:32:10 UTC - in response to Message 51742.  

> It is my purpose to "mine" for GPUGrid :-)
> No, just kidding... I do not want to mine any crypto at all.
>
> My question was: if I set up a GPU machine for one purpose only (GPUGrid), the way miners do, with a lot of GPU cards in one case, each connected to a PCIe 3.0 x1 lane, is one lane of PCIe 3.0 enough for GPUGrid? Or do I need at least x8 or x16?
>
> If my question or idea is not clear enough, please ask.

I apologize if my answer was not clear enough. Basically, at x8 I use about 30% of the link's bandwidth, but with SWAN_SYNC on Linux only about 2% of x8. So I would imagine you could see less than 80% usage on PCIe x1. I have not tested this myself, but I would imagine it would still work just fine with minimal loss. Keep in mind you need one CPU thread per GPU, unlike mining, which relies on the CPU almost not at all.
erik

Message 51745 - Posted: 2 May 2019, 18:33:36 UTC - in response to Message 51743.  

> He did kind of answer it: ~30% usage at x8. Theoretically one could run three GPUs across an x8 link, but I'd wager there would be some performance loss at 90% utilization; x1 would not be enough in that case.
>
> With SWAN_SYNC the tasks use a full CPU core, so on a mining board you'd run out of CPU threads before you ran out of PCIe x1 lanes.
>
> Mining programs are small and fit in GDDR memory, so there isn't a lot of traffic across the PCIe links. GPUGrid computing requires more GDDR and more CPU computation.

This whole mining sh*t is making me crazy... I am not talking about mining. I am talking about building a system the way miners do:

putting a lot of Nvidia GTX cards in one case, connecting them via PCIe 3.0 x1 lanes, and then running only BOINC for GPUGrid, on Windows 10 with SWAN_SYNC enabled.

No mining program at all.

Should GPUGrid run OK on a PCIe 3.0 x1 lane?
erik

Message 51746 - Posted: 2 May 2019, 19:47:23 UTC - in response to Message 51744.  

> I apologize if my answer was not clear enough. Basically, at x8 I use about 30% of the link's bandwidth, but with SWAN_SYNC on Linux only about 2% of x8. So I would imagine you could see less than 80% usage on PCIe x1. I have not tested this myself, but I would imagine it would still work just fine with minimal loss. Keep in mind you need one CPU thread per GPU, unlike mining, which relies on the CPU almost not at all.

I am sorry, I didn't see your reply on my small smartphone screen.

Now I see it. One CPU thread per GPU: my CPU is a Threadripper 1950X, 16 cores with SMT for 32 visible threads. Is that enough for 4 GPUs?
And... if I understand your answer right, I can't put 4 GPUs on PCIe 3.0 x1 lanes because of that 30% of x8 lanes. Am I right?

Once again, I am sorry for the confusion.
PappaLitto

Message 51747 - Posted: 2 May 2019, 20:04:18 UTC

The 30% on PCIe x8 is without SWAN_SYNC. With SWAN_SYNC on Linux I get 2% on x8. As long as you use SWAN_SYNC it might be theoretically possible.
mmonnin

Message 51748 - Posted: 2 May 2019, 21:16:38 UTC - in response to Message 51745.  

> This whole mining sh*t is making me crazy... I am not talking about mining. I am talking about building a system the way miners do:
>
> putting a lot of Nvidia GTX cards in one case, connecting them via PCIe 3.0 x1 lanes, and then running only BOINC for GPUGrid, on Windows 10 with SWAN_SYNC enabled.
>
> No mining program at all.
>
> Should GPUGrid run OK on a PCIe 3.0 x1 lane?

I was comparing how mining is different from BOINC crunching, and GPUGrid in particular: how mining can get away with using just an x1 slot and how GPUGrid cannot. There you have it. If you want a black-and-white answer without any of the understanding, here it is: no, it won't work.

Just test it. It's not that hard.
erik

Message 51749 - Posted: 2 May 2019, 21:33:06 UTC - in response to Message 51748.  

> I was comparing how mining is different from BOINC crunching, and GPUGrid in particular: how mining can get away with using just an x1 slot and how GPUGrid cannot. There you have it. If you want a black-and-white answer without any of the understanding, here it is: no, it won't work.
>
> Just test it. It's not that hard.

Thanks.
rod4x4

Message 51750 - Posted: 3 May 2019, 0:39:59 UTC

In the past I used a USB-cable riser in a PCIe x1 slot with a GTX 750 Ti for a few months on GPUGrid (I no longer use it).
It was on a Linux host set to BLOCK mode.
GPUGrid output was lower by roughly 15%; I'm guessing a faster card could suffer a larger speed reduction.
I found the GPUGrid task would randomly pause for no reason, but I could manually start it again (I had a script that checked the task status and restarted it when necessary).
Not sure why tasks would pause; since multiple GPU cards work fine on rigs with multiple x8/x16 slots, I assumed it was just a poorly designed or implemented riser.
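rod4x4's actual script isn't shown, but a minimal sketch of the idea using `boinccmd` might look like the following. The output-field names (`name:`, `project URL:`, `active_task_state:`) are assumptions about the BOINC client's `--get_tasks` format; verify them against your own client before relying on this.

```python
#!/usr/bin/env python3
"""Watchdog sketch: find tasks that are no longer executing and resume them
via boinccmd. Field names in the parser are assumptions about
'boinccmd --get_tasks' output; check them against your client version."""
import subprocess

def find_stalled(get_tasks_output: str) -> list[tuple[str, str]]:
    """Return (project_url, task_name) pairs whose active_task_state
    is anything other than EXECUTING."""
    stalled, name, url = [], None, None
    for raw in get_tasks_output.splitlines():
        line = raw.strip()
        if line.startswith("name:"):
            name = line.split(":", 1)[1].strip()
        elif line.startswith("project URL:"):
            url = line.split(":", 1)[1].strip()
        elif line.startswith("active_task_state:"):
            state = line.split(":", 1)[1].strip()
            if state != "EXECUTING" and name and url:
                stalled.append((url, name))
    return stalled

def resume_stalled() -> None:
    out = subprocess.run(["boinccmd", "--get_tasks"],
                         capture_output=True, text=True, check=True).stdout
    for url, task in find_stalled(out):
        subprocess.run(["boinccmd", "--task", url, task, "resume"], check=True)

# Run from cron every few minutes when deploying, e.g.:
# */5 * * * * /usr/bin/python3 /path/to/watchdog.py
# (call resume_stalled() in that script)
```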
Keith Myers
Message 51751 - Posted: 3 May 2019, 7:59:35 UTC

The people on my team at Seti who use mining hardware to support hosts with high GPU counts (7-12 cards) find that the risers, and particularly the USB cables, have to be high quality and shielded to crunch without constant issues of cards dropping offline.

Many of those hosts use PCIe x1 slots. But Seti's task requirements are a lot lower than what either Einstein or GPUGrid needs in speed and bandwidth.

The answer is: try it and see if it works.
erik

Message 51752 - Posted: 3 May 2019, 11:11:24 UTC - in response to Message 51751.  

> The people on my team at Seti who use mining hardware to support hosts with high GPU counts (7-12 cards) find that the risers, and particularly the USB cables, have to be high quality and shielded to crunch without constant issues of cards dropping offline.
>
> Many of those hosts use PCIe x1 slots. But Seti's task requirements are a lot lower than what either Einstein or GPUGrid needs in speed and bandwidth.
>
> The answer is: try it and see if it works.

Thanks for your reply.

To try it I would first need to buy used GTX 1070 or GTX 1080 cards; the price here for used cards is around 250-300 euros.
Spending 1000 euros on 4 cards just to try something is odd to me; that is why I asked here before spending it.
And I would buy the 4 GPUs only for GPUGrid, with no other intended use.

So now I know it will not work, because of the speed limitation of an x1 lane.

Thanks to all who replied.
Retvari Zoltan
Message 51753 - Posted: 3 May 2019, 22:08:36 UTC - in response to Message 51752.  
Last modified: 3 May 2019, 22:16:08 UTC

> Thanks for your reply.
>
> To try it I would first need to buy used GTX 1070 or GTX 1080 cards; the price here for used cards is around 250-300 euros.
> Spending 1000 euros on 4 cards just to try something is odd to me; that is why I asked here before spending it.
> And I would buy the 4 GPUs only for GPUGrid, with no other intended use.
>
> So now I know it will not work, because of the speed limitation of an x1 lane.
>
> Thanks to all who replied.

I think your motherboard already has four PCIe 3.0 x16 slots (they will probably run at x8 if all of them are occupied). You should look for x16 PCIe 3.0 risers and use them; they are also recommended for resolving cooling issues. Or you could build a water-cooled rig with 4 cards. The heat output will be around 1.2 kW, so it's not recommended to put 4 air-cooled cards close to each other.
As for the original question: in my experience the performance loss caused by a lack of PCIe bandwidth depends on the workunit (some will suffer more, some less or not at all). To achieve optimal performance, high-end cards need at least PCIe 3.0 x8. If the performance loss doesn't bother you, perhaps they will work even at PCIe 3.0 x1; I'm not sure, because I've never put more than 4 GPUs in a single host. Lately I build single-GPU hosts, because I can spread them across our flat (I use them as space heaters).
Keith Myers
Message 51754 - Posted: 4 May 2019, 0:18:40 UTC

Yes, any HEDT motherboard should be able to support 4 GPUs natively. I have an Intel X99 board that supports 4 GPUs at PCIe 3.0 x16 and an X399 board that supports 2 GPUs at PCIe 3.0 x8 alongside 2 GPUs at PCIe 3.0 x16. As long as the GPUs are no wider than two slots, even air-cooled cards fit. Water-cooled or hybrid-cooled cards stay cool, so they clock well for the best performance.
Zalster
Message 51756 - Posted: 4 May 2019, 1:00:52 UTC - in response to Message 51754.  

Hybrid or custom cooling loops are the best options for multi-card systems; they prevent throttling of the cards if they are all in one box. However, if you go the route of hanging them from a support beam above the mobo, you could probably get away with air cooling as long as you have proper ventilation.

Keith is correct that higher PCIe bandwidth is preferable if you are looking to crunch the fastest. If not, then yes, using risers is a viable option.

As you noted in the other thread, you have to keep in mind how many threads are on the CPU, how many are available to the GPUs, and whether those lanes are shared with the SATA ports or M.2 slots. The more lanes you can devote to the cards, the faster they will finish the work.

Good luck

Z
erik

Message 51757 - Posted: 4 May 2019, 9:08:36 UTC - in response to Message 51756.  

My idea is to build a setup with mining-rig frames, like this one:

https://images.app.goo.gl/F8vavF34ggADQoiC9

with an ASUS B250 Mining motherboard, an Intel i7-7700 or i7-7700T (4 cores, 8 threads with Hyper-Threading), 4-6 GPUs (GTX 1070), some 120x120x38 mm fans, and of course PCIe x16 risers connected to the motherboard's x1 slots.
PappaLitto

Message 51759 - Posted: 4 May 2019, 12:50:36 UTC
Last modified: 4 May 2019, 12:59:09 UTC

This software is fundamentally different from mining software and requires more resources. You will need at least one CPU core per GPU, and I highly doubt PCIe x1 is enough bandwidth to feed a fast GPU like a 1070 (a PCIe x16 riser in a PCIe x1 slot is the same thing as an x1 riser in an x1 slot). You will need a minimum of x4, and ideally x8, PCIe lanes per GPU.

I have used PCIe x1 risers in the past and I can tell you they are an absolute nightmare. Most notably, if the USB connection between the PCIe connectors isn't perfect, you have enormous difficulty getting the operating system to recognize the GPU.

Zoltan and I have found that CPU frequency also plays a large role in GPU usage, but you should be fine with an i7-7700. You might be better off with 3-4 GPUs per system on x16 risers, split across two cheap (but high-frequency) systems.
erik

Message 51760 - Posted: 4 May 2019, 13:34:18 UTC - in response to Message 51754.  

> Yes, any HEDT motherboard should be able to support 4 GPUs natively. I have an Intel X99 board that supports 4 GPUs at PCIe 3.0 x16 and an X399 board that supports 2 GPUs at PCIe 3.0 x8 alongside 2 GPUs at PCIe 3.0 x16. As long as the GPUs are no wider than two slots, even air-cooled cards fit. Water-cooled or hybrid-cooled cards stay cool, so they clock well for the best performance.

Intel X99 boards mostly run in x16/x8/x4 modes.
You are probably thinking of the ASUS WS-series mainboards when you say 4 GPUs natively at x16; but are you aware of the PLX PCIe switches on those WS boards? The net result is much less bandwidth.

Take a look at the block diagram:

https://www.overclock.net/content/type/61/id/2674289/width/350/height/700/flags/LL

My idea was not to build an HEDT system on the Threadripper X399 chipset; way too expensive.

But... thanks for your reply.

I found someone near me where I can test my idea with an ASUS B250 Mining motherboard, an Intel G4400 CPU, 8 GB RAM and 2x GTX 1070.
I will post an update on my progress.
Zalster
Message 51761 - Posted: 4 May 2019, 21:29:54 UTC - in response to Message 51760.  


> Intel X99 boards mostly run in x16/x8/x4 modes.
> You are probably thinking of the ASUS WS-series mainboards when you say 4 GPUs natively at x16; but are you aware of the PLX PCIe switches on those WS boards? The net result is much less bandwidth.

It's not less; it's better utilization of the available lanes. For this example I am only talking about Intel chips. There are only as many lanes as the CPU has: if you get a low-end CPU with 24 lanes, that is all you get; a high-end CPU might have 40 or 44.

On the ASUS X99-E WS ($$$$) the PCIe slots are x16/x16/x16/x16 because of the PLX chip. As long as you don't add other things (M.2, etc.) that take up lanes the GPUs are using, you can get close to x16 for the GPUs. The PLX chips have their own lanes as well, which attach other parts of the computer (LAN, USB, etc.).

Here's a link to a post where someone attempts to describe what is occurring. He's quoting an article we both read about this a long time ago that I can't find right now.

https://www.overclock.net/forum/6-intel-motherboards/1618042-what-multiplexing-how-does-plx-chips-work.html
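As a rough illustration of what the PLX switch does (the numbers here are assumed: a 40-lane CPU behind four x16 slots, ~0.985 GB/s per PCIe 3.0 lane), each slot can negotiate x16, but the sustained aggregate is capped by the upstream lanes:

```python
# PLX oversubscription sketch: four x16 slots share the CPU's upstream lanes.
# Assumptions: 40 usable CPU lanes, ~0.985 GB/s per PCIe 3.0 lane.
CPU_LANES, SLOTS, SLOT_WIDTH, GBPS_PER_LANE = 40, 4, 16, 0.985

electrical_lanes = SLOTS * SLOT_WIDTH         # 64 lanes wired to the slots
aggregate_cap = CPU_LANES * GBPS_PER_LANE     # ~39.4 GB/s through the CPU
per_gpu_sustained = aggregate_cap / SLOTS     # ~9.85 GB/s if all 4 are busy

# ~9.85 GB/s per GPU is still more than a plain x8 link (7.88 GB/s) even with
# all four GPUs active, which is why "better utilization" beats "less".
print(electrical_lanes, aggregate_cap, per_gpu_sustained)
```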

©2025 Universitat Pompeu Fabra