Message boards : Graphics cards (GPUs) : needed PCIe bandwidth? mining rig idea
Joined: 30 Apr 19 · Posts: 54 · Credit: 168,971,875 · RAC: 0
So, you recommend getting a motherboard with a PLX switch? Is that better than a B250 mining board with x1 lanes? Do I understand you correctly, that a motherboard with a PLX switch will crunch faster than x1 lanes?
Joined: 26 Feb 14 · Posts: 211 · Credit: 4,496,324,562 · RAC: 0
I'm saying that PCIe speed does make a difference: you are better off with more than PCIe x1. What level of PCIe you choose is up to you; x16 will process the data faster than x8, and so forth. PLX boards are extreme examples and not necessary for this project. You can get by with a lot of different boards as long as you take into account PCIe speeds and the total number of lanes the CPU provides. SETI has a good example of someone using a mining rig, but the applications there have been refined over the years and the data packets are small enough that PCIe bandwidth isn't a factor. This project, like Einstein, performs better with wider PCIe links.
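A quick way to verify what link each card has actually negotiated is to query the driver. A minimal sketch using nvidia-smi (these query fields are standard nvidia-smi properties; a card sitting in an x1 riser will report width 1 here even if the card itself is x16-capable):

```
# Report the negotiated PCIe generation and link width for every NVIDIA GPU.
nvidia-smi --query-gpu=index,name,pcie.link.gen.current,pcie.link.width.current --format=csv
```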
Joined: 13 Dec 17 · Posts: 1419 · Credit: 9,119,446,190 · RAC: 891
> my idea is to build a setup with mining rig frames, like this one

That frame should work. I believe the ASUS B250 Mining Expert motherboard is the one TBar is using with 12 GPUs and an i7-6700.
https://www.newegg.com/Product/Product.aspx?Item=9SIA96K7TC8911&Description=B250%20MINING%20EXPERT&cm_re=B250_MINING_EXPERT-_-13-119-028-_-Product
His host is here: https://setiathome.berkeley.edu/show_host_detail.php?hostid=6813106
One of his stderr.txt outputs, showing the 12 GPUs, is here: https://setiathome.berkeley.edu/result.php?resultid=7649459019
Retvari Zoltan · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0
> So, you recommend getting a motherboard with a PLX switch?

Yes.

> Is that better than a B250 mining board with x1 lanes?

Yes.

> A motherboard with a PLX switch will crunch faster than x1 lanes?

Yes.
Joined: 30 Apr 19 · Posts: 54 · Credit: 168,971,875 · RAC: 0
edit... deleted
Joined: 30 Apr 19 · Posts: 54 · Credit: 168,971,875 · RAC: 0
OK, people, today I have been testing the ASUS B250 mining board. Crap. Those x1-to-x16 risers are so sensitive and so flaky that I decided not to use any mining board at all. Instead I have decided to do my third Threadripper build, with a Threadripper 1920X CPU. But which motherboard on this site has the best PCIe slot layout for a 3-GPU setup? The site is in Dutch, but click the pictures to judge the PCIe slots. https://tweakers.net/categorie/47/moederborden/producten/#filter:TcvBCsIwEATQf5lzhCzFNuwHFDx46lE8hHSRlWBDUjxY8u9NEcTTMI-ZDUueJY8qcQYjZX0WmC9OS16b-RJ-kiRc2u5EBsk_ZNKPgMlaczyDXPUFbqW03ahxlVzAG7rBHvH2EXwDkXNn3KsB9d2_D65prTs Any advice?
Joined: 13 Dec 17 · Posts: 1419 · Credit: 9,119,446,190 · RAC: 891
I would go with either of the ASRock boards on that site. The Taichi has four x16 PCIe slots, so you could fit four GPU cards. The Pro Gaming 6 satisfies your minimum 3-GPU requirement. I have the ASRock Fatal1ty X399 Professional Gaming motherboard with a 2920X and I really like it. There are some sweet deals on the 1950X now as AMD tries to reduce inventory ahead of Zen 2. I'm surprised you can even find a 1920X anymore; I thought all its stock disappeared last year when the TR2 models came out and retailers were blowing the 1920X out the door for less than $250.
Joined: 30 Apr 19 · Posts: 54 · Credit: 168,971,875 · RAC: 0
OK, a little update on my build project: https://imgur.com/a/7dfC8ym

What I have done:
- ASRock X399 Taichi motherboard
- Threadripper 1950X CPU (with a Noctua 120mm TR4 cooler)
- PCIe riser cables, x16 to x16, 25 cm long: https://www.highflow.nl/hardware/videokaarten/li-heat-pci-e-gen-3.0-ribbon-flexible-riser-cable-v2-black.html
- 4x MSI GTX 1070 8GB ITX cards
- GPU load around 85%
- PCIe interface load around 20% on the x16 slot, around 30% on the x8 slot (app_config.xml with CPU usage at 0.975; a sketch of such a file follows this post)
- all GPU cards undervolted with MSI Afterburner to 50% of the power limit (with the disadvantage of a lower core clock, but I don't mind that)
- crunching time according to BOINC around 10 hours

My next step: building another system on a mining frame for 5 GPU cards (MSI GTX 1070 8GB ITX), this one: https://imgur.com/a/U5gP4tW
The purpose is to keep using x16-to-x16 PCIe riser cables to connect that many GPUs to the mainboard. I will not use a regular computer case; I like to have good airflow, so I chose a mining frame, and therefore x16-to-x16 riser cables.

So, I need your advice (or have questions):
- Which mainboard has more than 5 PCIe slots? (Intel or AMD, old or new platform, doesn't matter, but it must be ATX form factor.)
- Is it true that Windows 10 64-bit supports a maximum of 6 GPUs?

Thanks in advance,
erik
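For readers who have not used an app_config.xml before, here is a minimal sketch of the kind of file the post refers to. It goes in the GPUGRID project folder under the BOINC data directory; the `<name>` value below is an assumption, since the actual application name depends on the app version the project sends (check client_state.xml for the real one):

```
<!-- Hypothetical app_config.xml: one task per GPU, reserving most of a CPU core each. -->
<app_config>
  <app>
    <name>acemd3</name>              <!-- assumed app name; verify in client_state.xml -->
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>     <!-- run one task per GPU -->
      <cpu_usage>0.975</cpu_usage>   <!-- CPU fraction per task, as mentioned above -->
    </gpu_versions>
  </app>
</app_config>
```

After editing, 'Options > Read config files' in the BOINC Manager applies the change without restarting the client.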
Joined: 21 Mar 16 · Posts: 513 · Credit: 4,673,458,277 · RAC: 0
> ok, little update on my build project.

Hello erik,

Nice build! If you ever have any cooling problems, make sure to install 120mm fans blowing the heat over and away from the GPUs.

Is the 85% GPU load with Windows SWAN_SYNC? If so, I believe you will be in the 95%+ range with SWAN_SYNC on Linux once their application is fixed, which should be soon.

You mention building another system. As far as I am aware, the only motherboards with more than 4 full-size PCIe slots are workstation or server boards. They typically use PLX chips to act as a PCIe lane 'switch' to the CPU. The old-fashioned way to achieve high GPU counts is PCIe x1, but I don't think anyone has tested GPU utilization with the GPUGRID app under Linux with SWAN_SYNC.
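For anyone following along: SWAN_SYNC is an environment variable read by the GPUGRID application, so it has to be visible to the BOINC client process. A sketch of one way to enable it on Linux, assuming BOINC runs as the systemd service boinc-client (the service name varies by distribution); on Windows it is set as a system-wide environment variable instead:

```
# Add SWAN_SYNC=1 to the boinc-client service environment via a systemd
# drop-in override, then restart the client so the science apps inherit it.
sudo systemctl edit boinc-client
#   In the editor that opens, add:
#   [Service]
#   Environment="SWAN_SYNC=1"
sudo systemctl restart boinc-client
```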
Joined: 30 Apr 19 · Posts: 54 · Credit: 168,971,875 · RAC: 0
I forgot to say: under Windows 10, SWAN_SYNC is enabled (set to 1), but I'm still getting about 80% GPU load. I don't mind that. I think the reason for such a low GPU load is my CPU, because my CPU is partially broken (but that is another issue; I've requested an RMA from AMD).

Now for testing: I connected 1 GPU to a PCIe x1 v2.0 slot, and it runs at 68-70% GPU load with 65-67% bus interface load. Won't a PLX switch become a bottleneck for GPUGRID calculations when I build a 7-GPU setup? And any info about the maximum number of GPUs supported in Windows 10?
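The bus interface load figures above presumably come from GPU-Z on Windows. On a Linux host, a rough equivalent is nvidia-smi's device-monitoring mode, which can show per-GPU PCIe throughput alongside utilization (which counters are available varies by GPU and driver):

```
# Print per-GPU utilization (-s u) and PCIe Rx/Tx throughput in MB/s (-s t)
# once per second; stop with Ctrl-C.
nvidia-smi dmon -s ut
```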
Joined: 2 Jul 16 · Posts: 338 · Credit: 7,987,341,558 · RAC: 259
It's hard to see, but is there support under those GPUs, or are they just hanging by the PCI bracket?

I have this one, which 'supports' 6 GPUs, but you can easily drill more holes for the PCI brackets. Not even $30, and it comes with fan mounts. https://www.amazon.com/gp/product/B079MBYRK2/

There are server boards that have more than four PCIe x16/x8 slots. Some have 7, but they may not be ATX. I've heard of some people using Rosewill cases for higher-count GPU setups. No need to limit yourself to an ATX board for either type of case. Those open-air mining rigs are just made out of aluminum T-slot pieces: https://8020.net/shop
Joined: 30 Apr 19 · Posts: 54 · Credit: 168,971,875 · RAC: 0
> It's hard to see but is there support under those GPUs or are they just hanging by the PCI bracket?

See the first picture. Those two white parallel aluminium bars are the support for the riser slots. The cards are not hanging "in the air": each card sits in a riser slot, and the riser slot sits on those parallel bars. All the white bars are my own adjustments.

> I have this one that 'supports' 6x GPUs but you can easily drill more holes for the PCI brackets. Not even $30 and it comes with fan mounts.

My cards are ITX format, so putting a fan at the back side of the cards will not help that much to cool the GPUs. If I were to hang fans on the back side, there would be about 7-8 cm of space between the fan and the back of the GPU, so I don't expect much cooling effect. I could put fans on the front side of the GPUs, where the HDMI cables connect, but as of now I don't see any advantage in that. GPU temperature is now around 50 degrees Celsius.

> No need to limit yourself to an ATX board for either type of case. Those open air mining rigs are just made out of aluminum t-slot pieces. https://8020.net/shop

I am in doubt between these 2 boards: the ASUS X99-E WS (socket 2011-3) or the ASUS P9X79-E WS (socket 2011).
Joined: 2 Jul 16 · Posts: 338 · Credit: 7,987,341,558 · RAC: 259
Ah, I missed the first link; the second one has fewer images and they are darker. With a couple of 120mm fans along the length of the rack it would become like a wall of air, especially if there were a top and sides to force the air along the GPUs.

To get that many PCIe slots you'll need a single-socket workstation board/CPU, as you mentioned, that comes with more lanes, a dual-socket board, or an AMD Threadripper/Epyc setup. Even if you don't need all those lanes for GPUGRID, those are the types of systems where the slots will be available: the Z10PE-D8 WS, the EP2C621D12 WS, or even the X9DRX+-F with ten x8 PCIe 3.0 slots.
Joined: 30 Apr 19 · Posts: 54 · Credit: 168,971,875 · RAC: 0
https://edgeup.asus.com/2018/asus-h370-mining-master-20-gpus-one-motherboard-pcie-over-usb/

Could this board work with GPUGRID? Its data connection is USB 3.1 Gen 1 (5 Gbps). Is that enough data speed for GPUGRID? I want to connect 6-8 GPUs. If one PCIe 3.0 lane has around 1 Gbps of speed and an x16 slot around 16 Gbps, then one USB 3.1 Gen 1 link would be comparable to a PCIe 3.0 x4 slot. Is my calculation right, or am I missing something?
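For what it's worth, a quick back-of-the-envelope check of that comparison, using the published line rates (real-world throughput is lower still). Note the units: a PCIe 3.0 lane carries roughly 1 GB/s (gigabyte), not 1 Gbps, so a 5 Gbps USB 3.1 Gen 1 link works out to about half a PCIe 3.0 lane rather than an x4 slot:

```
# Rough bandwidth comparison: PCIe 3.0 vs USB 3.1 Gen 1 (Python).
pcie3_lane_gbps = 8.0 * (128 / 130)  # 8 GT/s per lane, 128b/130b encoding -> ~7.88 Gbps
usb31_gen1_gbps = 5.0 * (8 / 10)     # 5 Gbps line rate, 8b/10b encoding   -> 4.0 Gbps

print(f"PCIe 3.0 x1 : {pcie3_lane_gbps / 8:.2f} GB/s")       # ~0.98 GB/s
print(f"PCIe 3.0 x16: {pcie3_lane_gbps * 16 / 8:.2f} GB/s")  # ~15.75 GB/s
print(f"USB 3.1 Gen1: {usb31_gen1_gbps / 8:.2f} GB/s")       # ~0.50 GB/s
```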
Joined: 21 Mar 16 · Posts: 513 · Credit: 4,673,458,277 · RAC: 0
> now for testing....connected 1 gpu to pcie-x1 v2 slot, gpu load at 68-70%, bus interface load 65-67%.

That is not as bad as I thought, though you did mention you lowered the power limit of the cards a lot. What GPU clock speed are they running at? I would imagine a faster card would show less GPU utilization. Also keep in mind this is Windows SWAN_SYNC and not Linux SWAN_SYNC, so I think there is still much performance to be gained even with PCIe x1.

You might also be able to maximize what you get out of the limited PCIe x1 bandwidth. If you lower the power limit enough, which in turn lowers the clock speed, you could potentially maximize GPU utilization, making the card more efficient not only from the power limit but also from the higher utilization.
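Power limiting can also be done from the command line instead of MSI Afterburner. A sketch with nvidia-smi (the 90 W value is purely illustrative; the valid range depends on the card's VBIOS, and setting the limit needs root/administrator rights):

```
# Show the card's default and allowed power-limit range, then cap GPU 0 at 90 W.
nvidia-smi -q -d POWER
sudo nvidia-smi -i 0 -pl 90
```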
Joined: 30 Apr 19 · Posts: 54 · Credit: 168,971,875 · RAC: 0
> now for testing....connected 1 gpu to pcie-x1 v2 slot, gpu load at 68-70%, bus interface load 65-67%.

All my cards are the same, MSI GTX 1070 8GB ITX, and all of them are power limited to 50%. One of them is connected to a PCIe x1 v2.0 lane. That card has a GPU load around 70%, a bus interface load of 67%, and a clock speed around 1650-1680 MHz at the 50% power limit. All the other cards with the same 50% power limit are running around 1530-1570 MHz with a GPU load around 85%. All have the same temperature, around 50-52 degrees Celsius.
Joined: 21 Mar 16 · Posts: 513 · Credit: 4,673,458,277 · RAC: 0
I too have a mining-esque system, with hopefully 5-6 GPUs; the case is already designed for 6 GPUs in open air. My current, and hopefully only, problem is that the cards I have require two 12 V plugs each (a 6-pin and an 8-pin), and I've run out of cables from my power supply. I should have done a bit more research before buying! At first I had severe issues getting the GPUs to be recognized, so if you ever have this problem, try updating the BIOS; that is what fixed it for me.
Joined: 30 Apr 19 · Posts: 54 · Credit: 168,971,875 · RAC: 0
> I too have a mining-esque system with hopefully 5-6 gpus. My current and hopefully only problem is the cards I currently have require two 6 and 8 pin 12v plugs and I've run out of cables from my power supply. I should have done a bit more research before buying!

For the PSU you can use an ATX PSU with enough power, starting at 1200 watts, preferably fully modular so you don't have unwanted Molex or SATA power connectors attached. Or use an HP server PSU with a special breakout module providing 12x 6-pin PCIe power connectors: https://tweakers.net/aanbod/1983990/mining-starters-kit-benodigdheden.html (this site is in Dutch, but you can check the pictures of the HP server PSU).

For your build, what kind of motherboard are you using? And which CPU?
Joined: 21 Mar 16 · Posts: 513 · Credit: 4,673,458,277 · RAC: 0
I have a 1200-watt, fully modular Corsair AX1200 paired with an ASRock AB350 Pro4 and a Ryzen 7 1700. I would not recommend this motherboard, as it has caused me great agony with GPU detection: for at least a year after its release it did not have a BIOS that allowed what I was trying to do. It has 6 PCIe x1 slots, and I don't need more than 6 GPUs, but I personally would recommend literally any other board. I think a mining-specific board would work best, since that is what you will be doing with it, and it probably has other mining-specific features built in.

I have the R7 1700 at full load with World Community Grid and Rosetta@home while also having multiple GPUs at high load. As long as you don't overwhelm the CPU with too much CPU work, everything should run at peak efficiency and speed.
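On the scheduling point, two standard BOINC client options are worth knowing for rigs like these. A hedged sketch of a cc_config.xml (both options are documented BOINC client options; the thread counts are only illustrative, chosen for a 16-thread R7 1700):

```
<!-- Hypothetical cc_config.xml for a multi-GPU cruncher. -->
<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>  <!-- schedule work on every GPU, not only the most capable one -->
    <ncpus>14</ncpus>               <!-- advertise 14 of 16 threads, leaving 2 free to feed the GPUs -->
  </options>
</cc_config>
```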
Joined: 25 Sep 13 · Posts: 293 · Credit: 1,897,601,978 · RAC: 0
> does gpugrid really need the full 16x or 8x bandwidth of pci-e 3

No, though more lanes will perform better on GPUGRID WUs that are bandwidth-heavy. In my GPUGRID experience, the performance loss varies between 33% and 50% when running PCIe 2.0 x1, whether with a GTX 750, GTX 970, GTX 1060, GTX 1070, or any other card.

On my Z87 motherboard with 5 GPUs:
- PCIe 2.0 x1: 82% bus interface load on any card
- PCIe 3.0 x4: 70-75% bus usage
- PCIe 3.0 x8: 50-60% bus usage

A GTX 970 on PCIe 2.0 x1 has 55% of the performance of a GTX 970 on PCIe 3.0 x4. A GTX 1060 or 1070 on PCIe 2.0 x1 has 66% of the performance of the same card on PCIe 3.0 x4. I suspect a Turing GPU on PCIe 2.0 x1 will run ACEMD at 70-75% of its PCIe 3.0 x4 performance.

On a Z87 MSI XPOWER motherboard I have an RTX 2070 (PCIe 3.0 x8), an RTX 2060 (PCIe 3.0 x4), and an RTX 2080 (PCIe 3.0 x4), along with a GTX 1080 and a GTX 1070 (PCIe 2.0 x1), running the (integer) Genefer n=20 PrimeGrid app. There, the x1 PCIe bus shows only a 7-10% performance loss compared to PCIe 3.0 x4, or an 8-13% loss vs. PCIe 3.0 x8.