Message boards : Graphics cards (GPUs) : needed pci-e bandwidth? mining rig idea
Joined: 30 Apr 19 · Posts: 54 · Credit: 168,971,875 · RAC: 0
I am considering this setup: https://imgur.com/a/e03wZMM

Mainboard: Onda B250 D8P (not the D3 version)
- supports socket 1151 Intel CPUs (6th gen, maybe even 8th gen)
- SODIMM up to 16 GB DDR4 (laptop RAM modules)
Joined: 21 Mar 16 · Posts: 513 · Credit: 4,673,458,277 · RAC: 0
That CPU only has 16 PCIe lanes, so it would be pretty pointless to have a full x16 connector for each GPU. It would only make sense if there were enough PCIe lanes to go around.
Joined: 30 Apr 19 · Posts: 54 · Credit: 168,971,875 · RAC: 0
So the better choice is a mainboard with PLX switches? Or this one: https://www.asrockrack.com/general/productdetail.asp?Model=EPYCD8-2T#Specifications

That board costs around 480 euro and has open-ended x8 slots, so I could use my x16-to-x16 PCIe risers to build a system with up to 7 GPUs on an EPYC 7251 (535 euro, 8 cores) or an EPYC 7281 (720 euro, 16 cores).
Retvari Zoltan · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0
> That CPU only has 16 PCIe lanes, so it would be pretty pointless to have a full x16 connector for each GPU. It would only make sense if there were enough PCIe lanes to go around.

The purpose of the full x16 connector is to give the highest possible mechanical stability to all the cards, as these x16 connectors have latches on the "inner" end. This latch is available only on the x16 slot, and the full-length slot remains compatible with shorter cards. Shorter open-ended PCIe slots can accommodate longer cards too, but they provide fewer lanes and no latch on the end.

Take a look at the last picture here: https://www.fasttech.com/product/9661024-authentic-onda-b250-d8p-d4-btc-mining-motherboard

Only the first PCIe slot (the one closest to the CPU) has 16 lanes; the others have only 1 lane each. According to the Intel specification, the lanes of the PCIe controller integrated into the CPU can't be used as 16 separate x1 links. See the expansion options here:
https://ark.intel.com/content/www/us/en/ark/products/191047/intel-core-i7-9850h-processor-12m-cache-up-to-4-60-ghz.html
https://ark.intel.com/content/www/us/en/ark/products/135457/intel-pentium-gold-g5620-processor-4m-cache-4-00-ghz.html

It goes like this for every socket 115x CPU (or less for low-end Celerons and Pentiums):

PCI Express Configurations: Up to 1x16, 2x8, 1x8+2x4

That is, at most 3 GPUs could be connected to the CPU: one on 8 lanes and the other two on 4 lanes each. However, the "south bridge" chip provides further PCIe lanes (these lanes have higher latency than the lanes built into the CPU): https://ark.intel.com/content/www/us/en/ark/products/98086/intel-b250-chipset.html

PCI Express Revision: 3.0
PCI Express Configurations: x1, x2, x4
Max # of PCI Express Lanes: 12

Perhaps that's the trick: the 1st PCIe slot is connected with all 16 lanes to the CPU, while the other 11 slots are connected to the south bridge (1 lane each).
There are no unnecessary peripherals (PS/2 keyboard, serial and parallel ports, additional USB ports, sound controller), only one PCIe Gigabit network interface controller (occupying the 12th lane).
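The lane breakdown above can be turned into bandwidth numbers. A quick sketch: the transfer rates and encodings below are the published PCIe spec values, and `pcie_gbps` is just an illustrative helper, not anything from the boards discussed here.

```python
# Theoretical per-direction PCIe bandwidth from generation and lane count.
GENS = {
    1: (2.5, 8 / 10),     # PCIe 1.x: 2.5 GT/s, 8b/10b encoding
    2: (5.0, 8 / 10),     # PCIe 2.x: 5 GT/s, 8b/10b encoding
    3: (8.0, 128 / 130),  # PCIe 3.x: 8 GT/s, 128b/130b encoding
}

def pcie_gbps(gen: int, lanes: int) -> float:
    """Usable GB/s in one direction for a gen-N xM link."""
    gt_per_s, encoding = GENS[gen]
    return gt_per_s * encoding / 8 * lanes  # GT/s -> GB/s per lane, times lanes

for gen, lanes in [(3, 16), (3, 4), (3, 1), (2, 1)]:
    print(f"PCIe {gen}.0 x{lanes}: {pcie_gbps(gen, lanes):.2f} GB/s")
```

So a chipset x1 link carries roughly 1/16 of what the full x16 CPU slot does, which is why slot choice matters so much for bandwidth-hungry workloads.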
Joined: 30 Apr 19 · Posts: 54 · Credit: 168,971,875 · RAC: 0
OK, yet another update, this time using an x1 slot (on an ASRock X399 Taichi mainboard with a Threadripper 1950X): https://imgur.com/a/VV6CdUB

A little explanation of the image: values from 2 to 20 are under a 50% power limit; values from 21 to 38 are with no power limit (100% power). Changing the power limit only affects the core clock speed, meaning 100% power finishes a WU faster while GPU temperature rises by 4-5 °C.

Another note: with or without the power limit, nothing changed regarding GPU load and PCIe bus load; both values stay around 74-76%, power-limited (or undervolted) or not. My other card in a real x16 slot (electrically x16) has a GPU load around 88-90%. So my performance loss using the x1 slot vs. the x16 slot shows up as a drop from 88-90% to 70-72% GPU load.

My conclusion: there is a 95% chance I will choose a mining board for my multi-GPU setup with 6 GPUs; it is much cheaper than buying a mainboard with seven x16 slots (Intel platform with PLX switches). Just sharing info with you. Have fun!
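The GPU-load figures above can be read as a rough throughput estimate. A small sketch, assuming work done scales linearly with GPU load and taking the midpoints of the 74-76% (x1) and 88-90% (x16) ranges quoted above:

```python
# Midpoints of the reported GPU-load ranges for each slot type.
load_x16 = 0.89  # ~88-90% load in the electrical x16 slot
load_x1 = 0.75   # ~74-76% load in the x1 slot

# If throughput tracks GPU load, the x1 slot's relative performance is:
relative_perf = load_x1 / load_x16
print(f"x1 slot delivers ~{relative_perf:.0%} of x16 throughput "
      f"(~{1 - relative_perf:.0%} loss)")
```

About a 16% throughput loss per card, which is the trade-off being weighed against the cost of a PLX board.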
Joined: 25 Sep 13 · Posts: 293 · Credit: 1,897,601,978 · RAC: 0
A Turing RTX 2080 on PCIe 2.0 x1 delivers about 50% of its PCIe 3.0 x4 performance on PrimeGrid n=20 Genefer. GPU power went from 205 W down to 133 W, the same as a GTX 1070 on PCIe 2.0 x1. Pascal's x1 performance loss is around 8-17%. Genefer is heavy on PCIe bandwidth, and GPUGRID can also have high PCIe bandwidth usage on multi-GPU motherboards. Turing's PCIe x1 performance is worse than Pascal / Maxwell / Kepler at x1.

My Zotac mini GTX 1080 died yesterday after 28 months of 24/7 service. I returned an RTX 2070 and kept the RTX 2060; the 2060 is limited to 160 W compared to the 210 W the overclocked 2070 operated at for 12-20% more performance. Today I purchased my first ever Ti GPU, a 2080 Ti: an Asus ROG Strix COD edition ($700 open box at Microcenter). Out of the box it boosted to 1970 MHz at 280 W on an n=20 WU. With the new 2080 Ti I decided to test Turing on PCIe x1.

Also, my 2013 Z87 Haswell is showing its age with the overclocked RTX 2080 Ti and 2080 on PrimeGrid PPS sieve: runtimes are slower than RTX 2080 Ti / 2080 combos with Skylake / Coffee Lake CPUs. On PrimeGrid, CPU speed scales very well with GPU overclocking for PPS sieve, as it does for AP27; these two programs require minimal PCIe bandwidth. Higher-clocked (3.7+ GHz) CPUs help the overclocked GPU finish the WU faster. This is similar to GPUGRID.
Joined: 21 Mar 16 · Posts: 513 · Credit: 4,673,458,277 · RAC: 0
Wow, I think you got the deal of the century. I bought a new EVGA 2080 Ti from Microcenter for $1015 and I thought that was a pretty good deal. The difference is, my max boost speed seems to be 1800 MHz using 275 watts according to GPU-Z. Pretty amazing that you can get 1970 MHz with only 280 watts.
Joined: 30 Apr 19 · Posts: 54 · Credit: 168,971,875 · RAC: 0
I will not use a PCIe x1 slot. I have decided to use mainboards with PLX 8747 PCIe switches; they mostly have 7 PCIe slots, fully loading a mining frame with 7 GPUs. I think I am going for the GTX 1070, but that is not certain yet. And I always undervolt all my GPUs to a 50% power limit because of the heat.

Right now I am running 4 systems: 3 for GPUGRID (with 6 GPUs in total) and 1 system with 6 GPUs for Folding@home. In the planning: add 1 more system for GPUGRID, 1 more for FAH, and 1 more for WCG and FAH, but I have not yet decided which GPU to choose.
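The 50% power-limit habit described above, as arithmetic: halve each card's default board power. A sketch using the NVIDIA reference-card specs (on an actual rig the limit would be applied per card, typically with `nvidia-smi -pl <watts>`; the card list here is just an example):

```python
# Reference board-power (TDP) specs for the cards mentioned in this thread.
DEFAULT_TDP_W = {
    "GTX 1070": 150,
    "GTX 1080": 180,
}

# A 50% power limit means capping each card at half its default TDP.
for card, tdp in DEFAULT_TDP_W.items():
    print(f"{card}: {tdp} W default -> {tdp // 2} W at a 50% limit")
```

Note that partner cards often ship with higher default limits than the reference spec, so the actual halved value should be read from the card itself (e.g. `nvidia-smi -q -d POWER`).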
Joined: 25 Sep 13 · Posts: 293 · Credit: 1,897,601,978 · RAC: 0
Thanks, it's an amazing price. Someone returned an efficient chip. Even with the Turing refresh on the horizon and price drops coming, I couldn't pass up a 2080 Ti at that price. I always check Newegg and Microcenter for open-box deals. The 2080 Ti had been returned the night before, after being purchased on sale for $979. I saw the deal online early this morning, then walked in when the store opened.
©2025 Universitat Pompeu Fabra