Linux vs Microsoft

---

Joined: 4 Aug 14 | Posts: 266 | Credit: 2,219,935,054 | RAC: 0
Ian&Steve C. suggested a comparison between Microsoft and Linux hosts (https://www.gpugrid.net/forum_thread.php?id=5194&nowrap=true#55895). An interesting thought, so data has been captured and summarised below, showing Average Credit per Day for Linux and Microsoft hosts for each GPU model. The results are indicative rather than rigorous, as the hosts are not in a controlled environment. There is no control over:

- Host and PCIe specifications
- CPU/GPU utilisation for other projects
- Power limiting or overclocking
- Software / OS configurations

---

Joined: 21 Feb 20 | Posts: 1116 | Credit: 40,839,470,595 | RAC: 6,423
I wonder why the average 2080 Ti isn't scoring better. I have 6x 2080 Tis and have seen 2080 Tis in other hosts also scoring north of 1,000,000 per day per card under Linux. How are you calculating average credit? Is it based on individual task run times, or did you just add up all the credit earned from that day for each host and then average them? If the latter, then maybe some hosts weren't contributing the entire day.

---

Joined: 4 Aug 14 | Posts: 266 | Credit: 2,219,935,054 | RAC: 0
> I wonder why the average 2080 Ti isn't scoring better. I have 6x 2080 Tis and have seen 2080 Tis in other hosts also scoring north of 1,000,000 per day per card under Linux.

Good question; the calculation needs to be made visible for clarity. For each GPU model:

    Average Credit / Day = (86400 / Total Runtime) * Total Credit

This can also be written as Total Credit / (Total Runtime / 86400). Essentially, I am dividing the Total Credit by the Total Runtime expressed in days. I am happy to look at other calculations if this is not a good fit; if there is a flaw in the calculation used, please let me know so I can correct it.

The other contributing factor could be the differential in credit awarded between ADRIA and MDAD tasks. I could redo the table based on MDAD tasks only.

From the dataset used, I have grabbed 443 tasks from your host 546526. Using the above calculation, the Average Credit / Day is 1,037,108. This lines up with the RAC of 1,042,210 for this host, so the calculation I use appears OK.
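
For anyone who wants to reproduce this, here is a minimal Python sketch of the aggregation. It is illustrative only: the hard-coded task list is just the two example results quoted later in this thread, whereas the real figures come from the full task export per GPU model.

```python
SECONDS_PER_DAY = 86400

def average_credit_per_day(tasks):
    """tasks: iterable of (runtime_seconds, credit) pairs for one GPU model."""
    total_runtime = sum(runtime for runtime, _ in tasks)
    total_credit = sum(credit for _, credit in tasks)
    # Average Credit / Day = (86400 / Total Runtime) * Total Credit
    return SECONDS_PER_DAY / total_runtime * total_credit

# Example: the two task results quoted later in this thread
tasks = [(905.56, 7337.81), (1298.90, 17523.00)]
print(round(average_credit_per_day(tasks)))  # ~974,376 credit/day
```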

---

Joined: 21 Feb 20 | Posts: 1116 | Credit: 40,839,470,595 | RAC: 6,423
Yeah, I think that would be more appropriate, removing that extra variable. MDAD only is a good idea, I think. See what happens.

---

Joined: 21 Feb 20 | Posts: 1116 | Credit: 40,839,470,595 | RAC: 6,423
Are you applying this formula to every task individually, and then averaging the results for each host? Because even within the MDAD group, on the same host/GPU, there seems to be varying credit reward per unit time. Take the following two task results as examples: both from my 2080 Ti system above, both MDAD tasks, and with wildly different calculated credit/day.

Task 1: https://www.gpugrid.net/result.php?resultid=31613572
- Runtime: 905.56 s
- Credit: 7,337.81
- Calculated credit/day: (86400 / 905.56) * 7337.81 = 700,104

Task 2: https://www.gpugrid.net/result.php?resultid=31643991
- Runtime: 1,298.90 s
- Credit: 17,523.00
- Calculated credit/day: (86400 / 1298.90) * 17523.00 = 1,165,591

Just something to think about.
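
To make the two averaging methods concrete, here is a hedged sketch in Python using only the two results above. It shows how the per-task rates, their simple mean, and the total-credit-over-total-runtime figure diverge (printed values may differ from the figures above by a credit or so, since those appear to be truncated rather than rounded):

```python
SECONDS_PER_DAY = 86400

# (runtime_seconds, credit) for the two MDAD results above
tasks = [(905.56, 7337.81), (1298.90, 17523.00)]

# Per-task credit/day, as calculated in this post
per_task = [SECONDS_PER_DAY / runtime * credit for runtime, credit in tasks]
print([round(rate) for rate in per_task])    # the two figures above

# Simple mean of the per-task rates
print(round(sum(per_task) / len(per_task)))  # ~932,848

# Total credit over total runtime (the method used for the table)
total_runtime = sum(r for r, _ in tasks)
total_credit = sum(c for _, c in tasks)
print(round(SECONDS_PER_DAY / total_runtime * total_credit))  # ~974,376
```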

---

Joined: 4 Aug 14 | Posts: 266 | Credit: 2,219,935,054 | RAC: 0
> Are you applying this formula to every task individually, and then averaging the results for each host?

The calculation is a single calculation using TOTAL Credit and TOTAL Runtime for each GPU model (not each GPU, but each GPU model). The wild variance in credit you are seeing is normal for MDAD. Data from your host suggests the calculation is good (from message https://www.gpugrid.net/forum_thread.php?id=5203&nowrap=true#55913):

> From the dataset used, I have grabbed 443 tasks from your host 546526.

Looking only at the RTX 2080 Ti, and using RAC (not my calculation), the table below shows that not all users are getting the most out of their devices. The table includes all single-GPU RTX 2080 Ti hosts from the GPUGRID volunteer tab, selecting only from the first 100 hosts. The RAC figures for the RTX 2080 Ti GPUs mirror the results I have been posting.

| Host ID | OS | GPU | RAC | Rank |
|---|---|---|---|---|
| 546526 | Linux | RTX 2080 Ti | 1035368 | 33 |
| 501978 | Microsoft | RTX 2080 Ti | 907221 | 46 |
| 478405 | Microsoft | RTX 2080 Ti | 837393 | 54 |
| 455804 | Microsoft | RTX 2080 Ti | 832648 | 55 |
| 513616 | Microsoft | RTX 2080 Ti | 801022 | 58 |
| 522530 | Microsoft | RTX 2080 Ti | 793276 | 59 |
| 539392 | Microsoft | RTX 2080 Ti | 756802 | 65 |
| 542939 | Microsoft | RTX 2080 Ti | 751048 | 68 |
| 552807 | Microsoft | RTX 2080 Ti | 742845 | 69 |
| 538494 | Microsoft | RTX 2080 Ti | 722999 | 73 |
| 564665 | Linux | RTX 2080 Ti | 722641 | 74 |
| 538725 | Linux | RTX 2080 Ti | 722416 | 75 |
| 519117 | Microsoft | RTX 2080 Ti | 717758 | 78 |
| 518874 | Microsoft | RTX 2080 Ti | 713037 | 79 |
| 512891 | Linux | RTX 2080 Ti | 700465 | 83 |
| 552300 | Microsoft | RTX 2080 Ti | 673280 | 90 |
| 547099 | Microsoft | RTX 2080 Ti | 670212 | 92 |
| 528914 | Microsoft | RTX 2080 Ti | 650826 | 98 |

I agree, the figure for the RTX 2080 Ti was a surprise, but all the data I have seen supports it being the correct figure.
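
For what it's worth, the host selection described above could be scripted along these lines. This is a sketch only: the DataFrame columns are hypothetical and the three sample rows are copied from the top of the table, not pulled from an actual site export.

```python
import pandas as pd

# Illustrative host records; a real run would scrape the volunteer pages
hosts = pd.DataFrame({
    "host_id":   [546526, 501978, 478405],
    "os":        ["Linux", "Microsoft", "Microsoft"],
    "gpu":       ["RTX 2080 Ti", "RTX 2080 Ti", "RTX 2080 Ti"],
    "gpu_count": [1, 1, 1],
    "rac":       [1035368, 907221, 837393],
})

# Single-GPU RTX 2080 Ti hosts only, ranked by RAC descending
table = (hosts[(hosts["gpu"] == "RTX 2080 Ti") & (hosts["gpu_count"] == 1)]
         .sort_values("rac", ascending=False))
print(table.to_string(index=False))
```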

---

Joined: 21 Feb 20 | Posts: 1116 | Credit: 40,839,470,595 | RAC: 6,423
Can you elaborate on what you mean by total runtime and credit for each GPU model? Do you mean to say you take the sum of all the runtimes and credits of the thousands of tasks across all the hosts and do the calculation in a single equation?

It just seems odd for so many hosts to perform so poorly. I'm not doing anything "special" with mine; I've even power limited them, reducing their production slightly to increase power efficiency. I'm trying to think whether your method is somehow being skewed by individual host behaviour (like someone not running their host 24/7, or not running GPUGRID full time and sharing resources with other projects), but assuming you're basing your numbers only on the tasks they submit, and they aren't running more than one WU at a time, this shouldn't be the case. Or perhaps a lot of other hosts are getting a disproportionate amount of low-credit tasks? But again, I would expect the task distribution to be more or less the same for everyone. I have no other explanation for it, though.

---

Joined: 4 Aug 14 | Posts: 266 | Credit: 2,219,935,054 | RAC: 0
> Can you elaborate on what you mean by total runtime and credit for each GPU model? Do you mean to say you take the sum of all the runtimes and credits of the thousands of tasks across all the hosts and do the calculation in a single equation?

If you look at this post, the Total Runtime and Total Credit are listed for every GPU model (but not broken up by OS): https://www.gpugrid.net/forum_thread.php?id=5194&nowrap=true#55894. These are the figures used. I didn't put them in this thread, as there were already seven columns in the original post. I will prepare another comparison next year; it will be interesting to see if the results are any different.

---

Joined: 22 May 20 | Posts: 110 | Credit: 115,525,136 | RAC: 0
Thank you for putting this information together. I always knew that Linux had an advantage over Windows, but I would never have suspected this big a difference in performance. Thanks for the detailed analysis. Much appreciated.

---

Joined: 20 Mar 20 | Posts: 1 | Credit: 36,144,114 | RAC: 0
Don't forget to do at least a bare minimum of statistical computation; without it, this exercise is not of much use. (An arithmetic average of averages is wrong; I think it should be harmonic. And where are other basic characterisations, like variance?) Depending on those calculations, that ~20% "advantage" might still be insignificant. I think ANOVA would be a far better method (assuming the data fulfil the assumptions of the method; otherwise one of the non-parametric comparative tests would have to be used).
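
The suggested checks are only a few lines with SciPy. This is a sketch: the per-task credit/day samples below are made-up placeholders, not measured data.

```python
from statistics import harmonic_mean
from scipy import stats

# Placeholder per-task credit/day samples for each OS group
linux_rates = [1_037_108, 998_500, 1_012_300, 975_900]
windows_rates = [907_221, 837_393, 832_648, 801_022]

print(harmonic_mean(linux_rates))   # harmonic mean, as suggested above
print(stats.describe(linux_rates))  # includes variance

# One-way ANOVA across the OS groups; if the data violate its
# assumptions, fall back to the non-parametric Kruskal-Wallis test
print(stats.f_oneway(linux_rates, windows_rates))
print(stats.kruskal(linux_rates, windows_rates))
```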

---

Joined: 4 Aug 14 | Posts: 266 | Credit: 2,219,935,054 | RAC: 0
> Don't forget to do at least a bare minimum of statistical computation; without it, this exercise is not of much use. (An arithmetic average of averages is wrong; I think it should be harmonic. And where are other basic characterisations, like variance?)

I totally agree. There is so much more analysis that can be done to present this data in better ways, and I will certainly consider your suggestions on presenting the data with the statistical methods mentioned. It should also be considered that not everyone here is a statistician; the current approach is to present the data with a minimum of statistical arithmetic so it has the broadest reach.

> An arithmetic average of averages is wrong

Yes, agreed. The "Advantage" column values are misleading, but they nevertheless do indicate which OS has the advantage.

> that ~20% "advantage" might still be insignificant

That remains to be seen. Hopefully the next post will be better presented, with a better representation of the advantage.

---

Retvari Zoltan | Joined: 20 Jan 09 | Posts: 2380 | Credit: 16,897,957,044 | RAC: 0
Linux has a performance advantage over Windows (Vista and newer) operating systems due to their different technological approaches. After releasing Windows XP (and the Server 2003 series), Microsoft decided to sacrifice some GPU performance to make their OS more stable: whenever an exception takes place in the device driver, it won't take down the whole OS (i.e. there's no Blue Screen Of Death). They refined this technology in the later editions of Windows 10, but the latency is still there. See my post about it with links. Linux does not have this feature (as far as I know), so there's less latency in Linux in every CPU-GPU interaction. The impact of this on the overall performance of a given GPUGRID workunit greatly depends on how much CPU intervention (how many interventions per second) is needed to run the given simulation: the more intervention, the bigger the gain of Linux over Windows. So you have to redo your statistics for every different simulation.
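
One rough way to observe this per-interaction overhead yourself (not Zoltan's measurement, just an illustrative sketch assuming an NVIDIA GPU with CuPy installed) is to time a tight launch-and-synchronise loop on each OS:

```python
import time
import cupy as cp  # assumes CUDA and CuPy are available

x = cp.zeros(1024, dtype=cp.float32)
cp.cuda.Device().synchronize()  # warm up the context

n = 10_000
start = time.perf_counter()
for _ in range(n):
    y = x + 1.0                     # trivial kernel launch
    cp.cuda.Device().synchronize()  # force a CPU-GPU round trip
elapsed = time.perf_counter() - start

# On WDDM (Windows) this round trip typically costs more than on Linux
print(f"{elapsed / n * 1e6:.1f} microseconds per CPU-GPU round trip")
```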

---

Joined: 4 Aug 14 | Posts: 266 | Credit: 2,219,935,054 | RAC: 0
> Linux has a performance advantage over Windows (Vista and newer) operating systems due to their different technological approaches.

Great insight. So based on that, the comparison will need a breakdown by OS AND application. Relatively easy to do; this will be next year's challenge.