Linux vs Microsoft

rod4x4

Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Message 55908 - Posted: 9 Dec 2020, 3:04:50 UTC
Last modified: 9 Dec 2020, 3:25:39 UTC

Ian&Steve C. suggested a comparison between Microsoft and Linux hosts (https://www.gpugrid.net/forum_thread.php?id=5194&nowrap=true#55895)

An interesting thought, so data has been captured and summarised below showing the Average Credit per Day for Linux and Microsoft hosts for each GPU model.

The results should be treated as indicative only, as the hosts are not in a controlled environment.

There is no control over:
    Thermal environment,
    Host and PCIe specifications,
    CPU/GPU utilisation for other projects,
    Power limiting or Overclocking,
    Software / OS configurations.


The lack of a controlled environment makes this comparison questionable, but I was still curious, so the data is presented out of interest rather than as an exact comparison.

Make your own conclusions and feel free to comment.

                Linux   Av.Cred MS      Av.Cred         Linux     Microsoft
GPU Model       Tasks    / Day  Tasks    / Day          Advantage Advantage
-------------------------------------------------------------------------    
RTX 2080	  963	786,836	  5,208	  667,794	18%	
RTX 2080 SUPER	  303	786,164	  4,623	  720,189	 9%	
RTX 2080 Ti	2,219	783,743	  9,406	  756,671	 4%	
RTX 2070 SUPER	2,220	704,915	  5,044	  620,787	14%	
GTX 1080 Ti	1,922	664,039	  8,487	  527,414	26%	
RTX 2060 SUPER	  146	643,716	  3,825	  580,067	11%	
RTX 2070	1,082	637,254	  5,375	  578,506	10%	
TITAN X		  120	583,820	    321	  519,139	12%	
GTX 1080	3,139	551,634	  9,890	  460,617	20%	
GTX 1070 Ti	1,091	515,561	  3,333	  437,885	18%	
RTX 2060	  253	473,835	  4,517	  403,898	17%	
GTX 1660 Ti	1,695	468,735	  3,661	  415,678	13%	
GTX 1070	2,878	449,560	  8,974	  394,496	14%	
GTX 1660 SUPER	  889	438,065	  3,312	  411,626	 6%	
GTX 1660	  226	402,077	  1,579	  359,399	12%	
GTX TITAN X	   50	374,996	     60	  303,673	23%	
GTX 980 Ti	  311	373,613	    810	  322,007	16%	
GTX 1650 SUPER	  578	368,614	  1,639	  323,988	14%	
GTX 1060 6GB	  784	308,254	  6,072	  287,037	 7%	
GTX 980		  220	307,633	  1,025	  296,368	 4%	
GTX 1060 3GB	1,388	307,559	  3,927	  268,834	14%	
GTX 1650	1,244	285,020	  2,287	  250,840	14%	
GTX TITAN Black	  111	248,741	     81	  235,078	 6%	
GTX 970		  930	246,908	  3,212	  245,575	 1%	
GTX 1050 Ti	  483	193,456	  1,968	  180,500	 7%	
GTX 960		  455	183,379	  1,161	  173,986	 5%	
GTX 1050	  159	166,705	    769	  160,519	 4%	
Quadro P1000	   57	149,987	     87	  108,722	38%	
GTX 950		   59	142,365	    241	  144,675		  2%
GTX 750 Ti	   30	105,671	    169	  105,381	 0%	
-------------------------------------------------------------------------    


NOTE:
Data captured 8 December 2020, 23:30 UTC.
Data is sourced from valid tasks gathered from the first 1,500 hosts in the Volunteer tab.
Hosts with multiple GPUs are excluded.
Ian&Steve C.

Joined: 21 Feb 20
Posts: 1116
Credit: 40,839,470,595
RAC: 6,423
Message 55909 - Posted: 9 Dec 2020, 3:59:52 UTC - in response to Message 55908.  
Last modified: 9 Dec 2020, 4:01:34 UTC

I wonder why the average 2080 Ti isn't scoring better. I have 6x 2080 Tis and have seen 2080 Tis in other hosts also scoring north of 1,000,000 per day per card under Linux.

How are you calculating average credit? Is it based on individual task runtimes, or did you just add up all the credit earned that day for each host and then average them? If the latter, then maybe some hosts weren't contributing the entire day.
rod4x4

Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Message 55913 - Posted: 9 Dec 2020, 4:43:54 UTC - in response to Message 55909.  
Last modified: 9 Dec 2020, 5:18:10 UTC

I wonder why the average 2080 Ti isn't scoring better. I have 6x 2080 Tis and have seen 2080 Tis in other hosts also scoring north of 1,000,000 per day per card under Linux.

How are you calculating average credit? Is it based on individual task runtimes, or did you just add up all the credit earned that day for each host and then average them? If the latter, then maybe some hosts weren't contributing the entire day.


Good question; the calculation needs to be made visible for clarity.

For each GPU model:
Average Credit / Day = (86400 / Total Runtime) * Total Credit

This can also be written as Total Credit / (Total Runtime / 86400).

Essentially, I am dividing the Total Credit by the Total Runtime expressed in days.

Happy to look at other calculations if this is not a good fit.
If there is a flaw in the calculation used, please let me know so I can correct it.
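
For anyone who wants to check the arithmetic, here is a minimal Python sketch of the calculation as I apply it per GPU model (the task list, field layout and sample values are made up for illustration, not the real dataset):

# Minimal sketch of the Average Credit / Day calculation, per GPU model.
# The sample tasks below are illustrative only, not real GPUGRID data.
from collections import defaultdict

SECONDS_PER_DAY = 86400

# Each entry: (gpu_model, runtime_seconds, credit_awarded)
tasks = [
    ("RTX 2080 Ti", 905.56, 7337.81),
    ("RTX 2080 Ti", 1298.90, 17523.00),
    ("GTX 1080", 2410.00, 9125.50),
]

# Sum runtime and credit per GPU model
totals = defaultdict(lambda: [0.0, 0.0])  # model -> [total_runtime, total_credit]
for model, runtime, credit in tasks:
    totals[model][0] += runtime
    totals[model][1] += credit

# Average Credit / Day = (86400 / Total Runtime) * Total Credit
for model, (total_runtime, total_credit) in totals.items():
    avg_credit_per_day = (SECONDS_PER_DAY / total_runtime) * total_credit
    print(f"{model}: {avg_credit_per_day:,.0f} credit/day")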


Another contributing factor could be the difference in credit awarded between ADRIA and MDAD tasks. I could redo the table based on MDAD tasks only.


From the dataset used, I have grabbed 443 tasks from your host 546526.
Using the above calculation, the Average Credit / Day is 1,037,108.
This lines up with the RAC of 1,042,210 for this host, so the calculation I use appears to be sound.
Ian&Steve C.

Joined: 21 Feb 20
Posts: 1116
Credit: 40,839,470,595
RAC: 6,423
Message 55914 - Posted: 9 Dec 2020, 4:59:32 UTC - in response to Message 55913.  

Yeah, I think that would be more appropriate, to remove that extra variable. MDAD only is a good idea, I think. See what happens.
Ian&Steve C.

Joined: 21 Feb 20
Posts: 1116
Credit: 40,839,470,595
RAC: 6,423
Message 55928 - Posted: 10 Dec 2020, 0:00:15 UTC - in response to Message 55913.  



For each GPU model:
Average Credit / Day = (86400 / Total Runtime) * Total Credit


Are you applying this formula to every task individually, and then averaging the results for each host?

Because even within the MDAD group on the same host/GPU, the credit awarded per unit of time seems to vary.

Take the following two task results as examples. Both are from my 2080 Ti system above, and both are MDAD tasks, yet they have wildly different calculated credit/day.

Task 1: https://www.gpugrid.net/result.php?resultid=31613572
Runtime: 905.56
Credit: 7337.81
Calc Cred/Day: (86400/905.56)*7337.81 = 700,104

Task 2: https://www.gpugrid.net/result.php?resultid=31643991
Runtime: 1298.90
Credit: 17523.00
Calc Cred/Day: (86400/1298.90)*17523.00 = 1,165,591
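
If anyone wants to reproduce those numbers, it's a couple of lines of Python (only the two result IDs and their runtime/credit values above are real; the helper function is just my own illustration):

SECONDS_PER_DAY = 86400

def credit_per_day(runtime_s: float, credit: float) -> float:
    """Extrapolate a single task's credit to a full day of runtime."""
    return (SECONDS_PER_DAY / runtime_s) * credit

print(credit_per_day(905.56, 7337.81))     # approx. 700,104  (task 31613572)
print(credit_per_day(1298.90, 17523.00))   # approx. 1,165,591 (task 31643991)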

Just something to think about.
rod4x4

Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Message 55931 - Posted: 10 Dec 2020, 3:27:27 UTC - in response to Message 55928.  
Last modified: 10 Dec 2020, 4:20:08 UTC

Are you applying this formula to every task individually, and then averaging the results for each host?

Because even within the MDAD group on the same host/GPU, the credit awarded per unit of time seems to vary.

Take the following two task results as examples. Both are from my 2080 Ti system above, and both are MDAD tasks, yet they have wildly different calculated credit/day.


It is a single calculation using the TOTAL Credit and TOTAL Runtime for each GPU model (not each GPU, but each GPU model).

The wild variance in credit you are seeing is normal for MDAD.

Data from your host suggests the calculation is good (from message https://www.gpugrid.net/forum_thread.php?id=5203&nowrap=true#55913):
From the dataset used, I have grabbed 443 tasks from your host 546526.
Using the above calculation, the Average Credit / Day is 1,037,108.
This lines up with the RAC of 1,042,210 for this host, so the calculation I use appears to be sound.


Looking only at the RTX 2080 Ti and using RAC (not my calculation), the table below shows that not all users are getting the most out of their devices.
The table includes all RTX 2080 Ti single-GPU hosts from the GPUGRID Volunteer tab, selected from the first 100 hosts only.

The RAC figures for the RTX 2080 Ti GPUs mirror the results I have been posting.

					Boinc   Gpugrid
Host ID	   OS		    GPU	         RAC	 Rank
------------------------------------------------------
546526	 Linux		RTX 2080 Ti    1035368	  33
501978	 Microsoft	RTX 2080 Ti	907221	  46
478405	 Microsoft	RTX 2080 Ti	837393	  54
455804	 Microsoft	RTX 2080 Ti	832648	  55
513616	 Microsoft	RTX 2080 Ti	801022	  58
522530	 Microsoft	RTX 2080 Ti	793276	  59
539392	 Microsoft	RTX 2080 Ti	756802	  65
542939	 Microsoft	RTX 2080 Ti	751048	  68
552807	 Microsoft	RTX 2080 Ti	742845	  69
538494	 Microsoft	RTX 2080 Ti	722999	  73
564665	 Linux		RTX 2080 Ti	722641	  74
538725	 Linux		RTX 2080 Ti	722416	  75
519117	 Microsoft	RTX 2080 Ti	717758	  78
518874	 Microsoft	RTX 2080 Ti	713037	  79
512891	 Linux		RTX 2080 Ti	700465	  83
552300	 Microsoft	RTX 2080 Ti	673280	  90
547099	 Microsoft	RTX 2080 Ti	670212	  92
528914	 Microsoft	RTX 2080 Ti	650826	  98
------------------------------------------------------


I agree, the figure for the RTX 2080 Ti was a surprise, but all the data I have seen supports it as the correct figure.
Ian&Steve C.

Joined: 21 Feb 20
Posts: 1116
Credit: 40,839,470,595
RAC: 6,423
Message 55932 - Posted: 10 Dec 2020, 4:50:43 UTC - in response to Message 55931.  
Last modified: 10 Dec 2020, 4:52:19 UTC

Can you elaborate on what you mean by total runtime and credit for each GPU model? Do you mean to say you take the sum of all the runtimes and credits of the thousands of tasks across all the hosts and do the calculation in a single equation?

It just seems odd for so many hosts to perform so poorly. I'm not doing anything "special" with mine. I've even power-limited them, reducing their production slightly to increase power efficiency.

I'm trying to think whether your method is somehow being skewed by individual hosts' behaviour (like someone who isn't running their host 24/7, or isn't running GPUGRID full time and is sharing resources with other projects). But assuming you're basing your numbers only on the tasks they submit, and they aren't running more than one WU at a time, this shouldn't be the case. Or perhaps a lot of other hosts are getting a disproportionate amount of low-credit tasks? But again, I would expect the task distribution to be more or less the same for everyone.

I have no other explanation for it though.
rod4x4

Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Message 55935 - Posted: 10 Dec 2020, 6:32:12 UTC - in response to Message 55932.  

Can you elaborate on what you mean by total runtime and credit for each GPU model? Do you mean to say you take the sum of all the runtimes and credits of the thousands of tasks across all the hosts and do the calculation in a single equation?

If you look at this post, the Total Runtime and Total Credit are listed for every GPU model (but not broken down by OS): https://www.gpugrid.net/forum_thread.php?id=5194&nowrap=true#55894
These are the figures used. I didn't include them in this thread, as there were already seven columns in the original post.

I will prepare another comparison next year. It will be interesting to see if the results are any different.
bozz4science

Joined: 22 May 20
Posts: 110
Credit: 115,525,136
RAC: 0
Message 55970 - Posted: 10 Dec 2020, 22:13:44 UTC - in response to Message 55935.  

Thank you for putting this information together. I always knew that Linux had an advantage over Windows, but I would never have suspected this big a difference in performance.

Thanks for the detailed analysis. Much appreciated.
DKlimax

Joined: 20 Mar 20
Posts: 1
Credit: 36,144,114
RAC: 0
Message 56142 - Posted: 22 Dec 2020, 18:41:07 UTC - in response to Message 55935.  

Don't forget to do at least the bare minimum of statistical computation; as it stands, this exercise is not of much use. (An arithmetic average of averages is wrong; I think it should be harmonic. And where are the other basic characterizations, like variance?)

Depending on those calculations, that ~20% "advantage" might still be insignificant.

I think ANOVA would be a far better method (assuming the data fulfill the assumptions of the method; otherwise one of the non-parametric comparative tests would have to be used).
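
Something along these lines, for example (a rough Python/SciPy sketch; the per-task credit-rate arrays here are placeholders, not real GPUGRID numbers):

# Rough sketch: compare per-task credit rates (credit per second of runtime)
# between Linux and Windows hosts for one GPU model.
# The two arrays are placeholders, not real GPUGRID data.
import numpy as np
from scipy import stats

linux_rates = np.array([8.1, 9.0, 7.7, 8.5, 9.2])    # credit/second, Linux tasks
windows_rates = np.array([7.2, 7.9, 6.8, 7.5, 8.0])  # credit/second, Windows tasks

# One-way ANOVA (assumes roughly normal groups with similar variance)
f_stat, p_anova = stats.f_oneway(linux_rates, windows_rates)

# Non-parametric alternative if those assumptions don't hold
u_stat, p_mw = stats.mannwhitneyu(linux_rates, windows_rates, alternative="two-sided")

print(f"ANOVA:        F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {p_mw:.4f}")
print(f"Variance (Linux / Windows): {linux_rates.var(ddof=1):.3f} / {windows_rates.var(ddof=1):.3f}")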
rod4x4

Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Message 56145 - Posted: 23 Dec 2020, 0:18:25 UTC - in response to Message 56142.  
Last modified: 23 Dec 2020, 0:21:26 UTC

Don't forget to do at least the bare minimum of statistical computation; as it stands, this exercise is not of much use. (An arithmetic average of averages is wrong; I think it should be harmonic. And where are the other basic characterizations, like variance?)

Depending on those calculations, that ~20% "advantage" might still be insignificant.

I think ANOVA would be a far better method (assuming the data fulfill the assumptions of the method; otherwise one of the non-parametric comparative tests would have to be used).


I totally agree. There is so much more analysis that can be done to present this data in better ways. I will certainly consider your suggestions on presenting the data with the statistical methods mentioned.

It should also be considered that not everyone here is a statistician.
The current approach is to present the data with a minimum of statistical arithmetic, so that it has the broadest reach.


An arithmetic average of averages is wrong

Yes, agreed. The "Advantage" column values are misleading, but they nevertheless indicate which OS has the advantage.

that ~20% "advantage" might still be insignificant

That remains to be seen. Hopefully the next post will be better presented, with a clearer representation of the advantage.
Retvari Zoltan

Joined: 20 Jan 09
Posts: 2380
Credit: 16,897,957,044
RAC: 0
Message 56146 - Posted: 23 Dec 2020, 1:26:22 UTC
Last modified: 23 Dec 2020, 1:32:15 UTC

Linux has a performance advantage over Windows (Vista and newer) operating systems due to their different technological approach.
After Windows XP (and the Server 2003 series), Microsoft decided to sacrifice some GPU performance to make the OS more stable: whenever an exception occurs in the device driver, it won't take down the whole OS (i.e. there's no Blue Screen Of Death).
They refined this technology in the later editions of Windows 10, but the latency is still there.
See my post about it with links.
Linux does not have this feature (as far as I know), so there's less latency in every CPU-GPU interaction. The impact of this on the overall performance of a given GPUGrid workunit greatly depends on how much CPU intervention (how many interventions per second) is needed to run the given simulation. The more intervention, the bigger the gain of Linux over Windows. So you would have to redo your statistics for every different simulation.
rod4x4

Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Message 56147 - Posted: 23 Dec 2020, 1:54:30 UTC - in response to Message 56146.  
Last modified: 23 Dec 2020, 2:11:40 UTC

Linux has a performance advantage over Windows (Vista and newer) operating systems due to their different technological approach.
After Windows XP (and the Server 2003 series), Microsoft decided to sacrifice some GPU performance to make the OS more stable: whenever an exception occurs in the device driver, it won't take down the whole OS (i.e. there's no Blue Screen Of Death).
They refined this technology in the later editions of Windows 10, but the latency is still there.
See my post about it with links.
Linux does not have this feature (as far as I know), so there's less latency in every CPU-GPU interaction. The impact of this on the overall performance of a given GPUGrid workunit greatly depends on how much CPU intervention (how many interventions per second) is needed to run the given simulation. The more intervention, the bigger the gain of Linux over Windows. So you would have to redo your statistics for every different simulation.

Great insight.
So based on that, the comparison will need a breakdown by both OS and application. Relatively easy to do.
This will be next year's challenge.
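
For the curious, the breakdown would look roughly like this in pandas (column names and sample rows are my own placeholders, not the actual GPUGRID export format):

# Rough sketch of the planned breakdown: Average Credit / Day
# per (application, OS, GPU model) group.
# Column names and sample rows are placeholders, not the GPUGRID export format.
import pandas as pd

tasks = pd.DataFrame({
    "app":       ["MDAD", "MDAD", "ADRIA", "ADRIA"],
    "os":        ["Linux", "Microsoft", "Linux", "Microsoft"],
    "gpu_model": ["RTX 2080 Ti", "RTX 2080 Ti", "RTX 2080 Ti", "RTX 2080 Ti"],
    "runtime_s": [905.56, 1298.90, 2410.00, 2650.00],
    "credit":    [17523.00, 17523.00, 9125.50, 9125.50],
})

grouped = tasks.groupby(["app", "os", "gpu_model"]).agg(
    total_runtime=("runtime_s", "sum"),
    total_credit=("credit", "sum"),
)
grouped["avg_credit_per_day"] = 86400 / grouped["total_runtime"] * grouped["total_credit"]
print(grouped.sort_values("avg_credit_per_day", ascending=False))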
