Performance Tab still broken

Message boards : Server and website : Performance Tab still broken
rod4x4

Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Level: Phe
Message 55674 - Posted: 2 Nov 2020, 3:52:54 UTC
Last modified: 2 Nov 2020, 4:35:52 UTC

The Performance tab was always a great source of information for gauging how each GPU performs in comparison to other GPUs.

I have now resorted to scanning the Volunteers \ Hosts list for this info.

Below is a quick summary taken 2nd November 2020 2:00am UTC.

In this listing the Recent Average Credit (RAC) has been used for the comparison due to the variability of the work unit runtimes. Toni has stated that the credit formula has been uniform for several years.

This list is not definitive, just an indicator at best. There are many factors that could affect this listing.

NOTE:
Hosts with multiple GPUs have been excluded.
Recent Average Credit has been rounded.
The best performing GPU of each type is listed.

Rank	GPU		 RAC
 23	RTX 2080 Ti	1032000
 61	RTX 2080 Super	 760000
 65	RTX 2070 Super	 747000
 74	GTX 1080 Ti	 712000
 85	RTX 2080	 658000
 94	RTX 2070	 625000
116	RTX 2060 Super	 585000
138	GTX 1080	 528000
155	GTX 1660 Ti	 511000
156	RTX 2060	 510000
166	GTX 1070 Ti	 502000
194	GTX 1070	 468000
216	GTX 1660 Super	 436000
276	GTX 1660	 396000
335	GTX 1650 Super	 353000
408	GTX 1060 6GB	 310000
459	GTX 1060 3GB	 288000
490	GTX 1650	 274000
809	GTX 1050 Ti	 193000
960	GTX 1050	 160000


Below is a list of GPU efficiency (based on the list above)

Again, this list is not definitive and should not be taken too seriously. There are many factors that could change it.

NOTE
Watts are estimated for each GPU type.

GPU		Watts	RAC/Watt	Rank
GTX 1660 Ti	 130	 3931		  1
GTX 1650 Super	 100	 3530		  2
GTX 1660 Super	 125	 3488		  3
RTX 2070 Super	 215	 3474		  4
RTX 2080 Ti	 300	 3440		  5
RTX 2070	 185	 3378		  6
RTX 2060 Super	 175	 3343		  7
GTX 1660	 120	 3300		  8
GTX 1650	  85	 3224		  9
RTX 2060	 160	 3188		 10
GTX 1070	 150	 3120		 11
RTX 2080	 215	 3060		 12
RTX 2080 Super	 250	 3040		 13
GTX 1080	 180	 2933		 14
GTX 1080 Ti	 250	 2848		 15
GTX 1070 Ti	 180	 2789		 16
GTX 1060 6GB	 120	 2583		 17
GTX 1050 Ti	  75	 2573		 18
GTX 1060 3GB	 120	 2400		 19
GTX 1050	  75	 2133		 20
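For anyone who wants to re-derive the RAC/Watt column, it reduces to a few lines. A minimal sketch using the top five rows above (the RAC and watt figures are this post's estimates, not authoritative TDPs):

```python
# RAC and estimated watts copied from the tables in this post.
gpus = {
    "GTX 1660 Ti":    (511000, 130),
    "GTX 1650 Super": (353000, 100),
    "GTX 1660 Super": (436000, 125),
    "RTX 2070 Super": (747000, 215),
    "RTX 2080 Ti":    (1032000, 300),
}

# Efficiency = RAC / estimated board power, highest first.
ranking = sorted(gpus, key=lambda g: gpus[g][0] / gpus[g][1], reverse=True)
for rank, name in enumerate(ranking, start=1):
    rac, watts = gpus[name]
    print(f"{rank:2d}  {name:<15} {round(rac / watts)}")
```

Running it reproduces ranks 1 through 5 of the efficiency table.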
ID: 55674
Ian&Steve C.

Joined: 21 Feb 20
Posts: 1112
Credit: 40,764,483,595
RAC: 7,379,314
Level: Trp
Message 55675 - Posted: 2 Nov 2020, 5:06:32 UTC - in response to Message 55674.  

The 2080 Ti TDP number is way high. Most are 250 W, some a little more, but 300 W would be atypical.

Personally I run my single one at 225 W, and my 5x system at 215 W each.
ID: 55675
ServicEnginIC
Joined: 24 Sep 10
Posts: 591
Credit: 11,738,036,510
RAC: 10,299,581
Level: Trp
Message 55676 - Posted: 2 Nov 2020, 6:33:03 UTC - in response to Message 55674.  

I have now resorted to scanning the Volunteers \ Hosts list for this info.
Below is a quick summary taken 2nd November 2020 2:00am UTC.


Great (and laborious) job!
Very interesting, thank you for this effort.
ID: 55676
rod4x4

Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Level: Phe
Message 55679 - Posted: 2 Nov 2020, 8:24:42 UTC - in response to Message 55675.  
Last modified: 2 Nov 2020, 8:27:08 UTC

The 2080 Ti TDP number is way high. Most are 250 W, some a little more, but 300 W would be atypical.

Personally I run my single one at 225 W, and my 5x system at 215 W each.


Setting the watts was a difficult choice. For the RTX 2080 Ti, high-end cards are 300 W while the reference TDP is 250 W. Since I was picking the best performing GPU, I assumed it was a high-end card.

And then there are users who do modify the power limits (me included).

The table is definitely not perfect, hence the caveat in the post.

Nice to know you run your RTX 2080 Ti at 225 W, as it was your GPU in the list.

I may post this list occasionally. I will update your GPU's TDP if it appears in the next list.
ID: 55679
rod4x4

Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Level: Phe
Message 55681 - Posted: 2 Nov 2020, 10:51:12 UTC - in response to Message 55676.  

I have now resorted to scanning the Volunteers \ Hosts list for this info.
Below is a quick summary taken 2nd November 2020 2:00am UTC.


Great (and laborious) job!
Very interesting, thank you for this effort.


This was a manual task, but I have almost finished a script to automate the process.

This should provide more GPUs in the list and hopefully include median RAC figures for GPU types.

I will also revert to reference TDP for efficiency calculations on the median RAC.

Time permitting, I will attempt to publish a list every month.
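In case it helps anyone doing the same, the median step of such a script is simple once the rows are scraped. A rough sketch (the scraping itself and the real page layout are left out; the sample numbers are made up):

```python
from collections import defaultdict
from statistics import median

def median_rac_by_gpu(rows):
    """rows: (gpu_model, rac) tuples already scraped from the
    Volunteers \\ Hosts list, single-GPU hosts only."""
    by_model = defaultdict(list)
    for model, rac in rows:
        by_model[model].append(rac)
    return {model: median(vals) for model, vals in by_model.items()}

# Made-up sample rows for three GTX 1080 hosts.
sample = [("GTX 1080", 520000), ("GTX 1080", 528000), ("GTX 1080", 510000)]
print(median_rac_by_gpu(sample))  # {'GTX 1080': 520000}
```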
ID: 55681
rod4x4

Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Level: Phe
Message 55687 - Posted: 5 Nov 2020, 1:09:29 UTC
Last modified: 5 Nov 2020, 1:10:02 UTC

Finished the script and gathered some info on the first 3000 Hosts (Volunteers Tab) which I have listed below.

Hosts with multiple GPUs: 320 (which are excluded from the list)
List compiled: 5th November 2020 0:00UTC.

If you have a keen eye, you will see an RTX 3070 and two RTX 3090 GPUs. (Any RAC they have would be from a compatible card that was removed.)

GPU Model				Count
GTX 1070 				197
GTX 1060 6GB 				194
GTX 1080 				175
GTX 1080 Ti 				158
GTX 1050 Ti 				142
GTX 1060 3GB 				114
GTX 970 				114
RTX 2070 SUPER 				111
RTX 2060 				101
RTX 2080 Ti 				96
GTX 1660 Ti 				94
RTX 2070 				89
GTX 1650 				74
RTX 2080 SUPER 				70
GTX 1070 Ti 				67
RTX 2080 				65
GTX 1050 				64
GTX 960 				63
GTX 1660 SUPER 				60
RTX 2060 SUPER 				53
GTX 750 Ti 				50
GT 1030 				39
GTX 1650 SUPER 				37
GTX 1660 				37
GTX 980 				35
GTX 980 Ti 				24
GTX 1060 				18
GTX 750 				16
GTX 950 				15
GT 730 					11
GTX 760 				11
GTX 770 				11
Quadro P1000 				11
Quadro K620 				10
Quadro P2000 				10
GTX 650 				8
GTX 660 				8
GTX 780 				7
Quadro P4000 				7
GTX 650 Ti 				6
GTX 960M 				6
Quadro K2200 				6
TITAN V 				6
TITAN X Pascal 				6
GTX 1060 with MaxQ Design 		5
GTX 980M 				5
GTX TITAN X 				5
MX150 					5
Quadro RTX 4000 			5
GTX 1650 with MaxQ Design 		4
GTX 680 				4
P106-090 				4
Quadro K4200 				4
Quadro M4000 				4
Quadro P5000 				4
Quadro P600 				4
RTX 2070 with MaxQ Design 		4
Tesla M60 				4
940MX 					3
GT 640 					3
GTX 745 				3
GTX 750 980MB				3
GTX 780 Ti 				3
GTX 950M 				3
GTX TITAN Black 			3
Quadro P2200 				3
Quadro P620 				3
Quadro T1000 				3
RTX 2080 with MaxQ Design 		3
TITAN Xp COLLECTORS EDITION 		3
840M 					2
GT 740 					2
GTX 1650 Ti 				2
GTX 1660 Ti with MaxQ Design 		2
GTX 650 Ti BOOST			2
GTX 660 Ti 				2
GTX 670 				2
GTX 765M 				2
GTX TITAN 				2
MX130 					2
MX250 					2
P104-100 				2
Quadro K4000 				2
Quadro K5000 				2
Quadro M2000 				2
Quadro T2000 				2
RTX 3090 				2
Tesla K20Xm 				2
Tesla T4 				2
Tesla V100-PCIE-16GB 			2
GT 650M 				1
GT 740M 				1
GTX 1050 with MaxQ Design 		1
GTX 1070 with MaxQ Design 		1
GTX 1080 with MaxQ Design 		1
GTX 645 				1
GTX 770M 				1
GTX 780M 				1
GTX 880M 				1
GTX 970M 				1
Quadro K6000 				1
Quadro M1000M 				1
Quadro M1200 				1
Quadro M3000M 				1
Quadro M6000 				1
Quadro P3000 				1
Quadro P3200 				1
Quadro P400 				1
Quadro P4200 				1
Quadro RTX 3000 			1
Quadro RTX 8000 			1
Quadro T2000 with MaxQ Design 		1
RTX 2070 Super with MaxQ Design 	1
RTX 2080 Super with MaxQ Design 	1
RTX 3070 				1
Tesla K20c 				1
Tesla K40c				1
Tesla K80 				1
Tesla P100-PCIE-12GB 			1
TITAN RTX 				1
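The grouping behind this table is essentially a counter over single-GPU hosts. A toy sketch (the host tuples are made-up examples, not real scraped data):

```python
from collections import Counter

# Hypothetical (host_id, [gpu models]) pairs as a scraper might yield them.
hosts = [
    (1, ["GTX 1070"]),
    (2, ["RTX 2080 Ti", "RTX 2080 Ti"]),  # multi-GPU host: excluded
    (3, ["GTX 1070"]),
    (4, ["GTX 1060 6GB"]),
]

# Keep only single-GPU hosts, then count models.
counts = Counter(gpus[0] for _, gpus in hosts if len(gpus) == 1)
print(counts.most_common())  # [('GTX 1070', 2), ('GTX 1060 6GB', 1)]
```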
ID: 55687
Pop Piasa
Joined: 8 Aug 19
Posts: 252
Credit: 458,054,251
RAC: 0
Level: Gln
Message 55688 - Posted: 5 Nov 2020, 17:12:24 UTC

Very much appreciate your effort, Rod4x4.

You've provided me with a better idea of how they stack up running MDAD tasks than I previously had. Thanks.
ID: 55688
Pop Piasa
Joined: 8 Aug 19
Posts: 252
Credit: 458,054,251
RAC: 0
Level: Gln
Message 55696 - Posted: 6 Nov 2020, 14:48:48 UTC

One thing that throws a wrench into rod4x4's comparison is the ADRIA-Bandit WU.

Every time one of my machines gets one, that host's RAC takes a beating.

I've also found that if restarted they throw an error, even on a single GPU host. That showed up during my last driver update. Don't know if anybody else had this happen.
ID: 55696
Keith Myers
Joined: 13 Dec 17
Posts: 1404
Credit: 8,898,646,190
RAC: 7,548,451
Level: Tyr
Message 55697 - Posted: 6 Nov 2020, 20:00:34 UTC - in response to Message 55696.  

I think Ian has mentioned the same.
ID: 55697
Ian&Steve C.

Joined: 21 Feb 20
Posts: 1112
Credit: 40,764,483,595
RAC: 7,379,314
Level: Trp
Message 55698 - Posted: 6 Nov 2020, 20:33:03 UTC - in response to Message 55697.  

I know I've seen that before, but don't remember specifically which ones. I think I saw that behavior on the PABLO tasks.
ID: 55698
Pop Piasa
Joined: 8 Aug 19
Posts: 252
Credit: 458,054,251
RAC: 0
Level: Gln
Message 55699 - Posted: 6 Nov 2020, 22:16:14 UTC

I remember Ian mentioning it about ADRIAs when they first came out and comparing them to PABLO WUs, as they are similar in size but ADRIA WUs give around half the points that PABLOs did.

(I searched the forum for "ADRIA" and "Adria" but couldn't find it.)

Speaking of broken tabs, the donation page and the profile creation page haven't worked for me since I started last year. There seems to be a problem with the "I am not a robot" verification: the picture is missing.
ID: 55699
rod4x4

Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Level: Phe
Message 55700 - Posted: 6 Nov 2020, 22:37:56 UTC - in response to Message 55696.  

One thing that throws a wrench into rod4x4's comparison is the ADRIA-Bandit WU.

Every time one of my machines gets one, that host's RAC takes a beating.


Statistically, all volunteers should experience a similar issue. Outliers will still occur, but they will have only a small and temporary effect on the statistics.
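That matches how RAC behaves: BOINC decays it exponentially with roughly a one-week half-life, so a single bad day dents it by under ten percent and the dent washes out. A toy daily-step model (illustrative only, not BOINC's exact update code):

```python
# One-week half-life, stepped once per day.
decay = 0.5 ** (1 / 7)          # per-day decay factor
steady = 500_000                # credit/day a host normally earns

rac = steady
history = []
for day in range(30):
    earned = 0 if day == 10 else steady   # day 10: errored/low-credit WUs
    rac = rac * decay + earned * (1 - decay)
    history.append(rac)

print(f"dip: {min(history):,.0f}")                 # one zero day costs ~9%
print(f"recovered by day 30: {history[-1]:,.0f}")
```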
ID: 55700
Pop Piasa
Joined: 8 Aug 19
Posts: 252
Credit: 458,054,251
RAC: 0
Level: Gln
Message 55701 - Posted: 7 Nov 2020, 0:09:17 UTC - in response to Message 55699.  

https://www.gpugrid.net/forum_thread.php?id=5125&nowrap=true#55153

Ian C wrote:
The VillinAdaptive WUs also pay a lot less credit reward as compared to the pablo tasks, per time invested.


Found it!
Now, these are labeled ADRIA_NTL9Bandit100ns.

I might be comparing apples to oranges, but both seem to credit about half the normal cobblestones.
ID: 55701
ServicEnginIC
Joined: 24 Sep 10
Posts: 591
Credit: 11,738,036,510
RAC: 10,299,581
Level: Trp
Message 55702 - Posted: 7 Nov 2020, 9:30:18 UTC - in response to Message 55701.  

Ian C wrote:
The VillinAdaptive WUs also pay a lot less credit reward as compared to the pablo tasks, per time invested.

Found it!
Now, these are labeled ADRIA_NTL9Bandit100ns.

It's been happening for a long time.
I found this older "Immensive credits difference: ADRIA vs. PABLO tasks" thread expressly mentioning it.

And regarding the topic of this rod4x4 thread, "Performance Tab still broken": I agree that ADRIA work units may be a good tool for a performance comparison, due to their consistency in the number of calculations per task.
ID: 55702
Pop Piasa
Joined: 8 Aug 19
Posts: 252
Credit: 458,054,251
RAC: 0
Level: Gln
Message 55703 - Posted: 7 Nov 2020, 19:20:36 UTC - in response to Message 55702.  
Last modified: 7 Nov 2020, 19:35:17 UTC


It's been happening for a long time.
I found this older "Immensive credits difference: ADRIA vs. PABLO tasks" thread expressly mentioning it.


Thanks ServicEnginIC, since nothing's changed I guess
"Ours is not to question why, ours is just to crunch, not cry!"


And regarding the topic of this rod4x4 thread, "Performance Tab still broken": I agree that ADRIA work units may be a good tool for a performance comparison, due to their consistency in the number of calculations per task.


I'm all for that, as long as times are used instead of points per day for rating the cards. That would negate the effects that errors and variances in granted WU credit have on a host's RAC. I see credit acquisition rate as a function of the entire host machine's output, rather than just the GPU.
(Capt. Obvious, I am)

Time for me to display more newbie naivete...🤔
Is it possible to write a script to glean only the data from ADRIA tasks?
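For what it's worth, the filtering itself would be the easy part once task rows are scraped. A sketch over hypothetical (name, runtime) rows — the task names here are made up; only the "ADRIA" prefix test reflects real task naming:

```python
# Hypothetical scraped task rows: (task name, runtime in seconds).
tasks = [
    ("ADRIA_NTL9Bandit100ns_demo1", 18628.5),
    ("PABLO_demo1", 30121.0),
    ("ADRIA_NTL9Bandit100ns_demo2", 18660.4),
]

# Keep only runtimes of ADRIA-family tasks.
adria_runtimes = [t for name, t in tasks if name.startswith("ADRIA")]
print(adria_runtimes)  # [18628.5, 18660.4]
```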

(Edit)
Thanks again for your work on time comparisons of GPUs running the PABLO tasks, ServicEnginIC!
ID: 55703
rod4x4

Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Level: Phe
Message 55704 - Posted: 8 Nov 2020, 1:02:58 UTC - in response to Message 55703.  
Last modified: 8 Nov 2020, 1:42:36 UTC

I'm all for that, as long as times are used instead of points per day for rating the cards. That would negate the effects that errors and variances in granted WU credit have on a host's RAC. I see credit acquisition rate as a function of the entire host machine's output, rather than just the GPU.


I cannot use task runtimes, for several reasons:

1. I don't have access to the backend data.
The frontend data on the Volunteers tab is the only easily accessible data. That is why I always quote this page as the source.
Scanning 3000 hosts takes less than 270 seconds. I have calculated that scanning 100 tasks for 20 hosts on each of the 150 volunteer pages could take several days. As a comparison, a SQL query on the backend data would only take a few minutes to run (or less).

2. The variable runtime of the MDAD tasks makes for a complex calculation.
The variable runtime of the tasks would need a defined measure of the work unit calculation to allow for a meaningful comparison.

It would not be a full comparison if we only concentrated on one kind of work unit. It should also be considered that there are runtime variabilities within each generation of ADRIA work units. The variance in the RAC is uniform across the board, so it does not detract from the overall performance results. (Each user will see the same variance.)

It is correct to say credit is a function of the work unit output as completed on the GPU and host... as is the runtime.

In my original post, I also pointed out that Toni has stated that credit calculation has been consistent over the years. It is the only constant we have for making a comparison at the frontend.

I do admit the comparison is definitely not perfect, but useful enough to make a generalized comparison.

It would be good and welcomed if better methods can be highlighted.

More importantly, we would not have this dilemma if the Performance tab was working.
Instead, I simply started this thread as an alternate way for performance comparisons to be made, to help GPUGrid users, to take some workload off the GPUGrid admins (fixing the Performance tab would distract them from more important project work) and to stimulate discussion.

I hope this post is a vehicle for sharing ideas, provides support for GPUGrid, and engenders open discussion.
ID: 55704
Pop Piasa
Joined: 8 Aug 19
Posts: 252
Credit: 458,054,251
RAC: 0
Level: Gln
Message 55705 - Posted: 9 Nov 2020, 1:36:32 UTC - in response to Message 55704.  
Last modified: 9 Nov 2020, 2:09:13 UTC

I get it now, mate. Users can't access the level of data that Grosso can, and the existing server webpage interface no longer provides pertinent data, so it is essentially useless. I hope somebody on the team actually reads your post and corrects this problem.

Thanks for your kind responses, and thanks double for your contribution to benchmarking GPU performance while processing ACEMD platform based tasks.
ID: 55705
rod4x4

Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Level: Phe
Message 55706 - Posted: 9 Nov 2020, 2:03:03 UTC - in response to Message 55705.  
Last modified: 9 Nov 2020, 2:07:33 UTC

I have calculated that scanning 100 tasks for 20 hosts on each of the 150 volunteer pages scanned could take several days. As a comparison, a SQL query on the backend data would only take a few minutes to run (or less)


I must have missed my morning coffee when I made that calculation.
Having a further think about it, the script should take just under 30 minutes to grab the tasks for each host, not several days (what was I thinking?).
That then makes it viable to grab ADRIA task runtimes for each participating host.
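A back-of-the-envelope for the revised estimate. The per-page time comes from the 150-pages-in-under-270-seconds scan above; how many hosts actually need a tasks page fetched is a guess:

```python
# Per-page fetch time, derived from the earlier scan figures.
pages_scanned = 150
scan_seconds = 270
per_page = scan_seconds / pages_scanned        # ~1.8 s per page fetch

# Guess: roughly this many recently active hosts need a tasks page.
hosts_needing_tasks_page = 1000
total_minutes = hosts_needing_tasks_page * per_page / 60
print(f"{total_minutes:.0f} minutes")
```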


Might have this done by the end of the week.
ID: 55706
ServicEnginIC
Joined: 24 Sep 10
Posts: 591
Credit: 11,738,036,510
RAC: 10,299,581
Level: Trp
Message 55707 - Posted: 9 Nov 2020, 20:45:36 UTC - in response to Message 55706.  

I've caught some ADRIA work units on my systems over the past week; a few, but enough for me to confirm some suppositions.

* Graphics card ASUS TUF-GTX1650S-4G-GAMING, based on the GTX 1650 SUPER GPU, running in a PCIe gen 3.0 x16 slot, got three. Execution times (seconds): WU#1: 18628.53; WU#2: 18660.36; WU#3: 18627.83
* Graphics card ASUS DUAL-GTX1660TI-O6G, based on the GTX 1660 Ti GPU, running in a PCIe gen 2.0 x16 slot, got two. Execution times (seconds): WU#1: 16941.52; WU#2: 16998.69
-1) So far, execution times for ADRIA tasks are relatively consistent when executed on the same card model and setup.

* Graphics card ASUS ROG-STRIX-GTX1650-O4G-GAMING, based on a factory-overclocked GTX 1650 GPU, got three. Execution times (seconds): WU#1: 23941.33; WU#2: 23937.18; WU#3: 32545.90
-2) Ooops!!! What happened here? The execution time for WU#3 is very different from WU#1 and WU#2, despite being executed on the same graphics card model.
The explanation: WU#1 and WU#2 were executed on a card installed in a PCIe gen 3.0 x16 slot, while WU#3 was executed on a card installed in a PCIe gen 3.0 x4 slot.
Both are installed in the same multi-GPU system, but due to mainboard limitations only PCIe slot 0 runs at x16, while slots 1 and 2 run at x4, thus limiting performance for ADRIA tasks.
Conclusion: execution times reflect not only a particular graphics card's performance, but also its particular working conditions in the system where it is installed.
rod4x4, in my opinion, your decision to discard hosts with multiple GPUs makes complete sense for this (and other) reason(s).

It's a pity that ADRIA work unit availability is unpredictable, and usually very transient...
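Putting rough numbers on both observations (values copied from the times above):

```python
# GTX 1650 SUPER, all three WUs in a PCIe 3.0 x16 slot.
x16_times = [18628.53, 18660.36, 18627.83]
# GTX 1650; the third WU ran in an x4 slot on the same system.
gtx1650_times = [23941.33, 23937.18, 32545.90]

# Same card, same slot: the spread is a fraction of a percent.
spread = (max(x16_times) - min(x16_times)) / min(x16_times)
print(f"same-setup spread: {spread:.2%}")      # ~0.17%

# Same card model, x4 instead of x16: roughly a third slower.
penalty = gtx1650_times[2] / gtx1650_times[0] - 1
print(f"x4 slot penalty: {penalty:.0%}")       # ~36%
```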

-3) rod4x4, thank you very much again for your labour.

More importantly, we would not have this dilemma if the Performance tab was working.

+1
ID: 55707
Keith Myers
Joined: 13 Dec 17
Posts: 1404
Credit: 8,898,646,190
RAC: 7,548,451
Level: Tyr
Message 55708 - Posted: 9 Nov 2020, 23:12:28 UTC

That GPUGrid tasks are heavily dependent on PCIe bus speed has been posted and commented on multiple times by Ian.

I too see a noticeable slowdown on all GPUG tasks when comparing the same RTX 2080 cards (x3): the two cards at x8 speed versus the card running at x4 speed.

Not as big a difference as your test comparing x16 to x4, though.

Einstein sees the same kind of differences, though not as extreme as GPUG.
ID: 55708

©2025 Universitat Pompeu Fabra