Message boards : Server and website : Performance Tab still broken

rod4x4
Message 55674 - Posted: 2 Nov 2020 | 3:52:54 UTC
Last modified: 2 Nov 2020 | 4:35:52 UTC

The Performance tab was always a great source of information for gauging how each GPU performed in comparison to other GPUs.

I have now resorted to scanning the Volunteers \ Hosts list for this info.

Below is a quick summary taken 2nd November 2020 2:00am UTC.

In this listing the Recent Average Credit (RAC) has been used for the comparison due to the variability of the work unit runtimes. Toni has stated that the credit formula has been uniform for several years.

This list is not definitive, just an indicator at best. There are many factors that could affect this listing.

NOTE:
Hosts with multiple GPUs have been excluded.
Recent Average Credit has been rounded.
The best performing GPU from each type has been listed.

Rank  GPU             RAC
-----------------------------
  23  RTX 2080 Ti     1032000
  61  RTX 2080 Super   760000
  65  RTX 2070 Super   747000
  74  GTX 1080 Ti      712000
  85  RTX 2080         658000
  94  RTX 2070         625000
 116  RTX 2060 Super   585000
 138  GTX 1080         528000
 155  GTX 1660 Ti      511000
 156  RTX 2060         510000
 166  GTX 1070 Ti      502000
 194  GTX 1070         468000
 216  GTX 1660 Super   436000
 276  GTX 1660         396000
 335  GTX 1650 Super   353000
 408  GTX 1060 6GB     310000
 459  GTX 1060 3GB     288000
 490  GTX 1650         274000
 809  GTX 1050 Ti      193000
 960  GTX 1050         160000


Below is a list of GPU efficiency (based on the list above)

Again, list is not definitive and should not be taken too seriously. There are many factors that could change this listing.

NOTE
Watts are estimated for each GPU type.

GPU             Watts  RAC/Watt  Rank
-------------------------------------
GTX 1660 Ti       130      3931     1
GTX 1650 Super    100      3530     2
GTX 1660 Super    125      3488     3
RTX 2070 Super    215      3474     4
RTX 2080 Ti       300      3440     5
RTX 2070          185      3378     6
RTX 2060 Super    175      3343     7
GTX 1660          120      3300     8
GTX 1650           85      3224     9
RTX 2060          160      3188    10
GTX 1070          150      3120    11
RTX 2080          215      3060    12
RTX 2080 Super    250      3040    13
GTX 1080          180      2933    14
GTX 1080 Ti       250      2848    15
GTX 1070 Ti       180      2789    16
GTX 1060 6GB      120      2583    17
GTX 1050 Ti        75      2573    18
GTX 1060 3GB      120      2400    19
GTX 1050           75      2133    20
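
For clarity, the RAC/Watt column is simply the RAC from the first table divided by the estimated Watts. A minimal sketch of the calculation (the 300 W figure is the estimate used above for a high-end RTX 2080 Ti):

def rac_per_watt(rac, watts):
    # Efficiency metric used in the table above: credit rate per Watt.
    return round(rac / watts)

print(rac_per_watt(1032000, 300))  # 3440, matching the RTX 2080 Ti row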

Ian&Steve C.
Message 55675 - Posted: 2 Nov 2020 | 5:06:32 UTC - in response to Message 55674.

The 2080 Ti TDP number is way high. Most are 250 W, some a little more, but 300 would be atypical.

Personally I run my single one at 225 W, and my 5x system at 215 W each.

ServicEnginIC
Message 55676 - Posted: 2 Nov 2020 | 6:33:03 UTC - in response to Message 55674.

rod4x4 wrote:
I have now resorted to scanning the Volunteers \ Hosts list for this info.
Below is a quick summary taken 2nd November 2020 2:00am UTC.


Great (and laborious) job!
Very interesting, thank you for this effort.

rod4x4
Message 55679 - Posted: 2 Nov 2020 | 8:24:42 UTC - in response to Message 55675.
Last modified: 2 Nov 2020 | 8:27:08 UTC

Ian&Steve C. wrote:
The 2080 Ti TDP number is way high. Most are 250 W, some a little more, but 300 would be atypical.

Personally I run my single one at 225 W, and my 5x system at 215 W each.


Setting the Watts was a difficult choice. For the RTX 2080 Ti, high-end cards are 300 W, while the reference TDP is 250 W. Since I picked the best performing GPU of each type, I assumed it was a high-end card.

And then there are users who modify the power limits (me included).

The table is definitely not perfect.
Hence the caveat on the post.

Nice to know you run your RTX 2080 Ti at 225 W, as it was your GPU in the list.

I may post this list occasionally. I will update your GPU TDP if it appears in the next list.

rod4x4
Message 55681 - Posted: 2 Nov 2020 | 10:51:12 UTC - in response to Message 55676.

ServicEnginIC wrote:
Great (and laborious) job!
Very interesting, thank you for this effort.


This was a manual task, but I have almost finished a script to automate the process.

This should provide more GPUs in the list and hopefully include median RAC figures for GPU types.

I will also revert to reference TDP for efficiency calculations on the median RAC.

Time permitting, I will attempt to publish a list every month.
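
(For illustration only: a minimal sketch of what such an automated scan might look like, assuming Python with the requests library, the standard BOINC top_hosts.php page with 20 hosts per page via its offset parameter, and a guessed regex for the coprocessor column. rod4x4's actual script is not posted, so treat every detail here as an assumption.)

import re
import time
from collections import Counter

import requests

BASE = "https://www.gpugrid.net/top_hosts.php"   # assumed hosts-list URL
# Assumed GPU string shape, e.g. "[1] NVIDIA GeForce GTX 1080 (4095MB)"
GPU_RE = re.compile(r"\[(\d+)\]\s+NVIDIA\s+(?:GeForce\s+)?([A-Za-z0-9 ]+?)\s*\(")

def scan(pages=150, hosts_per_page=20, delay=1.0):
    counts = Counter()
    multi_gpu = 0
    for page in range(pages):
        html = requests.get(BASE, params={"offset": page * hosts_per_page}).text
        for n_gpus, model in GPU_RE.findall(html):
            if int(n_gpus) > 1:
                multi_gpu += 1          # multi-GPU hosts are excluded
            else:
                counts[model.strip()] += 1
        time.sleep(delay)               # be polite to the project server
    return counts, multi_gpu

if __name__ == "__main__":
    counts, multi = scan()
    print("multi-GPU hosts excluded:", multi)
    for model, n in counts.most_common():
        print(f"{model:30s} {n}")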

rod4x4
Message 55687 - Posted: 5 Nov 2020 | 1:09:29 UTC
Last modified: 5 Nov 2020 | 1:10:02 UTC

I finished the script and gathered some info on the first 3000 hosts (Volunteers tab), which I have listed below.

Hosts with multiple GPUs: 320 (excluded from the list)
List compiled: 5th November 2020, 0:00 UTC.

If you have a keen eye, you will see an RTX 3070 and two RTX 3090 GPUs. (Any RAC they have would be from a compatible card that has since been removed.)

GPU Model                         Count
---------------------------------------
GTX 1070                            197
GTX 1060 6GB                        194
GTX 1080                            175
GTX 1080 Ti                         158
GTX 1050 Ti                         142
GTX 1060 3GB                        114
GTX 970                             114
RTX 2070 SUPER                      111
RTX 2060                            101
RTX 2080 Ti                          96
GTX 1660 Ti                          94
RTX 2070                             89
GTX 1650                             74
RTX 2080 SUPER                       70
GTX 1070 Ti                          67
RTX 2080                             65
GTX 1050                             64
GTX 960                              63
GTX 1660 SUPER                       60
RTX 2060 SUPER                       53
GTX 750 Ti                           50
GT 1030                              39
GTX 1650 SUPER                       37
GTX 1660                             37
GTX 980                              35
GTX 980 Ti                           24
GTX 1060                             18
GTX 750                              16
GTX 950                              15
GT 730                               11
GTX 760                              11
GTX 770                              11
Quadro P1000                         11
Quadro K620                          10
Quadro P2000                         10
GTX 650                               8
GTX 660                               8
GTX 780                               7
Quadro P4000                          7
GTX 650 Ti                            6
GTX 960M                              6
Quadro K2200                          6
TITAN V                               6
TITAN X Pascal                        6
GTX 1060 with Max-Q Design            5
GTX 980M                              5
GTX TITAN X                           5
MX150                                 5
Quadro RTX 4000                       5
GTX 1650 with Max-Q Design            4
GTX 680                               4
P106-090                              4
Quadro K4200                          4
Quadro M4000                          4
Quadro P5000                          4
Quadro P600                           4
RTX 2070 with Max-Q Design            4
Tesla M60                             4
940MX                                 3
GT 640                                3
GTX 745                               3
GTX 750 980MB                         3
GTX 780 Ti                            3
GTX 950M                              3
GTX TITAN Black                       3
Quadro P2200                          3
Quadro P620                           3
Quadro T1000                          3
RTX 2080 with Max-Q Design            3
TITAN Xp COLLECTORS EDITION           3
840M                                  2
GT 740                                2
GTX 1650 Ti                           2
GTX 1660 Ti with Max-Q Design         2
GTX 650 Ti BOOST                      2
GTX 660 Ti                            2
GTX 670                               2
GTX 765M                              2
GTX TITAN                             2
MX130                                 2
MX250                                 2
P104-100                              2
Quadro K4000                          2
Quadro K5000                          2
Quadro M2000                          2
Quadro T2000                          2
RTX 3090                              2
Tesla K20Xm                           2
Tesla T4                              2
Tesla V100-PCIE-16GB                  2
GT 650M                               1
GT 740M                               1
GTX 1050 with Max-Q Design            1
GTX 1070 with Max-Q Design            1
GTX 1080 with Max-Q Design            1
GTX 645                               1
GTX 770M                              1
GTX 780M                              1
GTX 880M                              1
GTX 970M                              1
Quadro K6000                          1
Quadro M1000M                         1
Quadro M1200                          1
Quadro M3000M                         1
Quadro M6000                          1
Quadro P3000                          1
Quadro P3200                          1
Quadro P400                           1
Quadro P4200                          1
Quadro RTX 3000                       1
Quadro RTX 8000                       1
Quadro T2000 with Max-Q Design        1
RTX 2070 Super with Max-Q Design      1
RTX 2080 Super with Max-Q Design      1
RTX 3070                              1
Tesla K20c                            1
Tesla K40c                            1
Tesla K80                             1
Tesla P100-PCIE-12GB                  1
TITAN RTX                             1
---------------------------------------

Pop Piasa
Message 55688 - Posted: 5 Nov 2020 | 17:12:24 UTC

Very much appreciate your effort, Rod4x4.

You've provided me with a better idea of how they stack up running MDAD tasks than I previously had. Thanks.

Pop Piasa
Message 55696 - Posted: 6 Nov 2020 | 14:48:48 UTC

One thing that throws a wrench into rod4x4's comparison is the ADRIA-Bandit WU.

Every time one of my machines gets one, that host's RAC takes a beating.

I've also found that, if restarted, they throw an error, even on a single-GPU host. That showed up during my last driver update. I don't know if anybody else has had this happen.

Keith Myers
Message 55697 - Posted: 6 Nov 2020 | 20:00:34 UTC - in response to Message 55696.

I think Ian has mentioned the same.

Ian&Steve C.
Message 55698 - Posted: 6 Nov 2020 | 20:33:03 UTC - in response to Message 55697.

I know I've seen that before, but don't remember specifically which ones. I think I saw that behavior on the PABLO tasks.

Pop Piasa
Message 55699 - Posted: 6 Nov 2020 | 22:16:14 UTC

I remember Ian mentioning it about ADRIAs when they first came out and comparing them to PABLO WUs, as they are similar in size but ADRIA WUs give around half the points that PABLOs did.

(I searched the forum for "ADRIA" and "Adria" but couldn't find it.)

Speaking of broken tabs, the donation page and the profile creation page haven't worked for me since I started last year. There seems to be a problem: the "I am not a robot" verification picture is missing.

rod4x4
Message 55700 - Posted: 6 Nov 2020 | 22:37:56 UTC - in response to Message 55696.

Pop Piasa wrote:
One thing that throws a wrench into rod4x4's comparison is the ADRIA-Bandit WU.

Every time one of my machines gets one, that host's RAC takes a beating.


Statistically, all volunteers should experience a similar issue. Outliers will still occur, but they will only have a small and temporary effect on the statistics.

Pop Piasa
Message 55701 - Posted: 7 Nov 2020 | 0:09:17 UTC - in response to Message 55699.

https://www.gpugrid.net/forum_thread.php?id=5125&nowrap=true#55153

Ian C wrote:

The VillinAdaptive WUs also pay a lot less credit reward as compared to the pablo tasks, per time invested.


Found it!
Now, these are labeled ADRIA_NTL9Bandit100ns.

I might be comparing apples to oranges, but both seem to credit about half the normal cobblestones.

ServicEnginIC
Message 55702 - Posted: 7 Nov 2020 | 9:30:18 UTC - in response to Message 55701.

Ian C wrote:
The VillinAdaptive WUs also pay a lot less credit reward as compared to the pablo tasks, per time invested.

Found it!
Now, these are labeled ADRIA_NTL9Bandit100ns.

It's been happening for a long time.
I found this older "Immensive credits difference: ADRIA vs. PABLO tasks" thread expressly mentioning it.

And regarding the topic of this rod4x4 thread, "Performance Tab still broken": I agree that ADRIA work units may be a good tool for performance comparison, due to their consistency in the number of calculations per task.

Pop Piasa
Message 55703 - Posted: 7 Nov 2020 | 19:20:36 UTC - in response to Message 55702.
Last modified: 7 Nov 2020 | 19:35:17 UTC


ServicEnginIC wrote:
It's been happening for a long time.
I found this older "Immensive credits difference: ADRIA vs. PABLO tasks" thread expressly mentioning it.


Thanks ServicEnginIC; since nothing's changed, I guess
"Ours is not to question why, ours is just to crunch, not cry!"


ServicEnginIC wrote:
And regarding the topic of this rod4x4 thread, "Performance Tab still broken": I agree that ADRIA work units may be a good tool for performance comparison, due to their consistency in the number of calculations per task.


I'm all for that, as long as times are used instead of points per day for rating the cards. That would negate the effects that errors and WU-granted credit variances have on a host's RAC. I see credit acquisition rates as a function of the entire host machine's output, rather than just the GPU.
(Capt. Obvious, I am)

Time for me to display more newbie naivete...🤔
Is it possible to write a script to glean only the data from ADRIA tasks?

(Edit)
Thanks again for your work on time comparisons of GPUs running the PABLO tasks, ServicEnginIC!

rod4x4
Message 55704 - Posted: 8 Nov 2020 | 1:02:58 UTC - in response to Message 55703.
Last modified: 8 Nov 2020 | 1:42:36 UTC

Pop Piasa wrote:
I'm all for that, as long as times are used instead of points per day for rating the cards. That would negate the effects that errors and WU-granted credit variances have on a host's RAC. I see credit acquisition rates as a function of the entire host machine's output, rather than just the GPU.


I cannot use task runtimes for several reasons:

1. I don't have access to the backend data
The frontend data on the volunteer tab is the only easily accessible data. That is why I always quote this page as the source.
Scanning 3000 hosts takes less than 270 seconds. I have calculated that scanning 100 tasks for 20 hosts on each of the 150 volunteer pages could take several days. As a comparison, a SQL query on the backend data would only take a few minutes to run (or less).

2. The variable runtime of MDAD tasks makes for a complex calculation.
The variable runtime would need a defined measure of the work performed per unit to allow a meaningful comparison.

It would not be a full comparison if we only concentrated on one kind of work unit. It should also be considered that there are runtime variabilities in each generation of ADRIA work units. The variance in the RAC is uniform across the board, so it does not distort the overall performance results. (Each user will see the same variance.)

It is correct to say credit is a function of the work unit output as completed on the GPU and host... but so is the runtime.

In my original post, I also pointed out that Toni has stated that credit calculation has been consistent over the years. It is the only constant we have for making a comparison at the frontend.

I do admit the comparison is definitely not perfect, but it is useful enough for a generalized comparison.

It would be good, and welcomed, if better methods could be highlighted.

More importantly, we would not have this dilemma if the Performance tab was working.
Instead, I simply started this thread as an alternative way to make performance comparisons, to help GPUGrid users, to take some workload off the GPUGrid admins (fixing the Performance tab would distract them from more important project work) and to stimulate discussion.

I hope this post is a vehicle for sharing ideas, providing support for GPUGrid and engendering open discussion.

Pop Piasa
Message 55705 - Posted: 9 Nov 2020 | 1:36:32 UTC - in response to Message 55704.
Last modified: 9 Nov 2020 | 2:09:13 UTC

I get it now, mate. Users can't access the level of data that Grosso can, and the existing server webpage interface is no longer providing pertinent data, so it is essentially useless. I hope somebody on the team actually reads your post and corrects this problem.

Thanks for your kind responses, and thanks double for your contribution to benchmarking GPU performance while processing ACEMD platform-based tasks.

rod4x4
Message 55706 - Posted: 9 Nov 2020 | 2:03:03 UTC - in response to Message 55705.
Last modified: 9 Nov 2020 | 2:07:33 UTC

I have calculated that scanning 100 tasks for 20 hosts on each of the 150 volunteer pages scanned could take several days. As a comparison, a SQL query on the backend data would only take a few minutes to run (or less)


I must have missed my morning coffee when I made that calculation.
Having had a further think about it, the script should take just under 30 minutes to grab the tasks for each host, not several days (what was I thinking?).
That then makes it viable to grab ADRIA task runtimes for each participating host.
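
(As a rough sanity check, assuming one task-history request per host at about half a second each: 3,000 hosts x 0.5 s is roughly 25 minutes, which lines up with the revised "just under 30 minutes" estimate.)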


Might have this done by the end of the week.

ServicEnginIC
Message 55707 - Posted: 9 Nov 2020 | 20:45:36 UTC - in response to Message 55706.

I've caught some ADRIA work units on my systems this past week; only a few, but enough to confirm some suppositions.

* Graphics card ASUS TUF-GTX1650S-4G-GAMING, based on the GTX 1650 SUPER GPU, running in a PCIe gen 3.0 x16 slot, got three. Execution times (seconds): WU#1: 18,628.53; WU#2: 18,660.36; WU#3: 18,627.83
* Graphics card ASUS DUAL-GTX1660TI-O6G, based on the GTX 1660 Ti GPU, running in a PCIe gen 2.0 x16 slot, got two. Execution times (seconds): WU#1: 16,941.52; WU#2: 16,998.69
-1) So far, execution times for ADRIA tasks are relatively consistent when executed on the same card model and setup.

* Graphics card ASUS ROG-STRIX-GTX1650-O4G-GAMING, based on a factory-overclocked GTX 1650 GPU, got three. Execution times (seconds): WU#1: 23,941.33; WU#2: 23,937.18; WU#3: 32,545.90
-2) Ooops!!! What happened here? The execution time for WU#3 is very different from WU#1 and WU#2, despite being executed on the same graphics card model.
The explanation: WU#1 and WU#2 were executed on a card installed in a PCIe gen 3.0 x16 slot, while WU#3 was executed on a card installed in a PCIe gen 3.0 x4 slot.
Both are installed in the same multi-GPU system, but due to mainboard limitations, only PCIe slot 0 runs at x16, while slots 1 and 2 run at x4, thus limiting performance for ADRIA tasks.
Conclusion: Execution times reflect not only a particular graphics card's own performance, but also its particular working conditions in the system where it is installed.
rod4x4, in my opinion, your decision to discard hosts with multiple GPUs makes complete sense due to this (and other) reason(s).

It's a pity that ADRIA work unit availability is unpredictable, and usually very transient...

-3) rod4x4, thank you very much again for your labour.

rod4x4 wrote:
More importantly, we would not have this dilemma if the Performance tab was working.

+1

Keith Myers
Message 55708 - Posted: 9 Nov 2020 | 23:12:28 UTC

That GPUGrid tasks are heavily dependent on PCIe bus speeds has been posted and commented on multiple times by Ian.

I too see a noticeable slowdown on all GPUG tasks when comparing my three identical RTX 2080 cards: the two cards at x8 speeds versus the card running at x4 speed.

Not as big a difference as your test comparing x16 to x4, though.

Einstein sees the same kind of differences, though not as extreme as GPUG.

Ian&Steve C.
Message 55710 - Posted: 10 Nov 2020 | 0:36:02 UTC - in response to Message 55708.
Last modified: 10 Nov 2020 | 0:38:59 UTC

From my testing with my cards (RTX 2070 and faster) I've determined that PCIe 3.0 x4 should be the lowest link you use. Preferably PCIe 3.0 x8 or better for full-speed crunching. I noticed no difference between 3.0 x8 and 3.0 x16.

When comparing link speeds make sure you're accounting for both link gen and link width. Saying just x4 or x8 or x16 alone is rather meaningless. PCIe 2.0 is half the speed of 3.0 at the same width, and 1.0 is half the speed of 2.0 (1/4 the speed of 3.0).

Also, if you're using a 3.0 x4 slot on an Intel board, that's likely from the chipset. In which case you will likely have LESS than 3.0 x4 actually available to the device, depending on what other devices are being serviced by the chipset. The DMI link between the chipset and the CPU only has the equivalent of PCIe 3.0 x4 available for ALL devices total (other PCIe slots, SATA devices, networking, etc). You really won't get the full speed from a chipset-based slot because of this.

Don't forget to account for CPU load also. If your CPU is maxed out, you'll see slow and inconsistent speeds.

Pop Piasa
Message 55711 - Posted: 10 Nov 2020 | 1:12:57 UTC

Is anyone else seeing these?

0_2-GERARD_pocket_discovery_08522dfc_f243_41c4_b232_587af6264fe7-0-3-RND1589

Lara and I ran one in tandem. One of my Dell GTX 1650 cards (x8 slot) finished in 28,980.25 seconds. Lara's RTX 2080 Ti finished in 6,761.39 seconds.

These look like fun WUs to track. They're direct comparisons of machines.

Keith Myers
Message 55712 - Posted: 10 Nov 2020 | 1:33:30 UTC - in response to Message 55710.

Good point to emphasize what the slot is serviced by. My card at the bottom of the motherboard is in a PCIe gen 2 x4 slot serviced by the chipset.

It can only manage 5.0 GT/s, compared to the other CPU-fed slots that can do 8.0 GT/s.

sph
Message 55725 - Posted: 12 Nov 2020 | 3:15:54 UTC
Last modified: 12 Nov 2020 | 3:21:48 UTC

...

rod4x4
Message 55726 - Posted: 12 Nov 2020 | 5:39:08 UTC
Last modified: 12 Nov 2020 | 6:17:05 UTC

I have captured data from the top 1200 hosts - Volunteer / Hosts tab.

Breakdown of the top 1200 hosts' data:
- 213 hosts have multiple GPUs (which are excluded from further analysis)
- 108,000 completed tasks captured from remaining 987 hosts
- 1322 completed tasks are ADRIA (1.2% of total tasks captured)

Script runtime: 2 hours 43 minutes (a bit longer than anticipated, but still reasonable considering the volume of data captured)
Scan started: 11th November 2020, 23:20 UTC

Below is a summary of ADRIA task runtimes for each GPU type.

NOTE:
Sorted by fastest Average Runtime
All Run Times are in seconds

Rank  GPU                     Tasks  Min Runtime  Max Runtime  Avg Runtime
--------------------------------------------------------------------------
   1  Quadro RTX 8000             1         8691         8691         8691
   2  TITAN RTX                   1        10834        10834        10834
   3  RTX 2080 Ti               119         8168        37177        11674
   4  Quadro RTX 5000             1        11972        11972        11972
   5  RTX 2080                   81         9841        17570        12288
   6  RTX 2080 SUPER             43        10774        16409        12290
   7  RTX 2070 SUPER             86        10690        16840        12828
   8  TITAN V                    11        12473        14216        12983
   9  TITAN Xp COLLECTORS         2        12620        14501        13560
  10  RTX 2060 SUPER             40        11855        22760        14268
  11  TITAN X Pascal              2        14348        15043        14696
  12  RTX 2070                   83        11488        46588        15198
  13  GTX 1080 Ti                87        11011        56959        16527
  14  RTX 2060                   68        12676        31984        16992
  15  RTX 3090                    3        17081        17914        17383
  16  Quadro RTX 4000             1        17509        17509        17509
  17  GTX 1080                  113        13192       107431        17552
  18  GTX 1660 Ti                58        14892        28114        17783
  19  GTX 1070 Ti                33        14612        28911        18301
  20  GTX 1660 SUPER             41        15903        30930        18664
  21  Tesla P100-PCIE-12GB        3        18941        19283        19064
  22  GTX 1660                   24        17349        26014        19430
  23  GTX 1070                  103        15787        57960        20168
  24  RTX 2070 with Max-Q         4        17142        29194        20429
  25  GTX 1660 Ti with Max-Q      2        19940        20928        20434
  26  GTX 1650 SUPER             21        18364        25123        20799
  27  Quadro M6000                1        23944        23944        23944
  28  Quadro P4000                8        21583        26749        24702
  29  GTX 980                     9        23214        29135        25218
  30  Tesla M60                   5        25897        26480        26153
  31  GTX 1060 6GB               62        21259        54456        26329
  32  GTX 980 Ti                 11        19789        44804        26637
  33  GTX 1650                   34        24514        38937        27715
  34  GTX 1060 3GB               63        13035        55834        28907
  35  GTX TITAN Black             2        30945        30951        30948
  36  GTX 780                     1        32439        32439        32439
  37  GTX 970                    31        27366        82557        33367
  38  GTX TITAN X                 4        21600        45713        33522
  39  Quadro P2000                3        33018        37158        34444
  40  Quadro K6000                1        34626        34626        34626
  41  Tesla K20Xm                 1        39713        39713        39713
  42  GTX 960                    16        37297        47822        40582
  43  GTX 1050 Ti                24        36387        66552        41365
  44  GTX TITAN                   1        41409        41409        41409
  45  P104-100                    1        43979        43979        43979
  46  GTX 1050                    6        44597        47854        46514
  47  Quadro P1000                1        48555        48555        48555
--------------------------------------------------------------------------

rod4x4
Message 55727 - Posted: 12 Nov 2020 | 6:01:02 UTC
Last modified: 12 Nov 2020 | 6:17:29 UTC

Same as previous post, this time for GERARD tasks

Only 129 GERARD tasks were completed in the last 10 days for the hosts sampled. (Hosts keep a task list for the last 10 days.)

NOTE:
Sorted by fastest Average Runtime
All Run Times are in seconds

Rank  GPU                  Tasks  Min Runtime  Max Runtime  Avg Runtime
-----------------------------------------------------------------------
   1  RTX 2080 Ti             12         6220        11696         7554
   2  TITAN V                  2         7741         9502         8622
   3  RTX 2070 SUPER           5         8336         9698         8920
   4  RTX 2080                11         7836        11529         9180
   5  RTX 2080 SUPER           6         8339        11847         9660
   6  GTX 1080 Ti             12         7917        15769        10656
   7  RTX 2070                 4         9659        11438        10876
   8  RTX 2080 with Max-Q      1        11951        11951        11951
   9  RTX 2060 SUPER           6         9932        16892        12145
  10  RTX 2060                 1        13404        13404        13404
  11  GTX 1080                11        11478        19816        15457
  12  Quadro RTX 3000          1        15839        15839        15839
  13  GTX 1660 SUPER           4        15502        17127        16005
  14  GTX 1660 Ti              4        13495        18896        16074
  15  GTX 1070                 8        12329        19960        16089
  16  GTX 1660                 2        16200        16997        16598
  17  GTX 1070 Ti              7        13928        20027        16640
  18  Quadro P4000             2        17033        18841        17937
  19  Quadro P4200             1        19012        19012        19012
  20  Tesla M60                1        21491        21491        21491
  21  GTX 1060 6GB             9        20386        29152        23884
  22  GTX 1060 3GB             5        19457        30116        24248
  23  GTX 1650 SUPER           4        20357        28377        24796
  24  GTX 970                  8        24052        36054        28795
  25  GTX 1050 Ti              1        43300        43300        43300
  26  GTX 950                  1        59844        59844        59844
-----------------------------------------------------------------------

Pop Piasa
Message 55743 - Posted: 15 Nov 2020 | 1:17:16 UTC

Thanks a million for your efforts here, rod4x4!

This IMHO is the most useful analysis yet.

rod4x4
Message 55744 - Posted: 15 Nov 2020 | 2:03:30 UTC - in response to Message 55743.

Pop Piasa wrote:
Thanks a million for your efforts here, rod4x4!

This IMHO is the most useful analysis yet.

Thank you for your feedback.

I am considering a similar comparison for MDAD tasks.

This should be ready by end of November.

ServicEnginIC
Message 55745 - Posted: 15 Nov 2020 | 10:53:59 UTC

This IMHO is the most useful analysis yet.

+1
My admiration for your work.

rod4x4
Message 55746 - Posted: 15 Nov 2020 | 12:50:54 UTC - in response to Message 55745.

This IMHO is the most useful analysis yet.

+1
My admiration for your work.

Thanks.

Pop Piasa
Message 55748 - Posted: 15 Nov 2020 | 15:58:05 UTC

Too bad that BOINC doesn't individually identify multiple GPUs the way FAHCore does (slots), so that data from those hosts could also be used. It would give an idea of how much performance multiple cards in a host lose.

Unfortunately, F@H has poor statistical reporting in comparison to BOINC.

rod4x4
Message 55754 - Posted: 16 Nov 2020 | 2:18:35 UTC
Last modified: 16 Nov 2020 | 2:26:27 UTC

From the same dataset used in the previous ADRIA and GERARD comparisons, here is a comparison of MDAD task using average runtime.

Due to the variability of runtimes for the MDAD tasks, this comparison should not be taken too seriously (although the ranking is consistent with expectations).

NOTE:
GPU types with less than 500 tasks have been excluded
Runtimes are in seconds

Rank  GPU             Tasks  Avg Runtime
----------------------------------------
   1  RTX 2080 Ti     9,423        1,721
   2  RTX 2080 SUPER  3,406        1,785
   3  TITAN V         1,032        1,812
   4  RTX 2080        5,853        1,855
   5  RTX 2070 SUPER  6,298        1,970
   6  RTX 2060 SUPER  3,206        2,250
   7  GTX 1080 Ti     8,766        2,317
   8  RTX 2070        5,254        2,354
   9  RTX 2060        3,423        2,760
  10  GTX 1070 Ti     3,396        2,821
  11  GTX 1080        8,873        2,825
  12  GTX 1660 Ti     3,548        3,012
  13  GTX 1660 SUPER  3,002        3,148
  14  GTX 1070        8,627        3,296
  15  GTX 1660        1,808        3,363
  16  GTX 980 Ti        924        3,738
  17  GTX 1650 SUPER  1,999        3,901
  18  Quadro P4000      645        4,059
  19  GTX 980           811        4,440
  20  GTX 1060 6GB    6,051        4,597
  21  GTX 1060 3GB    5,393        4,864
  22  GTX 1650        2,934        5,035
  23  GTX 970         4,015        5,335
  24  GTX 1050 Ti     1,975        7,220
  25  GTX 960           969        7,365
  26  GTX 1050          566        7,916
----------------------------------------

Pop Piasa
Message 55774 - Posted: 18 Nov 2020 | 18:52:29 UTC

I saw that Toni has responded to ServicEnginIC on the wishlist thread (https://www.gpugrid.net/forum_thread.php?id=5025&nowrap=true#55709), so I trust that we will be placated eventually and our toy restored to educate and amuse us.

Meanwhile, thanks to the loyal, diligent efforts of rod4x4 in this thread, we have a pretty good model of relative performance and efficiency on which to base a GPU buying decision.

Three virtual (social distanced) cheers for rod4x4! 🍺🍺🍺

rod4x4
Message 55776 - Posted: 18 Nov 2020 | 23:04:48 UTC - in response to Message 55774.

Pop Piasa wrote:
Three virtual (social distanced) cheers for rod4x4! 🍺🍺🍺


Cheers! 🍺🍺🍺

rod4x4
Message 55811 - Posted: 25 Nov 2020 | 0:24:54 UTC - in response to Message 55707.
Last modified: 25 Nov 2020 | 0:49:16 UTC

Summarizing PCIe slot performance
---------------------------------


ServicEnginIC posted earlier in this thread: http://www.gpugrid.net/forum_thread.php?id=5194&nowrap=true#55707
* Graphics card ASUS TUF-GTX1650S-4G-GAMING, based on the GTX 1650 SUPER GPU, running in a PCIe gen 3.0 x16 slot, got three. Execution times (seconds): WU#1: 18,628.53; WU#2: 18,660.36; WU#3: 18,627.83
* Graphics card ASUS DUAL-GTX1660TI-O6G, based on the GTX 1660 Ti GPU, running in a PCIe gen 2.0 x16 slot, got two. Execution times (seconds): WU#1: 16,941.52; WU#2: 16,998.69
-1) So far, execution times for ADRIA tasks are relatively consistent when executed on the same card model and setup.

* Graphics card ASUS ROG-STRIX-GTX1650-O4G-GAMING, based on a factory-overclocked GTX 1650 GPU, got three. Execution times (seconds): WU#1: 23,941.33; WU#2: 23,937.18; WU#3: 32,545.90
-2) Ooops!!! What happened here? The execution time for WU#3 is very different from WU#1 and WU#2, despite being executed on the same graphics card model.
The explanation: WU#1 and WU#2 were executed on a card installed in a PCIe gen 3.0 x16 slot, while WU#3 was executed on a card installed in a PCIe gen 3.0 x4 slot.
Both are installed in the same multi-GPU system, but due to mainboard limitations, only PCIe slot 0 runs at x16, while slots 1 and 2 run at x4, thus limiting performance for ADRIA tasks.
Conclusion: Execution times reflect not only a particular graphics card's own performance, but also its particular working conditions in the system where it is installed.

To put it another way, and to support the above observations: I have two similar GTX 1060 3GB cards, both power limited to 67 W.
The difference is the host CPU and PCIe slot technology.
Host 1:
https://www.gpugrid.net/show_host_detail.php?hostid=483378
Fitted with a 10 W fanless CPU and a PCIe 2.0 x16 slot, x2 capable. Rated PCIe throughput is 0.8 GB/s.
GPU RAC - 254,000 (approx)

Host 2:
https://www.gpugrid.net/show_host_detail.php?hostid=483296
Fitted with a 35 W Athlon CPU and a PCIe 3.0 x16 slot, x4 capable (limited by the CPU, not the motherboard). Rated PCIe throughput is 3.94 GB/s.
GPU RAC - 268,000 (approx)

So processor and PCIe revision affect the output by 14,000 RAC, a 5% performance loss for the less capable host.
The difference is enough to be noticed, but not enough to be disappointing. Obviously the faster the card, the bigger the loss, so it is best to put low-performance GPUs on low-performance hardware.
This comparison highlights the importance of matching the abilities of the host to the GPU.
Host 2 is still a very modest build, yet enough for a GTX 1060 3GB.

On a side note (for GPUGrid ACEMD3 work units):
GTX 750 Ti PCIe throughput 1.2 GB/s (max). PCIe 2.0 x4 capable slot recommended.
GTX 960 PCIe throughput 2.2 GB/s (max). PCIe 3.0 x4 capable slot recommended.
GTX 1060 3GB PCIe throughput 2.5 GB/s (max). PCIe 3.0 x4 capable slot recommended.
GTX 1650 Super and GTX 1660 Super PCIe throughput 5.5 GB/s (max). PCIe 3.0 x8 capable slot recommended.
As Ian&Steve C. has stated here:
http://www.gpugrid.net/forum_thread.php?id=5194&nowrap=true#55710
high-end cards are also quite happy with PCIe 3.0 x8 capable slots.

As a point of interest, ACEMD2 work unit PCIe throughput was higher, in some cases more than twice the above quoted figures.
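
(A quick cross-check of these slot recommendations against the per-lane bandwidth math posted earlier in the thread; the demand figures are the ones quoted above, and the per-lane rate is an approximation.)

# Peak ACEMD3 PCIe demand per card (GB/s), as quoted in the post above.
DEMAND_GBPS = {
    "GTX 750 Ti": 1.2,
    "GTX 960": 2.2,
    "GTX 1060 3GB": 2.5,
    "GTX 1650 Super": 5.5,
    "GTX 1660 Super": 5.5,
}
for card, need in DEMAND_GBPS.items():
    # Smallest PCIe 3.0 width (at ~0.985 GB/s per lane) covering the demand.
    lanes = next(w for w in (1, 2, 4, 8, 16) if w * 0.985 >= need)
    print(f"{card}: ~{need} GB/s peak -> PCIe 3.0 x{lanes} or wider")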

Pop Piasa
Message 55841 - Posted: 29 Nov 2020 | 2:27:35 UTC - in response to Message 55811.

If anybody has time, look at my Host 514522. https://www.gpugrid.net/show_host_detail.php?hostid=514522

It's a
Z-270A ASUS Prime MB,
i7-7700K,
G.Skill DDR4-3200 (2x8GB),
Samsung Evo 970 1st gen. 500GB PCIE3 M.2 SSD (Win10 pro),
(2) ASUS Dual OC GTX 1060 3GB GPUs,

According to Afterburner's HW Monitor, my GPU1 (x16 slot) draws around 90-97 watts depending on the WU. GPU2, an identical (newer) card in the x8 slot below, draws 95-105 W.
The onboard Intel graphics (GPU3) handles the Windows display, leaving both GPUs 1 and 2 unhindered (or so I have assumed).

The combined RAC of the 3GB GTX 1060s is presently hovering around 589,000 (with 5 threads of WCG apps running on the CPU). I find that encouraging, yet I wonder what factors contributed the most to it.

Does the slot have anything to do with the wattage? I know these cards get additional power from the PSU, but what I see here mystifies me. Should I assume that the GPU drawing more power is doing more work?



rod4x4
Message 55843 - Posted: 29 Nov 2020 | 4:45:58 UTC - in response to Message 55841.
Last modified: 29 Nov 2020 | 4:46:30 UTC

Pop Piasa wrote:
Does the slot have anything to do with the wattage? I know these cards get additional power from the PSU, but what I see here mystifies me. Should I assume that the GPU drawing more power is doing more work?

I have been testing a script for multi-GPU hosts. Random hosts were tested; as it happens, host 514522 was among them. The data below was collected 24th November 2020 at 7:00 UTC.

BOINC Device  No. Tasks  Runtime TTL  Credit TTL  Average Credit  Average Runtime
----------------------------------------------------------------------------------
           0        139      603,318   2,020,430         289,342            4,340
           1        142      610,021   2,114,351         299,465            4,296
----------------------------------------------------------------------------------


You will need to confirm which card is Device 0 and Device 1 by looking in the BOINC Manager.

When two GPUs are fitted, I find the GPU closest to the CPU (generally PCIe slot 1 on consumer motherboards) will run hotter. Nvidia code on the GPU will start to reduce the max clock of the GPU by 13 MHz once the GPU reaches 55 degrees, and further reduces it by 13 MHz every 5 degrees thereafter. This may explain the lower power draw on PCIe slot 1. Have you checked the temperatures of the GPUs?
I have also observed that WUs have varying power draw at different stages of processing, so it is hard to make a direct comparison just by power draw.
The stats indicate that the cards are very close in performance. There is only a 3.4% difference in output, so this is well within normal variances that could be caused by WU variations, silicon lottery and thermal throttling.
So overall, performance is not too bad considering all these factors.
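
(A small illustration of the throttling rule described above; the 13 MHz steps at 55 degrees and every 5 degrees after are as stated in the post, and actual boost behaviour varies by card and driver.)

def clock_offset_mhz(temp_c):
    # -13 MHz once the GPU reaches 55 C, then a further -13 MHz per 5 C.
    if temp_c < 55:
        return 0
    return -13 * (1 + (temp_c - 55) // 5)

for t in (50, 55, 60, 65, 70):
    print(f"{t} C -> {clock_offset_mhz(t)} MHz")  # 0, -13, -26, -39, -52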

Pop Piasa
Message 55847 - Posted: 29 Nov 2020 | 17:06:28 UTC - in response to Message 55843.

rod4x4 wrote:
Have you checked the temperatures of the GPUs?
I have also observed that WUs have varying power draw at different stages of processing, so it is hard to make a direct comparison just by power draw.


My GPU 0 (upper slot) runs a steady 65 C at 77% power and GPU 1 in the lower slot runs a steady 57 C at 82% power, so the temperature-induced power limiting seems consistent with what you told me. I've tried increasing the fan curves, but it had no effect on lowering temps.

The CPUID hardware monitor shows a constant reliable voltage limit and occasional power limits on both GPUs.

My host 556495 shows a similar scenario for my two Alienware GTX 1650s.

This all makes more sense to me now, thanks as always for the trans-global tutoring.

rod4x4
Message 55851 - Posted: 30 Nov 2020 | 0:14:42 UTC - in response to Message 55847.

My host 556495 shows a similar scenario for my two Alienware GTX 1650s.


Host 556495 data grabbed same time as your other host.

BOINC Device  No. Tasks  Runtime TTL  Credit TTL  Average Credit  Average Runtime
----------------------------------------------------------------------------------
           0        117      604,577   1,883,061         269,108            5,167
           1        113      603,400   1,848,528         264,688            5,340
----------------------------------------------------------------------------------

Pop Piasa
Message 55853 - Posted: 30 Nov 2020 | 3:27:30 UTC - in response to Message 55851.
Last modified: 30 Nov 2020 | 3:42:31 UTC

Host 556495 data grabbed same time as your other host


Dude, you rule!
(Learned that one working on a college campus during the 80's) 😎
Thanks a million!

I should add that device 1 on that machine is the Windows display GPU, so everything jibes with what you said.

Incidentally, I took the side panel off of the 1060 3GB host and set the fan curve based on what you wrote about frequency throttling. My GPU temps have dropped to 60 (upper) and 55, with the fans around 80% and 65%.

Pop Piasa
Message 55854 - Posted: 30 Nov 2020 | 4:34:27 UTC

By the way rod4x4, can a script be written to send Adria's PeanutButterBANDIT WUs on to somebody else?
Just kidding, but they really seem to put a dent in the RAC of the host that's running them.

rod4x4
Message 55855 - Posted: 30 Nov 2020 | 9:30:32 UTC - in response to Message 55854.

I don't mind ADRIA tasks. There is always the possibility they could be used for future Scientific publications and go toward a badge.

Pop Piasa
Message 55860 - Posted: 30 Nov 2020 | 16:39:27 UTC - in response to Message 55855.

You're right. It all comes out even in the wash, anyway.

I've realized just now that errors are hurting worse. My 750 Ti card might be dying, as I've underclocked it and it still produces "Error invoking kernel: CUDA_ERROR_LAUNCH_FAILED" errors.

Ian&Steve C.
Message 55861 - Posted: 30 Nov 2020 | 20:53:34 UTC - in response to Message 55860.
Last modified: 30 Nov 2020 | 20:54:22 UTC

You're right. It all comes out even in the wash, anyway.

I've realized just now that errors are hurting worse. My 750ti card might be dying, as I've underclocked it and it still produces "Error invoking kernel: CUDA_ERROR_LAUNCH_FAILED" errors.


Could be a software/OS/driver issue.

I would first try booting the system into safe mode, then use a utility called Display Driver Uninstaller (DDU) to completely remove the old driver install and all traces of it from your system. Go into the settings of DDU and tick the check mark for "prevent windows from installing drivers with windows update" or some similar verbiage. Then boot back into Windows and do a fresh install of the driver from the NVIDIA installer.

Pop Piasa
Message 55866 - Posted: 1 Dec 2020 | 16:00:06 UTC - in response to Message 55861.

Thanks Ian, the card is only 4 years old (pulled from a Dell and sold to me for $60 on eBay) and stays reasonably cool (55-60C) so I hope you found the problem for me. I'll sure try it.
Sorry for straying off topic here.

rod4x4
Message 55894 - Posted: 8 Dec 2020 | 3:38:32 UTC
Last modified: 8 Dec 2020 | 3:41:52 UTC

Results of a survey of the top 1500 hosts from the Volunteer Tab performed 7th December 23:20 UTC.

4 hosts have no NVIDIA GPU (removed), hence are not included in the statistics below
243 hosts have multiple GPUs and are not included in the statistics below

For the remaining 1253 hosts, all tasks (136,565 tasks), runtimes and credit were captured for the last 7 days (GPUGrid retains task history for 7 days for each host).

The Average Credit Per Day has been calculated from the Runtime Total and Credit Total.
NOTE: this is different from BOINC Recent Average Credit (RAC) which is computed as an exponentially weighted average with a half-life of one week. See here for further details on BOINC RAC calculation (point 4.1 Total and recent average credit): https://boinc.berkeley.edu/boinc_papers/credit/text.php
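
(For reference, the table's Ave Credit Per Day column is consistent with scaling credit-per-runtime-second up to a day; a minimal sketch, checked against the RTX 2080 Ti row below.)

def avg_credit_per_day(credit_total, runtime_total_s):
    # Credit earned per second of GPU runtime, scaled to 86,400 s/day.
    return credit_total / runtime_total_s * 86400

print(round(avg_credit_per_day(191288289, 21721618)))  # 760869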



GPU                    Tasks  Runtime Total   Credit Total  Ave Credit/Day
---------------------------------------------------------------------------
Quadro RTX 8000          186        240,376      2,783,973       1,000,663
TITAN RTX                 37         64,517        690,412         924,587
Tesla V100-PCIE          257        405,802      3,977,167         846,785
Tesla V100-SXM2           70        119,763      1,131,145         816,036
RTX 2080 Ti            12018     21,721,618    191,288,289         760,869
RTX 2080 SUPER          4967      9,299,766     78,140,834         725,972
RTX 2080                6626     13,082,811    104,095,590         687,456
TITAN V                 1271      2,501,858     19,404,202         670,111
RTX 2070 SUPER          7394     15,517,689    115,909,496         645,365
TITAN Xp COLLECTORS      144        354,447      2,461,523         600,021
RTX 2070                6497     15,199,130    103,039,314         585,731
RTX 2060 SUPER          4000      9,344,008     62,882,051         581,443
RTX 2080 Max-Q            18         41,134        271,274         569,798
Quadro RTX 4000          425      1,021,511      6,455,516         546,011
GTX 1080 Ti            10696     26,533,728    166,949,068         543,625
TITAN X                  420      1,103,556      6,922,486         541,978
GTX 1080               12842     36,433,467    202,963,057         481,316
GTX 1070 Ti             4638     13,930,837     73,509,793         455,913
Tesla P100-PCIE-12GB     121        372,903      1,929,496         447,056
GTX 1660 Ti             5381     16,896,465     84,379,116         431,472
GTX 1660 SUPER          4290     13,912,564     67,340,234         418,197
Quadro P5000              28         96,148        459,537         412,947
RTX 2060 Max-Q           164        575,928      2,737,207         410,632
RTX 2060                4852     16,085,783     76,292,504         409,782
GTX 1070               11705     38,757,722    183,649,491         409,398
Quadro P4200             123        454,185      2,031,801         386,511
GTX 1660 Ti Max-Q        122        460,864      1,989,449         372,970
GTX 1660                1797      6,878,967     28,788,787         361,588
Quadro M6000             143        564,256      2,307,289         353,297
GTX TITAN X              142        557,443      2,217,098         343,636
GTX 980 Ti              1157      4,547,818     17,924,961         340,541
GTX 1650 SUPER          2317      9,300,376     36,244,346         336,708
Quadro P4000            1159      4,800,830     18,449,240         332,029
GTX 1070 Max-Q           136        568,010      1,986,588         302,180
GTX 980                 1153      5,252,150     18,131,013         298,263
Tesla M60                489      2,263,999      7,669,959         292,705
GTX 1060 6GB            7045     33,339,604    111,219,391         288,226
GTX 1060                 424      2,056,828      6,652,230         279,436
GTX 1060 3GB            5482     27,201,014     87,760,763         278,759
Quadro P2200              74        391,253      1,216,512         268,641
GTX 1650 Ti              152        791,120      2,452,360         267,828
GTX 1650                3597     18,887,170     56,498,908         258,456
RTX 3070                  98        497,796      1,435,975         249,235
GTX 970                 4225     23,452,245     66,942,386         246,621
GTX TITAN Black          195      1,050,341      2,935,558         241,476
Quadro RTX 3000          107        562,639      1,555,764         238,906
GTX 980M                 177      1,063,796      2,725,277         221,343
P104-100                 202      1,282,517      3,280,963         221,030
Tesla K40c                87        578,860      1,429,699         213,395
Quadro P2000             264      1,784,853      4,167,969         201,760
GTX TITAN                 55        357,104        815,838         197,389
Tesla K20Xm               61        450,649      1,018,067         195,187
GTX 1050 Ti             2618     19,892,988     41,692,955         181,082
Tesla K20c                51        393,285        806,728         177,228
GTX 960                 1712     12,987,938     26,431,181         175,829
Quadro K6000              67        544,992      1,035,325         164,135
GTX 1050                 891      7,698,319     14,478,710         162,498
GTX 780                   67        567,643      1,056,630         160,828
GTX 770                   72        568,897        977,554         148,464
GTX 950                  300      2,930,005      4,871,011         143,636
GTX 680                   40        437,665        721,652         142,462
Quadro M4000             222      2,109,680      3,402,979         139,366
Quadro M2000              55        558,615        789,368         122,090
Quadro P1000             141      1,718,019      2,376,209         119,501
GTX 660 Ti                54        564,777        773,448         118,323
GTX 760                   47        578,403        739,694         110,493
GTX 750 Ti               169      2,265,560      2,796,689         106,655
GT 1030                   29        452,402        479,844          91,641
---------------------------------------------------------------------------
TOTAL                136,565    457,279,406  2,152,940,943         406,784
===========================================================================

Ian&Steve C.
Message 55895 - Posted: 8 Dec 2020 | 3:43:23 UTC - in response to Message 55894.
Last modified: 8 Dec 2020 | 3:47:04 UTC

Since it's well known that the Windows app performs about 15-20% worse than the Linux app, you should probably split these statistics by OS/app to differentiate Windows from Linux.

I also see that you have a 3070 in that list, which shouldn't be there as the apps don't work for the Ampere cards yet and only produce errors. Whatever host you picked that one up from likely had history data from a different model and recently swapped it out for a 3070 (causing BOINC to report it as such).

rod4x4
Message 55896 - Posted: 8 Dec 2020 | 4:15:50 UTC - in response to Message 55895.

Ian&Steve C. wrote:
I also see that you have a 3070 in that list, which shouldn't be there as the apps don't work for the Ampere cards yet and only produce errors. Whatever host you picked that one up from likely had history data from a different model and recently swapped it out for a 3070 (causing BOINC to report it as such).

Correct. This host had a supported GPU at one stage returning valid tasks. The supported GPU has since been removed and an RTX 3070 is now installed.
I have left it in the list as I did not wish to "edit" the results, and also as a point of interest.

Ian&Steve C. wrote:
Since it's well known that the Windows app performs about 15-20% worse than the Linux app, you should probably split these statistics by OS/app to differentiate Windows from Linux.

I have considered that. It could prove interesting on the Linux v Windows discussion.

rod4x4
Message 55897 - Posted: 8 Dec 2020 | 4:17:40 UTC
Last modified: 8 Dec 2020 | 4:19:53 UTC

From the data capture performed 7th December 23:20 UTC

1,275 ADRIA_PBBandit tasks were completed

GPU                   Tasks  Average Runtime
---------------------------------------------
Quadro RTX 8000           1            8,678
Tesla V100-SXM2           1           11,163
Tesla V100-PCIE           3           11,687
RTX 2080 Ti             118           11,713
RTX 2080 SUPER           43           12,115
RTX 2080                 64           12,145
TITAN V                   9           13,206
RTX 2060 SUPER           32           13,859
RTX 2070                 57           13,990
TITAN Xp COLLECTORS       1           14,349
Quadro RTX 4000           3           14,906
TITAN X                   4           15,397
GTX 1080 Ti              77           15,550
GTX 1080                 91           16,267
RTX 2060                 44           16,540
RTX 2070 SUPER           66           16,567
GTX 1660 Ti              48           17,062
GTX 1660 SUPER           32           18,222
GTX 1070 Ti              26           18,297
Tesla P100-PCIE-12GB      2           18,382
RTX 2060 Max-Q            2           18,817
GTX 1070                 88           19,079
GTX 1660                 18           19,683
GTX 1650 SUPER           17           19,689
GTX 1660 Ti Max-Q         2           19,716
GTX 980 Ti                9           22,213
Quadro P4000             10           22,728
Quadro M6000              1           23,953
GTX 1060 6GB             44           25,759
GTX 1060                  2           25,906
Tesla M60                 3           25,981
GTX 1650                 32           27,709
GTX 1060 3GB             31           27,791
GTX 980                  10           28,213
GTX 970                  34           32,016
GTX 980M                  1           34,700
Tesla K20Xm               1           39,487
GTX 1050 Ti              21           39,611
GTX TITAN                 1           40,109
GTX 960                   7           41,829
GTX 1050                  9           45,844
Quadro P1000              1           47,940
Quadro M4000              4           51,421
GTX 950                   2           53,874
Quadro K6000              2           56,584
GTX 750 Ti                1           65,783
---------------------------------------------


Another ADRIA task to look out for (only 1 task completed in last 7 days):

GPU       Task Name                                                                       Runtime   Credit
------------------------------------------------------------------------------------------------------------
GTX 1070  e59s39_e58s51p0f396-ADRIA_FOLDACP_crystal_ss_contacts_50_acp_0-0-1-RND1707_1     19,529  181,050
------------------------------------------------------------------------------------------------------------

Ian&Steve C.
Message 55898 - Posted: 8 Dec 2020 | 4:35:07 UTC - in response to Message 55896.

Maybe if the admins can get a new app with Ampere support released sometime soon, you can get some real data from the 30-series cards ;P. I've got a 3070 host waiting (with a 1660 Super secondary card running GPUG, and the 3070 @Einstein).

ServicEnginIC
Message 55901 - Posted: 8 Dec 2020 | 17:19:23 UTC - in response to Message 55894.
Last modified: 8 Dec 2020 | 17:24:56 UTC

rod4x4:
It's awesome how, for the low-end GPUs, your study is matching data previously observed on GPUGrid's original Performance tab.

Quoting my own comment from one year ago, in the "Low power GPUs performance comparative" thread:

I've made a kind request for Performance tab to be rebuilt
At the end of this tab there was a graph named GPU performance ranking (based on long WU return time)
Currently this graph is blank.
When it worked, it showed a very useful GPUs classification according to their respective performances at processing GPUGrid tasks.
Just GT 1030 sometimes appeared at far right (less performance) in the graph, and other times appeared as a legend out of the graph.
GTX 750 Ti always appeared borderline at this graph, and GTX 750 did not.
I always considered it a kind invitation not to use "out of graph" GPUs...

Both GTX 750 Ti and GT 1030 are still shown respectively at penultimate and last position in your current general list, and GTX 750 is not.
When the Performance tab broke, I think Titan Xp Collectors Edition was the card leading the classification (?).
Obviously, there are now a lot of new RTX 20XX cards at the highest positions, and new GTX 16XX Turing cards interleaved at the medium positions, while waiting for Ampere support to be available...

This time, I've taken screenshots of your list, as I consider it a very solid performance comparison of currently working CUDA cards.
Good job!

Pop Piasa
Message 55905 - Posted: 8 Dec 2020 | 21:13:08 UTC
Last modified: 8 Dec 2020 | 21:32:29 UTC

Can we retitle this thread to "Performance Tab Temporarily Located Here", maybe?

Excellent that you calculated the avg. per diem credit instead of plugging in BOINC RAC figures.

Bravo, rod4x4!

rod4x4
Message 55907 - Posted: 9 Dec 2020 | 0:09:03 UTC - in response to Message 55901.

ServicEnginIC wrote:

Both GTX 750 Ti and GT 1030 are still shown respectively at penultimate and last position in your current general list, and GTX 750 is not.

Yes, I wanted to include at least the GTX 750 Ti, as there are still a few of these GPUs contributing. Increasing the host count to 1500 includes these popular modest performers. I am disappointed that the GTX 750 missed the cut; it was very close!

I think that Titan Xp Collectors Edition was the card leading the classification (?)

I recall the same.



Pop Piasa wrote:
Can we retitle this thread to "Performance Tab Temporarily Located Here", maybe?

I am not sure if the thread title can be changed. Anyone know how?

Excellent that you calculated the avg. per diem credit instead of plugging in BOINC RAC figures.

Yeah, the RAC can be misleading, especially when hosts are stopped and started for periods of time.

rod4x4
Message 55910 - Posted: 9 Dec 2020 | 4:03:15 UTC
Last modified: 9 Dec 2020 | 4:20:48 UTC

Performance efficiency comparison.

The data shows the performance of each GPU model when Average Credit/Day is compared to reference TDP Watts.

Please let me know if there is an error in the TDP Watts column. I have tried to get the correct reference TDP Watts for each GPU Model.

Rank  GPU Model             Tasks  Ave Credit/Day  TDP Watts  Credit/TDP
-------------------------------------------------------------------------
   1  GTX 1660 Ti            5381         431,472        120       3,596
   2  GTX 1650               3597         258,456         75       3,446
   3  Quadro RTX 4000         425         546,011        160       3,413
   4  Quadro RTX 8000         186       1,000,663        295       3,392
   5  GTX 1650 SUPER         2317         336,708        100       3,367
   6  GTX 1660 SUPER         4290         418,197        125       3,346
   7  RTX 2060 SUPER         4000         581,443        175       3,323
   8  TITAN RTX                37         924,587        280       3,302
   9  RTX 2080               6626         687,456        215       3,197
  10  RTX 2070               6497         585,731        185       3,166
  11  Quadro P4000           1159         332,029        105       3,162
  12  GT 1030                  29          91,641         30       3,055
  13  RTX 2080 Ti           12018         760,869        250       3,043
  14  GTX 1660               1797         361,588        120       3,013
  15  RTX 2070 SUPER         7394         645,365        215       3,002
  16  Quadro RTX 3000         107         238,906         80       2,986
  17  RTX 2080 SUPER         4967         725,972        250       2,904
  18  Tesla V100-PCIE         257         846,785        300       2,823
  19  GTX 1070              11705         409,398        150       2,729
  20  Tesla V100-SXM2          70         816,036        300       2,720
  21  Quadro P2000            264         201,760         75       2,690
  22  TITAN V                1271         670,111        250       2,680
  23  GTX 1080              12842         481,316        180       2,674
  24  GTX 1070 Max-Q          136         302,180        115       2,628
  25  RTX 2060               4852         409,782        160       2,561
  26  Quadro P1000            141         119,501         47       2,543
  27  GTX 1070 Ti            4638         455,913        180       2,533
  28  GTX 1050 Ti            2618         181,082         75       2,414
  29  GTX 1060 6GB           7045         288,226        120       2,402
  30  TITAN Xp COLLECTORS     144         600,021        250       2,400
  31  GTX 1060                424         279,436        120       2,329
  32  GTX 1060 3GB           5482         278,759        120       2,323
  33  Quadro P5000             28         412,947        180       2,294
  34  GTX 1080 Ti           10696         543,625        250       2,175
  35  TITAN X                 420         541,978        250       2,168
  36  GTX 1050                891         162,498         75       2,167
  37  GTX 980M                177         221,343        122       1,814
  38  GTX 980                1153         298,263        165       1,808
  39  Tesla P100-PCIE-12GB    121         447,056        250       1,788
  40  GTX 750 Ti              169         106,655         60       1,778
  41  GTX 970                4225         246,621        148       1,666
  42  Quadro M2000             55         122,090         75       1,628
  43  GTX 950                 300         143,636         90       1,596
  44  GTX 960                1712         175,829        120       1,465
  45  Quadro M6000            143         353,297        250       1,413
  46  GTX TITAN X             142         343,636        250       1,375
  47  GTX 980 Ti             1157         340,541        250       1,362
  48  Tesla M60               489         292,705        225       1,301
  49  Quadro M4000            222         139,366        120       1,161
  50  GTX TITAN Black         195         241,476        250         966
  51  Tesla K40c               87         213,395        245         871
  52  Tesla K20Xm              61         195,187        235         831
  53  GTX TITAN                55         197,389        250         790
  54  GTX 660 Ti               54         118,323        150         789
  55  Tesla K20c               51         177,228        225         788
  56  GTX 680                  40         142,462        195         731
  57  Quadro K6000             67         164,135        225         729
  58  GTX 760                  47         110,493        170         650
  59  GTX 770                  72         148,464        230         645
  60  GTX 780                  67         160,828        250         643
-------------------------------------------------------------------------


NOTE:
Data captured 7th December 2020, 23:20 UTC.
Data is sourced from completed task runtime and credit gathered from the first 1500 hosts in the Volunteer tab.
243 hosts with multiple GPUs have been excluded.
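
For reference, the Credit/TDP column is simply the average credit per day divided by the reference TDP watts, then ranked. A minimal Python sketch using a few rows copied from the table above (it reproduces the table's figures up to rounding):

    # Rank GPU models by average credit per day per TDP watt.
    # Only a handful of rows are reproduced here, taken from the table above.
    gpus = {
        # model: (avg_credit_per_day, reference_tdp_watts)
        "GTX 1660 Ti": (431_472, 120),
        "GTX 1650":    (258_456, 75),
        "RTX 2080 Ti": (760_869, 250),
        "GTX 1080 Ti": (543_625, 250),
    }

    ranked = sorted(gpus.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    for rank, (model, (credit, watts)) in enumerate(ranked, start=1):
        print(f"{rank}  {model:<12}  {credit / watts:,.0f} credit/day per TDP watt")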

Ian&Steve C.
Avatar
Send message
Joined: 21 Feb 20
Posts: 1078
Credit: 40,231,533,983
RAC: 27
Level
Trp
Scientific publications
wat
Message 55911 - Posted: 9 Dec 2020 | 4:06:23 UTC - in response to Message 55910.

the 1650 is only a 75W TDP card. (but in my experience it really only pulls 60-65W while running)
____________

rod4x4
Send message
Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 55912 - Posted: 9 Dec 2020 | 4:22:04 UTC - in response to Message 55911.

the 1650 is only a 75W TDP card. (but in my experience it really only pulls 60-65W while running)

Thanks. Post modified to reflect 75W TDP.

rod4x4
Send message
Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 56134 - Posted: 22 Dec 2020 | 8:09:53 UTC
Last modified: 22 Dec 2020 | 8:11:12 UTC

Some random stats on successfully completed Experimental RAIMIS tasks.

- Data was captured 22nd December 2020, 9:00am UTC
- Top 1600 hosts (including hosts with multiple GPUs) from the Volunteer/Hosts tab were checked
- Processing time was lost during the period checked due to a Disk Full error at GPUGRID
- These Experimental tasks currently only run on Linux hosts.

NOTE:
All stats are informational only and have no real meaning, as the tasks are still experimental with numerous bugs, but they could serve as a baseline to gauge improvements in future task performance.


Random Stat...1 - Top 10 longest tasks

------------------------------------------------
  Host    GPU           Task        Runtime (s)
------------------------------------------------
520532    GTX 1650      31798001      345,523
559691    Quadro P2200  31968348      142,768
547222    GTX 1070      32003597      107,170
542712    GTX 1050 Ti   31870488       91,511
468073    GTX 1060 3GB  31977131       85,727
222916    GTX 1080      32016114       85,289
480458    GTX 1650      31970010       64,297
542712    GTX 1050 Ti   31983689       63,448
569938    P106-090      32023412       63,446
514126    GTX 1050 Ti   32012945       61,479
------------------------------------------------


Random Stat...2 - Top 10 Tasks with highest credit awarded
----------------------------------------------------------
  Host    GPU          Task        Runtime (s)     Credit
----------------------------------------------------------
571155    GTX 1080 Ti  31925336       20,992     1,732,384
547331    RTX 2080 Ti  32033047       16,164     1,729,828
524633    RTX 2080 Ti  31944658       16,196     1,728,331
570799    GTX 1080 Ti  31925399       21,391     1,725,398
570799    GTX 1080 Ti  32024878       21,380     1,724,519
570799    GTX 1080 Ti  32031416       21,375     1,724,124
563978    RTX 2080 Ti  32043816       17,278     1,724,060
570799    GTX 1080 Ti  31937624       21,363     1,723,130
558436    GTX 1080     31979414       25,676     1,722,169
563979    GTX 1660 Ti  31942085       22,385     1,720,240
----------------------------------------------------------

NOTE:
These tasks show wild variation in credit. If you have been following the various forum threads, you will have noted that this is attributed to the different credit formula used by TONI, and to how older BOINC client software on hosts with later-model GPUs interacts with the credit formula applied. This is one of the bugs that TONI will be correcting in due course.


Random Stat...3 - Fastest runtime by GPU model
-----------------------------
GPU Model         Runtime (s)
-----------------------------
RTX 2080 Ti            8,450
RTX 2080 SUPER        10,746
RTX 2070 SUPER        11,163
GTX 1080 Ti           11,426
RTX 2080              11,518
RTX 2070              11,528
TITAN X               12,388
GTX 1660 Ti           14,477
RTX 2060              14,932
GTX 1080              15,037
GTX 1660 SUPER        15,959
GT 1030               16,561
GTX 1070 Ti           17,231
GTX 1070              18,117
RTX 3070              19,987
GTX 1650 SUPER        20,837
GTX 1650              23,697
GTX 1060 6GB          26,070
GTX 1060 3GB          26,181
Quadro P2200          32,514
GTX 1050 Ti           38,385
P106-090              63,446
-----------------------------


Random Stat...4 - Summary
Total tasks completed last 7 days - 1,219
Tasks awarded 20.83 credit - 147

Ian&Steve C.
Avatar
Send message
Joined: 21 Feb 20
Posts: 1078
Credit: 40,231,533,983
RAC: 27
Level
Trp
Scientific publications
wat
Message 56140 - Posted: 22 Dec 2020 | 15:59:15 UTC - in response to Message 56134.


Tasks awarded 20.83 credit - 147


FYI, this looks like it was adjusted. The new "penalty" value for tasks that run too long or have a peak_flops value too high is now 34,722.22.

It appears to be a direct consequence of the FLOPS change from 3,000 GFLOP to 5,000,000 GFLOP:

(5,000,000 / 3,000) * 20.83 = 34,716.66666
There is likely a rounding difference somewhere in there; the credit reward gets truncated/rounded at 2 decimal places.

As a reminder, you will hit this "penalty" if your total credit reward is calculated to be greater than about 1,750,000 or so (I haven't seen higher than this awarded).
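
For the curious, a quick Python sketch of that scaling. Treating the unrounded old penalty as 20.8333... (i.e. 125/6) is my assumption, not confirmed project code, but it reproduces the observed value exactly:

    OLD_GFLOP, NEW_GFLOP = 3_000, 5_000_000

    # Using the truncated 20.83 figure:
    print(round(NEW_GFLOP / OLD_GFLOP * 20.83, 2))    # 34716.67
    # Using an assumed unrounded 125/6 (= 20.8333...):
    print(round(NEW_GFLOP / OLD_GFLOP * 125 / 6, 2))  # 34722.22, the observed penalty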

____________

johnnymc
Avatar
Send message
Joined: 7 Apr 11
Posts: 6
Credit: 92,079,090
RAC: 0
Level
Thr
Scientific publications
watwat
Message 56457 - Posted: 12 Feb 2021 | 11:47:34 UTC - in response to Message 56140.


Tasks awarded 20.83 credit - 147

As a reminder, you will hit this "penalty" if your total credit reward is calculated to be greater than about 1,750,000 or so (I haven't seen higher than this awarded).

Interesting (and duly noted!).
____________
Life's short; make fun of it!

rod4x4
Send message
Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 56681 - Posted: 24 Feb 2021 | 5:43:06 UTC

ADRIA D3RBandit task runtime summary

The following table has been created from data gathered from the top 2000 hosts in the Volunteer tab.

NOTE
- Hosts with multiple GPUs have been included, but only tasks from the primary GPU (Device 0) have been used.
- Data was captured on 24th February 2021 at 2:30 UTC.
- Task data history is only retained on the GPUGRID website for 7 days.

---------------------------------------------------------------------------
GPU Model               Tasks  Min Runtime (s)  Max Runtime (s)  Avg Runtime (s)
---------------------------------------------------------------------------
Tesla V100-PCIE-16GB      14       36,783          37,174           36,954
Tesla V100-SXM2-16GB      13       33,740          64,047           37,131
Quadro RTX 8000            9       37,150          37,917           37,539
Quadro RTX 6000           99       38,982          39,627           39,282
TITAN RTX                 13       41,810          42,495           42,129
RTX 2080 Ti              647       34,289         105,389           42,339
Tesla K20c                 1       44,447          44,447           44,447
TITAN V                   62       43,045          51,784           44,511
Tesla P100-PCIE-16GB       4       33,890          64,758           49,313
RTX 2080 SUPER           198       44,573          96,631           51,703
Quadro RTX 5000           11       52,021          54,685           53,725
TITAN Xp COLLECTORS        8       51,121          54,856           53,886
RTX 2080                 198       48,018          97,757           54,340
RTX 2070 SUPER           341       40,557         167,045           61,616
TITAN Xp                  15       52,890         102,468           61,981
Quadro P6000              11       63,102          84,988           65,836
RTX 2070                 215       45,365         118,366           68,784
RTX 2060 SUPER           180       61,782         111,505           70,187
GTX 1080 Ti              585       50,874         173,266           70,192
TITAN X                   13       61,414          76,114           71,380
Quadro RTX 4000           29       69,730          73,761           71,728
P102-100                  18       71,570          74,791           73,225
RTX 2080 Max-Q             2       73,233          94,764           83,999
RTX 2060                 202       48,046         221,335           85,256
Quadro P5000               1       85,510          85,510           85,510
RTX 2070 Max-Q            10       82,591          93,110           88,273
GTX 1070 Ti              168       80,558         108,102           88,499
GTX 1080                 516       63,533         209,920           89,915
Tesla T4                  19       87,729          95,617           90,244
GTX TITAN X                2       91,966          91,967           91,967
GTX 1080 Max-Q             3       96,004          96,564           96,281
GTX 1660                  94       38,466         185,556           99,626
GTX 1660 Ti              237       68,602         215,451          100,538
GTX 1660 SUPER           186       44,711         223,533          101,408
Quadro P4200               5      100,808         103,304          101,958
GTX 1070                 417       81,494         262,030          102,459
P104-100                  11       96,978         127,386          103,239
GTX 980 Ti                36       90,147         134,389          106,204
Quadro M6000               5      110,645         114,824          113,218
GTX 1660 Ti Max-Q          3      114,656         115,622          115,272
GTX 1650 SUPER            78      121,145         141,728          126,961
GTX 980                   44      124,997         168,532          137,830
Quadro P4000              19      128,763         147,490          138,829
P106-100                   2      139,894         140,308          140,101
GTX 1060 6GB             301       57,570         316,408          143,301
GTX 780 Ti                 1      149,801         149,801          149,801
Tesla M60                 16      151,707         155,751          153,763
GTX 1060 3GB             218      136,615         219,800          155,129
GTX 1060                  12      150,483         178,826          162,666
GTX 970                  158      147,892         306,909          169,443
GTX 1650 Ti                6      149,952         174,363          169,541
Quadro P2200               2      165,741         174,773          170,257
GTX 1650                 112      152,170         272,321          170,546
GTX TITAN Black            6      165,385         216,105          181,113
Quadro T2000               1      181,122         181,122          181,122
GTX TITAN Z                3      178,855         214,005          190,949
GTX 1650 Max-Q             2      192,931         193,275          193,103
Quadro T1000               2      187,377         208,788          198,083
GTX 980M                  13      184,961         232,265          201,078
Quadro P2000              17      190,720         253,670          204,460
Tesla K40c                 1      204,651         204,651          204,651
GTX 1070 Max-Q             4      182,955         227,436          211,356
GTX 780                    1      212,820         212,820          212,820
Tesla K20Xm                3      213,293         214,263          213,682
GTX 1050 Ti              142      145,657         283,807          234,586
Quadro GP100               3      134,579         328,042          239,540
GTX 960                   51      234,437         313,895          253,283
RTX 2060 Max-Q             2      266,072         267,035          266,554
Quadro M4000              10      246,194         429,829          277,547
Quadro M3000M              1      279,761         279,761          279,761
GTX 1050                  49      252,179         405,266          279,836
GTX 770                    4      272,473         298,834          285,054
GTX 680                    2      290,274         291,802          291,038
P106-090                   9      246,129         581,381          295,744
GTX 965M                   1      296,178         296,178          296,178
Quadro K6000               1      317,140         317,140          317,140
Quadro P1000               6      297,653         387,845          318,040
GTX 950                    7      294,150         474,611          326,675
GTX 690                    2      323,967         344,216          334,092
GTX 670                    4      320,863         352,751          336,659
GTX 760                    5      350,828         403,577          371,863
GTX 660 Ti                 4      337,809         429,819          377,551
GTX 750 Ti                21      376,233         469,874          422,368
Quadro M1200               1      437,300         437,300          437,300
GTX 660                    2      457,242         471,218          464,230
Quadro K5000               1      464,747         464,747          464,747
Quadro K2200               1      465,326         465,326          465,326
Quadro P600                3      446,826         485,914          472,856
MX150                      1      497,959         497,959          497,959
Quadro M2000               1      498,830         498,830          498,830
GT 1030                   10      433,396         614,579          504,337
GTX 950M                   1      521,799         521,799          521,799
GTX 750                    5      498,398         543,604          525,458
GTX 960M                   3      434,429         813,818          574,086
GTX 650 Ti                 1      696,465         696,465          696,465
Quadro K620                1      771,520         771,520          771,520
GTX 745                    2      833,750         835,668          834,709
GTX 650                    1    1,014,161       1,014,161        1,014,161
---------------------------------------------------------------------------
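
For reference, a rough Python sketch of the per-model aggregation behind the table above. The rows here are made-up examples; the real data would come from each host's task list on the website:

    from collections import defaultdict

    rows = [
        # (gpu_model, runtime_seconds) -- hypothetical sample tasks
        ("RTX 2080 Ti", 34_289),
        ("RTX 2080 Ti", 42_100),
        ("GTX 1080 Ti", 70_192),
        ("GTX 1080 Ti", 50_874),
    ]

    runtimes_by_model = defaultdict(list)
    for model, runtime in rows:
        runtimes_by_model[model].append(runtime)

    # Sort by average runtime, fastest model first, as in the table.
    for model, runs in sorted(runtimes_by_model.items(),
                              key=lambda kv: sum(kv[1]) / len(kv[1])):
        print(f"{model:<12} {len(runs):>5} {min(runs):>9,} "
              f"{max(runs):>9,} {sum(runs) // len(runs):>9,}")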
