WU: OPM simulations


Skyler Baker
Joined: 19 Feb 16 · Posts: 19 · Credit: 140,656,383 · RAC: 0
Message 43533 - Posted: 24 May 2016, 18:53:12 UTC

Truthfully, I'm not entirely sure a rig with multiple GPUs can even be kept very cool. I keep my 980 Ti's fan profile at 50% and it tends to run at about 60-65 °C, but the heat makes my case fans and even my CPU cooler's fan crank up; I'd imagine a pair of them would get pretty toasty. I'm quite sure my fan setup could keep two below 70 °C, but it would sound like a jet engine and could not be kept in living quarters.
Retvari Zoltan
Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 1
Message 43542 - Posted: 25 May 2016, 8:30:25 UTC
Last modified: 25 May 2016, 8:40:57 UTC

The workunit with the worst history I've ever received:
2wf1R8-SDOERR_opm996-0-1-RND6142 - I received it after 15 days, 8 hours and 22 minutes:
1. Gagiman's desktop with an i5-4670 and a GTX 970: 24 errors in a row and 1 successful task (probably it's OK now)
2. [PUGLIA] kidkidkid3's desktop with a Core2 Quad Q9450 and a GTX 750 Ti: 29 user aborts and 4 successful tasks
3. Ralph M. Fay III's laptop with an i7-4600M and a GT 730M (1GB): 7 errors and 1 successful task (probably it's OK now)
4. Evan's desktop with an i5-3570K and a GTX 480: 35 successive immediate errors
5. Sean's desktop with an i7-4770K and a GTX 750 Ti: 10 errors and 2 successful tasks (probably it's OK now)
6. Megacruncher TSBT's desktop with an AMD FX-6300 and a GTX 580: 1 abort, 6 errors (simulation unstable) and 3 tasks not started by the deadline
7. Car a carn's desktop with an i7-3770 and a GTX 980 Ti: 2 successful tasks
The task actually succeeded on this host. It spent 10 days and 2 hours on this host alone.
8. shuras' desktop with an i5-2500K and a GTX 670: 1 user abort, 1 successful and 1 timed-out task
9. My desktop with an i7-980X and two GTX 980s: 41 successful, 2 user-aborted, 1 ghost (timed out) and 12 error tasks
I aborted this task after 5 hours, when I checked its history and noticed that it had already succeeded.
A note on the 12 errors on my host: these are leftovers from August & September 2013, March & September 2014 and March 2015, which should have been removed from the server long ago. All 12 errors are the result of bad batches.
Retvari Zoltan
Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 1
Message 43543 - Posted: 25 May 2016, 8:51:53 UTC - in response to Message 43476.

> ... we should find other means to reach the problematic contributors.
The project's minimum requirements should be made very clear right at the start (on the project's homepage, in the BOINC manager when a user tries to join the project, in the FAQ, etc.):
1. A decent NVIDIA GPU (GTX 760+ or GTX 960+)
2. No overclocking (later you can try, but read the forums first)
3. Other GPU projects are allowed only as backup (0 resource share) projects.
Tips about the above 3 points should be broadcast by the project as a notice on a regular basis. There should also be someone (or something) to email users who have unreliable hosts, or perhaps their usernames/hostnames should be broadcast as a notice.
I'd like to add the following:
4. Don't suspend GPU tasks while your computer is in use, or at least set the suspension timeout to 30 minutes. It's better to set up a list of exclusive apps (games, etc.) in the BOINC manager - see the sketch after this list.
5. If you don't have a high-end GPU and you switch your computer off daily, then GPUGrid is not for you.
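
For point 4, a minimal sketch of the client's cc_config.xml using the standard <exclusive_gpu_app> option (the executable names here are placeholders, not recommendations):

    <cc_config>
      <options>
        <!-- Suspend GPU computing only while these programs run,
             instead of suspending whenever the computer is in use. -->
        <exclusive_gpu_app>game.exe</exclusive_gpu_app>
        <exclusive_gpu_app>benchmark.exe</exclusive_gpu_app>
      </options>
    </cc_config>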

I strongly recommend that the GPUGrid staff broadcast the list of the worst hosts and the tips above every month (for as long as needed).
Stefan
Project administrator · developer · tester · scientist
Joined: 5 Mar 13 · Posts: 348 · Credit: 0 · RAC: 0
Message 43545 - Posted: 25 May 2016, 9:08:38 UTC

Wow, OK, this thread derailed. We are supposed to keep the discussion here related just to the specific WUs, even though I'm sure it's a very productive discussion in general :)
I am a bit short of time right now, so I won't split threads and will just open a new one, because I will resend the OPM simulations soon.

Right now I am trying to look into the discrepancies between projected runtimes and real runtimes, as well as the credits, to hopefully do better this time.

Excluding bad hosts is unfortunately not doable: the BOINC queuing system apparently is pretty stupid, and if I sent my WUs with high priority it would exclude all of Gerard's WUs until all of mine finished :(
Retvari Zoltan
Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 1
Message 43546 - Posted: 25 May 2016, 9:23:33 UTC - in response to Message 43545.
Last modified: 25 May 2016, 9:25:07 UTC

> Wow, OK, this thread derailed.
Sorry, but that's the way it goes :)

> We are supposed to keep the discussion here related just to the specific WUs...
It's good to have confirmation that you are reading this :)

> I won't split threads and will just open a new one, because I will resend the OPM simulations soon.
Will there be very long ones (~18-20 hours on a GTX 980 Ti)? In that case I will reduce my cache to 0.03 days.

> Right now I am trying to look into the discrepancies between projected runtimes and real runtimes...
We'll see :) I keep my fingers crossed.

> Excluding bad hosts is unfortunately not doable...
This leaves us with broadcasting as the only way to make things better.
Betting Slip
Joined: 5 Jan 09 · Posts: 670 · Credit: 2,498,095,550 · RAC: 0
Message 43547 - Posted: 25 May 2016, 9:30:27 UTC - in response to Message 43545.
Last modified: 25 May 2016, 9:33:12 UTC

There are things within your control that would mitigate the problem.

Reduce the baseline WUs available per GPU per day from the present 50 to 10, and reduce WUs per GPU to one at a time.
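
Both knobs map onto standard BOINC scheduler options in the project's config.xml; a minimal sketch with the values suggested above (exact behaviour depends on the server version in use):

    <config>
      <!-- Baseline number of tasks a host may receive per day. -->
      <daily_result_quota>10</daily_result_quota>
      <!-- At most one task in progress per GPU. -->
      <max_wus_in_progress_gpu>1</max_wus_in_progress_gpu>
    </config>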
skgiven
Volunteer moderator · Volunteer tester
Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
Message 43548 - Posted: 25 May 2016, 9:43:25 UTC - in response to Message 43545.
Last modified: 25 May 2016, 9:45:37 UTC

> Excluding bad hosts is unfortunately not doable: the BOINC queuing system apparently is pretty stupid, and if I sent my WUs with high priority it would exclude all of Gerard's WUs until all of mine finished :(

The only complete, long-term way around that might be to separate the research types using different apps and queues. Things like that were explored in the past, and the biggest obstacle was the time-intensive maintenance for everyone: crunchers would have to select different queues and stay up to speed with what's going on, and you would have to spend more time on project maintenance (which isn't science). There might also be knock-on server issues.
If the OPMs were released in the beta queue, would that server priority still apply (is priority applied per queue, per app, or per project)?
Given how hungry GPUGrid crunchers are these days, how long would it take to clear the prioritised tasks, and could they be drip-fed into the queue in small batches?
Retvari Zoltan
Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 1
Message 43549 - Posted: 25 May 2016, 9:45:55 UTC - in response to Message 43547.
Last modified: 25 May 2016, 9:46:40 UTC

> Reduce the baseline WUs available per GPU per day from the present 50 to 10
That's a good idea. It could even be reduced to 5.
> ... and reduce WUs per GPU to one at a time.
I'm ambivalent about this.
Perhaps a 1-hour delay between WU downloads would be enough to spread the available workunits evenly between the hosts.

I see different "max tasks per day" numbers on my different hosts with the same GPUs - is this how it should be?
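
The 1-hour delay also has a stock scheduler knob, assuming it applies here; a minimal config.xml sketch (min_sendwork_interval is in seconds):

    <config>
      <!-- Wait at least an hour after sending work to a host
           before sending it more. -->
      <min_sendwork_interval>3600</min_sendwork_interval>
    </config>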
Betting Slip
Joined: 5 Jan 09 · Posts: 670 · Credit: 2,498,095,550 · RAC: 0
Message 43550 - Posted: 25 May 2016, 9:54:28 UTC - in response to Message 43549.
Last modified: 25 May 2016, 9:55:33 UTC


> I see different "max tasks per day" numbers on my different hosts with the same GPUs - is this how it should be?

I believe so. You start at 50; when you send a valid result it goes up by 1; when you send an error, abort a WU, or the server cancels one, it goes back down to 50.

50 is a ridiculously high number anyway and, as you have said, could be reduced to 5, to the benefit of both user and project.
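
A toy model of that adjustment scheme as described above (hypothetical code, not the actual scheduler source - the real BOINC scheduler adjusts the per-host limit somewhat differently):

    # Per-host daily quota, following the description above.
    BASELINE = 50

    class HostQuota:
        def __init__(self, baseline=BASELINE):
            self.baseline = baseline
            self.max_tasks_per_day = baseline

        def on_valid_result(self):
            # Each validated result nudges the daily limit up by one.
            self.max_tasks_per_day += 1

        def on_failure(self):
            # An error, user abort or server cancel resets it to the baseline.
            self.max_tasks_per_day = self.baseline

    q = HostQuota()
    for _ in range(5):
        q.on_valid_result()
    print(q.max_tasks_per_day)  # 55
    q.on_failure()
    print(q.max_tasks_per_day)  # 50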
Retvari Zoltan
Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 1
Message 43551 - Posted: 25 May 2016, 9:57:47 UTC - in response to Message 43545.

> Excluding bad hosts is unfortunately not doable: the BOINC queuing system apparently is pretty stupid, and if I sent my WUs with high priority it would exclude all of Gerard's WUs until all of mine finished :(
I recall that there was a "blacklist" of hosts in the GTX 480-GTX 580 era. My host once got blacklisted upon the release of the CUDA 4.2 app: that app was much faster than the previous CUDA 3.1 one, so the cards tolerated less overclocking, and my hosts began to throw errors until I reduced their clock frequencies. The host could not get tasks for 24 hours, IIRC. However, it seems that this "blacklist" feature disappeared later, when the BOINC server software at GPUGrid was updated. It would be nice to have it again.
Stefan
Project administrator · developer · tester · scientist
Joined: 5 Mar 13 · Posts: 348 · Credit: 0 · RAC: 0
Message 43554 - Posted: 25 May 2016, 10:58:10 UTC

OK, Gianni changed the baseline WUs available per GPU per day from 50 to 10.
Retvari Zoltan
Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 1
Message 43556 - Posted: 25 May 2016, 11:20:51 UTC - in response to Message 43554.
Last modified: 25 May 2016, 11:23:22 UTC

> OK, Gianni changed the baseline WUs available per GPU per day from 50 to 10.
Thanks!
EDIT: I don't see any change yet in my hosts' max number of tasks per day...
Betting Slip
Joined: 5 Jan 09 · Posts: 670 · Credit: 2,498,095,550 · RAC: 0
Message 43557 - Posted: 25 May 2016, 11:31:13 UTC - in response to Message 43554.
Last modified: 25 May 2016, 11:55:36 UTC

> OK, Gianni changed the baseline WUs available per GPU per day from 50 to 10.

I don't want to sound in any way disrespectful with this post, so please don't take offence. Here goes:

WOOHOO! A sign that things CAN be done, instead of "can't do that, not doable".

Thank you, Stefan and Gianni, for taking this first important step towards making this project more efficient. When you witness the decline in error rates on the server status page - which, while small, should be evident - perhaps you will consider reducing the baseline to 5 and employing other initiatives to reduce errors and timeouts and to spread work evenly and fairly over the GPUGrid userbase. That would make this project faster and more efficient, with a happier and hopefully growing core userbase.
Jacob Klein
Joined: 11 Oct 08 · Posts: 1127 · Credit: 1,901,927,545 · RAC: 0
Message 43558 - Posted: 25 May 2016, 12:12:51 UTC - in response to Message 43543.

> ... we should find other means to reach the problematic contributors.
> The project's minimum requirements should be made very clear right at the start (on the project's homepage, in the BOINC manager when a user tries to join the project, in the FAQ, etc.):
> 1. A decent NVIDIA GPU (GTX 760+ or GTX 960+)
> 2. No overclocking (later you can try, but read the forums first)
> 3. Other GPU projects are allowed only as backup (0 resource share) projects.
> ...
> 4. Don't suspend GPU tasks while your computer is in use, or at least set the suspension timeout to 30 minutes. It's better to set up a list of exclusive apps (games, etc.) in the BOINC manager.
> 5. If you don't have a high-end GPU and you switch your computer off daily, then GPUGrid is not for you.
>
> I strongly recommend that the GPUGrid staff broadcast the list of the worst hosts and the tips above every month (for as long as needed).

I know my opinions aren't liked very much here, but I wanted to express my response to these 5 proposed "minimum requirements".

1. A decent NVIDIA GPU (GTX 760+ or GTX 960+)
--- I disagree. The minimum GPU should be one that is supported by the toolset the devs release apps for, and one that can return results within the timeline they define. If they want results returned within a 6-week period, and a GTS 250 fits the toolset, I see no reason why it should be excluded.

2. No overclocking (later you can try, but read the forums first)
--- I disagree. Overclocking can provide tangible performance gains when done correctly. It would be better if a task's final results could be verified by another GPU for consistency, since it currently seems that an overclock that is too high can still result in a successful completion of the task. I wish I could verify that the results were correct, even for my own overclocked GPUs. Right now, the only tool I have is to look at the stderr output for "Simulation has become unstable" and downclock when I see it. GPUGrid should improve on this somehow.

3. Other GPU projects are allowed only as backup (0 resource share) projects.
--- I disagree. Who are you to define what I'm allowed to use? I am attached to 58 projects. Some have GPU work, some have CPU work, some have ASIC work, and some have non-CPU-intensive work. I routinely get "non-backup" work from about 15 of them, all on the same PC.

4. Don't suspend GPU tasks while your computer is in use, or at least set the suspension timeout to 30 minutes. It's better to set up a list of exclusive apps (games, etc.) in the BOINC manager.
--- I disagree. I am at my computer during all waking hours, and I routinely suspend BOINC, and even shut it down, because I have some very long-running tasks (300 days!) that I don't want to risk messing up while I install or uninstall software or update Windows. Suspending and shutting down should be completely supported by GPUGrid, and to my knowledge, they are.

5. If you don't have a high-end GPU and you switch your computer off daily, then GPUGrid is not for you.
--- I disagree. GPUGrid tasks currently have a 5-day deadline, to my knowledge. So if your GPU isn't on enough to complete any GPUGrid task within that deadline, then maybe GPUGrid is not for you.

These "minimum requirements" are... not great suggestions, for someone like me at least. I realize I'm an edge case, but I'd imagine that lots of people would take issue with at least a couple of the 5.

I feel that any project can define great minimum requirements by:
- setting up their apps appropriately
- massaging their deadlines appropriately
- restricting bad hosts from wasting time
- continually looking for ways to improve throughput

I'm glad the project is now (finally?) looking for ways to continuously improve.
Betting Slip
Joined: 5 Jan 09 · Posts: 670 · Credit: 2,498,095,550 · RAC: 0
Message 43559 - Posted: 25 May 2016, 12:24:38 UTC - in response to Message 43558.
Last modified: 25 May 2016, 12:25:30 UTC


> I feel that any project can define great minimum requirements by:
> - setting up their apps appropriately
> - massaging their deadlines appropriately
> - restricting bad hosts from wasting time
> - continually looking for ways to improve throughput
>
> I'm glad the project is now (finally?) looking for ways to continuously improve.

I don't know whether you're right about your opinions not being liked, but you are entitled to them. Opinions are just that; you have a right to espouse and defend them. I, for one, can see nothing wrong with the list you posted above.

If a host is reliable and returns WUs within the deadline, I don't think it matters whether it's a 750 Ti or a 980 Ti, or whether it runs 24/7 or 12/7. I myself have a running, working 660 Ti which is reliable and does just that.
Stefan
Project administrator · developer · tester · scientist
Joined: 5 Mar 13 · Posts: 348 · Credit: 0 · RAC: 0
Message 43562 - Posted: 25 May 2016, 13:01:39 UTC - in response to Message 43557.
Last modified: 25 May 2016, 13:01:55 UTC

If you like success stories, Betting Slip, then you can have another one :D
We found the reason for the underestimation of the OPM runtimes (and of all other equilibrations we have ever sent to GPUGRID).

When we calculate the projected runtime, we time the first 500 steps of the simulation. However, our equilibrations actually do some faster calculations during the first 500 steps and then switch to slower ones, so we were underestimating the runtime by quite a bit (one example: 17 vs 24 hours).

This has now been fixed, so the credits should reflect the real runtime better. I am nearly feeling confident enough to submit the rest of the OPM now, hehe.
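
A minimal sketch of the effect, with made-up per-step costs chosen to roughly reproduce the 17 vs 24 hours example (the projection simply extrapolates the cost measured on the cheap early steps):

    # Projecting total runtime from the first 500 steps underestimates
    # whenever those steps are cheaper than the rest of the run.
    TOTAL_STEPS = 5_000_000
    BENCH_STEPS = 500        # steps timed for the projection
    FAST_COST = 0.0125      # s/step during the early phase (made-up)
    SLOW_COST = 0.0175      # s/step afterwards (made-up)

    projected_h = FAST_COST * TOTAL_STEPS / 3600
    actual_h = (FAST_COST * BENCH_STEPS
                + SLOW_COST * (TOTAL_STEPS - BENCH_STEPS)) / 3600

    print(f"projected: {projected_h:.0f} h, actual: {actual_h:.0f} h")
    # projected: 17 h, actual: 24 h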
Retvari Zoltan
Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 1
Message 43563 - Posted: 25 May 2016, 13:03:13 UTC - in response to Message 43558.
Last modified: 25 May 2016, 13:11:51 UTC

> I know my opinions aren't liked very much here
That should never make you refrain from expressing your opinion.

> but I wanted to express my response to these 5 proposed "minimum requirements".
It was a mistake to call them "minimum requirements"; they are intended for dummies. Perhaps that is what makes them unavailing.

> These "minimum requirements" are... not great suggestions, for someone like me at least. I realize I'm an edge case, but I'd imagine that lots of people would take issue with at least a couple of the 5.
If you keep an eye on your results, you can safely skip these "recommendations". We can, and should, refine them to make them more appropriate and less offensive; I made their wording harsh on purpose, to provoke a debate. But I can show you results and hosts which validate my 5 points (just browse the links in my post above about the workunit with the worst history I've ever received, and the other similar ones).
The recommended minimum GPU should be better than the current one (~GTX 750-GTX 660), as the release of the new GTX 10x0 series will result in longer workunits by the end of this year, and the project should not lure in new users with lesser cards only to frustrate them in 6 months.
Betting Slip
Joined: 5 Jan 09 · Posts: 670 · Credit: 2,498,095,550 · RAC: 0
Message 43566 - Posted: 25 May 2016, 13:22:52 UTC - in response to Message 43562.

> If you like success stories, Betting Slip, then you can have another one :D
> We found the reason for the underestimation of the OPM runtimes (and of all other equilibrations we have ever sent to GPUGRID).
> ...
> This has now been fixed, so the credits should reflect the real runtime better. I am nearly feeling confident enough to submit the rest of the OPM now, hehe.

Thanks, Stefan - let 'em rip.

I hope these simulations are producing the results you expected.
Retvari Zoltan
Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 1
Message 43567 - Posted: 25 May 2016, 13:36:34 UTC - in response to Message 43356.
Last modified: 25 May 2016, 13:37:06 UTC

> It would be hugely appreciated if you could find a way of hooking up the projections of that script to the <rsc_fpops_est> field of the associated workunits. With the BOINC server version in use here, a single mis-estimated task (I have one which has been running for 29 hours already) can mess up the BOINC client's scheduling - for other projects, as well as this one - for the next couple of weeks.
+1
Could you please set the <rsc_fpops_est> and <rsc_disk_bound> fields correctly for the new tasks?
The <rsc_disk_bound> is set to 8*10^9 bytes (7.45 GB), which is at least one order of magnitude higher than necessary.
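
For reference, these fields live in the workunit template on the server; a minimal sketch with placeholder values (a disk bound of 8*10^8 would match the "one order of magnitude" remark above; the fpops figures are illustrative only):

    <workunit>
      <!-- Estimated FLOPs: drives the client's runtime estimate and scheduling. -->
      <rsc_fpops_est>5000000000000000</rsc_fpops_est>
      <!-- Hard limit: the task is aborted if it exceeds this. -->
      <rsc_fpops_bound>50000000000000000</rsc_fpops_bound>
      <!-- Maximum disk usage in bytes. -->
      <rsc_disk_bound>800000000</rsc_disk_bound>
    </workunit>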
Beyond
Joined: 23 Nov 08 · Posts: 1112 · Credit: 6,162,416,256 · RAC: 0
Message 43568 - Posted: 25 May 2016, 14:04:21 UTC - in response to Message 43558.

> I know my opinions aren't liked very much here, but I wanted to express my response to these 5 proposed "minimum requirements".

I like your opinions. Whether or not I agree with them, they're always well thought out.

> These "minimum requirements" are... not great suggestions, for someone like me at least. I realize I'm an edge case, but I'd imagine that lots of people would take issue with at least a couple of the 5.
>
> I feel that any project can define great minimum requirements by:
> - setting up their apps appropriately
> - massaging their deadlines appropriately
> - restricting bad hosts from wasting time
> - continually looking for ways to improve throughput
>
> I'm glad the project is now (finally?) looking for ways to continuously improve.

Thumbs up and +1.