Message boards : Number crunching : New app update (acemd3)
**Joined: 29 Nov 17 · Posts: 4 · Credit: 124,781,835 · RAC: 0**

If the task hasn't started yet, its files aren't in the "slots" directory yet. But suspending GPU work, editing the file of an already started task, and resuming worked! On Linux, setting the priority to 5 gives the desired result: the acemd3 process always runs at exactly the priority defined in cc_config.xml. The question now is whether this would unintentionally raise the priority on Windows.
**Joined: 11 Jul 09 · Posts: 1639 · Credit: 10,159,968,649 · RAC: 2**

What figure did you find there, before you edited it?
**Joined: 29 Nov 17 · Posts: 4 · Credit: 124,781,835 · RAC: 0**

It was unset. I added the "priority" xml tag myself.
**Joined: 11 Jul 09 · Posts: 1639 · Credit: 10,159,968,649 · RAC: 2**

> It was unset. I added the "priority" xml tag myself.

OK, memo to project devs. I'd recommend value 2 for general use, rather than 5.
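For context, the 0–5 scale being discussed matches the process priority options the BOINC client reads from cc_config.xml (the exact tag and file the poster edited in the slot directory is not shown in this thread, so treat the snippet below as a minimal sketch of the documented client-side options rather than a confirmed GPUGrid recipe):

```xml
<!-- cc_config.xml in the BOINC data directory - minimal sketch, not GPUGrid-specific.
     process_priority applies to ordinary CPU tasks;
     process_priority_special applies to GPU and wrapper apps such as acemd3.
     Scale: 0 = lowest, 2 = normal, 5 = highest; the client picks its own
     defaults when these are unset. -->
<cc_config>
  <options>
    <process_priority>0</process_priority>
    <process_priority_special>2</process_priority_special>
  </options>
</cc_config>
```

A value of 2 (normal) is usually enough to keep other processes from starving the thread feeding the GPU, while 5 can make the rest of the machine unresponsive, which is presumably why 2 is recommended above. The client rereads the file after "Options > Read config files" in BOINC Manager or a restart.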
**Retvari Zoltan · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0**

> Well, when I played around with SWAN_SYNC about a year ago on Linux, there definitely was a small performance benefit, but it was so minimal that I decided it's not worth it for me to sacrifice half a CPU core.

It also depends on the GPU: high-end GPUs gain more (up to 30% on a GTX 1080 Ti). To optimize the performance of the GPUGrid app, you should not over-commit the CPU feeding the GPU(s); that means at most one CPU task per CPU core (on hyper-threaded CPUs you should reduce the number of CPU tasks to 50% of the threads). You get the best GPUGrid performance (on high-end GPUs) when only one CPU task, or none, runs in parallel with the GPUGrid app.

> Is there any particular reason why the "short runs" give much less credit than the "long runs" for the same runtime?

Because they were intended to be shorter than the "long run" workunits. The actual run times have become mixed since then; some "long" workunits take about the same time to process as a "short" one. As for the "TEST" work units, I figure they give less credit because they are just tests.

> Is it possible to opt out of test work units?

Yes. You can set your venues in your GPUGrid preferences not to receive "beta tasks", or you can deselect the entire ACEMD3 queue. But there aren't that many beta and short tasks, so it's hardly worth bothering with.
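One way to make the BOINC scheduler actually hold a full core free for each running GPUGrid task, rather than over-committing the CPU as described above, is an app_config.xml in the project folder. This is a minimal sketch that assumes the application's short name is acemd3; check client_state.xml or the event log for the real name before using it:

```xml
<!-- app_config.xml in the GPUGrid project folder under projects/
     (the folder name follows the project URL) - minimal sketch.
     Budgets one GPU and one full CPU core per running GPUGrid GPU task
     in the scheduler; it does not change how much CPU the app itself uses. -->
<app_config>
  <app>
    <name>acemd3</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

To thin out CPU tasks on a hyper-threaded machine, as suggested above, the simpler lever is the computing preference "Use at most X% of the CPUs" (for example 50%).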
**Dingo · Joined: 1 Nov 07 · Posts: 20 · Credit: 128,376,317 · RAC: 0**

I crunched a couple of the new tasks (New version of ACEMD v2.10 (cuda100)) on my GTX 1660 Ti, which is the new Turing type, and they processed and validated. Good that I can use this machine on GPUGrid now. https://www.gpugrid.net/show_host_detail.php?hostid=517492