Message boards : News : What is happening and what will happen at GPUGRID, update for 2021
Joined: 14 Mar 07 Posts: 1958 Credit: 629,356 RAC: 0
As you know, GPUGRID was the first BOINC project to run GPU applications; in fact, we helped create the infrastructure for that. That was many years ago, and many things have changed since then. In particular, we have recently not had a constant stream of workunits. I would like to explain the present and expected future of GPUGRID here.

In the last few years, we moved from doing science by running very many simulations to developing new methods at the boundary between physical simulations and machine learning/artificial intelligence. These new methods did not require a lot of simulations, and most of the PhD students in the research group did not use GPUGRID daily. We still had some long-term projects running on GPUGRID, for which you will see results shortly in the form of new scientific publications.

Among other things, ACEMD, the application behind GPUGRID, is now built partially on OpenMM, of which I am also a principal investigator. As you might know, OpenMM is also used in Folding@home. We very recently received a grant to develop OpenMM, with one or two people starting before the end of the year. This will be good for GPUGRID, because it means that we will be using GPUGRID a lot more.

Furthermore, we recently found a way to run AI simulations on GPUGRID. We have only run a very few test cases so far, but there is a PhD student in the lab with a thesis on cooperative intelligence, where very many AI agents collaborate to solve tasks. The goal is to understand how cooperative intelligence works. We are also looking for a postdoc in cooperative intelligence, in case you know somebody: https://www.compscience.org

I hope that this clarifies the current situation. On the practical side, we expect to have the ACEMD application fixed for RTX 30xx within a few weeks, as the developer of ACEMD is now also doing the deployment on GPUGRID, making everything simpler.

GDF
Joined: 22 May 20 Posts: 110 Credit: 115,525,136 RAC: 345
Thanks for the much-anticipated update! I appreciate that you provide a roadmap for the future. Hopefully there aren't too many roadblocks ahead with the development of OpenMM. The future project direction sounds very exciting :) I'll take that as an opportunity to upgrade my host by the end of the year so I can contribute more intensively next year! Keep up the good work.
Joined: 7 Mar 20 Posts: 1 Credit: 78,381,276 RAC: 0
Are there any plans to add support for AMD GPUs now that ACEMD3 supports OpenCL? https://software.acellera.com/docs/latest/acemd3/capabilities.html This would increase participation.
Joined: 26 Dec 13 Posts: 86 Credit: 1,292,358,731 RAC: 0
Good news, everyone!
Joined: 2 Jan 09 Posts: 303 Credit: 7,321,800,090 RAC: 227,498
Great news, thanks!!!!
Joined: 26 Sep 13 Posts: 20 Credit: 1,714,356,441 RAC: 0
Thanks for the news, I hope the work goes well. I am looking forward to running new calculations. Go ahead!
Joined: 29 May 21 Posts: 1 Credit: 1,067,880,228 RAC: 0
I've got a 3090 and two 1080 Tis waiting for some work. Looking forward to the new updates.
Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0
Thanks for the update, GDF, it's very much appreciated!

MrS
Scanning for our furry friends since Jan 2002
Joined: 1 Jan 15 Posts: 1166 Credit: 12,260,898,501 RAC: 869
On the practical side, we expect to have the ACEMD application fixed for RTX 30xx within a few weeks, as the developer of ACEMD is now also doing the deployment on GPUGRID, making everything simpler.

One of my hosts, with two RTX 3070s inside, will be pleased :-)
Joined: 16 Nov 11 Posts: 4 Credit: 420,687,609 RAC: 0
Are there any plans to add support for AMD GPUs now that ACEMD3 supports OpenCL? https://software.acellera.com/docs/latest/acemd3/capabilities.html This would increase participation.

I would also like to know whether AMD will finally be supported. I have a water-cooled Radeon RX 6800 XT and am ready to use its full capacity for cancer and COVID research, as well as other projects as they come.

AMD Ryzen 9 5950X
AMD Radeon RX 6800 XT
32 GB 3200 MHz CAS-14 RAM
NVMe 4th-gen storage
Custom water cooling
Joined: 14 Mar 07 Posts: 1958 Credit: 629,356 RAC: 0
An initial new version of ACEMD has been deployed on Linux and it's working, but we are still testing.

gdf
Joined: 21 Feb 20 Posts: 1114 Credit: 40,838,722,595 RAC: 4,266,994
Some initial new version of ACEMD has been deployed on linux and it's working, but we are still testing.

What are the criteria for sending the cuda101 app versus the cuda1121 app? I see that both apps exist, and new drivers on even older cards support both. For example, with CUDA 11.2 drivers on a Turing card, you can run either the 11.2 app or the 10.1 app. So what criteria does the server use to decide which app to send to my Turing cards? Of course, Ampere cards should only get the 11.2 app.

Also, it looks like the Windows apps are missing for New ACEMD; are you dropping Windows support?
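For context, BOINC schedulers generally match app versions to hosts via "plan classes" that compare the host's reported driver capability and GPU generation against each app version's requirements. GPUGRID's actual server-side rule is not public, so the following is only a hypothetical sketch of that kind of dispatch logic; the function name, thresholds, and fallback behavior are all assumptions for illustration.

```python
# Hypothetical sketch of plan-class style app selection.
# NOT GPUGRID's actual scheduler logic; thresholds are illustrative.

def pick_app(compute_capability: float, driver_cuda: float) -> str:
    """Choose an app version for a host's GPU.

    compute_capability: e.g. 7.5 for Turing, 8.6 for Ampere
    driver_cuda: highest CUDA version the installed driver supports
    """
    # Ampere (CC >= 8.0) needs CUDA 11.x kernels, so only cuda1121 qualifies.
    if compute_capability >= 8.0:
        if driver_cuda >= 11.2:
            return "cuda1121"
        raise RuntimeError("driver too old for this GPU's only compatible app")
    # Older cards: prefer the newest app the driver supports.
    if driver_cuda >= 11.2:
        return "cuda1121"
    if driver_cuda >= 10.1:
        return "cuda101"
    raise RuntimeError("driver supports neither app version")
```

Under a rule like this, a Turing card (CC 7.5) with an 11.2-capable driver would be sent the cuda1121 app, and would fall back to cuda101 only with an older driver, which matches the behavior the question describes.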
Joined: 16 Dec 08 Posts: 7 Credit: 1,549,469,403 RAC: 1
Support for AMD cards would be good for the project. At the moment I'm mostly running Milkyway with my 6900 XT, and it does 3 units in 1:50, where a 1080 Ti takes about 6:00 for the same. I haven't compared times for WCG because GPU units come up so rarely, but I can imagine it would be fine there too.
Joined: 21 Feb 20 Posts: 1114 Credit: 40,838,722,595 RAC: 4,266,994
Some initial new version of ACEMD has been deployed on linux and it's working, but we are still testing.

There seems to be a problem with the new 2.17 app: it always tries to run on GPU 0, even when BOINC assigns it to another GPU. This has now happened on two separate hosts. The host picked up a new task and BOINC assigned it to some other GPU (like device 6 or device 3), but the acemd process spun up on GPU 0 anyway, even though that device was already occupied by another BOINC process from another project. I think something is off in how the BOINC device assignment is communicated to the app. The result is multiple processes running on a single GPU, and no process running on the device that BOINC actually assigned the GPUGRID task to.

Restarting the BOINC client brings things back to "OK", since on startup it puts the GPUGRID task on GPU 0 (probably due to resource share), but I expect this will keep happening. It needs an update ASAP.
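For background, the BOINC client tells a science app which GPU to use through the `gpu_device_num` field of the `init_data.xml` file it writes into the task's slot directory (older apps instead receive a `--device N` command-line argument). If an app ignores that value, every task initializes CUDA on device 0 by default, which is exactly the symptom described above. A minimal sketch of honoring the assignment, assuming the standard `init_data.xml` layout (the sample XML below is illustrative, not a real slot file):

```python
# Sketch: how a GPU app can honor BOINC's device assignment.
# Assumes the standard <gpu_device_num> field in init_data.xml.
import os
import xml.etree.ElementTree as ET

def assigned_gpu(init_data_xml: str) -> int:
    """Return the GPU index BOINC assigned, defaulting to 0 if absent."""
    root = ET.fromstring(init_data_xml)
    node = root.find("gpu_device_num")
    return int(node.text) if node is not None else 0

# Illustrative slot-directory contents, not a real GPUGRID task.
sample = """<app_init_data>
  <app_name>acemd3</app_name>
  <gpu_type>NVIDIA</gpu_type>
  <gpu_device_num>3</gpu_device_num>
</app_init_data>"""

device = assigned_gpu(sample)
# Restricting CUDA's view to the assigned device before any CUDA
# initialization prevents kernels from landing on GPU 0 by default.
os.environ["CUDA_VISIBLE_DEVICES"] = str(device)
print(device)  # → 3
```

If the 2.17 app skipped this step (or read the value after CUDA was already initialized), all concurrent tasks would pile onto device 0 while BOINC Manager still reported them on their assigned devices.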
Joined: 11 Jul 09 Posts: 1639 Credit: 10,159,968,649 RAC: 295,172
I haven't snagged one of the new ones yet (I was out all day), but I'll watch out for them and try to find out where the device allocation is failing.
Joined: 24 Sep 10 Posts: 592 Credit: 11,972,186,510 RAC: 998,578
There seems to be a problem with the new 2.17 app. It's always trying to run on GPU0 even when BOINC assigns it to another GPU.

First of all: congratulations, both of your multi-GPU systems that weren't getting work from previous app versions seem to have the problem solved with this new one. Welcome back to the field!

I'm experiencing the same behavior, and I can go even further: I caught six WUs of the new app version 2.17 on my triple GTX 1650 GPU system. Then I aborted three of those WUs, and two of them were caught again by my twin 1650 GPU system.

At the triple-GPU system: while all three WUs seem to be progressing normally from the BOINC Manager's point of view, Psensor shows only GPU #0 (first PCIe slot) working, with GPUs #1 and #2 inactive. It's as if GPU #0 were carrying the whole workload for the three WUs: all three show the same 63% fraction done after 8.25 hours. However, CPU usage is consistent with three WUs running concurrently on this system.

At the twin-GPU system: while both WUs seem to be progressing normally from the BOINC Manager's point of view, Psensor shows only GPU #0 (first PCIe slot) working, with GPU #1 inactive. It's as if GPU #0 were carrying the whole workload for both WUs: both show the same 89% fraction done after 8 hours. Again, CPU usage is consistent with two WUs running concurrently on this system.
Joined: 24 Sep 10 Posts: 592 Credit: 11,972,186,510 RAC: 998,578
There seems to be a problem with the new 2.17 app. It's always trying to run on GPU0 even when BOINC assigns it to another GPU.

Confirmed: while BOINC Manager said that Task #32640074, Task #32640075 and Task #32640080 were running on devices #0, #1 and #2 of this triple GTX 1650 GPU system, they were actually all being processed concurrently on the same device, #0.
Joined: 21 Feb 20 Posts: 1114 Credit: 40,838,722,595 RAC: 4,266,994
Yeah, it was actually partially caused by some settings on my end, combined with the fact that when the cuda1121 app was released on July 1st, they deleted/retired/removed the cuda100 app. Had they left the cuda100 app in place, I would at least still have received that one. I'll post more details in the original thread about that issue.
Joined: 14 Mar 07 Posts: 1958 Credit: 629,356 RAC: 0
The device problem should be fixed now. Windows versions are on their way.
Joined: 14 Mar 07 Posts: 1958 Credit: 629,356 RAC: 0
Windows version deployed.
©2025 Universitat Pompeu Fabra