Message boards :
Number crunching :
Accelerating SE(3)-Transformers Training Using an NVIDIA Open-Source Model Implementation
| Author | Message |
|---|---|
| Joined: 28 Jul 12<br>Posts: 819<br>Credit: 1,591,285,971<br>RAC: 0 | SE(3)-Transformers are versatile graph neural networks introduced at NeurIPS 2020. NVIDIA has just released an open-source optimized implementation that uses 9x less memory and trains up to 21x faster than the official baseline implementation: https://developer.nvidia.com/blog/accelerating-se3-transformers-training-using-an-nvidia-open-source-model-implementation/ This was cited by boboviz on the Rosetta forum; I think it would be of great interest here. |
©2025 Universitat Pompeu Fabra