Message boards : Graphics cards (GPUs) : Zluda
Joined: 10 Sep 10 · Posts: 164 · Credit: 388,132 · RAC: 0

> and CUDA was always faster on Nvidia.

Well.... https://www.phoronix.com/news/NVIDIA-Vulkan-AI-ML-Success
Joined: 21 Feb 20 · Posts: 1116 · Credit: 40,839,470,595 · RAC: 6,423

cool, but this has nothing to do with GPGPU compute or the types of compute that BOINC projects use. AI/ML loads are just a bunch of matrix computations over and over. real world scientific compute is a lot more than that. CUDA will reign supreme over OpenCL. projects aren't going to migrate to Vulkan for this stuff.
Joined: 14 Nov 08 · Posts: 5 · Credit: 620,744 · RAC: 0

> just a bunch of matrix computations over and over. real world scientific compute is a lot more than that.

Matrix computations are frequently used in numerous algorithms, not just AI. But AI algorithms prioritize efficiency over high precision, so they use lower-precision number formats that can store only a limited set of values: for example 1.0, 1.25, 1.5, 1.75, 2.0, etc. Using these types for scientific computations would significantly reduce the accuracy of the results. Those extensions are designed for one specific use case, accelerating AI algorithms even further, nothing else. Vulkan is capable of performing the same tasks as CUDA or OpenCL, with high or low precision, but it has never gained widespread popularity, at least for BOINC projects.
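A toy sketch (my own illustration, not from the thread) of why a low-precision format can only store values like 1.0, 1.25, 1.5, 1.75: with just two mantissa bits, as in fp8-style AI formats, the representable values between consecutive powers of two are spaced 0.25 apart, so nearby inputs collapse onto the same stored value.

```python
import math

def quantize(x, mantissa_bits=2):
    # Round x to the nearest value representable with `mantissa_bits`
    # bits of mantissa (exponent range ignored). With 2 bits, the only
    # representable values between 1.0 and 2.0 are 1.0, 1.25, 1.5, 1.75,
    # a toy model of very low-precision AI formats.
    if x == 0:
        return 0.0
    exp = math.floor(math.log2(abs(x)))   # exponent of the leading bit
    step = 2.0 ** (exp - mantissa_bits)   # spacing of representable values
    return round(x / step) * step

# Precision loss: distinct inputs collapse onto the same stored value.
print(quantize(1.1), quantize(1.2), quantize(1.6))  # 1.0 1.25 1.5
```

This rounding error is harmless for neural-network weights but compounds quickly in iterative scientific simulations, which is why they stick to fp32/fp64.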
Joined: 21 Feb 20 · Posts: 1116 · Credit: 40,839,470,595 · RAC: 6,423

that's what i meant to say. low precision tensors.
©2025 Universitat Pompeu Fabra