Zluda

Message boards : Graphics cards (GPUs) : Zluda
[VENETO] boboviz

Joined: 10 Sep 10
Posts: 164
Credit: 388,132
RAC: 0
Message 62200 - Posted: 5 Feb 2025, 20:52:01 UTC

It seems that work on ZLUDA has restarted.


(as long as Nvidia allows it)
[VENETO] boboviz

Joined: 10 Sep 10
Posts: 164
Credit: 388,132
RAC: 0
Message 62240 - Posted: 5 Mar 2025, 14:36:27 UTC - in response to Message 61258.  

> and CUDA was always faster on Nvidia.
>
> now that AMD can run CUDA, even in this alpha state, CUDA ends up faster again.


Well....
https://www.phoronix.com/news/NVIDIA-Vulkan-AI-ML-Success
Ian&Steve C.

Joined: 21 Feb 20
Posts: 1116
Credit: 40,839,470,595
RAC: 6,423
Message 62241 - Posted: 5 Mar 2025, 21:30:33 UTC - in response to Message 62240.  

cool, but this has nothing to do with GPGPU compute or the types of compute that BOINC projects use.

AI/ML loads are just a bunch of matrix computations over and over. real world scientific compute is a lot more than that.

CUDA will reign supreme over OpenCL. projects aren't going to migrate to Vulkan for this stuff.
ahorek's team

Joined: 14 Nov 08
Posts: 5
Credit: 620,744
RAC: 0
Message 62246 - Posted: 7 Mar 2025, 2:08:57 UTC - in response to Message 62241.  

> just a bunch of matrix computations over and over. real world scientific compute is a lot more than that.

Matrix computations are used in many algorithms, not just AI. But AI algorithms prioritize efficiency over high precision, so they use low-precision number formats that can store only a limited set of values, for example: 1.0, 1.25, 1.5, 1.75, 2.0, and so on. Using these types for scientific computation would significantly reduce the accuracy of the results. Those extensions are designed for one specific use case, accelerating AI algorithms even further, nothing else.

Vulkan is capable of performing the same tasks as CUDA or OpenCL, with high or low precision, but it has never gained widespread popularity, at least for BOINC projects.
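The point about limited representable values is easy to demonstrate. As an illustrative sketch (my own example, not from any project code): Python's standard `struct` module can round a double to IEEE half precision (float16, the kind of low-precision format AI hardware favors) via its `e` format character, which makes the coarse value grid and the resulting accuracy loss visible.

```python
import struct

def f16(x: float) -> float:
    """Round a Python float to the nearest IEEE half-precision (float16) value."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# float16 spends only 10 bits on the mantissa, so between 2048 and 4096
# it can only represent even integers: 2049 simply does not exist.
print(f16(2049.0))  # 2048.0

# Accumulating many small values shows the accuracy gap that matters for
# scientific workloads: once the running sum reaches 4.0, the increment
# 0.001 is less than half a float16 ULP and every further addition is lost.
acc = 0.0
for _ in range(10_000):
    acc = f16(acc + f16(0.001))
print(acc)  # stalls far below the true sum of 10.0
```

In float64 the same loop gives 10.0 to many digits; in float16 it stalls at 4.0, which is the kind of error that would be unacceptable in a molecular-dynamics trajectory but is tolerated in neural-network inference.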
Ian&Steve C.

Joined: 21 Feb 20
Posts: 1116
Credit: 40,839,470,595
RAC: 6,423
Message 62247 - Posted: 7 Mar 2025, 12:46:49 UTC - in response to Message 62246.  

that's what i meant to say. low precision tensors.

©2025 Universitat Pompeu Fabra