Message boards : Graphics cards (GPUs) : Zluda

[VENETO] boboviz
Message 61248 - Posted: 12 Feb 2024 | 16:18:25 UTC

ZLUDA

ZLUDA lets you run unmodified CUDA applications with near-native performance on AMD GPUs.

ZLUDA is currently alpha quality, but it has been confirmed to work with a variety of native CUDA applications: Geekbench, 3DF Zephyr, Blender, Reality Capture, LAMMPS, NAMD, waifu2x, OpenFOAM, Arnold (proof of concept) and more.
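For context on what "unmodified" means in practice, here is a minimal sketch (my illustration, not from the ZLUDA docs) of the kind of program ZLUDA targets: a plain CUDA driver API binary that links against libcuda and never mentions AMD. The idea is that the same binary is simply made to load ZLUDA's replacement CUDA library instead of Nvidia's (on Linux reportedly via something like LD_LIBRARY_PATH; the exact invocation depends on the ZLUDA release, so treat that detail as an assumption).

```c
// Minimal CUDA driver API probe. Compiled once against the normal CUDA
// toolkit, it never references AMD; ZLUDA's claim is that pointing this
// unmodified binary at its replacement libcuda makes it run on an AMD GPU.
#include <cuda.h>
#include <stdio.h>

int main(void) {
    if (cuInit(0) != CUDA_SUCCESS) {          // initialize whichever libcuda was loaded
        fprintf(stderr, "cuInit failed\n");
        return 1;
    }

    int count = 0;
    cuDeviceGetCount(&count);                 // number of visible "CUDA" devices

    for (int i = 0; i < count; ++i) {
        CUdevice dev;
        char name[256];
        cuDeviceGet(&dev, i);
        cuDeviceGetName(name, sizeof(name), dev);
        printf("device %d: %s\n", i, name);   // under ZLUDA this would report the AMD GPU
    }
    return 0;
}

// Build (normal CUDA toolchain): nvcc probe.c -o probe -lcuda
```

Nothing in the source changes; only the library the binary loads at run time does.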

Ian&Steve C.
Message 61250 - Posted: 12 Feb 2024 | 17:36:11 UTC - in response to Message 61248.

Interesting project that sounds like it had some promise. Sounds like AMD pulled funding and the developer just dumped it on GitHub. De facto abandoned at this time, until someone else decides to pick up where it left off.

But this kind of proves how superior CUDA is at the software level compared to OpenCL, since they saw performance boosts running the adapted CUDA-native code vs the existing OpenCL implementations.

Also, it sounds like it requires the code to be compiled with PTX included, which a lot of existing CUDA code from BOINC projects is not, including GPUGRID as far as I know. So it will still require some amount of effort from application developers (unless they happen to be shipping PTX already), which has always been the biggest challenge.
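To make the PTX point concrete, here is a rough sketch (my illustration, not anything from the ZLUDA docs) of the build-time difference. ZLUDA reportedly works from PTX, the portable intermediate representation, so a fat binary that ships only SASS machine code for specific Nvidia GPU generations gives a translator nothing to work with. The nvcc flags below are the usual way to control what gets embedded, but check the behaviour of your toolkit version.

```cuda
// saxpy.cu -- a trivial kernel, just something for nvcc to compile.
#include <cstdio>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    // Host side omitted; the point is what ends up embedded in the binary.
    printf("inspect embedded PTX with: cuobjdump -ptx <binary>\n");
    return 0;
}

// SASS only -- machine code for one GPU generation, no PTX embedded,
// so a PTX-based translator like ZLUDA has nothing to consume:
//   nvcc -gencode arch=compute_70,code=sm_70 saxpy.cu -o saxpy
//
// SASS plus PTX -- the "forward compatible" build; this is the kind of
// binary the post above says ZLUDA needs:
//   nvcc -gencode arch=compute_70,code=sm_70 \
//        -gencode arch=compute_70,code=compute_70 saxpy.cu -o saxpy
```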

[VENETO] boboviz
Message 61256 - Posted: 13 Feb 2024 | 13:15:33 UTC - in response to Message 61250.
Last modified: 13 Feb 2024 | 13:17:29 UTC

> Interesting project that sounds like it had some promise. Sounds like AMD pulled funding and the developer just dumped it on GitHub. De facto abandoned at this time, until someone else decides to pick up where it left off.

Indeed, we hope that AMD will help this project (sooner or later).

> But this kind of proves how superior CUDA is at the software level compared to OpenCL, since they saw performance boosts running the adapted CUDA-native code vs the existing OpenCL implementations.

No, CUDA is not "superior".
If implemented correctly, OpenCL/SYCL/ROCm/oneAPI/etc. have the same (if not better) performance as CUDA.
We are talking about a "translator".

CUDA is better maintained by Nvidia, for sure.

Ian&Steve C.
Message 61258 - Posted: 13 Feb 2024 | 13:29:42 UTC - in response to Message 61256.
Last modified: 13 Feb 2024 | 13:36:40 UTC

I dunno, it seems clear. They saw better performance with this adapted CUDA implementation than with the native HIP one. I've not seen a single instance where OpenCL performed better, objectively, than CUDA. Previous comparisons were only ever possible on Nvidia, since AMD couldn't run CUDA, and CUDA was always faster on Nvidia.

Now that AMD can run CUDA, even in this alpha state, CUDA ends up faster again.

[VENETO] boboviz
Message 61286 - Posted: 15 Feb 2024 | 8:20:24 UTC - in response to Message 61258.
Last modified: 15 Feb 2024 | 8:20:50 UTC

> I've not seen a single instance where OpenCL performed better, objectively, than CUDA.

https://github.com/ccsb-scripps/AutoDock-GPU/issues/239

> and CUDA was always faster on Nvidia.

That's incredible!! A proprietary, closed framework is faster on the hardware it was born for than a generic implementation of an open-source framework.
Who would have thought??

P.S. OpenCL can run on CPUs. CUDA cannot.
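As a side note on that P.S., here is a small sketch of what "OpenCL can run on CPUs" means in code: the same clGetDeviceIDs call used for GPUs can ask for CPU devices. Whether a CPU actually shows up depends on having a CPU OpenCL runtime installed (Intel's runtime or PoCL, for instance), which is an assumption here, not a given.

```c
// Enumerate CPU OpenCL devices -- the same API path used for GPUs.
#include <CL/cl.h>
#include <stdio.h>

int main(void) {
    cl_platform_id plats[16];
    cl_uint nplat = 0;
    clGetPlatformIDs(16, plats, &nplat);      // list available OpenCL platforms

    for (cl_uint p = 0; p < nplat && p < 16; ++p) {
        cl_device_id dev;
        cl_uint ndev = 0;
        // Ask this platform specifically for CPU devices.
        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_CPU, 1, &dev, &ndev) == CL_SUCCESS
            && ndev > 0) {
            char name[256];
            clGetDeviceInfo(dev, CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("CPU OpenCL device: %s\n", name);
        }
    }
    return 0;
}

// Build (paths vary by distro): gcc cpu_cl.c -o cpu_cl -lOpenCL
```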

Ian&Steve C.
Message 61287 - Posted: 15 Feb 2024 | 11:54:47 UTC - in response to Message 61286.
Last modified: 15 Feb 2024 | 11:59:33 UTC

Sounds like poor code optimization on the CUDA branch. And one of the comments on that issue suggests as much, claiming the CUDA code didn't allocate as much memory as the OpenCL variant. Not really a fair comparison.

But you still seem to be ignoring that this alpha version of the AMD CUDA implementation is already faster than the native OpenCL and/or HIP implementations. That says a lot.

wujj123456
Message 61298 - Posted: 18 Feb 2024 | 8:44:34 UTC - in response to Message 61250.
Last modified: 18 Feb 2024 | 8:45:22 UTC

> But this kind of proves how superior CUDA is at the software level compared to OpenCL, since they saw performance boosts running the adapted CUDA-native code vs the existing OpenCL implementations.

Or it shows how poor AMD's driver stack is at handling OpenCL code. Not saying that's definitely the case, but it feels like a reasonable possibility given how much performance variance AMD's OpenCL stack shows on Linux.

In the end the conclusion is the same anyway. AMD hardware + software has just not been as competitive for compute as Nvidia for the past few generations, even without involving CUDA. I can only hope the AI craze will force AMD to invest more in its software stack and eventually bring some improvement to RDNA compute.

[VENETO] boboviz
Message 61309 - Posted: 21 Feb 2024 | 8:06:37 UTC

Meanwhile, Nvidia is taking its countermeasures, blocking the porting of the code as of CUDA 11.5:
https://twitter.com/never_released/status/1758946808183525702

Ian&Steve C.
Message 61310 - Posted: 21 Feb 2024 | 11:50:50 UTC - in response to Message 61309.
Last modified: 21 Feb 2024 | 11:52:18 UTC

> Meanwhile, Nvidia is taking its countermeasures, blocking the porting of the code as of CUDA 11.5:
> https://twitter.com/never_released/status/1758946808183525702

Your post seems to be missing some context. Blocking how? Your link doesn't really elaborate. Do you mean only some wording in a TOS somewhere? And CUDA 11.5? That's 2.5 years old.

[VENETO] boboviz
Message 61314 - Posted: 21 Feb 2024 | 15:56:26 UTC - in response to Message 61310.

> Your post seems to be missing some context. Blocking how? Your link doesn't really elaborate. Do you mean only some wording in a TOS somewhere? And CUDA 11.5? That's 2.5 years old.

A simple search turns up this wording in many license agreements for Nvidia products (for example, here: https://github.com/NVIDIA/spark-rapids-container/blob/dev/NOTICE-binary).

CUDA 11.5... and all following releases.

So, bye bye ZLUDA, and thanks for all the fish.

[VENETO] boboviz
Message 61315 - Posted: 21 Feb 2024 | 16:00:32 UTC - in response to Message 61310.

> Blocking how?

With the law/copyright/agreements/etc.

By the way, I don't know if this clause is even legal in the EU...

Ian&Steve C.
Message 61316 - Posted: 21 Feb 2024 | 16:03:57 UTC - in response to Message 61314.

But again, that was more than 2 years ago, and ZLUDA is working fine with all recent CUDA releases.

This is just wording in the TOS, and it was probably there before this guy even ported ZLUDA to AMD, and it obviously didn't stop him. It's not actively "blocking" anything.
