Message boards : Number crunching : LLM
| Author | Message |
|---|---|
|
Joined: 11 May 10 Posts: 68 Credit: 12,293,491,875 RAC: 3,176
|
I found the new entry 'LLM: LLMs for chemistry' in the apps. Is there any further information on what kind of calculations it involves? |
|
Joined: 3 May 20 Posts: 19 Credit: 1,043,759,208 RAC: 39
|
And is this new app responsible for the BOINC warning that I need to update my graphics driver? I am using version 535. Upgrading to 560 led to a crash on my Ubuntu 20.04 host and I had to reinstall everything, so I would prefer to stay on version 535. |
|
Joined: 11 May 10 Posts: 68 Credit: 12,293,491,875 RAC: 3,176
|
And is this new app responsible for the BOINC warning that I need to update my graphics driver? I am using version 535. Upgrading to 560 led to a crash on my Ubuntu 20.04 host and I had to reinstall everything, so I would prefer to stay on version 535. I had the same warning and upgraded from 535 to 550, because the rig that was already on 550 did not get that message. |
|
Joined: 7 Apr 15 Posts: 17 Credit: 2,978,057,945 RAC: 73
|
I have no information about LLM, but I have received some of them. https://gpugrid.net/gpugrid/results.php?userid=143331&offset=0&show_names=0&state=0&appid=49 |
Retvari Zoltan Joined: 20 Jan 09 Posts: 2380 Credit: 16,897,957,044 RAC: 0
|
LLM = Large Language Model ? |
|
Joined: 29 Aug 24 Posts: 71 Credit: 3,321,790,989 RAC: 1,408
|
I figured it was a Laughing Ligand Model. |
|
Joined: 12 Jul 17 Posts: 404 Credit: 17,408,899,587 RAC: 0
|
Looks like this was configured for a very limited set of users. The Apps page says it requires cuda124L, but I'm not sure which GPU models can use it. Rig-44 with a 2080 Ti has its latest driver, 470.256.02, CUDA Version: 11.4. That didn't work, so I tried installing the NVIDIA CUDA Toolkit, but that still reported CUDA 11.4. Now the message says:
222 gpugrid Apr 16, 2025, 06:05:23 AM Message from server: NVIDIA GPU: Upgrade to the latest driver to process tasks using your computer's GPU
But I already have the latest driver. Devs might want to consider making this LLM app available to a broader audience.
|
|
Joined: 12 Jul 17 Posts: 404 Credit: 17,408,899,587 RAC: 0
|
Rig-23 with a 1080 Ti has driver 560.35.03 with CUDA 12.6. No idea why it reports a higher CUDA version, nor what 12.4L is. It does not get any LLM WUs even though preferences are set to LLM only. Server status shows LLM WUs available and the count changing. Will try upgrading to 570.124.04.
|
|
Joined: 12 Jul 17 Posts: 404 Credit: 17,408,899,587 RAC: 0
|
Rig-50 with a 1080 Ti has 570.124.04 CUDA 12.8 and it gets no LLM WUs. Must be some requirements we haven't been told about.
|
|
Joined: 29 Aug 24 Posts: 71 Credit: 3,321,790,989 RAC: 1,408
|
FritzB above et al. are getting more, but they are erroring out. More importantly, it appears that you need a GPU with at least 24 GB of VRAM to run them. |
|
Joined: 21 Feb 20 Posts: 1116 Credit: 40,839,470,595 RAC: 6,423
|
Looks like this was configured for a very limited set of users. The Apps page says it requires cuda124L, but I'm not sure which GPU models can use it.

You need CUDA 12.4 drivers, not 11.4; your drivers are too old. I'm not sure what you mean by already having the "latest" driver: the 470-level drivers are a very old branch, and the CUDA level is determined by the driver branch, not the release date. You need at least CUDA 12.x drivers for these tasks.

These tasks also need a very large amount of VRAM, so they are restricted to hosts with GPUs that have 24GB or more of VRAM. They also use the BF16 data type, which is only available on Ampere or newer GPU architectures, corresponding to a compute capability (CC) of 8.0 or higher.

So right now you need: a GPU that's Ampere or newer -AND- 24GB or more VRAM -AND- CUDA 12.x drivers (525.x or newer).

An additional note: these tasks use vLLM as their framework, a software package that GPUGRID does not control or maintain, and it does not support anything older than the Volta architecture. These tasks are LLMs, so they require GPUs with tensor cores to run under this software, and anything older than Volta does not have tensor cores. So things like the 1080 Ti and other Pascal cards will not be supported.
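For anyone who wants to check locally, here is a minimal sketch (assuming PyTorch with CUDA support is installed on the host; this is not part of the GPUGRID app) that tests the compute capability and VRAM requirements described above:

```python
# Minimal sketch (assumes PyTorch with CUDA support): check each GPU against
# the requirements described above: compute capability >= 8.0 (Ampere or newer,
# needed for BF16) and 24 GB or more of VRAM. Not part of the GPUGRID app.
import torch

if not torch.cuda.is_available():
    raise SystemExit("No CUDA-capable GPU (or driver) visible to PyTorch")

for i in range(torch.cuda.device_count()):
    p = torch.cuda.get_device_properties(i)
    vram_gb = p.total_memory / 1024**3
    cc_ok = (p.major, p.minor) >= (8, 0)   # BF16 needs CC 8.0 or higher
    vram_ok = vram_gb >= 24                # reported VRAM requirement for LLM tasks
    print(f"GPU {i}: {p.name}  CC {p.major}.{p.minor}  {vram_gb:.1f} GB VRAM"
          f"  -> looks eligible: {cc_ok and vram_ok}")
```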
|
[DPC] hansR Joined: 27 Nov 09 Posts: 3 Credit: 558,612,922 RAC: 3,208
|
Tasks running on an RTX 3090 Ti with 24 GB all error out on Windows:
<core_client_version>8.0.4</core_client_version>
<![CDATA[
<message>
(unknown error) (0) - exit code 195 (0xc3)</message>
<stderr_txt>
15:01:18 (16576): wrapper (7.9.26016): starting
15:01:18 (16576): wrapper: running Library/usr/bin/tar.exe (xjvf input.tar.bz2)
tasks.json
conf.yaml
main_generation-0.1.0-py3-none-any.whl
run.sh
15:01:19 (16576): Library/usr/bin/tar.exe exited; CPU time 0.031250
15:01:19 (16576): wrapper: running C:/Windows/system32/cmd.exe (/c call Scripts\activate.bat && Scripts\conda-unpack.exe && run.bat)
'run.bat' is not recognized as an internal or external command, operable program or batch file.
15:03:26 (16576): C:/Windows/system32/cmd.exe exited; CPU time 24.765625
15:03:26 (16576): app exit status: 0x1
15:03:26 (16576): called boinc_finish(195) |
|
Joined: 21 Feb 20 Posts: 1116 Credit: 40,839,470,595 RAC: 6,423
|
windows app is new. probably expect some failures until they sort out the working configuration. we had errors on the Linux side at first too.
|
[DPC] hansR Joined: 27 Nov 09 Posts: 3 Credit: 558,612,922 RAC: 3,208
|
windows app is new. probably expect some failures until they sort out the working configuration. we had errors on the Linux side at first too. I know. Just reporting. |
[DPC] hansR Joined: 27 Nov 09 Posts: 3 Credit: 558,612,922 RAC: 3,208
|
windows app is new. probably expect some failures until they sort out the working configuration. we had errors on the Linux side at first too. This morning the first WU completed and validated: 25 Apr 2025, 9:44:56 UTC, Completed and validated, run time 2,823.02 s, CPU time 2,823.02 s, credit 187,000.00, LLM: LLMs for chemistry v1.01 (cuda124L), windows_x86_64 |
|
Joined: 13 Dec 17 Posts: 1419 Credit: 9,119,446,190 RAC: 891
|
The new small LLM app won't run on anything below compute capability 8.0 because of the BFloat16 data type it tries to use:
ValueError: Bfloat16 is only supported on GPUs with compute capability of at least 8.0. Your NVIDIA TITAN V GPU has compute capability 7.0. You can use float16 instead by explicitly setting the `dtype` flag in CLI, for example: --dtype=half. |
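For context, this is roughly what the dtype override mentioned in that error looks like in vLLM's Python API; a minimal sketch with a placeholder model name, since the GPUGRID task is pre-packaged and not something volunteers can change:

```python
# Illustration only: the dtype override referred to by the ValueError, via
# vLLM's Python API. The model name is a placeholder, not the one the project
# uses; the packaged GPUGRID app does not expose this setting to volunteers.
from vllm import LLM

llm = LLM(model="some-org/some-model", dtype="half")  # force FP16 instead of BF16
```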
|
Joined: 2 Jan 09 Posts: 303 Credit: 7,321,800,090 RAC: 330
|
The new small LLM app won't run on anything below compute capability 8.0 because of the BFloat16 data type it tries to use. Where do I find the compute capability of my GPUs? No sense banging on the door here if my GPUs are too old to help. Never mind, I found this page that lists every NVIDIA GPU and its compute capability: https://developer.nvidia.com/cuda-gpus |
|
Joined: 11 Dec 08 Posts: 26 Credit: 648,944,294 RAC: 584
|
Can you add progress reporting? The tasks sit at 0.2 and then jump straight to done. |
ServicEnginIC Joined: 24 Sep 10 Posts: 592 Credit: 11,972,186,510 RAC: 1,447
|
Where do I find the compute capability of my GPUs? No sense banging on the door here if my GPUs are too old to help.

You can check your GPU characteristics locally, including CUDA version, compute capability and available VRAM, in the very first lines of the BOINC Manager Event Log. Examples from my Linux hosts:

Open a terminal window and type: sudo /etc/init.d/boinc-client restart
You'll get this answer: Restarting boinc-client (via systemctl): boinc-client.service.
Then open a BOINC Manager window, go to Tools - Event Log... (or Shift+Ctrl+E) and search for the following lines:

Host #1:
CUDA: NVIDIA GPU 0: NVIDIA GeForce RTX 4060 (driver version 570.99, CUDA version 12.8, compute capability 8.9, 7816MB, 7816MB available, 15299 GFLOPS peak)
CUDA: NVIDIA GPU 1: NVIDIA GeForce RTX 3050 (driver version 570.99, CUDA version 12.8, compute capability 8.6, 5815MB, 5815MB available, 6944 GFLOPS peak)
Host #2:
CUDA: NVIDIA GPU 0: NVIDIA GeForce RTX 3060 (driver version 570.99, CUDA version 12.8, compute capability 8.6, 11920MB, 11920MB available, 12738 GFLOPS peak)
Host #3:
CUDA: NVIDIA GPU 0: NVIDIA GeForce GTX 1660 Ti (driver version 570.99, CUDA version 12.8, compute capability 7.5, 5747MB, 5747MB available, 5530 GFLOPS peak)
Host #4:
CUDA: NVIDIA GPU 0: NVIDIA GeForce GTX 1650 SUPER (driver version 570.99, CUDA version 12.8, compute capability 7.5, 3716MB, 3716MB available, 4416 GFLOPS peak)
CUDA: NVIDIA GPU 1: NVIDIA GeForce GTX 1650 (driver version 570.99, CUDA version 12.8, compute capability 7.5, 3717MB, 3717MB available, 3091 GFLOPS peak)

Host #2 (12 GB VRAM) is the only candidate to run LLMs tasks, due to the current VRAM limits: 24 GB or above for LLM, 12 GB for LLMs. |
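If you'd rather not restart the BOINC client, here is a minimal sketch that reads the same details (name, VRAM, compute capability) directly through NVML; it assumes the nvidia-ml-py package is installed and an NVIDIA driver is present:

```python
# Minimal sketch (assumes nvidia-ml-py: pip install nvidia-ml-py): list each
# NVIDIA GPU with its compute capability and total VRAM via NVML, the same
# information the BOINC event log prints at startup.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):            # older bindings return bytes
        name = name.decode()
    major, minor = pynvml.nvmlDeviceGetCudaComputeCapability(handle)
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU {i}: {name}  CC {major}.{minor}  {mem.total / 1024**2:.0f} MB VRAM")
pynvml.nvmlShutdown()
```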