Message boards : Number crunching : Testing acemd3 windows (thread no longer relevant)

Joined: 13 Dec 17 · Posts: 1419 · Credit: 9,119,446,190 · RAC: 662

> Guys, you're having a nice discussion here but please don't take this thread completely off-topic - important news could appear here.

Bah humbug. If you find these couple of posts so offensive, move them to what you consider an appropriately labelled thread, Mr. Moderator. I was just trying to offer some help for a poster's question.

Joined: 18 Jun 12 · Posts: 297 · Credit: 3,572,627,986 · RAC: 0

> Bah humbug.

How much is your electric bill each month?

Joined: 13 Dec 17 · Posts: 1419 · Credit: 9,119,446,190 · RAC: 662

> Bah humbug.

I assume this was directed at me. About $650.

Joined: 21 Mar 16 · Posts: 513 · Credit: 4,673,458,277 · RAC: 0

Just curious, Keith: why does it say under your computers that you have 64 RTX 2080s in one system ("[64] NVIDIA GeForce RTX 2080 (4095MB) driver: 430.26") and 48 GPUs in the others?

Retvari Zoltan · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0

> Just curious, Keith: why does it say under your computers that you have 64 RTX 2080s in one system ("[64] NVIDIA GeForce RTX 2080 (4095MB) driver: 430.26") and 48 GPUs in the others?

It's a "hacked" BOINC manager for SETI@home and the CUDA10 special app. The SETI@home project sends at most 100 workunits each for the CPU and the GPU. That is fair enough for the CPU, but the CUDA10 special app finishes a workunit in ~45 seconds on an RTX 2080 Ti, so 100 workunits are done in roughly an hour (which is inadequately low, especially for the regular outage every Tuesday). This way a "hacked" host can queue up to 6,400 workunits for the GPU(s), which is enough to sustain work through an outage at such a fast processing speed.

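A quick back-of-the-envelope sketch of those numbers (a sketch only; the 100-task-per-GPU server limit, the ~45 s task runtime, and the four physical cards are taken from the posts above, not measured here):

```python
# Back-of-the-envelope check of the queue-depth numbers quoted above.
# All constants come from the posts in this thread; nothing here is measured.

SERVER_LIMIT_PER_GPU = 100   # SETI@home sends at most 100 GPU tasks per reported GPU
SECONDS_PER_TASK = 45        # ~45 s per workunit with the CUDA10 special app
PHYSICAL_GPUS = 4            # cards actually present in the host

def hours_of_cached_work(reported_gpus: int) -> float:
    """How long a full queue lasts while the physical GPUs drain it."""
    queue_size = SERVER_LIMIT_PER_GPU * reported_gpus
    tasks_per_hour = PHYSICAL_GPUS * 3600 / SECONDS_PER_TASK
    return queue_size / tasks_per_hour

print(f"Reporting  4 GPUs: {hours_of_cached_work(4):5.1f} h of cached work")   # ~1.2 h
print(f"Reporting 64 GPUs: {hours_of_cached_work(64):5.1f} h of cached work")  # ~20 h
```

With the server limit tied to the reported GPU count, spoofing 64 cards stretches a bit over an hour of cached work into most of a day, which is what made the long Tuesday outages survivable.
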
JStateson · Joined: 31 Oct 08 · Posts: 186 · Credit: 3,578,903,157 · RAC: 0

> It's a "hacked" BOINC manager for SETI@home and the CUDA10 special app. [...]

I discovered that some time ago in a post Keith made over at SETI. The only problem I have with this hack is that if something goes wrong, thousands of work units could error out in a few minutes. From what I see, his systems are well built and unlikely to have problems. I remember years ago that it was possible to reject downloads from SETI that "took too long to finish", and the tasks were dumped by a script. I thought that was cheating. On this project one can select the 2-3 hour or the 8 hour runs, and there is no need to go to extremes to get ahead fast on credits.

Joined: 13 Dec 17 · Posts: 1419 · Credit: 9,119,446,190 · RAC: 662

> It's a "hacked" BOINC manager for SETI@home and the CUDA10 special app. [...]

It's not the Manager, it's the client that has been modified. This came about when the SETI Tuesday maintenance outages were lasting 14-16 hours or longer. It's not needed as much now that the outages only last the standard 5 hours.

You are correct: all it takes is for the CUDA driver to go missing while you aren't looking at the host, and it will zip through the cache in a matter of minutes. I did something stupid just the other day when I updated while BOINC was running and did not realize the update was going to update the Nvidia drivers. It errored out a hundred tasks in less than a minute before the driver got reloaded. So you have to be aware of what's going on and have well-running systems to begin with.

I spoof the maximum number of cards (64) that BOINC allows on the four-card hosts, and 48 cards on the three-card hosts. I could probably pull those back to 36 and 24 to make it through Tuesdays now. One of the other advantages is that I don't have to fight for tasks with all the other empty hosts when the project comes back. In fact, I don't even report or ask for tasks until the ready-to-send (RTS) buffer gets refilled and the servers have settled back into normality after the feeding frenzy.

Joined: 12 Jul 17 · Posts: 404 · Credit: 17,408,899,587 · RAC: 0

Hey Toni, I'm getting tired of doing astronomy. Give me some protein to chew on!!!

Joined: 13 Dec 17 · Posts: 1419 · Credit: 9,119,446,190 · RAC: 662

> Hey Toni, I'm getting tired of doing astronomy.

+1. Ha ha ha, LOL. Love it.

Joined: 17 Feb 09 · Posts: 91 · Credit: 1,603,303,394 · RAC: 0

+2. Yes, I wish they would at least release the Linux version of acemd3. I've been doing E@H 100% since mid-May, although I do love astronomy. I don't know what the Windows/Linux ratio is here, but I am sure they are missing out on a lot of WU work by keeping the Linux app offline until it is ready for Windows as well.

ServicEnginIC · Joined: 24 Sep 10 · Posts: 592 · Credit: 11,972,186,510 · RAC: 1,075

> I don't know what the Windows/Linux ratio is here, but I am sure they are missing out on a lot of WU work by keeping the Linux app offline until it is ready for Windows as well.

As seen at the following link, in January we were celebrating reaching 4 PetaFLOPS of computing power: http://www.gpugrid.net/forum_thread.php?id=4880#51189

Now it has dropped to about half that value... I'd be very surprised if a new version were released in August, because it is usually a low-activity month in university environments :-|

Joined: 17 Feb 09 · Posts: 91 · Credit: 1,603,303,394 · RAC: 0

Ah, that kind of implies to me that the Windows/Linux ratio here is very roughly 1:1. That means GPUGrid is losing about half of their current WU production by keeping Linux machines inactive. Hey TONI, please!! :)

Joined: 2 Jul 16 · Posts: 338 · Credit: 7,987,341,558 · RAC: 193

When the Linux app first went down, I noted here somewhere that Free-DC saw a drop of about 1/3. Maybe some of that was because Windows PCs now had more of the task pool. It is also summer, when people are on vacation and shut down PCs more, both because they are away and because of the heat.

Joined: 12 Jul 17 · Posts: 404 · Credit: 17,408,899,587 · RAC: 0

Now that the Windows license appears to have expired, it's time to shut down the old applications and turn on the new Linux application.

Joined: 13 Dec 17 · Posts: 1419 · Credit: 9,119,446,190 · RAC: 662

Doubtful that happens, as they still haven't released a working Windows acemd3 wrapper app to test.

Joined: 22 Oct 10 · Posts: 42 · Credit: 1,752,050,315 · RAC: 42

I moved the following quoted posting from the adjoining forum, as it obviously fits this subject matter more closely, and it would have been lost without being answered where it was originally posted. Billy Ewell 1931

> > I think the definitions of "long-run" tasks and "short-run" tasks have gone away with their applications. Only New ACEMD3 tasks are available now and in the future.
>
> @TONI: would you please answer the above assumption. I have my RTX 2080 set for ACEMD3 only and my 2 GTX 1060s set for "Long" and "Short" WUs only, but my 1060s have not received a task in many days. Also, why not update the GPUGrid preferences selection options to reflect reality? I realize this is not the best forum to address the situation, but maybe it will be answered anyway. Billy Ewell 1931.

Joined: 22 Oct 10 · Posts: 42 · Credit: 1,752,050,315 · RAC: 42

I just did something that answered my own question: I modified my GPUGrid preferences on my two Windows 10 64-bit Xeon and i3 computers, each equipped with one GTX 1060. Both computers have joined my Windows 10 64-bit i7 with an RTX 2080 in happily crunching GPUGrid work units under the current title ACEMD3. By the way, I excluded all other options in the preferences menus, even though I understand it probably does not matter.

God is Love, JC proves it. I t... · Joined: 24 Nov 11 · Posts: 30 · Credit: 201,648,059 · RAC: 0

Is anyone else having trouble with a LOT of WUs erroring out on slightly older GPUs? My new 1660 Ti is doing fine, but my (not that very old) 950M has a VERY high error rate after running for many, many hours (after finishing, it would seem):

| Task | Workunit | Sent | Reported | Status | Run time (s) | CPU time (s) | Credit | Application |
|---|---|---|---|---|---|---|---|---|
| 21553648 | 16894732 | 5 Dec 2019 23:25:58 UTC | 8 Dec 2019 10:03:27 UTC | Error while computing | 210,744.15 | 208,424.00 | --- | New version of ACEMD v2.10 (cuda101) |
| 21549725 | 16891188 | 3 Dec 2019 9:30:40 UTC | 5 Dec 2019 23:30:57 UTC | Completed and validated | 221,732.69 | 219,584.90 | 61,000.00 | New version of ACEMD v2.10 (cuda101) |
| 21544426 | 16886529 | 30 Nov 2019 19:54:47 UTC | 3 Dec 2019 9:25:02 UTC | Error while computing | 213,518.60 | 211,953.20 | --- | New version of ACEMD v2.10 (cuda101) |
| 21532174 | 16876007 | 28 Nov 2019 6:09:16 UTC | 30 Nov 2019 20:19:30 UTC | Error while computing | 221,587.20 | 219,136.50 | --- | New version of ACEMD v2.10 (cuda101) |
| 21509135 | 16855905 | 23 Nov 2019 4:50:17 UTC | 28 Nov 2019 6:09:16 UTC | Completed and validated | 151,235.95 | 150,607.10 | 61,000.00 | New version of ACEMD v2.10 (cuda101) |
| 21507371 | 16854655 | 22 Nov 2019 21:55:11 UTC | 25 Nov 2019 6:44:29 UTC | Error while computing | 203,591.42 | 202,247.10 | --- | New version of ACEMD v2.10 (cuda101) |

Host details, from the BOINC startup log of 12/8/2019 11:33:58 PM:

- CUDA: NVIDIA GPU 0: GeForce GTX 950M (driver version 441.20, CUDA version 10.2, compute capability 5.0, 2048MB, 1682MB available, 1188 GFLOPS peak)
- OpenCL: NVIDIA GPU 0: GeForce GTX 950M (driver version 441.20, device version OpenCL 1.2 CUDA, 2048MB, 1682MB available, 1188 GFLOPS peak)
- OpenCL: Intel GPU 0: Intel(R) HD Graphics 530 (driver version 21.20.16.4550, device version OpenCL 2.0, 3227MB, 3227MB available, 202 GFLOPS peak)
- OpenCL CPU: Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz (OpenCL driver vendor: Intel(R) Corporation, driver version 6.8.0.392, device version OpenCL 2.0 (Build 392))
- Host name: Laptop-6AQTD8V-VCP-LLP-PhD
- Processor: 8 GenuineIntel Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz [Family 6 Model 94 Stepping 3]
- Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss htt tm pni ssse3 fma cx16 sse4_1 sse4_2 movebe popcnt aes f16c rdrand syscall nx lm avx avx2 vmx tm2 pbe fsgsbase bmi1 hle smep bmi2
- OS: Microsoft Windows 10: Core x64 Edition, (10.00.18363.00)
- Memory: 7.90 GB physical, 20.90 GB virtual
- Disk: 929.69 GB total, 843.01 GB free

God is Love, JC proves it. I t... · Joined: 24 Nov 11 · Posts: 30 · Credit: 201,648,059 · RAC: 0

A couple of those, resultid=21544426 and resultid=21532174, had said "Detected memory leaks!", so I ran extensive memory diagnostics, but no errors were reported by Windows. BOINC did not indicate whether these were RAM or GPU "memory leaks".

I am also getting upload failures:

    <file_xfer_error>
    <file_name>initial_1132-ELISA_GSN0V1-6-100-RND5960_0_0</file_name>
    <error_code>-240 (stat() failed)</error_code>
    </file_xfer_error>

https://gpugrid.net/result.php?resultid=21544426

    <file_name>test324-TONI_GSNTEST3-16-100-RND7959_0_0</file_name>
    <error_code>-240 (stat() failed)</error_code>

https://gpugrid.net/result.php?resultid=21532174

The next one said nothing about "memory leaks" but still gave an upload failure:

    <file_xfer_error>
    <file_name>initial_1497-ELISA_GSN4V1-20-100-RND8978_0_0</file_name>
    <error_code>-240 (stat() failed)</error_code>
    </file_xfer_error>

The other projects I am running currently (Universe@Home, Collatz) have no problems with file uploads.

LLP, PhD

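For what it's worth, "-240 (stat() failed)" is literally what it says: the client tried to stat() the output file it was about to upload and the call failed, which typically happens when the file does not exist, for example because the task errored out before writing it. A minimal illustration of that failure mode, with a purely hypothetical path:

```python
# Illustration only: stat() raises when the target file is missing, which is
# the usual reason a BOINC upload reports "-240 (stat() failed)".
# The path below is hypothetical and does not refer to a real slot or project dir.
import os

missing_output = "/tmp/hypothetical_boinc_output_file_0_0"

try:
    os.stat(missing_output)
except FileNotFoundError as e:
    print(f"stat() failed: {e.strerror} ({missing_output})")
```

That would be consistent with the pattern above, where the first two failing uploads belong to the same results that ended in "Error while computing".
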
Joined: 9 Dec 08 · Posts: 1006 · Credit: 5,068,599 · RAC: 0

"Memory leaks" messages are always present on Windows; they are just an unfortunate printout, not errors themselves. If there is an error message, it will be somewhere else in the text. Mobile cards are not suitable for crunching; it's surprising that it even starts. See the FAQ item.
