Message boards :
Number crunching :
Error invoked kernel
| Author | Message |
|---|---|
|
Send message Joined: 20 Sep 13 Posts: 16 Credit: 3,433,447 RAC: 0
|
# Engine failed: Error invoking kernel: CUDA_ERROR_LAUNCH_FAILED (719)

Most WUs errored out on my 3 NVIDIA Titans. Any hints? |
Retvari Zoltan Send message Joined: 20 Jan 09 Posts: 2380 Credit: 16,897,957,044 RAC: 0
|
# Engine failed: Error invoking kernel: CUDA_ERROR_LAUNCH_FAILED (719)

You also have: # Engine failed: Particle coordinate is nan, which is usually the result of too much overclocking, or a failing memory chip on your card. NaN on Wikipedia |
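To illustrate why one corrupted value kills a whole run: in a molecular dynamics step every particle feels forces from the others, so once any coordinate becomes NaN, the NaN propagates through the force calculation to the entire system. A minimal toy sketch in Python (a hypothetical coupled integrator for illustration only, not the actual ACEMD code):

```python
import math

def step(pos, vel, dt=0.01):
    """One toy integration step with a force that couples all particles
    (each particle is pulled toward the mean position)."""
    mean = sum(pos) / len(pos)            # NaN in any coordinate poisons the mean
    force = [mean - x for x in pos]       # ...and therefore every force
    vel = [v + f * dt for v, f in zip(vel, force)]
    pos = [x + v * dt for x, v in zip(pos, vel)]
    return pos, vel

pos = [0.1, 0.2, 0.3]
vel = [0.0, 0.0, 0.0]
pos[1] = float("nan")   # one bad value, e.g. from a failing memory chip
pos, vel = step(pos, vel)
print(all(math.isnan(x) for x in pos))   # True: after one step, every coordinate is NaN
```

This is why the application aborts with "Particle coordinate is nan" rather than continuing: the simulation state is unrecoverable once the NaN spreads.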
|
Send message Joined: 20 Sep 13 Posts: 16 Credit: 3,433,447 RAC: 0
|
Thank you, I will look into it and run the cards at stock frequency for now. _heinz |
|
Send message Joined: 20 Sep 13 Posts: 16 Credit: 3,433,447 RAC: 0
|
I would try an app_config.xml:

```xml
<app_config>
  <app>
    <name>acemd3</name>
    <gpu_versions>
      <cpu_usage>1.0</cpu_usage>
      <gpu_usage>0.5</gpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

but BOINC says: 22.04.2020 12:31:02 | GPUGRID | Your app_config.xml file refers to an unknown application 'acemd3'. Known applications: None

Can someone tell me the right name, please? |
|
Send message Joined: 20 Sep 13 Posts: 16 Credit: 3,433,447 RAC: 0
|
Admin, please delete the multiple messages. |
|
Send message Joined: 8 Aug 19 Posts: 252 Credit: 458,054,251 RAC: 0
|
Retvari Zoltan said: # Engine failed: Error invoking kernel: CUDA_ERROR_LAUNCH_FAILED (719)

I don't think limiting your GPU usage will solve your errors. It's not a matter of percentage, but of clock frequency, that causes tasks to fail when the wrapper starts them on the GPU. Some cards appear to be easier to crash than others. GPUGRID WUs are the most sensitive tasks I've seen to processor overclocking errors, and I had to slow my GTX 1060 down when I came here even though it ran games and other BOINC projects OK. My errors were hit and miss like yours, only not as many; they usually occurred at ~30 sec. Your base clock speed is 1000 MHz per https://www.geforce.com/hardware/desktop-gpus/geforce-gtx-titan-x/specifications |
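As a first diagnostic step before blaming hardware, it can help to compare the actual running clock against the reference base clock. A sketch of the relevant nvidia-smi commands (assumptions: a recent driver; the lock/reset flags only work on Volta-class cards and newer, so on older Titans they will report "not supported" and the clock offset must instead be reset in your overclocking tool or nvidia-settings):

```shell
# Read-only and safe: report current vs. maximum SM clock for each GPU
nvidia-smi --query-gpu=index,clocks.sm,clocks.max.sm --format=csv

# On supported cards, pin the core clock to the reference base clock
# (-i selects the GPU; repeat with -i 1 and -i 2 on a three-card host)
sudo nvidia-smi -i 0 --lock-gpu-clocks=1000,1000

# Undo after testing
sudo nvidia-smi -i 0 --reset-gpu-clocks
```

If tasks stop erroring at the reference clock, the overclock was the culprit; if they still fail, memory or power problems become more likely.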
|
Send message Joined: 12 Jul 17 Posts: 404 Credit: 17,408,899,587 RAC: 0
|
I would try a app_config.xml

Yours looks exactly like mine, except I only run one GG WU per GPU. BOINC always says that when you don't have an acemd3 WU downloaded. Wow, eight duplicates! My record was three. No worries, it happens to us all and I don't know why. |
|
Send message Joined: 11 Jul 09 Posts: 1639 Credit: 10,159,968,649 RAC: 351
|
Just got a "# Engine failed: Particle coordinate is nan" error on WU 19441088 - as have all my wingmates. I'm pretty sure that will be a mistake in the data prepared for the run, nothing to do with unstable GPUs. |
|
Send message Joined: 13 Dec 17 Posts: 1419 Credit: 9,119,446,190 RAC: 731
|
Just got a "# Engine failed: Particle coordinate is nan" error on WU 19441088 - as have all my wingmates.

I concur. Not all NaN errors are the result of a misbehaving card. Sometimes the task is just badly formatted. |
Retvari Zoltan Send message Joined: 20 Jan 09 Posts: 2380 Credit: 16,897,957,044 RAC: 0
|
Just got a "# Engine failed: Particle coordinate is nan" error on WU 19441088 - as have all my wingmates.

This task is 2ph7A01_348_3-TONI_MDADpr4sp-7-10-RND7696, so it's the 7th of 10 workunits. Perhaps the previous host made an error, which resulted in a permanent NaN error on all hosts. |
|
Send message Joined: 8 Aug 19 Posts: 252 Credit: 458,054,251 RAC: 0
|
Just got a "# Engine failed: Particle coordinate is nan" error on WU 19441088 - as have all my wingmates.

Hi Richard Haselgrove, your 1660-S is not overclocked, correct? It looks like we'll have to wait until that WU reaches the Apr 29 deadline on iBat's machine (after viewing its task status) before seeing if it crashes again. I've been getting more tasks lately which have crashed on 1 or 2 other hosts before being sent to mine. I noticed several error-prone machines were Science United hosts and a few were grcpool hosts. Fascinating. |
|
Send message Joined: 8 Aug 19 Posts: 252 Credit: 458,054,251 RAC: 0
|
This task is a 2ph7A01_348_3-TONI_MDADpr4sp-7-10-RND7696

Zoltan, I think you meant to write 8th of 10, as the first one is always named 0-10. Or am I confused? 🤔 |
Retvari Zoltan Send message Joined: 20 Jan 09 Posts: 2380 Credit: 16,897,957,044 RAC: 0
|
This task is a 2ph7A01_348_3-TONI_MDADpr4sp-7-10-RND7696

You're right, it's the 8th; the index is zero-based. Probably the host doing the 7th piece made an error. (That's what I should have posted, to correctly include the number 7.) |
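For reference, the trailing fields of a task name encode the zero-based segment index and the chain length, so `...-7-10-RND...` is segment 7 of 10 counting from zero, i.e. the 8th piece. A small sketch of that reading (the name layout is inferred from the examples in this thread, not from project documentation):

```python
def describe_segment(task_name: str) -> str:
    """Parse '<base>-<index>-<total>-RNDxxxx' and report the 1-based position."""
    parts = task_name.split("-")
    idx, total = int(parts[-3]), int(parts[-2])  # zero-based index, chain length
    return f"segment {idx + 1} of {total}"

print(describe_segment("2ph7A01_348_3-TONI_MDADpr4sp-7-10-RND7696"))
# -> segment 8 of 10
```

This matches the discussion above: the "7" means seven segments were completed before this one, so an error introduced by the host that ran piece 7 (the 7th index, i.e. segment 8's predecessor chain) would poison every retry of this workunit.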
|
Send message Joined: 11 Jul 09 Posts: 1639 Credit: 10,159,968,649 RAC: 351
|
Hi Richard Haselgrove, Your 1660-S is not overclocked; Correct?

Correct. I gave that machine a complete motherboard/CPU/RAM transplant at the end of January, and fitted two brand-new, identical 1660-S GPUs. It's in a high-airflow case with a Corsair modular power supply. I can do basic hardware work on computers, but I'm not a hardware specialist, so I bought the motherboard bundle pre-assembled and tested from a local trade supplier, with the CPU cooler already attached. It ran on SETI until that project stopped sending out new work (bad timing on my part!), and started working here at the beginning of April. Application details Tasks I think 4 errors against 1167 completed tasks indicates the machine is basically healthy. Two of the other errors reached the full 8 failures on all machines that attempted them, and one seems to have been a ghost that I never received. |
robertmiles Send message Joined: 16 Apr 09 Posts: 503 Credit: 769,991,668 RAC: 0
|
admin pleasedelete the multiple messages

If you try soon enough, you should be able to do part of the work by editing all but one of them down to just one character. Making most of them the same single character is likely to trigger an automatic process for hiding duplicate messages. |
|
Send message Joined: 8 Aug 19 Posts: 252 Credit: 458,054,251 RAC: 0
|
Richard, you have fewer errors than I do, I think. Looking at how long this WU runs before throwing an error, it probably is a bad egg. All the other defective ones I've gotten last around 10-20 seconds before bombing. The errors I saw from instability came a bit later, around 30 seconds; I figure that it takes about that long for the wrapper to get its task running on the GPU. (That's also the approximate timing of getting a mismatched GPU error on restarts, the cause of most of my errors.) I do recall that recently I showed 10 errors and 5 of them were bad WUs. I'll lay my nickel on WU 19441088 being another dud task.💣 |
|
Send message Joined: 8 Aug 19 Posts: 252 Credit: 458,054,251 RAC: 0
|
...And sure enough, it lasted no longer than 16 seconds before it choked on everybody's hosts. Heinz is getting errors at later stages of his tasks than we experience when running bad WUs: https://www.gpugrid.net/results.php?hostid=159065

I have had errors before that were caused by running short of memory, although I see that is not a problem in Heinz's case. I had 7 Rosetta threads and two GPUGRID wrappers running in 8 GB of RAM with an 8182 MB swapfile. Every time a Rosetta COVID task would suddenly hog memory, one of the wrappers would report that an output file could not be found (I can't remember which) and throw an error. I've since increased to 12 GB of RAM and solved that issue.

I had a PSU failure on my fast host today (a recycled 600 W cheapo from the days of Molex connectors), and it makes me wonder if Heinz might have power issues with his 3 GTX Titans in one host. Just a thought, but if they're clocked higher than factory specs, IMHO that is the first thing to suspect. 🤔 |
|
Send message Joined: 13 Dec 17 Posts: 1419 Credit: 9,119,446,190 RAC: 731
|
It ran on SETI until that project stopped sending out new work (bad timing on my part!)

Ha ha, LOL. I did the same thing: completely rebuilt and upgraded the 3900X host for SETI and put it back online a few days before SETI pulled the plug. Now it just sits there, idle, looking pretty. |
|
Send message Joined: 8 Aug 19 Posts: 252 Credit: 458,054,251 RAC: 0
|
I have had errors before that were caused by running short of memory, although I see that is not a problem in Heinz's case. I had 7 Rosetta threads and two GPUGRID wrappers running in 8 GB of ram with 8182MB swapfile. Every time a Rosetta COVID task would suddenly hog memory, one of the wrappers would give a message that an output file could not be found (can't remember which) and throw an error...

...Which made me curious about what that particular host is running on the CPU. I see that _heinz has recently switched to running World Community Grid: https://boinc.netsoft-online.com/e107_plugins/boinc/get_user.php?cpid=5e024335320e436c4d050e073963e326

Does anyone here know how much memory those tasks use? I found that LHC@home tasks were too memory-hungry to run at 2 GB of RAM per thread. That might be an issue here. |
|
Send message Joined: 13 Dec 17 Posts: 1419 Credit: 9,119,446,190 RAC: 731
|
I recently discovered a website with the ability to dive deep into the data for all the BOINC projects. This page lists the RAM requirements of every project's CPU apps: http://wuprop.boinc-af.org/results/ram.py |
©2025 Universitat Pompeu Fabra