Aborting a job doesn't kill child processes #248
Comments
Adding the "trap" command at the beginning of my script resolves the problem.
I have a problem that might be related to this issue. I use the shell script plugin with the following code.
I'm sorry about this. It was a design choice in Cronicle, because in my use case I never want any child subprocesses to be directly killed. But I now clearly see the need, so I will add a checkbox to enable this behavior as an option. For now, it looks like @dridk found a nice workaround using the trap command:

```bash
#!/bin/bash
trap 'kill $(jobs -p)' EXIT
# Your shell commands here
```
No problem at all, and thanks a lot @jhuckaby. It isn't particularly my use case either; I was trying to understand how Cronicle measures CPU usage and found this issue while experimenting with it. Again, thanks. Cronicle is very useful!
@mplattner does adding the trap workaround solve the problem for you?
I haven't tried it; I killed the processes manually. Sorry.
I have a similar issue. Below is my script triggered from Cronicle:
Below is how I invoke the job.
+1 for this
This solution didn't work for me; the child process is still running after the abort. I tried a different approach, transferring the parent process's PID to its child.
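One standard way to hand the parent's PID to the child in a shell script is the exec builtin, which replaces the shell process with the command instead of forking a new one, so an abort signal sent to the script's PID reaches the actual workload. A minimal sketch, assuming the job only needs to run a single long-lived command (the command name and flag below are placeholders):

```bash
#!/bin/bash
# Any setup (environment variables, logging, argument handling) happens here,
# while the script is still a separate shell process.

# exec replaces this bash process with the command below: the command
# inherits the script's PID, so killing the job's PID kills the workload.
exec my_long_running_command --threads 10   # placeholder command and flag
```

The trade-off is that nothing in the script runs after the exec line, so any post-run cleanup has to happen elsewhere.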
Hello, we are using Cronicle to run and schedule Docker containers.
We made a workaround to stop the Docker container after aborting a job: we copied shell-plugin.js into a new custom docker-plugin.js and overrode the function that handles the SIGTERM signal. After adding the new custom plugin and creating a parameter for it, Docker containers are stopped after hitting abort.
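A lighter-weight variant of the same idea, without copying shell-plugin.js, is to trap the termination signal inside the job's shell script and stop the container explicitly. A sketch, assuming the abort reaches the script as SIGTERM and using placeholder container and image names:

```bash
#!/bin/bash
CONTAINER_NAME="cronicle-job-$$"   # placeholder name, unique per run

# Stop the container when the job is aborted (SIGTERM) or interrupted.
trap 'docker stop "$CONTAINER_NAME"; exit 143' TERM INT

# Run the container in the background of this shell so the trap can fire,
# then wait for it to finish.
docker run --rm --name "$CONTAINER_NAME" my-image:latest &   # placeholder image
wait $!
```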
Summary
I created a plugin which runs the following script:
This script runs a command which uses multiple threads (10 threads here):
When I run it from Cronicle, I can view my job from the command line using htop as follows:
When I abort the job from the Cronicle UI, bcl2fastq is still running on the server; only run_bcl2fastq.sh has been killed.
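For illustration, the trap workaround discussed in the comments, applied to a wrapper like run_bcl2fastq.sh, might look roughly like this (a sketch; the real bcl2fastq arguments are omitted and simply passed through):

```bash
#!/bin/bash
# run_bcl2fastq.sh -- sketch of the wrapper with the trap workaround applied.
# Kill any background children of this script when it exits or is terminated,
# so bcl2fastq does not keep running after the Cronicle job is aborted.
trap 'kill $(jobs -p) 2>/dev/null' EXIT TERM

# Run bcl2fastq in the background, passing arguments through, then wait so
# the trap can fire while it runs.
bcl2fastq "$@" &
wait $!
```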
Your Setup
Operating system and version?
Distributor ID: Ubuntu
Description: Ubuntu 14.04.5 LTS
Release: 14.04
Codename: trusty
Node.js version?
v12.14.0
Cronicle software version?
0.8.38 ?
Are you using a multi-server setup, or just a single server?
Single server
Are you using the filesystem as back-end storage, or S3/Couchbase?
No