Kill forked child process when its ssh command exits - C++

I'm using C++11 and Linux. I am attempting to start up multiple ssh commands using fork() and popen() and monitor when each ssh command stops running. When I kill the ssh command on the other computer, it doesn't appear to kill the fork() child process that started it. The child process continues to run until I exit the program. What do I need to do to kill the child process once the ssh command that was called with popen() quits running? Is there something better I could use for this than popen() to call the ssh command?

You need to call wait or waitpid in order for the O/S to remove the child process. A completed child process which has not had its status retrieved by its parent with wait becomes a "zombie" process.
If you're not interested in the child process status but only want to have them cleaned up, you can install a signal handler for SIGCHLD, which will fire whenever one of your child processes finishes, and call wait within that handler to "reap" the child.
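For example, here is a minimal sketch of that approach (the handler name and the setup in main() are illustrative, not taken from your program):
#include <errno.h>
#include <signal.h>
#include <sys/wait.h>

// Reap every child that has already exited so none linger as zombies.
// WNOHANG makes waitpid() return immediately once no finished child remains.
static void handle_sigchld(int) {
    int saved_errno = errno;               // waitpid() may clobber errno
    while (waitpid(-1, nullptr, WNOHANG) > 0) {
        // a child (e.g. the one running an ssh command) has finished
    }
    errno = saved_errno;
}

int main() {
    struct sigaction sa = {};
    sa.sa_handler = handle_sigchld;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART | SA_NOCLDSTOP;  // restart interrupted syscalls
    sigaction(SIGCHLD, &sa, nullptr);

    // ... fork() and popen() your ssh commands here ...
}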

The child process should periodically query the status of the ssh process ID and, based on the response it gets back, decide whether to kill it. You can use getppid() to get the parent process ID.
After getting the process ID, you can check whether it is still active by using
the script below:
if ps -p $PID > /dev/null
then
# pid is active do something
fi
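If you would rather do the same check from C++ instead of shelling out, sending signal 0 with kill() is a common way to test whether a PID still exists. This sketch is an assumption on my part, not part of the script above:
#include <errno.h>
#include <signal.h>
#include <sys/types.h>

// Returns true if a process with this PID still exists.
// Signal 0 performs only the existence/permission check; nothing is delivered.
bool process_alive(pid_t pid) {
    if (kill(pid, 0) == 0)
        return true;           // process exists and we may signal it
    return errno == EPERM;     // process exists but belongs to another user
}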

Related

How can I detach a gdb session from outside?

I start a gdb session in the background with a command like this:
gdb --batch --command=/tmp/my_automated_breakpoints.gdb -p pid_of_process &> /tmp/gdb-results.log &
The & at the end lets it run in the background (and the shell is immediately closed afterwards as this command is issued by a single ssh command).
I can find out the pid of the gdb session with ps -aux | grep gdb.
However: How can I gracefully detach this gdb session from the running process just like I would if I had the terminal session in front of me with the (gdb) detach command?
When I kill the gdb session (and not the running process itself) with kill -9 gdb_pid, I get unwanted SIGABRTs afterwards in the running program.
A restart of the service is too time consuming for my purpose.
In case of a successful debugging session with this automated script I could use a detach command inside the batch script. This is however not my case: I want to detach/quit the running gdb session when there are some errors during the session, so I would like to gracefully detach gdb by hand from within another terminal session.
If you ran the gdb command from terminal #1 in the background, you can always bring gdb back into the foreground by running the command fg. Then you can simply press CTRL+C and detach as usual to stop the debugging session gracefully.
Assuming that terminal #1 is now occupied by something else and you cannot use it, you can send a SIGHUP signal to the gdb process to detach it:
sudo kill -s SIGHUP $(pidof gdb)
(Replace the $(pidof gdb) with the actual PID if you have more than one gdb instance)

How to interrupt Python background daemon with Ctrl+C equivalent (forcing KeyboardInterrupt exception)

I have a Python daemon that monitors machine performance during the execution of another software. It basically retrieves data of target processes with ps and writes it to a CSV file to be plotted when the daemon is stopped.
If the daemon is running in a terminal as a foreground process, the user can stop it with Ctrl+C, which forces a KeyboardInterrupt exception. I catch that exception and then plot the content of the CSVs.
The problem comes when I have to launch the daemon as a background process with nohup myDaemon.py &. It works fine, as it generates the CSVs, but since I can't force a KeyboardInterrupt exception, the CSVs are not automatically plotted if I kill or stop the background process with methods other than Ctrl+C.
What I want to avoid is having to move the plotting part to a separate script and run it manually after stopping the daemon.
Found the answer by reading the kill man page. It turns out that signal 2 (SIGINT) is the equivalent of Ctrl+C. Tested it by running kill -2 <Background_Daemon_PID> and it worked just fine.

How to run a QProcess in background?

I want to create a QProcess and run it in the background. I have a scheduler that maintains a queue of jobs to be launched as QProcesses. These QProcesses run commands on an LSF machine. The requirement is that once a QProcess is running, I have to poll it and get its status. To poll the QProcess and get its status, it has to run in the background. If it is not run in the background, the moment the QProcess is launched it will show its status as 0. I want to fetch the status of the QProcess running the command on the LSF machine. How do I run the QProcess in the background to get the correct status?
While the QProcess is running the Unix command, polling the QProcess should show it as running.
A QProcess runs asynchronously (in the "background") by default. You don't need to do anything special.
Create a QProcess instance, set up your signal/slots connections, and then start the process through one of the QProcess::start() functions.
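A rough Qt5 sketch of that pattern (the command and arguments below are placeholders, not your actual LSF command):
#include <QCoreApplication>
#include <QDebug>
#include <QProcess>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);
    QProcess proc;

    // Report state changes (NotRunning -> Starting -> Running) as they happen.
    QObject::connect(&proc, &QProcess::stateChanged,
                     [](QProcess::ProcessState s) { qDebug() << "state:" << int(s); });

    // finished() fires when the command ends, with its exit code.
    QObject::connect(&proc,
        static_cast<void (QProcess::*)(int, QProcess::ExitStatus)>(&QProcess::finished),
        [&app](int code, QProcess::ExitStatus) {
            qDebug() << "finished with exit code" << code;
            app.quit();
        });

    // start() returns immediately; the child runs asynchronously.
    proc.start("ssh", QStringList() << "lsf-host" << "my_command");  // placeholder

    // At any point you can also poll: proc.state() == QProcess::Running
    return app.exec();
}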

Qt restart application whilst limiting instances to 1

I'm using Qt5.6, I have code which will restart the application, but I also want to limit the number of instances.
The code that limits the instances works, and so does the code that restarts the application. But with the limiting code enabled, the application will not restart: it closes down, and I'm guessing the restart is being blocked because, at the time it tries to launch the new instance, the PID of the original hasn't cleared.
Question is, how to achieve the result of closing the application, whilst limiting the total number of instances to 1 ?
If this hasn't been solved by tomorrow I will post the code for restarting and limiting instances; I don't have it with me at the moment.
Code to restart the application:
qApp->quit();
QProcess::startDetached(qApp->arguments()[0], qApp->arguments());
These are just hints for the watchdog script; a combined sketch follows them:
1- You need to use QProcess::startDetached to run your script before quitting your App. This allows the script process to keep running after your App exits.
QProcess::startDetached( "bash", QStringList() << "-c" << terminalCommand );
2- You need to pass the current App PID to your watchdog script via terminalCommand.
To get the current App PID in Qt, use
qApp->applicationPid();
3- In your watchdog script, have an infinite loop that checks for the PID by doing
ps aux | grep -v 'grep' | grep $PID
Once the PID dies, start your App again from the watchdog script.
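Put together, a hypothetical restart sequence could look like this (the script path /path/to/watchdog.sh and its argument handling are assumptions; adapt them to your setup):
// Hypothetical sketch: hand our PID and binary path to a watchdog script,
// launch it detached so it survives us, then quit.
// The script is expected to loop on `ps aux | grep -v grep | grep $1`
// and start "$2" again once the PID disappears.
const QString terminalCommand =
    QString("/path/to/watchdog.sh %1 %2")
        .arg(qApp->applicationPid())        // PID the script watches
        .arg(qApp->arguments().at(0));      // binary it restarts
QProcess::startDetached("bash", QStringList() << "-c" << terminalCommand);
qApp->quit();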

How to restart Celery gracefully without delaying tasks

We use Celery with our Django webapp to manage offline tasks; some of these tasks can run up to 120 seconds.
Whenever we make any code modifications, we need to restart Celery to have it reload the new Python code. Our current solution is to send a SIGTERM to the main Celery process (kill -s 15 `cat /var/run/celeryd.pid`), then to wait for it to die and restart it (python manage.py celeryd --pidfile=/var/run/celeryd.pid [...]).
Because of the long-running tasks, this usually means the shutdown will take a minute or two, during which no new tasks are processed, causing a noticeable delay to users currently on the site. I'm looking for a way to tell Celery to shut down, but then immediately launch a new Celery instance to start running new tasks.
Things that didn't work:
Sending SIGHUP to the main process: this caused Celery to attempt to "restart," by doing a warm shutdown and then relaunching itself. Not only does this take a long time, it doesn't even work, because apparently the new process launches before the old one dies, so the new one complains ERROR: Pidfile (/var/run/celeryd.pid) already exists. Seems we're already running? (PID: 13214) and dies immediately. (This looks like a bug in Celery itself; I've let them know about it.)
Sending SIGTERM to the main process and then immediately launching a new instance: same issue with the Pidfile.
Disabling the Pidfile entirely: without it, we have no way of telling which of the 30 Celery processes is the main process that needs to be sent a SIGTERM when we want it to do a warm shutdown. We also have no reliable way to check whether the main process is still alive.
celeryd has an --autoreload option. If enabled, the celery worker (main process) will detect changes in celery modules and restart all worker processes. In contrast to the SIGHUP signal, autoreload restarts each process independently when its currently executing task finishes. That means that while one worker process is restarting, the remaining processes can execute tasks.
http://celery.readthedocs.org/en/latest/userguide/workers.html#autoreloading
I've recently fixed the bug with SIGHUP: https://github.com/celery/celery/pull/662
rm *.pyc
This causes the updated tasks to be reloaded. I discovered this trick recently, I just hope there are no nasty side effects.
Well, you are using SIGHUP (1) for a warm shutdown of Celery. I am not sure it actually causes a warm shutdown, but SIGINT (2) would cause one. Try SIGINT in place of SIGHUP and then start Celery manually in your script (I guess).
Can you launch it with a custom pid file name, possibly timestamped, and key off of that to know which PID to kill?
CELERYD_PID_FILE="/var/run/celery/%n_{timestamp}.pid"
I don't know the timestamp syntax, but maybe you do or you can find it.
Then use the current system time to kill off any old PIDs and launch a new one?
A little late, but that can be fixed by deleting the file called celerybeat.pid.
Worked for me.
I think you can try this:
kill -s HUP `cat /var/run/celeryd.pid`
python manage.py celeryd --pidfile=/var/run/celeryd.pid
HUP may recycle every free worker while leaving workers that are executing tasks running, so those workers can be trusted to finish their current work. Then you can safely start a new Celery main process and its workers. The old workers kill themselves once their tasks have finished.
I've used this approach in our production environment and it seems safe. Hope this helps!