I want to create a QProcess and run it in the background. I have a scheduler that maintains a queue of jobs to be launched as QProcesses; these processes run commands on an LSF machine. The requirement is that once a QProcess is running I have to poll it and get its status. To poll the QProcess for its status, it has to run in the background; if it doesn't, then the moment the QProcess is launched its status shows as 0. How do I run the QProcess in the background so that I get the correct status?
While the QProcess is running the Unix command, polling it should report it as running.
A QProcess already runs asynchronously (in the "background") by default. You don't need to do anything special.
Create a QProcess instance, set up your signal/slot connections, and then start the process through one of the QProcess::start() overloads. While the command is running, QProcess::state() returns QProcess::Running, and the finished() signal tells you when it has exited.
I have successfully created a periodic task which updates each minute in a Django app. Everything is running as expected, using celery -A proj worker -B.
I am aware that using celery -A proj worker -B to execute the task is not advised, however, it seems to be the only way for the task to be run periodically.
I am logging on to the server using GitBash, after execution, I would like to exit GitBash with the celery tasks still being executed periodically.
When I press Ctrl+Fn+Shift, the worker exits cold, which stops execution completely (which is not desirable).
Any help?
If you are on a Linux server, you might want to use a process manager like supervisord or even systemd to keep your process running.
On Windows, one might look at running Celery as a service or running it as part of RabbitMQ.
In WSL, it seems a .bat file will get WSL commands to run as a service.
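As an illustration, a supervisord program entry for such a worker might look like the sketch below. The paths, user, and program name are placeholders, not details from the question:

```ini
; /etc/supervisor/conf.d/celery.conf -- hypothetical paths and names
[program:celeryworker]
command=/path/to/venv/bin/celery -A proj worker -B
directory=/path/to/project
user=www-data
autostart=true
; restart the worker if it exits unexpectedly
autorestart=true
; give long-running tasks time to finish on stop
stopwaitsecs=60
; make sure the worker's child processes are stopped too
stopasgroup=true
```

With something like this in place, supervisord keeps the worker running after you close the SSH session, and you can manage it with supervisorctl restart celeryworker.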
I have a Python daemon that monitors machine performance during the execution of another software. It basically retrieves data of target processes with ps and writes it to a CSV file to be plotted when the daemon is stopped.
If the daemon is running in a terminal as a foreground process, the user can stop it with Ctrl+C, which raises a KeyboardInterrupt exception. I catch that exception and then plot the contents of the CSVs.
The problem comes when I have to launch the daemon as a background process with nohup myDaemon.py &. It works fine, in that it generates the CSVs, but since I can't trigger a KeyboardInterrupt, the CSVs are not automatically plotted if I kill or stop the background process by any means other than Ctrl+C.
What I want to avoid is having to move the plotting part to a separate script and run it manually after stopping the daemon.
Found the answer reading the kill man page. It turns out that signal 2 (SIGINT) is what Ctrl+C sends. I tested it by running kill -2 <Background_Daemon_PID> and it worked just fine.
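As a sketch of that workflow (myDaemon.py is the script from the question; the rest is generic shell):

```shell
# Start the daemon in the background, detached from the terminal,
# and remember its PID.
nohup ./myDaemon.py > daemon.log 2>&1 &
DAEMON_PID=$!

# ... later, stop it exactly as Ctrl+C would:
kill -2 "$DAEMON_PID"   # -2 sends SIGINT
```

SIGINT delivered by kill raises KeyboardInterrupt in Python just as Ctrl+C does, so the plotting code in the exception handler runs in both cases.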
I'm using Qt5.6, I have code which will restart the application, but I also want to limit the number of instances.
The code that limits the instances works, and so does the code that restarts the application. With the limiting code enabled, however, the application will not restart: it closes down, but I'm guessing the restart is being blocked because, at the time it tries to launch the new instance, the PID of the original hasn't cleared.
Question is, how do I close the application while limiting the total number of instances to 1?
If this hasn't been solved by tomorrow I will post the code for restarting and limiting instances, I don't have it with me at the moment.
Code to restart the application:
qApp->quit();
QProcess::startDetached(qApp->arguments()[0], qApp->arguments());
These are just hints for the watchdog script:
1- You need to use QProcess::startDetached to run your script before quitting your app. This allows the script process to outlive your app.
QProcess::startDetached( "bash", QStringList() << "-c" << terminalCommand );
2- You need to pass the current app's PID to your watchdog script via terminalCommand.
To get the current app PID in Qt, use
qApp->applicationPid();
3- In your watchdog script, run an infinite loop that checks for the PID by doing
ps aux | grep -v 'grep' | grep $PID
Once the PID dies, start your app again from the watchdog script.
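Putting the hints together, a minimal watchdog might look like the sketch below. The PID and restart command are passed in as arguments, and ps -p is used in place of the ps aux | grep pipeline (which needs the grep -v 'grep' workaround):

```shell
#!/bin/bash
# Hypothetical usage: watchdog.sh <app_pid> <restart_command>
APP_PID="$1"
RESTART_CMD="$2"

# Poll until the watched PID disappears from the process table.
while ps -p "$APP_PID" > /dev/null 2>&1; do
    sleep 1
done

# The original instance has exited; start a fresh one.
exec $RESTART_CMD
```

Because the app calls QProcess::startDetached on this script before quitting, the watchdog survives the app and relaunches it once the old PID is gone.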
I'm using C++11 and Linux. I am attempting to start multiple ssh commands using fork() and popen() and to monitor when each ssh command stops running. When I kill the ssh command on the other computer, it doesn't appear to kill the fork() child process that started it; the child process continues to run until I exit the program. What do I need to do to kill the child process once the ssh command started with popen() stops running? Is there something better than popen() for calling the ssh command?
You need to call wait or waitpid in order for the O/S to remove the child process. A completed child process which has not had its status retrieved by its parent with wait becomes a "zombie" process.
If you're not interested in the child process status but only want to have them cleaned up, you can install a signal handler for SIGCHLD, which will fire whenever one of your child processes finishes, and call wait within that handler to "reap" the child.
The child process should query the status of the ssh process ID periodically and, based on the response it gets back, decide whether to kill it. You can use getppid() to get the parent PID.
After getting the process ID, you can check whether it is active using the script below:
if ps -p $PID > /dev/null
then
# pid is active do something
fi
We use Celery with our Django webapp to manage offline tasks; some of these tasks can run up to 120 seconds.
Whenever we make any code modifications, we need to restart Celery to have it reload the new Python code. Our current solution is to send a SIGTERM to the main Celery process (kill -s 15 `cat /var/run/celeryd.pid`), then to wait for it to die and restart it (python manage.py celeryd --pidfile=/var/run/celeryd.pid [...]).
Because of the long-running tasks, this usually means the shutdown will take a minute or two, during which no new tasks are processed, causing a noticeable delay to users currently on the site. I'm looking for a way to tell Celery to shutdown, but then immediately launch a new Celery instance to start running new tasks.
Things that didn't work:
Sending SIGHUP to the main process: this caused Celery to attempt to "restart," by doing a warm shutdown and then relaunching itself. Not only does this take a long time, it doesn't even work, because apparently the new process launches before the old one dies, so the new one complains ERROR: Pidfile (/var/run/celeryd.pid) already exists. Seems we're already running? (PID: 13214) and dies immediately. (This looks like a bug in Celery itself; I've let them know about it.)
Sending SIGTERM to the main process and then immediately launching a new instance: same issue with the Pidfile.
Disabling the pidfile entirely: without it, we have no way of telling which of the 30 Celery processes is the main process that needs to be sent a SIGTERM when we want a warm shutdown. We also have no reliable way to check whether the main process is still alive.
celeryd has an --autoreload option. If enabled, the celery worker (main process) will detect changes in Celery modules and restart all worker processes. In contrast to the SIGHUP signal, autoreload restarts each process independently once its currently executing task finishes, so while one worker process is restarting the remaining processes can keep executing tasks.
http://celery.readthedocs.org/en/latest/userguide/workers.html#autoreloading
I've recently fixed the bug with SIGHUP: https://github.com/celery/celery/pull/662
rm *.pyc
This causes the updated tasks to be reloaded. I discovered this trick recently; I just hope there are no nasty side effects.
You're using SIGHUP (1) for a warm shutdown of Celery. I am not sure it actually causes a warm shutdown, but SIGINT (2) does. Try SIGINT in place of SIGHUP and then start Celery manually in your script (I guess).
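A sketch of that sequence, using the pidfile path from the question; the wait loop is an addition to avoid the "Pidfile already exists" race described above:

```shell
PIDFILE=/var/run/celeryd.pid
OLD_PID="$(cat "$PIDFILE")"

# Ask the running main process for a warm shutdown.
kill -s INT "$OLD_PID"

# Wait until the old main process is really gone before reusing the pidfile.
while ps -p "$OLD_PID" > /dev/null 2>&1; do
    sleep 1
done

# Start the replacement worker.
python manage.py celeryd --pidfile="$PIDFILE" &
```

The warm shutdown still waits for in-flight tasks, but the new worker starts the moment the old main process actually exits, rather than colliding with its pidfile.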
Can you launch it with a custom pid file name, possibly timestamped, and key off of that to know which PID to kill?
CELERYD_PID_FILE="/var/run/celery/%n_{timestamp}.pid"
I don't know the timestamp syntax, but maybe you do, or you can find it.
Then use the current system time to kill off any old PIDs and launch a new one?
A little late, but this can be fixed by deleting the file called celerybeat.pid.
Worked for me.
I think you can try this:
kill -s HUP `cat /var/run/celeryd.pid`
python manage.py celeryd --pidfile=/var/run/celeryd.pid
HUP may recycle every idle worker and let the busy workers keep running until their current tasks finish. Then you can safely start a new Celery main process and workers; the old workers will exit on their own once their tasks are done.
I've used this approach in our production setup and it seems safe so far. Hope this helps!