I'm using Qt 5.6. I have code that restarts the application, and I also want to limit the number of instances.
The code that limits the instances works, and so does the code that restarts the application. With the limiting code enabled, however, the application will not restart: it closes down, but I'm guessing the restart is being blocked because the PID of the original instance hasn't cleared by the time the new instance is launched.
The question is: how do I close the application and restart it, whilst limiting the total number of instances to 1?
If this hasn't been solved by tomorrow I will post the code for restarting and for limiting instances; I don't have it with me at the moment.
Code to restart the application:
#include <QProcess>  // for QProcess::startDetached

qApp->quit();  // ask the event loop to exit
// relaunch the same executable with the same arguments, detached from this process
QProcess::startDetached(qApp->arguments()[0], qApp->arguments());
These are just hints for the watchdog script:
1- you need to use QProcess::startDetached to run your script before quitting your app. This allows the script process to outlive your app.
QProcess::startDetached( "bash", QStringList() << "-c" << terminalCommand );
2- you need to pass the current app's PID to your watchdog script via terminalCommand.
To get the current app's PID in Qt, use
qApp->applicationPid();
3- in your watchdog script, have an infinite loop that checks for the PID by doing
ps aux | grep -v 'grep' | grep $PID
Once the PID dies, start your app again from the watchdog script.
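Putting those hints together, here is a minimal sketch of such a watchdog script; the script name and its arguments are illustrative, and ps -p is used in place of the grep pipeline above:

#!/bin/bash
# watchdog.sh <pid> <app-path>  -- illustrative names, not from the original post
PID="$1"
APP="$2"

# poll until the watched PID disappears from the process table
while ps -p "$PID" > /dev/null 2>&1; do
    sleep 1
done

# the old instance has fully exited, so a single-instance check will now pass
"$APP" &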
Here's a problem that is driving me nuts. First off, I am not a Linux expert, so I might just be missing some detail.
I am trying to restart an application (namely rpi-webrtc-streamer, but that shouldn't matter) using a shell script. The reason is that when a configuration change happens I need to update the config files and restart.
The idea is to call a bash script using the system() function and pass in the PID of the current process. The script should then kill the process using the supplied PID and execute it again. In theory this shouldn't be a problem...
What may be complicating it is that the process needs to run with sudo. Not sure if that's the case, but I thought I should mention it.
Now this is the script:
#!/bin/bash
echo "restarting streamer..."
echo "killing process with PID $1"
kill $1
# I have tried different intervals, even 10 seconds, doesn't help
sleep 2
echo "running new streamer instance"
echo "path:"
pwd
#printenv
echo "id -u"
# just to verify the script runs with sudo
id -u
./webrtc-streamer --verbose
echo "done"
The problem is that the application fails with the following error:
(direct_socket.cc:77): Failed to listen 0.0.0.0:8888.
... and then it shuts down. So obviously it's not able to open the port. It almost looks as if the previous instance of the app were still holding the port open. I have tried tweaking the number of seconds in the script's sleep (even 10), but that shouldn't be the problem: first, the script only continues executing after the process has actually been killed, and second, the process shuts down immediately anyway; I can see that from the logs.
If, however, I run the app immediately after the script fails, from the shell that executed the initial app in the first place, it runs without any issues and is able to open the port, no matter how many seconds the script slept beforehand.
The only other thing I thought of is that the bash script might be running with different environment variables. I printed them out, but I don't see anything significant.
I also verified that the app does not change the working directory, but that again should not be the problem, as the app actually launches; it just exits after failing to open the port.
I also tried adding sudo before the app execution in the script (which shouldn't be necessary, AFAIK). It doesn't make a difference.
Any ideas?
As suggested by jordanm in the comments, I solved the problem by using systemd.
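For reference, a minimal sketch of what such a unit might look like; the unit name, paths, and options here are assumptions, not taken from the actual setup:

# /etc/systemd/system/webrtc-streamer.service (illustrative)
[Unit]
Description=rpi-webrtc-streamer

[Service]
# adjust to wherever the binary actually lives
WorkingDirectory=/home/pi/rws
ExecStart=/home/pi/rws/webrtc-streamer --verbose
Restart=on-failure

[Install]
WantedBy=multi-user.target

The config-change handler then only needs to run sudo systemctl restart webrtc-streamer; systemd waits for the old process to exit before starting the new one, instead of a script racing it for the port.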
We have a Flask script, get_logs.py, that uses APScheduler and contains the following job:
scheduler.add_job(id="create_recommendation_entries", trigger='interval', seconds=60*10, func=create_entries)
Someone ran the script, and the logs now show that this job is still running at a 10-minute interval even after the script was terminated.
The process ID is not listed, nor does the process show up using grep, and we don't know whether it was executed using nohup or gunicorn.
How do I kill this job based on id="create_recommendation_entries", given that I don't know any of its stats (port, PID, etc.)?
Rerunning the script creates a different thread and stops on Ctrl+C, but the previous one is still running.
I have a bash script which will take 5-6 hours to complete. Yesterday I signed up for the AWS 12-month free tier and am running an EC2 (Ubuntu) instance on it. I want that bash script to keep running even after I shut down my main machine. How can I do this?
Assuming this is on a Linux system, you can run your script in the background using the & operator, combined with nohup so it ignores the hangup signal when your session ends. Something like this:
nohup yourBashScript.sh &
The & tells the shell to run it in the background, and nohup keeps it alive after you close the shell or end your SSH session, so it will keep running until it finishes the job or crashes due to an error.
You can always check whether your script is still running using the ps command. Something like this:
ps -eaf | grep yourBashScript
This will return the process information for your script if it is still running.
I have a bash script that I would like to run continuously on a Google Cloud server. I connected to my VM via SSH in the browser, but after I closed the browser, the script was stopped.
I tried to use Cloud Shell, but if I restart my laptop, the script starts over from the beginning. It doesn't run continuously!
Is it possible to launch my script on Google Cloud, shut down my laptop, and be sure that my script keeps working?
The solution: GNU screen. This awesome little tool lets you run a process after you've SSH'd into your remote server and then detach from it, leaving it running as it would in the foreground (not stopped in the background).
So after we've SSH'd into our GCE VM, we will need to:
1. Install GNU screen:
apt-get update
apt-get upgrade
apt-get install screen
type "screen". this will open up a new screen - kind of similar in look & feel to what "clear" would result in.
run the process (e.g.: ./init-dev.sh to fire up a ChicagoBoss erlang server)
type: Ctrl + A, and then Ctrl + D. This will detach your screen session but leave your processes running!
feel free to close the SSH terminal. whenever you feel like it, ssh back into your GCE VM, and type screen -r to resume your previously detached session.
to kill all detached screens, run:
screen -ls | grep pts | cut -d. -f1 | awk '{print $1}' | xargs kill
You have the following options:
1. Task schedules, which involve cron jobs (see the sketch after this list). Check this sample, via this answer;
2. Using startup scripts.
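As an illustration of option 1, a crontab entry along these lines would relaunch a script at every boot; the script path and log file are assumptions:

# added with: crontab -e
# run the script once at every boot and append its output to a log
@reboot /home/user/myscript.bash >> /home/user/myscript.log 2>&1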
I performed the following test and it worked for me:
I created an instance in GCE, SSH'd into it, and created the following script, myscript.bash:
#!/bin/bash
sleep 15s
echo Hello World > result.txt
and then, ran
$ bash myscript.bash
and immediately closed the browser window holding the SSH session.
I then waited for at least 15 seconds, re-engaged in an SSH connection with the VM in question and ran $ ls and voila:
myscript.bash result.txt
So the script ran even after closing the browser holding the SSH session.
Still, technically, I believe your solution lies with 1. or 2.
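For option 2, something along these lines would attach the script to an existing instance as a startup script, so it runs on every boot of the VM; the instance name is an assumption:

# register myscript.bash as the instance's startup script
gcloud compute instances add-metadata my-instance \
    --metadata-from-file startup-script=myscript.bash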
You can use
nohup yourscript.sh > output_log_file.log 2>&1 &
The trailing & runs it in the background, and 2>&1 sends stderr to the same log file.
I faced a similar issue. I logged into the virtual machine through the gcloud command on my local machine and tried to exit by closing the terminal; that halted the script running in the instance.
Instead, use the exit command (twice) to log out of the cloud console from the local machine's PuTTY session.
Also make sure you have not enabled "Preemptibility" while creating the VM instance.
A preemptible instance is forced to shut down within 24 hours; that is how it cuts the cost by such a huge margin.
I have a NodeJS project and I solved this with pm2.
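For example, something like this, where the entry point app.js and the process name are assumptions:

# start the app under pm2 so it keeps running after the SSH session ends
pm2 start app.js --name my-app
# save the current process list so pm2 can restore it
pm2 save
# generate an init script so pm2 itself comes back after a reboot
pm2 startup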
I have recently created a version control page in my application to manage the deployment process.
(Yeah, I know, GitHub + hooks would be better than rewriting from zero. But we are in Iran and our beloved government has blocked all SSH connections to outside the country. :(( )
There is a merge + reload action on the page. The merge works like the other parts, but the reload part fails without any message. I have added a sudoers entry for the kill command, and the user of the worker process has enough permissions. I even executed the code from the Django shell, and it reloaded the process.
Is there any restriction on receiving signals, such as workers not being able to reload their master?
Here's the related codes:
from subprocess import PIPE, Popen

def command(x):
    # run the command and return whatever it printed to stdout
    return str(Popen(x.split(' '), stdout=PIPE).communicate()[0])

pid = open(PATH + "/logs/gunicorn.pid").readline().strip()
cmd = "sudo kill -HUP %s" % pid
content += command(cmd)
My guess, off the top of my head, is that the restart is not working because the process calling the reload is itself being killed. Maybe try to daemonize a subprocess that exits after calling the reload? Take a look at this post:
spawning process from python
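As a rough shell-level sketch of that idea, the command string passed to Popen could be replaced with something along these lines (the pid-file path is illustrative): setsid puts the reload into its own session so it survives the worker that spawned it, and the short sleep lets the HTTP response go out before the master is signalled.

# detach the reload into a new session, immune to the worker's own death
setsid bash -c 'sleep 1; sudo kill -HUP "$(cat /path/to/logs/gunicorn.pid)"' \
    >/dev/null 2>&1 < /dev/null &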