We have a Flask script, get_logs.py, that uses APScheduler and contains the following job:
scheduler.add_job(id="create_recommendation_entries", trigger='interval', seconds=60*10, func=create_entries)
Someone ran the script, and the logs show that this job is still firing at a 10-minute interval even after the script was terminated.
The process ID is not listed, nor does it show up with grep, and we don't know whether it was executed using nohup or gunicorn.
How do I kill this job based on id="create_recommendation_entries" when I don't know any of its stats (port, PID, etc.)?
Rerunning the script creates a different thread that stops after Ctrl+C, but the previous one keeps running.
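An APScheduler job only exists inside the Python process that created it, so the job id is never visible to the operating system; the only way to stop it is to find and kill that process. A hedged way to do that is to match on the full command line rather than the process name (pgrep/pkill -f do exactly that; get_logs.py comes from the question, and the gunicorn check is just a guess about how it might have been launched):
# list every process whose full command line mentions the script
pgrep -af get_logs.py
# also check for a gunicorn master/workers that may have imported it
pgrep -af gunicorn
# once the right PID(s) are visible, kill them; pkill -f combines both steps
pkill -f get_logs.py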
Here's a problem that is driving me nuts. First off, I am not a Linux expert, so I might just be missing some detail.
I am trying to restart an application (namely rpi-webrtc-streamer, but that shouldn't matter) using a shell script. The reason is that when a configuration change happens I need to update the config files and restart.
The idea is to call a bash script using the system() function and pass in the PID of the current process. The script should then kill the process using the supplied PID and execute it again. In theory this shouldn't be a problem...
What may be complicating things is that the process needs to run with sudo. Not sure if that matters, but I thought I should mention it.
Now this is the script:
#!/bin/bash
echo "restarting streamer..."
echo "killing process with PID $1"
kill $1
# I have tried different intervals, even 10 seconds, doesn't help
sleep 2
echo "running new streamer instance"
echo "path:"
pwd
#printenv
echo "id -u"
# just to verify the script runs with sudo
id -u
./webrtc-streamer --verbose
echo "done"
The problem is that the application fails with the following error:
(direct_socket.cc:77): Failed to listen 0.0.0.0:8888.
... and then it shuts down. Obviously it's not able to open the port. It almost looks as if the previous instance of the app is still holding the port open. I have tried tweaking the number of seconds the script sleeps, but that shouldn't be the problem: first, I think the script only continues once the process has actually been killed, and second, the process shuts down immediately anyway; I can see that from the logs.
If, however, I run the app immediately after the script fails, from the shell that executed the initial app in the first place, it runs without any issues (and is able to open the port), no matter how many seconds the script slept beforehand.
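One way to test whether something really is still holding the port would be to look at the listeners right after the kill; ss and lsof are standard tools here, and 8888 comes from the error message above:
# show whatever is listening on 8888, with the owning PID/program
sudo ss -ltnp | grep :8888
# or, equivalently
sudo lsof -i :8888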
The only other thing I thought of is that the bash script might be running with different environment variables. I tried to print those, but I don't see anything significant.
I also verified that the app does not change the working directory, but again that should not be a problem, since the app actually launches; it just exits after failing to open the port.
I also tried adding sudo before the app execution in the script (which shouldn't be necessary AFAIK). It doesn't make a difference.
Any ideas?
As suggested by jordanm in the comments, I solved the problem by using systemd.
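For reference, a minimal sketch of that kind of setup; the unit name and paths here are assumptions, not taken from the question, so point them at wherever the streamer actually lives:
sudo tee /etc/systemd/system/webrtc-streamer.service > /dev/null <<'EOF'
[Unit]
Description=rpi-webrtc-streamer
After=network.target

[Service]
# assumed install path; adjust to the real location of the binary
WorkingDirectory=/home/pi/rpi-webrtc-streamer
ExecStart=/home/pi/rpi-webrtc-streamer/webrtc-streamer --verbose
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now webrtc-streamer
# a config change then becomes a plain restart instead of kill + relaunch
sudo systemctl restart webrtc-streamer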
I have successfully created a periodic task which updates each minute in a Django app. Everything is running as expected using celery -A proj worker -B.
I am aware that using celery -A proj worker -B to execute the task is not advised; however, it seems to be the only way for the task to run periodically.
I log on to the server using Git Bash, and after starting the worker I would like to exit Git Bash with the celery tasks still being executed periodically.
When I press ctrl+fn+shift it triggers a cold worker exit, which stops execution completely (which is not desirable).
Any help?
If you are on a Linux server, you might want to use a process manager like supervisord or systemd to keep your process running.
On Windows, one option is to run celery as a service or as part of RabbitMQ.
In WSL, it seems that a .bat file will get wsl commands to run as a service.
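For the Linux/supervisord route, a minimal sketch of what the config might look like, assuming a Debian/Ubuntu-style install (configs under /etc/supervisor/conf.d) and placeholder project/virtualenv paths:
sudo tee /etc/supervisor/conf.d/celery.conf > /dev/null <<'EOF'
[program:celery]
; project and virtualenv paths are placeholders; adjust to your setup
directory=/home/ubuntu/proj
command=/home/ubuntu/proj/venv/bin/celery -A proj worker -B --loglevel=info
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/celery.log
EOF
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status celery
With this in place, the worker keeps running after you close Git Bash, and supervisord restarts it if it crashes.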
I have a bash script which will take 5-6 hours to complete. Yesterday I signed up for the AWS 12-month free tier and am running an EC2 (Ubuntu) instance on it. I want that bash script to keep running even after I close my main machine ... how can I do this?
Assuming this is on a Linux system, you can run your script in the background using &, combined with nohup so that it isn't killed when your session ends. Something like this:
nohup yourBashScript.sh &
The & tells the shell to run it in the background, and nohup keeps it from being terminated by the SIGHUP that is sent when you close the shell or your SSH session ends. It will keep running until it finishes the job or crashes due to an error.
You can always check whether your script is still running using the ps command, something like this:
ps -eaf | grep yourBashScript
This will show the process information for your script if it is still running.
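If you also want to keep the script's output and be able to stop it later, a slightly fuller sketch (the log and pid file names are just placeholders):
nohup yourBashScript.sh > yourBashScript.log 2>&1 &
echo $! > yourBashScript.pid      # remember the background PID
tail -f yourBashScript.log        # watch progress; Ctrl+C only stops tail
kill "$(cat yourBashScript.pid)"  # stop the script later, if needed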
I have the exact same problem described in this post, but the answer doesn't help at all. In short, I am using Tivix django-cron, and the cron job is not running on a regular basis.
To illustrate the problem, the following cron job class is intended to send an email every minute once the runcrons command has been run. But in fact it only sends one email and no more. That defeats the purpose of cron... What am I missing?
from django.core.mail import send_mail
from django_cron import CronJobBase, Schedule

class TestCron(CronJobBase):
    schedule = Schedule(run_every_mins=1)
    code = 'test_cron_philip'

    def do(self):
        send_mail('cron test', 'body is test body', 'coach_zhong@163.com',
                  ['admin@dessert.webfactional.com'], fail_silently=False)
Yes, you are missing something ("runcrons" is not a background daemon). From the documentation:
"Now everytime you run the management command python manage.py
runcrons all the crons will run if required. Depending on the
application the management command can be called from the Unix crontab
as often as required. Every 5 minutes usually works for most of my
applications."
That means you have to put the "runcrons" command in your crontab.
Example:
You have some CronJob that does something every 30 minutes.
To get this running, you must edit your crontab (Linux, macOS) or Task Scheduler (Windows) to run "python manage.py runcrons" every, let's say, 1 minute.
If you get this running, your CronJob will be pinged every minute and will run when necessary (every 30 minutes or whatever value you have set).
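For example, a crontab entry that pings runcrons every minute could look like this (edit it with crontab -e; the python and project paths are placeholders for your virtualenv and project):
* * * * * /path/to/venv/bin/python /path/to/project/manage.py runcrons >> /path/to/project/runcrons.log 2>&1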
Hope this helps.
I run the following command from bash to start a Django application, and it keeps running without any problems even if I exit that shell.
python manage.py runfcgi daemonize=true ...
When Jenkins runs the same command, the Django application starts just as it does when run from bash. But why is the application killed when the job ends?
I would guess that Jenkins starts a new shell session for each job, and then closes it when the job is complete. This will terminate any processes started in that session.
If you want a process to persist after closing the session, you can start it with nohup:
nohup python manage.py runfcgi daemonize=true ...
I had a similar problem in the past using fabric - the service would terminate even if I set the daemonize flag to true. I used nohup to work around it.
I found a solution here and it works for me:
https://wiki.jenkins-ci.org/display/JENKINS/ProcessTreeKiller
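In short, the trick from that page is to take the long-running process out of the process tree that Jenkins kills when the build finishes. A sketch of what the "Execute shell" build step could look like (BUILD_ID=dontKillMe is the override described on that page for freestyle jobs; Pipeline jobs use JENKINS_NODE_COOKIE=dontKillMe instead):
# detach from the terminal and hide the process from the ProcessTreeKiller
BUILD_ID=dontKillMe nohup python manage.py runfcgi daemonize=true ... > runfcgi.log 2>&1 &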