I need to kill the Django development server from a shell script on Linux. How can this be done, since the shell script can't hold the 'CONTROL' key? Is there syntax that performs the same function?
I tried:
$CONTROL-C
$^C
$SIGINT
Use kill to kill the process that is running your Django server:
ps -ax shows all running processes; you can use ps -ax | grep runserver to show only the processes with runserver in their command line
kill xxxx kills the process with PID xxxx
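Putting that together, here is a minimal sketch of a script that does the same thing non-interactively (the manage.py runserver match pattern is an assumption about how the server was started):
#!/bin/sh
# Find the PID(s) of the Django dev server and send SIGINT,
# the same signal that CONTROL-C sends from a terminal.
# grep -v grep keeps the grep process itself out of the match.
PID=$(ps -ax | grep 'manage.py runserver' | grep -v grep | awk '{print $1}')
if [ -n "$PID" ]; then
    kill -INT $PID
fi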
I'm using Qt 5.6. I have code which will restart the application, but I also want to limit the number of instances.
The code that limits the instances works, and so does the code that restarts the application. With the limiting code enabled, however, the application will not restart: it closes down, but I'm guessing the restart is blocked because the PID of the original instance hasn't cleared by the time it tries to launch the new one.
The question is: how do I close the application while limiting the total number of instances to 1?
If this hasn't been solved by tomorrow I will post the code for restarting and limiting instances; I don't have it with me at the moment.
Code to restart the application:
qApp->quit();
QProcess::startDetached(qApp->arguments()[0], qApp->arguments());
These are just hints for the watchdog script:
1- You need to use QProcess::startDetached to run your script before quitting your app. This allows the script process to live on after your app exits.
QProcess::startDetached( "bash", QStringList() << "-c" << terminalCommand );
2- You need to pass the current app's PID to your watchdog script via terminalCommand.
To get the current app's PID in Qt, use
qApp->applicationPid();
3- In your watchdog script, have an infinite loop that checks for the PID by doing
ps aux | grep -v 'grep' | grep $PID
Once the process with that PID dies, start your app again from the watchdog script.
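Putting the hints together, a minimal sketch of the watchdog script itself (the script name and argument order are assumptions; it expects the PID and the path to the app binary):
#!/bin/bash
# watchdog.sh <pid> <path-to-app>
PID=$1
APP=$2
# Loop until the process with that PID dies
# (ps -p is a more robust form of the ps aux | grep check above).
while ps -p "$PID" > /dev/null 2>&1; do
    sleep 1
done
# The old instance has exited, so the single-instance check will pass;
# start the app again.
"$APP" &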
I have never run into this before because I can always just run the dev server, open a new tab in the terminal, and curl from there. I can't do this now because I am running the Django development server from a Docker container, so if I open a new tab, I will be in the local shell and not in the Docker container.
How can I leave the development server running and still be able to curl or run other commands?
When I run the development server I'm left with this message:
Django version 1.10.3, using settings 'test.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
and so I am unable to type any commands.
You can use & to run the server as a background job in the current shell:
$ python manage.py runserver &
[1] <pid>
$
You can use the fg command to get back direct control over the runserver process, then you can stop it as usual using Ctrl+C.
To send a foreground process to the background, you can pause it using Ctrl+Z and then run the bg command. You can see a list of running background jobs in the current shell using the jobs command.
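For example, a typical session might look like this (the job and process numbers are illustrative):
$ python manage.py runserver &
[1] 1234
$ curl http://127.0.0.1:8000/
$ jobs
[1]+  Running    python manage.py runserver &
$ fg
python manage.py runserver
^C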
The difference with screen is that this will run the server in the current shell. If you exit the shell, the server will stop as well, while screen uses a separate process that will continue after you exit the current shell.
In a development environment you can also do the following.
Let the server run in one terminal window.
Open a new terminal window/tab and run
docker exec -it <Container ID/Name> /bin/bash
It will give you interactive access to your container, i.e. you can execute any command in your container rather than in your local shell.
Type exit to leave the container shell and return to your local shell.
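For example, assuming the container is named web, you can open a shell in it and curl the dev server from the inside:
$ docker exec -it web /bin/bash
root@container:/# curl http://127.0.0.1:8000/
root@container:/# exit
$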
I am using django-supervisor and my supervisor configuration is here.
Running python manage.py supervisor works great on my 3 Ubuntu machines, but when I try to run it on my Mac (10.10), I have 2 problems with Django:
If I add print statements in the code, I can't see them in the console
If I stop the supervisor and start it again, it says that my Django port is already in use
and I have to run lsof -i :8000 | awk '{print $2}' | tail -n +2 | xargs kill every time in order to run Django inside the supervisor.
These 2 problems don't happen on Ubuntu machines.
My Mac is used for dev environment only.
Any ideas how can I fix them?
While issuing a new build to update the code in the workers, how do I restart the Celery workers gracefully?
Edit:
What I intend to do is something like this (a rough sketch follows the list).
Worker is running, probably uploading a 100 MB file to S3
A new build comes
Worker code has changes
Build script fires signal to the Worker(s)
Starts new workers with the new code
Worker(s) that got the signal exit after finishing their current job.
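To make that flow concrete, a rough sketch of such a build step (the proj app name and the celery worker match pattern are assumptions; sending SIGTERM asks a Celery worker to finish its current tasks and then exit):
# Record the PIDs of the workers running the old code.
OLD_PIDS=$(ps auxww | grep 'celery worker' | grep -v grep | awk '{print $2}')
# Start new workers on the new code.
celery -A proj worker -l info --detach
# Ask the old workers to warm-shut-down: finish current jobs, then exit.
[ -n "$OLD_PIDS" ] && kill -TERM $OLD_PIDS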
According to https://docs.celeryq.dev/en/stable/userguide/workers.html#restarting-the-worker you can restart a worker by sending a HUP signal
ps auxww | grep celeryd | grep -v "grep" | awk '{print $2}' | xargs kill -HUP
celery multi start 1 -A proj -l info -c4 --pidfile=/var/run/celery/%n.pid
celery multi restart 1 --pidfile=/var/run/celery/%n.pid
http://docs.celeryproject.org/en/latest/userguide/workers.html#restarting-the-worker
If you're going the kill route, pgrep to the rescue:
kill -9 `pgrep -f celeryd`
Mind you, this is not a long-running task and I don't care if it terminates brutally. Just reloading new code during dev. I'd go the restart service route if it was more sensitive.
You can do:
celery multi restart w1 -A your_project -l info # restart workers
You should look at Celery's autoreloading
What should happen to long-running tasks? I like it this way: long-running tasks should do their job. Don't interrupt them; only new tasks should get the new code.
But this is not possible at the moment: https://groups.google.com/d/msg/celery-users/uTalKMszT2Q/-MHleIY7WaIJ
I have repeatedly tested the -HUP solution using an automated script, but find that about 5% of the time, the worker stops picking up new jobs after being restarted.
A more reliable solution is:
stop <celery_service>
start <celery_service>
which I have used hundreds of times now without any issues.
From within Python, you can run:
import subprocess
service_name = 'celery_service'
for command in ['stop', 'start']:
    subprocess.check_call(command + ' ' + service_name, shell=True)
If you're using docker/docker-compose and putting celery into a separate container from the Django container, you can use
docker-compose kill -s HUP celery
where celery is the name of the service/container. The worker is gracefully restarted and ongoing tasks are not brutally stopped.
I tried pkill, kill, celery multi stop, celery multi restart, and docker-compose restart. None of them worked: either the container is stopped abruptly or the code is not reloaded.
I just want to reload my code on the prod server manually with a one-liner. I don't want to play with daemonization.
Might be late to the party. I use:
sudo systemctl stop celery
sudo systemctl start celery
sudo systemctl status celery
I am trying out gunicorn, and I installed it inside a virtualenv with a Django site. I got gunicorn running with this command:
gunicorn_django -b 127.0.0.1:9000
Which is all well and good. I haven't setup a bash script or hooked it to upstart (I am on Ubuntu) yet, because I am testing it out.
In the meantime, my connection to the server broke, so I lost the console, and after reconnecting I can no longer press CTRL+C to stop the server.
How do I stop gunicorn_django, when it is already running?
The general solution to problems like this is to run ps ax | grep gunicorn to find the relevant process, then run kill xxxx, where xxxx is the number in the first column.
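For example (the PID is illustrative):
$ ps ax | grep gunicorn
 4732 ?  S  0:01 gunicorn_django -b 127.0.0.1:9000
$ kill 4732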
Just found this also - pkill - which will kill all processes matching the search text:
$ pkill gunicorn
No idea how well supported it is, but I can confirm that it works on Ubuntu 12.04
(from http://www.howtogeek.com/howto/linux/kill-linux-processes-easier-with-pkill/)
A faster way:
> kill -9 `ps aux | grep gunicorn | grep -v grep | awk '{print $2}'`
This was a bug that has just been fixed here.