I am trying to run my server, but I am getting this error: Error: That port is already in use. I looked it up on Stack Overflow and found that I should use sudo lsof -t -i tcp:8000 | xargs kill -9, but it's asking me for a password which I don't know.
What should I do? How do I reset my password? Is there another way to kill the process using the port?
Thanks!
You can try running the server on a different, unused port:
python manage.py runserver 127.0.0.1:<empty_port>
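For example, assuming port 8001 is free on your machine:
python manage.py runserver 127.0.0.1:8001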
I need to kill the Django development server from a shell script on Linux. How can this be done, since the shell script can't hold the 'CONTROL' key? Is there syntax that performs the same function?
I tried:
$CONTROL-C
$^C
$SIGINT
Use kill to kill the process that is running your Django server:
ps -ax shows you all the running processes; ps -ax | grep runserver shows just the ones with runserver in them.
kill xxxx kills the process with PID xxxx.
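Putting those together into something a script can run unattended (a sketch; the [r] in the grep pattern stops grep from matching its own process):
kill $(ps -ax | grep '[r]unserver' | awk '{print $1}')
Or, more simply, match on the command line with pkill:
pkill -f runserver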
I have the following on Windows 10:
Ember CLI: 2.4.3
Node: 6.11.0
npm: 5.0.3
I am running ember server from an admin command prompt and get the error below:
Livereload failed on http://localhost:49152. It is either in use or you do not have permission.
Then I tried ember serve --port 8080 --live-reload-port 35735 and it hangs. Please tell me how to correct this.
This is a shot in the dark, but it could be that you already have another process running on that port; given the error, that sounds likely. Run the following command:
netstat -a -o -n
Check that command's output for any processes running on the ports in question.
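For example, to narrow the output to the two ports in question (findstr treats the space-separated numbers as alternatives):
netstat -a -o -n | findstr "49152 35735"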
You can try killing any process running on those ports (obviously, some caution is in order here):
taskkill /F /PID [pid from previous command here]
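For instance, if the PID column of the netstat output showed 1234 (a placeholder PID):
taskkill /F /PID 1234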
And in case you're curious, I found this on a Java developer's blog: http://therealdanvega.com/blog/2015/04/16/windows-kill-process-by-port-number
I am using django-supervisor and my supervisor configuration is here.
Running python manage.py supervisor works great on my 3 Ubuntu machines, but when I try to run it on my Mac (OS X 10.10), I have 2 problems with Django:
If I add print statements in my code, I can't see them in the console
If I stop the supervisor and start it again, it says that my Django port is already in use,
and I have to run lsof -i :8000 | awk '{print $2}' | tail -n +2 | xargs kill every time in order to run Django inside the supervisor.
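For what it's worth, lsof's -t flag prints bare PIDs, so this shorter form should be equivalent:
lsof -t -i :8000 | xargs kill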
These 2 problems don't happen on Ubuntu machines.
My Mac is used as a dev environment only.
Any ideas how I can fix them?
While issuing a new build to update code in workers, how do I restart Celery workers gracefully?
Edit:
What I intend to do is something like this:
A worker is running, probably uploading a 100 MB file to S3.
A new build comes.
The worker code has changed.
The build script fires a signal to the worker(s).
It starts new workers with the new code.
The worker(s) that got the signal exit after finishing the existing job.
According to https://docs.celeryq.dev/en/stable/userguide/workers.html#restarting-the-worker you can restart a worker by sending it a HUP signal:
ps auxww | grep celeryd | grep -v "grep" | awk '{print $2}' | xargs kill -HUP
If you started the worker with celery multi, you can restart it the same way:
celery multi start 1 -A proj -l info -c4 --pidfile=/var/run/celery/%n.pid
celery multi restart 1 --pidfile=/var/run/celery/%n.pid
If you're going the kill route, pgrep to the rescue:
kill -9 `pgrep -f celeryd`
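Equivalently, pkill can send the signal itself, without the backticks:
pkill -9 -f celeryd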
Mind you, this is not a long-running task and I don't care if it terminates brutally. I'm just reloading new code during dev. I'd go the restart-service route if it were more sensitive.
You can do:
celery multi restart w1 -A your_project -l info # restart workers
You should look at Celery's autoreloading (note that the experimental --autoreload option was removed in Celery 4.0, so this only applies to older versions).
What should happen to long-running tasks? I like it this way: long-running tasks should do their job. Don't interrupt them; only new tasks should get the new code.
But this is not possible at the moment: https://groups.google.com/d/msg/celery-users/uTalKMszT2Q/-MHleIY7WaIJ
I have repeatedly tested the -HUP solution using an automated script, but find that about 5% of the time, the worker stops picking up new jobs after being restarted.
A more reliable solution is:
stop <celery_service>
start <celery_service>
which I have used hundreds of times now without any issues.
From within Python, you can run:
import subprocess

service_name = 'celery_service'
for command in ['stop', 'start']:
    # runs "stop celery_service", then "start celery_service"
    subprocess.check_call(command + ' ' + service_name, shell=True)
If you're using Docker/docker-compose and putting Celery into a separate container from the Django container, you can use
docker-compose kill -s HUP celery
where celery is the service name. The worker is restarted gracefully, and the ongoing task is not brutally stopped.
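If you're not using Compose, plain docker kill takes the same signal option (assuming the container is also named celery):
docker kill --signal=HUP celery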
I tried pkill, kill, celery multi stop, celery multi restart, and docker-compose restart. None of them worked: either the container stopped abruptly or the code was not reloaded.
I just want to reload my code on the prod server manually with a one-liner. I don't want to play with daemonization.
Might be late to the party. I use:
sudo systemctl stop celery
sudo systemctl start celery
sudo systemctl status celery
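systemctl restart collapses the stop/start pair into a single command:
sudo systemctl restart celery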