How to free up the used port in virtual env - flask

I am using a virtualenv to run a Flask app, and often my localhost port stops working and I have to do export PORT=500* . That is, after running foreman start a few times on a specific port, the port stays engaged; when I try to start again it keeps retrying to connect and then fails.
I have to change the port every time this happens. Is there a command by which I can free the port?

This often happens because foreman doesn't shut down properly. Check whether there are processes still running in the background that might be holding the port. For example, if you use foreman to launch a Python app, try:
ps aux | grep python
to see all your running Python processes. You can kill every running Python process at once with the following command:
ps aux | grep python | tr -s ' ' '\t' | awk '{system("kill " $2)}'
but be careful, as this will kill every Python process you currently have running.
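If you only need to free one specific port rather than kill every Python process, a more targeted sketch (assuming lsof or fuser is installed and the port is 5000) is:
lsof -ti tcp:5000 | xargs kill
or, equivalently:
fuser -k 5000/tcp
Both find whatever is listening on port 5000 and kill just that process.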

Related

Shell Script stops after connecting to external server

I am in the process of trying to automate deployment to an AWS server as a cool project for my coding course. I'm using a shell script to automate different processes, but it stops when connecting to the AWS EC2 Ubuntu server. Once connected to the server, it will not run any other shell command until I close the connection. Is there any way to have it continue sending commands while connected?
read -p "Enter Key Name: " KEYNAME
read -p "Enter Server IP With Dashes: " IPWITHD
chmod 400 "$KEYNAME.pem"
ssh -i "$KEYNAME.pem" ubuntu@ec2-$IPWITHD.us-east-2.compute.amazonaws.com
# ANYTHING HERE AND BELOW WILL NOT RUN UNTIL SERVER IS DISCONNECTED
A couple of basic points:
A shell script is a sequential set of commands for the shell to execute. It runs a program, waits for it to exit, and then runs the next one.
The ssh program connects to the server and tells it what to do. Once it exits, you are no longer connected to the server.
The instructions that you put in after ssh will only run when ssh exits. Those commands will then run on your local machine instead of the server you are sshed into.
So what you want to do instead is to run ssh and tell it to run a set of steps on the server, and then exit.
Look at man ssh. It says:
ssh destination [command]
If a command is specified, it is executed on the remote host instead of a login shell
So, to run a command like echo hi, you use ssh like this:
ssh -i "$KEYNAME.pem" ubuntu@ec2-$IPWITHD.us-east-2.compute.amazonaws.com "echo hi"
Or, for longer commands, use a bash heredoc:
ssh -i "$KEYNAME.pem" ubuntu@ec2-$IPWITHD.us-east-2.compute.amazonaws.com <<EOF
echo "this will execute on the server"
echo "so will this"
cat /etc/os-release
EOF
Or, put all those commands in a separate script and pipe it to ssh:
cat commands-to-execute-remotely.sh | ssh -i "$KEYNAME.pem" ubuntu@ec2-$IPWITHD.us-east-2.compute.amazonaws.com
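Putting this together, the original script could be reworked along these lines (a sketch only; the deploy directory, repository update step, and service name are made-up placeholders):
read -p "Enter Key Name: " KEYNAME
read -p "Enter Server IP With Dashes: " IPWITHD
chmod 400 "$KEYNAME.pem"
ssh -i "$KEYNAME.pem" ubuntu@ec2-$IPWITHD.us-east-2.compute.amazonaws.com <<'EOF'
# everything between the EOF markers runs on the remote server
cd /var/www/myapp            # hypothetical deploy directory
git pull                     # hypothetical: update the code
sudo systemctl restart myapp # hypothetical service name
EOF
echo "Deployment finished"   # runs locally, after ssh exits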
Definitely read What is the cleanest way to ssh and run multiple commands in Bash? and its answers.

Thousands of extraneous gunicorn workers

I'm using gunicorn 19.7.1 appserver with nginx reverse proxy for a Django project (Ubuntu 14.04 machine).
ps aux | grep gunicorn | grep -v grep | wc -l yields 3043 at the moment.
Whereas in /etc/init/gunicorn.conf, I've always had -w 33. Yet these extra workers persist even if I do sudo service gunicorn stop and sudo service gunicorn start.
How do I kill the extraneous workers?
How did this happen?
The worker count of 33 has always been properly configured on my busy production system.
However, a few hours ago I was experimenting with Python's multiprocessing on the server and things went south. Gunicorn workers ate up all the memory and took down the resident Redis instances as well.
I reverted the change and have managed to get everything back online, except that the memory hasn't been released and I'm left with these leftover gunicorn workers. What's going on?
Yet these extra workers persist even if I do sudo service gunicorn stop and sudo service gunicorn start.
service only manages processes that it started, so if Gunicorn workers were spawned outside of the service framework, they will keep running even after sudo service gunicorn stop.
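To see which workers belong to which master, a process-tree listing can help (a quick check, assuming a procps-style ps):
ps -eo pid,ppid,start,cmd | grep [g]unicorn
Workers whose parent PID is not the service-managed master are the extraneous ones.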
How do I kill the extraneous workers?
The fast way:
Run this command to kill every gunicorn process by name, then restart Gunicorn:
$ pkill gunicorn
$ sudo service gunicorn start
The better way:
Identify your "desired" Gunicorn workers by finding the parent:
$ sudo service gunicorn status
Note the parent process ID. Let's say it's 123.
Save a list of all the "desired" workers' PIDs:
$ echo 123 > desired_workers
$ pgrep -P 123 >> desired_workers
Save a list of all workers' PIDs:
$ pgrep gunicorn > all_workers
Terminate the "undesired" workers:
$ cat desired_workers all_workers | sort | uniq -u | xargs kill
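Wrapped up as a small helper, those steps might look roughly like this (a sketch; the script name is made up and the master PID is assumed to be passed as the first argument):
#!/bin/bash
# kill-stray-gunicorn.sh <master-pid>  -- hypothetical helper script
MASTER="$1"
{ echo "$MASTER"; pgrep -P "$MASTER"; } | sort > desired_workers
pgrep gunicorn | sort > all_workers
# comm -13 prints PIDs present only in all_workers, i.e. the strays
comm -13 desired_workers all_workers | xargs -r kill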

django/gunicorn app restart

I have two different projects running on the same server. Both are Django projects with Gunicorn as the WSGI server, with Apache sitting on top. Currently, a Jenkins job updates the source code from the repo and restarts (kills and starts) Gunicorn. This worked fine while the server was serving only one site.
I killed Gunicorn as follows:
#!/bin/bash
ps -ef | grep gunicorn | grep -v grep | awk '{print $2}' | xargs kill -9
and then restarted it. However, this approach will not work with two sites, since it kills all Gunicorn processes; whenever I run the build for one site, only that site's Gunicorn gets respawned.
I looked around and found that Supervisor is one utility I could use to prevent this and restart Gunicorn seamlessly.
Do you have other suggestions or best practices that I should follow?
Thanks
To only grab your project's gunicorn and restart it, you can use the following:
ps aux | grep gunicorn | grep yourappname | awk '{ print $2 }' | xargs kill -HUP
Other gunicorn processes will not be affected.
Gunicorn + Supervisor is a pretty standard stack: you could set up each site as a separate Supervisor program and, instead of having Jenkins restart Supervisor itself, use Supervisor's command for restarting just that one program, and you're done.
Supervisor is also great if your site crashes and Gunicorn needs to be started again.
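For reference, a Supervisor program entry for one of the sites might look roughly like this (the program name, paths, virtualenv location, and port are placeholders, not taken from the question):
[program:site1]
command=/home/deploy/venvs/site1/bin/gunicorn site1.wsgi:application --bind 127.0.0.1:8001 --workers 3
directory=/srv/site1
user=deploy
autostart=true
autorestart=true
Jenkins would then run something like supervisorctl restart site1, leaving the other site untouched.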

How to kill a process running in a port in a remote aws server

I am trying to kill a process running at
http://54.218.73.244:7002/
I have used the command
fuser -k 7002/tcp
but it is not working; the process continues to run. I am using Express.js on the server to run the server script.
How can I resolve this?
ssh 54.218.73.244 fuser -k 7002/tcp
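If the remote process is owned by another user (for example, one started by a process manager), the command may need sudo, and you may have to pass your key and user explicitly. A hedged variant, where the key file name is made up:
ssh -i mykey.pem ubuntu@54.218.73.244 "sudo fuser -k 7002/tcp"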

How to 'clear' the port when restarting django runserver

Often, when restarting Django runserver, if I use the same port number, I get a 'port is already in use' message. Subsequently, I need to increment the port number each time to avoid this.
This isn't the case on all servers, however, so I'm wondering how I can keep reusing the same port on the system I'm currently working on.
BTW, the platform is Ubuntu 8.10
I found this information (originally from Kristinn Örn Sigurðsson) to solve my problem:
To kill it with -9 you will have to list all running manage.py processes, for instance:
ps aux | grep -i manage
You'll get an output similar to this if you've started on many ports:
14770 8264 0.0 1.9 546948 40904 ? S Sep19 0:00 /usr/local/bin/python manage.py runserver 0.0.0.0:8006
14770 15215 0.0 2.7 536708 56420 ? S Sep13 0:00 /usr/local/bin/python manage.py runserver 0.0.0.0:8001
14770 30144 0.0 2.1 612488 44912 ? S Sep18 0:00 /usr/local/bin/python manage.py runserver 0.0.0.0:8000
14770 30282 0.0 1.9 678024 40104 ? S Sep18 0:00 /usr/local/bin/python manage.py runserver 0.0.0.0:8002
14770 30592 0.0 2.1 678024 45008 ? S Sep18 0:00 /usr/local/bin/python manage.py runserver 0.0.0.0:8003
14770 30743 0.0 2.1 678024 45044 ? S Sep18 0:00 /usr/local/bin/python manage.py runserver 0.0.0.0:8004
Then you'll have to select the pid (which is the second number on the left) for the right manage.py process (python manage.py runserver... etc) and do:
kill -9 pid
For the above example, if you wanted to free up port 8000, you'd do:
kill -9 30144
You're getting that message because the server is already running (possibly in the background). Make sure to stop it: bring it to the foreground and press Ctrl+C.
If the ps aux command (as per Meilo's answer) doesn't list the process you wanted to kill, but netstat -np | grep 8004 shows the port is still active, try this command (it worked on Ubuntu):
sudo fuser -k 8004/tcp
where 8004 is the port number that you want to free.
This should kill all the processes associated with port 8004.
No, he's not an idiot, guys. The same thing happens to me. Apparently it's a bug with the Python UUID process, which continues running long after the Django server is shut down and ties up the port.
fuser -k 8000/tcp
Run it in a terminal; it works on Ubuntu. 8000 is the port.
This error is due to the server already running.
Background
I am answering at a more general level, not specific to Django as in the original question, so that those who land here from Google can fix the problem easily.
Solution
When you need to clear a port, all you need to do is these two steps
In the terminal run fg
Press Control-C (if on a mac)
Explanation
fg brings the process to the foreground. Then Control-C stops the server.
Example
I was actually having this issue on port 8000 when running an Angular app; I was getting an error when I ran npm start.
So I ran fg, then I stopped the server with Control-C
Then I was able to successfully run the server
Type fg in the terminal to bring up the background task to the foreground.
Press Ctrl+C to close/stop the running server.
I use pkill -If 'manage.py' (-I means interactive, -f matches more than just the process name). See How to kill all processes with a given partial name? for more info on pkill.
sudo lsof -t -i tcp:8000 | xargs kill -9
If you want to free port 8000, just copy the command and paste it into your terminal; it will ask for your sudo password, and then you are good to go.
If the port number that you are trying is 8001, then use this command
sudo fuser -k 8001/tcp
You do not want to simply increment the port number when restarting a Django server. This will result in having multiple instances of the Django server running simultaneously. A better solution is to kill the current instance and start a new instance.
To do this, you have multiple options. The easiest is
Python2: $ killall -9 python
Python3: $ killall -9 python3
If for some reason, this doesn't work, you can do
$ kill <pid> where <pid> is the process id found from a simple $ ps aux | grep python command.
netstat -tulpn | grep 8000 | awk '{print $7}' | cut -d/ -f 1 | xargs kill
Repost from https://stackoverflow.com/a/27138521/1467342:
You can use this script in place of ./manage.py runserver. I put it in scripts/runserver.sh.
#!/bin/bash
pid=$(ps aux | grep "./manage.py runserver" | grep -v grep | head -1 | xargs | cut -f2 -d" ")
if [[ -n "$pid" ]]; then
    kill "$pid"
fi
fuser -k 8000/tcp
./manage.py runserver
Like mipadi said, you should be terminating the server (ctrl+c) and returning to the command prompt before calling manage.py runserver again.
The only thing that could be disrupting this would be if you've somehow managed to make runserver act as a daemon. If this is the case, I'm guessing you're using the Django test server as the actual web server, which you should NOT do. The Django test server is single threaded, slow and fragile, suitable only for local development.
On Leopard, I bring up Activity Monitor and kill the python process. Solved.
This happened so often that I wrote an alias to kill any process with python in the name (be careful if you have other such processes). Now I just run (not on Ubuntu):
kill $(ps | grep "python" | awk '{print $1}')
You can even add python manage.py runserver ... to the same alias so you can restart with two keystrokes.
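As a sketch, such an alias (the name djrestart is made up, and pgrep -f is used here in place of the ps/grep pipeline) could look like:
alias djrestart='kill $(pgrep -f "manage.py runserver"); python manage.py runserver'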
You must have been pressing Ctrl+Z. Instead, press Ctrl+C; that will kill the server session. Cheers!
Add the following imports to manage.py:
import os
import subprocess
import re
Now add the following Python code inside the if __name__ == "__main__": block:
    ports = ['8000']
    # Ask netstat for all listening sockets and the PIDs that own them
    popen = subprocess.Popen(['netstat', '-lpn'],
                             shell=False,
                             stdout=subprocess.PIPE)
    (data, err) = popen.communicate()
    # Build a regex that matches a listening TCP socket on any of the
    # ports and captures the owning PID
    pattern = "^tcp.*((?:{0})).* (?P<pid>[0-9]*)/.*$"
    pattern = pattern.format(')|(?:'.join(ports))
    prog = re.compile(pattern)
    # decode() so the output is a str under Python 3 as well
    for line in data.decode().split('\n'):
        match = re.match(prog, line)
        if match:
            pid = match.group('pid')
            subprocess.Popen(['kill', '-9', pid])
This will first find the process ID of whatever is listening on port 8000, kill it, and then start your project as usual, so you no longer need to kill the PID manually each time.
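Note that netstat -lpn only reports the owning PID for processes you own (or when run as root), so this assumes the old runserver instance was started by the same user.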
netstat -ntlp
See my complete answer here. https://stackoverflow.com/a/34824239/5215825