When I run docker-compose up in my Docker project it fails with the following message:
Error starting userland proxy: listen tcp 0.0.0.0:3000: bind: address already in use
netstat -pna | grep 3000
shows this:
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN -
I've already tried docker-compose down, but it doesn't help.
In your case, some other process was using the port, and as indicated in the comments, sudo netstat -pna | grep 3000 helped you solve the problem.
In other cases (I have run into this many times myself) it is usually the same container already running as another instance. Here docker ps was very helpful, as I would often leave the same containers running in other directories and then try to run them again elsewhere, where the same container names and ports were used.
How docker ps helped me:
docker rm -f $(docker ps -aq) is a short command which I use to remove all containers.
This helped me:
docker-compose down # Stop container on current dir if there is a docker-compose.yml
docker rm -fv $(docker ps -aq) # Remove all containers
sudo lsof -i -P -n | grep <port number> # List who's using the port
and then:
kill -9 <process id> (macOS) or sudo kill <process id> (Linux).
Source: comment by user Rub21.
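If this happens often, the steps above can be wrapped into a small helper script. This is only a sketch built from the commands listed above; the port argument and the decision to remove all containers are assumptions to adapt.
#!/bin/bash
# free_port.sh - sketch built from the steps above; adapt before using.
# Usage: ./free_port.sh 3000
PORT="${1:?usage: $0 <port>}"

docker-compose down                                          # stop the compose project in the current dir, if any
[ -n "$(docker ps -aq)" ] && docker rm -fv $(docker ps -aq)  # CAUTION: removes ALL containers

PID=$(sudo lsof -t -i tcp:"$PORT")                           # who is still listening on the port?
if [ -n "$PID" ]; then
  echo "Port $PORT is used by PID(s) $PID; killing."
  sudo kill $PID                                             # unquoted on purpose: lsof may return several PIDs
else
  echo "Nothing is listening on port $PORT."
fi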
I had the same problem. I fixed this by stopping the Apache2 service on my host.
You can kill the process listening on that port easily with the command below:
kill -9 $(lsof -t -i tcp:<port#>)
e.g. for port 3000 (as in the question):
kill -9 $(lsof -t -i tcp:3000)
or for Ubuntu:
sudo kill -9 `sudo lsof -t -i:8000`
Man page for lsof: https://man7.org/linux/man-pages/man8/lsof.8.html
-9 sends SIGKILL, a hard kill that gives the process no chance to shut down cleanly.
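If you'd rather not jump straight to SIGKILL, a gentler sequence is to send SIGTERM first and only force-kill if the process survives. A minimal sketch (port 3000 is just an assumed example):
PORT=3000                                    # assumed port; change as needed
PID=$(lsof -t -i tcp:"$PORT")
if [ -n "$PID" ]; then
  kill $PID                                  # SIGTERM: lets the process shut down cleanly
  sleep 2
  kill -0 $PID 2>/dev/null && kill -9 $PID   # still alive? then SIGKILL
fi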
(Not directly related, but might be useful if you're facing the port 5000 mystery): the culprit process comes with macOS Monterey.
Port 5000 is commonly used to serve local development servers. After updating to the latest macOS release, Docker was unable to bind to port 5000 because it was already in use. (You may find a message along the lines of Port 5000 already in use.)
By running lsof -i :5000, I found out the process using the port was named ControlCenter, which is a native macOS application. If this is happening to you, even if you brute-force kill the application, it will restart itself. On my laptop, lsof -i :5000 shows that Control Center is running as process id 433. I could kill process 433, but macOS keeps restarting it.
The process running on this port turns out to be an AirPlay server. You can deactivate it in
System Preferences › Sharing by unchecking AirPlay Receiver to release port 5000.
I had the same problem:
docker-compose down --rmi all (in the same directory where you run docker-compose up)
helped.
UPD: CAUTION - this will also delete the local docker images you've pulled (from comment)
For Linux/Unix:
Search for the process using the port with the following command:
netstat -nlp | grep 8888
It'll show the process running on this port; then kill that process using its PID (look for the PID in the output row).
kill PID
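If you want to script that, the PID can be pulled out of the netstat output directly. A rough sketch (assumes the port 8888 from the example and GNU netstat's "PID/Program name" column):
PORT=8888
PID=$(sudo netstat -nlp | grep ":$PORT " | awk '{print $7}' | cut -d/ -f1 | head -n 1)
[ -n "$PID" ] && sudo kill "$PID"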
In some cases it is important to do more in-depth debugging before stopping a container or killing a process.
Consider following the checklist below (a consolidated sketch follows it):
1) Check your current docker compose environment
Run docker-compose ps. If the port is in use by another container, stop it with docker-compose stop <service-name-in-compose-file> or remove it by replacing stop with rm.
2) Check the containers running outside your current workspace
Run docker ps to see a list of all containers running on your host.
If you find the port is in use by another container, you can stop it with docker stop <container-id>.
(*) Because you're not within the scope of the originating compose environment, it is good practice to first use docker inspect to gather more information about the container you're about to stop.
3) Check if the port is used by other processes running on the host
For example, if the port is 6379, run:
$ sudo netstat -ltnp | grep ':6379'
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 915/redis-server 12
tcp6 0 0 ::1:6379 :::* LISTEN 915/redis-server 12
(*) You can also use the lsof command which is mainly used to retrieve information about files that are opened by various processes (I suggest running netstat before that).
So, in the case of the output above, the PID is 915. Now you can run:
$ ps j 915
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
1 915 915 915 ? -1 Ssl 123 0:11 /usr/bin/redis-server 127.0.0.1:6379
And see the ID of the parent process (PPID) and the execution command.
You can also run $ pstree -s <PID> to get a visual display of the process and its related processes.
In our case we can see that the process is probably a daemon (its PPID is 1). In that case consider running: A) $ cat /proc/<PID>/status to get more in-depth information about the process, like the number of threads it has spawned, its capabilities, etc.
B) $ systemctl status <PID> to see the systemd unit that created the process. If the service is not critical, you can stop and disable it.
4) Restart Docker service
Run: sudo service docker restart.
5) You've reached this point and...
Only if it's not placing your system at risk, consider restarting the server.
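For reference, here is a consolidated sketch of steps 1-4 above; the port 6379 is just the example used earlier, and the commands are the same ones listed in the checklist.
#!/bin/bash
# Sketch of the checklist above - adapt the port before using.
PORT=6379

# 1) Current docker compose environment
docker-compose ps

# 2) Containers running outside the current workspace
docker ps | grep ":$PORT->"

# 3) Other host processes using the port
sudo netstat -ltnp | grep ":$PORT"

# 4) If nothing above explains it, restart the Docker service
# sudo service docker restart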
In my case it was
Error starting userland proxy: listen tcp 0.0.0.0:9000: bind: address already in use
All I needed was to turn off debug listening in PhpStorm.
Most probably this is because you are already running a web server on your host OS, so it conflicts with the web server that Docker is attempting to start.
So try this one-liner before trying anything else:
sudo service apache2 stop; sudo service nginx stop; sudo nginx -s stop;
I had Apache running on my Ubuntu machine. I used this command to kill it!
sudo /etc/init.d/apache2 stop
I was getting the error below when I was trying to launch a new container:
listen tcp 0.0.0.0:8080: bind: address already in use.
To check which process is running on port 8080, run the command below:
netstat -tulnp | grep 8080
I got the output below:
[root@ip-112-x6x-2x-xxx.xxxxx.compute.internal (aws_main) ~]# netstat -tulnp | grep 8080
tcp        0      0 0.0.0.0:8080        0.0.0.0:*        LISTEN        12749/java
Run:
kill -9 12749
Then try to relaunch the container; it should work.
If the Redis server is started as a service, it will restart itself when you use kill -9 <process_id> or sudo kill -9 `sudo lsof -t -i:<port_number>`. In that case you will need to stop the Redis service using the following command.
sudo service redis-server stop
I upgraded my docker this afternoon and ran into the same problem. I tried restarting docker but no luck.
Finally, I had to restart my computer and it worked. Definitely a bug.
Check docker-compose.yml, it might be the case that the port is specified twice.
version: '3'
services:
  registry:
    image: mysql:5.7
    ports:
      - "3306:3306"             # <--- remove either this line or the next
      - "127.0.0.1:3306:3306"
Changing network_mode: "bridge" to "host" did it for me.
This, with the following docker-compose.yml:
version: '2.2'
services:
  bind:
    image: sameersbn/bind:latest
    dns: 127.0.0.1
    ports:
      - 172.17.42.1:53:53/udp
      - 172.17.42.1:10000:10000
    volumes:
      - "/srv/docker/bind:/data"
    environment:
      - 'ROOT_PASSWORD=secret'
    network_mode: "host"
I ran into the same issue several times. Restarting docker seems to do the trick
A variation of @DmitrySandalov's answer: I had Tomcat/Java running on 8080, which needed to keep going. I looked at the docker-compose.yml file and changed the entry for 8080 to another port of my choosing.
nginx:
  build: nginx
  ports:
    #- '8080:80'   <-- original entry
    - '8880:80'
    - '8443:443'
Worked perfectly. (The only wrinkle is the change will be wiped if I ever update the project, since it's coming from an external repo.)
First, check which service is running on the specific port. In your case, port 3000 is already in use:
netstat -aof | findstr :3000
Now stop the process that is running on that port:
lsof -i tcp:3000
I resolved the issue by restarting Docker.
It makes more sense to change the port on the Docker side instead of shutting down other services that use port 80.
Just a side note if you have the same issue and are on Windows:
In my case the process in my way was just grafana-server.exe. I had first downloaded the binary version and double-clicked the executable, and now it starts as a service under the SYSTEM user, which I cannot taskkill (no permission).
I have to go to "Service manager" of Windows and search for service "Grafana", and stop it. After that port 3000 is no longer occupied.
Hope that helps.
The one that was using port 8888 was Jupyter, and I had to change the Jupyter Notebook configuration file to run it on another port.
To list who is using that specific port:
sudo lsof -i -P -n | grep 9
You can specify the port you want Jupyter to run uncommenting/editing the following line in ~/.jupyter/jupyter_notebook_config.py:
c.NotebookApp.port = 9999
In case you don't have a jupyter_notebook_config.py, try running jupyter notebook --generate-config. See the Jupyter documentation for further details on configuration.
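If you prefer not to edit the config file, the port can also be set per launch from the command line (a quick sketch; 9999 matches the example above):
jupyter notebook --port 9999           # one-off run on another port
jupyter notebook --generate-config     # creates ~/.jupyter/jupyter_notebook_config.py if it is missing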
Before, it was running with: docker run -d --name oracle -p 1521:1521 -p 5500:5500 qa/oracle
I just changed the port: docker run -d --name oracle -p 1522:1522 -p 5500:5500 qa/oracle
It worked fine for me!
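Note that if the service inside the container still listens on 1521, a common variant (an assumption on my part, not what the answer above did) is to remap only the host-side port:
# Host port 1522 -> container port 1521; assumes the Oracle service inside still listens on 1521
docker run -d --name oracle -p 1522:1521 -p 5500:5500 qa/oracle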
On my machine no PID was shown by netstat -tulpn for the in-use port (8080), so I could not kill it, and killing the containers and restarting the computer did not work. So the service docker restart command restarted Docker for me (Ubuntu), the port was no longer in use, and I am a happy chap and off to lunch.
Maybe it is too crude, but it works for me: restart the Docker service itself.
sudo service docker restart
Hope it works for you too!
I have run the container with another port, like... 8082 :-)
I came across this problem. My simple solution was to remove MongoDB from the system.
Commands to remove MongoDB on Ubuntu:
sudo apt-get purge mongodb mongodb-clients mongodb-server mongodb-dev
sudo apt-get purge mongodb-10gen
sudo apt-get autoremove
Let me add one more case, because I had the same error and none of the solutions listed so far worked:
serv1:
  ...
  networks:
    privnet:
      ipv4_address: 10.10.100.2
  ...
serv2:
  ...
  # no IP assignment, no dependencies

networks:
  privnet:
    ipam:
      driver: default
      config:
        - subnet: 10.10.100.0/24
Depending on the init order, serv2 may get assigned the IP 10.10.100.2 before serv1 is started, so I just assign IPs manually for all containers to avoid the error. Maybe there are more elegant ways.
I had the same problem, and it was resolved by stopping the Docker container.
sudo docker container stop <container-name>
I solved it with this: sudo service redis-server stop
I'm using gunicorn 19.7.1 appserver with nginx reverse proxy for a Django project (Ubuntu 14.04 machine).
ps aux | grep gunicorn | grep -v grep | wc -l yields 3043 at the moment.
Whereas in /etc/init/gunicorn.conf, I've always had -w 33. Yet these extra workers persist even if I do sudo service gunicorn stop and sudo service gunicorn start.
How do I kill the extraneous workers?
How did this happen?
The worker count of 33 has always been properly configured on my busy production system.
However a few hours ago, I was trying python's multiprocessing on the server and things went south. Gunicorn workers ate up all the memory and took out the resident redis instances as well.
I reverted the change and have managed to get everything back online, except the memory hasn't been released and I've had to cope with these legacy gunicorn workers. What's going on?
Yet these extra workers persist even if I do sudo service gunicorn stop and sudo service gunicorn start.
service only manages service-initiated processes, so if you started Gunicorn workers outside of the service framework, these workers will continue to live even after you stop the service.
How do I kill the extraneous workers?
The fast way:
Terminate all gunicorn processes, and then restart Gunicorn:
$ pkill gunicorn
$ sudo service gunicorn start
The better way:
Identify your "desired" Gunicorn workers by finding the parent:
$ sudo service gunicorn status
Note the parent process ID. Let's say it's 123.
Save a list of all the "desired" workers' PIDs:
$ echo 123 > desired_workers
$ pgrep -P 123 >> desired_workers
Save a list of all workers' PIDs:
$ pgrep gunicorn > all_workers
Terminate the "undesired" workers:
$ cat desired_workers all_workers | sort | uniq -u | xargs kill
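Put together, the "better way" could look roughly like the sketch below. Extracting the master PID from the service status output is an assumption; on your system you may need to read it from a PID file or from systemctl instead.
#!/bin/bash
# Sketch: keep the service-managed gunicorn workers, kill the rest.
MASTER_PID=$(sudo service gunicorn status | grep -o '[0-9]\+' | head -n 1)  # assumption: status prints the master PID

echo "$MASTER_PID" > desired_workers
pgrep -P "$MASTER_PID" >> desired_workers      # children of the master

pgrep gunicorn > all_workers                   # every gunicorn process on the box

# PIDs that appear only once across the two files are the undesired workers.
cat desired_workers all_workers | sort | uniq -u | xargs -r kill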
I have 2 different projects running on the same server. They are both Django projects with Gunicorn as the WSGI server. The server on top is Apache. Currently there is a Jenkins job that updates the source code from the repo and restarts (kills and starts) Gunicorn. This worked fine while the server was serving only one site.
I killed the gunicorn as follows
#!/bin/bash
ps -ef | grep gunicorn | grep -v grep | awk '{print $2}' | xargs kill -9
and then restarted it. However this approach will not work with 2 sites, since killing Gunicorn this way kills all Gunicorn processes. Whenever I run the build, only the Gunicorn for that site gets respawned.
I looked around and found that Supervisor is one utility I should use to prevent this and seamlessly restart Gunicorn.
Do you guys have other suggestions or best practices that I should follow?
Thanks
To only grab your project's gunicorn and restart it, you can use the following:
ps aux |grep gunicorn |grep yourappname | awk '{ print $2 }' |xargs kill -HUP
Other gunicorn processes will not be affected.
Gunicorn + Supervisor is a pretty standard stack. You could have your sites separated as different Supervisor tasks, and instead of telling Jenkins to restart Supervisor, use the Supervisor method for restarting just one of your tasks, and you're done.
Supervisor is also great if your site crashes and Gunicorn needs to be executed again.
I want to restart a Django server which is running using gunicorn.
I know how to use gunicorn on my own system, but now I need to restart a remote server that was not set up by me.
I don't know the master PID needed to restart the server.
Usually I HUP gunicorn with sudo kill -s HUP masterpid.
I tried with ps aux|grep gunicorn
and I did not find the gunicorn.pid file anywhere.
How can I get the master PID?
The one-liner below gets the job done perfectly:
kill -HUP `ps -C gunicorn fch -o pid | head -n 1`
Explanation
ps -C gunicorn only lists the processes with the gunicorn command, i.e., the workers and the master process. Workers are children of the master, as can be seen using ps -C gunicorn fc -o ppid,pid,cmd. We only need the PID of the master, therefore the h flag is used to remove the header line (the PID text). Note that the f flag ensures the master is printed above the workers.
The correct procedure is to send the HUP signal only to the master. This way gunicorn is gracefully restarted: only the workers, not the master, are recreated.
You can run gunicorn with the '-p' option, so you can get the PID of the master process from the PID file.
For example:
gunicorn -p app.pid your_app.wsgi.app
You can get the pid of the master by:
cat app.pid
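For example, a graceful reload using that PID file would then be (a one-line sketch):
kill -HUP "$(cat app.pid)"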
This should also work to restart gunicorn:
ps aux |grep gunicorn |grep yourapp | awk '{ print $2 }' |xargs kill -HUP
Step 1:
Go to /etc/systemd/system/gunicorn.service and open the file.
Add the lines below (the --pid flag goes on the ExecStart line, as in the example):
PIDFile=/run/gunicorn/gunicorn.pid
--pid /run/gunicorn/gunicorn.pid
Example:
[Service]
PIDFile=/run/gunicorn/gunicorn.pid
WorkingDirectory=/home/django/django_project
ExecStart=/usr/bin/gunicorn --pid /run/gunicorn/gunicorn.pid --name=django_project.....
User=django
Group=django
Step 2:
Go to /etc/tmpfiles.d/ and create a new file gunicorn.conf if it does not exist.
Add the line below:
d /run/gunicorn 0755 django django -
where django = user and group name
Step 3:
Reboot your server or run /etc/init.d/gunicorn restart for the change to take effect.
Your PID file location is now /run/gunicorn/gunicorn.pid; check it.
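On systemd hosts you can usually apply the change without a full reboot; a sketch, assuming the unit is named gunicorn.service as above:
sudo systemctl daemon-reload          # pick up the edited unit file
sudo systemd-tmpfiles --create        # create /run/gunicorn from the tmpfiles.d entry now
sudo systemctl restart gunicorn
cat /run/gunicorn/gunicorn.pid        # the master PID should be here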
Building on krizex's answer, when your master PID is stored in a file, you can gracefully reload your app in one command like this:
$ cat app.pid |xargs kill -HUP
I would have liked to comment on the answer itself but I don't have enough reputation to comment yet 😢.
I'm starting gunicorn with the Django command python manage.py run_gunicorn. How can I stop gunicorn properly?
Note: I have a semi-automated server deployment with fabric. Thus using something like ps aux | grep gunicorn to kill the process manually by pid is not an option.
To see the processes, use ps ax | grep gunicorn; to stop gunicorn_django, use pkill gunicorn.
One option would be to use Supervisor to manage Gunicorn.
Then again, I don't see why you can't kill the process via Fabric.
Assuming you let Gunicorn write a pid file you could easily read that file in a Fabric command.
Something like this should work:
run("kill `cat /path/to/your/file/gunicorn.pid`")
pkill gunicorn
or
pkill -P1 gunicorn
should kill all running gunicorn processes
pkill gunicorn stops all gunicorn daemons. So if you are running multiple instances of gunicorn with different ports, try this shell script.
#!/bin/bash
Port=5000
pid=`ps ax | grep gunicorn | grep $Port | awk '{split($0,a," "); print a[1]}' | head -n 1`
if [ -z "$pid" ]; then
echo "no gunicorn deamon on port $Port"
else
kill $pid
echo "killed gunicorn deamon on port $Port"
fi
ps ax | grep gunicorn | grep $Port shows the daemons listening on the given port.
Here is the command which worked for me:
pkill -f gunicorn
It will kill any process whose command line matches gunicorn.
Start:
gunicorn --pid PID_FILE APP:app
Stop:
kill $(cat PID_FILE)
The --pid flag of gunicorn requires a single parameter: a file where the process id will be stored. This file is also automatically deleted when the service is stopped.
I have used PID_FILE for simplicity but you should use something like /tmp/MY_APP_PID as file name.
If the PID file exists it means the service is running. If it is not there, the service is not running. To stop the service just kill it as mentioned.
You could also want to include the --daemon flag in order to detach the process from the current shell.
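A minimal start/stop wrapper built on these flags might look like this sketch (APP:app and the /tmp path are placeholders, as above):
#!/bin/bash
# Sketch: manage gunicorn through its PID file.
PID_FILE=/tmp/MY_APP_PID

case "$1" in
  start)
    gunicorn --daemon --pid "$PID_FILE" APP:app
    ;;
  stop)
    if [ -f "$PID_FILE" ]; then
      kill "$(cat "$PID_FILE")"       # gunicorn removes the PID file on shutdown
    else
      echo "not running"
    fi
    ;;
  *)
    echo "usage: $0 {start|stop}"
    ;;
esac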
To start the service that is running on Gunicorn:
sudo systemctl enable myproject
sudo systemctl start myproject
or
sudo systemctl restart myproject
But to stop the service running on Gunicorn:
sudo systemctl stop myproject
To learn more about hosting Python applications with Gunicorn, please refer here.
kill -9 `ps -eo pid,command | grep 'gunicorn.*${moduleName:appName}' | grep -v grep | sort | head -1 | awk '{print $1}'`
ps -eo pid,command fetches only the process ID and the command with its arguments
grep -v grep to get rid of output like 'grep --color=auto xxx'
sort | head -1 to do ascending sort and get first line
awk '{print $1}' to get pid back
One more thing you may need to pay attention to: where gunicorn is installed and which one you're using.
Ubuntu 16 has gunicorn installed by default; the executable is gunicorn3 and is located at /usr/bin/gunicorn3. If you installed it with pip, it's located at /usr/local/bin/gunicorn. Use which gunicorn and gunicorn -v to find out.
In your terminal, do:
ps ax|grep gunicorn
Then to kill the Gunicorn process, just do that:
kill -9 <gunicorn pid number>
In my case I dealt with many processes
For example: kill -9 398 399 4225 4772
The above solutions do not remove the PID file when the process is killed.
cat <pid-file> | xargs kill -2
This solution reads the PID file and sends an interrupt signal (SIGINT). This shuts down gunicorn properly, and the PID file is also removed.
The PID file can be generated by
gunicorn --pid PID-FILE
or by adding the following to the config file:
pidfile = "pid_file"
If we run:
pkill gunicorn
we stop all gunicorn services. In this case, to start gunicorn we only need to stop the parent process associated with the service listening on the port where gunicorn will run.
The following script searches for that process (PID); if it exists, it kills it:
#!/bin/bash
# ---------------------
stop_unicorn_on_port() {
  pid=$(lsof -w -t -i "TCP:${1}" | head -1)
  if [ -z "${pid}" ]; then
    echo "🦄 no service daemon on port ${1}"
  else
    kill -9 "${pid}"
    echo "🦄 killed service daemon (${pid}) on port ${1}"
  fi
}
# Example/Testing
stop_unicorn_on_port 5000
stop_unicorn_on_port 5001
stop_unicorn_on_port 5002
For more info check man lsof:
-t specifies that lsof should produce terse output with process identifiers only and no header - e.g., so
that the output may be piped to kill(1). -t selects the -w option.
-i selects the listing of files any of whose Internet address matches the address specified in i. If no
address is specified, this option selects the listing of all Internet and x.25 (HP-UX) network files...
Here are some sample addresses:
-i6 - IPv6 only
TCP:25 - TCP and port 25
@1.2.3.4 - Internet IPv4 host address 1.2.3.4
I built upon @David's recommendation to use --pid (PID_FILE) to fix the problem I faced, because killing the parent PID didn't kill the worker processes.
import os
import sys

import psutil


def stop_pid(pid):
    # Kill a single process by PID (psutil on Windows, `kill -9` elsewhere).
    if sys.platform == 'win32':
        p = psutil.Process(pid)
        p.terminate()  # or p.kill()
    else:
        os.system('kill -9 {0}'.format(pid))


def get_child_pids(ppid):
    # Collect the PIDs of all direct children of the given parent PID.
    pid_list = []
    for process in psutil.process_iter():
        _ppid = process.ppid()
        if _ppid == ppid:
            _pid = process.pid
            pid_list.append(_pid)
    return pid_list


def send_kill_cmd(ppid, cpids):
    stop_pid(ppid)  # Killing the parent proc first
    for pid in cpids:
        stop_pid(pid)


if __name__ == '__main__':
    parent_pid = int(sys.argv[1])
    child_pids = get_child_pids(parent_pid)
    send_kill_cmd(parent_pid, child_pids)
Then finally execute the above Python script with the commands below:
#!/bin/bash
FILE_NAME=PID_FILE
if [ -f "$FILE_NAME" ]; then
pypy stop_gunicorn.py "$(cat PID_FILE)"
echo "killed - $(cat PID_FILE) and it's child processes."
sleep 2
fi
echo 'Starting gunicorn'
nohup gunicorn --workers 1 --bind 0.0.0.0:5050 app:app --thread 50 --worker-class eventlet --reload --pid PID_FILE > nohup_outs/nohup_process.out &