Based on the image 'python:2.7', I created two containers: container1 and container2.
Dockerfile for test1:
FROM python:2.7
EXPOSE 6789
CMD ["bash", "-c", "'while true;do sleep 1000;done;'"]
Dockerfile for test2:
FROM python:2.7
EXPOSE 9876
CMD ["bash", "-c", "'while true;do sleep 1000;done;'"]
Then I built two new images from the Dockerfiles above, named test1 and test2.
Container1:
docker run --name container1 test1
And I also set up a Django server on port 6789 in container1 with:
#In Django workspace
./manage.py runserver 6789
Container2:
docker run --name container2 --link container1:container1 test2
And I also set up a Django server on port 9876 in container2 with:
#In Django workspace
./manage.py runserver 9876
In container2, when I run
curl container1_ip:6789
I got a connection refused error.
How can I configure it to make it work?
I also created a container from the official nginx image, which exposes two ports (80 and 443) by default. Then I created another container linked to the nginx container, and inside it I did the same thing:
curl nginx_ip:80 #successful
and
curl nginx_ip:443 #connection refused
Why does this happen? Why does 80 work fine while 443 doesn't?
What are you trying to do by "curling" port 6789/9876 of your container? Is there anything (an FTP server or something else?) listening behind these ports?
Before trying to reach it from the other container, you should try reaching it from the container itself.
In container1:
curl container1_ip:6789
I think you can access these ports, but there is just nothing listening on them in your container.
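For example, a quick way to check from inside container1 what is actually listening (assuming net-tools or iproute2 is installed in the image):
docker exec -it container1 bash
# inside the container: list listening TCP sockets
netstat -tlnp    # or: ss -tlnp
# and try the port locally
curl 127.0.0.1:6789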
EDIT: If you downvote, please comment explaining why so I can improve my answers. Thanks.
The problem is not how to configure Docker; it is a configuration issue in Django. Django listens on localhost:port by default, so we have to specify an IP address to listen on, or use '0.0.0.0' to make it listen on all IP addresses.
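Concretely, the fix looks like this (run inside each container's Django workspace, as in the commands above):
# container1: bind to all interfaces instead of the default 127.0.0.1
./manage.py runserver 0.0.0.0:6789
# container2
./manage.py runserver 0.0.0.0:9876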
My Dockerfile is:
FROM nginx
I start a container on AWS with docker run -d --name ng_ex -p 8082:80 nginx and:
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6489cbb430b9 nginx "nginx -g 'daemon of…" 22 minutes ago Up 22 minutes 0.0.0.0:8082->80/tcp ng_ex
And inside a container:
service nginx status
[ ok ] nginx is running.
But when I try to send a request through the browser to my.ip.address:8082 I get a timeout error instead of the Nginx welcome page. What is my mistake and how do I fix it?
If you're on a VM on AWS, you must set up your security group to allow connections on port 8082, either from the whole internet or only from your IP/proxy IP. (The timeout may come from this.)
Then my.ip.address:8082 should work.
If you're inside your VM, get the container IP:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
Then curl <container IP>:80 (the container's internal port; 8082 is only published on the host).
If it's still not working, confirm that your image was built with EXPOSE 80.
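As a rough way to narrow it down from the EC2 host itself (names as used in the question):
# is the port mapping in place?
docker port ng_ex            # expect: 80/tcp -> 0.0.0.0:8082
# does nginx answer through the published port locally?
curl -I localhost:8082
# if both work, the remaining blocker is almost certainly the security group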
Hi everyone. I'm working with Docker, trying to dockerize a simple Django application that makes an external HTTP connection to a web page (a real website).
When I set the address of my Django server in the Dockerfile to 127.0.0.1:8000, my app wasn't working, because it was impossible to make an external connection.
But when I set the address for my server to 0.0.0.0:8000, it started to work.
So my question is: why does it behave like that? What is the difference in this particular case? I just want to understand it.
I read some articles about 0.0.0.0; it's described as a 'generic' or 'placeholder' address that makes the OS listen on all of its interfaces.
127.0.0.1 is an address that routes requests back to the current machine. I knew that.
But when I ran the app on my local machine (host: 127.0.0.1:8000) everything worked and the app could connect to the real website, whereas in Docker it stopped working.
Thanks for any help!
Here are my sources:
Dockerfile
FROM python:3.6
RUN mkdir /code
WORKDIR /code
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . ./
EXPOSE 8000
ENTRYPOINT [ "python", "manage.py" ]
CMD [ "runserver", "127.0.0.1:8000" ] # doesn't work
# CMD [ "runserver", "0.0.0.0:8000" ] - works
docker-compose.yml
version: "3"
services:
url_rest:
container_name: url_keys_rest
build:
context: .
dockerfile: Dockerfile
image: url_keys_rest_image
stdin_open: true
tty: true
volumes:
- .:/var/www/url_keys
ports:
- "8000:8000"
Here is the HTTP error that I received in the 127.0.0.1 case. Maybe it will be useful.
http: error: ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=8000): Max retries exceeded with url: /api/urls (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x10cd51e80>: Failed to establish a new connection: [Errno 61] Connection refused')) while doing GET request to URL: http://127.0.0.1:8000/api/urls
You must set a container’s main process to bind to the special 0.0.0.0 “all interfaces” address, or it will be unreachable from outside the container.
In Docker 127.0.0.1 almost always means “this container”, not “this machine”. If you make an outbound connection to 127.0.0.1 from a container it will return to the same container; if you bind a server to 127.0.0.1 it will not accept connections from outside.
One of the core things Docker does is to give each container its own separate network space. In particular, each container has its own lo interface and its own notion of localhost.
At a very low level, network services call the bind(2) system call to start accepting connections. That takes an address parameter. It can be one of two things: either it can be the IP address of some system interface, or it can be the special 0.0.0.0 “all interfaces” address. If you pick an interface, it will only accept connections from that interface; if you have two network cards on a physical system, for example, you can use this to only accept connections from one network but not the other.
So, if you set a service to bind to 127.0.0.1, that’s the address of the lo interface, and the service will only accept connections from that interface. But each container has its own lo interface and its own localhost, so this setting causes the service to refuse connections unless they’re initiated from within the container itself. If you set it to bind to 0.0.0.0, it will also accept connections from the per-container eth0 interface, where all connections from outside the container arrive.
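A minimal way to see this for yourself, assuming a stock python:3 image is available (the container name and port here are arbitrary):
# bound to the container's lo interface: unreachable through the published port
docker run -d --name bind-test -p 8000:8000 python:3 python -m http.server 8000 --bind 127.0.0.1
curl localhost:8000   # fails (reset/refused): nothing listens on the container's eth0
docker rm -f bind-test
# bound to all interfaces: reachable through the published port
docker run -d --name bind-test -p 8000:8000 python:3 python -m http.server 8000 --bind 0.0.0.0
curl localhost:8000   # returns a directory listing
docker rm -f bind-test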
My understanding is that Docker assigns each container its own IP address rather than localhost (127.*.*.*). So listening on 0.0.0.0 inside the dockerized application will work. I once tried to connect to a local database from inside a Dockerfile using localhost; it didn't work either, I guess for the same reason. Correct me if I am wrong, please!
Update: I've attached a diagram showing how Docker interacts with those IP addresses. Hope it helps.
I have a war file deployed as a Docker container on a Linux EC2 instance. But when I try to hit http://ec2-elastic-ip:8080/AppName, I don't get any response.
I have all the security group inbound rules set up for both HTTP and HTTPS, so that's not the problem.
Debugging
I tried debugging by SSH-ing into the Linux instance and running curl localhost:8080; this is the response:
curl: (7) Failed to connect to localhost port 8080: Connection refused
I tried 127.0.0.1:8080 as well, but got the same response.
The next thing I did was list the Docker containers with docker ps. I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
<ID> <ecr>.amazonaws.com/<my>-registry:2019-05-16.12-17-02 "catalina.sh run" 24 minutes ago Up 24 minutes 0.0.0.0:32772->8080/tcp ecs-app-24-name
Then I connected to this container using docker exec -it <name> /bin/bash and checked the Tomcat logs, which clearly show that my application war is there and Tomcat has started.
I even tried checking docker-machine ip default, but this gave me an error:
Docker machine "default" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
Now I am stuck and unable to debug further. The result I am expecting is to access the app through the URL above.
What should I do? Is there something I am doing wrong?
Also, to mention, the entire infrastructure is managed through Terraform. I first create the base image, copy the war to webapps using the Dockerfile, push the image to the registry, and finally do a terraform apply to apply any changes.
Make sure that Apache (Tomcat) is listening on all IP addresses inside the Docker container, not just localhost. It should be bound to 0.0.0.0.
If a service running inside Docker is listening only on localhost, it can only be accessed inside that container, not from the host.
You can also try starting Apache on port 8080 and binding the container's port 8080 to host port 8080:
docker run -p 8080:8080 apache
Currently your app is published on a random host port, i.e. 32772; see the docker ps output. You should be able to access your app at http://ec2-ip:32772 once you allow port 32772 in the security group.
In order to make it work on host port 8080, you need to bind/publish the host port during docker run:
$ docker run -p 8080:8080 ......
If you are on ECS, ideally you should use an ALB and target group (TG) with your service.
However, if you are not using an ALB etc., you can try setting a static hostPort in the task definition, "hostPort": 8080 (I haven't tried this). If it works fine, make sure to change the deployment strategy to "minimum healthy percent = 0", or else you might face port conflict issues.
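A rough way to verify the first suggestion (the EC2 address is the placeholder from the question; the image name is hypothetical):
# after allowing 32772 in the security group, hit the randomly published port
curl -I http://ec2-elastic-ip:32772/AppName
# or re-run the container with a fixed mapping to the Tomcat port
docker run -d -p 8080:8080 <your-tomcat-image>
curl -I http://ec2-elastic-ip:8080/AppName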
If the application needs a network port, you must EXPOSE it in the Dockerfile.
EXPOSE <port> [<port>/<protocol>...]
In case you need that port to be mapped to a specific port on the host, you must define that when you spin up the new container.
docker run -p 8080:8080/tcp my_app
If you run each image separately, you must bind the port every time.
If you don't want to do this every time, you can use docker-compose and add the ports directive to it.
ports:
- "8080:8080/tcp"
Supposing you added EXPOSE in the Dockerfile, the full docker-compose.yml would look like this:
version: '3'
services:
  web:
    build: .
    ports:
      - "8080:8080"
  my_app:
    image: my_app
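With a compose file like the one above, the usual workflow would be roughly (standard Compose commands; build context assumed to be the current directory):
docker-compose up -d --build
curl -I localhost:8080   # should reach the service through the published port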
I have a Docker container running on my system which I started using this command:
docker run -it -v ~/some/dir -p 8000:80 3cce3211b735 bash
Now docker ps lists this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
44de7549d38e 3cce3211b735 "bash" 14 minutes ago Up 14 minutes 22/tcp, 443/tcp, 8082/tcp, 0.0.0.0:8000->80/tcp hardcore_engelbart
Inside the container I run my Django app using the command: python manage.py runserver 80
But I am not able to view the page using either of these:
1. localhost:8000
2. 127.0.0.1:8000
I do understand that host port 8000 is mapped to port 80 in the container. But why am I not able to access it? I am using Docker for Mac, not Docker Toolbox. Please help, and comment if you need any more info.
Okay, so I found the solution to my problem. The issue was not the Docker port mapping. The actual problem is this line:
python manage.py runserver 80
This runs the server on 127.0.0.1:80. The localhost inside the Docker container is not the localhost on your machine. So the solution is to run the server using this command:
python manage.py runserver 0.0.0.0:80
I was able to access the webpage after this. If you run into the same problem, where you are not able to connect to the Django server running inside your Docker container, try running the server on 0.0.0.0:port. You will then be able to access it in your browser using localhost:port. Hope this helps someone.
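For reference, a quick end-to-end check might look like this (port numbers as in the question):
# inside the container's existing bash session
python manage.py runserver 0.0.0.0:80
# from the macOS host, through the published port
curl -I localhost:8000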
I have a CentOS 7.2 box as my web server hosted in AWS. I found something interesting: when I run my web site using an nginx Docker container, I'm able to access it from my local machine, i.e. I run the docker command
docker run -d -p 8000:80 my-nginx-image
and access the web site through the URL below (my local machine is connected to that AWS host machine via a VPN connection)
http://10.77.20.253/index.html
This works perfectly well. However, when I try to host the site using webpack-dev-server, i.e.
webpack-dev-server --host 0.0.0.0 --port 8000
I can access it from that web server with no problem, but I can't access it from my local machine. I always get a timeout error.
I then did a
netstat -anp
on that Linux box, and I noticed that when running from Docker, it is listening on
:::8000
while when I run it from webpack-dev-server (wds), it was listening on
0.0.0.0:8000
I'm not sure what I'm missing here; so far I have tried
webpack-dev-server --host localhost --port 8000
webpack-dev-server --host 127.0.0.1 --port 8000
webpack-dev-server --host 10.77.20.253 --port 8000 (the internal IP address)
but none of them works.
Any thoughts on it?