Nginx inside docker container not responding - amazon-web-services

My Dockerfile is:
FROM nginx
I start a container on AWS with docker run -d --name ng_ex -p 8082:80 nginx and:
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6489cbb430b9 nginx "nginx -g 'daemon of…" 22 minutes ago Up 22 minutes 0.0.0.0:8082->80/tcp ng_ex
And inside the container:
service nginx status
[ ok ] nginx is running.
But when I try to send a request through the browser to my.ip.address:8082, I get a timeout error instead of the Nginx welcome page. What is my mistake and how can I fix it?

If you're on a VM on AWS, it means you must set up your security group to allow connections on port 8082, either from the whole internet or only from your IP/proxy IP (the timeout most likely comes from this).
Then my.ip.address:8082 should work.
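A minimal sketch of that rule with the AWS CLI (the security-group ID here is a placeholder; narrow --cidr to your own IP range if you don't want the port open to everyone):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8082 --cidr 0.0.0.0/0
Once the rule is in place, my.ip.address:8082 should return the Nginx welcome page from outside the VM.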
If you're inside your VM, get the container IP:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
Then curl <container IP>:80 (the container itself listens on port 80; the 8082 mapping only exists on the host).
If it is still not working, confirm that the image you build EXPOSEs port 80.

Related

Unable to connect to Docker container: Connection Refused

I have a WAR file deployed as a Docker container on a Linux EC2 instance. But when I try to hit http://ec2-elastic-ip:8080/AppName, I don't get any response.
I have all the security group inbound rules set up for both HTTP and HTTPS, so that's not the problem.
Debugging
I tried debugging by SSHing into the Linux instance. I tried the command curl localhost:8080; this is the response:
curl: (7) Failed to connect to localhost port 8080: Connection refused
I tried 127.0.0.1:8080 but got the same response.
The next thing I did was list the Docker containers with docker ps. I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
<ID> <ecr>.amazonaws.com/<my>-registry:2019-05-16.12-17-02 "catalina.sh run" 24 minutes ago Up 24 minutes 0.0.0.0:32772->8080/tcp ecs-app-24-name
Now, I connected to this container using docker exec -it <name> /bin/bash and checked the Tomcat logs, which clearly show that my application WAR is there and Tomcat has started.
I even tried checking docker-machine ip default, but this gave me the error:
Docker machine "default" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
Now I am stuck and not able to debug further. The result I am expecting is to be able to access the app through the URL above.
What should I do? Is there something I am doing wrong?
Also worth mentioning: the entire infrastructure is managed through Terraform. I first create the base image, copy the WAR into webapps using the Dockerfile, push the image to the registry, and finally do a terraform apply to apply any changes.
Make sure that the server inside the Docker container (Tomcat, in your case) is listening on all IP addresses, not just localhost; the bind address should be 0.0.0.0.
If a service running inside Docker listens only on localhost, it can only be accessed from inside that container, not from the host.
You can also try starting the server on port 8080 and binding the container's port 8080 to host port 8080:
docker run -p 8080:8080 apache
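To verify what the server inside the container is actually bound to, you can run a quick check from the host (this assumes netstat is installed in the image; the container name is taken from the docker ps output above):
docker exec -it ecs-app-24-name sh -c "netstat -tlnp | grep 8080"
If this shows 127.0.0.1:8080 rather than 0.0.0.0:8080 (or :::8080), the app is only reachable from inside the container.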
Currently your app is running on a random host port, i.e. 32772 (see the docker ps output). You should be able to access your app at http://ec2-ip:32772 once you allow port 32772 in the security group.
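You can confirm the mapping and test it from the instance itself before touching the security group (the container name comes from the docker ps output; the host port may differ on your machine):
docker port ecs-app-24-name
# prints something like: 8080/tcp -> 0.0.0.0:32772
curl http://localhost:32772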
In order to make it work on host port 8080, you need to bind/publish that host port during docker run:
$ docker run -p 8080:8080 ......
If you are on ECS, ideally you should use an ALB and a target group with your service.
However, if you are not using an ALB, you can try setting a static hostPort in the task definition, i.e. "hostPort": 8080 (I haven't tried this). If it works, make sure to set the deployment strategy's minimum healthy percent to 0, otherwise you might face port conflict issues.
If the application needs a network port, you must EXPOSE it in the Dockerfile.
EXPOSE <port> [<port>/<protocol>...]
In case you need that port to be mapped to a specific port on the network, you must define that when you spin up the new container.
docker run -p 8080:8080/tcp my_app
If you run each image separately, you must bind the port every time.
If you don't want to do this every time, you can use docker-compose and add a ports directive to it.
ports:
  - "8080:8080/tcp"
Supposing you added EXPOSE in the Dockerfile, the full docker-compose.yml would look like this:
version: '3'
services:
  web:
    build: .
    ports:
      - "8080:8080"
  my_app:
    image: my_app
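With that file in place, a typical usage would be (a rough sketch, assuming the Dockerfile sits next to docker-compose.yml):
docker-compose up --build -d
This builds the image and publishes the declared ports on every run, so you don't have to repeat -p each time.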

Can't connect to server running inside docker container (Docker for mac)

I have a Docker container running on my system, which I started using this command:
docker run -it -v ~/some/dir -p 8000:80 3cce3211b735 bash
Now docker ps lists this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
44de7549d38e 3cce3211b735 "bash" 14 minutes ago Up 14 minutes 22/tcp, 443/tcp, 8082/tcp, 0.0.0.0:8000->80/tcp hardcore_engelbart
Inside the container I run my Django app using the command python manage.py runserver 80.
But I am not able to view the page using either of these:
1. localhost:8000
2. 127.0.0.1:8000
I do understand that my port 8000 is mapped to port 80 in the container. But why am I not able to access it? I am using Docker for Mac, not Docker Toolbox. Please help, and comment if you need any more info.
Okay, so I found the solution to my problem. The issue was not in the Docker port mapping. The actual problem is this line:
python manage.py runserver 80
This runs the server on 127.0.0.1:80. The localhost inside the Docker container is not the localhost of your machine. So the solution is to run the server using this command:
python manage.py runserver 0.0.0.0:80
I was able to access the webpage after this. If you run into the same problem, where you are not able to connect to the Django server running inside your Docker container, try running the server on 0.0.0.0:port. You will then be able to access it in your browser at localhost:port. Hope this helps someone.
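For reference, a minimal sketch that combines both pieces (the image name my-django-image is a placeholder for your own image):
docker run -it -p 8000:80 my-django-image python manage.py runserver 0.0.0.0:80
With the server bound to 0.0.0.0 inside the container and container port 80 published to host port 8000, localhost:8000 on your Mac reaches the app.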

port number not accessible when running webpack-dev-server

I have a CentOS 7.2 box as my web server, hosted in AWS. I found something interesting: when I run my web site using an nginx Docker container, I'm able to access it from my local machine, i.e. I run the Docker command
docker run -d -p 8000:80 my-nginx-image
and access the web site through the URL below (my local machine is connected to that AWS host machine via a VPN connection):
http://10.77.20.253/index.html
This works perfectly well. However, when I try to host the site using webpack-dev-server, i.e.
webpack-dev-server --host 0.0.0.0 --port 8000
I can access it from that web server with no problem, but I can't access it from my local machine. I always get a timeout error.
I then did a
netstat -anp
on that Linux box. I noticed that when running from Docker, it is listening on
:::8000
while when I run it via webpack-dev-server, it is listening on
0.0.0.0:8000
I'm not sure what I'm missing here. So far I have tried:
webpack-dev-server --host localhost --port 8000
webpack-dev-server --host 127.0.0.1 --port 8000
webpack-dev-server --host 10.77.20.253 --port 8000 (the internal IP address)
but none of them work.
Any thoughts on this?

docker container port mapping issue

I think I am missing something obvious, but I can't seem to crack this one. I am trying to map a port from a Django application running uWSGI in a Docker container to my local Macintosh host. Here is the setup.
Mac OS X 10.11 running docker-machine 0.5.1 with VirtualBox 5.0.10 and Docker 1.9.1.
I created a server with docker-machine, set up my Dockerfile, and successfully built my Docker container. In the Dockerfile I have the following directive:
# Port to expose
EXPOSE 8000
This maps to the port used by uWSGI inside the container. When I run the container via
eval "$(docker-machine env dev)"
docker-machine ip dev
192.168.99.100
docker run -P launch
The container starts properly. If I enter the container and perform a
curl http://localhost:8000
I get my HTML as I would expect. On the outside, docker inspect container_id gives me
"Ports": {
"8000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "32768"
}
]
},
So I can see the mapping to port 32768 on the docker-machine host 192.168.99.100, as shown by the commands above. However, whenever I try to curl http://192.168.99.100:32768:
curl http://192.168.99.100:32768
curl: (7) Failed to connect to 192.168.99.100 port 32768: Connection refused
So, any thoughts on this? Everything should work as I understand Docker.
Thanks
Craig
Since you are running through a VirtualBox VM, I would still recommend mapping the port at the VirtualBox level, as I mention in "How to connect mysql workbench to running mysql inside docker?":
VBoxManage controlvm "boot2docker-vm" --natpf1 "tcp-port8000,tcp,,8000,,8000"
VBoxManage controlvm "boot2docker-vm" --natpf1 "udp-port8000,udp,,8000,,8000"
And run the container with an explicit port mapping (instead of the random -P)
docker run -p 8000:8000 launch
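As a sanity check (assuming the docker-machine IP 192.168.99.100 from the question), both of these should then respond:
curl http://192.168.99.100:8000
curl http://localhost:8000
The first works because of the explicit -p 8000:8000 mapping into the VM; the second because the natpf1 rule forwards port 8000 from the Mac to the VM.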

haproxy in docker container

I'm new to Docker and HAProxy. I tried to follow the example from the official Docker Hub repo.
So, I have Dockerfile
FROM haproxy:1.5
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
and a simple HAProxy config (which I expect to redirect local calls to my EB instance):
global
    # daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers

backend servers
    server server1 {my-app}.elasticbeanstalk.com:80 maxconn 32
Build and run
$ docker build .
$ docker run --rm d4598bcc293f
The container starts and then hangs; Ctrl+C doesn't stop it, only "docker kill" helps.
My EB resource is up and running:
$ curl {my-app}.elasticbeanstalk.com/status
{
"status": "OK"
}
But local calls fail
$ boot2docker ip
192.168.59.104
$ curl 192.168.59.104/status
curl: (7) Failed to connect to 192.168.59.104 port 80: Connection refused
What am I missing or doing wrong?
Thank you!
UPDATE: I've found the problem with the call redirection: a wrong port number in haproxy.cfg.
But this problem still annoys me... the container starts and hangs, Ctrl+C doesn't stop it, and only "docker kill" helps.
If you want to be able to exit with Ctrl+C, do docker run -i <image>. The -i means input is passed to the containerized program, so if HAProxy receives a Ctrl+C it will terminate, which stops the container.
HAProxy doesn't produce any output unless you run it in debug mode, so there's not really much point in running attached. You might have a better time with docker run -d <image>, which detaches from the container and lets it run in the background. To stop it, use docker kill.
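Putting that together with the Dockerfile from the question, a rough end-to-end sketch (the my-haproxy name is arbitrary):
docker build -t my-haproxy .
docker run -d --name my-haproxy -p 80:80 my-haproxy
docker kill my-haproxy
Tagging the image avoids copying the ID from docker build, and -p 80:80 publishes the frontend port so curl against the boot2docker IP can actually reach HAProxy.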