Docker: networking without linking - web services

I have the following setup running on one host:
1 container with nginx: this one serves as a reverse proxy for some web services
x containers offering web services, each having exposed a port to the host
x "old-school"/non-dockerized web services
When I configure nginx to proxy to "localhost:$EXPOSED_OR_NATIVE_PORT", this does not work, because nginx can't connect to that port.
How do I have to configure the dockerized nginx so that it serves as a proxy for both the containers and the standard services?
Linking nginx with the Docker web services might be one solution, although I don't like the idea of having all containers linked to nginx. And this does not solve the problem that this nginx should also serve as a reverse proxy for the standard services on this host.
Any idea/recommendation?
Thanks

If you want nginx inside a container to proxy for services on the host, you might just run that container with --net=host, so it is not placed inside a network namespace and accesses the host's network interfaces directly.
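For example (container name and image tag are illustrative, not from the question):

# run nginx in the host's network namespace; no -p port mappings needed,
# and "localhost" inside the container is the host's localhost
docker run -d --net=host --name nginx-proxy nginx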

Answering myself after trying a lot of stuff. I hope this helps someone.
I had the following process:
As @Ben mentioned, using the bridge IP helped and everything was fine.
But then I realized that this setup does not work with UFW on Ubuntu: every exposed port of every running Docker container was reachable from the internet.
The reason for that is that Docker fiddles around with iptables, and this conflicts with the UFW-generated iptables rules. Quite dangerous in my eyes. In order to fix that problem, I started the Docker daemon with DOCKER_OPTS="--iptables=false". That solved the problem of the world-reachable exposed Docker ports. But then I couldn't access the Docker containers from the nginx container anymore. This is where @Bryan helped out: the container started with --net=host has access to localhost and all exposed ports.
One last step was necessary: adding this iptables rule in order to have access to the web from within a Docker container: iptables -t nat -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE
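For completeness, a sketch of the whole sequence on Ubuntu (the daemon-restart step is my assumption; the file path is the Ubuntu default of that era):

# /etc/default/docker: keep Docker from writing its own iptables rules
DOCKER_OPTS="--iptables=false"

# restart the daemon so the option takes effect (assumed step)
sudo service docker restart

# restore outbound internet access for containers
sudo iptables -t nat -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE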
Best regards,
Dakky

If your nginx is dockerized and you want to reach another container or the host, you should use the host's bridge IP and NOT localhost. The default is 172.17.42.1, as can be read here: https://docs.docker.com/articles/networking/
So you should proxy to:
proxy_pass http://172.17.42.1:$EXPOSED_OR_NATIVE_PORT;
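Embedded in a minimal server block it might look like this (the port is illustrative; nginx cannot expand a shell-style variable like $EXPOSED_OR_NATIVE_PORT, so hard-code the real port, and verify the bridge IP with ip addr show docker0, since it can differ per host):

server {
    listen 80;
    location / {
        # bridge IP as above; 8080 stands in for the real exposed port
        proxy_pass http://172.17.42.1:8080;
    }
}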

Related

Nginx Proxy Manager and Django with nginx

I have a stack Django+Gunicorn+nginx running in docker containers. It is accessible from outside by domain and port, like web.example.com:1300. Also, there is Nginx Proxy Manager (NPM) running (uses ports 80 and 443) and successfully managing some other resources (for example Nextcloud). But it doesn't proxy to my Django project at port 1300; it shows "502 Bad Gateway".
In the Proxy Hosts of NPM I've added config:
domain names: web.example.com
Forward Hostname / IP: nginx_docker_container_name (this way it works with other resources)
Forward Port: 1300
Other settings: tried multiple combinations without success (like with and without SSL certificates, etc.)
Is it possible to proxy using NPM?
Sorry if I missed writing some information; I do not know what else to state.
I managed to solve the problem myself.
So, nginx in a docker container serves a web site with static pages. Nginx Proxy Manager proxies HTTP to nginx and secures the communication (and also runs in a docker container in my set-up).
My mistake was that I didn't connect those docker containers with a virtual network.
Once I put them into one network, everything worked.
Then I unpublished the nginx port (1300).
NPM proxy settings are "standard", e.g. no "custom location" and nothing in the "Advanced" tab. Just "Forward Hostname / IP" is the docker container name and "Forward Port" is the nginx port it listens on (80 by default).
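A minimal docker-compose sketch of that layout (service and network names are illustrative; jc21/nginx-proxy-manager is the usual NPM image, and a real NPM deployment also needs volumes for its data):

version: '3'
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"     # HTTP
      - "443:443"   # HTTPS
      - "81:81"     # NPM admin UI
    networks:
      - proxy
  web:
    image: nginx    # serves the site's static pages; no published ports
    networks:
      - proxy
networks:
  proxy:

Then NPM's "Forward Hostname / IP" is web and "Forward Port" is 80.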
With WhiteNoise, you don't need to configure nginx for Django static files.

NGINX Docker on Server with pre-existing NGINX on Ubuntu Server

I am currently running into an issue with one of my projects that will be running in Docker on my Ubuntu server, with an NGINX docker container to manage the reverse proxy for the Django project. The issue is that I already have previous Django projects running on that particular Ubuntu server, so port 80 is already being used by an NGINX server block running on the host itself.
Is there a workaround to running my Docker NGINX as well as the Ubuntu NGINX, and having my docker image run as an "add-on" site? The Django sites hosted there are client websites, so I would prefer not to interfere with them if I don't have to.
My project needs HTTPS because it is serving data to a React Native app running on Android API 28, which blocks non-HTTPS connections by default. If anyone else has run into an issue like this, I would gladly appreciate advice on how to tackle it.
I have tried running NGINX in Docker with port 81 instead of port 80 and that works perfectly, but I don't think there is a way to make a secure connection to port 81, is there?
Thanks in advance.
You can't just mess with the default HTTP(S) ports for public endpoints: browsers use 80 and 443 by default. If you change those, your users would have to connect to your.server.com:81 or something similar. Nobody would do that for a public server, but it can be an option for a private one.
I think a reasonable way out of this is to use the host's NGINX to proxy requests into Docker's NGINX (if there is any sense in keeping the latter at all). You can handle HTTPS termination on the host's NGINX and pass plain HTTP into Docker's one.
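A sketch of that setup on the host's NGINX (domain and certificate paths are placeholders; assumes the Docker NGINX is published on host port 81):

server {
    listen 443 ssl;
    server_name your.server.com;

    ssl_certificate     /etc/ssl/certs/your.server.com.crt;
    ssl_certificate_key /etc/ssl/private/your.server.com.key;

    location / {
        # HTTPS is terminated here; plain HTTP goes to the Docker NGINX
        proxy_pass http://127.0.0.1:81;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}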
Another adequate option is to use another server, so that everything works with no dirty hacking involved.

Unable to connect to Docker container: Connection Refused

I have a war file deployed as a Docker container on a Linux EC2 instance. But when I try to hit http://ec2-elastic-ip:8080/AppName, I don't get any response.
I have all the security group inbound rules set up for both HTTP and HTTPS, so that's not the problem.
Debugging
I tried debugging by SSHing into the Linux instance. I tried the command curl localhost:8080; this is the response:
curl: (7) Failed to connect to localhost port 8080: Connection refused
Tried with 127.0.0.1:8080, but got the same response.
Next thing I did was to list the Docker container: docker ps. I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
<ID> <ecr>.amazonaws.com/<my>-registry:2019-05-16.12-17-02 "catalina.sh run" 24 minutes ago Up 24 minutes 0.0.0.0:32772->8080/tcp ecs-app-24-name
Next, I connected to this container using docker exec -it <name> /bin/bash and checked the Tomcat logs, which clearly show that my application war is there and Tomcat has started.
I even tried checking docker-machine ip default, but this gave me an error:
Docker machine "default" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
Now I am stuck and not able to debug further. The result I am expecting is to access the app through the URL above.
What to do? Is there something I am doing wrong?
Also, to mention, the entire infrastructure is managed through Terraform. I first create the base image, copy the war to webapps using the Dockerfile, push the image to the registry, and finally do a terraform apply to apply any changes.
Make sure that the application server (Tomcat, in your case) is listening on all IP addresses inside the docker container, not just localhost. The bind address should be 0.0.0.0.
If a service running inside docker listens only on localhost, it can only be accessed inside that container, not from the host.
You can also try starting Tomcat on port 8080 and binding the container's port 8080 to host port 8080 (note that options must come before the image name):
docker run -p 8080:8080 tomcat
Currently your app is working on a random host port, i.e. 32772; see the docker ps output. You should be able to access your app on http://ec2-ip:32772 once you allow port 32772 in your security groups.
In order to make it work on host port 8080, you need to bind/publish the host port during docker run:
$ docker run -p 8080:8080 ......
If you are on ECS, ideally you should use an ALB and target group with your service.
However, if you are not using an ALB etc., then you can try giving a static hostPort in the task definition: "hostPort": 8080 (I haven't tried this). If it works fine, you will need to make sure to change the deployment strategy to "minimum healthy percentage = 0", or else you might face port conflict issues.
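For reference, the relevant fragment of the ECS task definition would look roughly like this (a sketch, untested):

"portMappings": [
    {
        "containerPort": 8080,
        "hostPort": 8080,
        "protocol": "tcp"
    }
]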
If the application needs a network port, you should EXPOSE it in the Dockerfile (EXPOSE documents the port; the actual mapping happens at run time):
EXPOSE <port> [<port>/<protocol>...]
In case you need that port to be mapped to a specific port on the host network, you must define that when you spin up the new container.
docker run -p 8080:8080/tcp my_app
If you run each image separately, you must bind the port every time.
If you don't want to do this every time, you can use docker-compose and add the ports directive to it:
ports:
  - "8080:8080/tcp"
Supposing you added EXPOSE in the Dockerfile, the full docker-compose.yml would look like this:
version: '3'
services:
  web:
    build: .
    ports:
      - "8080:8080"
  my_app:
    image: my_app

Configuring nginx and docker on EC2 (Jetbrains)

I have set up JetBrains Upsource and TeamCity on the same EC2 instance for evaluation purposes. If I expose each container on port 80 separately, I can reach it from the outside world. I want to know how to set up nginx so that I can reach each container via a subdomain, e.g. "upsource.example.aws.com" and "teamcity.example.aws.com". I exposed the containers on ports 8080 and 8111 to the host. Is it even possible to achieve this? If so, I don't know where to start. I have read about ways to host multiple domains on one machine for a Node web app through exposing the static context, but I have no idea how to make it work with the preconfigured docker images.
Can this be achieved via the nginx conf file?
If not, do I have to use two instances, or is there another possibility within AWS?
It's possible with nginx. You have to use something called a reverse proxy.
You can expose both containers on different ports and redirect to them with the help of the nginx configuration.
For example, if you have containers running on ports 8000 and 8001 on 127.0.0.1, you can redirect like this:
location /1 {
    proxy_pass http://127.0.0.1:8000;
}
location /2 {
    proxy_pass http://127.0.0.1:8001;
}
Updated Answer
You need to have 3 containers. The nginx server should be running on port 80. The other two containers will host the sites, say on port 8080 (upsource.example.aws.com) and port 8111 (teamcity.example.aws.com).
Update the configuration file with location settings as shown above. Make sure that location / forwards to port 8080 and location /teamcity forwards to port 8111 on your host. More details on how to configure nginx are on Docker Hub.
Working
When you go to blabla.com, the nginx server forwards it to port 8080, and when you go to blabla.com/teamcity, it goes to port 8111.
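Since the question asks for subdomains rather than paths, name-based virtual hosts are the closer fit; a sketch using the hostnames and ports from the question:

server {
    listen 80;
    server_name upsource.example.aws.com;
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

server {
    listen 80;
    server_name teamcity.example.aws.com;
    location / {
        proxy_pass http://127.0.0.1:8111;
    }
}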

How to use nginx with docker as a reverse proxy

I have a Django app and am running it with gunicorn on localhost:8000.
I have configs for nginx to use it as a reverse proxy.
upstream django {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    server localhost:8000 fail_timeout=0;
}
I know how to expose 80 port and run nginx in container, but I don't understand how to connect gunicorn running on localhost and nginx in container.
You'll need to use the IP of the bridge created by Docker. There is a good article in the Docker docs explaining this:
https://docs.docker.com/v1.6/articles/networking/
When Docker starts, it creates a virtual interface named docker0 on
the host machine. It randomly chooses an address and subnet from the
private range defined by RFC 1918 that are not in use on the host
machine, and assigns it to docker0.
If we take a look at the IP address assigned to docker0 (sudo ip addr show docker0) we could use this as the IP address to communicate with the host from within a docker container.
upstream django {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    server IP_OF_DOCKER0:8000 fail_timeout=0;
}
I haven't tested the above but I believe it should work. If not you may need to bind gunicorn to the docker0 IP as well.
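For the binding part, something like this should do (the WSGI module path is a placeholder):

# listen on all interfaces (which covers docker0) instead of just 127.0.0.1
gunicorn --bind 0.0.0.0:8000 myproject.wsgi:application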
This answer has some good insight into this process as well...
From inside of a Docker container, how do I connect to the localhost of the machine?
A better approach might be to "dockerize" the Django application too, build a network between the dockerized nginx and the dockerized Django application, then expose the HTTP port from the dockerized nginx on all interfaces.
Here is a good post about this, maybe you can take some hints from it :)
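A minimal docker-compose sketch of that approach (service names, image, and module path are illustrative). Compose puts both services on a shared default network, so nginx's config can proxy_pass to http://django:8000 by service name:

version: '3'
services:
  nginx:
    image: nginx
    ports:
      - "80:80"   # only nginx is published to the outside
    depends_on:
      - django
  django:
    build: .      # image containing the Django app and gunicorn
    command: gunicorn --bind 0.0.0.0:8000 myproject.wsgi:application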