How to use nginx with docker as a reverse proxy - django

I have a Django app and I'm running it with Gunicorn on localhost:8000.
I have configs for nginx to use it as a reverse proxy:
upstream django {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    server localhost:8000 fail_timeout=0;
}
I know how to expose port 80 and run nginx in a container, but I don't understand how to connect Gunicorn running on localhost with nginx running in the container.
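For reference, an upstream like this is normally used from a server block roughly like the following sketch; the server_name and header directives are illustrative placeholders, not part of the actual config:
server {
    listen 80;
    server_name example.com;             # placeholder
    location / {
        proxy_pass http://django;        # the upstream defined above
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}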

You'll need to use the IP of the bridge created by Docker. There is a good article on the Docker site explaining this:
https://docs.docker.com/v1.6/articles/networking/
When Docker starts, it creates a virtual interface named docker0 on
the host machine. It randomly chooses an address and subnet from the
private range defined by RFC 1918 that are not in use on the host
machine, and assigns it to docker0.
If we take a look at the IP address assigned to docker0 (sudo ip addr show docker0) we could use this as the IP address to communicate with the host from within a docker container.
upstream django {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    server IP_OF_DOCKER0:8000 fail_timeout=0;
}
I haven't tested the above but I believe it should work. If not you may need to bind gunicorn to the docker0 IP as well.
This answer has some good insight into this process as well...
From inside of a Docker container, how do I connect to the localhost of the machine?
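To make the binding step above concrete, here is a minimal sketch of the host-side commands involved (the project module name myproject is a placeholder, and the bridge address shown is only an example; use whatever ip addr reports on your host):
# Find the docker0 bridge address on the host (commonly 172.17.0.1 or 172.17.42.1).
ip addr show docker0
# Bind Gunicorn to that address (or to 0.0.0.0) instead of 127.0.0.1,
# so the nginx container can reach it across the bridge.
gunicorn --bind 172.17.0.1:8000 myproject.wsgi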

A better approach might be to "dockerize" the Django application too, build a network between the dockerized nginx and the dockerized Django application, and then expose the HTTP port from the dockerized nginx to all interfaces.
Here is a good post about this, maybe you can take some hints from it :)
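A rough sketch of what that could look like with docker-compose (the image and service names here are made up for illustration):
version: "3"
services:
  web:
    image: my-django-app          # hypothetical image running gunicorn on port 8000
    expose:
      - "8000"                    # reachable by other services, not published to the host
  nginx:
    image: my-nginx-proxy         # hypothetical image with the nginx config baked in
    ports:
      - "80:80"                   # only nginx is exposed to the outside world
    depends_on:
      - web
Inside the nginx config the upstream then simply points at the service name, e.g. server web:8000;, because compose puts both containers on a shared network where service names resolve.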

Related

Pointing Nginx to Gunicorn on a different server

I have a bunch of different services running across several lxc containers, including Django apps.
What I want to do is install Nginx on the Host machine and proxy pass to the Gunicorn instance running in an lxc container. The idea is that custom domain names and certs are added on the Host and the containers remain unchanged for different installs.
It works if I proxy_pass from the host to nginx running in a container, but I then have issues with CSRF, which, for the Django admin, cannot be turned off.
Thus, what I would like to do is only run nginx on the host server, connecting to the Gunicorn instance in the LXC container.
I'm not sure if that will fix the CSRF issues, but it does not seem right to run multiple nginx instances.
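For concreteness, the host-side server block being described would look roughly like the sketch below; the domain and container address are placeholders, and forwarding the original Host header is usually what matters for Django's CSRF and ALLOWED_HOSTS checks:
server {
    listen 80;
    server_name app.example.com;                 # placeholder domain managed on the host
    location / {
        proxy_pass http://10.0.3.10:8000;        # placeholder LXC container IP and Gunicorn port
        proxy_set_header Host $host;             # keep the original Host for Django's checks
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}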

Using an Nginx container to proxy to a React container on Elastic Beanstalk with docker-compose: is that a valid approach?

I'm hoping for some general feedback on my approach to trying to get a multi-container app up on Elastic Beanstalk via docker-compose (choosing "docker running on 64bit amazon linux 2" as the platform). Two main questions:
Am I right that Elastic Beanstalk listens on port 80 by default when we use docker-compose, and so we need to treat port 80 as the entry point to our application? (reasons for thinking so below)
I'm using an nginx container (listening on port 80 of the host machine) to route traffic to my front-end container (which talks to a back-end container). Is that approach to getting multiple containers running on EB valid? Or is there some other much more common/straightforward way of doing this with docker-compose? (Something like "yes! that's a valid way to do this," or "no, you are going about this the totally wrong way, go in this other direction" would be helpful.)
Why Elastic Beanstalk instead of ECS? I'm just trying to understand the basic functionality of Elastic Beanstalk first.
More context:
I have three containers: 1) a react front end, 2) an express back end, and 3) an nginx proxy. Why did I add nginx? Because it's my understanding that Elastic Beanstalk sets the proxy server configuration to none when you use docker-compose (see here, under "Environments with Docker Compose"), so what we have to do is get nginx to listen on the default point of entry for an Elastic Beanstalk app, port 80, and then forward requests to other containers on the bridge network created by docker-compose. In other words, when you go to the link associated with an EB app, it will default to whatever is listening on port 80. And if that's an Nginx server, you will be routed according to the configuration of the Nginx server.
I'll paste my docker-compose.yml file below, along with my nginx Dockerfile and its default.conf file. At this point, it mostly runs on my local machine (I can get to the front and back end directly on their respective ports), but I still get an Invalid Host header error when I go to localhost:80/, where I should be proxied (by Nginx) to the React app homepage. Any errors noticed would be helpful, but I'm not necessarily looking for a solution to that specific problem. Rather, my main two questions are up above, looking for some confirmation/correction of my general approach.
Thank you!
docker-compose.yml
version: "3"
services:
nginx:
image: mfatigati/shop-proxy
ports:
- "80:80"
depends_on:
- client
server:
image: mfatigati/shop-server-amd
container_name: shop-server
ports:
- "4001:4000"
client:
image: mfatigati/shop-client-amd
container_name: shop-client
depends_on:
- server
ports:
- "3001:3000"
Dockerfile for nginx
FROM nginx
EXPOSE 80
RUN rm -r /usr/share/nginx/html*
COPY configs/default.conf /etc/nginx/conf.d/default.conf
CMD ["nginx", "-g", "daemon off;"]
default.conf for nginx
upstream shop-client {
    server shop-client:3001;
}
server {
    listen 80;
    location / {
        proxy_pass http://shop-client;
    }
}
I think it might be fine. I have a similar approach on ECS. However, you will need to use nginx as a reverse proxy, otherwise you will run into CORS issues as soon as you run it anywhere other than on localhost.
On ECS you have the option to run all containers in the same pod or in different ones. If you choose the latter (for better scalability, amongst other reasons), you will need to set up a discovery service as well.
On Beanstalk there might be options I do not know about, but at least using nginx should be fine.
You can set beanstalk to different ports if you want, but for reverse proxy you will still need something, so I'd just leave it at 80.
EDIT: addition on reverse proxy:
The problem to solve is the CORS issues you encounter.
The upstream directive is used for load balancing/routing (see this question and its answers).
The general mechanism is to have several locations in your nginx conf that point specific requests to other servers. This way all requests can be made to the same origin (domain, port, scheme), and nginx forwards them to the servers, which can then happily reply.
Something like:
server {
    listen 80;
    listen [::]:80;
    server_name my-app.com;
    location /api/ {
        proxy_pass http://api-service:8080;
    }
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }
}
You can have as many locations as necessary. This is very basic, and you might want to look into some other features, but this should be the basic setup.

Configuring nginx and docker on EC2 (Jetbrains)

I have set up JetBrains Upsource and TeamCity on the same EC2 instance for evaluation purposes. If I expose each container on port 80 separately, I can reach it from the outside world. I want to know how to set up nginx so that I can reach each container over a subdomain, e.g. "upsource.example.aws.com" and "teamcity.example.aws.com". I exposed the containers on ports 8080 and 8111 to the host. Is it even possible to achieve this? If so, I don't know how to start. I have read about ways to host multiple domains on one machine for a Node web app by exposing the static content, but I have no idea how to make it work with the preconfigured Docker images.
Can this be achieved via nginx conf file ?
If not, do I have to use two instances or is there another possibility within aws ?
It's possible with nginx. You have to use something called a reverse proxy.
You can expose both containers on different ports and redirect to them with the help of the nginx configuration.
For example, if you have containers running on ports 8000 and 8001 on 127.0.0.1, you can redirect like this:
location /1 {
    proxy_pass http://127.0.0.1:8000;
}
location /2 {
    proxy_pass http://127.0.0.1:8001;
}
Updated Answer
You need to have 3 containers. The nginx server should be running on port 80. The other two containers will host the sites on, say, port 8000 (upsource.example.aws.com) and port 8111 (teamcity.example.aws.com).
Update the configuration file with the location settings as shown above. Make sure that location / forwards to port 8000 and location /teamcity forwards to port 8111 on your host. More details on how to configure nginx are on Docker Hub.
Working
When you go to blabla.com the nginx server forwards it to port 8000, and when you go to blabla.com/teamcity it goes to port 8111.
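If you want the subdomain scheme from the question rather than path prefixes, the usual pattern is one server block per hostname; a sketch, assuming the containers stay published on host ports 8080 and 8111:
server {
    listen 80;
    server_name upsource.example.aws.com;
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
server {
    listen 80;
    server_name teamcity.example.aws.com;
    location / {
        proxy_pass http://127.0.0.1:8111;
    }
}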

docker: networking without linking

I have the following setup running on one host:
1 container with nginx: this one serves as a reverse proxy for some webservices
x containers offering webservices, each having exposed a port to the host
x "oldschool"/non-dockerized webservices
When configuring nginx to proxy to "localhost:$EXPOSED_OR_NATIVE_PORT", this does not work, because nginx can't connect to this port.
How do I have to configure the dockerized nginx in order to serve as a proxy for both containers and standard services?
Linking nginx with the dockerized webservices might be one solution, although I don't like the idea of having all containers linked to the nginx container. And it does not solve the problem that this nginx should also serve as a reverse proxy for standard services on this host.
Any idea/recommendation?
Thanks
If you want nginx inside a container to proxy for services on the host, you might just run that container with --net=host, so it is not placed inside a network namespace and accesses the host's network interfaces directly.
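A sketch of what that looks like on the command line; the config path is a placeholder:
# Run nginx in the host's network namespace: no port publishing is needed, and
# "localhost" inside the container now refers to the host itself, so
# proxy_pass http://localhost:$EXPOSED_OR_NATIVE_PORT works as written.
docker run -d --net=host \
  -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx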
Answering myself after trying a lot of stuff. I hope this helps someone.
I had the following process:
As #Ben mentioned, using the bridge IP helped and everything was fine.
But then I realized that this setup does not work with UFW on Ubuntu, and every exposed port of every running Docker container was reachable from the internet.
The reason for that is that Docker fiddles around with iptables, and this conflicts with the UFW-generated iptables rules. Quite dangerous in my eyes. In order to fix that problem, I started the Docker daemon with DOCKER_OPTS="--iptables=false". That solved the problem of the world-reachable exposed Docker ports. But then I couldn't access the Docker containers from the nginx container anymore. This is where #Bryan helped out: a container started with --net host has access to localhost and all exposed ports.
One last step was necessary: adding this iptables rule was needed in order to have access to the web from within a Docker container: iptables -t nat -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE
Best regards
Dakky
If your nginx is dockerized and you want to reach another container or the host, you should use the host's IP and NOT localhost. The default is 172.17.42.1, as can be read here: https://docs.docker.com/articles/networking/
So you should proxy to:
proxy_pass http://172.17.42.1:$EXPOSED_OR_NATIVE_PORT;

Why does Gunicorn use port 8000/8001 instead of 80?

I'm busy setting up a development environment for the Django framework using Gunicorn (as the Django application server) and NGINX (as a reverse proxy).
When I look at several tutorials like this one and this one, I see that they use port 8000 and port 8001 (http://127.0.0.1:8000 and http://127.0.0.1:8001). Is there a special reason not to use port 80, like any other webserver?
Port 8000 is often used for radio streaming and malware, so why use it?
BTW: I am running it using Virtualenv on a Ubuntu 12.04 system.
All ports under 1024 are privileged ports. Binding to a privileged port requires root permissions, and typically you don't want to run Gunicorn with root-level permissions.
What's done instead is to let nginx bind to port 80 and proxy requests arriving there to a non-privileged port like 8000, using an nginx configuration like:
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
NGINX listens on port 80 and forwards requests to Gunicorn. Gunicorn listens on 127.0.0.1 rather than 0.0.0.0, so it isn't reachable publicly, and therefore the only way to access the site externally is through port 80.
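Put together, the usual arrangement is something like this; the project module name is a placeholder:
# Gunicorn stays on a loopback-only, unprivileged port...
gunicorn --bind 127.0.0.1:8000 myproject.wsgi
# ...while nginx, whose master process runs as root, listens publicly on
# port 80 and proxies to it (see the server block above).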