I am running a dockerized Django app deployed on EC2. Nginx also runs in a Docker container and is configured to use SSL certificates from Let's Encrypt.
Let's Encrypt certificates are only valid for 90 days, so I set up a cron job to renew them.
My question now is: will the nginx running in my Docker container automatically use the updated files? Or do I need to rebuild and restart the container for the changes to take effect? In the latter case, is it possible to tell nginx to use the renewed files so I don't have to rebuild the container? I'm asking because I'd like to minimize downtime for my application.
For clarity, here is my config. The important lines are the referenced SSL certificates:
server {
    listen 443 ssl;
    server_name mydomain;
    charset utf-8;

    ssl_stapling off;
    ssl_stapling_verify off;
    ssl_certificate /etc/letsencrypt/live/mydomain/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain/privkey.pem;

    location / {
        proxy_pass http://django:5000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Here is my compose file:
production-nginx-container:
  container_name: 'production-nginx-container'
  image: nginx:latest
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /home/ubuntu/nginx-conf/myconf.conf:/etc/nginx/conf.d/default.conf
    - /etc/letsencrypt/live/mydomain/fullchain.pem:/etc/letsencrypt/live/mydomain/fullchain.pem
    - /etc/letsencrypt/live/mydomain/privkey.pem:/etc/letsencrypt/live/mydomain/privkey.pem
  depends_on:
    - django
I can only see two options: either nginx keeps these files open the whole time my Docker container is running, or it doesn't.
If it keeps them open, I assume I need to restart the container, which I would rather avoid :).
I'd appreciate any input! Thanks in advance!
Nginx reads the certificates and configs once, at startup. To make it re-read them you can restart nginx (or the container), or send nginx a reload signal:
run nginx -s reload inside the container. Pair it with nginx -t beforehand to check that the config files' syntax is OK.
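As a sketch of how renewal and reload can be tied together, assuming certbot runs on the host via cron and the container is named production-nginx-container as in the compose file above, certbot's --deploy-hook only fires when a certificate was actually renewed:

```shell
# Host crontab entry (crontab -e): attempt renewal twice a day and
# reload nginx inside the container only when a cert was actually renewed.
0 3,15 * * * certbot renew --deploy-hook "docker exec production-nginx-container nginx -s reload"
```

A reload keeps the existing worker processes serving in-flight requests while new workers pick up the renewed certificate, so there is effectively no downtime.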
Related
I am having issues making requests to a backend Django container from a frontend app that is reverse proxied by NGINX.
I have a backend Django server which serves database information, carries out authentication, etc. It is containerized in a Docker container and served locally on http://127.0.0.1:8000/. My NGINX project.conf is as follows:
server {
    listen 80;
    server_name docker_flask_gunicorn_nginx;

    location / {
        proxy_pass http://my_app:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /static {
        rewrite ^/static(.*) /$1 break;
        root /static;
    }
}
There are a few different endpoints in the backend app, but it fails at the first hurdle which is trying to authenticate at /api/token/. When the frontend app makes a request to http://127.0.0.1:8000/api/token/ the following error is returned:
HTTPConnectionPool(host='127.0.0.1', port=8000): Max retries exceeded with url: /api/token/ (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc3208f6910>: Failed to establish a new connection: [Errno 111] Connection refused'))
For completeness, the docker-compose for the frontend/NGINX setup is:
version: '3'
services:
  my_app:
    container_name: my_app-frontend
    restart: always
    build: ./my_app
    ports:
      - "8080:8080"
    command: gunicorn -w 2 -b :8080 app:server
    env_file:
      - ./my_app/.env
  nginx:
    container_name: nginx
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    depends_on:
      - my_app
From what I can see on the Django container, the request never reaches the backend, and when I run the frontend app without NGINX it works as expected. So I guess it is an issue with the NGINX setup. I searched through existing questions and some looked similar, but I tried the proposed solutions and could not get them to work. For example, I tried changing the API URL to point to the Docker bridge IP, but that didn't seem to work either. Apologies if this has been answered before; any help is much appreciated!
Thanks!
The default IP address that gunicorn binds to is 127.0.0.1, which means it will only accept connections from inside the container.
Use
command: gunicorn -w 2 -b 0.0.0.0:8080 app:server
to make it accept connections from outside the container.
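The difference can be seen with a few lines of plain Python, independent of gunicorn: a socket bound to 127.0.0.1 is only reachable through the loopback interface of its own network namespace (here, the container), while 0.0.0.0 listens on all interfaces, including the one Docker attaches to the bridge network. A minimal sketch:

```python
import socket

def bound_address(host: str) -> str:
    """Bind a TCP socket to `host` on an ephemeral port and report its address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, 0))
    addr = s.getsockname()[0]
    s.close()
    return addr

# Loopback-only: other containers cannot connect to this listener.
print(bound_address("127.0.0.1"))  # 127.0.0.1
# All interfaces: reachable from the Docker network as well.
print(bound_address("0.0.0.0"))    # 0.0.0.0
```

The same rule applies to any server process in a container, not just gunicorn.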
I want to deploy a Django application with Docker Swarm.
I was following this guide, which uses neither Docker Swarm nor docker-compose; it creates two Django containers, one Nginx container, and a Certbot container for the SSL certificate.
The Nginx container reverse proxies and load balances across the two Django containers, which run on two servers, using their IPs:
upstream django {
    server APP_SERVER_1_IP;
    server APP_SERVER_2_IP;
}

server {
    listen 80 default_server;
    return 444;
}

server {
    listen 80;
    listen [::]:80;
    server_name your_domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name your_domain.com;

    # SSL
    ssl_certificate /etc/letsencrypt/live/your_domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your_domain.com/privkey.pem;
    ssl_session_cache shared:le_nginx_SSL:10m;
    ssl_session_timeout 1440m;
    ssl_session_tickets off;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers off;
    ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384";

    client_max_body_size 4G;
    keepalive_timeout 5;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://django;
    }

    location ^~ /.well-known/acme-challenge/ {
        root /var/www/html;
    }
}
I want to implement all of this same functionality with Docker Swarm, so that I can scale the containers with one command: docker service update --replicas 3 <servicename>.
The problem is that I cannot work out how to fit the Nginx container into this scenario. Docker Swarm provides its own load balancing, so I don't need Nginx for that, but Nginx is still needed for the SSL certificates. How do I run Nginx in Swarm so that it terminates SSL for all nodes and reverse proxies to the Django containers?
I have only used Nginx for reverse proxying before, so I cannot figure out how to write the Nginx conf and make the Nginx container work with the Django containers, SSL included, all inside a Docker Swarm.
####################
# docker-stack.yml #
####################
version: '3.7'
services:
  web:
    image: 127.0.0.1:5000/django-image
    deploy:
      replicas: 3
    command: gunicorn mydjangoapp.wsgi:application --bind 0.0.0.0:8000
    expose:
      - 8000
    depends_on:
      - nginx
  nginx:
    image: 127.0.0.1:5000/nginx-image
    deploy:
      replicas: 2
    ports:
      - 80:80
    depends_on:
      - web
The nginx.conf that I used with the compose file, pointing at one Django container:
upstream django {
    server web:8000;  # web is the name of the django service
}

server {
    # SSL STUFF
    listen 80;

    location / {
        proxy_pass http://django;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
So, between nginx and the world, you can either let Docker's ingress load balance to your nginx instances, or use an external load balancer. If you had a fixed set of nodes that an external load balancer was pointing to, then:
nginx:
  image: 127.0.0.1:5000/nginx-image
  ports:
    - 443:443
  networks:
    - proxy
  deploy:
    mode: global
    placement:
      constraints:
        - node.labels.myorg.lb==true

and label the corresponding nodes with myorg.lb=true.
Next, as to your service: Docker basically has two ways of advertising replicated services, vip and dnsrr. With vip mode (the default), Docker assigns a single virtual IP to the name "web", which is what the nginx replicas are given, and load balances the traffic behind it. You can switch a service to dnsrr mode, in which case DNS queries on web return a dynamically changing list of the current IPs of all the service replicas. Alternatively, you can use the explicit name tasks.<service> to get the same dnsrr entry.
Now, I don't know if nginx supports load balancing to dnsrr out of the box, but I do know that it caches DNS entries for a long time, so you will want to set nginx up with an explicit resolver (127.0.0.11, Docker's embedded DNS) and a short refresh interval.
web:
  image: 127.0.0.1:5000/django-image
  command: gunicorn mydjangoapp.wsgi:application --bind 0.0.0.0:8000
  networks:
    - proxy
  deploy:
    replicas: 3
    endpoint_mode: dnsrr
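With dnsrr, one way to keep nginx's view of the replicas fresh is to use Docker's embedded DNS (127.0.0.11) as an explicit resolver and to proxy_pass through a variable, which forces nginx to re-resolve the name at request time instead of caching the startup answer. A sketch, assuming the service is named web and listens on 8000 as above:

```nginx
server {
    listen 443 ssl;
    # ... ssl_certificate / ssl_certificate_key as usual ...

    location / {
        resolver 127.0.0.11 valid=10s;        # Docker's embedded DNS, short TTL
        set $upstream http://tasks.web:8000;  # dnsrr name covering all replicas
        proxy_pass $upstream;                 # variable => resolved per request
    }
}
```

Note that with a variable proxy_pass, nginx balances over whatever A records the resolver returns, without the upstream-block features such as weights or max_fails.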
I'm using docker-compose to deploy a Django app on a VM, with Nginx installed on the VM itself as the web server.
But I'm getting "502 Bad Gateway". I believe it's a network issue: I think Nginx can't reach the Docker container. When I use the same configuration in an Nginx container it works perfectly with the Django app, but I need to use the Nginx installed on the VM, not one in Docker.
This is my docker-compose file:
version: "3.2"
services:
  web:
    image: ngrorra/newsapp:1.0.2
    restart: always
    ports:
      - "8000:8000"
    volumes:
      - type: volume
        source: django-static
        target: /code/static
      - type: volume
        source: django-media
        target: /code/media
    environment:
      - "DEBUG_MODE=False"
      - "DB_HOST=…"
      - "DB_PORT=5432"
      - "DB_NAME=db_1"
      - "DB_USERNAME=username1111"
volumes:
  django-static:
  django-media:
And this is my nginx.conf file:
upstream web_app {
    server web:8000;
}

server {
    listen 80;

    location /static/ {
        autoindex on;
        alias /code/static/;
    }

    location /media/ {
        autoindex on;
        alias /code/media/;
    }

    location / {
        proxy_pass http://web_app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    # For favicon
    location /favicon.ico {
        alias /code/assets/favicon.ico;
    }

    # Error pages
    error_page 404 /404.html;
    location = /404.html {
        root /code/templates/;
    }
}
Does anyone know what is the issue?
Thank you!
As commented above, using "web" as the host name will not work here, because the Nginx on the host is not attached to the Compose network that resolves that name. You could try localhost or the Docker IP instead (you can get it using ifconfig on Ubuntu, for example).
For the network issue, you could create an external Docker network using docker network create and add it to the networks definition inside your compose file (https://docs.docker.com/compose/networking/#use-a-pre-existing-network). Another possibility is to use the host network.
When I run Docker applications with Nginx, I usually first create an external Docker network with a defined IP (using some Docker network range, usually 172.x.x.x), then add an Nginx container to my docker-compose.yaml, and the server block in my nginx.conf looks something like this:

upstream web_app {
    server 172.x.x.x:8000;
}
...

It works without problems. Hope this can help you.
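A sketch of the pre-existing-network approach, assuming a network named nginx_net on the 172.25.0.0/16 range (both names are arbitrary): create the network once on the host with docker network create, then reference it from the compose file so the container gets a known address the host Nginx can reach:

```yaml
# Created once on the host:
#   docker network create --subnet 172.25.0.0/16 nginx_net
services:
  web:
    networks:
      nginx_net:
        ipv4_address: 172.25.0.10   # fixed IP for the host nginx to proxy to
networks:
  nginx_net:
    external: true
```

The upstream in the host's nginx.conf would then point at 172.25.0.10:8000 instead of web:8000.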
I use Shiny Server to run a web app on port 3838. When I use the nginx installed on my server it works well. But when I stop that nginx and try to use the nginx Docker image instead, the site returns a '502 Bad Gateway' error and the nginx log shows:
2016/04/28 18:51:15 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, ...
I installed the nginx image with this command:
sudo docker pull nginx
My docker command line is something like (indented for clarity):
sudo docker run --name docker-nginx -p 80:80 \
    -v ~/docker-nginx/default.conf:/etc/nginx/conf.d/default.conf \
    -v /usr/share/nginx/html:/usr/share/nginx/html nginx
I created a folder named 'docker-nginx' in my home dir, moved my nginx conf file into it, and then removed my original conf in the /etc/nginx dir just in case.
My nginx conf file looks like this:
server {
    listen 80 default_server;
    # listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name localhost;

    location / {
        proxy_pass http://127.0.0.1:3838/;
        proxy_redirect http://127.0.0.1:3838/ $scheme://$host/;
        auth_basic "Username and Password are required";
        auth_basic_user_file /etc/nginx/.htpasswd;

        # enhance the performance
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
You have to define the upstream explicitly. Currently your nginx cannot proxy to your web application, because inside the nginx container 127.0.0.1 refers to the container itself, not to the host where Shiny Server is listening.
http://nginx.org/en/docs/http/ngx_http_upstream_module.html
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;
    server backup1.example.com:8080 backup;
    server backup2.example.com:8080 backup;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
My situation was running three containers: an nginx container and two containerized services. I was using the nginx container as a reverse proxy for my Go services.
The issue was that the nginx container was looking for the microservice ports in its own container environment. I didn't realize that at the time, and I wasn't using a docker-compose.yml then. When you use a docker-compose.yml file you specify a network, and that's that.
So when running the containers by hand you should use --net=host.
Info on that: What does --net=host option in Docker command really do?
This worked for me, I hope it saves someone the pain :):
docker run --net=host nginx:someTag
docker run --net=host service1:someTag
Goal: the set of Docker containers for a production Django website deployment.
My hang-up in this process is that usually nginx serves the static files directly... Based on my understanding of a good Docker architecture, you would have a container for your WSGI server (probably gunicorn) and a separate nginx container with an upstream server configuration pointing at your gunicorn container. The nginx container can load balance between multiple gunicorn containers.
But this implies that I have to install my Django app's static files in the nginx container, which seems like bad practice since its primary purpose is load balancing.
Is it better to have three containers: nginx, gunicorn, and a dedicated static server (possibly nginx or lighttpd) for the static files?
With reference to serving static files, your options depend on the functionality of your application. There's a very nifty tool called dj-static which helps you serve static files with very minimal added code.
The documentation is fairly simple; all you have to do is follow these steps.
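For reference, the core of those steps amounts to wrapping the WSGI application in wsgi.py. A sketch based on dj-static's README; the project module name myproject is a placeholder for your own:

```python
# wsgi.py
import os

from django.core.wsgi import get_wsgi_application
from dj_static import Cling

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

# Cling serves the collected static files from the same process as Django,
# so the nginx container does not need a copy of them.
application = Cling(get_wsgi_application())
```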
I found this answer from Michael Hampton:
"This only works if the processes are in the same host, VM or container, because it tries to make a connection to the same machine. When they are in different containers, it does not work.
You need to alter your nginx configuration so that it uses the internal IP address of the uwsgi container." Link from the post
And it's definitely something you have to keep in mind if Nginx will be in a different container; you also have to set up nginx.conf to point at your static files directory with an alias, preventing a security issue.
I hope this code works for everybody; it took me a couple of hours to figure out how to compose gunicorn, Docker, and Nginx:
# nginx.conf
upstream djangoA {
    server $DOCKER_CONTAINER_SERVICE:9000 max_fails=3 fail_timeout=0;
    # In my case it looks like: web:9000
}

server {
    include mime.types;

    # The port your site will be served on
    listen 80;

    # The domain name it will serve for
    server_name $YOUR_SERVER_NAME;  # substitute your machine's IP address or FQDN
    charset utf-8;

    # Max upload size
    client_max_body_size 512M;  # adjust to taste

    location /site_media {
        alias $DIRECTORY_STATIC_FILES/site_media;  # your Django project's media files have to be inside the container that has nginx; you can copy them in with volumes.
        expires 30d;
    }

    location / {
        try_files $uri @proxy_to_app;
    }

    # Finally, send all non-media requests to the Django server.
    location @proxy_to_app {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_pass http://djangoA;
    }
}
And for the docker-compose:
# production.yml
version: '2'
services:
  db:
    extends:
      file: base.yml
      service: db
  nginx:
    image: nginx:latest
    volumes:
      - ./nginx:/etc/nginx/conf.d/
      - ./$STATIC_FILE_ROOT/site_media:/$STATIC_FILE_ROOT/site_media
    ports:
      - "80:80"
    depends_on:
      - web
  web:
    extends:
      file: base.yml
      service: web
    build:
      args:
        - DJANGO_ENV=production
    command: bash -c "python manage.py collectstatic --noinput && chmod 775 -R project/site_media/static && gunicorn project.wsgi:application"
    volumes:
      - ./$DIRECTORY_APP:/$DIRECTORY_APP
    ports:
      - "9000:9000"
    depends_on:
      - db
volumes:
  db_data:
    external: true
If it's a Django app using Docker and/or Kubernetes, take a look at WhiteNoise (http://whitenoise.evans.io/en/stable/) as a solution here.
This is a straightforward recommendation, but I spent much time searching around before I found it referenced.
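The core of the WhiteNoise setup is two changes in settings.py. A sketch following the WhiteNoise docs; the middleware goes directly after SecurityMiddleware:

```python
# settings.py (fragment)
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",  # serve static files from the app container
    # ... the rest of your middleware ...
]

# Optional: compressed, cache-friendly hashed filenames.
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
```

With this in place gunicorn itself can serve /static/, and the nginx container needs no copy of the static files.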