User uploads with nginx and Docker - Django

I have a Django app where users can upload files containing data they want displayed in the app. The app is containerised with Docker, and in production I am trying to configure nginx to make this work; as far as I can tell it works to some extent.
The file does actually get uploaded - I can see it in the container and I can also download it from the app. The problem is that after the form is submitted it is supposed to redirect to another form, where the user can assign things to the uploaded data (not really relevant to the question). Instead, I am getting a 500 error.
I have taken a look at the nginx error logs and I am seeing:
[info] 8#8: *11 client closed connection while waiting for request, client: 192.168.0.1, server: 0.0.0.0:443
and
[info] 8#8: *14 client timed out (110: Operation timed out) while waiting for request, client: 192.168.0.1, server: 0.0.0.0:443
when the operation is performed.
I also want the media files to be persisted, so they are kept in a Docker volume.
I suspect the first log message may be the culprit, but is there a way to prevent it from happening, or is it just a poor connection on my end?
Here is my nginx conf:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;
    proxy_headers_hash_bucket_size 52;
    client_body_buffer_size 1M;
    client_max_body_size 10M;

    gzip on;

    upstream app {
        server django:5000;
    }

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name dali.vpt.co.uk;

        location / {
            return 301 https://$server_name$request_uri;
        }
    }

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name dali.vpt.co.uk;

        ssl_certificate /etc/nginx/ssl/cert.crt;
        ssl_certificate_key /etc/nginx/ssl/cert.key;

        location / {
            # checks for static file, if not found proxy to app
            try_files $uri @proxy_to_app;
        }

        # cookiecutter-django app
        location @proxy_to_app {
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header X-Url-Scheme $scheme;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://app;
        }

        location /media/ {
            autoindex on;
            alias /app/tdabc/media/;
        }
    }
}
and here is my docker-compose file:
version: '2'

volumes:
  production_postgres_data: {}
  production_postgres_backups: {}
  production_media: {}

services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    image: production_django:0.0.1
    depends_on:
      - postgres
      - redis
    volumes:
      - .:/app
      - production_media:/app/tdabc/media
    env_file:
      - ./.envs/.production/.django
      - ./.envs/.production/.postgres
    command: /start.sh
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: production_postgres:0.0.1
    volumes:
      - production_postgres_data:/var/lib/postgresql/data
      - production_postgres_backups:/backups
    env_file:
      - ./.envs/.production/.postgres
  nginx:
    build:
      context: .
      dockerfile: ./compose/production/nginx/Dockerfile
    image: production_nginx:0.0.1
    depends_on:
      - django
    volumes:
      - production_media:/app/tdabc/media
    ports:
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"
Any help or insight into this problem would be much appreciated.
Thanks for your time.
Update
Another thing I should mention: when I run the app with my production settings but DEBUG set to True, everything works perfectly. The problem only occurs when DEBUG is set to False.
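For context, here is a minimal sketch of the production-side Django settings this kind of setup usually depends on; the values are illustrative assumptions based on the hostnames and paths above, not taken from the actual project:

# settings/production.py -- illustrative sketch, not the project's real settings
DEBUG = False

# With DEBUG=False, requests with an unexpected Host header are rejected (400).
ALLOWED_HOSTS = ["dali.vpt.co.uk"]

# Media lives in the shared Docker volume that nginx aliases at /media/.
MEDIA_URL = "/media/"
MEDIA_ROOT = "/app/tdabc/media"

# Trust the X-Forwarded-Proto header set in the @proxy_to_app location.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")

Note that with DEBUG=False Django also stops rendering tracebacks in the browser, so the underlying exception behind a 500 normally only shows up in the gunicorn/Django logs, not in nginx's.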

Related

wsgi.url_scheme http in docker using nginx

I am using Apache on CentOS. On this server I have a Django project running with Docker, which has two containers (nginx and python).
In Apache I have a .conf that proxies to the nginx container, which is exposed on port 803. SSL is set up in the Apache conf as well:
ProxyPreserveHost On
RequestHeader set X-Forwarded-Proto "https"
RequestHeader set X-Scheme "https"
ProxyPass / http://127.0.0.1:803/
ProxyPassReverse / http://127.0.0.1:803/
Inside Docker I have an app.conf for nginx that looks like this:
upstream project {
    server project-python:5000;
}

server {
    listen 80;
    server_name _;

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    client_max_body_size 64M;

    location / {
        gzip_static on;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Scheme "https";
        proxy_set_header X-Forwarded-Proto "https";
        proxy_set_header X-Forwarded-Protocol "ssl";
        proxy_set_header X-Forwarded-Ssl=on;
        proxy_set_header Host $host;
        proxy_pass http://project;
        proxy_redirect off;
    }
}
In the Dockerfile, Python is exposed on port 5000, and in the docker-compose.prod.yml file for production it is started with gunicorn using this command:
gunicorn project.wsgi:application --preload --bind 0.0.0.0:5000
So I have two issues.
First, in Django, when I dump request.META I get a wsgi.url_scheme of http.
Second, I don't understand how nginx is communicating with gunicorn, because when I reduce app.conf to just the block below it still works. How does nginx know that Python is exposed on port 5000?
server {
    listen 80;
    server_name _;

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    client_max_body_size 64M;

    location / {
        proxy_pass http://project;
        proxy_redirect off;
    }
}
docker-compose.yml
version: '3'

services:
  project-python:
    build:
      context: .
      dockerfile: docker/python/Dockerfile
    container_name: project-python
    volumes:
      - .:/var/www:rw
      - .aws:/home/www/.aws
  project-nginx:
    build:
      context: docker/nginx
      dockerfile: Dockerfile
    container_name: project-nginx
    ports:
      - "127.0.0.1:803:80"
    depends_on:
      - project-python
docker-compose.prod.yml
version: '3'

services:
  project-python:
    restart: unless-stopped
    env_file:
      - ./.env.prod
    command: gunicorn project.wsgi:application --preload --bind 0.0.0.0:5000
    expose:
      - 5000
  project-nginx:
    restart: unless-stopped
    environment:
      APP_ENV: "production"
      APP_NAME: "project-nginx"
      APP_DEBUG: "False"
      SERVICE_NAME: "project-nginx"
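A note on the first issue above (wsgi.url_scheme reported as http): gunicorn only rewrites wsgi.url_scheme when the X-Forwarded-Proto header arrives from an address it trusts (its --forwarded-allow-ips setting, which by default only covers localhost), so a header set by a separate nginx container usually does not change it. On the Django side, the conventional way to honour the header is the setting sketched below; this is a hedged illustration, not something taken from the question's project. (As for the second issue, the port is not discovered at all: it comes from the upstream block, server project-python:5000, and Docker's internal DNS merely resolves the service name to the container's address.)

# settings.py -- illustrative sketch
# With this setting, request.is_secure() and request.build_absolute_uri()
# treat the request as https whenever the trusted proxy sends
# "X-Forwarded-Proto: https", even if the raw wsgi.url_scheme value in
# request.META stays "http".
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")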

https with nginx and docker compose not working

Please, I need some assistance; your contributions will be greatly appreciated.
I am trying to add SSL to my nginx and Docker Compose configuration.
Currently everything works fine over HTTP, but it won't work over HTTPS.
Here is my docker-compose.yml file
version: '3.8'

services:
  web_gunicorn:
    image: ACCT_ID.dkr.ecr.us-east-2.amazonaws.com/web_gunicorn:latest
    volumes:
      - static:/static
      - media:/media
    # env_file:
    #   - .env
    pull_policy: always
    restart: always
    ports:
      - "8000:8000"
    environment:
      - PYTHONUNBUFFERED=1
      - PYTHONDONTWRITEBYTECODE=1
  nginx:
    image: ACCT_ID.dkr.ecr.us-east-2.amazonaws.com/nginx:latest
    pull_policy: always
    restart: always
    volumes:
      - static:/static
      - media:/media
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - web_gunicorn
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
    depends_on:
      - nginx
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"

volumes:
  static:
  media:
Here is my nginx.conf configuration that works (http)
upstream web {
    server web_gunicorn:8000;
}

server {
    listen 80;
    server_name domain.com;

    location / {
        resolver 127.0.0.11;
        proxy_pass http://web;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /static/;
    }

    location /media/ {
        alias /media/;
    }
}
Here is my nginx.conf configuration that does not work (http and https)
upstream web {
    server web_gunicorn:8000;
}

server {
    location / {
        resolver 127.0.0.11;
        proxy_pass http://web;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /static/;
    }

    location /media/ {
        alias /media/;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
server {
    if ($host = domain.com) {
        return 301 https://$host$request_uri;
    }

    listen 80;
    server_name domain.com;
    return 404;
}
Below are the nginx logs from docker-compose logs nginx:
nginx_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx_1 | 10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx_1 | 10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
One more thing: on my server I can see all the SSL files generated by certbot, stored in a folder called certbot.
Finally found the problem. All my configuration was actually okay -- the issue was that port 443 was not open on my server.
I had only opened it in the outbound rule; I didn't realise I had to open it in the inbound rule too.
My application was running on an EC2 server on AWS.
I used this tool https://www.yougetsignal.com/tools/open-ports/ to check whether the port was open or closed.
The closed port also caused my requests to the server to time out.
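If you prefer to check this from a terminal rather than a website, here is a minimal sketch in Python (the host name is a placeholder, like in the config above):

# port_check.py -- quick TCP reachability check, roughly what the online tool does
import socket

def port_is_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port in (80, 443):
        state = "open" if port_is_open("domain.com", port) else "closed/filtered"
        print(port, state)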

Can't connect to Website - Connection refused - Nginx - SSL

I'm working with Docker, Nginx and Django. I would like to secure my application with SSL, but it won't work.
I got a valid certificate using certbot.
This is my nginx.conf file:
upstream app {
    server app:80;
}

server {
    listen 80;
    listen [::]:80;

    server_name mydomain.de;
    return 301 https://$server_name$request_uri;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/certbot;
    }
}
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name mydomain.de;

    ssl_certificate /etc/nginx/ssl/live/mydomain.de/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/mydomain.de/privkey.pem;

    location / {
        proxy_pass https://app;
        # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # proxy_set_header Host $host;
        # proxy_redirect off;
    }

    location /staticfiles/ {
        alias /app/staticfiles/;
        add_header Access-Control-Allow-Origin *;

        location ~ /.well-known/acme-challenge {
            allow all;
            root /var/www/certbot;
        }
    }
}
That's my docker-compose file:
version: '3.4'

services:
  app:
    image: django
    build:
      context: ./app
      dockerfile: Dockerfile
    env_file:
      - ./.env
    volumes:
      - ./app/:/app/
      - ./app/staticfiles/:/app/staticfiles
    command: gunicorn --bind 0.0.0.0:8000 --chdir /app/ Webserver.wsgi
  nginx:
    build: ./nginx
    ports:
      - 80:80
      - 433:433
    depends_on:
      - app
    volumes:
      - ./app/staticfiles/:/app/staticfiles
      - ./certbot/conf:/etc/nginx/ssl
      - ./certbot/data:/var/www/certbot
  db:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      POSTGRES_DB_PORT: "5432"
      POSTGRES_DB_HOST: "myhost"
      POSTGRES_PASSWORD: "mypw"
      POSTGRES_USER: myname
      POSTGRES_DB: dev_db

volumes:
  postgres_data:
If I try to access my website, I just see the browser message "Connection refused".
I have renamed sensitive information like the domain name and passwords.
Below I'm providing a working certbot nginx configuration example:
server {
    # show half the users an optimized site, half the regular site
    listen 80;

    gzip on;
    gzip_http_version 1.0;
    gzip_min_length 1100;
    gzip_buffers 4 32k;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 9;
    gzip_disable "MSIE [1-6]\.";
    gzip_types text/plain text/xml text/css
               text/comma-separated-values
               text/javascript
               application/x-javascript
               application/atom+xml;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # side note: only use TLS since SSLv2 and SSLv3 have had recent vulnerabilities
    access_log /var/www/vhosts/mydomain.de/logs/access_log;
    error_log /var/www/vhosts/mydomain.de/logs/error_log;

    server_name 3dact.com www.mydomain.de;

    location ~* ^.+\.(xml|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|swf)$ {
        access_log off;
        expires 30d;
        break;
    }

    charset utf-8;
    root /var/www/vhosts/mydomain.de/public/dist;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ /index.html;
    }

    # what to serve if upstream is not available or crashes
    error_page 500 502 503 504 /media/50x.html;

    location ~* ^.+\.(xml|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|swf)$ {
        root /var/www/vhosts/mydomain.de/public/dist;
        access_log off;
        expires 30d;
        add_header Pragma public;
        add_header Cache-Control "public";
        break;
    }

    location /dist {
        alias /var/www/vhosts/mydomain.de/public/dist;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/nginx/ssl/live/mydomain.de/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/nginx/ssl/live/mydomain.de/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = www.mydomain.de) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = mydomain.de) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name mydomain.de www.mydomain.de;
    return 404; # managed by Certbot
}
The first server block provides the actual locations and certbot configuration; the second is used by certbot for domain redirections (www.). Provided you map the volumes properly in docker-compose.yml, the certificate paths will stay consistent when you connect. Also, make sure ports 80 and 443 are properly exposed outside the container.
In your docker-compose.yml:
nginx:
  build: ./nginx
  ports:
    - 80:80
    - 433:433
  depends_on:
    - app
  volumes:
    - ./app/staticfiles/:/app/staticfiles
    - ./certbot/conf:/etc/nginx/ssl   # Make sure it maps into /etc/nginx/ssl/live/mydomain.de
    - ./certbot/data:/var/www/certbot
    - ./letsencrypt:/etc/letsencrypt  # This is where options-ssl-nginx.conf and ssl-dhparams.pem are located
If you don't have a local ./letsencrypt dir, or the files are located somewhere else, create the directories, copy the files there and adjust the mapping accordingly.
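As a quick way to verify that nginx is actually serving the certificate from the mapped volume, here is a small hedged sketch in Python (the domain is the placeholder from the question):

# cert_check.py -- fetch and print the certificate the server presents on 443
import socket
import ssl

def fetch_cert(hostname, port=443):
    """Open a TLS connection and return the peer certificate as a dict."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

if __name__ == "__main__":
    cert = fetch_cert("mydomain.de")
    print("subject:", cert.get("subject"))
    print("expires:", cert.get("notAfter"))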

How to properly configure certbot in docker?

Please help me with this problem, I have been trying to solve it for two days!
Please just tell me what I am doing wrong and what I should change to make it work.
ERROR: for certbot Cannot start service certbot: network 4d3b22b1f02355c68a900a7dfd80b8c5bb64508e7e12d11dadae11be11ed83dd not found
My docker-compose file
version: '3'

services:
  nginx:
    restart: always
    build:
      context: ./
      dockerfile: ./nginx/Dockerfile
    depends_on:
      - server
    ports:
      - 80:80
    volumes:
      - ./server/media:/nginx/media
      - ./conf.d:/nginx/conf.d
      - ./dhparam:/nginx/dhparam
      - ./certbot/conf:/nginx/ssl
      - ./certbot/data:/usr/share/nginx/html/letsencrypt
  server:
    build:
      context: ./
      dockerfile: ./server/Dockerfile
    command: gunicorn config.wsgi -c ./config/gunicorn.py
    volumes:
      - ./server/media:/server/media
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      DEBUG: 'False'
      DATABASE_URL: 'postgres://postgres:@db:5432/postgres'
      BROKER_URL: 'amqp://user:password@rabbitmq:5672/my_vhost'
  db:
    image: postgres:11.2
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
  certbot:
    image: certbot/certbot:latest
    command: certonly --webroot --webroot-path=/usr/share/nginx/html/letsencrypt --email artasdeco.ru@gmail.com --agree-tos --no-eff-email -d englishgame.ru
    volumes:
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/logs:/var/log/letsencrypt
      - ./certbot/data:/usr/share/nginx/html/letsencrypt
My Dockerfile
FROM python:3.7-slim AS server
RUN mkdir /server
WORKDIR /server
COPY ./server/requirements.txt /server/
RUN pip install -r requirements.txt
COPY ./server /server
RUN python ./manage.py collectstatic --noinput
#########################################
FROM nginx:1.13
RUN rm -v /etc/nginx/nginx.conf
COPY ./nginx/nginx.conf /etc/nginx/
RUN mkdir /nginx
COPY --from=server /server/staticfiles /nginx/static
nginx.conf file
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 443 ssl http2;
        server_name englishgame.ru;

        ssl on;
        server_tokens off;
        ssl_certificate /etc/nginx/ssl/live/englishgame.ru/fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/live/englishgame.ru/fullchain.pem;
        ssl_dhparam /etc/nginx/dhparam/dhparam-2048.pem;

        ssl_buffer_size 8k;
        ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;

        location / {
            return 301 https://englishgame.ru$request_uri;
        }
    }

    server {
        listen 80;
        server_name englishgame.ru;

        location ~ /.well-known/acme-challenge {
            allow all;
            root /usr/share/nginx/html/letsencrypt;
        }

        location /static {
            alias /nginx/static/;
            expires max;
        }

        location /media {
            alias /nginx/media/;
            expires 10d;
        }

        location /robots.txt {
            alias /nginx/static/robots.txt;
        }

        location /sitemap.xml {
            alias /nginx/static/sitemap.xml;
        }

        location / {
            proxy_pass http://server:8000;
            proxy_redirect off;
            proxy_read_timeout 60;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
Thank you for your help!
Alright, so based on the error ERROR: for certbot Cannot start service certbot: network 4d3b22b1f02355c68a900a7dfd80b8c5bb64508e7e12d11dadae11be11ed83dd not found, the issue is not related to any of the other services defined in your compose file, so those and your Dockerfile and nginx configuration should be irrelevant to the problem.
Now, to the problem of why the certbot service cannot be created: usually this kind of error happens when a network that was configured for a service has been removed manually. In this case, however, no service even refers to a network, which is why only the hash is printed rather than a network name.
Googling the error brings up a similar problem from Let's Encrypt, https://github.com/BirgerK/docker-apache-letsencrypt/issues/8, which points to an actual Docker Compose issue: https://github.com/docker/compose/issues/5745.
The solution there is to run Docker Compose with the --force-recreate option.
So, the problem should be fixed by running docker compose up -d --force-recreate.

Nginx can not proxy pass to Django docker container

I'm running my Django app in docker-compose and exposing port 8000.
I have Nginx installed on the VM without Docker, and I want it to forward requests to my Django app. I tried using host networking in my docker-compose and creating an external network, but nothing worked and I always get 502 Bad Gateway.
This is my nginx.conf:
events {
    worker_connections 4096; ## Default: 1024
}

http {
    include /etc/nginx/conf.d/*.conf; # includes all files of file type .conf

    server {
        listen 80;

        location /static/ {
            autoindex on;
            alias /var/lib/docker/volumes/app1_django-static;
        }

        location /media/ {
            autoindex on;
            alias /var/lib/docker/volumes/app1_django-media;
        }

        location / {
            proxy_pass http://localhost:8000;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_redirect off;
        }

        # For favicon
        location /favicon.ico {
            alias /code/assets/favicon.ico;
        }

        # Error pages
        error_page 404 /404.html;
        location = /404.html {
            root /code/templates/;
        }
    }
}
and this is my docker-compose file:
version: "3.2"
services:
web:
network_mode: host
image: nggyapp/newsapp:1.0.2
restart: always
ports:
- "8000:8000"
volumes:
- type: volume
source: django-static
target: /code/static
- type: volume
source: django-media
target: /code/media
environment:
- "DEBUG_MODE=False"
- "DB_HOST=.."
- "DB_PORT=5432"
- "DB_NAME=db"
- "DB_USERNAME=.."
- "DB_PASSWORD=.."
volumes:
django-static:
django-media:
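One way to narrow down the 502 described above is to confirm, from the VM itself, that the Django container actually answers where nginx proxies to. A hedged sketch, assuming the app really is reachable on the host's port 8000 as the compose file intends:

# upstream_check.py -- run on the VM that hosts nginx
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

try:
    with urlopen("http://127.0.0.1:8000/", timeout=5) as resp:
        print("upstream reachable, status:", resp.status)
except HTTPError as exc:
    # The app answered but returned an error status (still rules out the 502 path).
    print("upstream reachable, error status:", exc.code)
except URLError as exc:
    # Nothing listening on 127.0.0.1:8000 -- nginx sees this as a bad gateway.
    print("upstream not reachable:", exc.reason)

If this fails, the issue is the container not being reachable on the host's port 8000 (note that network_mode: host and a ports: mapping do not combine; with host networking the ports section is ignored). If it succeeds, the issue is on the nginx side.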