I am struggling with the connection between nginx and Django (each running in its own Docker container).
My strategy is this:
run uWSGI with the --http option on port 8001 (not a socket):
uwsgi --http :8001 --module myapp.wsgi --py-autoreload 1 --logto /tmp/mylog.log
Then I confirmed that wget http://127.0.0.1:8001 works.
But from nginx it can't connect somehow, and I get a (111: Connection refused) error.
However, from the nginx container, wget http://django:8001 works.
How can I connect the containers?
upstream django {
    ip_hash;
    server 127.0.0.1:8001;
}

server {
    listen 8000;
    server_name 127.0.0.1;
    charset utf-8;

    location /static {
        alias /static;
    }

    location / {
        proxy_pass http://127.0.0.1:8001/;
        include /etc/nginx/uwsgi_params;
    }
}

server_tokens off;
I also tried the config below, but with it my nginx container doesn't launch. The log shows:
2020/03/24 08:24:04 [emerg] 1#1: upstream "django" may not have port 8001 in /etc/nginx/conf.d/app_nginx.conf:16
upstream django {
    ip_hash;
    server django:8001;
}

server {
    listen 8000;
    server_name 127.0.0.1;
    charset utf-8;

    location /static {
        alias /static;
    }

    location / {
        proxy_pass http://django:8001/;
        include /etc/nginx/uwsgi_params;
    }
}

server_tokens off;
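(For context: that [emerg] appears because the config both defines an upstream group named "django" and then references it with an explicit port in proxy_pass; an explicitly defined upstream group may only be referenced by name. A minimal variant that keeps the upstream block, assuming the rest of the server block is unchanged, would be:)

upstream django {
    ip_hash;
    server django:8001;
}

    location / {
        proxy_pass http://django/;   # reference the group by name only, no port
        include /etc/nginx/uwsgi_params;
    }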
My docker-compose file is very simple:
nginx:
  image: nginx:1.13
  container_name: nginx
  ports:
    - "8000:8000"
  volumes:
    - ./nginx/conf:/etc/nginx/conf.d
    - ./nginx/uwsgi_params:/etc/nginx/uwsgi_params
    - ./nginx/static:/static
  depends_on:
    - django
Finally, thanks to the help here, my server works. The final conf is below:
I removed the upstream block and used the name 'django' instead of 127.0.0.1.
server {
    listen 8000;
    server_name 127.0.0.1;
    charset utf-8;

    location /static {
        alias /static;
    }

    location / {
        proxy_pass http://django:8001/;
        include /etc/nginx/uwsgi_params;
    }
}

server_tokens off;
If they run in different containers, 127.0.0.1 is not the correct IP; use the name of the other container, e.g.
proxy_pass http://django:8001;
so Docker's internal DNS can route things.
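For completeness, a minimal sketch of the compose side under that approach, assuming a version 2+ compose file so both services share the default network and resolve each other by service name (the django service's build details are placeholders):

version: "3"
services:
  django:
    build: ./django   # placeholder for wherever the Django image is built
    container_name: django
    command: uwsgi --http :8001 --module myapp.wsgi
    expose:
      - "8001"
  nginx:
    image: nginx:1.13
    container_name: nginx
    ports:
      - "8000:8000"
    volumes:
      - ./nginx/conf:/etc/nginx/conf.d
      - ./nginx/uwsgi_params:/etc/nginx/uwsgi_params
      - ./nginx/static:/static
    depends_on:
      - django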
Related
As the title says, I want to host two different Django projects on the same droplet (NGINX, Gunicorn, Ubuntu) under different subdomains. One is our main site, example.com, which is up and running and working perfectly. We want to host the staging site, staging.example.com, on the same droplet.
We have created new socket and service files for the staging site and activated and enabled them, but nginx still points to the files in the main domain's directory rather than the staging directory, so we get the error below even though these domains have been added to ALLOWED_HOSTS in the staging site's settings.py:
DisallowedHost at /
Invalid HTTP_HOST header: 'staging.example.com'. You may need to add 'staging.example.com' to ALLOWED_HOSTS
Here is our staging.gunicorn.service file:
[Unit]
Description=staging.gunicorn daemon
Requires=staging.gunicorn.socket
After=network.target
[Service]
User=admin
Group=www-data
WorkingDirectory=/home/admin/example1staging
ExecStart=/home/admin/example1staging/venv/bin/gunicorn --access-logfile - --workers 3 --bind unix:/run/staging.gunicorn.sock djangoproject.wsgi:application
[Install]
WantedBy=multi-user.target
Here is our staging.gunicorn.socket file:
[Unit]
Description=staging.gunicorn socket
[Socket]
ListenStream=/run/staging.gunicorn.sock
[Install]
WantedBy=sockets.target
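(For reference, activating the two units above would typically look like this; the unit names are taken from the files shown:)

sudo systemctl daemon-reload
sudo systemctl enable --now staging.gunicorn.socket
sudo systemctl status staging.gunicorn.socket staging.gunicorn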
Lastly here is our nginx config
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    return 302 https://$server_name$request_uri;
}

server {
    # SSL configuration
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    ssl_certificate /etc/ssl/cert.pem;
    ssl_certificate_key /etc/ssl/key.pem;
    ssl_client_certificate /etc/ssl/cloudflare.crt;
    ssl_verify_client on;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/admin/example1;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name staging.example.com www.staging.example.com;
    return 302 https://$server_name$request_uri;
}

server {
    # SSL configuration
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    ssl_certificate /etc/ssl/cert.pem;
    ssl_certificate_key /etc/ssl/key.pem;
    ssl_client_certificate /etc/ssl/cloudflare.crt;
    ssl_verify_client on;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/admin/example1staging;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/staging.gunicorn.sock;
    }
}
Some help here would be extremely welcome.
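(One observation on the config as posted: neither of the two 443 server blocks declares a server_name, so for HTTPS requests nginx falls back to the first block, which proxies to the main site's gunicorn.sock; a request for staging.example.com landing on the main project would produce exactly this DisallowedHost error. If that is the cause, the fix would be along these lines:)

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;
    ...
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name staging.example.com www.staging.example.com;
    ...
}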
I'm trying to set up a simple blogging site that I wrote using the Django framework. The website works except that it isn't serving static files. I imagine that's because nginx isn't running. However, when I configure nginx to listen on any port other than 80, I get the following error:
nginx: [emerg] bind() to 172.17.0.1:9000 failed (99: Cannot assign requested address)
When I run it on a port that is already being used by gunicorn I get the following error:
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
My nginx configuration file is as follows:
upstream django {
    server 127.0.0.1:8080;
}

server {
    listen 172.17.0.1:9000;
    server_name my.broken.blog;
    index index.html;

    location = /assets/favicon.ico { access_log off; log_not_found off; }

    location /assets {
        autoindex on;
        alias /var/www/html/mysite/assets;
    }

    location / {
        autoindex on;
        uwsgi_pass unix:///run/uwsgi/django/socket;
        include /var/www/html/mysite/mysite/uwsgi_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
But if I run nginx without starting gunicorn, it runs properly, though I get a 403 Forbidden error.
Following the suggested answer I no longer get any errors, but the site returns 403 Forbidden and doesn't serve the part of the website gunicorn is supposed to deliver.
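(As background: error 99, "Cannot assign requested address", means 172.17.0.1 is not assigned to any interface visible to the nginx process, so it cannot be bound; error 98 means another process, here gunicorn, already owns that port. Binding to a free port on all interfaces avoids both, e.g.:)

listen 9000;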
Configure Nginx and Gunicorn in the following way to make it work.
Use a Unix socket to communicate between nginx and gunicorn rather than running gunicorn on a port.
Create a unit file for gunicorn at the following location:
sudo nano /etc/systemd/system/gunicorn.service
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=root
Group=nginx
WorkingDirectory=/path-to-project-folder
ExecStart=/<path-to-env>/bin/gunicorn --workers 9 --bind unix:/path-to-sockfile/<blog>.sock app.wsgi:application
Restart=on-failure
[Install]
WantedBy=multi-user.target
Then enable and start the gunicorn service. It will generate a .sock file at the specified path:
sudo systemctl enable gunicorn
sudo systemctl start gunicorn
Note: choose a suitable number of workers for gunicorn; you can also specify log files as follows:
ExecStart=//bin/gunicorn --workers 9 --bind unix:/path-to-sockfile/.sock app.wsgi:application --access-logfile /var/log/gunicorn/access.log --error-logfile /var/log/error.log
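Before moving on to nginx, it is worth confirming that the socket was actually created (the path is whatever was used in ExecStart above):

sudo systemctl status gunicorn
ls -l /path-to-sockfile/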
Create a new configuration file specific to the project in /etc/nginx rather than editing the default nginx.conf:
nano /etc/nginx/blog.conf
and add the following lines (you can also put the file in /etc/nginx/default.d/):
server {
    listen 80;
    server_name 172.17.0.1;  # your IP

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /path-to-static-folder;
    }

    location /media/ {
        root /path-to-media-folder;
    }

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://unix:/path-to-sock-file/<blog-sock-file>.sock;
    }
}
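One detail about the static block above: with root (unlike alias), nginx appends the request URI, so files are looked up under /path-to-static-folder/static/. On the Django side that usually corresponds to settings along these lines (illustrative), followed by collectstatic:

STATIC_URL = '/static/'
STATIC_ROOT = '/path-to-static-folder/static'   # then run: python manage.py collectstatic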
Include /etc/nginx/blog.conf in nginx.conf:
...
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/blog.conf;   # the new conf file added

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /usr/share/nginx/html;
    ...
Run sudo nginx -t to check for errors in the nginx configuration.
Run sudo systemctl restart nginx
Refer to: https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-16-04
I'm running my Django app in docker-compose and I'm exposing port 8000.
I have Nginx installed on the VM without Docker and I want it to forward requests to my Django app. I tried using the host network in my docker-compose and creating an external network, but nothing worked and I always get 502 Bad Gateway.
This is my nginx.conf:
events {
    worker_connections 4096;  ## Default: 1024
}

http {
    include /etc/nginx/conf.d/*.conf;  # includes all files of type .conf

    server {
        listen 80;

        location /static/ {
            autoindex on;
            alias /var/lib/docker/volumes/app1_django-static;
        }

        location /media/ {
            autoindex on;
            alias /var/lib/docker/volumes/app1_django-media;
        }

        location / {
            proxy_pass http://localhost:8000;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_redirect off;
        }

        # For favicon
        location /favicon.ico {
            alias /code/assets/favicon.ico;
        }

        # Error pages
        error_page 404 /404.html;
        location = /404.html {
            root /code/templates/;
        }
    }
}
and this is my docker-compose file:
version: "3.2"
services:
web:
network_mode: host
image: nggyapp/newsapp:1.0.2
restart: always
ports:
- "8000:8000"
volumes:
- type: volume
source: django-static
target: /code/static
- type: volume
source: django-media
target: /code/media
environment:
- "DEBUG_MODE=False"
- "DB_HOST=.."
- "DB_PORT=5432"
- "DB_NAME=db"
- "DB_USERNAME=.."
- "DB_PASSWORD=.."
volumes:
django-static:
django-media:
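(A note on the compose file as posted: network_mode: host and a ports: mapping don't combine; with host networking the mapping is ignored and the app itself must listen on port 8000 for the host's nginx to reach localhost:8000. A sketch of the two self-consistent variants, assuming the container process listens on 0.0.0.0:8000:)

# Variant 1: default bridge network, publish the port
web:
  image: nggyapp/newsapp:1.0.2
  ports:
    - "8000:8000"

# Variant 2: host networking, no ports section needed
web:
  image: nggyapp/newsapp:1.0.2
  network_mode: host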
I am pretty new to the NGINX, Gunicorn, Django setup. I am using supervisor between nginx and gunicorn. Without nginx, the setup works well with supervisor and gunicorn and I can see the result through my server IP. But when I use nginx to serve the requests, I get the error "upstream prematurely closed connection while reading response header from upstream". Can anyone please help me with this?
The supervisor command I am using:
sudo /path/to/gunicorn/gunicorn -k gevent --workers 4 --bind unix:/tmp/gunicorn.sock --chdir /path/to/application wsgi:application --timeout 120
Below is the nginx.conf I am currently using, and it is working as expected, but I am not sure it is up to the mark. Please look into this. Thanks.
==============Update=============
upstream xxxx {
    server unix:/tmp/gunicorn.sock;
}

server {
    listen 80;
    listen [::]:80;
    server_name xxx.in www.xxx.in;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;

    ssl_certificate /etc/letsencrypt/live/xxx.in/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/xxx.in/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx';

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root path/to/project;
    }

    location / {
        include uwsgi_params;
        proxy_pass http://unix:/tmp/gunicorn.sock;
    }
}
Please check these steps:
First of all, be sure that gunicorn is creating the .sock file when run under supervisor. You can verify it with:
$ sudo supervisorctl status <name-of-supervisor-task-for-it>
(check that the service is RUNNING)
$ ls /tmp
(there should be a gunicorn.sock file there)
Also be aware of the user you assign in the supervisor config. In this case you don't need to put root/sudo before the command; just grant root privileges through the config file, like this:
[program:myprogram]
command=/path/to/gunicorn/gunicorn -k gevent --workers 4 --bind unix:/tmp/gunicorn.sock --chdir /path/to/application wsgi:application --timeout 120
<other commands>
user=root
And your nginx config should look like this:
upstream django {
    server unix:/tmp/gunicorn.sock;
}

server {
    listen 80;
    server_name <your_app_domain_here>;

    location / {
        include uwsgi_params;
        proxy_pass http://django/;
    }
}
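If the errors persist, the socket can also be exercised directly to take nginx out of the picture (curl 7.40+ supports Unix sockets):

curl --unix-socket /tmp/gunicorn.sock http://localhost/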
I have a Django app where users can upload files containing data they want to be displayed in the app. The app is containerised using docker. In production I am trying to configure nginx to make this work and as far as I can tell it is working to some extent.
As far as I can tell the file does actually get uploaded as I can see it in the container, and I can also download it from the app. The problem I am having is that once the form has been submitted it is supposed to redirect to another form, where the user can assign stuff to the data in the app (not really relevant to the question). However, I am getting a 500 error instead.
I have taken a look at the nginx error logs and I am seeing:
[info] 8#8: *11 client closed connection while waiting for request, client: 192.168.0.1, server: 0.0.0.0:443
and
[info] 8#8: *14 client timed out (110: Operation timed out) while waiting for request, client: 192.168.0.1, server: 0.0.0.0:443
when the operation is performed.
I also want the media files to be persisted so they are in a docker volume.
I suspect the first log message may be the culprit but is there a way to prevent this from happening, or is it just a poor connection on my end?
Here is my nginx conf:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;
    proxy_headers_hash_bucket_size 52;

    client_body_buffer_size 1M;
    client_max_body_size 10M;

    gzip on;

    upstream app {
        server django:5000;
    }

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name dali.vpt.co.uk;

        location / {
            return 301 https://$server_name$request_uri;
        }
    }

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name dali.vpt.co.uk;

        ssl_certificate /etc/nginx/ssl/cert.crt;
        ssl_certificate_key /etc/nginx/ssl/cert.key;

        location / {
            # checks for static file, if not found proxy to app
            try_files $uri @proxy_to_app;
        }

        # cookiecutter-django app
        location @proxy_to_app {
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header X-Url-Scheme $scheme;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://app;
        }

        location /media/ {
            autoindex on;
            alias /app/tdabc/media/;
        }
    }
}
and here is my docker-compose file:
version: '2'

volumes:
  production_postgres_data: {}
  production_postgres_backups: {}
  production_media: {}

services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    image: production_django:0.0.1
    depends_on:
      - postgres
      - redis
    volumes:
      - .:/app
      - production_media:/app/tdabc/media
    env_file:
      - ./.envs/.production/.django
      - ./.envs/.production/.postgres
    command: /start.sh

  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: production_postgres:0.0.1
    volumes:
      - production_postgres_data:/var/lib/postgresql/data
      - production_postgres_backups:/backups
    env_file:
      - ./.envs/.production/.postgres

  nginx:
    build:
      context: .
      dockerfile: ./compose/production/nginx/Dockerfile
    image: production_nginx:0.0.1
    depends_on:
      - django
    volumes:
      - production_media:/app/tdabc/media
    ports:
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"
Any help or insight into this problem would be much appreciated.
Thanks for your time.
Update
Another thing I should mention: when I run the app with my production settings but DEBUG set to True, it works perfectly; the problem only occurs when DEBUG is set to False.
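One debugging hint given that last detail: with DEBUG = False, Django hides the traceback behind a bare 500, so the real exception should show up in the Django container's logs (docker-compose logs django). If nothing useful appears there, a minimal logging setup along these lines (illustrative, for the production settings file) sends unhandled request errors to the console:

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        "django.request": {"handlers": ["console"], "level": "ERROR"},
    },
}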