Django + Nginx on subdomain: Bad Request (400)

I need to deploy the app to production on a subdomain. The DNS A record for app.mysite.com points to this machine's IP; the A record for mysite.com points to a different machine. Stack: Nginx, Django, Gunicorn.
Nginx works fine when accessed by IP, but returns 400 Bad Request on the subdomain.
I've tried adding these proxy_set_header values:
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
but it doesn't help.
nginx/sites-enabled/mysite:
(If I change server_name to the IP it works fine.)
server {
    listen 80;
    server_name app.mysite.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/user/mysite;
    }

    location / {
        proxy_set_header Host $host;
        include proxy_params;
        proxy_pass http://unix:/home/user/mysite.sock;
    }
}
settings.py
ALLOWED_HOSTS = ['<ip of machine>', '127.0.0.1', 'app.mysite.com', 'mysite.com']
I want app to work only at subdomain. How could I achieve it?
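To serve the app only at the subdomain, one common approach (a sketch, not part of the original post) is to keep the app.mysite.com server block as the only real site and add a catch-all default server that closes any request for other host names:
# catch-all: refuse requests whose Host does not match app.mysite.com
server {
    listen 80 default_server;
    server_name _;
    return 444;  # nginx-specific: close the connection without sending a response
}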
Possibly helpful: the last Nginx process logs:
Aug 10 21:23:59 my-machine systemd[1]: Starting A high performance web server and a reverse proxy server...
Aug 10 21:23:59 my-machine systemd[1]: nginx.service: Failed to parse PID from file /run/nginx.pid: Invalid argument
Aug 10 21:23:59 my-machine systemd[1]: Started A high performance web server and a reverse proxy server.
Aug 10 21:25:09 my-machine systemd[1]: Stopping A high performance web server and a reverse proxy server...
Aug 10 21:25:09 my-machine systemd[1]: Stopped A high performance web server and a reverse proxy server.

After hours of testing and configuration, the subdomain started working after running
sudo systemctl restart gunicorn
(the Gunicorn service unit is defined in /etc/systemd/system).
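If the unit file under /etc/systemd/system was edited, systemd usually needs a daemon-reload before the restart takes effect (a general systemd note, not from the original answer):
sudo systemctl daemon-reload
sudo systemctl restart gunicorn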

I stopped all gunicorn workers with:
pkill gunicorn
and then restarted Gunicorn, which in my particular case is:
PYENV_VERSION=3.5.2 gunicorn -c gunicorn_cfg.py testing_webpage.wsgi --timeout 300 --workers=9 --bind=unix:/opt/peaku_co/run/gunicorn.sock

I had the same error too... It turns out I had turned on Force HTTPS redirect at my DNS provider, yet my VPS was listening on port 80 over plain HTTP with no SSL certificate installed.
I fixed it by installing an SSL certificate on my VPS and restarting Gunicorn.
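For reference, a rough sketch of what the HTTPS server block might look like, assuming Let's Encrypt-style certificate paths and the same Unix socket as in the question (domain and paths are placeholders):
server {
    listen 443 ssl;
    server_name app.mysite.com;

    ssl_certificate     /etc/letsencrypt/live/app.mysite.com/fullchain.pem;  # placeholder path
    ssl_certificate_key /etc/letsencrypt/live/app.mysite.com/privkey.pem;    # placeholder path

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/user/mysite.sock;
    }
}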

Related

Django Channels Nginx production

I have a Django project and recently added Channels to use websockets. This all seems to work fine, but the problem I have is getting it production ready.
My setup is as follows:
Nginx web server
Gunicorn for Django
SSL enabled
Since I added Channels to the mix, I have spent the last day trying to get it to work.
All the tutorials say to run Daphne on some port and then show how to set up nginx for that.
But what about having Gunicorn serve Django?
So now I have Gunicorn running this Django app on port 8001.
If I run Daphne on another port, let's say 8002, how does it know it's part of this Django project? And what about the runworker processes?
Should Gunicorn, Daphne and the runworker processes all run together?
This question is actually addressed in the latest Django Channels docs:
It is good practice to use a common path prefix like /ws/ to
distinguish WebSocket connections from ordinary HTTP connections
because it will make deploying Channels to a production environment in
certain configurations easier.
In particular for large sites it will be possible to configure a
production-grade HTTP server like nginx to route requests based on
path to either (1) a production-grade WSGI server like Gunicorn+Django
for ordinary HTTP requests or (2) a production-grade ASGI server like
Daphne+Channels for WebSocket requests.
Note that for smaller sites you can use a simpler deployment strategy
where Daphne serves all requests - HTTP and WebSocket - rather than
having a separate WSGI server. In this deployment configuration no
common path prefix like /ws/ is necessary.
In practice, your NGINX configuration would then look something like (shortened to only include relevant bits):
upstream daphne_server {
    server unix:/var/www/html/env/run/daphne.sock fail_timeout=0;
}

upstream gunicorn_server {
    server unix:/var/www/html/env/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name _;

    location /ws/ {
        proxy_pass http://daphne_server;
    }

    location / {
        proxy_pass http://gunicorn_server;
    }
}
(Above it is assumed that you are binding the Gunicorn and Daphne servers to Unix socket files.)
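For example, the two servers could be started roughly like this (a sketch; my_project and the Channels 2-style asgi:application entry point are assumptions, and the socket paths match the upstreams above):
gunicorn my_project.wsgi:application --bind unix:/var/www/html/env/run/gunicorn.sock
daphne -u /var/www/html/env/run/daphne.sock my_project.asgi:application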
I have created an example of how to mix Django Channels and Django Rest Framework. I set up the nginx routing so that:
websockets connections are going to daphne server
HTTP connections (REST API) are going to gunicorn server
Here is my nginx configuration file:
upstream app {
    server wsgiserver:8000;
}

upstream ws_server {
    server asgiserver:9000;
}

server {
    listen 8000 default_server;
    listen [::]:8000;

    client_max_body_size 20M;

    location / {
        try_files $uri @proxy_to_app;
    }

    location /tasks {
        try_files $uri @proxy_to_ws;
    }

    location @proxy_to_ws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_pass http://ws_server;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }
}
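The wsgiserver and asgiserver names above look like container/service hostnames; inside those services the processes would be started with something like the following (a sketch, assuming a Channels 1.x project called my_project):
gunicorn my_project.wsgi:application --bind 0.0.0.0:8000
daphne -b 0.0.0.0 -p 9000 my_project.asgi:channel_layer
python manage.py runworker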
I recently answered a similar question; have a look there for an explanation of how Django Channels works.
Basically, you don't need Gunicorn anymore. You have Daphne, which is the interface server that accepts HTTP/WebSockets, and you have your workers that run Django views. Then obviously you have your channel backend that glues everything together.
To make it work you have to configure CHANNEL_LAYERS in settings.py and also run the interface server:
$ daphne my_project.asgi:channel_layer
and your worker:
$ python manage.py runworker
NB! If you chose Redis as the channel backend, pay attention to the file sizes you're serving. If you have large static files, make sure NGINX serves them; otherwise clients will experience cryptic errors that may occur due to Redis running out of memory.
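For the Channels 1.x setup described above, the CHANNEL_LAYERS setting might look roughly like this (a sketch, assuming the asgi_redis backend and a routing module at my_project.routing; the names are placeholders):
# settings.py (Channels 1.x style)
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("localhost", 6379)],  # Redis instance used as the channel backend
        },
        "ROUTING": "my_project.routing.channel_routing",
    },
}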

docker nginx connection refused while connecting to upstream

I use Shiny Server to host a web app on port 3838. When I use nginx installed on my server it works well, but when I stop nginx on the server and try to use the nginx Docker image instead, the site returns a '502 Bad Gateway' error and the nginx log shows:
2016/04/28 18:51:15 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, ...
I installed the nginx Docker image with this command:
sudo docker pull nginx
My docker command line is something like (indented for clarity):
sudo docker run --name docker-nginx -p 80:80 \
    -v ~/docker-nginx/default.conf:/etc/nginx/conf.d/default.conf \
    -v /usr/share/nginx/html:/usr/share/nginx/html nginx
I created a folder named 'docker-nginx' in my home dir, moved my nginx conf file into this folder, and then removed my original conf in the /etc/nginx dir just in case.
My nginx conf file looks like this:
server {
    listen 80 default_server;
    # listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name localhost;

    location / {
        proxy_pass http://127.0.0.1:3838/;
        proxy_redirect http://127.0.0.1:3838/ $scheme://$host/;
        auth_basic "Username and Password are required";
        auth_basic_user_file /etc/nginx/.htpasswd;

        # enhance the performance
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
You have to define the upstream explicitly. Currently your nginx cannot proxy to your web application.
http://nginx.org/en/docs/http/ngx_http_upstream_module.html
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;
    server backup1.example.com:8080 backup;
    server backup2.example.com:8080 backup;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
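Applied to the Shiny case above, the upstream has to point at an address the nginx container can actually reach; 127.0.0.1 inside the container refers to the container itself, not the host. A sketch, under the assumption of the default Docker bridge network on Linux, where the host is typically reachable at 172.17.0.1:
upstream shiny {
    server 172.17.0.1:3838;  # assumption: host address on the default Docker bridge
}

server {
    listen 80 default_server;
    location / {
        proxy_pass http://shiny;
    }
}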
My situation was running 3 containers: an nginx container and two containerized services. I was using the nginx container as a reverse proxy for my Go services.
The issue was that the nginx container was looking for the microservice ports inside its own container environment. I didn't realize that at the time, and I wasn't using a docker-compose.yml then. When you use a docker-compose.yml file you specify a network and that's that.
So when running the containers you should use --net=host.
Info on that: What does --net=host option in Docker command really do?
This worked for me; I hope it saves someone the pain :)
docker run --net=host nginx:someTag
docker run --net=host service1:someTag

Intermittent Bad Gateway nginx (with Django)

I'm using Vagrant and VirtualBox for my Django environment. The Django environment uses nginx. Everything works fine except that intermittently I'll see 502 Bad Gateway errors. When these errors happen, there is nothing in the nginx access.log or error.log. Here are my configurations.
Vagrantfile private network:
config.vm.network "private_network", ip: "192.168.33.10"
nginx.conf
server {
    listen 80 default_server;
    server_name _;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        proxy_set_header Host 192.168.33.10;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8000;
    }
}
I'm not sure how to debug or fix this issue. Any ideas?
You can try python manage.py runserver 192.168.33.10:8000, since Django binds to 127.0.0.1 by default and nginx might have problems with that.

Nginx responds with 502 error

While trying to deploy my app to Digital ocean I did everything according to this tutorial: How To Deploy a Local Django App to a VPS.
While Gunicorn works perfectly and http://95.85.34.87:8001/ opens my app, Nginx does not work: http://95.85.34.87 or http://95.85.34.87/static causes a 502 error.
The Nginx log says:
2014/04/19 02:43:52 [error] 896#0: *62 connect() failed (111: Connection refused) while connecting to upstream, client: 78.62.163.9, server: 95.85.34.87, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8001/", host: "95.85.34.87"
My nginx configuration file looks like this:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    server_name 95.85.34.87;
    access_log off;

    location /static/ {
        alias /opt/myenv/static/;
    }

    location / {
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
    }
}
In Django settings I have ALLOWED_HOSTS set to ['*'].
Nginx is listening on port 80:
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 894/nginx
tcp6 0 0 :::80 :::* LISTEN 894/nginx
I think the point is that Nginx does not pass the user on to Gunicorn for some reason...
EDIT: I changed the proxy_pass http://127.0.0.1:8001; line under location / to my server's IP address (instead of localhost) and everything worked. I am not sure whether that's a good decision or not.
I see the instructions tell you to use this to start Gunicorn:
$ gunicorn_django --bind yourdomainorip.com:8001
If you start it like this then Gunicorn will listen only on the interface that is bound to yourdomainorip.com. So it won't listen on the loopback interface and won't receive anything sent to 127.0.0.1. Rather than changing nginx's configuration like you mention in your edit, you should do:
$ gunicorn_django --bind localhost:8001
This would cause Gunicorn to listen on the loopback. This is preferable because if you bind Gunicorn to an external interface people can access it without going through nginx.
With this setup the interaction between nginx and your Django app is like this:
nginx is the entry point for all HTTP requests. It listens on 95.85.34.87:80.
When a request is made to a URL that should be forwarded to your application, nginx forwards it by connecting on localhost:8001 (same as 127.0.0.1:8001).
Your Django application is listening on localhost:8001 to receive forwards from nginx.
By the way, gunicorn_django is deprecated.
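A rough modern equivalent of the deprecated command, assuming the project module is called myproject (the name is a placeholder):
gunicorn myproject.wsgi:application --bind localhost:8001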
And another thing: don't set ALLOWED_HOSTS to serve all domains. If you do so you are opening yourself to cache poisoning. Set it only to the list of domains that your Django project is meant to serve. See the documentation for details.
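For example, limited to the hosts from this question (a sketch; the domain is a placeholder):
# settings.py
ALLOWED_HOSTS = ['95.85.34.87', 'example.com']  # list only what this project actually serves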

nginx can't listen on port 80

I'm trying to set up nginx with gunicorn, but I keep getting the "Welcome to nginx!" page. I am able to successfully listen on other ports (like 8080), but port 80 does not work at all.
server {
    listen 80;
    server_name host.ca www.host.ca;
    access_log /var/log/nginx/example2.log;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:8000;
    }
}
I'm running the server as root.
I can't seem to see anything running on port 80.
Diagnosing the Problem
Make sure to check your logs (likely /var/log/nginx or some variant).
Check to see what might be hogging port 80
netstat -nlp | grep 80
Sites-enabled, port hogging
Then, make sure you have the Django site enabled in sites-enabled. Delete any old symlinks first if you already created one:
rm /etc/nginx/sites-enabled/django
ln -s /etc/nginx/sites-available/django /etc/nginx/sites-enabled/django
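If the stock default site is still enabled, it usually wins the default_server match and keeps serving the "Welcome to nginx!" page, so removing its symlink is often part of this step (the path below is the Debian/Ubuntu default):
sudo rm /etc/nginx/sites-enabled/default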
Double check your /etc/nginx/nginx.conf to make sure it's loading sites-enabled and not loading some other default.
http {
    ...
    include /etc/nginx/sites-enabled/*;
}
After you do all this, shut down and restart the nginx service.
Either service nginx restart or service nginx stop && service nginx start