I have deployed Django with nginx following the DigitalOcean tutorials. Then, after installation, I blindly followed the "Example Setup" section of the Channels documentation.
My confusions are:
When setting up the configuration file for supervisor, it says to set the directory as
directory=/my/app/path
Should I use the path where manage.py is, or the path where settings.py is?
When I reload nginx after changing the nginx configuration file, I get an error saying:
host not found in upstream "channels-backend" in /etc/nginx/sites-enabled/mysite:18
nginx: configuration file /etc/nginx/nginx.conf test failed
I did replace "mysite" with the name of my website. Earlier I had another error saying
no live upstreams while connecting to upstream
but could not recreate the situation.
I am new to using Channels, so any additional information on upstreams would be helpful. Please let me know if I need to provide more information.
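For background, an nginx upstream block simply gives a name to one or more backend servers so that proxy_pass can refer to them by that name; "host not found in upstream" typically means nginx could not find an upstream (or server) matching the name used in proxy_pass, so it tried to resolve it as a DNS hostname instead. A minimal sketch, with hypothetical names and ports:

upstream app_backend {
    server 127.0.0.1:8000;           # could also be a unix:/path/to.sock entry
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;   # refers to the upstream by name
    }
}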
Edit:
Here is the nginx.conf file; sensitive values have been replaced with <...> placeholders.
upstream channels-backend {
    server localhost:8000;
}

server {
    listen 80;
    server_name <domain name> <ip address>;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root <root to static>;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_pass http://channels-backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
This passes nginx -t. The error message I see in the error.log is:
connect() failed (111: Connection refused) while connecting to upstream, client: <some ip>, server: <domain name>, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8000/", host: "<domain name>"
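A quick sanity check at this point, assuming ss and supervisorctl are available, is to confirm that something is actually listening on the port nginx proxies to:

# is anything bound to localhost:8000?
sudo ss -tlnp | grep ':8000'

# are the supervisor-managed daphne processes running?
sudo supervisorctl status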
The problem was actually in the supervisor configuration file.
[fcgi-program:asgi]
# TCP socket used by Nginx backend upstream
socket=tcp://localhost:8000
# Directory where your site's project files are located
directory=/my/app/path
# Each process needs to have a separate socket file, so we use process_num
# Make sure to update "mysite.asgi" to match your project name
command=daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application
# Number of processes to startup, roughly the number of CPUs you have
numprocs=4
# Give each process a unique name so they can be told apart
process_name=asgi%(process_num)d
# Automatically start and recover processes
autostart=true
autorestart=true
# Choose where you want your log to go
stdout_logfile=/your/log/asgi.log
redirect_stderr=true
To check if supervisor was running correctly, I ran
sudo supervisorctl status
This gave me a FATAL status. The problem was that I am using a virtual environment, and daphne is only installed inside it. Therefore the command should be something like:
command= /my/project/virtualenv/path/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application
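After editing the supervisor config, the usual supervisorctl steps to apply it and confirm the processes come up are roughly:

sudo supervisorctl reread    # re-read changed config files
sudo supervisorctl update    # apply the changes to the affected group
sudo supervisorctl status    # the asgi processes should now show RUNNING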
The nginx access logs show 502 errors.
The nginx error logs show: failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /health HTTP/1.1", upstream: "http://10.0.0.2:3000/health", host: "x.x.x.x"
The Drupal site runs the Health module, which uses the path /health. This container runs fine in ECS. The docker-entrypoint.sh script just executes a few drush commands.
Dockerfile:
FROM drupal:9-php7.4-apache
# code that installs soap, drush, composer etc ...
# Assign the drupal web to apache web folder
RUN rm /var/www/html
RUN ln -s /opt/website/web /var/www/html
COPY ./docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 3000
The nginx.conf in .platform/nginx:
events {
    worker_connections 1024;
}

http {
    server {
        listen 443 ssl;
        server_name localhost;

        ssl_certificate /etc/pki/tls/certs/server.crt;
        ssl_certificate_key /etc/pki/tls/certs/server.key;
        ssl_prefer_server_ciphers on;

        location / {
            proxy_pass http://localhost;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
I have a couple of .ebextensions files, but those are just for environment variables. The container will build, start, and connect to the database, but then it fails the health check and eventually gets terminated. Any help or pointers would be great.
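If it helps to narrow things down, one sanity check is to bypass nginx and hit the health path directly on the port the Dockerfile exposes, from inside the instance (port 3000 here is taken from the EXPOSE line above; adjust if the container maps it differently):

# talk to the container directly
curl -i http://localhost:3000/health

# then go through the nginx proxy for comparison
curl -ik https://localhost/health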
I know many others have asked the same question, but I haven't found any answers that are relevant or work for me. If you do know of a duplicate, feel free to direct me to it. I'm getting lost in the maze of nginx threads!
I am new to this and used the following tutorials to set up my Django site with gunicorn and nginx:
https://vahiwe.medium.com/deploy-django-and-flask-applications-in-the-cloud-using-nginx-gunicorn-and-systemd-centos-7-4b6aef3a8578
https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-centos-7
My website works if I access it via the IP address, but I get a Bad Request error when I try via the domain name.
In nginx.conf my server block looks like:
server {
    listen 80;
    server_name 123.456.78.910 mywebsite.com www.mywebsite.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /var/www/userf/website;
    }

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://unix:/var/www/userf/website/website.sock;
    }
}
My gunicorn.service file is:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=ssej91D
Group=nginx
WorkingDirectory=/var/www/ssej91D/pwebsite
ExecStart=/var/www/userf/website/env/bin/gunicorn --workers 3 --error-logfile - --bind unix:/var/www/userf/website/website.sock website.wsgi:application
EnvironmentFile=/var/www/userf/website/.env
[Install]
WantedBy=multi-user.target
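For completeness, after changing gunicorn.service the standard systemd steps to pick it up are:

sudo systemctl daemon-reload        # re-read the edited unit file
sudo systemctl restart gunicorn     # unit name taken from gunicorn.service above
sudo systemctl status gunicorn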
And my ALLOWED_HOSTS in django's settings.py:
ALLOWED_HOSTS = ["mywebsite.com", "www.mywebsite.com", "123.456.78.910"]
I have not added any SSL related settings to the Django settings file yet.
To test the domain name, I've tried making a test index.html file in another directory (let's call it testwebsite) and then changing nginx.conf to:
server {
    listen 80;
    server_name 123.456.78.910 mywebsite.com www.mywebsite.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        root /var/www/userf/testwebsite;
    }
}
This worked perfectly. My domain name showed index.html.
I've checked the logs, and they are always empty. I'll be totally honest: I just copied all of the proxy server settings from the tutorial, and I don't actually understand them. My suspicion is that I'm doing something wrong in setting up nginx as a proxy server.
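For reference, here is a commented copy of the location / block above (same directives, no behavioural change) spelling out what each proxy directive is for:

location / {
    # Pass the Host header the browser sent, so Django's ALLOWED_HOSTS
    # check sees the domain that was actually requested.
    proxy_set_header Host $http_host;
    # The real client IP (otherwise Django only sees nginx as the peer).
    proxy_set_header X-Real-IP $remote_addr;
    # Chain of client IPs, useful when several proxies are involved.
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # http or https, so Django can tell whether the original request was secure.
    proxy_set_header X-Forwarded-Proto $scheme;
    # Finally hand the request to gunicorn over its unix socket.
    proxy_pass http://unix:/var/www/userf/website/website.sock;
}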
Any help would be very appreciated.
Thanks
I was missing a host in the ALLOWED_HOSTS list.
The hostname of the droplet I'm using is different because I'm sharing it.
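In other words, the fix amounts to adding the missing host to the list; a sketch, with a placeholder for the droplet's hostname:

# settings.py
ALLOWED_HOSTS = [
    "mywebsite.com",
    "www.mywebsite.com",
    "123.456.78.910",
    "droplet-hostname.example.com",  # placeholder: whatever Host value was missing
]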
I have been on this for a month now without a working solution. I am deploying my django-channels application using nginx as a reverse proxy, supervisor to keep the servers running, gunicorn to serve HTTP requests, and daphne to handle the WebSocket requests. Everything else works fine in production, but I am stuck at the WebSocket part.
I am binding with unix sockets: gunicorn.sock and daphne.sock.
The Console returns:
WebSocket connection to 'ws://theminglemarket.com/ws/chat/undefined/' failed:
Error during WebSocket handshake: Unexpected response code: 500
My supervisor config:
directory=/home/path/to/src
command=/home/path/to/venv/bin/gunicorn_start
user=root
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/path/to/log/gunicorn/gunicorn-error.log
[program:serverinterface]
directory=/home/path/to/src
command=/home/path/to/venv/bin/daphne -u /var/run/daphne.sock chat.asgi:application
autostart=true
autorestart=true
stopasgroup=true
user=root
stdout_logfile = /path/to/log/gunicorn/daphne-error.log
The Redis server is up and running, I am sure of that (started with redis-server).
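A quick way to double-check that from the shell:

redis-cli ping    # expected reply: PONG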
My nginx configuration:
upstream channels-backend {
    # server 0.0.0.0:8001;
    server unix:/var/run/daphne.sock fail_timeout=0;
}

upstream app_server {
    server unix:/var/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 80;
    listen [::]:80;
    server_name theminglemarket.com www.theminglemarket.com;
    keepalive_timeout 5;
    client_max_body_size 4G;

    access_log /home/path/to/logs/nginx-access.log;
    error_log /home/path/to/logs/nginx-error.log;

    location /static/ {
        alias /home/path/to/src/static/;
        # try_files $uri $uri/ =404;
    }

    location / {
        try_files $uri @proxy_to_app;
    }

    location /ws/ {
        try_files $uri @proxy_to_ws;
    }

    location @proxy_to_ws {
        proxy_pass http://channels-backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location @proxy_to_app {
        proxy_pass http://app_server;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        # we don't want nginx trying to do something clever with
        # redirects; we set the Host: header above already.
        proxy_redirect off;
    }
}
Please ask for any other thing needed, I'll update as quickly as I can. Thank You.
It's a chat application. Do you think I should use only Daphne? I'm considering scalability, which is why I used gunicorn to serve HTTP requests. Hosting on Ubuntu Server.
Try putting socket=tcp://0.0.0.0:8001 or socket=tcp://localhost:8001 in the [program:serverinterface] part of your supervisord.conf. After that, read your supervisor_log.log file to find out how it behaves. I had similar problems with it too; I hope this helps. Use socket=tcp://localhost:8001 if it's inside a Docker container, and make sure that the nginx container is on the same Docker network as that container.
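For reference, in the Channels deployment docs (quoted elsewhere on this page) socket= sits under an [fcgi-program:...] section, which is what actually creates the socket and hands it to daphne via --fd 0; the nginx upstream then points at the same TCP address. A rough sketch using the port suggested above; the paths and chat.asgi module are the asker's, and the extra -u unix-socket endpoint is dropped for simplicity:

[fcgi-program:serverinterface]
socket=tcp://localhost:8001
directory=/home/path/to/src
command=/home/path/to/venv/bin/daphne --fd 0 --access-log - --proxy-headers chat.asgi:application
numprocs=2
process_name=daphne%(process_num)d
autostart=true
autorestart=true

# and in nginx, the upstream points at the same TCP address:
upstream channels-backend {
    server localhost:8001;
}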
I am trying to deploy my ASGI application.
I have followed the docs at https://channels.readthedocs.io/en/latest/deploying.html exactly.
But when I check my logs, I end up with:
CRITICAL Listen failure: Couldn't listen on 0.0.0.0:x: [Errno 98] Address already in use.
I have tried changing ports, but I always end up with the same error.
E.g. for port 8005: daphne -p 8005 -b 0.0.0.0 myproject.asgi:application
I am ending up with:
Configuring endpoint unix:/run/daphne/daphne1.sock
2020-07-16 08:22:10,324 INFO Starting server at fd:fileno=0, tcp:port=8005:interface=0.0.0.0, unix:/run/daphne/daphne0.sock
2020-07-16 08:22:10,324 INFO HTTP/2 support not enabled (install the http2 and tls Twisted extras)
2020-07-16 08:22:10,325 INFO Configuring endpoint fd:fileno=0
2020-07-16 08:22:10,345 INFO Listening on TCP address 127.0.0.1:8005
2020-07-16 08:22:10,345 INFO Configuring endpoint tcp:port=8005:interface=0.0.0.0
In my asgi.log file.
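A quick way to see what is already holding the port (assuming lsof is installed):

sudo lsof -i :8005        # or: sudo ss -tlnp | grep ':8005'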
My /etc/supervisor/conf.d/abc.conf file:
[fcgi-program:isbuddy]
# TCP socket used by Nginx backend upstream
socket=tcp://localhost:8005
# Directory where your site's project files are located
directory=/home/bhaskar/myprojectdir
# Each process needs to have a separate socket file, so we use process_num
# Make sure to update "mysite.asgi" to match your project name
command=/home/bhaskar/venv/bin/daphne -p 8005 -b 0.0.0.0 -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers myproject.asgi:application
# Number of processes to startup, roughly the number of CPUs you have
numprocs=2
# Give each process a unique name so they can be told apart
process_name=asgi%(process_num)d
# Automatically start and recover processes
autostart=true
autorestart=true
# Choose where you want your log to go
stdout_logfile=/home/bhaskar/asgi.log
redirect_stderr=true
My /etc/nginx/sites-enabled/abcd is:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_pass http://0.0.0.0:8005;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
I have tried every possibility I could find in the docs.
I want my website to run on my public IP.
I get a 502 error when I try to open the website. I followed the instructions from the official website (link).
I added a new file, lifeline.conf, at /etc/supervisor/conf.d/:
lifeline.conf
[fcgi-program:asgi]
# TCP socket used by Nginx backend upstream
socket=tcp://localhost:8000
# Directory where your site's project files are located
directory=/home/ubuntu/lifeline/lifeline-backend
# Each process needs to have a separate socket file, so we use process_num
# Make sure to update "mysite.asgi" to match your project name
command=/home/ubuntu/Env/lifeline/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-head$
# Number of processes to startup, roughly the number of CPUs you have
numprocs=4
# Give each process a unique name so they can be told apart
process_name=asgi%(process_num)d
# Automatically start and recover processes
autostart=true
autorestart=true
# Choose where you want your log to go
stdout_logfile=/home/ubuntu/asgi.log
redirect_stderr=true
Then I set up the nginx conf:
upstream channels-backend {
    server localhost:8000;
}

server {
    listen 80;
    server_name staging.mysite.com www.staging.mysite.com;
    client_max_body_size 30M;

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_pass http://channels-backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
I checked the asgi log file and it contains an error:
daphne: error: the following arguments are required: application
I'm guessing there is a mistake in lifeline.conf.
I am assuming you are not passing the ASGI application to daphne, because the configuration you pasted in the question has an incomplete line. You have to pass it correctly. Assuming you have a conf package with an asgi.py module inside it containing the ASGI application instance, you have to do
command=/home/ubuntu/Env/lifeline/bin/daphne -u /run/daphne/daphne%(process_num)d.sock conf.asgi:application
conf.asgi:application should be at the end.
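To close the loop: daphne's one required positional argument is the ASGI application path (module.path:application), which is why it reported "the following arguments are required: application" when the command line was cut off before reaching it. A minimal standalone invocation, just to illustrate the shape (address and port are arbitrary):

daphne -b 127.0.0.1 -p 8000 conf.asgi:application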