Cannot deploy django-channels in production

I am trying to deploy django-channels in production using Gunicorn, Nginx, Postgres and Supervisor. I have been able to serve HTTP requests properly, but I cannot get the websocket configuration to work.
Here is my nginx configuration:
upstream app_server {server unix:/home/datasleek/tracker/run/gunicorn.sock fail_timeout=0;}
upstream websocket {server ip-address:80;}
server
{
listen 80;
server_name ip-address;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ { alias /home/datasleek/tracker/staticfiles/; }
location /media/ { alias /home/datasleek/tracker/media/; }
client_max_body_size 4G;
access_log /home/datasleek/tracker/logs/nginx-access.log;
error_log /home/datasleek/tracker/logs/nginx-error.log;
location /ws/ {
proxy_pass http://websocket;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location / {
proxy_pass http://app_server;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
}
}
This is the supervisor configuration:
[program:tracker]
command=/home/datasleek/trackervenv/bin/gunicorn_start
user=datasleek
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/home/datasleek/tracker/logs/gunicorn.log
[program:serverinterface]
directory=/home/datasleek/tracker/
command= /home/datasleek/trackervenv/bin/daphne -b 0.0.0.0 -p 80 tracker.asgi:channel_layer
autostart=true
autorestart=true
stopasgroup=true
user=datasleek
stdout_logfile = /home/datasleek/tracker/logs/daphne.log
redirect_stderr=true
#[program:tracker_asgi_daphne]
#directory=/home/datasleek/tracker/
#command=/home/datasleek/trackervenv/bin/daphne -u /home/datasleek/tracker/daphne.sock --root-path= home/datasleek/tracker tracker.asgi:channel_layer
#stdout_logfile = /home/datasleek/tracker/logs/daphne.log
[program:tracker_asgi_workers]
command=/home/datasleek/trackervenv/bin/python /home/datasleek/tracker/manage.py runworker
stdout_logfile = home/datasleek/tracker/logs/worker.log
process_name=asgi_worker%(process_num)s
numprocs=3
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8 ; Set UTF-8 as default encoding
autostart=true
autorestart=true
redirect_stderr=True
stopasgroup=true
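For reference, the daphne target above (tracker.asgi:channel_layer) implies a Channels 1.x style asgi.py that exposes a channel_layer object; a minimal sketch, assuming the settings module is tracker.settings:
# tracker/asgi.py -- minimal Channels 1.x sketch (settings module name assumed)
import os
from channels.asgi import get_channel_layer

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "tracker.settings")
channel_layer = get_channel_layer()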
These are some of the websocket paths that I am trying to connect to:
ws://ip-address/agent-presence/
ws://ip-address/stream/
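For these paths to be matched, Channels 1.x also needs corresponding entries in the routing module referenced by CHANNEL_LAYERS; a rough sketch (the consumer functions and their module are hypothetical):
# tracker/routing.py -- Channels 1.x sketch; consumer functions are hypothetical
from channels.routing import route
from tracker.consumers import ws_connect, ws_disconnect  # hypothetical module

channel_routing = [
    route("websocket.connect", ws_connect, path=r"^/agent-presence/$"),
    route("websocket.disconnect", ws_disconnect, path=r"^/agent-presence/$"),
    route("websocket.connect", ws_connect, path=r"^/stream/$"),
    route("websocket.disconnect", ws_disconnect, path=r"^/stream/$"),
]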
And I am getting the following error:
websocketbridge.js:118 WebSocket connection to 'ws://ip-address/agent-presence/' failed: Error during WebSocket handshake: Unexpected response code: 404
P.S. I know that questions about this error have been asked multiple times, but I really cannot get past this problem. I have been trying since last week and could not solve it despite applying different methods from Google and YouTube.

Related

Why does my websocket keep disconnecting in Django Channels App?

I have been on this for a month now without a working solution. Everything works fine in development, but I have been trying to deploy my django-channels application using nginx as a reverse proxy, supervisor to keep the servers running, and gunicorn to serve HTTP requests, and I am stuck at the websocket part, which daphne is supposed to handle.
I am binding with unix sockets: gunicorn.sock and daphne.sock.
The Console returns:
WebSocket connection to 'ws://theminglemarket.com/ws/chat/undefined/' failed:
Error during WebSocket handshake: Unexpected response code: 500
My supervisor config:
directory=/home/path/to/src
command=/home/path/to/venv/bin/gunicorn_start
user=root
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/path/to/log/gunicorn/gunicorn-error.log
[program:serverinterface]
directory=/home/path/to/src
command=/home/path/to/venv/bin/daphne -u /var/run/daphne.sock chat.asgi:application
autostart=true
autorestart=true
stopasgroup=true
user=root
stdout_logfile = /path/to/log/gunicorn/daphne-error.log
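For reference, the chat.asgi:application target in that daphne command implies a Channels 2 style asgi.py along these lines (a minimal sketch, assuming the settings module is chat.settings):
# chat/asgi.py -- minimal Channels 2 sketch (settings module name assumed)
import os
import django
from channels.routing import get_default_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "chat.settings")
django.setup()
application = get_default_application()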
The Redis server is up and running, I am sure of that (checked with redis-server).
My nginx configuration:
upstream channels-backend {
# server 0.0.0.0:8001;
server unix:/var/run/daphne.sock fail_timeout=0;
}
upstream app_server {
server unix:/var/run/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
listen [::]:80;
server_name theminglemarket.com www.theminglemarket.com;
keepalive_timeout 5;
client_max_body_size 4G;
access_log /home/path/to/logs/nginx-access.log;
error_log /home/path/to/logs/nginx-error.log;
location /static/ {
alias /home/path/to/src/static/;
# try_files $uri $uri/ =404;
}
location / {
try_files $uri @proxy_to_app;
}
location /ws/ {
try_files $uri @proxy_to_ws;
}
location @proxy_to_ws {
proxy_pass http://channels-backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
location @proxy_to_app {
proxy_pass http://app_server;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
}
}
Please ask for anything else you need; I'll update as quickly as I can. Thank you.
It's a chat application. Do you think I should use only Daphne? I'm considering scalability, which is why I used gunicorn to serve HTTP requests. Hosting on an Ubuntu server.
Try putting socket=tcp://0.0.0.0:8001 or socket=tcp://localhost:8001 in the [program:serverinterface] part of your supervisord.conf. After that, read your supervisor_log.log file to find out how it behaves. I had similar problems with it too; I hope this helps. Use socket=tcp://localhost:8001 if it's inside a docker container, and make sure that the nginx container is on the same docker network as that container.
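A sketch of how that suggestion could look, using the paths from the question (note that socket= is an option of supervisor's [fcgi-program:x] sections, which is the form the Channels deployment docs use):
[fcgi-program:serverinterface]
directory=/home/path/to/src
socket=tcp://localhost:8001
; --fd 0 makes daphne serve on the socket supervisor opened above
command=/home/path/to/venv/bin/daphne --fd 0 --access-log - --proxy-headers chat.asgi:application
numprocs=2
process_name=asgi%(process_num)d
user=root
autostart=true
autorestart=true
stdout_logfile=/path/to/log/gunicorn/daphne-error.log
redirect_stderr=true
With this shape, the channels-backend upstream in nginx would point at server localhost:8001 instead of the unix socket.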

django-channels nginx settings

My django app uses django-channels.
I was able to configure django to run using gunicorn and nginx.
The app runs if I use python manage.py runserver, and redis-server sends notifications etc., but I am unable to configure it with nginx.
server {
listen 80;
server_name IP;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/amir/clientcode;
}
location / {
include proxy_params;
proxy_pass http://unix:/home/amir/clientcode/adminpanel.sock;
}
}
However, when I try to configure it for django-channels, it gives me a 502 status:
upstream channels-backend {
server localhost:8000;
}
server {
listen 80;
server_name IP;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/amir/clientcode;
}
location / {
try_files $uri @proxy_to_app;
include proxy_params;
proxy_pass http://unix:/home/amir/clientcode/adminpanel.sock;
}
location @proxy_to_app {
proxy_pass http://channels-backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
My asgi.py file
import os
import django
from channels.routing import get_default_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "adminpanel.settings")
django.setup()
application = get_default_application()
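get_default_application() resolves whatever the ASGI_APPLICATION setting points to, so this file assumes a routing module roughly like the following (a sketch; the consumer, app and URL names are hypothetical):
# adminpanel/routing.py -- Channels 2 sketch; consumer and URL are hypothetical
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from django.urls import path

from notifications.consumers import NotificationConsumer  # hypothetical app/consumer

application = ProtocolTypeRouter({
    "websocket": AuthMiddlewareStack(
        URLRouter([
            path("ws/notifications/", NotificationConsumer),
        ])
    ),
})
# settings.py would then contain: ASGI_APPLICATION = "adminpanel.routing.application"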
First of all, install Daphne in your app:
Here I use daphne==1.3.0
To start Daphne server, I use this command:
export DJANGO_SETTINGS_MODULE="config.settings"
exec daphne -b 0.0.0.0 --proxy-headers config.asgi:channel_layer
Besides Daphne, you have to start a worker:
python manage.py runworker
With this, you can use websockets on the same URLs as your project.
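Both daphne and the worker rely on a shared channel layer; with Channels 1.x and Redis that is configured in settings.py roughly like this (a sketch; the routing module path is an assumption):
# settings.py -- Channels 1.x Redis channel layer (sketch; ROUTING path assumed)
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("localhost", 6379)],
        },
        "ROUTING": "config.routing.channel_routing",
    },
}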
Take a look at this article: https://medium.com/labcodes/introduction-to-django-channels-d1047e56f218
Regards

Trouble deploying django-channels using Daphne and Nginx

I get a 502 error when trying to open the website. I followed the instructions from the official website link.
I added a new file, lifeline.conf, at /etc/supervisor/conf.d/:
lifeline.conf
[fcgi-program:asgi]
# TCP socket used by Nginx backend upstream
socket=tcp://localhost:8000
# Directory where your site's project files are located
directory=/home/ubuntu/lifeline/lifeline-backend
# Each process needs to have a separate socket file, so we use process_num
# Make sure to update "mysite.asgi" to match your project name
command=/home/ubuntu/Env/lifeline/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-head$
# Number of processes to startup, roughly the number of CPUs you have
numprocs=4
# Give each process a unique name so they can be told apart
process_name=asgi%(process_num)d
# Automatically start and recover processes
autostart=true
autorestart=true
# Choose where you want your log to go
stdout_logfile=/home/ubuntu/asgi.log
redirect_stderr=true
Setup nginx conf
upstream channels-backend {
server localhost:8000;
}
server {
listen 80;
server_name staging.mysite.com www.staging.mysite.com;
client_max_body_size 30M;
location = /favicon.ico { access_log off; log_not_found off; }
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_pass http://channels-backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
I checked the asgi log file and it contains an error:
daphne: error: the following arguments are required: application
I'm guessing there is a mistake in lifeline.conf.
I am assuming you are not passing the asgi application to daphne, because the configuration you pasted in the question has a missing (truncated) line. You have to pass it correctly. Assuming you have a conf package with an asgi.py module inside it containing the asgi application instance, you have to use
command=/home/ubuntu/Env/lifeline/bin/daphne -u /run/daphne/daphne%(process_num)d.sock conf.asgi:application
conf.asgi:application should be at the end.
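For comparison, the corresponding line in the Channels deployment docs keeps the other flags and simply appends the application path at the end; adapted to the paths above it would look roughly like this (conf.asgi is still an assumed project package name):
command=/home/ubuntu/Env/lifeline/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers conf.asgi:application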

Django: failed: Error during WebSocket handshake

I'm getting the error below:
WebSocket connection to 'ws://localhost/ws/testNoti?subscribe-broadcast&publish-broadcast&echo' failed: Error during WebSocket handshake: Unexpected response code: 500
Websocket connection is broken!
supervisor conf file
[unix_http_server]
username = ubuntu
password = password
[program:uwsgi]
command=/home/ubuntu/bxd-life/venv/bin/uwsgi --ini /home/ubuntu/bxd-life/bxd/bxd.ini
autostart=true
user=ubuntu
autorestart=true
stderr_logfile = /home/ubuntu/bxd-life/logs/err.log
stdout_logfile = /home/ubuntu/bxd-life/logs/out.log
stopsignal=INT
[program:uwsgi_ws]
command = /home/ubuntu/bxd-life/venv/bin/uwsgi --http :8080 --gevent 1000 --http-websockets --workers=2 --master --module bxd.wsgi_websockets
#directory=/home/ubuntu/bxd-life/bxd
autostart=true
autorestart=true
starttries=5
user=ubuntu
environment=DJANGO_SETTINGS_MODULE='bxd.settings'
nginx conf file
upstream app_server {
server localhost:8000;
}
upstream web_socket_server {
server localhost:8080 fail_timeout=0;
}
server {
listen 80;
server_name _;
location /static/ {
alias /home/ubuntu/bxd-life/bxd/static/;
expires 30d;
}
location /ws/ {
proxy_pass http://web_socket_server;
proxy_http_version 1.1;
#proxy_redirect ws://$server_name;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app_server;
}
}
wsgi_websockets.py
import os
import gevent.socket
import redis.connection
redis.connection.socket = gevent.socket
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "bxd.settings")
from ws4redis.uwsgi_runserver import uWSGIWebsocketServer
application = uWSGIWebsocketServer()
The above is working fine with ./manage.py runserver but not with nginx!
Any help would be very much appreciated.
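For context, a ws4redis setup like the one above also assumes a few settings.py entries along these lines (a sketch based on the django-websocket-redis docs; the values are examples, not taken from the question):
# settings.py fragment this setup assumes (sketch; values are examples)
# 'ws4redis' needs to be in INSTALLED_APPS
WEBSOCKET_URL = '/ws/'        # must match the nginx "location /ws/" block
WS4REDIS_EXPIRE = 7200
WS4REDIS_PREFIX = 'ws'
# for ./manage.py runserver the ws4redis docs point WSGI_APPLICATION at its combined handler
WSGI_APPLICATION = 'ws4redis.django_runserver.application'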

How to set up a subdomain in the following environment: nginx, supervisor, django, gunicorn?

I have a django app set up with nginx + gunicorn + supervisor and it is working fine. But I need to create a subdomain for staging or development, like "dev.domain.com". I added another server block in nginx.conf for my subdomain, but my subdomain URL always pointed to the main domain site, so I changed the port number in proxy_pass as suggested in other posts. Because of gunicorn and supervisord I also needed to add another conf file for this subdomain at "/etc/supervisord/conf.d/subdomain.conf", but when I reload supervisord it is not able to start my subdomain program. Below are my nginx.conf, subdomain.conf and script.sh:
nginx.conf
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
gzip on;
gzip_static on;
gzip_types application/x-javascript text/css text/html application/json text/css text/json;
server {
listen 80;
server_name domain_name
# no security problem here, since / is always passed to upstream
root /home/path/to/project/base
# serve directly - analogous for static/staticfiles
location /static/ {
# if asset versioning is used
if ($query_string) {
expires max;
}
autoindex off;
root /home/path/to/static/;
}
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 10;
proxy_read_timeout 10;
proxy_pass http://localhost:8000/;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
server {
listen 80;
server_name subdomain_name
# no security problem here, since / is always passed to upstream
root /home/path/to/subdomain_directory (which is different; you can say it is a fully different project which I want to run as the development project);
# serve directly - analogous for static/staticfiles
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 10;
proxy_read_timeout 10;
proxy_pass http://localhost:9000/;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
}
script.sh
set -e
NUM_WORKERS=4
# user/group to run as
USER=user_name
#GROUP=your_unix_group
cd /home/path/to/subdomain_base
source subdomain_virtualenv_activation
LOGFILE=log_file_path
LOGDIR=$(dirname $LOGFILE)
test -d $LOGDIR || mkdir -p $LOGDIR
exec virtualenvironment/bin/gunicorn_django -w $NUM_WORKERS \
--user=$USER --log-level=debug \
--log-file=$LOGFILE 2>>$LOGFILE
subdomain.conf
[program:programname]
directory = /home/path/to/subdomainbase/
user = user_name
command = /home/path/to/script.sh
stdout_logfile = /home/path/to/log
stderr_logfile = /home/path/to/log
I have a Procfile too, as suggested for gunicorn, which is in the base directory:
Procfile
./manage.py runserver_plus 0.0.0.0:$PORT
OK, so these are my configurations. Please check where I am going wrong. I just want to run my development server as a different project, but under the same domain as a subdomain. After all the changes I have made, the main domain is still working fine with the same process. Please let me know if you need more info on this error.
EDIT
I am reading your post again, and... shouldn't you set ADDRESS in your gunicorn script? gunicorn uses port 8000 by default; maybe your subdomain is trying to use the same port?
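For example, binding the subdomain's gunicorn explicitly to the port that its nginx server block proxies to (9000 in the config above) would look roughly like this in script.sh:
# bind explicitly so the subdomain does not collide with the main site on 8000
exec virtualenvironment/bin/gunicorn_django -w $NUM_WORKERS --bind=127.0.0.1:9000 \
--user=$USER --log-level=debug \
--log-file=$LOGFILE 2>>$LOGFILE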
END EDIT
I have two Django applications running with nginx, gunicorn and supervisor, as you want to do (well, not exactly the same, but very similar: I have two domains and a subdomain). I don't see where your mistake is; I think it must be in the nginx configuration. Maybe the "root" line?
Have you checked whether supervisord returns an error when you try to start it using the "supervisorctl" command?
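For example (the program name is the one from your subdomain.conf):
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status programname
sudo supervisorctl tail programname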
I can show you my configuration and you can compare it:
I have two .conf files for nginx:
domain1.conf:
server {
listen 80;
server_name domain1.net;
return 301 $scheme://www.domain1.net$request_uri;
}
server {
listen 80;
server_name www.domain1.net;
access_log /var/log/nginx/domain1.log;
location /static {
alias /var/www/domain1/media/;
autoindex on;
access_log off;
}
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 10;
proxy_read_timeout 10;
proxy_pass http://127.0.0.1:8000/;
}
}
and domain2.conf:
server {
listen 80;
server_name subdomain.domain2.es;
access_log /var/log/nginx/domain2.log;
location /static {
alias /var/www/dev/domain2/domain2/static/;
autoindex on;
access_log off;
}
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 10;
proxy_read_timeout 10;
proxy_pass http://127.0.0.1:8005/;
}
}
My two gunicorn scripts are the same, just changing the paths and ADDRESS in one of them:
#!/bin/bash
set -e
LOGFILE=/var/log/gunicorn/domain1.log
LOGDIR=$(dirname $LOGFILE)
NUM_WORKERS=1
# user/group to run as
USER=user
GROUP=user
ADDRESS=127.0.0.1:8005
cd /var/www/dev/domain1
source /path/to/venv/domain1/bin/activate
test -d $LOGDIR || mkdir -p $LOGDIR
exec gunicorn_django -w $NUM_WORKERS --bind=$ADDRESS \
--user=$USER --group=$GROUP --log-level=debug \
--log-file=$LOGFILE 2>>$LOGFILE
My two supervisor scripts are the same too:
[program:domain1]
directory = /var/www/dev/domain1/
user = user
command = /path/to/bin/gunicorn_domain1.sh
stdout_logfile = /var/log/nginx/domain1.log
stderr_logfile = /var/log/nginx/domain1.log
I hope you found this helpful.