Django Channels / Daphne returns 200 status code instead of 101

I have set up a Django application with Nginx + uWSGI. The application also uses django-channels with Redis. When deploying the setup on a single machine, everything works fine.
But when I tried to run the app on 2 instances behind a common load balancer, requests get properly routed to the daphne process and I can see the logs, but the status code returned from the daphne process is 200 instead of 101.
Load balancer nginx conf:
upstream webservers {
    server 10.1.1.2;
    server 10.1.1.3;
}

server {
    location / {
        proxy_pass http://webservers;
    }
}
Versions used:
daphne==2.2.4
channels==2.1.6
channels-redis==2.3.2
All the routing works fine and there are no errors; it's just that the status code returned is 200 instead of 101.

Try adding the following headers; hopefully this will help. By default nginx speaks HTTP/1.0 to upstreams and does not pass the hop-by-hop Upgrade and Connection headers, so the WebSocket handshake degrades into a plain HTTP request and daphne answers it with 200 instead of 101:
server {
    location / {
        proxy_pass http://webservers;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Full official instructions on how to set up Django Channels + Nginx can be found here.
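If the same location also serves ordinary HTTP requests, a common refinement (a sketch based on nginx's standard WebSocket proxying idiom, not something from the original answer) is to derive the Connection header from $http_upgrade with map, so non-WebSocket requests keep a normal Connection header:

# In the http {} block:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    location / {
        proxy_pass http://webservers;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}

You can then verify the handshake through the load balancer with curl; a 101 Switching Protocols response means the upgrade now reaches daphne (the host is a placeholder):

curl -i -N \
     -H "Connection: Upgrade" \
     -H "Upgrade: websocket" \
     -H "Sec-WebSocket-Version: 13" \
     -H "Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==" \
     http://your-load-balancer/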

Related

How to setup front end and back end on one server and on one port number? [duplicate]

On my AWS Ubuntu (18.04) machine, I have 2 applications running on 2 ports:
(1) A .NET Core 3.1 Angular SPA with IdentityServer4 running on port 5000, which I set up using the steps below. The nginx is a reverse proxy only.
(2) An Angular SSR application running on port 4000.
What I want to achieve is for the reverse proxy to send social media bots to port 4000 while all other requests are proxied to port 5000.
Currently nginx is only proxying to the .NET Core app on port 5000.
You can use "location and proxy_pass" to access your desire applications which are working on different ports.
If you have all stuffs on a same vm just use localhost insted of ip address i wrote it down.
But if application are running on another vm use its IP address which in my configuration the destination server is : 172.16.0.100
You can edit the hosts file and use "example.com" or whatever to point your site and use in your nginx configuration file instead of IP or localhost.
sudo vi /etc/hosts
172.16.0.100 example.com
and add your desired FQDN for the destination host, or, if you have a DNS server, add an A record that will be available to the whole local network.
I wrote this configuration on my nginx server and it works like a charm.
You can of course adapt this configuration to your environment.
server {
    listen 80;
    server_name 172.16.0.100;

    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;

    location /angular {
        proxy_pass http://172.16.0.100:5000;
    }

    location /ssr {
        proxy_pass http://172.16.0.100:4000;
    }
}
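The question's actual goal, sending social media bots to port 4000, is not covered by path-based locations alone. A hedged sketch using nginx's map directive on $http_user_agent (the backend addresses follow the config above; the bot patterns are illustrative, not exhaustive):

# In the http {} block: pick a backend based on the User-Agent header.
map $http_user_agent $app_backend {
    default                                          http://172.16.0.100:5000;
    "~*(facebookexternalhit|twitterbot|linkedinbot)" http://172.16.0.100:4000;
}

server {
    listen 80;
    server_name 172.16.0.100;

    location / {
        # proxy_pass with a variable works here because the backend is an
        # IP literal; a hostname would additionally require a resolver.
        proxy_pass $app_backend;
    }
}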

Socketio Not Posting Within NGINX Flask Application

I have a Flask application running behind NGINX, deployed with Gunicorn. When I deploy, everything works perfectly fine: I can hit my server's IP and see the app running with no issues. However, when I execute an action that uses socketio, the action does not get passed to the backend, and I believe this is an issue with my NGINX configuration. My conf.d file has the following:
server {
    listen 80;
    server_name MY_SERVER_IP;

    location / {
        proxy_pass http://127.0.0.1:8000;
    }

    location /socket.io {
        include proxy_params;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass http://127.0.0.1:8000/socket.io;
    }
}
I deploy the app with
gunicorn -k geventwebsocket.gunicorn.workers.GeventWebSocketWorker -w 1 app:app
Within my app.py I am running the socketio server with
socketio.run(app, host='127.0.0.1', port=80, debug=True)
I am also seeing this in the browser console...
socket.io.min.js:2 GET http://127.0.0.1:8000/socket.io/?EIO=3&transport=polling&t=MuA1z9K net::ERR_CONNECTION_REFUSED
Everything works locally. Please keep in mind, I am fairly new to Flask deployments with socketio.
Your server is running on port 8000, so your proxy_pass statement should go to that port, not 5000:
proxy_pass http://127.0.0.1:8000/socket.io;
Also note that when you run your server via gunicorn, the socketio.run() line does not execute; that call is only used when you don't run a third-party web server such as gunicorn or uwsgi, which I'm guessing is what you do locally.
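A minimal sketch of how the entry point can be structured so that socketio.run() only fires in local development (the names follow the question's app:app; the Flask-SocketIO setup itself is assumed):

# app.py
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

if __name__ == '__main__':
    # Local development only: gunicorn imports app:app and never
    # reaches this block, so it is safe to keep for both setups.
    socketio.run(app, host='127.0.0.1', port=8000, debug=True)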
My connection to socket.io was set incorrectly. Per the documentation, I should have had var socket = io(); in my JS script as opposed to var socket = io.connect('127.0.0.1:8000');

Dispatching requests from one uwsgi to another uwsgi instance running Django Channels

I am currently using Django Channels for websocket communication. I read this article, which states that I should split the project into two uwsgi instances:
"The web server undertakes the task of dispatching normal requests to one uWSGI instance and WebSocket requests to another one"
Now I have two uwsgi instances running. This is how I am running both.
This uwsgi handles the normal Django site requests:
uwsgi --virtualenv /home/ec2-user/MyProjVenv --socket /home/ec2-user/MyProjVenv/MyProjWeb/site1.socket --chmod-socket=777 --buffer-size=32768 --workers=5 --master --module main.wsgi
This uwsgi handles the websocket requests:
uwsgi --virtualenv /home/ec2-user/MyProjVenv --http-socket /home/ec2-user/MyProjVenv/MyProjWeb/web.socket --gevent 1000 --http-websockets --workers=2 --master --chmod-socket=777 --module main.wsgi_websocket
Now the websocket uwsgi launches main.wsgi_websocket. The code for main.wsgi_websocket is this:
import os
import gevent.socket
import redis.connection

# Make redis-py use gevent's cooperative sockets.
redis.connection.socket = gevent.socket

os.environ.update(DJANGO_SETTINGS_MODULE='main.settings')

from ws4redis.uwsgi_runserver import uWSGIWebsocketServer
application = uWSGIWebsocketServer()
Now, after spinning up the two uwsgi instances, I am able to access the website fine. The websocket uwsgi instance is also receiving data, but I am not sure whether it passes that data on to the Django website uwsgi instance, which uses Django Channels and has handlers for the send/receive functions. I am using Django Channels here, and this is the configuration I have specified in my settings:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [(redis_host, 6379)],
        },
        "ROUTING": "main.routing.channel_routing",
    },
}
The channel routing is this
channel_routing = [
    include("chat.routing.websocket_routing", path=r"^/chat/stream"),
    include("chat.routing.custom_routing"),
]
and this is the websocket_routing which I have
websocket_routing = [
    # Called when WebSockets connect
    route("websocket.connect", ws_connect),
    # Called when WebSockets get sent a data frame
    route("websocket.receive", ws_receive),
    # Called when WebSockets disconnect
    route("websocket.disconnect", ws_disconnect),
]
Now the problem is that my ws_receive is never called. If I test on my local dev machine using ipaddress:8000/chat/stream, this works perfectly fine, but I have no clue why my receive is not called when I use ipaddress/ws/. I am certain that the other uwsgi instance is getting the data, but I don't know how to find out whether it passes it on to the Django-side uwsgi instance, and if it does, why my receive is not being called. Any suggestions would definitely help.
I was wondering about this when I saw your other question here Nginx with Daphne gives 502 Bad Gateway
Splitting the project is a good idea. I assume these two instances are running behind nginx (from that question of yours).
So nginx should decide which request goes to which instance? You can do that by using different URL paths for the channels and Django apps.
Example:
for the Django app: /whatever/whatever/...
and for channels: /ws/whatever/...
Let's assume that your channels consumer instance is on 8000.
Add this to your nginx:
location /ws/ {
    proxy_pass http://0.0.0.0:8000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
}
This way, whatever the request is, if its URL starts with /ws/, it is consumed by the instance running on port 8000.
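On the client side the connection just has to target the same /ws/ prefix; a minimal sketch (the /ws/chat/stream path is illustrative):

// Connects through the /ws/ location above; nginx forwards this
// to the Channels instance on port 8000.
var socket = new WebSocket('ws://' + window.location.host + '/ws/chat/stream');
socket.onmessage = function (event) {
    console.log('received:', event.data);
};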

Django Channels Nginx production

I have a Django project and recently added Channels to use websockets. This all seems to work fine, but the problem I have now is getting it production ready.
My setup is as follows:
Nginx web server
Gunicorn for django
SSL enabled
Since I have added Channels to the mix, I have spent the last day trying to get it to work.
All the tutorials say to run daphne on some port and then show how to set up nginx for that.
But what about having gunicorn serve Django?
So now I have gunicorn running this Django app on 8001.
If I run daphne on another port, let's say 8002, how does it know it's part of this Django project? And what about running workers?
Should gunicorn, daphne and runworker all run together?
This question is actually addressed in the latest Django Channels docs:
It is good practice to use a common path prefix like /ws/ to
distinguish WebSocket connections from ordinary HTTP connections
because it will make deploying Channels to a production environment in
certain configurations easier.
In particular for large sites it will be possible to configure a
production-grade HTTP server like nginx to route requests based on
path to either (1) a production-grade WSGI server like Gunicorn+Django
for ordinary HTTP requests or (2) a production-grade ASGI server like
Daphne+Channels for WebSocket requests.
Note that for smaller sites you can use a simpler deployment strategy
where Daphne serves all requests - HTTP and WebSocket - rather than
having a separate WSGI server. In this deployment configuration no
common path prefix like /ws/ is necessary.
In practice, your NGINX configuration would then look something like (shortened to only include relevant bits):
upstream daphne_server {
    server unix:/var/www/html/env/run/daphne.sock fail_timeout=0;
}

upstream gunicorn_server {
    server unix:/var/www/html/env/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name _;

    location /ws/ {
        proxy_pass http://daphne_server;
    }

    location / {
        proxy_pass http://gunicorn_server;
    }
}
(Above it is assumed that you are binding the Gunicorn and Daphne servers to Unix socket files.)
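For reference, a hedged sketch of how the two servers might be bound to those socket files (my_project is a placeholder module name):

# WSGI server for ordinary HTTP requests
gunicorn my_project.wsgi:application --bind unix:/var/www/html/env/run/gunicorn.sock

# ASGI server for WebSocket requests (daphne's -u flag binds a Unix socket)
daphne -u /var/www/html/env/run/daphne.sock my_project.asgi:application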
I have created an example of how to mix Django Channels and Django Rest Framework. I set up the nginx routing so that:
websocket connections go to the daphne server
HTTP connections (the REST API) go to the gunicorn server
Here is my nginx configuration file:
upstream app {
    server wsgiserver:8000;
}

upstream ws_server {
    server asgiserver:9000;
}

server {
    listen 8000 default_server;
    listen [::]:8000;

    client_max_body_size 20M;

    location / {
        try_files $uri @proxy_to_app;
    }

    location /tasks {
        try_files $uri @proxy_to_ws;
    }

    location @proxy_to_ws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_pass http://ws_server;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }
}
I recently answered a similar question; have a look there for an explanation of how Django Channels works.
Basically, you don't need gunicorn anymore. You have daphne, which is the interface server that accepts HTTP/WebSockets, and you have your workers that run the Django views. Then obviously you have your channel backend that glues everything together.
To make it work, you have to configure CHANNEL_LAYERS in settings.py and also run the interface server:
$ daphne my_project.asgi:channel_layer
and your worker:
$ python manage.py runworker
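For context, a minimal sketch of the my_project/asgi.py module that the daphne command above expects, following the Channels 1.x convention (my_project is a placeholder):

import os
from channels.asgi import get_channel_layer

# Point Django at the project settings before building the channel layer.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_project.settings")

channel_layer = get_channel_layer()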
NB! If you chose Redis as the channel backend, pay attention to the sizes of the files you're serving. If you have large static files, make sure NGINX serves them; otherwise clients will experience cryptic errors caused by Redis running out of memory.
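A minimal sketch of serving static files directly from nginx, assuming they have been collected to /var/www/html/static/ (both the URL prefix and the path are assumptions):

location /static/ {
    # Serve collected static files from disk so large assets never
    # pass through daphne or the channel layer.
    alias /var/www/html/static/;
}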

How to allow NGINX to buffer for multiple Django App Servers

How can one allow NGINX to buffer client requests for multiple Django App Servers that all run a WSGI server like Gunicorn? What do I need to change in the config files?
Use nginx's upstream option to define a pool of application servers; when you proxy_pass, you can proxy_pass to the named upstream:
upstream my-upstream {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
}

location / {
    proxy_pass http://my-upstream;
}
Unless you specify otherwise, requests will be round-robined between the different upstream servers.
If you would rather send each request to the server with the fewest active connections, use least_conn:

upstream my-upstream {
    least_conn;
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
}

location / {
    proxy_pass http://my-upstream;
}
Let's assume you are using 4 servers: when server 1 is down, nginx will intelligently shift the next request to the next available server, and once server 1 is back up, subsequent requests will be sent to server 1 again. By default nginx uses a round-robin algorithm to distribute the requests.
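Failure detection is tunable per server; a hedged sketch using nginx's max_fails and fail_timeout parameters (the values are illustrative; nginx defaults to 1 failure and a 10-second window):

upstream my-upstream {
    # Mark a server unavailable after 3 failed attempts within 30s,
    # then retry it once the 30s window has passed.
    server 127.0.0.1:9000 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:9001 max_fails=3 fail_timeout=30s;
}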
With ip_hash, requests are distributed based on the client's IP address, so each client sticks to one server:

upstream my-upstream {
    ip_hash;
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
}

location / {
    proxy_pass http://my-upstream;
}
In this case the previous scenario changes: a given client always gets its response from server 1, goes to server 2 only while server 1 is down, and is routed back to server 1 once it is up again.