WebSocket Connection Failed on AWS EC2 Ubuntu Instance (Django Channels) - django

I am trying to deploy my Django Channels ASGI app on an AWS EC2 Ubuntu instance. My WSGI app is working fine, and the ASGI app itself also works on the instance (tested with the Python websockets library): there is no error in the code, and daphne runs perfectly on the instance's localhost.
I enabled all ports in the security group of the EC2 instance as well.
server {
    listen 80;
    server_name MY_SERVER_IP;

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }

    location /ws/ {
        proxy_pass http://0.0.0.0:4001;
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
[Unit]
Description=project Daphne Service
After=network.target
[Service]
User=root
Type=simple
WorkingDirectory=/home/ubuntu/myproject
ExecStart=/home/ubuntu/myproject/venv/bin/python /home/ubuntu/myproject/venv/bin/d>
[Install]
WantedBy=multi-user.target
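The ExecStart line above is truncated by the terminal width. For reference, a typical daphne invocation in such a unit file looks something like the line below; the flags and module name are illustrative guesses (the port matches the nginx config and the test script below), not the actual truncated command:
# illustrative only - the real flags behind the truncated line are unknown
ExecStart=/home/ubuntu/myproject/venv/bin/daphne -b 0.0.0.0 -p 4001 myproject.asgi:application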
I created this Python script to determine whether daphne is working or not; it gives me the expected result, with no errors:
import asyncio
import websockets
import json

async def hello():
    uri = "ws://127.0.0.1:4001/ws/chat/person/?token=<CHAT_TOKEN>"
    async with websockets.connect(uri) as websocket:
        await websocket.send(json.dumps({"message": "Hey there"}))
        g = await websocket.recv()
        print(g)

asyncio.get_event_loop().run_until_complete(hello())
This script returns:
{
    "id": 1,
    "message": "Hey There",
    "time_stamp": "TIME",
    "receiver": {
        "username": "John Doe",
        "id": 2,
        "profile_picture": "LINK"
    }
}
Now, in my frontend application, when I try to connect to the websocket:
socket = new WebSocket("ws://<SERVER_IP>:4001/ws/chat/person/?token=<MY_TOKEN>");
socket.onopen = function() {
    ...
};
...
But this script returns:
WebSocket connection to 'ws://<SERVER_IP>/ws/chat/person/?token=<TOKEN>' failed:
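Note, in passing, that the nginx config above also routes websocket traffic through port 80 via the /ws/ location, so a connection going through nginx rather than straight to daphne's port would look like this (the URL shape is assumed from that location block):
// sketch: connect through nginx instead of hitting daphne's port directly
socket = new WebSocket("ws://<SERVER_IP>/ws/chat/person/?token=<MY_TOKEN>");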

I fixed the problem by using supervisor:
sudo apt install nginx supervisor
Now you will need to create the supervisor configuration file (often located in /etc/supervisor/conf.d/). Here, we're making supervisor listen on a TCP port and then handing that socket off to the child processes so they can all share the same bound port:
[fcgi-program:asgi]
# TCP socket used by Nginx backend upstream
socket=tcp://localhost:8000
# Directory where your site's project files are located
directory=/my/app/path
# Each process needs to have a separate socket file, so we use process_num
# Make sure to update "mysite.asgi" to match your project name
command=daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application
# Number of processes to startup, roughly the number of CPUs you have
numprocs=4
# Give each process a unique name so they can be told apart
process_name=asgi%(process_num)d
# Automatically start and recover processes
autostart=true
autorestart=true
# Choose where you want your log to go
stdout_logfile=/your/log/asgi.log
redirect_stderr=true
Create the run directory for the sockets referenced in the supervisor configuration file:
sudo mkdir /run/daphne/
sudo chown <user>.<group> /run/daphne/
Because /run is cleared on every reboot, also add a systemd-tmpfiles entry (e.g. in /etc/tmpfiles.d/daphne.conf) so the directory is recreated at boot:
d /run/daphne 0755 <user> <group>
Finally, tell supervisor to pick up the new configuration:
sudo supervisorctl reread
sudo supervisorctl update
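For completeness, nginx then needs to point its websocket location at the TCP socket supervisor is listening on. A minimal sketch of that server block, following the Channels deployment docs (the server name and socket port are placeholders matching the configs above):
upstream channels-backend {
    server localhost:8000;
}
server {
    listen 80;
    server_name MY_SERVER_IP;
    location /ws/ {
        proxy_pass http://channels-backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
    }
}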

Related

Setting up reverse proxy Nginx + Gunicorn/Django REST - 404 Not Found

I have a setup with Nginx as the web server and Gunicorn as the application server, serving a Django REST API.
Nginx is listening on port 80 and gunicorn on port 8000 (I launch gunicorn using this command):
gunicorn --bind 0.0.0.0:8000 cdm_api.wsgi -t 200 --workers=3
and when I send the request to port 8000, I can access the running API, for instance:
curl -d "username=<user>&password=<pass>" -X POST http://127.0.0.1:8000/api/concept
However, when I make the request through Nginx acting as a reverse proxy, I get a 404 Not Found error:
curl -d "username=<user>&password=<pass>" -X POST http://127.0.0.1:80/api/concept
Here is my nginx conf file:
server {
    listen 80;
    listen [::]:80;
    server_name 127.0.0.1;

    location /api/ {
        proxy_pass http://127.0.0.1:8000/api/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_connect_timeout 360s;
        proxy_read_timeout 360s;
    }

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}
Let me know if more information is needed. Thanks in advance!
Gunicorn will create a sock file, and you will have to point your proxy_pass at that sock file. For now you are proxying to localhost:8000, which means you would have to keep Django's runserver command running so that nginx can find a server on port 8000.
Something like:
location / {
    include proxy_params;
    #proxy_pass http://unix:/home/abc/backend/social.sock;
    proxy_pass http://unix:/home/abc/backend/backend.sock;
}
Gunicorn makes it possible to serve Django in production; you don't have to run the runserver command every time. Make sure gunicorn restarts whenever the server reboots.
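To bind gunicorn to a unix socket instead of a TCP port, the launch command from the question would change along these lines (the socket path is illustrative, taken from the proxy_pass above):
gunicorn --bind unix:/home/abc/backend/backend.sock cdm_api.wsgi -t 200 --workers=3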

deploying channels with nginx

I have deployed django with nginx following the tutorials on DigitalOcean. Then, after installation, I blindly followed the "Example Setup" section of the Channels documentation.
My confusions are:
When setting up the configuration file for supervisor, it says to set the directory as
directory=/my/app/path
Should I write down the path where manage.py is, or the path where settings.py is?
When I reload nginx after changing the nginx configuration file, I get an error saying:
host not found in upstream "channels-backend" in /etc/nginx/sites-enabled/mysite:18
nginx: configuration file /etc/nginx/nginx.conf test failed
I did replace "mysite" with the name of my website. I had another error earlier saying:
no live upstreams while connecting to upstream
but could not recreate the situation.
I am new to using Channels, so any additional information on upstreams would be helpful. Please let me know if I need to provide more information.
Edit:
Here is the nginx.conf file. I changed some sensitive data inside the <>.
upstream channels-backend {
    server localhost:8000;
}

server {
    listen 80;
    server_name <domain name> <ip address>;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root <root to static>;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_pass http://channels-backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
This passes nginx -t. The error message I have in the error.log is:
connect() failed (111: Connection refused) while connecting to upstream, client: <some ip>, server: <domain name>, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8000/", host: "<domain name>"
The problem was actually in the supervisor configuration file.
[fcgi-program:asgi]
# TCP socket used by Nginx backend upstream
socket=tcp://localhost:8000
# Directory where your site's project files are located
directory=/my/app/path
# Each process needs to have a separate socket file, so we use process_num
# Make sure to update "mysite.asgi" to match your project name
command=daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application
# Number of processes to startup, roughly the number of CPUs you have
numprocs=4
# Give each process a unique name so they can be told apart
process_name=asgi%(process_num)d
# Automatically start and recover processes
autostart=true
autorestart=true
# Choose where you want your log to go
stdout_logfile=/your/log/asgi.log
redirect_stderr=true
To check if supervisor was running correctly, I ran
sudo supervisorctl status
This gave me a FATAL status. The problem was that I am using a virtual environment, and daphne was only installed inside the virtual environment. The command therefore needs the full path to the virtualenv's daphne, something like:
command=/my/project/virtualenv/path/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application
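A quick way to confirm the right binary path before editing the config (a sketch; adjust the virtualenv path to yours):
source /my/project/virtualenv/path/bin/activate
which daphne    # should print .../virtualenv/path/bin/daphne
sudo supervisorctl reread && sudo supervisorctl update
sudo supervisorctl status    # the asgi processes should now show RUNNING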

django channels CRITICAL Listen failure: Couldn't listen on 0.0.0.0:x: [Errno 98] Address already in use

I am trying to deploy my ASGI application.
I have followed the deployment docs at https://channels.readthedocs.io/en/latest/deploying.html exactly.
But when I check my logs, I end up with:
CRITICAL Listen failure: Couldn't listen on 0.0.0.0:x: [Errno 98] Address already in use.
I have tried changing ports, but I keep ending up with the same error.
E.g. for port 8005: daphne -p 8005 -b 0.0.0.0 myproject.asgi:application
I end up with this in my asgi.log file:
Configuring endpoint unix:/run/daphne/daphne1.sock
2020-07-16 08:22:10,324 INFO Starting server at fd:fileno=0, tcp:port=8005:interface=0.0.0.0, unix:/run/daphne/daphne0.sock
2020-07-16 08:22:10,324 INFO HTTP/2 support not enabled (install the http2 and tls Twisted extras)
2020-07-16 08:22:10,325 INFO Configuring endpoint fd:fileno=0
2020-07-16 08:22:10,345 INFO Listening on TCP address 127.0.0.1:8005
2020-07-16 08:22:10,345 INFO Configuring endpoint tcp:port=8005:interface=0.0.0.0
My /etc/supervisor/conf.d/abc.conf file:
[fcgi-program:isbuddy]
# TCP socket used by Nginx backend upstream
socket=tcp://localhost:8005
# Directory where your site's project files are located
directory=/home/bhaskar/myprojectdir
# Each process needs to have a separate socket file, so we use process_num
# Make sure to update "mysite.asgi" to match your project name
command=/home/bhaskar/venv/bin/daphne -p 8005 -b 0.0.0.0 -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers myproject.asgi:application
# Number of processes to startup, roughly the number of CPUs you have
numprocs=2
# Give each process a unique name so they can be told apart
process_name=asgi%(process_num)d
# Automatically start and recover processes
autostart=true
autorestart=true
# Choose where you want your log to go
stdout_logfile=/home/bhaskar/asgi.log
redirect_stderr=true
My /etc/nginx/sites-enabled/abcd:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_pass http://0.0.0.0:8005;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
I have tried every possibility I could find in the docs.
I want my website to run on my public IP.
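One likely cause, given the log above: under [fcgi-program:x], supervisor binds the TCP socket itself and hands it to each daphne process as file descriptor 0, so adding -p 8005 -b 0.0.0.0 makes every process try to bind the port a second time, and all but the first fail with Errno 98. A sketch of the command without the extra flags (paths as in the question):
command=/home/bhaskar/venv/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers myproject.asgi:application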

Run uWSGI and ASGI simultaneously with Django

I'm currently running a Django (2.0.2) server with uWSGI and 10 workers.
I'm trying to implement real-time chat, and I took a look at Channels.
The documentation mentions that the server needs to be run with Daphne, an ASGI server, ASGI being the asynchronous counterpart of WSGI.
I managed to install and set up ASGI and run the server with daphne, but with only one worker (a limitation of ASGI as I understood it), and the load is too high for a single worker.
Is it possible to run the server with uWSGI with 10 workers to reply to HTTP/HTTPS requests, and use ASGI/daphne for WS/WSS (WebSocket) requests?
Or maybe it's possible to run multiple instances of ASGI?
It is possible to run WSGI alongside ASGI. Here is an example of an Nginx configuration:
server {
    listen 80;
    server_name {{ server_name }};
    charset utf-8;

    location /static {
        alias {{ static_root }};
    }

    # this is the endpoint of the channels routing
    location /ws/ {
        proxy_pass http://localhost:8089; # daphne (ASGI) listening on port 8089
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {
        proxy_pass http://localhost:8088; # gunicorn (WSGI) listening on port 8088
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 75s;
        proxy_read_timeout 300s;
        client_max_body_size 50m;
    }
}
To use the /ws/ endpoint correctly, you will need to build your URL like this:
ws://localhost/ws/your_path
Nginx will then be able to upgrade the connection.
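For reference, the two backends behind this config would be started along these lines; the module name mysite is a placeholder, and the ports match the comments in the config above:
daphne -b 127.0.0.1 -p 8089 mysite.asgi:application
gunicorn --bind 127.0.0.1:8088 --workers 10 mysite.wsgi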

Cannot connect to websocket using django-channels and nginx on docker as services

I'm using docker compose to build a project with django and nginx as services. When I launch the daphne server and a client tries to connect to the websocket server, I get this error:
*1 recv() failed (104: Connection reset by peer) while reading response header from upstream
The client side shows:
failed: Error during WebSocket handshake: Unexpected response code: 502
Here is my docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    command: nginx -g 'daemon off;'
    ports:
      - "1010:80"
    volumes:
      - ./config/nginx/nginx.conf:/etc/nginx/nginx.conf
      - .:/makeup
    links:
      - web
  web:
    build: .
    command: /usr/local/bin/circusd /makeup/config/circus/web.ini
    environment:
      DJANGO_SETTINGS_MODULE: MakeUp.settings
      DEBUG_MODE: 1
    volumes:
      - .:/makeup
    expose:
      - '8000'
      - '8001'
    links:
      - cache
    extra_hosts:
      "postgre": 100.73.138.65
Nginx:
server {
    listen 80;
    server_name thelab518.cloudapp.net;
    keepalive_timeout 15;
    root /makeup/;
    access_log /dev/stdout;
    error_log /dev/stderr;

    location /api/stream {
        proxy_pass http://web:8001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://web:8000;
    }
}
And circusd's web.ini file:
[watcher:web]
cmd = /usr/local/bin/gunicorn MakeUp.wsgi:application -c config/gunicorn.py
working_dir = /makeup/
copy_env = True
user = www-data
[watcher:daphne]
cmd = /usr/local/bin/daphne -b 0.0.0.0 -p 8001 MakeUp.asgi:channel_layer
working_dir = /makeup/
copy_env = True
user = root
[watcher:worker]
cmd = /usr/bin/python3 manage.py runworker
working_dir = /makeup/
copy_env = True
user = www-data
As quite explicitly stated in the fine manual, to successfully run Channels you need a dedicated application server implementing the ASGI protocol, such as the supplied daphne.
The entire Django execution model has been changed with Channels, so that there are separate "interface servers" taking care of receiving and sending messages over, for example, WebSockets or HTTP or SMS, and "worker servers" that run the actual code (potentially on a different server or VM or container or...). The two are connected by a "Channel layer" that carries messages and replies back and forth.
The current implementation supplies 3 channel layers that talk ASGI between an interface server and a worker server:
An In-memory channel layer, used mainly for running the test server (it's single process)
An IPC based channel layer, usable to run different workers on the same server
A redis based channel layer, that should be used for heavy production sites, able to connect interface servers to multiple worker servers.
You configure them like you do for DATABASES:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "ROUTING": "my_project.routing.channel_routing",
        "CONFIG": {
            "hosts": [("redis-channel-1", 6379), ("redis-channel-2", 6379)],
        },
    },
}
Of course this means that your docker config has to change to add one or more interface servers instead of, or in addition to, nginx (even if, in that case, you'll need to accept websocket connections on a different port, with all the possible problems that entails) and, quite likely, an instance of redis connecting them all.
This in turn means that until circus and nginx support ASGI, it won't be possible to use them with django-channels, or this support will only cover the regular http part of your system.
You can find more info in the Deploying section of the official documentation.
It looks like you started daphne on port 8001, but you are trying to expose both ports 8000 and 8001 in docker-compose. Port 8000 is not pointing to any server (daphne is on 8001). In your nginx config, please proxy to port 8001, and expose only port 8001 in docker-compose.
I have created a simple example on GitHub of how this can be set up, where I proxy to both asgi and wsgi servers, but you can go with only an asgi server.
The nginx configuration:
upstream app {
    server wsgiserver:8000;
}

upstream ws_server {
    server asgiserver:9000;
}

server {
    listen 8000 default_server;
    listen [::]:8000;
    client_max_body_size 20M;

    location / {
        try_files $uri @proxy_to_app;
    }

    location /tasks {
        try_files $uri @proxy_to_ws;
    }

    location @proxy_to_ws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_pass http://ws_server;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }
}
The docker-compose.yml:
version: '2'
services:
nginx:
extends:
file: docker-common.yml
service: nginx
ports:
- 8000:8000
volumes:
- ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
volumes_from:
- asgiserver
asgiserver:
extends:
file: docker-common.yml
service: backend
entrypoint: /app/docker/backend/asgi-entrypoint.sh
links:
- postgres
- redis
- rabbitmq
expose:
- 9000
wsgiserver:
extends:
file: docker-common.yml
service: backend
entrypoint: /app/docker/backend/wsgi-entrypoint.sh
links:
- postgres
- redis
- rabbitmq
expose:
- 8000
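The compose file references entrypoint scripts that are not shown. A minimal sketch of what asgi-entrypoint.sh might contain, assuming a project module named mysite (the flags and module name are guesses; the port matches what the asgiserver service exposes):
#!/bin/sh
# hypothetical sketch: start daphne on the port the asgiserver service exposes
exec daphne -b 0.0.0.0 -p 9000 mysite.asgi:channel_layer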