Django Channels + Elastic Beanstalk - django

I've set up an Application Load Balancer that routes /ws/ requests to port 5000, where I have Daphne running along with 4 workers (managed by Supervisord). However, in the Chrome console I get the error
WebSocket connection to 'wss://api.example.com/ws/' failed: WebSocket is closed before the connection is established.
when attempting to connect to my WebSocket via simple JavaScript code (see the Channels Multichat example for something quite close). Any ideas?
Routing.py
websocket_routing = [
    # Called when WebSockets connect
    route("websocket.connect", ws_connect),
    # Called when WebSockets get sent a data frame
    route("websocket.receive", ws_receive),
    # Called when WebSockets disconnect
    route("websocket.disconnect", ws_disconnect),
]
Settings.py
# Channel settings
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": ["redis://:xxxxx@xxxx-redis.xxxxx.1234.usxx.cache.amazonaws.com:6379/0"],
        },
        "ROUTING": "Project.routing.channel_routing",
    },
}
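For context, the Daphne command below points at Project.asgi:channel_layer; with Channels 1.x that module is usually just standard boilerplate, roughly like this sketch (settings module name assumed, not shown in the original post):
# Project/asgi.py -- Channels 1.x style ASGI module (sketch)
import os

from channels.asgi import get_channel_layer

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "Project.settings")
channel_layer = get_channel_layer()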
Supervisord.conf
[program:Daphne]
environment=PATH="/opt/python/run/venv/bin"
environment=LD_LIBRARY_PATH="/usr/local/lib"
command=/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 5000 Project.asgi:channel_layer
directory=/opt/python/current/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/daphne.out.log
[program:Worker]
environment=PATH="/opt/python/run/venv/bin"
environment=LD_LIBRARY_PATH="/usr/local/lib"
command=/opt/python/run/venv/bin/python manage.py runworker -v2
directory=/opt/python/current/app
process_name=%(program_name)s_%(process_num)02d
numprocs=4
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/workers.out.log
daphne.out.log
2017-03-05 00:58:24,168 INFO Starting server at tcp:port=5000:interface=0.0.0.0, channel layer Project.asgi:channel_layer.
2017-03-05 00:58:24,179 INFO Using busy-loop synchronous mode on channel layer
2017-03-05 00:58:24,182 INFO Listening on endpoint tcp:port=5000:interface=0.0.0.0
workers.out.log
2017-03-05 00:58:25,017 - INFO - runworker - Using single-threaded worker.
2017-03-05 00:58:25,019 - INFO - runworker - Using single-threaded worker.
2017-03-05 00:58:25,010 - INFO - runworker - Using single-threaded worker.
2017-03-05 00:58:25,020 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisChannelLayer)
2017-03-05 00:58:25,020 - INFO - worker - Listening on channels chat.receive, http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-03-05 00:58:25,021 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisChannelLayer)
2017-03-05 00:58:25,021 - INFO - worker - Listening on channels chat.receive, http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-03-05 00:58:25,021 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisChannelLayer)
2017-03-05 00:58:25,022 - INFO - worker - Listening on channels chat.receive, http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-03-05 00:58:25,029 - INFO - runworker - Using single-threaded worker.
2017-03-05 00:58:25,029 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisChannelLayer)
2017-03-05 00:58:25,030 - INFO - worker - Listening on channels chat.receive, http.request, websocket.connect, websocket.disconnect, websocket.receive
JavaScript code that runs before failure
// Correctly decide between ws:// and wss://
var ws_scheme = window.location.protocol == "https:" ? "wss" : "ws";
var ws_path = ws_scheme + '://' + window.location.host + "/ws/";
console.log("Connecting to " + ws_path);
var socket = new ReconnectingWebSocket(ws_path);
Notably, there is no relevant output in the Daphne or worker logs, which implies the connection is probably not being routed to Daphne correctly in the first place.
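One way to confirm that is to attach the standard WebSocket event handlers and log the close code (ReconnectingWebSocket exposes the same callbacks); a minimal sketch:
socket.onopen = function() {
    console.log("WebSocket open");
};
socket.onclose = function(e) {
    // code 1006 (abnormal closure) typically means the connection was dropped before the handshake completed
    console.log("WebSocket closed: code=" + e.code + " reason=" + e.reason);
};
socket.onerror = function(e) {
    console.log("WebSocket error", e);
};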

Everything was set up properly; it turned out to be a permissions issue. Pay close attention to all relevant AWS security groups (both the load balancer's and those of the instances in its target group).
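As a rough illustration of what to check (the group IDs below are placeholders), the instances' security group needs an inbound rule that lets the load balancer's security group reach the Daphne port; something along these lines with the AWS CLI:
# Inspect the inbound rules on the instances' security group (sg-instance123 is a placeholder)
aws ec2 describe-security-groups --group-ids sg-instance123

# Allow the ALB's security group (sg-alb456, placeholder) to reach Daphne on port 5000
aws ec2 authorize-security-group-ingress \
    --group-id sg-instance123 \
    --protocol tcp \
    --port 5000 \
    --source-group sg-alb456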

Related

Django Channels unable to connect (find) WebSocket after docker-compose of project using Redis

I have currently implemented WebSocket connections via Django Channels using a Redis layer.
I'm new to Docker and not sure where I might have made a mistake. After docker-compose up -d --build, the static files, media, database and Gunicorn WSGI server all function, but Redis won't connect, even though it is running in the background.
Before trying to containerize the application with Docker, it worked well with:
python manage.py runserver
with the following settings.py section for the Redis layer:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("0.0.0.0", 6379)],
        },
    },
}
and by running a Docker container for the Redis layer:
docker run -p 6379:6379 -d redis:5
But after trying to containerize the entire application, it was unable to find the WebSocket.
The new docker-compose setup is as follows:
version: '3.10'
services:
  web:
    container_name: web
    build:
      context: ./app
      dockerfile: Dockerfile
    command: bash -c "gunicorn core.wsgi:application --bind 0.0.0.0:8000"
    volumes:
      - ./app/:/usr/src/app/
      - static_volume:/usr/src/app/staticfiles/
      - media_volume:/usr/src/app/media/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
    networks:
      - app_network
  redis:
    container_name: redis
    image: redis:5
    ports:
      - 6379:6379
    networks:
      - app_network
    restart: on-failure
  db:
    container_name: db
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - ./.env.psql
    ports:
      - 5432:5432
    networks:
      - app_network
volumes:
  postgres_data:
  static_volume:
  media_volume:
networks:
  app_network:
with this settings.py:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("redis", 6379)],
        },
    },
}
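A quick sanity check for this kind of setup is to verify from inside the containers that the hostname redis resolves and accepts connections, for example (using the service names from the compose file above):
# Ping Redis from its own container
docker-compose exec redis redis-cli ping

# Confirm the web container can reach the "redis" service by name on port 6379
docker-compose exec web python -c "import socket; socket.create_connection(('redis', 6379), timeout=2); print('redis reachable')"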
After successfully building the containers and running docker-compose logs -f:
Attaching to web, db, redis
db | The files belonging to this database system will be owned by user "postgres".
db | This user must also own the server process.
db |
db | The database cluster will be initialized with locale "en_US.utf8".
db | The default database encoding has accordingly been set to "UTF8".
db | The default text search configuration will be set to "english".
db |
db | Data page checksums are disabled.
db |
db | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db | creating subdirectories ... ok
db | selecting dynamic shared memory implementation ... posix
db | selecting default max_connections ... 100
db | selecting default shared_buffers ... 128MB
db | selecting default time zone ... Etc/UTC
db | creating configuration files ... ok
db | running bootstrap script ... ok
db | performing post-bootstrap initialization ... ok
db | initdb: warning: enabling "trust" authentication for local connections
db | You can change this by editing pg_hba.conf or using the option -A, or
db | --auth-local and --auth-host, the next time you run initdb.
db | syncing data to disk ... ok
db |
db |
db | Success. You can now start the database server using:
db |
db | pg_ctl -D /var/lib/postgresql/data -l logfile start
db |
db | waiting for server to start....2022-06-27 16:18:30.303 UTC [48] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
db | 2022-06-27 16:18:30.310 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db | 2022-06-27 16:18:30.334 UTC [49] LOG: database system was shut down at 2022-06-27 16:18:29 UTC
db | 2022-06-27 16:18:30.350 UTC [48] LOG: database system is ready to accept connections
db | done
db | server started
db | CREATE DATABASE
db |
db |
db | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db |
db | 2022-06-27 16:18:31.587 UTC [48] LOG: received fast shutdown request
db | waiting for server to shut down....2022-06-27 16:18:31.596 UTC [48] LOG: aborting any active transactions
db | 2022-06-27 16:18:31.601 UTC [48] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1
db | 2022-06-27 16:18:31.602 UTC [50] LOG: shutting down
db | 2022-06-27 16:18:31.650 UTC [48] LOG: database system is shut down
db | done
db | server stopped
db |
db | PostgreSQL init process complete; ready for start up.
db |
db | 2022-06-27 16:18:31.800 UTC [1] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
db | 2022-06-27 16:18:31.804 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db | 2022-06-27 16:18:31.804 UTC [1] LOG: listening on IPv6 address "::", port 5432
db | 2022-06-27 16:18:31.810 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db | 2022-06-27 16:18:31.818 UTC [62] LOG: database system was shut down at 2022-06-27 16:18:31 UTC
db | 2022-06-27 16:18:31.825 UTC [1] LOG: database system is ready to accept connections
redis | 1:C 27 Jun 2022 16:18:29.080 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis | 1:C 27 Jun 2022 16:18:29.080 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=1, just started
redis | 1:C 27 Jun 2022 16:18:29.080 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis | 1:M 27 Jun 2022 16:18:29.082 * Running mode=standalone, port=6379.
redis | 1:M 27 Jun 2022 16:18:29.082 # Server initialized
redis | 1:M 27 Jun 2022 16:18:29.082 * Ready to accept connections
web | Waiting for postgres...
web | PostgreSQL started
web | Waiting for redis...
web | redis started
web | [2022-06-27 16:18:33 +0000] [1] [INFO] Starting gunicorn 20.1.0
web | [2022-06-27 16:18:33 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
web | [2022-06-27 16:18:33 +0000] [1] [INFO] Using worker: sync
web | [2022-06-27 16:18:33 +0000] [8] [INFO] Booting worker with pid: 8
web | [2022-06-27 16:19:18 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:8)
web | [2022-06-27 18:19:18 +0200] [8] [INFO] Worker exiting (pid: 8)
web | [2022-06-27 16:19:18 +0000] [9] [INFO] Booting worker with pid: 9
web | Not Found: /ws/user_consumer/1/
web | Not Found: /ws/accueil/accueil/
web | Not Found: /ws/user_consumer/1/
web | Not Found: /ws/accueil/accueil/
And the docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cb3e489e0831 dermatology-project_web "/usr/src/app/entryp…" 35 minutes ago Up 35 minutes 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp web
aee14c8665d0 postgres "docker-entrypoint.s…" 35 minutes ago Up 35 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp db
94c29591b352 redis:5 "docker-entrypoint.s…" 35 minutes ago Up 35 minutes 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp redis
The build Dockerfile:
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apt-get update
RUN apt-get install -y libpq-dev python3-pip python-dev postgresql postgresql-contrib netcat
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# create the appropriate directories for staticfiles
# copy project
COPY . .
# staticfiles
RUN python manage.py collectstatic --no-input --clear
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
and the entrypoint that checks the connections:
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi

if [ "$CHANNEL" = "redis" ]
then
    echo "Waiting for redis..."
    while ! nc -z $REDIS_HOST $REDIS_PORT; do
        sleep 0.1
    done
    echo "redis started"
fi

#python manage.py flush --no-input
#python manage.py migrate

exec "$@"
I have also tried running the Redis container separately as before while keeping the working containers, but that doesn't work either. I have also tried running Daphne on a different port and passing the ASGI application (daphne -p 8001 myproject.asgi:application), and that didn't work either.
Thank you
I managed to find a solution eventually.
To make it work I needed to run the WSGI and ASGI servers separately from each other, each in its own container. Also, the previous "web" service that exposed the application ports had to be split into two services, one per server, with Nginx proxies that upstream to each respective port.
This was all thanks to this genius of a man:
https://github.com/pplonski/simple-tasks
Here he explains what I needed and more. He also uses celery workers to manage the asynchronous task queue/job queue based on distributed message passing, which was a bit overkill for my project but beautiful.
New docker-compose:
version: '2'
services:
  nginx:
    container_name: nginx
    restart: always
    build: ./nginx
    ports:
      - 1337:80
    volumes:
      - static_volume:/usr/src/app/staticfiles/
      - media_volume:/usr/src/app/media/
    depends_on:
      - wsgiserver
      - asgiserver
  postgres:
    container_name: postgres
    restart: always
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - 5433:5432
    expose:
      - 5432
    environment:
      - ./.env.db
  redis:
    container_name: redis
    image: redis:5
    restart: unless-stopped
    ports:
      - 6378:6379
  wsgiserver:
    build:
      context: ./app
      dockerfile: Dockerfile
    container_name: wsgiserver
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    env_file:
      - ./.env.dev
    volumes:
      - ./app/:/usr/src/app/
      - static_volume:/usr/src/app/staticfiles/
      - media_volume:/usr/src/app/media/
    links:
      - postgres
      - redis
    expose:
      - 8000
  asgiserver:
    build:
      context: ./app
      dockerfile: Dockerfile
    container_name: asgiserver
    command: daphne core.asgi:application -b 0.0.0.0 -p 9000
    env_file:
      - ./.env.dev
    volumes:
      - ./app/:/usr/src/app/
    links:
      - postgres
      - redis
    expose:
      - 9000
volumes:
  static_volume:
  media_volume:
  postgres_data:
New entrypoint.sh:
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi

#python manage.py flush --no-input
#python manage.py migrate

exec "$@"
New Nginx config (nginx.conf):
server {
    listen 80;

    # gunicorn wsgi server
    location / {
        try_files $uri @proxy_api;
    }
    location @proxy_api {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://wsgiserver:8000;
    }

    # ASGI
    # map websocket connections to daphne
    location /ws {
        try_files $uri @proxy_to_ws;
    }
    location @proxy_to_ws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_pass http://asgiserver:9000;
    }

    # static and media files
    location /static/ {
        alias /usr/src/app/staticfiles/;
    }
    location /media/ {
        alias /usr/src/app/media/;
    }
}
Dockerfile for nginx:
FROM nginx:1.21
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
Note
If anyone is using this as a reference: this is not a production-ready setup, and further steps are needed.
This article explains the remaining steps, as well as securing the application on AWS with Docker and Let's Encrypt (linked in its conclusion):
https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/#conclusion

Nginx, Gunicorn, Django, Celery (Redis): upstream prematurely closed connection, 502 gateway

I run a docker-compose setup on a Linux server. Two days ago I added Gunicorn + Nginx to the setup. Unfortunately, all REST API endpoints that start Celery tasks stopped working (they return a 502 Bad Gateway).
When I submit the POST form that calculates shortest paths, which starts a Celery task, a 502 Bad Gateway is returned.
Issue:
Summary
URL: http://192.168.0.150:8001/tspweb/calculate_shortest_paths/
Status: 502 Bad Gateway
Source: Network
Address: 192.168.0.150:8001
Here are the logs from the Django container and the Nginx container.
tspoptimization | [2018-10-31 07:26:30 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:15)
nginx_1 | 2018/10/31 07:26:30 [error] 8#8: *9 upstream prematurely closed connection while reading response header from upstream, client: 192.168.0.103, server: localhost, request: "POST /tspweb/calculate_shortest_paths/ HTTP/1.1", upstream: "http://192.168.128.2:8001/tspweb/calculate_shortest_paths/", host: "192.168.0.150:8001", referrer: "http://192.168.0.150:8001/tspweb/warehouse_list.html"
nginx_1 | 192.168.0.103 - - [31/Oct/2018:07:26:30 +0000] "POST /tspweb/calculate_shortest_paths/ HTTP/1.1" 502 157 "http://192.168.0.150:8001/tspweb/warehouse_list.html" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Safari/605.1.15" "-
Everything was working perfectly before Gunicorn + Nginx were added (on my local system without those two it works perfectly fine), so it is not a timeout issue.
My suspicion is that Nginx + Gunicorn doesn't 'redirect' the POST request from the form to Celery. I started Celery with logging to a file, and this is the content of the Celery log file:
root@4fb6e101a85b:/opt/services/djangoapp/src# cat logmato.log
[2018-10-31 07:12:04,400: INFO/MainProcess] Connected to redis://redis:6379//
[2018-10-31 07:12:04,409: INFO/MainProcess] mingle: searching for neighbors
[2018-10-31 07:12:05,430: INFO/MainProcess] mingle: all alone
[2018-10-31 07:12:05,446: WARNING/MainProcess] /usr/local/lib/python3.6/site-packages/celery/fixups/django.py:200: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2018-10-31 07:12:05,446: INFO/MainProcess] celery@4fb6e101a85b ready.
[2018-10-31 07:14:07,802: INFO/MainProcess] Connected to redis://redis:6379//
[2018-10-31 07:14:07,813: INFO/MainProcess] mingle: searching for neighbors
[2018-10-31 07:14:08,835: INFO/MainProcess] mingle: all alone
[2018-10-31 07:14:08,853: WARNING/MainProcess] /usr/local/lib/python3.6/site-packages/celery/fixups/django.py:200: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2018-10-31 07:14:08,853: INFO/MainProcess] celery@4fb6e101a85b ready.
As you can see from the logs, the Celery workers didn't start a single task, which means the problem is not in Celery or Redis but somewhere in the chain between Nginx, Gunicorn, Django and Celery.
Here is my docker-compose file:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks: # <-- connect to the bridge
      - database_network
  redis:
    image: "redis:alpine"
    expose:
      - "5672"
  django:
    build: .
    restart: always
    container_name: tspoptimization
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/src/tspweb/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/src/tspweb/media # <-- bind the media volume
    depends_on:
      - db
      - redis
    networks:
      - nginx_network
      - database_network
  celery:
    build: .
    command: celery -A tspoptimization worker -l info
    volumes:
      - .:/code
    depends_on:
      - db
      - redis
      - django
    links:
      - redis
  nginx:
    image: nginx:latest
    ports:
      - 8001:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/opt/services/djangoapp/src/tspweb/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/src/tspweb/media # <-- bind the media volume
    depends_on:
      - django
    networks:
      - nginx_network
networks:
  nginx_network:
    driver: bridge
  database_network: # <-- add the bridge
    driver: bridge
volumes:
  postgres_data:
  static_volume:
  media_volume:
Here is the nginx conf:
upstream hello_server {
    server django:8001;
}

server {
    listen 80;
    server_name localhost;

    location / {
        # everything is passed to Gunicorn
        proxy_pass http://hello_server;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /tspweb/static/ {
        alias /opt/services/djangoapp/src/tspweb/static/;
    }

    location /tspweb/media/ {
        alias /opt/services/djangoapp/src/tspweb/media/;
    }
}
My Django settings:
DEBUG = True
ALLOWED_HOSTS = ['*']
CELERY_BROKER_URL = 'redis://redis:6379'
STATIC_URL = '/tspweb/static/'
STATIC_ROOT = os.path.join(os.path.dirname(os.path.dirname(BASE_DIR)), '/tspweb/static')
MEDIA_URL = '/tspweb/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'tspweb/media')
And lastly, the dockerfile:
FROM python:3.6
RUN mkdir -p /opt/services/djangoapp/src
WORKDIR /opt/services/djangoapp/src
ADD . /opt/services/djangoapp/src
EXPOSE 8001
RUN pip install -r requirements.txt
RUN python manage.py collectstatic --no-input
CMD ["gunicorn", "--bind", ":8001", "tspoptimization.wsgi"]
Any help on how to resolve this issue?
I fixed this issue myself, so here is the answer:
Redis and Celery must be attached to the same Docker virtual networks as the rest of the stack (nginx_network and database_network).
This is the docker-compose file that is working and the tasks are sent properly:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks: # <-- connect to the bridge
      - database_network
  redis:
    image: "redis:latest"
    expose:
      - "5672"
    networks:
      - database_network
      - nginx_network
  django:
    build: .
    restart: always
    container_name: tspoptimization
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/src/tspweb/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/src/tspweb/media # <-- bind the media volume
    depends_on:
      - db
      - redis
    networks:
      - nginx_network
      - database_network
  celery:
    build: .
    command: celery -A tspoptimization worker -l info
    volumes:
      - .:/code
    depends_on:
      - db
      - redis
      - django
    links:
      - redis
    networks:
      - nginx_network
      - database_network
  nginx:
    image: nginx:latest
    ports:
      - 8001:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/opt/services/djangoapp/src/tspweb/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/src/tspweb/media # <-- bind the media volume
    depends_on:
      - django
    networks:
      - nginx_network
networks:
  nginx_network:
    driver: bridge
  database_network: # <-- add the bridge
    driver: bridge
volumes:
  postgres_data:
  static_volume:
  media_volume:
Actually, I don't know whether this is the way it should be handled, since I am not a DevOps professional, but at least it works for now.

Error during WebSocket handshake with user endpoint once Django Channels app is deployed

I'm trying to set up a WebSocket connection at the user level in my Django app for receiving notifications. Production is HTTPS, so I need to use wss.
Here is the JS:
$( document ).ready(function() {
    socket = new WebSocket("wss://" + window.location.host + "/user_id/");
    var $notifications = $('.notifications');
    var $notificationList = $('.dropdown-menu.notification-list');
    $notifications.click(function(){
        $(this).removeClass('newNotification')
    });
    socket.onmessage = function(e) {
        // notification stuff
    }
    // Call onopen directly if socket is already open
    if (socket.readyState == WebSocket.OPEN) socket.onopen();
});
I've implemented django-channels in settings.py like this:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [ENV_STR("REDIS_URL")]
        },
        "ROUTING": "project.routing.channel_routing",
    },
}
routing.py
from channels.routing import route
from community.consumers import ws_add, ws_disconnect

channel_routing = [
    route("websocket.connect", ws_add),
    route("websocket.disconnect", ws_disconnect),
]
Locally, this handshakes just fine.
2017-04-16 16:37:04,108 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-04-16 16:37:04,109 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-04-16 16:37:04,109 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-04-16 16:37:04,110 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-04-16 16:37:04,111 - INFO - server - HTTP/2 support not enabled (install the http2 and tls Twisted extras)
2017-04-16 16:37:04,112 - INFO - server - Using busy-loop synchronous mode on channel layer
2017-04-16 16:37:04,112 - INFO - server - Listening on endpoint tcp:port=8000:interface=127.0.0.1
[2017/04/16 16:37:22] HTTP GET / 200 [0.55, 127.0.0.1:60129]
[2017/04/16 16:37:23] WebSocket HANDSHAKING /user_id/ [127.0.0.1:60136]
[2017/04/16 16:37:23] WebSocket CONNECT /user_id/ [127.0.0.1:60136]
[2017/04/16 16:37:25] HTTP GET /user/test10 200 [0.47, 127.0.0.1:60129]
[2017/04/16 16:37:25] WebSocket DISCONNECT /user_id/ [127.0.0.1:60136]
[2017/04/16 16:37:26] WebSocket HANDSHAKING /user_id/ [127.0.0.1:60153]
[2017/04/16 16:37:26] WebSocket CONNECT /user_id/ [127.0.0.1:60153]
However, now that the app has been deployed to Heroku, the /user_id/ endpoint is returning 404, and I'm getting ERR_DISALLOWED_URL_SCHEME when it should be a valid endpoint:
WebSocket connection to 'wss://mydomain.com/user_id/' failed: Error during WebSocket handshake: Unexpected response code: 404.
As I continue to research, it seems this server config can't actually support WebSockets in production. Current Procfile:
web: gunicorn project.wsgi:application --log-file -
worker: python manage.py runworker -v2
I spent a few hours looking at converting the app to ASGI and using Daphne, although since the project is on Python 2.7 that's a difficult conversion (not to mention it seems to require a different static file implementation).
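For reference, a Channels 1.x deployment along those lines would serve both HTTP and WebSockets through Daphne and run the workers as a separate process type; a rough Procfile sketch (the project.asgi:channel_layer module path is assumed, not taken from this app):
web: daphne project.asgi:channel_layer --port $PORT --bind 0.0.0.0 -v2
worker: python manage.py runworker -v2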

Django Channels runs three workers, is it normal?

I have a very simple setup of a Django project with Channels, following the documentation:
https://channels.readthedocs.io/en/stable/getting-started.html
In settings:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgiref.inmemory.ChannelLayer",
        "ROUTING": "core.routing.channel_routing",
    },
}
In routing.py:
from channels.routing import route
from apps.prices.consumers import get_prices

channel_routing = [
    route('get_prices', get_prices),
]
And when I run:
python manage.py runserver
it prints:
2016-12-24 23:49:05,202 - INFO - worker - Listening on channels get_prices, http.request, websocket.connect, websocket.receive
2016-12-24 23:49:05,202 - INFO - worker - Listening on channels get_prices, http.request, websocket.connect, websocket.receive
2016-12-24 23:49:05,203 - INFO - worker - Listening on channels get_prices, http.request, websocket.connect, websocket.receive
2016-12-24 23:49:05,207 - INFO - server - Using busy-loop synchronous mode on channel layer
Do three workers mean that something went wrong, or is it normal? Everything else works fine.
Big thanks for any advice.
Locally, when I run the ./manage.py runserver command I get 4 workers by default.
This likely comes from this line in the Channels runserver command: https://github.com/django/channels/blob/a3f4e002eeebbf7c2412d9623e4e9809cfe32ba5/channels/management/commands/runserver.py#L80
To have a single worker running, you can use the Channels command ./manage.py runworker.
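As an aside, if you want the worker count to be explicit, Channels 1.x also lets you disable the auto-started workers and run them yourself; a rough sketch (assuming the Channels 1.x --noworker runserver flag):
# Serve HTTP/WebSockets only, without spawning in-process workers
python manage.py runserver --noworker

# In a separate shell, run exactly one worker (start more processes if needed)
python manage.py runworker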

Logging stdout to gunicorn access log?

When I wrap my Flask application in Gunicorn, writing to stdout no longer seems to go anywhere (simple print statements don't appear). Is there some way to either capture stdout into the Gunicorn access log, or get a handle to the access log and write to it directly?
Use logging: set the stream to stdout.
import logging
import sys

app.logger.addHandler(logging.StreamHandler(sys.stdout))
app.logger.setLevel(logging.DEBUG)
app.logger.debug("Hello World")
There are two solutions to this problem. They are probably longer than the others, but ultimately they tap into how logging is done under the hood in Python.
1. Set the logging configuration in the Flask app
The official Flask documentation on logging works for Gunicorn: https://flask.palletsprojects.com/en/1.1.x/logging/#basic-configuration
Some example code to try out:
from logging.config import dictConfig
from flask import Flask

dictConfig(
    {
        "version": 1,
        "disable_existing_loggers": False,
        "formatters": {
            "default": {
                "format": "[%(asctime)s] [%(process)d] [%(levelname)s] in %(module)s: %(message)s",
                "datefmt": "%Y-%m-%d %H:%M:%S %z",
            }
        },
        "handlers": {
            "wsgi": {
                "class": "logging.StreamHandler",
                "stream": "ext://flask.logging.wsgi_errors_stream",
                "formatter": "default",
            }
        },
        "root": {"level": "DEBUG", "handlers": ["wsgi"]},
    }
)

app = Flask(__name__)


@app.route("/")
def hello():
    app.logger.debug("this is a DEBUG message")
    app.logger.info("this is an INFO message")
    app.logger.warning("this is a WARNING message")
    app.logger.error("this is an ERROR message")
    app.logger.critical("this is a CRITICAL message")
    return "hello world"
Run it with gunicorn:
gunicorn -w 2 -b 127.0.0.1:5000 --access-logfile - app:app
Request it using curl:
curl http://127.0.0.1:5000
This generates the following logs:
[2020-09-04 11:24:43 +0200] [2724300] [INFO] Starting gunicorn 20.0.4
[2020-09-04 11:24:43 +0200] [2724300] [INFO] Listening at: http://127.0.0.1:5000 (2724300)
[2020-09-04 11:24:43 +0200] [2724300] [INFO] Using worker: sync
[2020-09-04 11:24:43 +0200] [2724311] [INFO] Booting worker with pid: 2724311
[2020-09-04 11:24:43 +0200] [2724322] [INFO] Booting worker with pid: 2724322
[2020-09-04 11:24:45 +0200] [2724322] [DEBUG] in flog: this is a DEBUG message
[2020-09-04 11:24:45 +0200] [2724322] [INFO] in flog: this is an INFO message
[2020-09-04 11:24:45 +0200] [2724322] [WARNING] in flog: this is a WARNING message
[2020-09-04 11:24:45 +0200] [2724322] [ERROR] in flog: this is an ERROR message
[2020-09-04 11:24:45 +0200] [2724322] [CRITICAL] in flog: this is a CRITICAL message
127.0.0.1 - - [04/Sep/2020:11:24:45 +0200] "GET / HTTP/1.1" 200 11 "-" "curl/7.68.0"
2. Set the logging configuration in Gunicorn
Use the same application code as above, but without the dictConfig({...}) section, and create a logging.ini file:
[loggers]
keys=root
[handlers]
keys=consoleHandler
[formatters]
keys=simpleFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler
[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)
[formatter_simpleFormatter]
format=[%(asctime)s] [%(process)d] [%(levelname)s] - %(module)s - %(message)s
datefmt=%Y-%m-%d %H:%M:%S %z
Run gunicorn with the --log-config logging.ini option, i.e. gunicorn -w 2 -b 127.0.0.1:5000 --access-logfile - --log-config logging.ini app:app
The solution from John mee works, but it duplicates log entries in the stdout from gunicorn.
I used this:
import logging
from flask import Flask

app = Flask(__name__)

if __name__ != '__main__':
    gunicorn_logger = logging.getLogger('gunicorn.error')
    app.logger.handlers = gunicorn_logger.handlers
    app.logger.setLevel(gunicorn_logger.level)
I got this from: https://medium.com/@trstringer/logging-flask-and-gunicorn-the-manageable-way-2e6f0b8beb2f
You can redirect standard output to the errorlog file, which is enough for me.
Note this setting:
capture_output
--capture-output
Default: False
Redirect stdout/stderr to the specified file in errorlog.
My config file gunicorn.config.py settings:
accesslog = 'gunicorn.log'
errorlog = 'gunicorn.error.log'
capture_output = True
Then run with gunicorn app_py:myapp -c gunicorn.config.py
The equivalent command line would be:
gunicorn app_py:myapp --error-logfile gunicorn.error.log --access-logfile gunicorn.log --capture-output