I have a very simple Django project set up with Channels, following the documentation:
https://channels.readthedocs.io/en/stable/getting-started.html
In settings:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgiref.inmemory.ChannelLayer",
        "ROUTING": "core.routing.channel_routing",
    },
}
In routing.py:
from channels.routing import route
from apps.prices.consumers import get_prices

channel_routing = [
    route('get_prices', get_prices),
]
And when I run:
python manage.py runserver
it prints:
2016-12-24 23:49:05,202 - INFO - worker - Listening on channels get_prices, http.request, websocket.connect, websocket.receive
2016-12-24 23:49:05,202 - INFO - worker - Listening on channels get_prices, http.request, websocket.connect, websocket.receive
2016-12-24 23:49:05,203 - INFO - worker - Listening on channels get_prices, http.request, websocket.connect, websocket.receive
2016-12-24 23:49:05,207 - INFO - server - Using busy-loop synchronous mode on channel layer
Three workers are listed. Does that mean something went wrong, or is this normal? Everything else works fine.
Thanks in advance for any advice.
Locally, when I run ./manage.py runserver, I get 4 workers by default.
This is likely due to this line in the Channels runserver command: https://github.com/django/channels/blob/a3f4e002eeebbf7c2412d9623e4e9809cfe32ba5/channels/management/commands/runserver.py#L80
To run a single worker, use the Channels command ./manage.py runworker.
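To keep the development server but control the worker count yourself, one approach (a sketch, assuming the Channels 1.x runserver --noworker flag) is to split the interface server and the worker into two terminals:

```shell
# Terminal 1: interface server only, without auto-spawned worker threads (Channels 1.x)
python manage.py runserver --noworker

# Terminal 2: exactly one worker process
python manage.py runworker
```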
Related
I have a Django app using Django Channels and DjangoChannelsRestFramework. It establishes a WebSocket connection with a ReactJS frontend. For the channel layer I use Redis, like this:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("redis", 6379)],
        },
    },
}
Redis and Django run in Docker. My Redis Docker setup is:
redis:
  image: "redis:7.0.4-alpine"
  command: redis-server
  ports:
    - "6379:6379"
  networks:
    - nginx_network
When I run my app on the production server, everything works for 5-8 hours. But after that period, if the Django app tries to send a message over the WebSocket, it fails with the error:
ReadOnlyError at /admin/operations/operation/add/
READONLY You can't write against a read only replica.
Request Method: POST
Request URL: http://62.84.123.168/admin/operations/operation/add/
Django Version: 3.2.12
Exception Type: ReadOnlyError
Exception Value:
READONLY You can't write against a read only replica.
Exception Location: /usr/local/lib/python3.8/site-packages/channels_redis/core.py, line 673, in group_send
Python Executable: /usr/local/bin/python
Python Version: 3.8.13
Python Path:
['/opt/code',
'/usr/local/bin',
'/usr/local/lib/python38.zip',
'/usr/local/lib/python3.8',
'/usr/local/lib/python3.8/lib-dynload',
'/usr/local/lib/python3.8/site-packages']
Server time: Tue, 02 Aug 2022 08:23:18 +0300
I understand that it is somehow connected with Redis replication, but I have no idea why it fails after a period of time or how to fix it.
I had the same error; a possible solution is to disable the replica-read-only config by adding a command to the Redis service in your docker-compose file:
command: redis-server --appendonly yes --replica-read-only no
Then you can verify that replica-read-only is disabled with the redis-cli command config get replica-read-only; if the result is no, it was disabled successfully.
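Applied to the compose file from the question above, the Redis service would then look like this (a sketch; image, service name, and network kept from the question):

```yaml
redis:
  image: "redis:7.0.4-alpine"
  # appendonly enables persistence; replica-read-only off avoids the READONLY error
  command: redis-server --appendonly yes --replica-read-only no
  ports:
    - "6379:6379"
  networks:
    - nginx_network
```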
I want to monitor my database using Prometheus, Django REST Framework, and Docker.
Everything is on my local machine. The problem is with the URL http://127.0.0.1:9000/metrics; http://127.0.0.1:9000 is the base of my API, and I don't know what the problem is. My configuration is below.
my requirements.txt
django-prometheus
my file docker: docker-compose-monitoring.yml
version: '2'
services:
  prometheus:
    image: prom/prometheus:v2.14.0
    volumes:
      - ./prometheus/:/etc/prometheus/
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - 9090:9090
  grafana:
    image: grafana/grafana:6.5.2
    ports:
      - 3060:3060
my prometheus/prometheus.yml file:
global:
  scrape_interval: 15s
rule_files:
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets:
          - 127.0.0.1:9090
  - job_name: monitoring_api
    static_configs:
      - targets:
          - 127.0.0.1:9000
my settings.py file:
INSTALLED_APPS = [
    ...........
    'django_prometheus',
]
MIDDLEWARE = [
    'django_prometheus.middleware.PrometheusBeforeMiddleware',
    ......
    'django_prometheus.middleware.PrometheusAfterMiddleware',
]
my models.py file:
from django_prometheus.models import ExportModelOperationMixin

class MyModel(ExportModelOperationMixin('mymodel'), models.Model):
    """all my fields in here"""
my urls.py
url('', include('django_prometheus.urls')),
The application runs fine, and 127.0.0.1:9090/metrics works, but it only monitors that same URL, and I need to monitor a different one. I think the problem is in prometheus.yml, because I don't know how to point it at my table or my API. Please help me.
Thanks.
You need to change your Prometheus config and add a Python image in docker-compose, like this:
Prometheus config (prometheus.yml):
global:
  scrape_interval: 15s # how often Prometheus pulls data from exporters etc.
  evaluation_interval: 30s # time between each evaluation of Prometheus' alerting rules
scrape_configs:
  - job_name: django_project # your project name
    metrics_path: /metrics
    static_configs:
      - targets:
          - web:8000
docker-compose file for Prometheus and Django (you can also include a Grafana image; I have Grafana installed locally):
version: '3.7'
services:
  web:
    build:
      context: . # context is the path of your Dockerfile (here, the root dir)
    command: sh -c "python3 manage.py migrate &&
      gunicorn webapp.route.wsgi:application --pythonpath webapp --bind 0.0.0.0:8000"
    volumes:
      - .:/app
    ports:
      - "8000:8000"
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml # prometheus.yml in the root dir
Dockerfile:
FROM python:3.8
COPY ./webapp/django /app
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
For django-prometheus settings in Django, see:
https://pypi.org/project/django-prometheus/
Hit your Django app APIs.
Hit the localhost:8000/metrics endpoint.
Hit localhost:9090/, search for the required metric in the dropdown, and click Execute; it will show the result in the console and draw a graph.
To show the graph in Grafana, hit localhost:3000 and create a new dashboard.
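The /metrics endpoint serves Prometheus' plain-text exposition format, so as a quick sanity check you can fetch and parse it yourself. A minimal sketch (the sample payload below is made up for illustration; real django-prometheus output has many more series):

```python
def parse_metrics(text):
    """Parse Prometheus text exposition format into {sample: value}.

    Keeps only plain sample lines; comment lines (# HELP / # TYPE) are skipped.
    The part before the last space (metric name plus labels) is the key.
    """
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        samples[name] = float(value)
    return samples

# A made-up example payload, similar in shape to django-prometheus output:
payload = """\
# HELP django_http_requests_total Total count of requests
# TYPE django_http_requests_total counter
django_http_requests_total{method="GET"} 42.0
"""
print(parse_metrics(payload))
```

This is only a debugging aid; Prometheus itself does the real scraping and storage.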
I'm trying to set up a WebSocket connection at the user level in my Django app for receiving notifications. Prod is HTTPS, so I need to use wss.
Here is the JS:
$( document ).ready(function() {
    socket = new WebSocket("wss://" + window.location.host + "/user_id/");
    var $notifications = $('.notifications');
    var $notificationList = $('.dropdown-menu.notification-list');
    $notifications.click(function(){
        $(this).removeClass('newNotification');
    });
    socket.onmessage = function(e) {
        // notification stuff
    };
    // Call onopen directly if the socket is already open
    if (socket.readyState == WebSocket.OPEN) socket.onopen();
});
I've implemented django-channels in settings.py like this:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [ENV_STR("REDIS_URL")]
        },
        "ROUTING": "project.routing.channel_routing",
    },
}
routing.py
from channels.routing import route
from community.consumers import ws_add, ws_disconnect

channel_routing = [
    route("websocket.connect", ws_add),
    route("websocket.disconnect", ws_disconnect),
]
Locally, this handshakes just fine:
2017-04-16 16:37:04,108 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-04-16 16:37:04,109 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-04-16 16:37:04,109 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-04-16 16:37:04,110 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-04-16 16:37:04,111 - INFO - server - HTTP/2 support not enabled (install the http2 and tls Twisted extras)
2017-04-16 16:37:04,112 - INFO - server - Using busy-loop synchronous mode on channel layer
2017-04-16 16:37:04,112 - INFO - server - Listening on endpoint tcp:port=8000:interface=127.0.0.1
[2017/04/16 16:37:22] HTTP GET / 200 [0.55, 127.0.0.1:60129]
[2017/04/16 16:37:23] WebSocket HANDSHAKING /user_id/ [127.0.0.1:60136]
[2017/04/16 16:37:23] WebSocket CONNECT /user_id/ [127.0.0.1:60136]
[2017/04/16 16:37:25] HTTP GET /user/test10 200 [0.47, 127.0.0.1:60129]
[2017/04/16 16:37:25] WebSocket DISCONNECT /user_id/ [127.0.0.1:60136]
[2017/04/16 16:37:26] WebSocket HANDSHAKING /user_id/ [127.0.0.1:60153]
[2017/04/16 16:37:26] WebSocket CONNECT /user_id/ [127.0.0.1:60153]
However, now that the app has been deployed to Heroku, the /user_id/ endpoint returns 404, and I'm getting ERR_DISALLOWED_URL_SCHEME when it should be a valid endpoint:
WebSocket connection to 'wss://mydomain.com/user_id/' failed: Error during WebSocket handshake: Unexpected response code: 404.
As I continue to research, it seems this server config can't actually support WebSockets in prod. Current Procfile:
web: gunicorn project.wsgi:application --log-file -
worker: python manage.py runworker -v2
I spent a few hours looking at converting the app to ASGI and using Daphne, although since the project is Python 2.7 that's a difficult conversion (not to mention it seems to require a different static-file implementation).
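For reference, the ASGI entrypoint Daphne would be pointed at in Channels 1.x is just a small config module. A sketch, assuming the settings module is project.settings (names here are placeholders, not from the question):

```python
# asgi.py (Channels 1.x style), served with e.g.:
#   daphne -b 0.0.0.0 -p 8000 project.asgi:channel_layer
import os

from channels.asgi import get_channel_layer

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings")
channel_layer = get_channel_layer()
```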
I've set up an Application Load Balancer that redirects /ws/ requests to port 5000, where I have Daphne running along with 4 workers (reloaded via Supervisord). However, in the Chrome console I get the error
WebSocket connection to 'wss://api.example.com/ws/' failed: WebSocket is closed before the connection is established.
when attempting to connect to my WebSocket via simple JavaScript code (see Multichat for something quite close). Any ideas?
Routing.py
websocket_routing = [
    # Called when WebSockets connect
    route("websocket.connect", ws_connect),
    # Called when WebSockets get sent a data frame
    route("websocket.receive", ws_receive),
    # Called when WebSockets disconnect
    route("websocket.disconnect", ws_disconnect),
]
Settings.py
# Channel settings
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": ["redis://:xxxxx#xxxx-redis.xxxxx.1234.usxx.cache.amazonaws.com:6379/0"],
        },
        "ROUTING": "Project.routing.channel_routing",
    },
}
Supervisord.conf
[program:Daphne]
environment=PATH="/opt/python/run/venv/bin"
environment=LD_LIBRARY_PATH="/usr/local/lib"
command=/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 5000 Project.asgi:channel_layer
directory=/opt/python/current/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/daphne.out.log
[program:Worker]
environment=PATH="/opt/python/run/venv/bin"
environment=LD_LIBRARY_PATH="/usr/local/lib"
command=/opt/python/run/venv/bin/python manage.py runworker -v2
directory=/opt/python/current/app
process_name=%(program_name)s_%(process_num)02d
numprocs=4
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/workers.out.log
daphne.out.log
2017-03-05 00:58:24,168 INFO Starting server at tcp:port=5000:interface=0.0.0.0, channel layer Project.asgi:channel_layer.
2017-03-05 00:58:24,179 INFO Using busy-loop synchronous mode on channel layer
2017-03-05 00:58:24,182 INFO Listening on endpoint tcp:port=5000:interface=0.0.0.0
workers.out.log
2017-03-05 00:58:25,017 - INFO - runworker - Using single-threaded worker.
2017-03-05 00:58:25,019 - INFO - runworker - Using single-threaded worker.
2017-03-05 00:58:25,010 - INFO - runworker - Using single-threaded worker.
2017-03-05 00:58:25,020 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisChannelLayer)
2017-03-05 00:58:25,020 - INFO - worker - Listening on channels chat.receive, http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-03-05 00:58:25,021 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisChannelLayer)
2017-03-05 00:58:25,021 - INFO - worker - Listening on channels chat.receive, http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-03-05 00:58:25,021 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisChannelLayer)
2017-03-05 00:58:25,022 - INFO - worker - Listening on channels chat.receive, http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-03-05 00:58:25,029 - INFO - runworker - Using single-threaded worker.
2017-03-05 00:58:25,029 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisChannelLayer)
2017-03-05 00:58:25,030 - INFO - worker - Listening on channels chat.receive, http.request, websocket.connect, websocket.disconnect, websocket.receive
JavaScript code that runs before failure
// Correctly decide between ws:// and wss://
var ws_scheme = window.location.protocol == "https:" ? "wss" : "ws";
var ws_path = ws_scheme + '://' + window.location.host + "/ws/";
console.log("Connecting to " + ws_path);
var socket = new ReconnectingWebSocket(ws_path);
Evidently, there is no relevant output in the Daphne/worker logs, which implies the connection may not be getting routed correctly in the first place.
Everything was set up properly: 'twas a permissions issue. Pay close attention to all relevant AWS security groups (both the load balancer's and those of the instances in the target group).
I'm trying to use Redis as a broker for Celery for my Django project that uses Docker Compose. I can't figure out what exactly I've done wrong, but despite the fact that the console log messages are telling me that Redis is running and accepting connections (and indeed, when I do docker ps, I can see the container running), I still get an error about the connection being refused. I even did
docker exec -it <redis_container_name> redis-cli
ping
and saw that the response was PONG.
Here are the Celery settings in my settings.py:
BROKER_URL = 'redis://localhost:6379/0'
BROKER_TRANSPORT = 'redis'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ENABLE_UTC = True
CELERY_TIMEZONE = "UTC"
Here are the Redis container settings in my docker-compose.yml:
redis:
  image: redis
  ports:
    - "6379:6379"
I remembered to link the redis container with my web container as well. I can start up the server just fine, but I get the connection refused error when I try to upload anything to the site. What exactly is going wrong?
EDIT: I remembered to use VBoxManage to set up port forwarding so that I can access my site at localhost:8000 in the browser, so it doesn't seem like I need to use the VM's IP instead of localhost in my settings.py.
EDIT 2: If I replace localhost in the settings with either the IP address of the docker-machine VM or the IP address of the Redis container, then I very quickly get a false success message on my website when I upload a file, but nothing actually gets uploaded. The underlying upload function, insertIntoDatabase(), uses .delay().
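For context, inside a Docker Compose network each container resolves the other services by their service name, while localhost refers to the container itself. A sketch of what the broker settings would look like using the service name from the compose file above (assuming the service is named redis):

```python
# settings.py sketch: inside Docker Compose, reach Redis by its service
# name ("redis" here), not localhost -- localhost inside the Django
# container is the Django container itself, where nothing listens on 6379.
BROKER_URL = 'redis://redis:6379/0'
CELERY_RESULT_BACKEND = 'redis://redis:6379/0'
```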
I just had a similar problem after updating Celery from v3.1 to v4; according to this tutorial, BROKER_URL needed to be changed to CELERY_BROKER_URL in settings.py.
settings.py part
CELERY_BROKER_URL = 'redis://cache:6379/0'
CELERY_RESULT_BACKEND = 'redis://cache:6379/0'
docker-compose.yml part
version: '2'
services:
web:
container_name: django-container
*******
other options
*******
depends_on:
- cache
- db
cache:
container_name: redis-container
restart: always
image: redis:latest
Is Django running in a separate container that is linked to the Redis container? If so, you should have some environment variables with the IP and port that Django should use to connect to the Redis container. Set BROKER_URL to use the Redis IP and port env vars and you should be in business. Ditto for RESULT_BACKEND.
Reference docs for the env vars are here: Docker Compose docs
Here's some example code for how we use the automatically added env vars in one of our projects at OfferUp:
BROKER_TRANSPORT = "redis"
_REDIS_LOCATION = 'redis://{}:{}'.format(os.environ.get("REDIS_PORT_6379_TCP_ADDR"), os.environ.get("REDIS_PORT_6379_TCP_PORT"))
BROKER_URL = _REDIS_LOCATION + "/0"
CELERY_RESULT_BACKEND = _REDIS_LOCATION + "/1"
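The two variables used above are the Docker-links style REDIS_PORT_6379_TCP_ADDR / REDIS_PORT_6379_TCP_PORT. The same URL construction can be sketched with fallbacks for when the vars are unset (the default host and port here are made up for illustration):

```python
import os

def redis_location(default_host="redis", default_port="6379"):
    """Build a redis:// base URL from Docker-links style env vars,
    falling back to the given (illustrative) defaults when unset."""
    host = os.environ.get("REDIS_PORT_6379_TCP_ADDR", default_host)
    port = os.environ.get("REDIS_PORT_6379_TCP_PORT", default_port)
    return "redis://{}:{}".format(host, port)

# Separate logical databases for the broker and the result backend:
BROKER_URL = redis_location() + "/0"
CELERY_RESULT_BACKEND = redis_location() + "/1"
```

Using two database numbers (/0 and /1) keeps broker messages and task results from sharing a keyspace, mirroring the snippet above.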