Using WebSockets with Django on Google App Engine Flex - django

I'm currently trying to set up a Google App Engine Flex instance using the Django framework with django-channels. For my current project I need a websocket, so I'm trying to work through the tutorial offered in the Django Channels documentation: https://channels.readthedocs.io/en/latest/tutorial/
Currently I'm stuck on adding Redis to my Google App Engine Flex instance. I followed the Google documentation on setting up a Redis connection - unfortunately the example is in Flask: google doc
I assume my error is trivial, and I just need to connect Django's CHANNEL_LAYERS to Redis properly.
Executing sudo gcloud redis instances describe <redisname> --region=us-central1 gives me the following response:
Image: "Redis Description"
Executing sudo gcloud app describe gives this response:
I configured my app.yaml as follows:
# app.yaml
# [START runtime]
runtime: python
env: flex
entrypoint: daphne django_channels_heroku.asgi:application --port $PORT --bind 0.0.0.0
runtime_config:
  python_version: 3
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 7
# Update with Redis instance IP and port
env_variables:
  REDISHOST: '<the ip in "host" from "Redis Description" image above>'
  REDISPORT: '6379'
# Update with Redis instance network name
network:
  name: default
# [END runtime]
...and in my settings.py I added this as the Redis connection (which feels really wrong, btw):
# settings.py
import os
import redis

# settings.py stuff...

# connect to redis
redis_host = os.environ.get('REDISHOST', '127.0.0.1')
redis_port = int(os.environ.get('REDISPORT', 6379))
redis_client = redis.StrictRedis(host=redis_host, port=redis_port)

# Channels
ASGI_APPLICATION = "django_channels_heroku.routing.application"
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            "hosts": [('127.0.0.1', 6379)],
        },
    },
}
What am I doing wrong? How do I connect to Redis from Django correctly?
Here are some links:
https://cloud.google.com/memorystore/docs/redis/connect-redis-instance-flex
Django, Redis: Where to put connection-code
Deploying Django channels app on google flex engine
How to connect to Redis instance (memorystore) from Google's Standard App Engine (Python 3.7)
https://cloud.google.com/memorystore/docs/redis/quickstart-gcloud

My mistake was in settings.py.
Correct version:
# settings.py
import os

# settings stuff...

redis_host = os.environ.get('REDISHOST', '127.0.0.1')
redis_port = int(os.environ.get('REDISPORT', 6379))
# redis_client = redis.StrictRedis(host=redis_host, port=redis_port)  # this is not needed

# Channels
ASGI_APPLICATION = "django_channels_heroku.routing.application"
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            "hosts": [(redis_host, redis_port)],
        },
    },
}
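As a quick sanity check that the layer really reaches Redis (my own sketch, not part of the original fix; the group and channel names are arbitrary test values), you can run this from python manage.py shell:

import asyncio
from channels.layers import get_channel_layer

async def smoke_test():
    layer = get_channel_layer()  # built from the CHANNEL_LAYERS setting above
    await layer.group_add("smoke-test", "test-channel")  # round-trips to Redis
    await layer.group_send("smoke-test", {"type": "test.message"})

asyncio.run(smoke_test())

If this raises a connection error, the REDISHOST/REDISPORT values from app.yaml are not reaching the app.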

Related

Django Channels with AWS ElastiCache (cluster mode) in Docker

We are trying to deploy a Django Channels app with Docker and AWS ElastiCache (cluster mode enabled) for Redis. However, we are facing a MOVED error, where the cluster redirects requests to another node's IP.
Can anyone provide a solution for getting channel_layer to work with AWS ElastiCache in cluster mode?
FYI, we deployed our app on an EC2 server.
settings.py
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            "hosts": [('xxxx.clusterxxx.xxx.cache.amazonaws.com:xxx')],
        },
    },
}
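One thing worth double-checking, independent of cluster mode: the "hosts" entry above is a plain string wrapped in parentheses, not a (host, port) tuple. channels_redis accepts tuples or redis:// URL strings there, so (keeping the question's placeholders, with 6379 standing in for the elided port) it would normally look like:

CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            # a (host, port) tuple; 6379 is illustrative, the real port is elided above
            "hosts": [('xxxx.clusterxxx.xxx.cache.amazonaws.com', 6379)],
            # ...or equivalently a URL string:
            # "hosts": ["redis://xxxx.clusterxxx.xxx.cache.amazonaws.com:6379"],
        },
    },
}

That said, the MOVED error itself usually means the server is a cluster and the client is not cluster-aware; as far as I know the default RedisChannelLayer does not speak the cluster protocol, so disabling cluster mode (or using a non-cluster replication group) may be necessary.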
docker-compose.yml
version: '3.7'
services:
  kse_web:
    build: .
    volumes:
      - "/path:/app/path_Dashboard"
    command: python /app/path_Dashboard/manage.py runserver 0.0.0.0:8008
    ports:
      - "8008:8008"
  kse_worker_channels:
    build: .
    volumes:
      - "/path:/app/path_Dashboard"
  kse_daphne:
    build: .
    command: bash -c "daphne -b 0.0.0.0 -p 5049 --application-close-timeout 60 --proxy-headers core.asgi:application"
    volumes:
      - "path:/path"
    ports:
      - "5049:5049"
networks:
  abc_api_net:
    external: true

ReadOnlyError in Django application with Redis and Django Channels

I have a Django app using Django Channels and djangochannelsrestframework. It establishes a websocket connection with a ReactJS frontend. As the channel layer I use Redis, like this:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("redis", 6379)],
        },
    },
}
Redis and Django run in Docker. My Redis Docker setup is:
redis:
  image: "redis:7.0.4-alpine"
  command: redis-server
  ports:
    - "6379:6379"
  networks:
    - nginx_network
When I run my app on the production server, everything works for 5-8 hours. But after that period, if the Django app tries to send a message over the websocket, it fails with the error:
ReadOnlyError at /admin/operations/operation/add/
READONLY You can't write against a read only replica.
Request Method: POST
Request URL: http://62.84.123.168/admin/operations/operation/add/
Django Version: 3.2.12
Exception Type: ReadOnlyError
Exception Value:
READONLY You can't write against a read only replica.
Exception Location: /usr/local/lib/python3.8/site-packages/channels_redis/core.py, line 673, in group_send
Python Executable: /usr/local/bin/python
Python Version: 3.8.13
Python Path:
['/opt/code',
'/usr/local/bin',
'/usr/local/lib/python38.zip',
'/usr/local/lib/python3.8',
'/usr/local/lib/python3.8/lib-dynload',
'/usr/local/lib/python3.8/site-packages']
Server time: Tue, 02 Aug 2022 08:23:18 +0300
I understand that it is somehow connected with Redis replication, but I have no idea why it fails after a period of time or how to fix it.
I had the same error; a possible solution is below.
Fix it by adding a command to your Redis container that disables the replica-read-only config. Add this to your Redis service in docker-compose:
command: redis-server --appendonly yes --replica-read-only no
Then you can verify that replica-read-only is disabled by running config get replica-read-only in redis-cli; if the result is no, it was disabled successfully.
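The same check from Python with redis-py (a small sketch of mine; the hostname redis matches the compose service above):

import redis

# connect to the compose service named "redis" on the default port
r = redis.Redis(host="redis", port=6379)
replication = r.info("replication")
print(replication["role"])  # "master" is healthy; "slave" explains the READONLY error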

Docker Compose Django + PostgreSQL access postgres host without using service name

So, basically I have this docker-compose.yml config:
services:
  postgres:
    container_name: youtube_manager_postgres
    restart: always
    image: postgres:alpine
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=qwerty123
      - POSTGRES_DB=ytmanager
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - "5432:5432"
  django:
    container_name: youtube_manager_django
    restart: always
    build:
      context: ../
      dockerfile: deploy/django/Dockerfile
    command: sh -c "poetry run python3 manage.py migrate &&
      poetry run python3 manage.py collectstatic --no-input --clear &&
      poetry run uwsgi --ini /etc/uwsgi.ini"
    ports:
      - "8000:8000"
    volumes:
      - staticfiles:/code/static
      - mediafiles:/code/media
    depends_on:
      - postgres
My Django database settings are:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'ytmanager',
        'USER': 'admin',
        'HOST': '0.0.0.0',
        'PASSWORD': 'qwerty123',
        'PORT': '5432',
    }
}
I want to use it in two ways:
1. Running docker-compose up -d postgres and then python3 manage.py runserver (actually poetry run python3 manage.py runserver, but for now it doesn't matter) during development.
2. Running docker-compose up during deployment.
For now, it works fine with option 1, but when I execute docker-compose up I get an error:
youtube_manager_django | django.db.utils.OperationalError: could not connect to server: Connection refused
youtube_manager_django | Is the server running on host "0.0.0.0" and accepting
youtube_manager_django | TCP/IP connections on port 5432?
If I change my Django database settings this way:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'ytmanager',
        'USER': 'admin',
        'HOST': 'postgres',
        'PASSWORD': 'qwerty123',
        'PORT': '5432',
    }
}
Executing docker-compose up -d postgres and then python manage.py runserver causes an error:
django.db.utils.OperationalError: could not translate host name "postgres" to address: Temporary failure in name resolution
How can I properly configure docker-compose.yml to use the same HOST in my Django settings (for example, 'HOST': '0.0.0.0' or 'HOST': 'postgres' for both options)?
I've tried using network_mode: host on my django and postgres services. It works fine, but is there any other way to solve the problem, for example with networking settings? I've read the docker-compose documentation on their website, but I can't work out what's going on there.
I think you are mixing development and production environments.
So, if I'm not wrong:
In development you want 'HOST': '0.0.0.0' in your Django settings.py, since you are executing python manage.py runserver outside Docker while keeping Postgres in Docker.
In production you would like to keep 'HOST': '0.0.0.0' in the same settings.py, but to make it work you need 'HOST': 'postgres' (matching the name of the service in the compose file) and to run everything in Docker (executing the whole compose file as it is: docker-compose up). In this case Django can't reach the database at '0.0.0.0', since it is running containerized and that IP doesn't bind to any service, so it needs the IP or the name of the 'postgres' service.
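For example, one way to cover both cases with a single settings.py is to read the host from an environment variable (a sketch; POSTGRES_HOST is a name I am choosing here, not something already in your compose file):

import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'ytmanager',
        'USER': 'admin',
        'PASSWORD': 'qwerty123',
        # 'postgres' when running inside the compose network,
        # '0.0.0.0' when running manage.py on the host
        'HOST': os.environ.get('POSTGRES_HOST', '0.0.0.0'),
        'PORT': '5432',
    }
}

You would then set POSTGRES_HOST=postgres in the django service's environment in docker-compose.yml and leave it unset for local runs.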
Possible solution:
In my opinion the solution is having two .yml files to be called by Docker (e.g. docker-compose -f docker-compose-development.yml up):
docker-compose-development.yml
docker-compose-production.yml
In each .yml file you can use different env variables or settings to cover the differences between development and production in a clean way.
You can have a look at https://github.com/pydanny/cookiecutter-django, a template Django project using Docker.
It follows "The Twelve-Factor App" methodology: https://12factor.net/
In short:
Environment variables are set in files under the .envs folder.
In the compose .yml files you point to them in order to load the environment variables:
env_file:
  - ./.envs/.production/.postgres
Django settings .py files get access to the env variables using the django-environ package.
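As a sketch of that last point (the DATABASE_URL value is illustrative, built from the credentials in this question):

import environ

env = environ.Env()

# Reads DATABASE_URL, e.g. postgres://admin:qwerty123@postgres:5432/ytmanager,
# which would live in one of the .envs files loaded via env_file above.
DATABASES = {'default': env.db('DATABASE_URL')}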
Before editing the Django DB config, please ensure the following:
Both containers are running in the same network.
The Postgres service is up and running in its container.
The 'postgres' service is accessible from the webapp container. For this you can log in to the container and perform a ping test:
docker exec -it containerID /bin/bash (in some cases /bin/sh) # to log in to the container
Solutions:
Similar: Django connection to postgres by docker-compose
For connecting to the DB using the service name in Django: as per the documentation, if the HOST and PORT keys are left out of the dictionary, Django will try connecting with the complete "NAME" as a SID.
Hope it helps.

Postgres django.db.utils.OperationalError: could not connect to server: Connection refused

Trying to run my Django server in Docker, but the Postgres port is already being used? When I run docker-compose up, I receive this error:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
ERROR: Service 'web' failed to build: The command '/bin/sh -c python manage.py migrate' returned a non-zero code: 1
sudo service postgresql status
returns:
9.6/main (port 5432): online
sudo lsof -nP | grep LISTEN
returns:
postgres 15817 postgres 3u IPv4 1022328 0t0 TCP 127.0.0.1:5432
I tried running sudo kill -9 15817, but docker-compose up still produces the same error.
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'stemletics',
        'USER': 'stemleticsadmin',
        'PASSWORD': 'changeme',
        'HOST': '127.0.0.1',  # set in docker-compose.yml
        'PORT': 5432,  # default postgres port
    }
}
In order to use Postgres inside Docker, you will need to configure information like the database user, password, and db name. This is done by setting environment variables for the container. A complete list of supported variables can be found here.
Additionally, you will want to expose port 5432 of Postgres to your web service inside your docker-compose file.
Something like this should work:
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    ports:
      - "5432"
    environment:
      - POSTGRES_DB=stemletics
      - POSTGRES_USER=stemleticsadmin
      - POSTGRES_PASSWORD=changeme
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
You will also have to change the hostname you are using inside settings.py. docker-compose creates a default network for your services and attaches the running containers to this network. Inside your web container the database will be available at the hostname db.
settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'stemletics',
        'USER': 'stemleticsadmin',
        'PASSWORD': 'changeme',
        'HOST': 'db',  # set in docker-compose.yml
        'PORT': 5432,  # default postgres port
    }
}
Lastly, if you do not have any sort of database reconnection logic in your Python code, the migration may still fail. This is because depends_on only waits for the container to start, but Postgres takes a couple of seconds to initialize after the container is running.
In order to get around this quickly, it is easiest to run one container at a time, i.e.:
$ docker-compose up -d db
Wait for postgres to initialize
$ docker-compose up -d web
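Alternatively, if you do want reconnection logic in code, a minimal retry loop could look like this (a hypothetical wait_for_db.py helper of mine, reusing the credentials above and assuming psycopg2 is installed):

# wait_for_db.py -- hypothetical helper, not part of the original answer
import time

import psycopg2

for attempt in range(30):
    try:
        psycopg2.connect(
            dbname='stemletics', user='stemleticsadmin',
            password='changeme', host='db', port=5432,
        ).close()
        break  # postgres is accepting connections
    except psycopg2.OperationalError:
        time.sleep(1)  # container is up but postgres is still initializing
else:
    raise SystemExit('postgres never became available')

You would run this before python manage.py migrate in the web service's command.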
Hopefully this gets you up and running.
I was able to fix this issue by simply building my db container, waiting a few seconds, then building the web container:
docker-compose up -d --build db
(wait a few seconds)
docker-compose up -d --build web
I hope this helps.
I faced the same problem connecting to the PostgreSQL server on the Windows operating system. I solved it the following way; I hope it helps:
Download PostgreSQL
Install PostgreSQL
Open 'SQL Shell (psql)' from the search option
Create a database
Add DATABASES to your settings:
settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'stemletics',
        'USER': 'postgres',  # default user name
        'PASSWORD': 'changeme',
        'HOST': '127.0.0.1',  # default host
        'PORT': '5432',  # default postgres port
    }
}
I ran into this issue and it turned out that I hadn't started Docker Desktop. Apparently, if Docker Desktop is not running, any docker commands you use do not apply to Docker Desktop but rather to a different version of Docker on your system. My OS is Ubuntu 22.04.

How to deploy a Django app on Google App Engine and connect it to a Postgres instance?

Currently I am trying to deploy a Django app on Google App Engine (GAE). All goes well and the app is deployed, but once it is deployed, its connection with the Postgres instance is lost. I don't know why this is happening. The following is my settings.py file:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'dbname',
        'USER': 'username',
        'PASSWORD': 'password',
    }
}

# In the flexible environment, you connect to CloudSQL using a unix socket.
# Locally, you can use the CloudSQL proxy to proxy a localhost connection
# to the instance.
if os.getenv('GAE_INSTANCE'):
    DATABASES['default']['HOST'] = '/cloudsql/shopnroar-175407:us-central1:snr-instance1'
else:
    DATABASES['default']['HOST'] = '100.107.126.241'
When I run it locally, it connects to the Google Cloud Postgres instance, since I have given the IPv4 address to connect with, but as soon as I deploy it on GAE, the following error comes up while accessing the database:
Is the server running locally and accepting
connections on Unix domain socket "/cloudsql/shopnroar-175407:us-central1:snr-instance1/.s.PGSQL.5432"?
Here is my app.yaml:
# [START runtime]
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT SNR.wsgi
env_variables:
  # Replace user, password, database, and instance connection name with the values obtained
  # when configuring your Cloud SQL instance.
  SQLALCHEMY_DATABASE_URI: >-
    postgresql+psycopg2://amad.uddin:goingtoin1122#/shopnroar?host=/cloudsql/shopnroar-175407:us-central1:snr-instance1
beta_settings:
  cloud_sql_instances: shopnroar-175407:us-central1:snr-instance1
runtime_config:
  python_version: 2
# [END runtime]
Can anybody tell me how I can connect to the Postgres instance after deploying the Django app on GAE?
Any help or suggestions will be highly appreciated.
Actually - forget my old answer - try turning the app.yaml values into strings; that helped me out:
cloud_sql_instances: 'shopnroar-175407:us-central1:snr-instance1'