We are trying to deploy a Django Channels app with Docker and AWS ElastiCache (cluster mode enabled) as the Redis backend. However, we are running into a MOVED error (the cluster keeps redirecting requests to another node's IP).
Can anyone provide a solution for getting the channel layer to work with AWS ElastiCache in cluster mode?
FYI, we deployed our app on an EC2 server.
settings.py
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            "hosts": [('xxxx.clusterxxx.xxx.cache.amazonaws.com:xxx')],
        },
    },
}
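As a side note (a sketch, not a cluster-mode fix): channels_redis documents the hosts entries either as (host, port) tuples or as redis:// URL strings, so the entry above with host and port fused into one plain string may not be interpreted the way you expect. The xxxx placeholders are kept from above, and 6379 is just the assumed default Redis port to substitute. As far as I know, neither form makes the core RedisChannelLayer speak the Redis Cluster protocol, which is why a cluster-mode-enabled endpoint can keep answering with MOVED redirects; a cluster-mode-disabled replication group (single primary endpoint) avoids that.
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            # (host, port) tuple; 6379 is an assumed default, use your real port
            "hosts": [("xxxx.clusterxxx.xxx.cache.amazonaws.com", 6379)],
            # or, equivalently, a URL string:
            # "hosts": ["redis://xxxx.clusterxxx.xxx.cache.amazonaws.com:6379"],
        },
    },
}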
docker-compose.yml
version: '3.7'

services:
  kse_web:
    build: .
    volumes:
      - "/path:/app/path_Dashboard"
    command: python /app/path_Dashboard/manage.py runserver 0.0.0.0:8008
    ports:
      - "8008:8008"

  kse_worker_channels:
    build: .
    volumes:
      - "/path:/app/path_Dashboard"

  kse_daphne:
    build: .
    command: bash -c "daphne -b 0.0.0.0 -p 5049 --application-close-timeout 60 --proxy-headers core.asgi:application"
    volumes:
      - "path:/path"
    ports:
      - "5049:5049"

networks:
  abc_api_net:
    external: true
Related
I have the following setup:
docker-compose.yml
# Compose file format version
version: "3.9"

# services (each becomes a container)
services:
  # web service (the Django app)
  web:
    # re-run with the --build flag whenever a node package is added
    build: .
    # add additional commands here for webpack to watch for changes and bundle for production
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - type: bind
        source: .
        target: /code
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      - "DJANGO_SECRET_KEY=django-insecure-m#x2vcrd_2un!9b4la%^)ou&hcib&nc9fvqn0s23z%i1e5))6&"
      - "DJANGO_DEBUG=True"
    expose:
      - 8000

  db:
    image: postgres:13
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    # trust lets any client connect without a password (development only)
    environment:
      - "POSTGRES_HOST_AUTH_METHOD=trust"
      # - "POSTGRES_USER=postgres"

# named volumes
volumes:
  postgres_data:
and a settings file as
ALLOWED_HOSTS = ['0.0.0.0', 'localhost', '127.0.0.1']
#127.0.0.1 is my localhost address.
My host's IP is 192.168.0.214.
Can you please help me deploy the Django site on my host's local network?
Do I have to set something up on my router?
Or could you point me towards resources (on understanding networking) that would help me understand this?
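Not a full answer, but a minimal sketch of the settings side, assuming the LAN IP 192.168.0.214 mentioned above: runserver is already bound to 0.0.0.0 and port 8000 is published by the web service, so other machines on the same network should be able to reach the site at http://192.168.0.214:8000 once that address is allowed:
# settings.py -- development only; 192.168.0.214 is the host's LAN IP from the question
ALLOWED_HOSTS = ['0.0.0.0', 'localhost', '127.0.0.1', '192.168.0.214']
No router configuration is normally needed for devices that are already on the same local network; router port forwarding only matters for access from outside it.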
I have a Django app deployed in Docker containers.
I have 3 config environments: dev, preprod and prod.
dev is my local environment (localhost) and preprod/prod are remote Linux environments.
It works when using the "public" Redis server and standard config.
But I need to use our own Redis, deployed in a Docker container on a remote server (192.168.xx.xx), with the container name redis_cont.
And I do not really know how to configure this, or whether it is even possible.
I would appreciate some help.
docker-compose
version: '3.7'

services:
  web:
    restart: always
    build:
      context: ./app
      dockerfile: Dockerfile.dev
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app:/usr/src/app
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    entrypoint: [ "/usr/src/app/entrypoint.dev.sh" ]
    depends_on:
      - redis
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/"]
      interval: 30s
      timeout: 10s
      retries: 50

  redis:
    container_name: redis_cont  # <= container running on the remote Linux server
    image: "redis:alpine"

  celery:
    build:
      context: ./app
      dockerfile: Dockerfile.dev
    command: celery -A core worker -l info
    volumes:
      - ./app:/usr/src/app
    env_file:
      - ./.env.dev
    depends_on:
      - web
      - redis

  celery-beat:
    build:
      context: ./app
      dockerfile: Dockerfile.dev
    command: celery -A core beat -l info
    volumes:
      - ./app:/usr/src/app
    env_file:
      - ./.env.dev
    depends_on:
      - web
      - redis
settings.py
from celery.schedules import crontab  # needed for the schedule below

CELERY_BROKER_URL = 'redis://redis:6379'
CELERY_RESULT_BACKEND = 'redis://redis:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'

CELERY_BEAT_SCHEDULE = {
    'hello': {
        'task': 'project.tasks.hello',
        'schedule': crontab()  # execute every minute
    },
}
Since the containers are not created by the same docker-compose file, they won't share the same network: redis_cont simply does not exist for the services built in the isolated network of your docker-compose.
If the Redis container is published on the remote host and is reachable via ip:port, you should be able to use that address directly in your settings.py. There is no need to add a new service to your compose file.
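For example, a sketch of what that would look like here, keeping the masked IP from the question and assuming the default Redis port 6379 (replace both with your real values):
# settings.py -- point Celery at the remote, published Redis instead of a compose service
CELERY_BROKER_URL = 'redis://192.168.xx.xx:6379'
CELERY_RESULT_BACKEND = 'redis://192.168.xx.xx:6379'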
Note
To establish communication between services in the same docker-compose file, use the service name (web, celery-beat, etc. in your case), not the container name.
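In other words, the service name only resolves inside that compose project's network; from another host or another compose project you fall back to the ip:port form above. Inside the project, the setting already shown in the question does the right thing:
CELERY_BROKER_URL = 'redis://redis:6379'  # 'redis' is the service name, not 'redis_cont'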
I've been having issues connecting to my Elasticsearch container since day 1.
First I used elasticsearch as the hostname, then I tried the container name web_elasticsearch_1, and finally I set a static IP address on the container and passed it in my configuration file.
PyPI packages:
django==3.2.9
elasticsearch==7.15.1
elasticsearch-dsl==7.4.0
docker-compose.yml
version: "3.3"

services:
  web:
    build:
      context: .
      dockerfile: local/Dockerfile
    image: project32439/python
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    env_file:
      - local/python.env
    depends_on:
      - elasticsearch

  elasticsearch:
    image: elasticsearch:7.10.1
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    networks:
      default:
        ipv4_address: 172.18.0.10
settings.py
# Elasticsearch
ELASTICSEARCH_HOST = "172.18.0.10"
ELASTICSEARCH_PORT = 9200
service.py
from django.conf import settings
from elasticsearch import Elasticsearch, RequestsHttpConnection
es = Elasticsearch(
    hosts=[{"host": settings.ELASTICSEARCH_HOST, "port": settings.ELASTICSEARCH_PORT}],
    use_ssl=False,
    verify_certs=False,
    connection_class=RequestsHttpConnection,
)
traceback
HTTPConnectionPool(host='172.18.0.10', port=9200): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f1973ebd6d0>, 'Connection to 172.18.0.10 timed out. (connect timeout=5)'))
By default Docker Compose uses a bridge network to provision inter-container communication. You can read more about this network at the Debian Wiki.
What matters for you is that, by default, Docker Compose creates a hostname equal to the service name in the docker-compose.yml file. So update your file:
version: "3.3"

services:
  web:
    build:
      context: .
      dockerfile: local/Dockerfile
    image: project32439/python
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    env_file:
      - local/python.env
    depends_on:
      - elasticsearch

  elasticsearch:
    image: elasticsearch:7.10.1
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
And now you can connect with elasticsearch:9200 instead of 172.18.0.10 from your web container. For more info see this article.
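To make that concrete (a sketch based on the settings.py and service.py from the question, not part of the original answer), the only change on the Django side is to point at the service name instead of the hard-coded address; service.py can stay exactly as it is:
# settings.py
ELASTICSEARCH_HOST = "elasticsearch"  # the compose service name, resolved by Docker's internal DNS
ELASTICSEARCH_PORT = 9200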
I have a Django project that uses django-elasticsearch-dsl. The project is dockerized, so Elasticsearch and the web project live in separate containers.
Now my goal is to recreate and repopulate the indices running
python manage.py search_index --rebuild
In order to do that, I try to run the command from the container of the web service the following way:
docker-compose exec web /bin/bash
> python manage.py search_index --rebuild
Not surprisingly, I get an error
Failed to establish a new connection: [Errno 111] Connection refused)
apparently because Python tried to connect to Elasticsearch using localhost:9200.
So the question is: how do I tell the management command the host where Elasticsearch lives?
Here's my docker-compose.yml file:
version: '2'

services:
  web:
    build: .
    restart: "no"
    command: ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
    env_file: &envfile
      - .env
    environment:
      - DEBUG=True
    ports:
      - "${DJANGO_PORT}:8000"
    networks:
      - deploy_network
    depends_on:
      - elasticsearch
      - db

  elasticsearch:
    image: 'elasticsearch:2.4.6'
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - deploy_network

  db:
    image: "postgres"
    container_name: "postgres"
    restart: "no"
    env_file: *envfile
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:

networks:
  deploy_network:
    driver: bridge
UPDATE:
In the Django project's settings I set up the Elasticsearch DSL host:
# settings.py
ELASTICSEARCH_DSL = {
    'default': {
        'hosts': 'localhost:9200'
    }
}
Since your Django project and Elasticsearch run in two separate containers, setting ELASTICSEARCH_DSL's host to 'localhost:9200' won't work; in this case, localhost refers to localhost inside the Django container.
So you need to set it like this:
# settings.py
ELASTICSEARCH_DSL = {
    'default': {
        'hosts': 'elasticsearch:9200'
    }
}
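As a usage sketch (assuming the compose file above), recreate the web container so the new setting is picked up, then run the management command from inside it:
docker-compose up -d --build web
docker-compose exec web python manage.py search_index --rebuild
The command should now reach Elasticsearch through the elasticsearch service name on the shared deploy_network.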
EDIT 2
In fact, I realized that the automatic backups made today in the preprod environment (nginx) by celery/dbbackup are not available in the container?
I only have backups from yesterday, when I was using the development environment (django runserver)...
EDIT 1
changed docker-compose accordingly
backup_volume is mounted
I cannot find where it is locally, but if I connect to the db container and run \i 'path/to/file.psql' it works
[
    {
        "CreatedAt": "2021-01-19T14:37:55Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "cafe_tropical",
            "com.docker.compose.version": "1.27.4",
            "com.docker.compose.volume": "backup_volume"
        },
        "Mountpoint": "/var/lib/docker/volumes/cafe_tropical_backup_volume/_data",
        "Name": "cafe_tropical_backup_volume",
        "Options": null,
        "Scope": "local"
    }
]
I have a Django app on Docker with a web container, where automatic backups are stored, and a separate PostgreSQL database container.
I want to be able to restore the PostgreSQL database using \i 'path/to/backup.psql' in a psql shell, but it fails because the file is not found:
db_preprod=# \i '/usr/src/app/backup/backup_2021-01-19_1336.psql'
/usr/src/app/backup/backup_2021-01-19_1336.psql: No such file or directory
I also tried to copy the file with docker cp, but that does not work:
docker cp web:/usr/src/ap/backup/backup_2021-01-19_1336.psql db:/.
copying between containers is not supported
docker-compose
version: '3.7'

services:
  web:
    restart: always
    container_name: web
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - ./app:/usr/src/app
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
      - backup_volume:/usr/src/app/backup  # <-- added volume
    expose:
      - 8000
    env_file:
      - ./.env.preprod
    depends_on:
      - db
      - redis
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/"]
      interval: 30s
      timeout: 10s
      retries: 50

  db:
    container_name: db
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
      - backup_volume:/var/lib/postgresql/data/backup  # <-- added volume
    env_file:
      - ./.env.preprod.db

  nginx:
    container_name: nginx
    build: ./nginx
    restart: always
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    ports:
      - 1340:80
    depends_on:
      web:
        condition: service_healthy

volumes:
  postgres_data:
  static_volume:
  media_volume:
  backup_volume:
You need to use volumes: both containers mount the same path, so Django writes the dump into it and psql can read it on the other side. But why does it work like this? Shouldn't you be able to dump and restore from either the Django container or the Postgres one?
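To make the answer concrete, a restore sketch under the compose file above: the dump the web container writes to /usr/src/app/backup appears in the db container at /var/lib/postgresql/data/backup, so psql can read it there. The database name db_preprod comes from the prompt shown earlier; the postgres user is an assumption, use whatever POSTGRES_USER is set in .env.preprod.db:
# run psql inside the db container against the file on the shared backup_volume
docker-compose exec db psql -U postgres -d db_preprod \
    -f /var/lib/postgresql/data/backup/backup_2021-01-19_1336.psql
Equivalently, from a psql shell inside the db container, \i '/var/lib/postgresql/data/backup/backup_2021-01-19_1336.psql' now finds the file, since that is where the shared volume is mounted in that container.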