I don't know what I am missing, but Celery is not connecting to Redis when I run docker-compose up --build.
error: Cannot connect to redis://127.0.0.1:6379/0: Error 111 connecting to 127.0.0.1:6379. Connection refused.
Here is my docker-compose.yml file:
version: '3'
services:
  web:
    build: .
    image: resolution
    depends_on:
      - db
      - redis
      - celery
    command: bash -c "python3 /code/manage.py migrate && python3 /code/manage.py initialsetup && python3 /code/manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    links:
      - db:db
      - redis:redis
      - celery:celery
    restart: always
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - PGHOST=trust
      - PGPORT=5432
  db:
    image: postgres:latest
    environment:
      POSTGRES_DB: 'postgres'
      POSTGRES_PASSWORD: 'postgres'
      POSTGRES_USER: 'postgres'
      POSTGRES_HOST: 'trust'
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
    restart: on-failure
  celery:
    image: resolution
    command: celery -A mayan worker -l info
    environment:
      - DJANGO_SETTINGS_MODULE=mayan.settings.production
    volumes:
      - .:/code
    depends_on:
      - db
      - redis
    links:
      - redis:redis
    restart: on-failure
Celery and Redis are running in different containers.
According to the error message you shared, your Celery worker is most likely trying to connect to localhost to reach Redis, and Redis is not on localhost from inside the worker's container.
Search for the Celery configuration file that contains the CELERY_BROKER_URL and CELERY_RESULT_BACKEND values. Most likely they look like this:
CELERY_BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
They should look like this, pointing to the redis service name that you defined in your compose file:
CELERY_BROKER_URL = 'redis://redis:6379'
CELERY_RESULT_BACKEND = 'redis://redis:6379'
If you don't have such a config, search directly for the place where the Celery instance is initialized and make sure it looks like this:
app = Celery('server', broker='redis://redis:6379/0')
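Since the worker here runs Mayan (mayan.settings.production), you may be able to avoid editing settings entirely and pass the broker through the environment instead. The variable names below are an assumption based on Mayan's Docker documentation, so verify them against your Mayan version:

  celery:
    image: resolution
    command: celery -A mayan worker -l info
    environment:
      - DJANGO_SETTINGS_MODULE=mayan.settings.production
      # assumed names -- check your Mayan version's docs
      - MAYAN_CELERY_BROKER_URL=redis://redis:6379/0
      - MAYAN_CELERY_RESULT_BACKEND=redis://redis:6379/1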
Related
I have a problem with the initial launch of docker-compose up: the DB is not initialized yet when Django starts, so Django throws an error.
I tried to use restart_policy, but it didn't help: the web service restarted almost immediately, before the DB service was ready, regardless of the delay I set.
version: "3.9"
services:
  web:
    build: .
    command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    deploy:
      restart_policy:
        condition: on-failure
        delay: 15s
    environment:
      - POSTGRES_NAME=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_HOST=db
    depends_on:
      - db
  db:
    container_name: db_pg
    image: postgres
    hostname: postgres
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_HOST: db
    volumes:
      - ./data/db:/var/lib/postgresql/data
    restart: unless-stopped
  pgadmin:
    image: dpage/pgadmin4
    depends_on:
      - db
    ports:
      - "5555:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin4@pgadmin.org
      PGADMIN_DEFAULT_PASSWORD: admin
    volumes:
      - ./data/pgadmin:/var/lib/pgadmin/data
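A note on why restart_policy didn't help: the deploy: section is only honored by Docker Swarm (docker stack deploy); a plain docker-compose up ignores it unless you pass --compatibility. A more reliable approach is to give the db service a healthcheck and use the long form of depends_on, which recent Compose versions support. A minimal sketch (the interval and retry values are arbitrary; $$ keeps Compose from interpolating the variable, so it is expanded inside the container, where POSTGRES_USER is set):

services:
  web:
    depends_on:
      db:
        condition: service_healthy
  db:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER}"]
      interval: 5s
      timeout: 5s
      retries: 5

With this, Compose waits until pg_isready succeeds before starting web, instead of restarting web until the database happens to be up.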
I'm having a hard time deploying my app built with Django, Postgres, DjangoQ, Redis and ES on AWS Elastic Beanstalk, using docker-compose.yml.
I've used the EB CLI (eb init, eb create) to do it, and it shows the environment is successfully launched, but I still have the following problems.
On the EC2 instance, the postgres, django-q, and es containers described in the docker-compose file below are not built. Only the django, redis, and nginx containers are found on the EC2 instance.
The environment variables that I specified in the docker-compose.yml file aren't being passed to the django container on EC2, so I can't run Django there.
I'm pretty lost and am not sure where to even start to fix the problems here. Any insight will be very much appreciated.
version: '3'
services:
  django:
    build:
      context: .
      dockerfile: docker/Dockerfile
    command: gunicorn --bind 0.0.0.0:5000 etherscan_project.wsgi:application
    env_file: .env
    volumes:
      - $PWD:/srv/app/:delegated
    depends_on:
      - redis
      - db
      - es
  django-q:
    build:
      context: .
      dockerfile: docker/Dockerfile
    command: >
      sh -c "python manage.py makemigrations &&
      python manage.py migrate &&
      python manage.py qcluster"
    env_file: .env
    volumes:
      - $PWD:/srv/app/:delegated
    depends_on:
      - redis
      - db
      - django
      - es
  db:
    image: postgres:latest
    expose:
      - 5432
    env_file: .env
    volumes:
      - ./docker/volumes/postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $POSTGRES_DB"]
      interval: 10s
      timeout: 5s
      retries: 5
  redis:
    image: redis:latest
    expose:
      - 6379
    ports:
      - 6379:6379
    volumes:
      - ./docker/volumes/redis:/data
  nginx:
    container_name: nginx
    image: nginx:1.13
    ports:
      - 80:80
    depends_on:
      - db
      - django
      - redis
    volumes:
      - ./docker/nginx:/etc/nginx/conf.d
      - $PWD:/srv/app/:delegated
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
    ports:
      - 9200:9200
      - 9300:9300
    expose:
      - 9200
      - 9300
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./docker/volumes/es:/usr/share/elasticsearch/data
volumes:
  app-files:
    driver_opts:
      type: nfs
      device: $PWD
      o: bind
Can you confirm that your environment variables are being used correctly? A common mistake with EB and docker-compose is assuming that your .env file works the same way in EB as it does with docker-compose, when it does not. I have made that mistake before. Check out the docs: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html#docker-env-cfg.env-variables
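Per the linked docs, with the Docker Compose platform Elastic Beanstalk generates a .env file in the project root from the environment properties you configure on the environment, so a .env that only exists on your machine (or is excluded from the deployment bundle) never reaches EC2. One way to set those properties, with hypothetical values, is the EB CLI:

eb setenv POSTGRES_DB=postgres POSTGRES_USER=postgres POSTGRES_PASSWORD=change-me

Since every service above already declares env_file: .env, the generated file would then feed the containers. For the containers that never get built, also check /var/log/eb-engine.log on the instance for compose errors; the unused app-files NFS volume at the bottom of the file is a plausible suspect, since $PWD may not resolve there.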
I have built my Postgres and Django application using the following docker-compose.yml:
version: "3.8"
services:
  django:
    build: .
    container_name: django
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/usr/src/app
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres
    container_name: pgdb
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
When I check Docker Desktop, I see two containers, "django" and "pgdb".
When I check the django container, it says
django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty.
Originally, on my Windows 10 machine, I saved the secret key in a Windows environment variable. How do I set up docker-compose so that the container gets the secret key?
SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY')
You would need to create a .env file with the SECRET_KEY.
In django_secrets.env you can store it like this:
SECRET_KEY=my_secret_key
Then in the docker-compose.yml file you can specify the django_secrets.env file:
version: "3.8"
services:
  django:
    build: .
    container_name: django
    command: python manage.py runserver 0.0.0.0:8000
    env_file:
      - ./django_secrets.env
    volumes:
      - .:/usr/src/app
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres
    container_name: pgdb
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
And then you can get the value in the settings.py file like this:
SECRET_KEY = os.environ.get("SECRET_KEY", 'my_default_secret_key')
You can have the django_secrets.env file at any path; you just need to specify that path in the docker-compose.yml file, and you can name it whatever you like. One thing to watch: the variable name has to match what settings.py reads. Your settings read DJANGO_SECRET_KEY, so either name the variable DJANGO_SECRET_KEY in the env file or adjust the settings line.
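Alternatively, since you already have the key in a Windows environment variable, you can pass it straight through from the host instead of using a file; Compose substitutes ${...} values from the shell that runs docker-compose up. This assumes the host variable is named DJANGO_SECRET_KEY, to match your settings:

  django:
    build: .
    environment:
      # taken from the host environment when compose is run
      - DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY}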
Assume this simple docker-compose file.
version: "3.9"
services:
  redis:
    image: redis:alpine
    ports:
      - 6379:6379
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - redis
How can I add a django-q worker to handle tasks from the web container?
I could probably build the same image with a different command, such as python manage.py qcluster, but I don't think that solution is elegant. Could you suggest an approach for doing this?
Probably the easiest thing you can do is add a separate service for qcluster.
Something like this:
version: "3.9"
services:
  redis:
    image: redis:alpine
    ports:
      - 6379:6379
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - redis
  djangoq:
    build: .
    command: python manage.py qcluster
    volumes:
      - .:/code
    depends_on:
      - redis
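Note that the worker needs no ports mapping: qcluster doesn't serve HTTP, and publishing another host port (e.g. 8000) would collide with the one already taken by web. Both containers only have to point django-q at the same broker. A minimal sketch of the Django side, assuming django-q is installed and using the redis service name from the compose file (myapp.tasks and the add function are hypothetical):

# settings.py -- both web and djangoq read this, so they share one broker
Q_CLUSTER = {
    "name": "myproject",
    "workers": 2,
    "redis": {"host": "redis", "port": 6379, "db": 0},
}

# myapp/tasks.py
def add(a, b):
    return a + b

# anywhere in the web container, e.g. in a view
from django_q.tasks import async_task

async_task("myapp.tasks.add", 2, 3)  # executed by the qcluster container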
I'm building an online book store project in Django, and I'm running into a problem when I try to set up an environment variable.
My docker-compose.yml file looks like this:
version: '3.7'
services:
  web:
    build: .
    command: python /code/manage.py runserver 0.0.0.0:8000
    environment:
      - SECRET_KEY=secret_key
    volumes:
      - .:/code
    ports:
      - 8000:8000
    depends_on:
      - db
  db:
    hostname: db
    image: postgres:11
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      - "POSTGRES_HOST_AUTH_METHOD=trust"
    ports:
      - "5432:5432"
volumes:
  postgres_data:
and my settings.py:
SECRET_KEY = os.environ.get('SECRET_KEY')
The secret key is:
SECRET_KEY=-)hy!+7txe6))yhcy4o3ruxj(gy(vwx)6^h&+-i*=0f$4q(&bh
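A likely culprit is the key itself: it contains a dollar sign ($4q...), and Compose performs variable substitution on values in docker-compose.yml, so a bare $ there is either interpolated away or rejected as an invalid interpolation format before the value ever reaches the container. Escape it as $$ when you paste the real key into the environment section; a sketch:

  web:
    environment:
      # $$ produces a literal $ after Compose's variable substitution
      - SECRET_KEY=-)hy!+7txe6))yhcy4o3ruxj(gy(vwx)6^h&+-i*=0f$$4q(&bh

Alternatively, move the key into an env_file or read it from the host with SECRET_KEY=${SECRET_KEY}, and keep it out of version control either way.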