How can I set `celeryd` and `celerybeat` in Docker Compose?

I have a task that updates every minute.
This is my Dockerfile for the Django application:
FROM python:3-onbuild
COPY ./ /
EXPOSE 8000
RUN pip3 install -r requirements.txt
RUN python3 manage.py collectstatic --noinput
ENTRYPOINT ["python3", "manage.py", "celeryd"]
ENTRYPOINT ["python3", "manage.py", "celerybeat"]
ENTRYPOINT ["/app/start.sh"]
This is my docker-compose.yml.
version: "3"
services:
nginx:
image: nginx:latest
container_name: nginx_airport
ports:
- "8080:8080"
volumes:
- ./:/app
- ./nginx:/etc/nginx/conf.d
- ./static:/app/static
depends_on:
- web
rabbit:
hostname: rabbit_airport
image: rabbitmq:latest
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=asdasdasd
ports:
- "5673:5672"
web:
build: ./
container_name: django_airport
volumes:
- ./:/app
- ./static:/app/static
expose:
- "8080"
links:
- rabbit
depends_on:
- rabbit
This is the beginning of the log output from my running containers:
rabbit_1 | =INFO REPORT==== 29-Sep-2017::11:45:30 ===
rabbit_1 | Starting RabbitMQ 3.6.12 on Erlang 19.2.1
rabbit_1 | Copyright (C) 2007-2017 Pivotal Software, Inc.
rabbit_1 | Licensed under the MPL. See http://www.rabbitmq.com/
rabbit_1 |
rabbit_1 | RabbitMQ 3.6.12. Copyright (C) 2007-2017 Pivotal Software, Inc.
rabbit_1 | ## ## Licensed under the MPL. See http://www.rabbitmq.com/
rabbit_1 | ## ##
rabbit_1 | ########## Logs: tty
rabbit_1 | ###### ## tty
rabbit_1 | ##########
rabbit_1 | Starting broker...
rabbit_1 |
rabbit_1 | =INFO REPORT==== 29-Sep-2017::11:45:30 ===
rabbit_1 | node : rabbit@rabbit_airport
rabbit_1 | home dir : /var/lib/rabbitmq
rabbit_1 | config file(s) : /etc/rabbitmq/rabbitmq.config
rabbit_1 | cookie hash : grcK4ii6UVUYiLRYxWUffw==
rabbit_1 | log : tty
rabbit_1 | sasl log : tty
rabbit_1 | database dir : /var/lib/rabbitmq/mnesia/rabbit@rabbit_airport
rabbit_1 |
rabbit_1 | =INFO REPORT==== 29-Sep-2017::11:45:31 ===
rabbit_1 | Memory high watermark set to 3145 MiB (3298503884 bytes) of 7864 MiB (8246259712 bytes) total
rabbit_1 |
rabbit_1 | =INFO REPORT==== 29-Sep-2017::11:45:31 ===
rabbit_1 | Enabling free disk space monitoring
rabbit_1 |
rabbit_1 | =INFO REPORT==== 29-Sep-2017::11:45:31 ===
rabbit_1 | Disk free limit set to 50MB
rabbit_1 |
rabbit_1 | =INFO REPORT==== 29-Sep-2017::11:45:31 ===
rabbit_1 | Limiting to approx 1048476 file handles (943626 sockets)
Everything is okay except my Celery task is not running.
EDIT: This is my start.sh:
#!/bin/bash
# PENDING: From the source here,
# http://tutos.readthedocs.io/en/latest/source/ndg.html it says that it is a
# common practice to have a specific user to handle the webserver.
SCRIPT=$(readlink -f "$0")
DJANGO_SETTINGS_MODULE=airport.settings
DJANGO_WSGI_MODULE=airport.wsgi
NAME="airport"
NUM_WORKERS=3
if [ "$BASEDIR" = "/" ]
then
BASEDIR=""
else
BASEDIR=$(dirname "$SCRIPT")
fi
if [ "$BASEDIR" = "/" ]
then
VENV_BIN="venv/bin"
SOCKFILE="run/gunicorn.sock"
else
VENV_BIN=${BASEDIR}"/venv/bin"
SOCKFILE=${BASEDIR}"/run/gunicorn.sock"
fi
SOCKFILEDIR="$(dirname "$SOCKFILE")"
VENV_ACTIVATE=${VENV_BIN}"/activate"
VENV_GUNICORN=${VENV_BIN}"/gunicorn"
# Activate the virtual environment.
# Only set this for virtual environment.
#cd $BASEDIR
#source $VENV_ACTIVATE
# Set environment variables.
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$PYTHONPATH:$BASEDIR
# Create the run directory if it does not exist.
#test -d $SOCKFILEDIR || mkdir -p $SOCKFILEDIR
# Start Gunicorn!
# Programs meant to be run under supervisor should not daemonize themselves
# (do not use --daemon).
#
# Set this for virtual environment.
#exec ${VENV_GUNICORN} ${DJANGO_WSGI_MODULE}:application \
# --bind=unix:$SOCKFILE \
# --name $NAME \
# --workers $NUM_WORKERS
# For non-virtual environment.
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
    --bind=unix:$SOCKFILE \
    --name $NAME \
    --workers $NUM_WORKERS

Your ENTRYPOINT instructions override one another: only the last ENTRYPOINT in a Dockerfile takes effect. You can build and run the following image to confirm; it prints 2, not 1.
FROM alpine
ENTRYPOINT ["echo", "1"]
ENTRYPOINT ["echo", "2"]
As described in the Docker docs, to start multiple services in one container you can wrap the start commands in a wrapper script and run that script from CMD in the Dockerfile.
wrapper.sh
#!/bin/bash
# Start the Celery worker and the beat scheduler in the background,
# then hand the foreground over to the web server startup script.
python3 manage.py celeryd &
python3 manage.py celerybeat &
exec /app/start.sh
FROM python:3-onbuild
COPY ./ /
EXPOSE 8000
RUN pip3 install -r requirements.txt
RUN python3 manage.py collectstatic --noinput
ADD wrapper.sh wrapper.sh
RUN chmod +x wrapper.sh
CMD ./wrapper.sh
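Alternatively, since you already run everything through Docker Compose, a common pattern is one process per container: run the worker and the beat scheduler as separate services built from the same image. A minimal, hedged sketch (the manage.py commands come from the question; entrypoint: is used here because the Dockerfile's ENTRYPOINT would otherwise swallow a command: override):
# Hedged sketch: add these under "services:" next to nginx, rabbit and web.
celeryworker:
  build: ./
  entrypoint: ["python3", "manage.py", "celeryd"]
  volumes:
    - ./:/app
  depends_on:
    - rabbit
celerybeat:
  build: ./
  entrypoint: ["python3", "manage.py", "celerybeat"]
  volumes:
    - ./:/app
  depends_on:
    - rabbit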

Related

Django channels unable to connect (find) websocket after docker-compose of project using redis

I have implemented websocket connections via Django Channels using a Redis layer.
I'm new to Docker and not sure where I might have made a mistake. After docker-compose up -d --build, the static files, media, database and gunicorn WSGI server all work, but Redis won't connect, even though it is running in the background.
Before trying to containerize the application with Docker, it worked well with:
python manage.py runserver
with the following settings.py section for the Redis layer:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("0.0.0.0", 6379)],
        },
    },
}
and by calling a docker container for the redis layer:
docker run -p 6379:6379 -d redis:5
But after trying to containerize the entire application, it was unable to find the websocket.
The new setup for the docker-compose is as follows:
version: '3.10'
services:
  web:
    container_name: web
    build:
      context: ./app
      dockerfile: Dockerfile
    command: bash -c "gunicorn core.wsgi:application --bind 0.0.0.0:8000"
    volumes:
      - ./app/:/usr/src/app/
      - static_volume:/usr/src/app/staticfiles/
      - media_volume:/usr/src/app/media/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
    networks:
      - app_network
  redis:
    container_name: redis
    image: redis:5
    ports:
      - 6379:6379
    networks:
      - app_network
    restart: on-failure
  db:
    container_name: db
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - ./.env.psql
    ports:
      - 5432:5432
    networks:
      - app_network
volumes:
  postgres_data:
  static_volume:
  media_volume:
networks:
  app_network:
with this settings.py:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("redis", 6379)],
        },
    },
}
After building the containers successfully and running docker-compose logs -f:
Attaching to web, db, redis
db | The files belonging to this database system will be owned by user "postgres".
db | This user must also own the server process.
db |
db | The database cluster will be initialized with locale "en_US.utf8".
db | The default database encoding has accordingly been set to "UTF8".
db | The default text search configuration will be set to "english".
db |
db | Data page checksums are disabled.
db |
db | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db | creating subdirectories ... ok
db | selecting dynamic shared memory implementation ... posix
db | selecting default max_connections ... 100
db | selecting default shared_buffers ... 128MB
db | selecting default time zone ... Etc/UTC
db | creating configuration files ... ok
db | running bootstrap script ... ok
db | performing post-bootstrap initialization ... ok
db | initdb: warning: enabling "trust" authentication for local connections
db | You can change this by editing pg_hba.conf or using the option -A, or
db | --auth-local and --auth-host, the next time you run initdb.
db | syncing data to disk ... ok
db |
db |
db | Success. You can now start the database server using:
db |
db | pg_ctl -D /var/lib/postgresql/data -l logfile start
db |
db | waiting for server to start....2022-06-27 16:18:30.303 UTC [48] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
db | 2022-06-27 16:18:30.310 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db | 2022-06-27 16:18:30.334 UTC [49] LOG: database system was shut down at 2022-06-27 16:18:29 UTC
db | 2022-06-27 16:18:30.350 UTC [48] LOG: database system is ready to accept connections
db | done
db | server started
db | CREATE DATABASE
db |
db |
db | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db |
db | 2022-06-27 16:18:31.587 UTC [48] LOG: received fast shutdown request
db | waiting for server to shut down....2022-06-27 16:18:31.596 UTC [48] LOG: aborting any active transactions
db | 2022-06-27 16:18:31.601 UTC [48] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1
db | 2022-06-27 16:18:31.602 UTC [50] LOG: shutting down
db | 2022-06-27 16:18:31.650 UTC [48] LOG: database system is shut down
db | done
db | server stopped
db |
db | PostgreSQL init process complete; ready for start up.
db |
db | 2022-06-27 16:18:31.800 UTC [1] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
db | 2022-06-27 16:18:31.804 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db | 2022-06-27 16:18:31.804 UTC [1] LOG: listening on IPv6 address "::", port 5432
db | 2022-06-27 16:18:31.810 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db | 2022-06-27 16:18:31.818 UTC [62] LOG: database system was shut down at 2022-06-27 16:18:31 UTC
db | 2022-06-27 16:18:31.825 UTC [1] LOG: database system is ready to accept connections
redis | 1:C 27 Jun 2022 16:18:29.080 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis | 1:C 27 Jun 2022 16:18:29.080 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=1, just started
redis | 1:C 27 Jun 2022 16:18:29.080 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis | 1:M 27 Jun 2022 16:18:29.082 * Running mode=standalone, port=6379.
redis | 1:M 27 Jun 2022 16:18:29.082 # Server initialized
redis | 1:M 27 Jun 2022 16:18:29.082 * Ready to accept connections
web | Waiting for postgres...
web | PostgreSQL started
web | Waiting for redis...
web | redis started
web | [2022-06-27 16:18:33 +0000] [1] [INFO] Starting gunicorn 20.1.0
web | [2022-06-27 16:18:33 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
web | [2022-06-27 16:18:33 +0000] [1] [INFO] Using worker: sync
web | [2022-06-27 16:18:33 +0000] [8] [INFO] Booting worker with pid: 8
web | [2022-06-27 16:19:18 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:8)
web | [2022-06-27 18:19:18 +0200] [8] [INFO] Worker exiting (pid: 8)
web | [2022-06-27 16:19:18 +0000] [9] [INFO] Booting worker with pid: 9
web | Not Found: /ws/user_consumer/1/
web | Not Found: /ws/accueil/accueil/
web | Not Found: /ws/user_consumer/1/
web | Not Found: /ws/accueil/accueil/
And the docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cb3e489e0831 dermatology-project_web "/usr/src/app/entryp…" 35 minutes ago Up 35 minutes 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp web
aee14c8665d0 postgres "docker-entrypoint.s…" 35 minutes ago Up 35 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp db
94c29591b352 redis:5 "docker-entrypoint.s…" 35 minutes ago Up 35 minutes 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp redis
The build Dockerfile:
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apt-get update
RUN apt-get install -y libpq-dev python3-pip python-dev postgresql postgresql-contrib netcat
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# create the appropriate directories for staticfiles
# copy project
COPY . .
# staticfiles
RUN python manage.py collectstatic --no-input --clear
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
and the entrypoint that checks the connections:
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi

if [ "$CHANNEL" = "redis" ]
then
    echo "Waiting for redis..."
    while ! nc -z $REDIS_HOST $REDIS_PORT; do
        sleep 0.1
    done
    echo "redis started"
fi

#python manage.py flush --no-input
#python manage.py migrate

exec "$@"
I have also tried running the redis container separately, as before, while keeping the other working containers, but that doesn't work either. I have also tried running daphne on a different port and passing it the asgi application (daphne -p 8001 myproject.asgi:application), and it also didn't work.
Thank you
Managed a solution eventually.
To make it work I needed to run the WSGI and ASGI servers separately from each other, each in its own container. The previous "web" service that exposed the ports to the applications also had to be split in two, one per server, with an nginx proxy upstreaming to each respective port.
This was all thanks to this genius of a man:
https://github.com/pplonski/simple-tasks
Here he explains what I needed and more. He also uses Celery workers to manage the asynchronous task queue/job queue based on distributed message passing, which was a bit overkill for my project but beautiful.
New docker-compose:
version: '2'
services:
  nginx:
    container_name: nginx
    restart: always
    build: ./nginx
    ports:
      - 1337:80
    volumes:
      - static_volume:/usr/src/app/staticfiles/
      - media_volume:/usr/src/app/media/
    depends_on:
      - wsgiserver
      - asgiserver
  postgres:
    container_name: postgres
    restart: always
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - 5433:5432
    expose:
      - 5432
    environment:
      - ./.env.db
  redis:
    container_name: redis
    image: redis:5
    restart: unless-stopped
    ports:
      - 6378:6379
  wsgiserver:
    build:
      context: ./app
      dockerfile: Dockerfile
    container_name: wsgiserver
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    env_file:
      - ./.env.dev
    volumes:
      - ./app/:/usr/src/app/
      - static_volume:/usr/src/app/staticfiles/
      - media_volume:/usr/src/app/media/
    links:
      - postgres
      - redis
    expose:
      - 8000
  asgiserver:
    build:
      context: ./app
      dockerfile: Dockerfile
    container_name: asgiserver
    command: daphne core.asgi:application -b 0.0.0.0 -p 9000
    env_file:
      - ./.env.dev
    volumes:
      - ./app/:/usr/src/app/
    links:
      - postgres
      - redis
    expose:
      - 9000
volumes:
  static_volume:
  media_volume:
  postgres_data:
New entrypoint.sh:
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi

#python manage.py flush --no-input
#python manage.py migrate

exec "$@"
New nginx
nginx.conf:
server {
    listen 80;

    # gunicorn wsgi server
    location / {
        try_files $uri @proxy_api;
    }
    location @proxy_api {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://wsgiserver:8000;
    }

    # ASGI
    # map websocket connections to daphne
    location /ws {
        try_files $uri @proxy_to_ws;
    }
    location @proxy_to_ws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_pass http://asgiserver:9000;
    }

    # static and media files
    location /static/ {
        alias /usr/src/app/staticfiles/;
    }
    location /media/ {
        alias /usr/src/app/media/;
    }
}
Dockerfile for nginx:
FROM nginx:1.21
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
Note
If anyone is using this as a reference: this is not a production setup; further steps are needed. This article explains those steps, as well as securing the application on AWS with Docker and Let's Encrypt (see its conclusion):
https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/#conclusion

Run elastic beanstalk .ebextensions config for specific environments

I have the following config
0_logdna.config
commands:
  01_install_logdna:
    command: "/home/ec2-user/logdna.sh"
  02_restart_logdna:
    command: "service logdna-agent restart"
files:
  "/home/ec2-user/logdna.sh":
    mode: "000777"
    owner: root
    group: root
    content: |
      #!/bin/sh
      RACK_ENV=$(/opt/elasticbeanstalk/bin/get-config environment -k RACK_ENV)
      echo "$RACK_ENV"
      if [ $RACK_ENV == production ]
      then
        rpm --import https://repo.logdna.com/logdna.gpg
        echo "[logdna]
      name=LogDNA packages
      baseurl=https://repo.logdna.com/el6/
      enabled=1
      gpgcheck=1
      gpgkey=https://repo.logdna.com/logdna.gpg" | sudo tee /etc/yum.repos.d/logdna.repo
        LOGDNA_INGESTION_KEY=$(/opt/elasticbeanstalk/bin/get-config environment -k LOGDNA_INGESTION_KEY)
        yum -y install logdna-agent
        logdna-agent -k $LOGDNA_INGESTION_KEY # this is your unique Ingestion Key
        # /var/log is monitored/added by default (recursively), optionally add more dirs here
        logdna-agent -d /var/app/current/log/logstasher.log
        logdna-agent -d /var/app/containerfiles/logs/sidekiq.log
        # logdna-agent --hostname allows you to pass your AWS env metadata to LogDNA (remove # to uncomment the line below)
        # logdna-agent --hostname `{"Ref": "AWSEBEnvironmentName" }`
        # logdna -t option allows you to tag the host with tags (remove # to uncomment the line below)
        #logdna-agent -t `{"Ref": "AWSEBEnvironmentName" }`
        chkconfig logdna-agent on
        service logdna-agent start
      fi
I want to run this config only for my production environment, but each time I run it I get an error that says:
ERROR [Instance: i-091794aa00f84ab36,i-05b6d0824e7a0f5da] Command failed on instance. Return code: 1 Output: (TRUNCATED)...not found
/home/ec2-user/logdna.sh: line 17: logdna-agent: command not found
/home/ec2-user/logdna.sh: line 18: logdna-agent: command not found
error reading information on service logdna-agent: No such file or directory
logdna-agent: unrecognized service.
Not sure why this is not working. When I echo RACK_ENV I get production as the value, so I know that is correct, but why is it failing my if statement and why is it not working properly?
Your use of echo will lead to a malformed /etc/yum.repos.d/logdna.repo. To set it up properly, use the following (the indentation of the heredoc, terminated by EOL2, is important):
files:
  "/home/ec2-user/logdna.sh":
    mode: "000777"
    owner: root
    group: root
    content: |
      #!/bin/sh
      RACK_ENV=$(/opt/elasticbeanstalk/bin/get-config environment -k RACK_ENV)
      echo "$RACK_ENV"
      # Quote the variable and use POSIX "=", since this runs under /bin/sh.
      if [ "$RACK_ENV" = "production" ]
      then
        rpm --import https://repo.logdna.com/logdna.gpg
        cat >/etc/yum.repos.d/logdna.repo << 'EOL2'
      [logdna]
      name=LogDNA packages
      baseurl=https://repo.logdna.com/el6/
      enabled=1
      gpgcheck=1
      gpgkey=https://repo.logdna.com/logdna.gpg
      EOL2
        LOGDNA_INGESTION_KEY=$(/opt/elasticbeanstalk/bin/get-config environment -k LOGDNA_INGESTION_KEY)
        yum -y install logdna-agent
        logdna-agent -k $LOGDNA_INGESTION_KEY # this is your unique Ingestion Key
        # /var/log is monitored/added by default (recursively), optionally add more dirs here
        logdna-agent -d /var/app/current/log/logstasher.log
        logdna-agent -d /var/app/containerfiles/logs/sidekiq.log
        # logdna-agent --hostname allows you to pass your AWS env metadata to LogDNA (remove # to uncomment the line below)
        # logdna-agent --hostname `{"Ref": "AWSEBEnvironmentName" }`
        # logdna -t option allows you to tag the host with tags (remove # to uncomment the line below)
        #logdna-agent -t `{"Ref": "AWSEBEnvironmentName" }`
        chkconfig logdna-agent on
        service logdna-agent start
      fi
For further troubleshooting please check /var/log/cfn-init-cmd.log file.
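As a side note, a hedged alternative for the production-only gating: .ebextensions commands accept a test key, and the command runs only when the test exits with 0. A minimal sketch using the same get-config call as above:
# Hedged sketch: skip both commands entirely outside production.
# cfn-init evaluates "test" first; a non-zero exit skips "command".
commands:
  01_install_logdna:
    test: '[ "$(/opt/elasticbeanstalk/bin/get-config environment -k RACK_ENV)" = "production" ]'
    command: "/home/ec2-user/logdna.sh"
  02_restart_logdna:
    test: '[ "$(/opt/elasticbeanstalk/bin/get-config environment -k RACK_ENV)" = "production" ]'
    command: "service logdna-agent restart"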

Django can't connect to redis in docker

Sorry for my English. I have a Django project in which I want to use Celery for background tasks, and now I need to configure Docker for this library. This is my Dockerfile:
FROM python:3
MAINTAINER Alex2
RUN apt-get update
# Install wkhtmltopdf
RUN curl -L#o wk.tar.xz https://downloads.wkhtmltopdf.org/0.12/0.12.4/wkhtmltox-0.12.4_linux-generic-amd64.tar.xz \
&& tar xf wk.tar.xz \
&& cp wkhtmltox/bin/wkhtmltopdf /usr/bin \
&& cp wkhtmltox/bin/wkhtmltoimage /usr/bin \
&& rm wk.tar.xz \
&& rm -r wkhtmltox
RUN apt-get install -y cron
# for celery
ENV APP_USER user
ENV APP_ROOT /src
RUN groupadd -r ${APP_USER} \
&& useradd -r -m \
--home-dir ${APP_ROOT} \
-s /usr/sbin/nologin \
-g ${APP_USER} ${APP_USER}
# create directory for application source code
RUN mkdir -p /usr/django/app
COPY requirements.txt /usr/django/app/
WORKDIR /usr/django/app
RUN pip install -r requirements.txt
This is my docker-compose.dev:
version: '2.0'
services:
  web:
    build: .
    container_name: api_dev
    image: img/api_dev
    volumes:
      - .:/usr/django/app/
      - ./static:/static
    expose:
      - "8001"
    env_file: env/dev.env
    command: bash django_run.sh
  nginx:
    build: nginx
    container_name: ng_dev
    image: img/ng_dev
    ports:
      - "8001:8001"
    volumes:
      - ./nginx/dev_api.conf:/etc/nginx/conf.d/api.conf
      - .:/usr/django/app/
      - ./static:/static
    depends_on:
      - web
    links:
      - web:web
  db:
    image: postgres:latest
    container_name: pq01
    ports:
      - "5432:5432"
  redis:
    image: redis:latest
    container_name: rd01
    command: redis-server
    ports:
      - "8004:8004"
  celery:
    build: .
    container_name: cl01
    command: celery worker --app=myapp.celery
    volumes:
      - .:/usr/django/app/
    links:
      - db
      - redis
And I get this error:
cl01 | User information: uid=0 euid=0 gid=0 egid=0
cl01 |
cl01 | uid=uid, euid=euid, gid=gid, egid=egid,
cl01 | [2018-07-31 16:40:00,207: ERROR/MainProcess] consumer: Cannot connect to redis://redis:8004/0: Error 111 connecting to redis:8004. Connection refused..
cl01 | Trying again in 2.00 seconds...
cl01 |
cl01 | [2018-07-31 16:40:02,211: ERROR/MainProcess] consumer: Cannot connect to redis://redis:8004/0: Error 111 connecting to redis:8004. Connection refused..
cl01 | Trying again in 4.00 seconds...
cl01 |
cl01 | [2018-07-31 16:40:06,217: ERROR/MainProcess] consumer: Cannot connect to redis://redis:8004/0: Error 111 connecting to redis:8004. Connection refused..
cl01 | Trying again in 6.00 seconds...
I can't understand why it does not connect. This is my project settings file:
CELERY_BROKER_URL = 'redis://redis:8004/0'
CELERY_RESULT_BACKEND = 'redis://redis:8004/0'
Everything looks good, but maybe there is some setting I have not added in some file. Please help me solve this problem.
I think the port mapping causes the problem: redis-server listens on port 6379 inside the container, and the 8004:8004 mapping does not change that. So change the redis settings in the docker-compose.dev file as follows (ports option removed):
redis:
  image: redis:latest
  container_name: rd01
  command: redis-server
and in your settings.py
CELERY_BROKER_URL = 'redis://redis:6379/0'
CELERY_RESULT_BACKEND = 'redis://redis:6379/0'
You don't have to map the ports unless you are using them from your local environment.
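If host access is still wanted (for example, redis-cli from your machine), a hedged sketch of a mapping that keeps the container port correct:
# Hedged sketch: container-to-container traffic uses the service name and
# the container port (redis:6379) and ignores "ports:"; a mapping only
# exposes Redis to the host.
redis:
  image: redis:latest
  container_name: rd01
  command: redis-server
  ports:
    - "8004:6379"   # host port 8004 -> container port 6379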

(django) (docker) Django webserver won't start

I do
docker-compose up
I get
$ docker-compose up
Starting asynchttpproxy_postgres_1
Starting asynchttpproxy_web_1
Attaching to asynchttpproxy_postgres_1, asynchttpproxy_web_1
postgres_1 | LOG: database system was interrupted; last known up at 2017-05-01 18:52:29 UTC
postgres_1 | LOG: database system was not properly shut down; automatic recovery in progress
postgres_1 | LOG: invalid record length at 0/150F410: wanted 24, got 0
postgres_1 | LOG: redo is not required
postgres_1 | LOG: MultiXact member wraparound protections are now enabled
postgres_1 | LOG: database system is ready to accept connections
web_1 | Performing system checks...
web_1 |
web_1 | System check identified no issues (0 silenced).
My Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
My docker-compose.yml
postgres:
image: postgres:latest
volumes:
- ./code/
env_file:
- .env
volumes:
- /usr/src/app/static
expose:
- '5432'
web:
build: .
command: python3 manage.py runserver 0.0.0.0:8000
env_file:
- .env
volumes:
- .:/code
links:
- postgres
expose:
- '8000'
As you can see, the Django server won't start. What am I doing wrong? Thanks in advance.
First, run in another terminal:
docker ps
to check whether your server really did not start.
Then check whether your postgres database is ready when your Django application starts; if not, run a bash script that waits until the postgres connection is up before initializing the Django container.
wait-bd.sh
#!/bin/bash
while true; do
    COUNT_PG=`psql postgresql://username:password@localhost:5432/name_db -c '\l \q' | grep "name_db" | wc -l`
    if ! [ "$COUNT_PG" -eq "0" ]; then
        break
    fi
    echo "Waiting Database Setup"
    sleep 10
done
and in docker-compose.yml add the command key to the django container:
web:
  build: .
  # wrap in a shell so the && chaining actually works
  command: bash -c "./wait-bd.sh && python3 manage.py runserver 0.0.0.0:8000"
This script waits for the database setup to finish; only then does the Django container start the server.
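A hedged alternative, assuming Compose file format 2.1-2.4 (the condition form was dropped in the 3.x format and later restored in the Compose Spec): let Compose gate startup on a healthcheck instead of a polling script:
# Hedged sketch: web starts only after postgres reports healthy.
postgres:
  image: postgres:latest
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U postgres"]
    interval: 5s
    timeout: 5s
    retries: 10
web:
  build: .
  command: python3 manage.py runserver 0.0.0.0:8000
  depends_on:
    postgres:
      condition: service_healthy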

How do you get access to environment variables via Elasticbeanstalk configuration files (using Docker)?

For example, if I wish to mount a particular volume that is defined by an environment variable.
I ended up using the following code:
---
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/02my_setup.sh":
    owner: root
    group: root
    mode: "000755"
    content: |
      #!/bin/bash
      set -e
      . /opt/elasticbeanstalk/hooks/common.sh

      EB_CONFIG_APP_CURRENT=$(/opt/elasticbeanstalk/bin/get-config container -k app_deploy_dir)
      EB_SUPPORT_FILES_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k support_files_dir)

      # load env vars
      eval $($EB_SUPPORT_FILES_DIR/generate_env | sed 's/$/;/')
You can use /opt/elasticbeanstalk/bin/get-config environment in a bash script
Example:
# .ebextensions/efs_mount.config
commands:
  01_mount:
    command: "/tmp/mount-efs.sh"
files:
  "/tmp/mount-efs.sh":
    mode: "000755"
    content: |
      #!/bin/bash
      EFS_REGION=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_REGION')
      EFS_MOUNT_DIR=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_MOUNT_DIR')
      EFS_VOLUME_ID=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_VOLUME_ID')
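For get-config to return these values, they have to exist as environment properties; a hedged sketch of setting them in the same .ebextensions file (the EFS_* names come from the example above, the values are placeholders):
# Hedged sketch: define the environment properties the script reads.
option_settings:
  aws:elasticbeanstalk:application:environment:
    EFS_REGION: us-east-1
    EFS_MOUNT_DIR: /efs
    EFS_VOLUME_ID: fs-12345678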