Building Celery container in Django app

I want to learn how to set up a Django app with Celery. I am using the following resource.
I don't understand why we have to build the same image twice for both web and worker in this example. For example, see:
    # Django web server
    web:
      build:
        context: .
        dockerfile: Dockerfile
      hostname: web
      command: ./run_web.sh
      volumes:
        - .:/app  # mount current directory inside container
      ports:
        - "8000:8000"
      # set up links so that web knows about db, rabbit and redis
      links:
        - db
        - rabbit
        - redis
      depends_on:
        - db

    # Celery worker
    worker:
      build:
        context: .
        dockerfile: Dockerfile
      command: ./run_celery.sh
      volumes:
        - .:/app
      links:
        - db
        - rabbit
        - redis
      depends_on:
        - rabbit
Does that mean this docker-compose.yml will create two containers that are duplicates of each other? That seems a little excessive if the worker service only needs a Celery worker (why set up Django twice?). Maybe I am misunderstanding things. One option seems to be replacing the command in web with ./run_web.sh; ./run_celery.sh and setting up the proper links, so the worker service could be removed altogether.
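A note on the build concern: both build sections point at the same Dockerfile, so they produce the same image, and Docker's layer cache makes the second build essentially free. The two containers still differ at runtime because each runs a different command. To make the sharing explicit, a sketch like this (the myproject/app tag is illustrative, not from the linked resource) builds once and reuses the tag:

```yaml
# Sketch: tag the image built for web, then reuse that tag for worker.
# "myproject/app" is an illustrative image name.
web:
  build:
    context: .
    dockerfile: Dockerfile
  image: myproject/app      # compose builds and tags the image here
  command: ./run_web.sh
worker:
  image: myproject/app      # reuses the tag; no second build section
  command: ./run_celery.sh
```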
Can someone enlighten me? Thanks for reading.

Connect to RabbitMQ cloud using Django and Docker

I am trying to follow a "Microservices Application with Django, Flask, Mysql" guide on YouTube.
In this example a Django app uses RabbitMQ (cloud version) to send a message that must be consumed. The following is consumer.py:
    import pika

    params = pika.URLParameters('amqps_key')
    connection = pika.BlockingConnection(params)
    channel = connection.channel()
    channel.queue_declare(queue='admin')

    def callback(ch, method, properties, body):
        print('Received in admin')
        print(body)

    channel.basic_consume(queue='admin', on_message_callback=callback, auto_ack=True)
    print('Started consuming')
    channel.start_consuming()
    channel.close()
I have an endpoint in urls.py that calls producer.py.
This is the docker-compose file:
    version: '3.8'
    services:
      backend:
        build:
          context: .
          dockerfile: Dockerfile
        ports:
          - 8000:8000
        volumes:
          - .:/app
        depends_on:
          - db
        command: ["./wait-for-it.sh", "db:3306", "--", "python", "manage.py", "runserver", "0.0.0.0:8000"]
      db:
        image: mysql:5.7.36
        restart: always
        environment:
          MYSQL_DATABASE: tutorial_ms_admin
          MYSQL_USER: ms_admin
          MYSQL_PASSWORD: ms_admin
          MYSQL_ROOT_PASSWORD: ms_admin
        volumes:
          - .dbdata:/var/lib/mysql
        ports:
          - 3308:3306
Now: if I start docker-compose, then open a terminal, go inside the backend service, type python consumer.py in the service shell, and use Postman to hit the endpoint, I can see the message being consumed (both on screen and in the RabbitMQ cloud administration panel).
If I try to add python consumer.py as a command after the wait-for-it.sh invocation, it does not work. Messages are stuck in the RabbitMQ cloud administration panel, as if nothing consumes them.
Basically I would like not to have to open a terminal to start consumer.py, but to make it start automatically as soon as the backend service's runserver has started.
I cannot understand how to fix this.
EDIT:
Following the YouTube lesson I should have created a service to handle the call to RabbitMQ cloud, but that is the point where I got stuck. I simplified the problem; otherwise it would have been too difficult for me to explain clearly.
The amqps_key is the key I get from RabbitMQ Cloud to access a free channel to exchange messages, so

    params = pika.URLParameters('amqps_key')
    connection = pika.BlockingConnection(params)

is where the script knows where to access the channel.
The consumer.py script is inside the backend (Django) service structure. So after I launch docker-compose I open a terminal, enter the running service (docker-compose exec backend sh) and run

    python consumer.py

Then, using Postman, when I hit 127.0.0.1:8000/api/products I can see the message received in the terminal (so the message started from that endpoint, reached the RabbitMQ cloud channel, and consumer.py is able to listen to that channel and consume the message).
If I try to run python consumer.py at docker-compose startup it doesn't work. Nor does it work if I try to set up a service to handle that.
For the first solution I tried different approaches, as variations of:

    command: bash -c "python consumer.py && ./wait-for-it.sh db:3306 python manage.py runserver 0.0.0.0:800"
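One note on that variant: channel.start_consuming() in consumer.py blocks forever, so in a bash -c "python consumer.py && ..." chain the runserver part is never reached. A sketch that backgrounds the consumer instead (same wait-for-it.sh invocation as the original compose file; running two processes in one container is fragile, so this is illustrative only):

```yaml
# Sketch: "&" backgrounds the blocking consumer; exec makes runserver PID 1.
command: bash -c "python consumer.py & exec ./wait-for-it.sh db:3306 -- python manage.py runserver 0.0.0.0:8000"
```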
For the second approach, following that guide, I set up a service as such:

    version: '3.8'
    services:
      backend:
        build:
          context: .
          dockerfile: Dockerfile
        ports:
          - 8000:8000
        volumes:
          - .:/app
        depends_on:
          - db
        command: ["./wait-for-it.sh", "db:3306", "--", "python", "manage.py", "runserver", "0.0.0.0:8000"]
      queue:
        build:
          context: .
          dockerfile: Dockerfile
        depends_on:
          - backend
        command: 'python consumer.py'
      db:
        image: mysql:5.7.36
        restart: always
        environment:
          MYSQL_DATABASE: tutorial_ms_admin
          MYSQL_USER: ms_admin
          MYSQL_PASSWORD: ms_admin
          MYSQL_ROOT_PASSWORD: ms_admin
        volumes:
          - .dbdata:/var/lib/mysql
        ports:
          - 3308:3306

Can't connect to PostgreSQL in Docker (receiving fast shutdown request)

I am running Django/PostgreSQL in Docker containers. When I run docker-compose, the postgresql service starts and listens for a moment, then shuts down with the log:

    "listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"" then
    "database system was shut down".

I have read somewhere that it is receiving a postmaster fast shutdown request, but I don't know how to solve this. I have tried changing ports and other PostgreSQL env variables without success.
Here is my .yml for docker-compose:
    version: '3'

    volumes:
      postgres_data_local: {}
      postgres_backup_local: {}

    services:
      django:
        build:
          context: .
          dockerfile: ./compose/local/django/Dockerfile
        depends_on:
          - postgres
          - redis
        volumes:
          - .:/app
        env_file: .env
        ports:
          - "8000:8000"
        command: /start.sh
      postgres:
        image: postgres:10.1-alpine
        build:
          context: .
          dockerfile: ./compose/production/postgres/Dockerfile
        volumes:
          - postgres_data_local:/var/lib/postgresql/data
          - postgres_backup_local:/backups
        env_file: .env
      redis:
        image: redis:3.0
        ports:
          - '6379:6379'
My .env file looks something like this:
    # PostgreSQL conf
    POSTGRES_PASSWORD=p3wrd
    POSTGRES_USER=postgres
    POSTGRES_DB=postgres
    POSTGRES_HOST=127.0.0.1 #have tried localhost, 0.0.0.0 etc
    POSTGRES_PORT=5432
    DATABASE_URL=postgresql://postgres:p3wrd@127.0.0.1:5432/postgres

    # General settings
    READ_DOT_ENV_FILE=True
    SETTINGS_MODULE=config.settings.test
    SECRET_KEY=Sup3rS3cr3tP#22word
    DEBUG=True
    ALLOWED_HOSTS=*

    # URL for Redis
    REDIS_URL=redis://127.0.0.1:6379
Why are you setting POSTGRES_HOST to 127.0.0.1 or variants thereof? That means "localhost", which in a container means "the local container". Postgres isn't running inside the django container so that won't work.
Since you're using docker-compose, your containers are all running in a user-defined network. This means that Docker maintains a DNS server for you that maps service names to container ip addresses. In other words, you can simply set:
    POSTGRES_HOST=postgres
    DATABASE_URL=postgresql://postgres:p3wrd@postgres:5432/postgres
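As a sketch of what this looks like on the Django side (not the OP's actual settings module; the variable names follow the .env file above), the settings can read the same variables and default the host to the compose service name:

```python
import os

# Sketch of a settings.py DATABASES block: the host defaults to the
# "postgres" compose service name, which Docker's embedded DNS resolves
# to the container's IP on the shared user-defined network.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "postgres"),
        "USER": os.environ.get("POSTGRES_USER", "postgres"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
        "HOST": os.environ.get("POSTGRES_HOST", "postgres"),  # service name, not 127.0.0.1
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    }
}
```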

Selenium Grid Setup using Docker Compose on AWS ECS

Context:
I am trying to set up a Selenium grid to run my UI tests on CI. CI is Jenkins 2.0 and it runs on AWS ECS. When I create a Selenium grid using docker-compose and invoke the tests on my Mac (OS Sierra), it works perfectly.
When run on AWS ECS, it shows me: java.awt.AWTError: Can't connect to X11 window server using '99.0' as the value of the DISPLAY variable.
The test code itself is in a container, and using a bridge network I have added that container to the same network as the grid.
The docker compose looks something like this :
    version: '3'
    services:
      chromenode:
        image: selenium/node-chrome:3.4.0
        volumes:
          - /dev/shm:/dev/shm
          - /var/run/docker.sock:/var/run/docker.sock
        container_name: chromenode
        hostname: chromenode
        depends_on:
          - seleniumhub
        ports:
          - "5900:5900"
        environment:
          - "HUB_PORT_4444_TCP_ADDR=seleniumhub"
          - "HUB_PORT_4444_TCP_PORT=4444"
        networks:
          - grid_network
      seleniumhub:
        image: selenium/hub:3.4.0
        ports:
          - "4444:4444"
        container_name: seleniumhub
        hostname: seleniumhub
        networks:
          - grid_network
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
      testservice:
        build:
          context: .
          dockerfile: DockerfileTest
        networks:
          - grid_network
    networks:
      grid_network:
        driver: bridge
Please let me know if more info is required.
    unset DISPLAY

This helped me to solve the problem.
It helps in most cases (e.g. starting application servers or other Java-based tools) and avoids having to modify all those command lines.
It can also be convenient to add it to the .bash_profile of a dedicated app-server/tools user.
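As a minimal sketch of that workaround in a shell or entrypoint (the echo is only there to show the effect):

```shell
# Clear DISPLAY so Java AWT stops looking for an X11 server;
# most server-side tools then fall back to headless behaviour.
unset DISPLAY
echo "DISPLAY=${DISPLAY:-<unset>}"
```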
Can you please try this:

    - no_proxy=""

Ember + Docker slow during transpilation

I'm attempting to Dockerize my local development setup to make it much simpler to onboard new developers. Part of my setup is an Ember application. I've followed the instructions at this repository, but am running into huge delays when the Ember app starts up. It gets to the point where it says Serving on http://localhost:4200 and then there's a significant delay (on the order of tens of minutes) between that message and the output where Ember CLI displays how long everything took to compile. That said, the compilation time displayed is only a few minutes.
My docker-compose.yml file:
    version: '2'
    services:
      nginx:
        container_name: 'nginx'
        image: jwilder/nginx-proxy
        volumes:
          - /var/run/docker.sock:/tmp/docker.sock:ro
        ports:
          - "80:80"
          - "443:443"
      frontend:
        container_name: 'frontend'
        env_file: .env
        depends_on:
          - nginx
          - api
        environment:
          - VIRTUAL_HOST=*.scout.dev
          - VIRTUAL_PORT=4200
        image: scoutforpets/ember-cli
        command: bash -c "npm i && GIT_DIR=/tmp bower i --allow-root && ember s --watcher polling"
        volumes:
          - ./app-business/:/app/
          - ./app-business/ssl/:/etc/nginx/certs/
        ports:
          - "4200:4200"   # Default Port
          - "49152:49152" # Live Reload Port
      api:
        container_name: 'api'
        env_file: .env
        command: bash -c "npm i -s && npm run start-debug"
        image: node:6.3.1
        depends_on:
          - postgres
          - redis
        ports:
          - "3001:3001" # Default Port
          - "9229:9229" # Debug Port
        working_dir: /app/
        volumes:
          - ./api/:/app/
      postgres:
        container_name: 'postgres'
        image: scoutforpets/postgres
        ports:
          - "5432:5432"
      redis:
        container_name: 'redis'
        image: redis
        ports:
          - "6379:6379"
Note that my project is mounted from the file system (I'm running OSX Yosemite). I have heard some conversation around mounted file systems being slow, but am having trouble finding a definitive answer.
If someone is successfully using Ember + Docker, I'd love to hear what you're doing!
Thanks!
If you're using Docker for Mac, there is a known issue with the ember build command being slow.
Docs: https://docs.docker.com/docker-for-mac/troubleshoot/#/known-issues
There are a number of issues with the performance of directories bind-mounted with osxfs. In particular, writes of small blocks, and traversals of large directories are currently slow. Additionally, containers that perform large numbers of directory operations, such as repeated scans of large directory trees, may suffer from poor performance. Applications that behave in this way include:
rake
ember build
Symfony
Magento
As a work-around for this behavior, you can put vendor or third-party library directories in Docker volumes, perform temporary file system operations outside of osxfs mounts, and use third-party tools like Unison or rsync to synchronize between container directories and bind-mounted directories. We are actively working on osxfs performance using a number of different techniques and we look forward to sharing improvements with you soon.
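Applied to a setup like the one in the question, that workaround might look like this sketch (the volume name is illustrative): a named volume masks node_modules so its heavy I/O stays inside the Docker VM instead of going through osxfs:

```yaml
# Sketch: the named volume overrides the bind mount for /app/node_modules,
# keeping dependency I/O off the slow osxfs path.
version: '2'
services:
  frontend:
    image: scoutforpets/ember-cli
    volumes:
      - ./app-business/:/app/
      - node_modules:/app/node_modules  # named volume masks the bind-mounted path
volumes:
  node_modules: {}
```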

Docker, Django and Selenium - Selenium unable to connect

I have Docker configured to run Postgres and Django using docker-compose.yml and it is working fine.
The trouble I am having is with Selenium not being able to connect to the Django liveserver.
Now it makes sense (to me at least) that django has to access selenium to control the browser and selenium has to access django to access the server.
I have tried the docker 'ambassador' pattern with the following docker-compose.yml configuration from here: https://github.com/docker/compose/issues/666
    postgis:
      dockerfile: ./docker/postgis/Dockerfile
      build: .
      container_name: postgis
    django-ambassador:
      container_name: django-ambassador
      image: cpuguy83/docker-grand-ambassador
      volumes:
        - "/var/run/docker.sock:/var/run/docker.sock"
      command: "-name django -name selenium"
    django:
      dockerfile: ./docker/Dockerfile-dev
      build: .
      command: python /app/project/manage.py test my-app
      container_name: django
      volumes:
        - .:/app
      ports:
        - "8000:8000"
        - "8081:8081"
      links:
        - postgis
        - "django-ambassador:selenium"
      environment:
        - SELENIUM_HOST=http://selenium:4444/wd/hub
    selenium:
      container_name: selenium
      image: selenium/standalone-firefox-debug
      ports:
        - "4444:4444"
        - "5900:5900"
      links:
        - "django-ambassador:django"
When I check http://DOCKER-MACHINE-IP:4444/wd/hub/static/resource/hub.html
I can see that firefox starts, but all the tests fail, as firefox is unable to connect to django:

    Firefox can't establish a connection to the server at localhost:8081
I also tried this solution: https://github.com/docker/compose/issues/1991
However this is not working because I can't get django to connect to postgis and selenium at the same time:

    django.db.utils.OperationalError: could not translate host name "postgis" to address: Name or service not known
I tried using the networking feature as listed below
    postgis:
      dockerfile: ./docker/postgis/Dockerfile
      build: .
      container_name: postgis
      net: appnet
    django:
      dockerfile: ./docker/Dockerfile-dev
      build: .
      command: python /app/project/manage.py test foo
      container_name: django
      volumes:
        - .:/app
      ports:
        - "8000:8000"
        - "8081:8081"
      net: appnet
      environment:
        - SELENIUM_HOST=http://selenium:4444/wd/hub
    selenium:
      container_name: selenium
      image: selenium/standalone-firefox-debug
      ports:
        - "4444:4444"
        - "5900:5900"
      net: appnet
but the result is the same:

    Firefox can't establish a connection to the server at localhost:8081
So how can I get selenium to connect to django?
I have been playing around with this for days - would really appreciate any help.
More Info
Another weird thing is that when the test server runs without Docker (using my old virtualenv config), if I run ./manage.py test foo I can access the server through any browser at http://localhost:8081 and get served web pages, but I can't access the test server when I run the equivalent command under Docker. This is weird because I am mapping port 8081:8081 - is this related?
Note: I am using OSX and Docker v1.9.1
I ended up coming up with a better solution that didn't require me to hardcode the IP Address. Below is the configuration I used to run tests in django with docker.
Docker-compose file
    # docker-compose base file for everything
    version: '2'
    services:
      postgis:
        build:
          context: .
          dockerfile: ./docker/postgis/Dockerfile
        container_name: postgis
        volumes:
          # If you are using boot2docker, postgres data has to live in the VM for now until #581 fixed
          # for more info see here: https://github.com/boot2docker/boot2docker/issues/581
          - /data/dev/docker_cookiecutter/postgres:/var/lib/postgresql/data
      django:
        build:
          context: .
          dockerfile: ./docker/django/Dockerfile
        container_name: django
        volumes:
          - .:/app
        depends_on:
          - selenium
          - postgis
        environment:
          - SITE_DOMAIN=django
          - DJANGO_SETTINGS_MODULE=settings.my_dev_settings
        links:
          - postgis
          - mailcatcher
      selenium:
        container_name: selenium
        image: selenium/standalone-firefox-debug:2.52.0
        ports:
          - "4444:4444"
          - "5900:5900"
Dockerfile (for Django):

    ENTRYPOINT ["/docker/django/entrypoint.sh"]

In the entrypoint file:
    #!/bin/bash
    set -e

    # Now we need to get the ip address of this container so we can supply it as an
    # environment variable for django, so that selenium knows what url the test server is on.
    # Use the below, or alternatively you could have used
    # something like "$@ --liveserver=$THIS_DOCKER_CONTAINER_TEST_SERVER"
    if [[ "'$*'" == *"manage.py test"* ]]  # only add if 'manage.py test' is in the args
    then
        # get the container id
        THIS_CONTAINER_ID_LONG=`cat /proc/self/cgroup | grep 'docker' | sed 's/^.*\///' | tail -n1`
        # take the first 12 characters - that is the format used in /etc/hosts
        THIS_CONTAINER_ID_SHORT=${THIS_CONTAINER_ID_LONG:0:12}
        # search /etc/hosts for the line with the ip address, which will look like this:
        # 172.18.0.4    8886629d38e6
        THIS_DOCKER_CONTAINER_IP_LINE=`cat /etc/hosts | grep $THIS_CONTAINER_ID_SHORT`
        # take the ip address from that line
        THIS_DOCKER_CONTAINER_IP=`(echo $THIS_DOCKER_CONTAINER_IP_LINE | grep -o '[0-9]\+[.][0-9]\+[.][0-9]\+[.][0-9]\+')`
        # add the port you want on the end
        # Issues here include: django changing port if in use (I think)
        # and parallel tests needing multiple ports etc.
        THIS_DOCKER_CONTAINER_TEST_SERVER="$THIS_DOCKER_CONTAINER_IP:8081"
        echo "this docker container test server = $THIS_DOCKER_CONTAINER_TEST_SERVER"
        export DJANGO_LIVE_TEST_SERVER_ADDRESS=$THIS_DOCKER_CONTAINER_TEST_SERVER
    fi

    eval "$@"
In your django settings file:

    SITE_DOMAIN = 'django'

Then to run your tests:

    docker-compose run django ./manage.py test
Whenever you see localhost, first try to port-forward that port (at the VM level).
See "Connect to a Service running inside a docker container from outside":

    VBoxManage controlvm "default" natpf1 "tcp-port8081,tcp,,8081,,8081"
    VBoxManage controlvm "default" natpf1 "udp-port8081,udp,,8081,,8081"

(Replace default with the name of your docker-machine: see docker-machine ls.)
This differs for port mapping at the docker host level (which is your boot2docker-based Linux host)
The OP luke-aus confirms in the comments:
entering the IP address for the network solved the problem!
I've been struggling with this as well, and I finally found a solution that worked for me. You can try something like this:
    postgis:
      dockerfile: ./docker/postgis/Dockerfile
      build: .
    django:
      dockerfile: ./docker/Dockerfile-dev
      build: .
      command: python /app/project/manage.py test my-app
      volumes:
        - .:/app
      ports:
        - "8000:8000"
      links:
        - postgis
        - selenium  # django can access selenium:4444, selenium can access django:8081-8100
      environment:
        - SELENIUM_HOST=http://selenium:4444/wd/hub
        - DJANGO_LIVE_TEST_SERVER_ADDRESS=django:8081-8100  # this gives selenium the correct address
    selenium:
      image: selenium/standalone-firefox-debug
      ports:
        - "5900:5900"
I don't think you need to include port 4444 in the selenium config. That port is exposed by default, and there's no need to map it to the host machine, since the django container can access it directly via its link to the selenium container.
[Edit] I've found you don't need to explicitly expose the 8081 port of the django container either. Also, I used a range of ports for the test server, because if tests are run in parallel, you can get an "Address already in use" error, as discussed here.