I am trying to follow a "Microservices Application with Django, Flask, Mysql" guide on YouTube.
In this example, a Django app uses RabbitMQ (cloud version) to send a message that must be consumed. The following is consumer.py:
import pika

# connect to the RabbitMQ Cloud instance via its AMQPS URL
params = pika.URLParameters('amqps_key')
connection = pika.BlockingConnection(params)
channel = connection.channel()

channel.queue_declare(queue='admin')

def callback(ch, method, properties, body):
    print('Received in admin')
    print(body)

channel.basic_consume(queue='admin', on_message_callback=callback, auto_ack=True)

print('Started consuming')
channel.start_consuming()
channel.close()
I have an endpoint in urls.py that calls producer.py.
This is the docker-compose file:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
    command: ["./wait-for-it.sh", "db:3306", "--", "python", "manage.py", "runserver", "0.0.0.0:8000"]
  db:
    image: mysql:5.7.36
    restart: always
    environment:
      MYSQL_DATABASE: tutorial_ms_admin
      MYSQL_USER: ms_admin
      MYSQL_PASSWORD: ms_admin
      MYSQL_ROOT_PASSWORD: ms_admin
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 3308:3306
Now: if I start docker-compose, then open a terminal, go inside the backend service, and in the service shell type "python consumer.py", then when I hit the endpoint with Postman I can see the message being consumed (both on screen and in the RabbitMQ Cloud administration panel).
If I try to add python consumer.py as a command after the "wait-for-it.sh ..." part, it does not work. Messages are stuck in the RabbitMQ Cloud administration panel, as if nothing consumes them.
Basically, I would like not to have to open a terminal to start consumer.py, but to make it start automatically as soon as the backend service's runserver has started.
I cannot understand how to fix this.
EDIT:
Following the YouTube lesson I should have created a service to handle the call to RabbitMQ Cloud, but that is the point where I got stuck. I simplified the problem, otherwise it would have been too difficult for me to explain clearly.
The amqps_key is the key I get from RabbitMQ Cloud to access a free channel for exchanging messages, so
params = pika.URLParameters('amqps_key')
connection = pika.BlockingConnection(params)
is where the script knows where to access the channel.
The script consumer.py is inside the backend (Django) service structure. So after I launch docker-compose I open a terminal, enter the running service (docker-compose exec backend sh) and run
python consumer.py
Then, using Postman, when I hit 127.0.0.1:8000/api/products I can see the message received in the terminal (so the message starts from that endpoint, reaches the RabbitMQ Cloud channel, and consumer.py is able to listen to that channel and consume the message).
If I try to start python consumer.py at docker-compose startup, it doesn't work. Neither does it work if I try to set up a service that handles that.
For the first solution I tried different approaches, as variations of:
command: bash -c "python consumer.py && ./wait-for-it.sh db:3306 python manage.py runserver 0.0.0.0:800"
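As far as I understand the shell semantics, with && the second command only runs after python consumer.py exits, and start_consuming() blocks forever, so the runserver part never gets a chance to start (and swapping the order would leave the consumer never starting). A rough sketch of a single combined command, assuming consumer.py only needs the RabbitMQ connection and not the database, would have to background the consumer, something like:
command: bash -c "python consumer.py & ./wait-for-it.sh db:3306 -- python manage.py runserver 0.0.0.0:8000"
But even then, nothing restarts the consumer if it crashes, which is why I also tried the separate-service approach below.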
For the second approach, following that guide, I set up a service like this:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
    command: ["./wait-for-it.sh", "db:3306", "--", "python", "manage.py", "runserver", "0.0.0.0:8000"]
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - backend
    command: 'python consumer.py'
  db:
    image: mysql:5.7.36
    restart: always
    environment:
      MYSQL_DATABASE: tutorial_ms_admin
      MYSQL_USER: ms_admin
      MYSQL_PASSWORD: ms_admin
      MYSQL_ROOT_PASSWORD: ms_admin
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 3308:3306
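To see whether that queue service is actually running the consumer (a debugging sketch, assuming the service name queue from the file above), the compose logs should show the "Started consuming" print or a traceback:
docker-compose ps               # is the queue service up, or did it exit?
docker-compose logs -f queue    # output of python consumer.py inside that service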
I have a simple Flask app consisting of a web part and a database part. Code can be found here. Running this Flask app locally works perfectly well.
I'd like to have the app run on Docker. Therefore I created the following docker-compose file.
version: '3.6'
services:
  web:
    build: .
    depends_on:
      - db
    networks:
      - default
    ports:
      - 50000:5000
    volumes:
      - ./app:/usr/src/app/app
      - ./migrations:/usr/src/app/migrations
    restart: always
  db:
    environment:
      MYSQL_ROOT_PASSWORD: ***
      MYSQL_DATABASE: flask_employees
      MYSQL_USER: root
      MYSQL_PASSWORD: ***
    image: mariadb:latest
    networks:
      - default
    ports:
      - 33060:3306
    restart: always
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data: {}
Inside the Flask app, I'm setting the following: SQLALCHEMY_DATABASE_URI = 'mysql://root:***@localhost:33060/flask_employees'.
When I do docker-compose up, both containers are created and are running; however, when I go to http://localhost:50000 or http://127.0.0.1:50000 I get:
This site can’t be reached, 127.0.0.1 refused to connect.
You have to run your Flask app on host "0.0.0.0" in order to be able to map ports from the Docker container:
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
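As a quick check of the difference (a sketch, assuming the compose file above with its 50000:5000 mapping): while the app binds to 127.0.0.1 inside the container, the published port refuses connections from the host; after switching to 0.0.0.0 and rebuilding, it answers:
# while the app binds to 127.0.0.1 inside the container
curl http://localhost:50000/      # connection refused

# rebuild with host='0.0.0.0' and try again
docker-compose up -d --build web
curl http://localhost:50000/      # now reaches Flask on port 5000 in the container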
I am creating a web service which uses React for the frontend and Django REST for the backend. Both are running in separate Docker containers. My docker-compose file looks like this:
services:
  db:
    image: postgres
    volumes:
      - ./config/load.sql:/docker-entrypoint-initdb.d/init.sql
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    image: gudmundurorri4/hoss-web
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    stdin_open: true
    ports:
      - "8000:8000"
    depends_on:
      - db
  frontend:
    image: gudmundurorri4/hoss-frontend
    stdin_open: true
    ports:
      - "3000:3000"
    depends_on:
      - "web"
Both the web and frontend containers work fine. The backend works when I open it in the browser, and when I execute a GET request to http://web:8000 from within the frontend container I get the correct response. However, when I execute a GET request from my React app using the same address (http://web:8000), it always fails with the error
net::ERR_NAME_NOT_RESOLVED
The Compose service names are only a thing within the containers themselves (set up using local DNS magic).
From the viewpoint of your (or a user's) browser, there's no such thing as http://web:8000.
You'll have to either
publish the backend with a port and DNS name that's accessible to your users, e.g. https://backend.myservice.kittens/ and have the frontend that runs in your browser request things from there (be mindful of CORS).
or have the frontend service proxy those requests through to the backend (that it can access via http://web:8000/). If you're using webpack-dev-server, it has this capability built-in.
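As a concrete illustration of that scope (a sketch, assuming the compose file above and that curl is available in the frontend image), the service name only resolves inside the compose network, never in the user's browser or on the host:
# on the host (where the browser also runs): the name "web" does not exist
curl http://web:8000/          # fails: could not resolve host
curl http://localhost:8000/    # works, because "8000:8000" is published

# inside the frontend container: compose DNS resolves the service name
docker-compose exec frontend curl http://web:8000/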
I want to learn how to set up a Django app with Celery. I am using the following resource.
I don't understand why we have to build the same image twice for both web and worker in this example. For example, see:
# Django web server
web:
  build:
    context: .
    dockerfile: Dockerfile
  hostname: web
  command: ./run_web.sh
  volumes:
    - .:/app  # mount current directory inside container
  ports:
    - "8000:8000"
  # set up links so that web knows about db, rabbit and redis
  links:
    - db
    - rabbit
    - redis
  depends_on:
    - db

# Celery worker
worker:
  build:
    context: .
    dockerfile: Dockerfile
  command: ./run_celery.sh
  volumes:
    - .:/app
  links:
    - db
    - rabbit
    - redis
  depends_on:
    - rabbit
Does that mean that this docker-compose.yml will create two containers that are duplicates of each other? It seems a little excessive if I just need a Celery worker for worker (why set up Django twice?). Maybe I am misunderstanding things. It feels like one option is to just replace the command in web with ./run_web.sh; ./run_celery.sh and set up the proper links. That way you could remove worker altogether.
Can someone enlighten me? Thanks for reading.
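One detail worth spelling out about the ./run_web.sh; ./run_celery.sh idea: with ;, the second script only starts once the first one exits, and a web server normally never exits. A rough sketch of what a combined command would have to look like instead (assuming run_web.sh blocks on runserver, which isn't shown in the post; this is a hypothetical script, not the tutorial's) is:
#!/bin/sh
# hypothetical combined start script: worker in the background,
# web server in the foreground as the container's main process
./run_celery.sh &
exec ./run_web.sh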
Context:
I am trying to set up a Selenium grid to run my UI tests on CI. The CI is Jenkins 2.0 and it runs on AWS ECS. When I create a Selenium grid using the docker-compose file and invoke the tests on my Mac (OS Sierra), it works perfectly.
When run on AWS ECS, it shows me: java.awt.AWTError: Can't connect to X11 window server using '99.0' as the value of the DISPLAY variable.
The test code itself is in a container, and using a bridge network I have added that container to the same network as the grid.
The docker-compose file looks something like this:
version: '3'
services:
  chromenode:
    image: selenium/node-chrome:3.4.0
    volumes:
      - /dev/shm:/dev/shm
      - /var/run/docker.sock:/var/run/docker.sock
    container_name: chromenode
    hostname: chromenode
    depends_on:
      - seleniumhub
    ports:
      - "5900:5900"
    environment:
      - "HUB_PORT_4444_TCP_ADDR=seleniumhub"
      - "HUB_PORT_4444_TCP_PORT=4444"
    networks:
      - grid_network
  seleniumhub:
    image: selenium/hub:3.4.0
    ports:
      - "4444:4444"
    container_name: seleniumhub
    hostname: seleniumhub
    networks:
      - grid_network
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  testservice:
    build:
      context: .
      dockerfile: DockerfileTest
    networks:
      - grid_network
networks:
  grid_network:
    driver: bridge
Please let me know if more info is required.
unset DISPLAY helped me to solve the problem.
This helps in most cases (e.g. starting application servers or other Java-based tools) and avoids having to modify all that many command lines.
It can also be convenient to add it to the .bash_profile of a dedicated app-server/tools user.
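For example (a small sketch, assuming a dedicated app-server/tools user whose login shell reads ~/.bash_profile):
# one-off, in the current shell
unset DISPLAY

# or persist it for that user
echo 'unset DISPLAY' >> ~/.bash_profile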
Can you please try this
- no_proxy=""
I have Docker configured to run Postgres and Django using docker-compose.yml and it is working fine.
The trouble I am having is with Selenium not being able to connect to the Django liveserver.
Now it makes sense (to me at least) that django has to access selenium to control the browser and selenium has to access django to access the server.
I have tried using the docker 'ambassador' pattern using the following configuration for docker-compose.yml from here: https://github.com/docker/compose/issues/666
postgis:
  dockerfile: ./docker/postgis/Dockerfile
  build: .
  container_name: postgis
django-ambassador:
  container_name: django-ambassador
  image: cpuguy83/docker-grand-ambassador
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
  command: "-name django -name selenium"
django:
  dockerfile: ./docker/Dockerfile-dev
  build: .
  command: python /app/project/manage.py test my-app
  container_name: django
  volumes:
    - .:/app
  ports:
    - "8000:8000"
    - "8081:8081"
  links:
    - postgis
    - "django-ambassador:selenium"
  environment:
    - SELENIUM_HOST=http://selenium:4444/wd/hub
selenium:
  container_name: selenium
  image: selenium/standalone-firefox-debug
  ports:
    - "4444:4444"
    - "5900:5900"
  links:
    - "django-ambassador:django"
When I check http://DOCKER-MACHINE-IP:4444/wd/hub/static/resource/hub.html
I can see that Firefox starts, but all the tests fail as Firefox is unable to connect to Django:
'Firefox can't establish a connection to the server at localhost:8081'
I also tried this solution here https://github.com/docker/compose/issues/1991
however this is not working because I can't get Django to connect to postgis and selenium at the same time:
'django.db.utils.OperationalError: could not translate host name "postgis" to address: Name or service not known'
I tried using the networking feature as listed below
postgis:
  dockerfile: ./docker/postgis/Dockerfile
  build: .
  container_name: postgis
  net: appnet
django:
  dockerfile: ./docker/Dockerfile-dev
  build: .
  command: python /app/project/manage.py test foo
  container_name: django
  volumes:
    - .:/app
  ports:
    - "8000:8000"
    - "8081:8081"
  net: appnet
  environment:
    - SELENIUM_HOST=http://selenium:4444/wd/hub
selenium:
  container_name: selenium
  image: selenium/standalone-firefox-debug
  ports:
    - "4444:4444"
    - "5900:5900"
  net: appnet
but the result is the same
'Firefox can't establish a connection to the server at localhost:8081'
So how can I get selenium to connect to django?
I have been playing around with this for days - would really appreciate any help.
More Info
Another weird thing: when the test server is running without Docker (using my old virtualenv config etc.), if I run ./manage.py test foo I can access the server through any browser at http://localhost:8081 and get served web pages, but I can't access the test server when I run the equivalent command under Docker. This is weird because I am mapping port 8081:8081 - is this related?
Note: I am using OSX and Docker v1.9.1
I ended up coming up with a better solution that didn't require me to hardcode the IP address. Below is the configuration I used to run tests in Django with Docker.
Docker-compose file
# docker-compose base file for everything
version: '2'
services:
  postgis:
    build:
      context: .
      dockerfile: ./docker/postgis/Dockerfile
    container_name: postgis
    volumes:
      # If you are using boot2docker, postgres data has to live in the VM for now until #581 fixed
      # for more info see here: https://github.com/boot2docker/boot2docker/issues/581
      - /data/dev/docker_cookiecutter/postgres:/var/lib/postgresql/data
  django:
    build:
      context: .
      dockerfile: ./docker/django/Dockerfile
    container_name: django
    volumes:
      - .:/app
    depends_on:
      - selenium
      - postgis
    environment:
      - SITE_DOMAIN=django
      - DJANGO_SETTINGS_MODULE=settings.my_dev_settings
    links:
      - postgis
      - mailcatcher
  selenium:
    container_name: selenium
    image: selenium/standalone-firefox-debug:2.52.0
    ports:
      - "4444:4444"
      - "5900:5900"
Dockerfile (for Django)
ENTRYPOINT ["/docker/django/entrypoint.sh"]
In the entrypoint file:
#!/bin/bash
set -e
# Now we need to get the ip address of this container so we can supply it as an environment
# variable for django so that selenium knows what url the test server is on
# Use below or alternatively you could have used
# something like "$@ --liveserver=$THIS_DOCKER_CONTAINER_TEST_SERVER"
if [[ "'$*'" == *"manage.py test"* ]] # only add if 'manage.py test' is in the args
then
    # get the container id
    THIS_CONTAINER_ID_LONG=`cat /proc/self/cgroup | grep 'docker' | sed 's/^.*\///' | tail -n1`
    # take the first 12 characters - that is the format used in /etc/hosts
    THIS_CONTAINER_ID_SHORT=${THIS_CONTAINER_ID_LONG:0:12}
    # search /etc/hosts for the line with the ip address, which will look like this:
    # 172.18.0.4    8886629d38e6
    THIS_DOCKER_CONTAINER_IP_LINE=`cat /etc/hosts | grep $THIS_CONTAINER_ID_SHORT`
    # take the ip address from that line
    THIS_DOCKER_CONTAINER_IP=`(echo $THIS_DOCKER_CONTAINER_IP_LINE | grep -o '[0-9]\+[.][0-9]\+[.][0-9]\+[.][0-9]\+')`
    # add the port you want on the end
    # Issues here include: django changing port if in use (I think)
    # and parallel tests needing multiple ports etc.
    THIS_DOCKER_CONTAINER_TEST_SERVER="$THIS_DOCKER_CONTAINER_IP:8081"
    echo "this docker container test server = $THIS_DOCKER_CONTAINER_TEST_SERVER"
    export DJANGO_LIVE_TEST_SERVER_ADDRESS=$THIS_DOCKER_CONTAINER_TEST_SERVER
fi
eval "$@"
In your django settings file
SITE_DOMAIN = 'django'
Then to run your tests
docker-compose run django ./manage.py test
Whenever you see localhost, first try to port-forward that port (at the VM level).
See "Connect to a Service running inside a docker container from outside"
VBoxManage controlvm "default" natpf1 "tcp-port8081,tcp,,8081,,8081"
VBoxManage controlvm "default" natpf1 "udp-port8081,udp,,8081,,8081"
(Replace default with the name of your docker-machine: see docker-machine ls)
This differs from port mapping at the docker host level (which is your boot2docker-based Linux host).
The OP luke-aus confirms in the comments:
entering the IP address for the network solved the problem!
I've been struggling with this as well, and I finally found a solution that worked for me. You can try something like this:
postgis:
  dockerfile: ./docker/postgis/Dockerfile
  build: .
django:
  dockerfile: ./docker/Dockerfile-dev
  build: .
  command: python /app/project/manage.py test my-app
  volumes:
    - .:/app
  ports:
    - "8000:8000"
  links:
    - postgis
    - selenium  # django can access selenium:4444, selenium can access django:8081-8100
  environment:
    - SELENIUM_HOST=http://selenium:4444/wd/hub
    - DJANGO_LIVE_TEST_SERVER_ADDRESS=django:8081-8100  # this gives selenium the correct address
selenium:
  image: selenium/standalone-firefox-debug
  ports:
    - "5900:5900"
I don't think you need to include port 4444 in the selenium config. That port is exposed by default, and there's no need to map it to the host machine, since the django container can access it directly via its link to the selenium container.
[Edit] I've found you don't need to explicitly expose the 8081 port of the django container either. Also, I used a range of ports for the test server, because if tests are run in parallel, you can get an "Address already in use" error, as discussed here.