I have a Vagrant phusion/ubuntu-14.04 virtual machine on which I installed docker and docker-compose. Using docker-compose, I launch a Flask web service and a db service (plus a data container for it).
I use a manage.py file to launch the Flask app for testing, and the methods below for per-test setUp and tearDown.
I usually do docker-compose up so that all containers start and I can see their stdout. The thing I want to achieve is that on each code change the Flask app is reloaded, even if the change breaks the code, and that the container doesn't die but keeps listening for further code changes.
Right now, if the code change doesn't break the code, the app is reloaded (achieved by setting Flask's DEBUG to True and by the gunicorn --reload flag in docker-compose.yaml).
As is probably apparent, I am new to Docker.
Here are the relevant files: the Vagrantfile, docker-compose.yaml and Dockerfile.
config.vm.box = "phusion/ubuntu-14.04-amd64"
config.vm.network "private_network", ip: "192.168.33.69"
config.vm.synced_folder ".", "/vagrant_data"
# install docker via inline shell provisioning
docker-compose.yaml
web:
restart: always #not sure if this actually helps somehow
build: .
ports:
- "80:80"
expose:
- "80"
links:
- postgres:postgres
volumes:
- .:/usr/src/app/
env_file: .env
command: /usr/local/bin/gunicorn --reload -w 2 -b :80 hello:app
# below are the db and data services
The Dockerfile used to build the web service is simply FROM python:3.5.1-onbuild.
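For context, the -onbuild variants of the official Python image bundle the usual build steps as ONBUILD triggers; the effective build is roughly equivalent to the sketch below (an approximation — check the official python:3.5-onbuild Dockerfile if the details matter):
# rough equivalent of what python:3.5.1-onbuild does when the child image is built
FROM python:3.5.1
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# the ONBUILD triggers copy requirements first, install them, then copy the application code
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt
COPY . /usr/src/app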
This is the folder structure:
|-- docker-compose.yaml
|-- Dockerfile
|-- hello.py
|-- Procfile          # heroku stuff
|-- requirements.txt
`-- Vagrantfile
And if I make an invalid code change, here's the log:
web_1 | File "/usr/local/lib/python3.5/traceback.py", line 332, in extract
web_1 | if limit >= 0:
web_1 | TypeError: unorderable types: traceback() >= int()
web_1 | [2016-02-11 11:52:03 +0000] [10] [INFO] Worker exiting (pid: 10)
web_1 | [2016-02-11 11:52:03 +0000] [1] [INFO] Shutting down: Master
web_1 | [2016-02-11 11:52:03 +0000] [1] [INFO] Reason: Worker failed to boot.
vagrantdata_web_1 exited with code 0
I think the restart: always is working, but the logs command doesn't re-attach to the new container. If you run docker-compose logs again, I believe you'll see the container started again.
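A quick way to verify this: after a breaking change, check the container list and re-read the web service's logs (on newer docker-compose releases you can also follow them with -f/--follow):
docker-compose ps          # the web container should show as Up again once restart: always kicks in
docker-compose logs web    # re-read the restarted container's output
docker-compose logs -f web # or follow it, if your docker-compose version supports --follow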
Related
Celery is not able to connect to PostgreSQL in my docker service, and I'm getting this error:
could not connect to server: Cannot assign requested address
celery_1 | Is the server running on host "localhost" (::1) and accepting
celery_1 | TCP/IP connections on port 5432?
PostgreSQL itself works fine as the database and I am able to perform actions; the problem only occurs with Celery.
I now have two cases with this celery service:
celery:
build:
context: ./
dockerfile: Dockerfile
command: celery -A sampleproject worker -l info
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_HOST=${POSTGRES_HOST}
- POSTGRES_PORT=${POSTGRES_PORT}
volumes:
- .:/usr/src/app/
depends_on:
- database
- app
- redis
When I pass all the PostgreSQL variables in the celery environment it works; when I delete them it doesn't. Why is this happening, and how can I resolve it so that I can run celery the proper way?
Found this while searching for the same problem; I'll share my solution for anyone else who lands here.
For me, the issue was that ${POSTGRES_HOST} was set to localhost. It should be set to database (the name of the Postgres service in the compose file).
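In compose terms that means pointing the host variable at the database service name rather than at localhost, something like this (a sketch based on the service names shown in the question):
celery:
  environment:
    # use the compose service name; localhost inside the celery container
    # refers to the celery container itself, not to Postgres
    - POSTGRES_HOST=database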
In my case, a Postgres database serves as the main Django backend database, and additional Postgres initialization is required. The problem is that the postgres service reports itself ready before the additional database initialization has run; as a result, the dependent Django app starts before the database is initialized.
Is there any way to configure the postgres service so that it only becomes ready after the additional initialization?
docker-compose.yml:
version: "3.3"
services:
postgres:
image: library/postgres:11
volumes:
- some_folder:/docker-entrypoint-initdb.d
django_app:
image: custom_django_image:latest
volumes:
- $PWD:/app
ports:
- 8000:8000
depends_on:
- postgres
Your some_folder, which is mapped to the Postgres container's /docker-entrypoint-initdb.d location, is where you should place your initialization scripts (and it seems you are already doing that). As long as there is no data in a volume attached to the Postgres container's /var/lib/postgresql/data directory (persisted data), Postgres will, upon container creation, run those scripts before switching to a ready state. The scripts must be either .sh or .sql files (documentation). I'll show a typical workflow I use:
I have this script which creates multiple databases:
#!/bin/bash
# create-multiple-databases.sh
set -e
set -u
function create_user_and_database() {
local database=$1
echo " Creating user and database '$database'"
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
CREATE USER $database;
CREATE DATABASE $database;
GRANT ALL PRIVILEGES ON DATABASE $database TO $database;
EOSQL
}
if [ -n "$POSTGRES_MULTIPLE_DATABASES" ]; then
echo "Multiple database creation requested: $POSTGRES_MULTIPLE_DATABASES"
for db in $(echo $POSTGRES_MULTIPLE_DATABASES | tr ',' ' '); do
create_user_and_database $db
done
echo "Multiple databases created"
fi
In the docker-compose.yml file I set:
environment:
- POSTGRES_MULTIPLE_DATABASES=dev,test
Now when running docker-compose up, I see the output from the script, and then finally:
postgres_1 | CREATE DATABASE
postgres_1 | GRANT
postgres_1 | Multiple databases created
...
postgres_1 | PostgreSQL init process complete; ready for start up.
postgres_1 |
postgres_1 | 2020-05-23 16:18:40.055 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2020-05-23 16:18:40.056 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2020-05-23 16:18:40.056 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2020-05-23 16:18:40.063 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
Now, inside my Django application, in a typical settings.py file (or a related settings module), I add a while loop that only completes once the database is ready. Once it completes, Django continues its initialization and starts the server.
import logging
import time

from django.db import connections
from django.db.utils import OperationalError

LOGGER = logging.getLogger(__name__)

while True:
    conn = connections["default"]  # or some other key in `DATABASES`
    try:
        conn.cursor()  # any connection attempt will do
        LOGGER.info("Postgres Ready")
        break
    except OperationalError:
        LOGGER.warning("Postgres Not Ready...")
        time.sleep(0.5)
Provided I have understood your question correctly, I hope this gives you the information you need.
I would highly suggest looking at the healthcheck support in docker-compose.yml.
You can set the healthcheck command to a PostgreSQL-specific check; only once the health check is passing will Django start sending requests to the PostgreSQL container.
Please consider using the file below.
version: "3.3"
services:
postgres:
image: library/postgres:11
volumes:
- some_folder:/docker-entrypoint-initdb.d
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
django_app:
image: custom_django_image:latest
volumes:
- $PWD:/app
ports:
- 8000:8000
depends_on:
- postgres
Ref: https://docs.docker.com/compose/compose-file/#healthcheck
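One caveat: with the plain version 3.x file format above, depends_on on its own does not wait for the health check. If your docker-compose setup supports it (the 2.1-2.4 file formats, or the newer Compose Specification), you can make the dependency explicit, roughly like this:
  django_app:
    image: custom_django_image:latest
    depends_on:
      postgres:
        condition: service_healthy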
You can take a look at how Django Cookiecutter Template does it: https://github.com/pydanny/cookiecutter-django
What they do is use an entrypoint/startup script in their production Docker setup.
This entrypoint script checks whether Postgres is up and running and only then proceeds; otherwise it keeps retrying the connection in a loop. You can write a similar script based on that snippet to check whether your Postgres instance is up and running correctly. What counts as "ready" you can decide in your docker-entrypoint-initdb.d script, which should perform all the operations it requires. Once it's done, Postgres starts up and the script on the Django side can proceed because it can now reach the Postgres instance.
It works well for me; I do some other checks in the entrypoint script before proceeding to start the server itself.
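For reference, a minimal sketch of such a wait loop in an entrypoint script, assuming the Postgres client tools are installed in the Django image and the compose service is named postgres (adjust names to your setup):
#!/bin/sh
# keep retrying until the postgres service accepts connections
until pg_isready -h postgres -p 5432 -U "$POSTGRES_USER"; do
  echo "Waiting for postgres..."
  sleep 1
done
# then hand over to the container's real command
exec "$@"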
I'm currently setting up a build/test pipeline for my app (Django) using Google Cloud Build (and testing locally using cloud-build-local).
In order to run the tests properly I need to start a MySQL dependency (I use docker-compose for this). The issue is that when running docker-compose in a Cloud Build step, the database init scripts are not run properly; I get:
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/0-init.sql
ERROR: Can't initialize batch_readline - may be the input source is a directory or a block device.
(running docker-compose outside of google-cloud-build works properly)
Here's my docker-compose file:
version: '3.3'
services:
mysql:
image: mysql:5.7
restart: always
environment:
MYSQL_DATABASE: 'dev'
MYSQL_USER: 'dev'
MYSQL_PASSWORD: 'dev'
MYSQL_ROOT_PASSWORD: 'root'
ports:
- '3306:3306'
expose:
- '3306'
volumes:
- reports-db:/var/lib/mysql-reports
- ./dev/databases/init.sql:/docker-entrypoint-initdb.d/0-init.sql
- ... (other init scripts)
volumes:
reports-db:
And cloudbuild.yaml:
steps:
...
- id: 'tests-dependencies'
name: 'docker/compose:1.24.1'
args: ['up', '-d']
...
Files being organized like this:
parent_dir/
dev/
databases/
init.sql
cloudbuild.yaml
docker-compose.yml
...
(all commands are run from parent_dir/)
When I run
cloud-build-local --config=cloudbuild.yaml --dryrun=false .
I get:
...
Step #2 - "tests-dependencies": mysql_1 | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/0-init.sql
Step #2 - "tests-dependencies": mysql_1 | ERROR: Can't initialize batch_readline - may be the input source is a directory or a block device.
...
Given that running docker-compose up directly works properly, I suspect that the way the volumes are mounted is incorrect, but I can't find why/how.
If anyone has any input on this it will be really useful :)
Thanks in advance.
Looks like it's an issue specific to cloud-build-local; it works properly on GCP.
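If you want to double-check the volume suspicion, one thing to try is inspecting what actually got mounted inside the MySQL container; when the host path isn't visible to the Docker daemon at mount time, Docker creates an empty directory at the target, which would match the "may be the input source is a directory" error. Something like:
# assuming the compose service is called mysql, as in the file above
docker-compose exec mysql ls -la /docker-entrypoint-initdb.d/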
I have a Django app which runs on Gunicorn, and is managed by SupervisorD which is managed by Ansible.
I want Django to read the DJANGO_SECRET_KEY variable from the environment, since I don't want to store my secret key in a config file or VCS. For that I read the key from the environment in my settings.py:
SECRET_KEY = os.environ['DJANGO_SECRET_KEY']
Looking at Supervisor docs it says:
Note that the subprocess will inherit the environment variables of the shell used to start “supervisord” except for the ones overridden here. See Subprocess Environment.
Here's my supervisor.conf:
[program:gunicorn]
command=/.../.virtualenvs/homepage/bin/gunicorn homepage.wsgi -w 1 --bind localhost:8001 --pid /tmp/gunicorn.pid
directory=/.../http/homepage
When I set the variable and run Gunicorn command from the shell, it starts up just fine:
$ DJANGO_SECRET_KEY=XXX /.../.virtualenvs/homepage/bin/gunicorn homepage.wsgi -w 1 --bind localhost:8001 --pid /tmp/gunicorn.pid
However when I set the variable in the shell and restart the Supervisor service my app fails to start with error about not found variable:
$ DJANGO_SECRET_KEY=XXX supervisorctl restart gunicorn
gunicorn: ERROR (not running)
gunicorn: ERROR (spawn error)
Looking at Supervisor error log:
File "/.../http/homepage/homepage/settings.py", line 21, in <module>
SECRET_KEY = os.environ['DJANGO_SECRET_KEY']
File "/.../.virtualenvs/homepage/lib/python2.7/UserDict.py", line 40, in __getitem__
raise KeyError(key)
KeyError: 'DJANGO_SECRET_KEY'
[2017-08-27 08:22:09 +0000] [19353] [INFO] Worker exiting (pid: 19353)
[2017-08-27 08:22:09 +0000] [19349] [INFO] Shutting down: Master
[2017-08-27 08:22:09 +0000] [19349] [INFO] Reason: Worker failed to boot.
I have also tried restarting the supervisor service, but the same error occurs:
$ DJANGO_SECRET_KEY=XXX systemctl restart supervisor
...
INFO exited: gunicorn (exit status 3; not expected)
My question is: how do I make Supervisor pass environment variables to its child processes?
Create an executable file similar to this and try to start it manually,
i.e. create the file /home/user/start_django.sh and copy in the script below.
You need to fill in DJANGODIR and make other adjustments according to your case; you may also need to adjust permissions accordingly.
#!/bin/bash
DJANGODIR=/.../.../..
ENVBIN=/.../.virtualenvs/homepage/bin
# Activate the virtual environment
cd $DJANGODIR
source $ENVBIN/activate
# export so the variable is visible to the gunicorn child process
export DJANGO_SECRET_KEY=XXX
# define other env variables if you need
# Start your Django
exec gunicorn homepage.wsgi -w 1 --bind localhost:8001 --pid /tmp/gunicorn.pid
If it starts manually then just use this file in your conf.
[program:django_project]
command = /home/user/start_django.sh
user = {your user}
stdout_logfile = /var/log/django.log
redirect_stderr = true
# you can also try to define environment variables in this conf
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8,DJANGO_SECRET_KEY=XXX
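If you define the variables via environment= in the Supervisor config, note that per the Supervisor docs values containing non-alphanumeric characters should be quoted; applied to the original program section it would look roughly like this:
[program:gunicorn]
command=/.../.virtualenvs/homepage/bin/gunicorn homepage.wsgi -w 1 --bind localhost:8001 --pid /tmp/gunicorn.pid
directory=/.../http/homepage
; quote values that contain dots, dashes or other non-alphanumeric characters
environment=DJANGO_SECRET_KEY="XXX",LANG="en_US.UTF-8",LC_ALL="en_US.UTF-8"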
A reference that might be helpful: the Supervisor documentation on subprocess environment variables.
OK, figured it out myself. It turns out Ansible has a feature called Vault, which is meant for exactly this kind of job: encrypting secrets.
I added the vaulted secret key to Ansible's host_vars; see:
Vault: single encrypted variable and Inventory: splitting out host and group specific data.
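For reference, one way to produce such a vaulted variable for host_vars (a sketch, assuming Ansible 2.3+ where encrypt_string is available):
# prints an encrypted block that can be pasted into host_vars as django_secret_key
ansible-vault encrypt_string 'XXX' --name 'django_secret_key'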
I added a task to my ansible playbook to copy the key file from ansible vault to the server:
- name: copy django secret key to server
copy: content="{{ django_secret_key }}" dest=/.../http/homepage/deploy/django_secret_key.txt mode=0600
And made Django read the secret from that file:
with open(os.path.join(BASE_DIR, 'deploy', 'django_secret_key.txt')) as secret_key_file:
SECRET_KEY = secret_key_file.read().strip()
If anyone has a simpler/better solution, please post it and I will accept it as the answer.
I have Docker configured to run Postgres and Django using docker-compose.yml and it is working fine.
The trouble I am having is with Selenium not being able to connect to the Django liveserver.
Now it makes sense (to me at least) that django has to access selenium to control the browser and selenium has to access django to access the server.
I have tried using the docker 'ambassador' pattern using the following configuration for docker-compose.yml from here: https://github.com/docker/compose/issues/666
postgis:
dockerfile: ./docker/postgis/Dockerfile
build: .
container_name: postgis
django-ambassador:
container_name: django-ambassador
image: cpuguy83/docker-grand-ambassador
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
command: "-name django -name selenium"
django:
dockerfile: ./docker/Dockerfile-dev
build: .
command: python /app/project/manage.py test my-app
container_name: django
volumes:
- .:/app
ports:
- "8000:8000"
- "8081:8081"
links:
- postgis
- "django-ambassador:selenium"
environment:
- SELENIUM_HOST=http://selenium:4444/wd/hub
selenium:
container_name: selenium
image: selenium/standalone-firefox-debug
ports:
- "4444:4444"
- "5900:5900"
links:
- "django-ambassador:django"
When I check http://DOCKER-MACHINE-IP:4444/wd/hub/static/resource/hub.html
I can see that firefox starts, but all the tests fail as firefox is unable to connect to django
'Firefox can't establish a connection to the server at localhost:8081'
I also tried the solution from https://github.com/docker/compose/issues/1991, however this is not working because I can't get django to connect to postgis and selenium at the same time:
'django.db.utils.OperationalError: could not translate host name "postgis" to address: Name or service not known'
I tried using the networking feature as listed below
postgis:
dockerfile: ./docker/postgis/Dockerfile
build: .
container_name: postgis
net: appnet
django:
dockerfile: ./docker/Dockerfile-dev
build: .
command: python /app/project/manage.py test foo
container_name: django
volumes:
- .:/app
ports:
- "8000:8000"
- "8081:8081"
net: appnet
environment:
- SELENIUM_HOST=http://selenium:4444/wd/hub
selenium:
container_name: selenium
image: selenium/standalone-firefox-debug
ports:
- "4444:4444"
- "5900:5900"
net: appnet
but the result is the same
'Firefox can't establish a connection to the server at localhost:8081'
So how can I get selenium to connect to django?
I have been playing around with this for days - would really appreciate any help.
More Info
Another weird thing: when the test server is running without Docker (using my old virtualenv config etc.), if I run ./manage.py test foo I can access the server through any browser at http://localhost:8081 and get served web pages, but I can't access the test server when I run the equivalent command under Docker. This is odd because I am mapping port 8081:8081; is this related?
Note: I am using OSX and Docker v1.9.1
I eventually came up with a better solution that doesn't require hardcoding the IP address. Below is the configuration I used to run Django tests with Docker.
Docker-compose file
# docker-compose base file for everything
version: '2'
services:
postgis:
build:
context: .
dockerfile: ./docker/postgis/Dockerfile
container_name: postgis
volumes:
# If you are using boot2docker, postgres data has to live in the VM for now until #581 fixed
# for more info see here: https://github.com/boot2docker/boot2docker/issues/581
- /data/dev/docker_cookiecutter/postgres:/var/lib/postgresql/data
django:
build:
context: .
dockerfile: ./docker/django/Dockerfile
container_name: django
volumes:
- .:/app
depends_on:
- selenium
- postgis
environment:
- SITE_DOMAIN=django
- DJANGO_SETTINGS_MODULE=settings.my_dev_settings
links:
- postgis
- mailcatcher
selenium:
container_name: selenium
image: selenium/standalone-firefox-debug:2.52.0
ports:
- "4444:4444"
- "5900:5900"
Dockerfile (for Django)
ENTRYPOINT ["/docker/django/entrypoint.sh"]
In the entrypoint file:
#!/bin/bash
set -e
# Now we need to get the ip address of this container so we can supply it as an environment
# variable for django, so that selenium knows what url the test server is on.
# Use the approach below, or alternatively you could have used
# something like "$@ --liveserver=$THIS_DOCKER_CONTAINER_TEST_SERVER"
if [[ "'$*'" == *"manage.py test"* ]] # only add if 'manage.py test' in the args
then
# get the container id
THIS_CONTAINER_ID_LONG=`cat /proc/self/cgroup | grep 'docker' | sed 's/^.*\///' | tail -n1`
# take the first 12 characters - that is the format used in /etc/hosts
THIS_CONTAINER_ID_SHORT=${THIS_CONTAINER_ID_LONG:0:12}
# search /etc/hosts for the line with the ip address which will look like this:
# 172.18.0.4 8886629d38e6
THIS_DOCKER_CONTAINER_IP_LINE=`cat /etc/hosts | grep $THIS_CONTAINER_ID_SHORT`
# take the ip address from this
THIS_DOCKER_CONTAINER_IP=`(echo $THIS_DOCKER_CONTAINER_IP_LINE | grep -o '[0-9]\+[.][0-9]\+[.][0-9]\+[.][0-9]\+')`
# add the port you want on the end
# Issues here include: django changing port if in use (I think)
# and parallel tests needing multiple ports etc.
THIS_DOCKER_CONTAINER_TEST_SERVER="$THIS_DOCKER_CONTAINER_IP:8081"
echo "this docker container test server = $THIS_DOCKER_CONTAINER_TEST_SERVER"
export DJANGO_LIVE_TEST_SERVER_ADDRESS=$THIS_DOCKER_CONTAINER_TEST_SERVER
fi
eval "$#"
In your django settings file
SITE_DOMAIN = 'django'
Then to run your tests
docker-compose run django ./manage.py test
Whenever you see localhost, try first to port-forward that port (at the VM level)
See "Connect to a Service running inside a docker container from outside"
VBoxManage controlvm "default" natpf1 "tcp-port8081,tcp,,8081,,8081"
VBoxManage controlvm "default" natpf1 "udp-port8081,udp,,8081,,8081"
(Replace default with the name of your docker-machine: see docker-machine ls)
This differs from port mapping at the docker host level (which is your boot2docker-based Linux host).
The OP luke-aus confirms in the comments:
entering the IP address for the network solved the problem!
I've been struggling with this as well, and I finally found a solution that worked for me. You can try something like this:
postgis:
dockerfile: ./docker/postgis/Dockerfile
build: .
django:
dockerfile: ./docker/Dockerfile-dev
build: .
command: python /app/project/manage.py test my-app
volumes:
- .:/app
ports:
- "8000:8000"
links:
- postgis
- selenium # django can access selenium:4444, selenium can access django:8081-8100
environment:
- SELENIUM_HOST=http://selenium:4444/wd/hub
- DJANGO_LIVE_TEST_SERVER_ADDRESS=django:8081-8100 # this gives selenium the correct address
selenium:
image: selenium/standalone-firefox-debug
ports:
- "5900:5900"
I don't think you need to include port 4444 in the selenium config. That port is exposed by default, and there's no need to map it to the host machine, since the django container can access it directly via its link to the selenium container.
[Edit] I've found you don't need to explicitly expose the 8081 port of the django container either. Also, I used a range of ports for the test server, because if tests are run in parallel, you can get an "Address already in use" error, as discussed here.