I am currently trying to dockerize one of my Django API projects. It uses Postgres as the database. I am using Docker Cloud as a CI service so that I can build, lint, and run tests.
I started with the following Dockerfile:
# Start with a python 3.6 image
FROM python:3.6
ENV PYTHONUNBUFFERED 1
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD xxx
ENV DB_HOST db
RUN mkdir /code
ADD . /code/
WORKDIR /code
RUN pip install -r requirements.txt
RUN pylint **/*.py
# First tried running tests from here.
RUN python3 src/manage.py test
But this Dockerfile always fails: Django can't connect to any database when running the unit tests, because no Postgres instance is running during the image build, and the tests fail with the following error:
django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
Then I discovered something called "Autotest" in Docker Cloud that allows you to use a docker-compose.test.yml file to describe a stack and then run some commands with each build. This seemed like exactly what I needed to run the tests, as it would allow me to build my Django image, reference an existing Postgres image, and run the tests against it.
I removed the
RUN python3 src/manage.py test
from the Dockerfile and created the following docker-compose.test.yml file:
version: '3.2'
services:
  db:
    image: postgres:9.6.3
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
  sut:
    build: .
    command: python src/manage.py test
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
      - DB_HOST=db
    depends_on:
      - db
Then when I run
docker-compose -f docker-compose.test.yml build
and
docker-compose -f docker-compose.test.yml run sut
locally, the tests all run and all pass.
Then I push my changes to GitHub and Docker Cloud builds it. The build itself succeeds, but the autotest, using the docker-compose.test.yml file, fails with the following error:
django.db.utils.OperationalError: could not connect to server: Connection refused
    Is the server running on host "db" (172.18.0.2) and accepting
    TCP/IP connections on port 5432?
So it seems like the db service either isn't being started, or is too slow to start on Docker Cloud compared to my local machine?
After Googling around a bit I found https://docs.docker.com/compose/startup-order/, which says that the containers don't really wait for each other to be 100% ready. It recommends writing a wrapper script to wait for Postgres if that is really needed.
I followed their instructions and used the wait-for-postgres.sh script. The juicy part:
until psql -h "$host" -U "postgres" -c '\l'; do
>&2 echo "Postgres is unavailable - sleeping"
sleep 1
done
and replaced the command in my docker-compose.test.yml from
command: python src/manage.py test
to
command: ["./wait-for-postgres.sh", "db", "python", "src/manage.py",
"test"]
I then pushed to GitHub and Docker Cloud started building. Building the image works, but now the autotest just waits for Postgres forever (I waited for 10 minutes before manually shutting down the build process in Docker Cloud).
I have Googled around a fair bit today, and it seems like most "Dockerize Django" tutorials don't really mention unit testing at all.
Am I running Django unit tests completely wrong using Docker?
Seems strange to me that it runs perfectly fine locally but when Docker Cloud runs it, it fails!
I seem to have fixed it by downgrading the docker-compose file format version from 3.2 to 2.1 and using a healthcheck.
In version 3.2, the healthcheck condition gives a syntax error in the depends_on clause, because there depends_on only accepts a plain list of service names; the condition form was dropped from the 3.x file format.
But here is my new docker-compose.test.yml that works:
version: '2.1'
services:
  db:
    image: postgres:9.6.3
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
    healthcheck:
      test: ["CMD-SHELL", "psql -h 'localhost' -U 'postgres' -c '\\l'"]
      interval: 30s
      timeout: 30s
      retries: 3
  sut:
    build: .
    command: python3 src/manage.py test
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
      - DB_HOST=db
    depends_on:
      db:
        # does not work in 3.2
        condition: service_healthy
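As a side note (not part of my original fix): the Postgres image also ships with pg_isready, which is a lighter-weight probe than a full psql login, so the healthcheck could presumably be written as:
healthcheck:
  test: ["CMD-SHELL", "pg_isready -U postgres"]
  interval: 30s
  timeout: 30s
  retries: 3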
Related
My GitLab CI pipeline keeps failing and it seems I am stuck here. I'm actually still new to the CI thing, so I don't know what I'm doing wrong. Any help will be appreciated.
Below is my .gitlab-ci.yml file:
image: python:latest

services:
  - postgres:latest

variables:
  POSTGRES_DB: projectdb

# This folder is cached between builds
# http://docs.gitlab.com/ee/ci/yaml/README.html#cache
cache:
  paths:
    - ~/.cache/pip/

before_script:
  - python -V

build:
  stage: build
  script:
    - pip install -r requirements.txt
    - python manage.py migrate
  only:
    - EC-30
The last part of the build job's output says:
django.db.utils.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1
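For what it's worth, this error usually means Django is trying to connect over a local Unix socket because no database HOST is configured, while in a GitLab CI job the postgres service runs in a separate container reachable under the hostname postgres (derived from the image name). A minimal sketch of a matching DATABASES setting - the user, password handling, and engine choice here are assumptions, not taken from the original post:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'projectdb',   # matches POSTGRES_DB in .gitlab-ci.yml
        'USER': 'postgres',    # assumption: the service's default superuser
        'PASSWORD': '',        # assumption: set POSTGRES_PASSWORD in CI variables if required
        'HOST': 'postgres',    # the GitLab CI service hostname, not localhost
        'PORT': '5432',
    }
}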
I'm trying to get a Django application running on the latest version of Lightsail which supports deploying docker containers as of Nov 2020 (AWS Lightsail Container Announcement).
I've created a very small Django application to test this out. However, my container deployment continues to get stuck and fail.
Here are the only logs I'm able to see:
This is my Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
And this is my docker-compose.yml:
version: "3.9"
services:
db:
image: postgres
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
web:
build: .
image: argylehacker/app-stats:latest
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
I'm wondering a few things:
Right now I'm only uploading the web container to Lightsail. Should I also be uploading the db container?
Should I create a postgres database in Lightsail and connect to it first?
Do I need to tell Django to run the db migrations before the application starts?
Is there a way to enable more logs from the containers? Or does the lack of logs mean that the containers aren't even able to start?
Thanks for the help!
Docker
This problem stemmed from a bad understanding of Docker. I was previously trying to include image: argylehacker/app-stats:latest in my docker-compose.yml to upload the web container to DockerHub. This is the wrong way of going about things. From what I understand now, docker-compose is most helpful for orchestrating your local environment rather than creating docker images that can be run in containers.
The most important thing is to upload a container to Lightsail that can start your server. When you're using Docker, this is specified with a CMD at the end of your Dockerfile. In my case I needed to add this line to my Dockerfile:
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
So now it looks like this:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
CMD ["python", "manage.py", "runserver", "2.0.0.0:8000"]
Finally, I removed the image: argylehacker/app-stats:latest line from my docker-compose.yml file.
At this point you should be able to:
Build your container: docker build -t argylehacker/app-stats:latest .
Upload it to DockerHub: docker push argylehacker/app-stats:latest
Deploy it in AWS Lightsail, pointing to argylehacker/app-stats:latest
Troubleshooting
I got stuck on this because I couldn't see any meaningful logs in the Lightsail log terminal. This was because my container wasn't actually running anything.
In order to debug this locally, I took the following steps:
Build the image: docker build -t argylehacker/app-stats:latest .
Run the container: docker run -it --rm -p 8000:8000 argylehacker/app-stats:latest
At this point docker should be running the container and you can view the logs. This is exactly what Lightsail is going to do when it runs your container.
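(A side note from my own debugging, not specific to Lightsail: if you run the container detached with -d instead of -it, you can still follow the same output with docker logs -f <container-id>.)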
Answers to my Original Questions
A Dockerfile is very different from a docker-compose file. The purpose of docker-compose is to coordinate containers, whereas a Dockerfile defines how an image is built. All you need for Lightsail is to build the image: docker build -t <image>:<tag> .
Yes, you'll need to create a Postgres database in AWS Lightsail so that Django can connect to a database and run. You'll modify the settings.py file to include the database credentials once it is available in Lightsail.
Still tracking down the best way to run the db migrations (one common pattern is sketched after this list).
The lack of logs was because the Dockerfile wasn't starting Django
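On the migrations point: a common pattern - an assumption on my part, not something settled above - is a small entrypoint script that applies migrations before starting the server, so the container stays self-contained:
#!/bin/sh
# entrypoint.sh (hypothetical name): apply migrations, then hand off to the server process
set -e
python manage.py migrate --noinput
exec python manage.py runserver 0.0.0.0:8000
The Dockerfile would then end with something like:
COPY entrypoint.sh /code/
RUN chmod +x /code/entrypoint.sh
CMD ["/code/entrypoint.sh"]
(For anything beyond a test app you'd presumably swap runserver for gunicorn.)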
I have a docker-compose.yml defined as follows with two services (the database and the app):
version: '3'
services:
  db:
    build: .
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=(adminname)
      - POSTGRES_PASSWORD=(adminpassword)
      - CLOUDINARY_URL=(cloudinarykey)
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
The reason I have build: . in both services is that you can't run docker-compose push unless every service has a build entry. However, this means both services refer to the same Dockerfile, which builds the entire app. So after running docker-compose build and looking at the available images, I see this:
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
mellon_app   latest   XXXXXXXXXXXX   27 seconds ago   1.14GB
postgres     latest   XXXXXXXXXXXX   27 seconds ago   1.14GB
The IMAGE ID is exactly the same for both images, and the size is exactly the same too. This makes me think I've definitely done some unnecessary duplication, as they're both just built from the same Dockerfile. I don't want to take up any unnecessary space, so how do I do this properly?
This is my Dockerfile:
FROM (MY FRIENDS ACCOUNT)/django-npm:latest
RUN mkdir usr/src/mprova
WORKDIR /usr/src/mprova
COPY frontend ./frontend
COPY backend ./backend
WORKDIR /usr/src/mprova/frontend
RUN npm install
RUN npm run build
WORKDIR /usr/src/mprova/backend
ENV DJANGO_PRODUCTION=True
RUN pip3 install -r requirements.txt
EXPOSE 8000
CMD python3 manage.py collectstatic && \
python3 manage.py makemigrations && \
python3 manage.py migrate && \
gunicorn mellon.wsgi --bind 0.0.0.0:8000
What is the proper way to push the images to my Docker hub registry without this duplication?
The duplication happens because build: . combined with image: postgres tells Compose to build your Dockerfile and tag the result as postgres, so both names point at the same locally built image. The proper way is to build and push manually:
1. docker build -f {path-to-dockerfile} -t {desired-docker-image-name} .
2. docker tag {desired-docker-image-name}:latest {desired-remote-image-name}:latest (or not latest but whatever you want, like a datetime in int format)
3. docker push {desired-remote-image-name}:latest
4. And clean up: docker rmi {desired-docker-image-name}:latest {desired-remote-image-name}:latest
The whole purpose of docker-compose is to help your local development, making it easier to start several containers and combine them on a local docker-compose network.
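For reference, a sketch of the compose file without the duplication - this assumes the db service really only needs the stock postgres image, with the placeholder values kept from the question:
version: '3'
services:
  db:
    image: postgres            # pulled from Docker Hub, nothing to build
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=(adminname)
      - POSTGRES_PASSWORD=(adminpassword)
      - CLOUDINARY_URL=(cloudinarykey)
  app:
    build: .                   # only the app image is built from the local Dockerfile
    ports:
      - "8000:8000"
    depends_on:
      - db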
I had an existing Django REST project with an existing MySQL database (named libraries) which I wanted to dockerize.
My Dockerfile:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY . /code/
RUN pip install -r requirements.txt
My docker-compose:
version: '3'
services:
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: libraries
      MYSQL_USER: root
      MYSQL_PASSWORD: root
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
Steps:
I ran docker-compose build - the build was successful.
I ran docker-compose up - I had to run this command twice, and then I could access my API by hitting localhost:8000.
However, whenever I hit any API endpoint I get the error Table "XYZ" does not exist, even though all the tables are already present in my original database.
Why does this happen?
First of all, it's strange that you had to run docker-compose up twice. I recommend running docker logs after the first run to see what goes wrong, then starting another question if you need help.
Regarding your main question, keep in mind that docker containers are stateless. That means unless you add persistent volume configuration, you get a fresh one every time you start a new container.
Based on your compose file, there are two containers: a "web" one and a "db" one. A fresh "db" container just contains an empty MySQL instance with the db name, db user, and db password settings - there's no data in it. You have two options:
Run the migrations from your "web" container to set up the db schema in your "db" container.
If you have data in your local/dev setup and want to use it, consider backing that data up from your local setting and then restoring it into your "db" container. In case you don't know how, consult the MySQL documentation on backing up data, and the "Initializing a fresh instance" section of the MySQL Docker Hub page to see how to start a new "db" container with some data.
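As a sketch of the persistent-volume route mentioned above (the volume name db_data is my own placeholder, not from the question):
services:
  db:
    image: mysql
    volumes:
      - db_data:/var/lib/mysql   # MySQL's data directory now survives container recreation
volumes:
  db_data: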
First you need to run the Django migrations:
$ docker exec -it [container] bash
# python manage.py migrate
I've imported into PyCharm 5.1 Beta 2 a tutorial project, which works fine when I run it from the command line with docker-compose up: https://docs.docker.com/compose/django/
Trying to set a remote Python interpreter is causing problems.
I've been trying to work out what the service name field is expecting:
remote interpreter - docker compose window - http://i.stack.imgur.com/Vah7P.png
My docker-compose.yml file is:
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
When I try to enter web or db or anything at all that comes to mind, I get an error message: Service definition is expected to be a map
So what am I supposed to enter there?
EDIT1 (new version: PyCharm 2016.1 release)
I have now updated to the latest version and am still having issues: IOError: [Errno 21] Is a directory
Sorry for not tagging all links - have a new user link limit
The only viable way we found to work around this (PyCharm 2016.1) is setting up an SSH remote interpreter.
Add this to the main service Dockerfile:
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Then log into the docker container like this (when prompted, the password from the snippet above is 'screencast'):
$ ssh root@192.168.99.100 -p 2000
Note: we are aware the IP and port might change depending on your docker and compose configs.
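For that ssh command to work, the compose file also has to publish the container's SSH port on the host - presumably something along the lines of:
ports:
  - "2000:22"   # host port 2000 -> sshd inside the container on 22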
For PyCharm just set up a remote SSH Interpreter and you are done!
https://www.jetbrains.com/help/pycharm/2016.1/configuring-remote-interpreters-via-ssh.html