Tests run in GitLab CI fail as Postgres can't be seen - Django

Tests run via GitLab CI in Docker fail because the Postgres service is not accessible.
In my dev environment, I run tests successfully with: $ docker-compose -f local.yaml run web py.test
But in GitLab, the command docker run --env-file=.env_dev $CONTAINER_TEST_IMAGE py.test -p no:sugar in the .gitlab-ci.yml file fails with:
9bfe10de3baf: Pull complete
a137c036644b: Pull complete
8ad45b31cc3c: Pull complete
Digest: sha256:0897b57e12bd2bd63bdf3d9473fb73a150dc4f20cc3440822136ca511417762b
Status: Downloaded newer image for registry.gitlab.com/myaccount/myapp:gitlab_ci
$ docker run --env-file=.env $CONTAINER_TEST_IMAGE py.test -p no:sugar
Postgres is unavailable - sleeping
Postgres is unavailable - sleeping
Postgres is unavailable - sleeping
Postgres is unavailable - sleeping
Postgres is unavailable - sleeping
Basically, it cannot see the Postgres service. The text Postgres is unavailable - sleeping comes from the entrypoint.sh file referenced in the Dockerfile.
Below are some relevant files:
gitlab-ci.yml
image: docker:latest
services:
  - docker:dind

stages:
  - build
  - test

variables:
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY

build:
  stage: build
  script:
    - docker build --pull -t $CONTAINER_TEST_IMAGE --file compose/local/django/Dockerfile .
    - docker push $CONTAINER_TEST_IMAGE

pytest:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run --env-file=.env_dev $CONTAINER_TEST_IMAGE py.test -p no:sugar
  when: on_success
Dockerfile:
# ... other configs here
ENTRYPOINT ["compose/local/django/entrypoint.sh"]
entrypoint.sh:
# ..... other configs here
export DATABASE_URL=postgres://$POSTGRES_USER:$POSTGRES_PASSWORD@postgres:5432/$POSTGRES_USER

function postgres_ready(){
python << END
import sys
import psycopg2
try:
    conn = psycopg2.connect(dbname="$POSTGRES_USER", user="$POSTGRES_USER", password="$POSTGRES_PASSWORD", host="postgres")
except psycopg2.OperationalError:
    sys.exit(-1)
sys.exit(0)
END
}

until postgres_ready; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - continuing..."

exec $cmd
The above setup and configuration is inspired by cookiecutter-django.
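For what it's worth, the reason the hostname never resolves is that docker run against the dind daemon starts the test container on the default bridge network, where no container named postgres exists. Below is a minimal sketch of one way to provide one, assuming a throwaway network name (ci-net), a stock postgres image tag, and that POSTGRES_USER/POSTGRES_PASSWORD are available as CI variables; none of these names are part of the original setup.

pytest:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    # User-defined networks give containers DNS resolution by container name
    - docker network create ci-net
    # Name the container "postgres" so the entrypoint's host="postgres" resolves
    - docker run -d --name postgres --network ci-net -e POSTGRES_USER=$POSTGRES_USER -e POSTGRES_PASSWORD=$POSTGRES_PASSWORD postgres:12
    # Run the tests on the same network
    - docker run --network ci-net --env-file=.env_dev $CONTAINER_TEST_IMAGE py.test -p no:sugar
  when: on_success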

Related

Cannot connect to the Docker daemon at tcp://docker:2375. Is the docker daemon running?

I want to push my Django project to GitLab and run a build with the pipeline. But I get this error message every time: Cannot connect to the Docker daemon at tcp://docker:2375. Is the docker daemon running?
Code
stages:
  - build

variables:
  IMAGE_TAG: $CI_REGISTRY_IMAGE/commits:$CI_COMMIT_SHA

build-docker-image:
  stage: build
  tags:
    - docker
  image: docker:19.03.12
  services:
    - docker:19.03.12-dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  only:
    - main
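This error usually means the job's Docker client cannot reach the dind service. One commonly used workaround (a sketch, not part of the question, and it assumes the runner registers jobs with privileged = true) is to point the client at the service explicitly and disable TLS via job variables:

build-docker-image:
  stage: build
  image: docker:19.03.12
  services:
    - docker:19.03.12-dind
  variables:
    # Talk to the dind service over plain TCP
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""

The TLS-enabled variant from the GitLab documentation keeps DOCKER_TLS_CERTDIR: "/certs" instead, so the client and the dind service share the generated certificates.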

Gitlab ci fails to run docker-compose for django app

I am setting up a GitLab pipeline that I want to use to deploy a Django app on AWS using Terraform.
At the moment I am just setting up the pipeline so that it validates the Terraform and runs tests (pytest) and linting.
The pipeline uses docker in docker and it looks like this:
image:
  name: hashicorp/terraform:1.0.5
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

stages:
  - Test and Lint

Test and Lint:
  image: docker:20.10.9
  services:
    - docker:20.10.9-dind
  stage: Test and Lint
  script:
    - apk add --update docker-compose
    - apk add python3
    - apk add py3-pip
    - docker-compose run --rm app sh -c "pytest && flake8"
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME =~ /^(master|production)$/ || $CI_COMMIT_BRANCH =~ /^(master|production)$/'
The pipeline fails to run the tests due to what I think is a database error, which is weird as I am using pytest to mock the Django database.
If I just run:
docker-compose run --rm app sh -c "pytest && flake8"
on the terminal of my local machine all tests pass.
Any idea how I can debug this?
p.s.
let me know if I need to add more info.
I don't think you are able to run docker in the CI directly. You can specify which image to use in each step and then run the commands. For instance:
image: "python:3.7"
before_script:
- python --version
- pip install -r requirements.txt
stages:
- Static Analysis
- Test
unit_test:
stage: Test
script:
- pytest
See, in this pipeline, I used the python:3.7 image. You can upload your docker image to some Registry and use it in the pipeline.
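If the goal is only to run the Django tests without docker-in-docker, the same idea can be extended with GitLab's built-in services, which are reachable from the job by hostname. A rough sketch under assumed names (test_db, runner, secret, and the DB_HOST variable are placeholders, not from the question, and only work if the Django settings read them):

unit_test:
  stage: Test
  image: "python:3.7"
  services:
    - postgres:13
  variables:
    POSTGRES_DB: test_db       # placeholder database name
    POSTGRES_USER: runner      # placeholder user
    POSTGRES_PASSWORD: secret  # placeholder password
    DB_HOST: postgres          # the service is reachable under this hostname
  script:
    - pip install -r requirements.txt
    - pytest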
I managed to solve it, and the tests in CI pass using:
script:
  - apk add --update docker-compose
  - docker-compose up -d --build && docker-compose run --rm app sh -c "pytest && flake8"

Postgres authentication error while using Docker Compose, Python Django and GitLab CI

I use GitLab CI to build a pipeline that builds a Docker image with my Django app. I saved some .env variables as GitLab variables. They are successfully picked up and working, but there is
psycopg2.OperationalError: FATAL: password authentication failed for user
I have checked all passwords and variables, they are correct.
.gitlab-ci.yml
image: docker:stable
services:
  - docker:18.09.7-dind

before_script:
  - apk add py-pip python3-dev libffi-dev openssl-dev gcc libc-dev make
  - pip3 install docker-compose

stages:
  - test

test:
  stage: test
  script:
    - docker build -t myapp:$CI_COMMIT_SHA .
    - docker-compose -f docker-compose.test.yml run --rm myapp ./manage.py test
    - docker-compose -f docker-compose.test.yml run --rm myapp ./manage.py check
    - docker-compose -f docker-compose.test.yml down -v
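The docker-compose.test.yml is not shown, but a frequent cause of this particular error is that the GitLab variables reach the app container while the db container was initialized with different credentials; the official postgres image only applies POSTGRES_USER/POSTGRES_PASSWORD the first time its data directory is created. A sketch of feeding the same CI variables to both services (the service names and compose layout here are assumptions, not taken from the question):

version: '3'

services:
  db:
    image: postgres:11
    environment:
      - POSTGRES_USER=$POSTGRES_USER         # same CI variables as the app
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
  myapp:
    build: .
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
      - DB_HOST=db
    depends_on:
      - db

The down -v step in the job also helps here, since removing volumes prevents stale credentials from surviving between runs.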

How to set environmental variables properly Gitlab CI/CD and Docker

I am new to Docker and CI/CD with GitLab CI/CD. I have a .env file in the root directory of my Django project which contains my environment variables, e.g. SECRET_KEY=198191891. The .env file is included in .gitignore. I have set up these variables in the GitLab settings for CI/CD. However, the environment variables set in the GitLab CI/CD settings seem to be unavailable.
Also, how should the GitLab CI/CD automation process create a user and DB to connect to and run the tests? When creating the DB and user for the DB on my local machine, I logged into the container with docker exec -it <postgres_container_name> /bin/sh and created the Postgres user and DB.
Here are my relevant files.
docker-compose.yml
version: "3"
services:
postgres:
image: postgres
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data/
web:
build: .
command: /usr/local/bin/gunicorn writer.wsgi:application -w 2 -b :8000
environment:
DEBUG: ${DEBUG}
DB_HOST: ${DB_HOST}
DB_NAME: ${DB_NAME}
DB_USER: ${DB_USER}
DB_PORT: ${DB_PORT}
DB_PASSWORD: ${DB_PASSWORD}
SENDGRID_API_KEY: ${SENDGRID_API_KEY}
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
AWS_STORAGE_BUCKET_NAME: ${AWS_STORAGE_BUCKET_NAME}
depends_on:
- postgres
- redis
expose:
- "8000"
volumes:
- .:/writer-api
redis:
image: "redis:alpine"
celery:
build: .
command: celery -A writer worker -l info
volumes:
- .:/writer-api
depends_on:
- postgres
- redis
celery-beat:
build: .
command: celery -A writer beat -l info
volumes:
- .:/writer-api
depends_on:
- postgres
- redis
nginx:
restart: always
build: ./nginx/
ports:
- "80:80"
depends_on:
- web
volumes:
pgdata:
.gitlab-ci.yml
image: tmaier/docker-compose:latest

services:
  - docker:dind

before_script:
  - docker info
  - docker-compose --version

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - echo "Building the app"
    - docker-compose build

test:
  stage: test
  variables:
  script:
    - echo "Testing"
    - docker-compose run web coverage run manage.py test

deploy-staging:
  stage: deploy
  only:
    - develop
  script:
    - echo "Deploying staging"
    - docker-compose up -d

deploy-production:
  stage: deploy
  only:
    - master
  script:
    - echo "Deploying production"
    - docker-compose up -d
Here are my settings for my variables
Here is my failed pipeline job
The SECRET_KEY variable will be available to all your CI jobs, as configured. However, I don't see any references to it in your Docker Compose file to pass it to one or more of your services. For the web service to use it, you'd map it in like the other variables you already have.
web:
  build: .
  command: /usr/local/bin/gunicorn writer.wsgi:application -w 2 -b :8000
  environment:
    SECRET_KEY: ${SECRET_KEY}
    DEBUG: ${DEBUG}
    …
As for creating the database, you should wrap up whatever you currently run interactively in the postgres container in a SQL file or shell script, and then bind-mount it into the container's initialization scripts directory under /docker-entrypoint-initdb.d. See the Initialization scripts section of the postgres image's description for more details.
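For example, continuing from the compose file above, the bind mount could look like the sketch below; ./db/init-user-db.sql is a hypothetical file holding the CREATE USER / CREATE DATABASE statements currently run by hand, and the postgres image executes it only on the first initialization of the data directory:

postgres:
  image: postgres
  ports:
    - "5432:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data/
    # Hypothetical init script, executed once when the data directory is created
    - ./db/init-user-db.sql:/docker-entrypoint-initdb.d/init-user-db.sql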
In my experience, the best way to pass environment variables to a Docker container is to create an environment file, which works for both development and production environments:
GitLab CI/CD variables
You must create an environment file in GitLab CI/CD. In your GitLab project, go to:
Settings > CI/CD > Variables
and there create a variable named ENV_FILE.
Next, in your build stage in .gitlab-ci.yml, copy the ENV_FILE into the working directory:
.gitlab-ci.yml
build:
  stage: build
  script:
    - cp $ENV_FILE .env
    - echo "Building the app"
    - docker-compose build
Your Dockerfile can stay as it is; it doesn't have to change.
Dockerfile
FROM python:3.8.6-slim
# Rest of setup goes here...
COPY .env .env
In order for Compose variable substitution to work when the user is not in the docker group, you must add the -E flag to sudo so the environment is preserved:
script:
  - sudo -E docker-compose build

Docker Cloud autotest can't find service

I am currently trying to dockerize one of my Django API projects. It uses postgres as the database. I am using Docker Cloud as a CI so that I can build, lint and run tests.
I started with the following Dockerfile:
# Start with a python 3.6 image
FROM python:3.6
ENV PYTHONUNBUFFERED 1
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD xxx
ENV DB_HOST db
RUN mkdir /code
ADD . /code/
WORKDIR /code
RUN pip install -r requirements.txt
RUN pylint **/*.py
# First tried running tests from here.
RUN python3 src/manage.py test
But this Dockerfile always fails, as Django can't connect to any database when running the unit tests; it just fails with the following error because no Postgres instance is running in this Dockerfile:
django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
Then I discovered something called "Autotest" in Docker Cloud that allows you to use a docker-compose.test.yml file to describe a stack and then run some commands with each build. This seemed like what I needed to run the tests, as it would allow me to build my Django image, reference an already existing postgres image and run the tests.
I removed the
RUN python3 src/manage.py test
from the Dockerfile and created the following docker-compose.test.yml file.
version: '3.2'

services:
  db:
    image: postgres:9.6.3
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
  sut:
    build: .
    command: python src/manage.py test
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
      - DB_HOST=db
    depends_on:
      - db
Then when I run
docker-compose -f docker-compose.test.yml build
and
docker-compose -f docker-compose.test.yml run sut
locally, the tests all run and all pass.
Then I push my changes to GitHub and Docker Cloud builds it. The build itself succeeds, but the autotest, using the docker-compose.test.yml file, fails with the following error:
django.db.utils.OperationalError: could not connect to server: Connection refused
    Is the server running on host "db" (172.18.0.2) and accepting TCP/IP connections on port 5432?
So it seems like the db service isn't being started, or is too slow to start on Docker Cloud compared to my local machine?
After Googling around a bit I found https://docs.docker.com/compose/startup-order/, which says that the containers don't really wait for each other to be 100% ready. They then recommend writing a wrapper script to wait for postgres if that is really needed.
I followed their instructions and used the wait-for-postgres.sh script.
Juicy part:
until psql -h "$host" -U "postgres" -c '\l'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done
and replaced the command in my docker-compose.test.yml from
command: python src/manage.py test
to
command: ["./wait-for-postgres.sh", "db", "python", "src/manage.py", "test"]
I then pushed to GitHub and Docker Cloud started building. Building the image works, but now the autotest just waits for Postgres forever (I waited for 10 minutes before manually shutting down the build process in Docker Cloud).
I have Googled around a fair bit today and it seems like most "Dockerize Django" tutorials don't really mention unit testing at all.
Am I running Django unit tests completely wrong using Docker?
Seems strange to me that it runs perfectly fine locally but when Docker Cloud runs it, it fails!
I seem to have fixed it by downgrading the docker-compose version in the file from 3.2 to 2.1 and using healthcheck.
The healthcheck condition gives me a syntax error in the depends_on clause, as in 3.2 you have to pass a plain array into it. No idea why this is not supported in version 3.2.
But here is my new docker-compose.test.yml that works
version: '2.1'

services:
  db:
    image: postgres:9.6.3
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
    healthcheck:
      test: ["CMD-SHELL", "psql -h 'localhost' -U 'postgres' -c '\\l'"]
      interval: 30s
      timeout: 30s
      retries: 3
  sut:
    build: .
    command: python3 src/manage.py test
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
      - DB_HOST=db
    depends_on:
      # Does not work in 3.2
      db:
        condition: service_healthy