Setting up continuous integration with Django 3, Postgres and GitLab CI

I'm setting up continuous integration with Django 3 and GitLab CI.
I had done it previously with Django 2, but now I'm struggling to get things working with Django 3.
This warning is shown and I'm wondering if it's the cause:
/usr/local/lib/python3.8/site-packages/django/db/backends/postgresql/base.py:304: RuntimeWarning: Normally Django will use a connection to the 'postgres' database to avoid running initialization queries against the production database when it's not needed (for example, when running tests). Django was unable to create a connection to the 'postgres' database and will use the first PostgreSQL database instead.
And this error at the end :
django.db.utils.OperationalError: could not translate host name "postgres" to address: Name or service not known
Here is my config:
image: python:3.8
services:
  - postgres:10.17
variables:
  POSTGRES_DB: db_test
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: ""
  POSTGRES_HOST: postgres
  POSTGRES_PORT: 5432
stages:
  - tests
cache:
  paths:
    - ~/.cache/pip/
before_script:
  - python -V
  - apt-get update && apt install -y -qq python3-pip
  - pip install -r requirements.txt
test:
  stage: tests
  variables:
    DATABASE_URL: "postgres://postgres:postgres@postgres:5432/$POSTGRES_DB"
  script:
    - coverage run manage.py test
    - coverage report
  coverage: "/TOTAL.+ ([0-9]{1,3}%)/"
I will be grateful if somebody can help me fix this.
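For reference, a minimal sketch of a test job that has worked in similar setups (an assumption rather than a verified fix: it names the service explicitly with an alias, gives Postgres a non-empty password since newer official images refuse to start with an empty one unless POSTGRES_HOST_AUTH_METHOD=trust is set, and builds DATABASE_URL from the same values):

# Sketch only; mirrors the config above, assuming settings.py reads DATABASE_URL
image: python:3.8
services:
  - name: postgres:10.17
    alias: postgres            # service reachable as "postgres"
variables:
  POSTGRES_DB: db_test
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres  # non-empty so the service container actually starts
  DATABASE_URL: "postgres://postgres:postgres@postgres:5432/db_test"
test:
  script:
    - pip install -r requirements.txt
    - coverage run manage.py test
    - coverage report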

Related

GitLab CI pipeline failure

My GitLab CI pipeline keeps failing, and I seem to be stuck. I'm actually still new to CI, so I don't know what I'm doing wrong. Any help will be appreciated.
Below is my .gitlab-ci.yml file:
image: python:latest
services:
  - postgres:latest
variables:
  POSTGRES_DB: projectdb
# This folder is cached between builds
# http://docs.gitlab.com/ee/ci/yaml/README.html#cache
cache:
  paths:
    - ~/.cache/pip/
before_script:
  - python -V
build:
  stage: build
  script:
    - pip install -r requirements.txt
    - python manage.py migrate
  only:
    - EC-30
The last part of the build job's output says:
django.db.utils.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1
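For context, that error means Django is falling back to a local Unix-socket connection, i.e. no database HOST is configured, so it never tries the postgres service container over TCP. A minimal sketch of the usual shape of the fix (assuming settings.py reads these variables from the environment; the POSTGRES_HOST/POSTGRES_PORT names here are illustrative):

# Sketch only: point Django at the postgres service over TCP
image: python:latest
services:
  - postgres:latest
variables:
  POSTGRES_DB: projectdb
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  POSTGRES_HOST: postgres     # hostname of the GitLab CI service container
  POSTGRES_PORT: "5432"
build:
  stage: build
  script:
    - pip install -r requirements.txt
    - python manage.py migrate
  only:
    - EC-30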

Is there an easy way for someone completely naive to Docker and web hosting to set up connectivity to their website?

I am very new to the whole development thing, so I apologize for the simpleness of my question. I have been scouring the internet, reading articles, and watching videos trying to get this Docker/Django combo I have going live. Everything I've looked at seems to be intended for people who already somewhat know what they are doing. It's just not clicking for me reading the documentation and these articles. The leap to actually launching this kind of software is what's getting me. I tried AWS, Docker Hub, and a few others.
My goal is just to get this little Django app running so that I can connect to it from work in the web browser without having my laptop there. I want my coworkers to be able to use it as well; they are a bit technically illiterate, and it really simplifies one thing they have to do semi-daily.
I hope this is enough pertinent info, but let me know if not. I have two separate docker images. One for MariaDB and the other for Django.
My docker-compose.yml
version: "3.3"
# container networks to set up
networks:
django_db_net:
external: false
# the containers to spin up
services:
django:
build: ./docker/django
restart: 'unless-stopped'
depends_on:
- db
networks:
- django_db_net
user: "${HOST_USER_ID}:${HOST_GROUP_ID}"
volumes:
- ./src:/src
working_dir: /src/chopper
command: ["/src/wait-for-it.sh", "db:3306", "--", "python", "manage.py", "runserver", "0.0.0.0:8000"]
# command: python manage.py runserver 0.0.0.0:8000
ports:
- "${DJANGO_PORT}:8000"
db:
image: mariadb:latest
user: "${HOST_USER_ID}:${HOST_GROUP_ID}"
volumes:
- ./data:/var/lib/mysql
restart: always
environment:
- MYSQL_ROOT_PASSWORD=this_is_a_bad_password
- MYSQL_USER=django
- MYSQL_PASSWORD=django
- MYSQL_DATABASE=chopper
networks:
- django_db_net
My Dockerfile
FROM python:latest
# update pip
ENV PYTHONUNBUFFERED=1
RUN pip3 install --upgrade pip && \
    pip3 install django mysqlclient relatorio
ENV MYSQL_MAJOR 8.0
RUN apt-key adv --keyserver hkp://pool.sks-keyservers.net:80 --recv-keys 8C718D3B5072E1F5 && \
    echo "deb http://repo.mysql.com/apt/debian/ buster mysql-${MYSQL_MAJOR}" > /etc/apt/sources.list.d/mysql.list && apt-get update && \
    apt-get -y --no-install-recommends install default-libmysqlclient-dev
WORKDIR /src
What I understand from your question is that you want to deploy a web service, built with Python/Django and Docker, that can be accessed publicly.
My recommendation is to check out Heroku.
https://www.heroku.com/continuous-integration
specifically
https://devcenter.heroku.com/categories/deploying-with-docker
And maybe integrate it with GitHub or GitLab.
As far as I remember, Heroku lets you host your service for free (with some limitations, of course), and it would be a good fit for what you're asking.
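If you do go the Docker route on Heroku, deploys are driven by a small heroku.yml manifest next to your code; a rough sketch only (the Dockerfile path and the chopper.wsgi module are guesses based on the compose file above, and gunicorn would need to be installed in the image):

# heroku.yml - sketch of a container deploy
build:
  docker:
    web: docker/django/Dockerfile
run:
  web: gunicorn chopper.wsgi --bind 0.0.0.0:$PORT

After that it is roughly `heroku stack:set container` and a git push, as described in the second link above.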

Docker Cloud autotest can't find service

I am currently trying to dockerize one of my Django API projects. It uses Postgres as the database. I am using Docker Cloud as a CI service so that I can build, lint and run tests.
I started with the following Dockerfile:
# Start with a python 3.6 image
FROM python:3.6
ENV PYTHONUNBUFFERED 1
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD xxx
ENV DB_HOST db
RUN mkdir /code
ADD . /code/
WORKDIR /code
RUN pip install -r requirements.txt
RUN pylint **/*.py
# First tried running tests from here.
RUN python3 src/manage.py test
But this Dockerfile always fails: no Postgres instance is running during the build, so Django can't connect to any database when running the unit tests, and the build fails with the following error:
django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
Then I discovered something called "Autotest" in Docker Cloud that allows you to use a docker-compose.test.yml file to describe a stack and then run some commands with each build. This seemed like what I needed to run the tests, as it would allow me to build my Django image, reference an already existing Postgres image and run the tests.
I removed the
RUN python3 src/manage.py test
from the Dockerfile and created the following docker-compose.test.yml file.
version: '3.2'
services:
  db:
    image: postgres:9.6.3
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
  sut:
    build: .
    command: python src/manage.py test
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
      - DB_HOST=db
    depends_on:
      - db
Then when I run
docker-compose -f docker-compose.test.yml build
and
docker-compose -f docker-compose.test.yml run sut
locally, the tests all run and all pass.
Then I push my changes to GitHub and Docker Cloud builds it. The build itself succeeds, but the autotest, using the docker-compose.test.yml file, fails with the following error:
django.db.utils.OperationalError: could not connect to server:
Connection refused
Is the server running on host "db" (172.18.0.2) and accepting
TCP/IP connections on port 5432?
So it seems like the db service isn't being started, or is too slow to start on Docker Cloud compared to my local machine?
After Googling around a bit I found https://docs.docker.com/compose/startup-order/, which says that the containers don't really wait for each other to be 100% ready. It recommends writing a wrapper script to wait for Postgres if that is really needed.
I followed their instructions and used the wait-for-postgres.sh script.
Juicy part:
until psql -h "$host" -U "postgres" -c '\l'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done
and replaced the command in my docker-compose.test.yml from
command: python src/manage.py test
to
command: ["./wait-for-postgres.sh", "db", "python", "src/manage.py",
"test"]
I then pushed to GitHub and Docker Cloud started building. Building the image works, but now the autotest just waits for Postgres forever (I waited for 10 minutes before manually shutting down the build process in Docker Cloud).
I have Googled around a fair bit today, and it seems like most "Dockerize Django" tutorials don't really mention unit testing at all.
Am I running Django unit tests completely wrong using Docker?
It seems strange to me that it runs perfectly fine locally, but fails when Docker Cloud runs it!
I seem to have fixed it by downgrading the docker-compose file version from 3.2 to 2.1 and using a healthcheck.
The healthcheck option gives me a syntax error in the depends_on clause in 3.2, as you have to pass an array into it there. No idea why this is not supported in version 3.2.
But here is my new docker-compose.test.yml that works
version: '2.1'
services:
  db:
    image: postgres:9.6.3
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
    healthcheck:
      test: ["CMD-SHELL", "psql -h 'localhost' -U 'postgres' -c '\\l'"]
      interval: 30s
      timeout: 30s
      retries: 3
  sut:
    build: .
    command: python3 src/manage.py test
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
      - DB_HOST=db
    depends_on:
      # Does not work in 3.2
      db:
        condition: service_healthy
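For what it's worth, the long-form depends_on with condition: service_healthy later came back as part of the Compose Specification (implemented by docker compose v2), so on a newer engine the same wait can be expressed without downgrading the file version. A sketch, assuming such a version is available, with the service names mirroring the file above:

# Sketch only; requires a Compose implementation that supports long-form depends_on
services:
  db:
    image: postgres:9.6.3
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 10
  sut:
    build: .
    command: python3 src/manage.py test
    depends_on:
      db:
        condition: service_healthy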

How to set up Travis CI for a Django project hosted on Heroku?

I'm trying to set up Travis CI on my Django project.
I'm using Heroku, where a classic pattern is to use an env var to get the Postgres database URL:
settings.py
DEBUG = (os.environ['DJ_DEBUG'] == 'True')
import dj_database_url
DATABASES = {'default': dj_database_url.config(conn_max_age=500)}
Example of a .env file for my local env:
DJ_DEBUG=True
DATABASE_URL=postgres://root:captainroot@127.0.0.1:5432/captaincook
Now, here is my .travis.yml config file, which tries to use the locally created DB:
language: python
python:
  - 3.5
addons:
  - postgresql: "9.5"
before_install:
  - export DJ_DEBUG=False
  - export DABATASE_URL=postgres://postgres@localhost/travisdb
install:
  - pip install -r requirements.txt
before_script:
  - psql -c "CREATE DATABASE travisdb;" -U postgres
  - python captaincook/manage.py migrate --noinput
env:
  - DJANGO=1.9.10
script: python captaincook/manage.py test --keepdb
The project works everywhere except when deployed on Travis, where I get this Django error:
django.core.exceptions.ImproperlyConfigured: settings.DATABASES is improperly configured. Please supply the ENGINE value. Check settings documentation for more details.
Any idea? Thanks.
You have a typo: DABATASE_URL instead of DATABASE_URL.
But I suspect that rather than explicitly using export in before_install, you should use the env key:
env:
  - DJ_DEBUG=False
  - DATABASE_URL=postgres://postgres@localhost/travisdb
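One caveat worth double-checking against the Travis docs: each entry in a top-level env list normally becomes its own job in the build matrix, so two separate items would produce two jobs that each see only one of the variables. If both values should apply to the single test job, env.global is the usual way to write it; a sketch:

env:
  global:
    - DJ_DEBUG=False
    - DATABASE_URL=postgres://postgres@localhost/travisdb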

Django webserver on Travis for e2e testing

A quick question for some Django/Travis pros.
I would like to run some e2e tests on Travis for my Django/Angular app, and connect to Sauce Labs through a Sauce Connect tunnel.
# .travis.yml
addons:
  sauce_connect: true
  postgresql: "9.3"
branches:
  only:
    - master
    - integration_env
language: python
python:
  - '2.7.9'
cache:
  directories:
    - $HOME/virtualenv/python2.7.9/lib/python2.7/site-packages
    - node_modules
    - bower_components
install:
  - npm install
  - pip install -r requirements.txt
  - pip install coverage -U --force-reinstall
  - pip install coveralls -U --force-reinstall
  - node_modules/protractor/bin/webdriver-manager update
before_script:
  - psql -c 'create database travisci;' -U postgres
  - pg_restore --no-acl --no-owner -h localhost -U postgres -d travisci demoDB.dump
  - python manage.py runserver &
script:
  # - grunt karma:sauceTravis
  - grunt protractor:sauceLabs
  - coverage run --source='.' manage.py test
after_success:
  - grunt coveralls:run
  - coveralls --merge=coverage/lcov/coveralls.json
I try to run a Django webserver in my Travis CI environment. I do this in before_script, after I create my database.
When I try to ping localhost:8000, however, I get a "bad gateway 301" response. It says something about dirty SSL?
If anyone has any advice about how to go about debugging this, I would be grateful.
Thanks
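For reference, a common first debugging step here (an assumption, not a confirmed fix for the 301/SSL issue) is to make sure the dev server is actually up and answering plain HTTP before Protractor runs, and to capture its log, for example:

# Sketch only: start the dev server, poll it over plain HTTP, and dump its log
before_script:
  - psql -c 'create database travisci;' -U postgres
  - pg_restore --no-acl --no-owner -h localhost -U postgres -d travisci demoDB.dump
  - python manage.py runserver 0.0.0.0:8000 > runserver.log 2>&1 &
  - for i in $(seq 1 30); do curl -sf http://localhost:8000/ > /dev/null && break; sleep 1; done
  - cat runserver.log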