Django webserver on Travis for e2e testing - django

A quick question for the Django / Travis pros.
I would like to run some e2e tests on Travis for my Django/Angular app and connect to Sauce Labs through a Sauce Connect tunnel.
# .travis.yml
addons:
  sauce_connect: true
  postgresql: "9.3"
branches:
  only:
    - master
    - integration_env
language: python
python:
  - '2.7.9'
cache:
  directories:
    - $HOME/virtualenv/python2.7.9/lib/python2.7/site-packages
    - node_modules
    - bower_components
install:
  - npm install
  - pip install -r requirements.txt
  - pip install coverage -U --force-reinstall
  - pip install coveralls -U --force-reinstall
  - node_modules/protractor/bin/webdriver-manager update
before_script:
  - psql -c 'create database travisci;' -U postgres
  - pg_restore --no-acl --no-owner -h localhost -U postgres -d travisci demoDB.dump
  - python manage.py runserver &
script:
  # - grunt karma:sauceTravis
  - grunt protractor:sauceLabs
  - coverage run --source='.' manage.py test
after_success:
  - grunt coveralls:run
  - coveralls --merge=coverage/lcov/coveralls.json
I'm trying to run a Django webserver in my Travis CI environment; I start it in a before_script step after creating my database.
When I then ping localhost:8000, however, I get a "bad gateway 301" response that says something about dirty SSL.
If anyone has any advice about how to go about debugging this, I would be grateful.
Thanks
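A hedged note for anyone landing here (not from the original post): python manage.py runserver & returns immediately, so the test run can race the server start-up, and responses relayed through the Sauce Connect tunnel can mask what Django actually returned. A minimal sketch of a more defensive before_script, assuming the default port 8000:

# Hedged sketch, not the poster's config: bind explicitly and poll the port
# before handing control to the tests.
before_script:
  - psql -c 'create database travisci;' -U postgres
  - pg_restore --no-acl --no-owner -h localhost -U postgres -d travisci demoDB.dump
  - python manage.py runserver 0.0.0.0:8000 &
  # Wait up to ~30s until Django accepts connections (any HTTP status counts).
  - for i in $(seq 1 30); do curl -s -o /dev/null http://localhost:8000/ && break; sleep 1; done

Curling localhost directly on the Travis VM also tells you whether the 301 comes from Django itself (e.g. a SECURE_SSL_REDIRECT setting) or from the tunnel.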

Related

Gitlab ci fails to run docker-compose for django app

I am setting up a GitLab pipeline that I want to use to deploy a Django app on AWS using Terraform.
At the moment I am just setting up the pipeline so that it validates the Terraform and runs tests (pytest) and linting.
The pipeline uses Docker-in-Docker and looks like this:
image:
  name: hashicorp/terraform:1.0.5
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

stages:
  - Test and Lint

Test and Lint:
  image: docker:20.10.9
  services:
    - docker:20.10.9-dind
  stage: Test and Lint
  script:
    - apk add --update docker-compose
    - apk add python3
    - apk add py3-pip
    - docker-compose run --rm app sh -c "pytest && flake8"
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME =~ /^(master|production)$/ || $CI_COMMIT_BRANCH =~ /^(master|production)$/'
The pipeline fails to run the tests due to what I think is a database error, which is weird, as I am using pytest to mock the Django database.
If I just run:
docker-compose run --rm app sh -c "pytest && flake8"
in the terminal of my local machine, all the tests pass.
Any idea how I can debug this?
P.S.
Let me know if I need to add more info.
I don't think you can run Docker in the CI directly. You can specify which image to use in each step and then run the commands. For instance:
image: "python:3.7"
before_script:
- python --version
- pip install -r requirements.txt
stages:
- Static Analysis
- Test
unit_test:
stage: Test
script:
- pytest
See, in this pipeline, I used the python:3.7 image. You can upload your own Docker image to some registry and use it in the pipeline, as sketched below.
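For instance, a hedged sketch (not from the original answer) of a job that builds the image once and pushes it to GitLab's built-in registry using the predefined CI_REGISTRY* variables, assuming a build stage is declared in stages:

# Hedged sketch: build and push once, then later jobs can reference the
# image via the image: key instead of rebuilding it every run.
build_image:
  stage: build
  image: docker:20.10.9
  services:
    - docker:20.10.9-dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest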
I managed to solve it, and the tests in CI now pass, using:
script:
  - apk add --update docker-compose
  - docker-compose up -d --build && docker-compose run --rm app sh -c "pytest && flake8"
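A plausible reading (hedged; the thread doesn't say) is that up -d --build gave the database container time to start before the tests ran, whereas a bare docker-compose run raced it. A healthcheck makes that ordering explicit; a sketch, assuming a Postgres service named db and a Compose version that supports depends_on conditions:

# Hedged sketch, not from the thread: gate the app service on Postgres
# actually accepting connections before tests run against it.
services:
  db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_PASSWORD=postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 2s
      retries: 15
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy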

Setting up continuous integration with Django 3, Postgres and GitLab CI

I'm setting up continuous integration with Django 3 and GitLab CI.
I have done this before with Django 2, but now I'm struggling to get things working with Django 3.
This warning is shown, and I'm wondering if it's the reason:
/usr/local/lib/python3.8/site-packages/django/db/backends/postgresql/base.py:304: RuntimeWarning: Normally Django will use a connection to the 'postgres' database to avoid running initialization queries against the production database when it's not needed (for example, when running tests). Django was unable to create a connection to the 'postgres' database and will use the first PostgreSQL database instead.
And this error at the end:
django.db.utils.OperationalError: could not translate host name "postgres" to address: Name or service not known
Here is my config:
image: python:3.8

services:
  - postgres:10.17

variables:
  POSTGRES_DB: db_test
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: ""
  POSTGRES_HOST: postgres
  POSTGRES_PORT: 5432

stages:
  - tests

cache:
  paths:
    - ~/.cache/pip/

before_script:
  - python -V
  - apt-get update && apt install -y -qq python3-pip
  - pip install -r requirements.txt

test:
  stage: tests
  variables:
    DATABASE_URL: "postgres://postgres:postgres@postgres:5432/$POSTGRES_DB"
  script:
    - coverage run manage.py test
    - coverage report
  coverage: "/TOTAL.+ ([0-9]{1,3}%)/"
I will be grateful if somebody can help me fix this.
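No answer is recorded here, but one hedged guess: since early 2020 the official postgres image refuses to start when POSTGRES_PASSWORD is empty, and a service container that dies on start-up never gets a DNS entry, which would produce exactly the could not translate host name "postgres" error. Something along these lines might help:

# Hedged sketch, not from the original post: give the service a non-empty
# password (or explicitly opt into trust auth) so postgres:10.17 can start.
variables:
  POSTGRES_DB: db_test
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  # POSTGRES_HOST_AUTH_METHOD: trust   # alternative if no password is wanted
  POSTGRES_HOST: postgres
  POSTGRES_PORT: 5432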

Postgres authentication error while using Docker Compose, Python Django and GitLab CI

I use GitLab CI to run a pipeline that builds a Docker image of my Django app. I saved some .env variables as GitLab CI variables. They are successfully picked up and working, but there is:
psycopg2.OperationalError: FATAL: password authentication failed for user
I have checked all the passwords and variables; they are correct.
.gitlab-ci.yml
image: docker:stable

services:
  - docker:18.09.7-dind

before_script:
  - apk add py-pip python3-dev libffi-dev openssl-dev gcc libc-dev make
  - pip3 install docker-compose

stages:
  - test

test:
  stage: test
  script:
    - docker build -t myapp:$CI_COMMIT_SHA .
    - docker-compose -f docker-compose.test.yml run --rm myapp ./manage.py test
    - docker-compose -f docker-compose.test.yml run --rm myapp ./manage.py check
    - docker-compose -f docker-compose.test.yml down -v
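No accepted answer appears here, but a hedged guess: variables stored in GitLab reach the job shell (and docker-compose itself), yet they only reach the containers that docker-compose.test.yml starts if that file forwards them. A sketch, assuming a service named myapp as in the script above:

# Hedged sketch, not from the original post: forward the CI variables into
# the container; anything not forwarded falls back to image/compose defaults.
services:
  myapp:
    build: .
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}

The down -v step also hints at named volumes; locally, a Postgres data volume initialized with an older password will keep failing authentication until the volume is removed.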

Coveralls is not being submitted on a Django app with Docker

I'm working on a Django project using Docker. I have configured Travis CI and I want to submit test coverage to Coveralls. However, it is not working as expected. Any help will be highly appreciated.
Here is the error I'm getting:
Submitting coverage to coveralls.io...
No source for /mwibutsa/mwibutsa/settings.py
No source for /mwibutsa/mwibutsa/urls.py
No source for /mwibutsa/user/admin.py
No source for /mwibutsa/user/migrations/0001_initial.py
No source for /mwibutsa/user/models.py
No source for /mwibutsa/user/tests/test_user_api.py
No source for /mwibutsa/user/tests/test_user_model.py
Could not submit coverage: 422 Client Error: Unprocessable Entity for url: https://coveralls.io/api/v1/jobs
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/coveralls/api.py", line 177, in wear
response.raise_for_status()
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 422 Client Error: Unprocessable Entity for url: https://coveralls.io/api/v1/jobs
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/coveralls/cli.py", line 77, in main
result = coveralls.wear()
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/coveralls/api.py", line 180, in wear
raise CoverallsException('Could not submit coverage: {}'.format(e))
coveralls.exception.CoverallsException: Could not submit coverage: 422 Client Error: Unprocessable Entity for url: https://coveralls.io/api/v1/jobs
Here is my .travis.yml file:
language: python
python:
  - "3.7"
services: docker
before_script: pip install docker-compose
script:
  - docker-compose run web sh -c "coverage run manage.py test && flake8 && coverage report"
after_success:
  - coveralls
My Dockerfile
FROM python:3.7-alpine
LABEL description="Mwibutsa Floribert"
ENV PYTHONUNBUFFERED 1
RUN mkdir /mwibutsa
WORKDIR /mwibutsa
COPY requirements.txt /mwibutsa/
RUN apk add --update --no-cache postgresql-client jpeg-dev
RUN apk add --update --no-cache --virtual .tmp-build-deps gcc libc-dev linux-headers postgresql-dev musl-dev zlib zlib-dev
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
RUN apk del .tmp-build-deps
COPY . /mwibutsa/
My docker-compose.yml
version: '3.7'

services:
  web:
    build: .
    command: >
      sh -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    environment:
      - DB_HOST=db
      - DB_NAME=postgres
      - DB_PASSWORD=password
      - DB_USER=postgres
      - DB_PORT=5432
    volumes:
      - .:/mwibutsa
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:12-alpine
    environment:
      - POSTGRES_NAME=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=postgres
      - POSTGRES_PORT=5432
To understand why the coverage is not being submitted, you have to understand how Docker containers operate.
A container is created to mimic a separate and independent unit, which means that commands run in the global (host) context are distinct from those run inside the container's context.
In your case, you are running the tests and generating the coverage report inside the container, then trying to submit the report to Coveralls from the host.
Since the coverage file is inside the container, the coveralls command on the host cannot find it, so nothing gets submitted.
You may refer to the answer provided here to solve this:
Coveralls: Error- No source for in my application using Docker container
Or check out the documentation provided by Travis on how to submit to Coveralls from Travis using Docker:
https://docs.travis-ci.com/user/coveralls/#using-coveralls-with-docker-builds
You have to run coveralls inside the container so it can send the data file generated by coverage to coveralls.io. You have to run coverage again in the after_success command so that the .coverage data file is present in the container when coveralls runs. You also have to pass the Coveralls repo token in as an environment variable set in the Travis repository settings: https://docs.travis-ci.com/user/environment-variables#defining-variables-in-repository-settings
.travis.yml
language: python
python:
  - "3.7"
services: docker
before_script: pip install docker-compose
script:
  - docker-compose run web sh -c "coverage run manage.py test && flake8 && coverage report"
after_success:
  - docker-compose run web sh -c "coverage run manage.py test && TRAVIS_JOB_ID=$TRAVIS_JOB_ID TRAVIS_BRANCH=$TRAVIS_BRANCH COVERALLS_REPO_TOKEN=$COVERALLS_REPO_TOKEN coveralls"
You need to make sure your git repo files are copied into the container for coveralls to accurately report the branch and have the badge work. You might also need to install git in the container.
Dockerfile (line 10):
RUN apk add --update --no-cache postgresql-client jpeg-dev git
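A hedged side note (not part of the original answer): docker-compose run also accepts -e flags, and a bare -e KEY passes the value through from the job's environment, which avoids splicing the variables into the sh -c string:

# Hedged variant of the after_success step above.
after_success:
  - docker-compose run -e TRAVIS_JOB_ID -e TRAVIS_BRANCH -e COVERALLS_REPO_TOKEN web sh -c "coverage run manage.py test && coveralls"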

Can Circle CI use docker-compose to build the environment

I currently have a few services, such as db and web, in a Django application, and docker-compose is used to string them together.
The web service looks like this:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
The Dockerfile in web is based on python:2.7-onbuild, so it uses the requirements.txt file to install all the necessary dependencies.
I am now using Circle CI for integration and have a circle.yml file like this:
....
dependencies:
  pre:
    - pip install -r web/requirements.txt
....
Is there any way I could avoid the dependencies clause in the circle.yml file?
Instead, I would like Circle CI to use docker-compose.yml, if that makes sense.
Yes, using docker-compose in the circle.yml file can be a nice way to run tests, because it can mirror one's dev environment very closely. This is an extract from the working tests on an AngularJS project:
---
machine:
  services:
    - docker

dependencies:
  override:
    - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
    - sudo pip install --upgrade docker-compose==1.3.0

test:
  pre:
    - docker-compose pull
    - docker-compose up -d
    - docker-compose run npm install
    - docker-compose run bower install --allow-root --config.interactive=false
  override:
    # grunt runs our karma tests
    - docker-compose run grunt deploy-build compile
Notes:
The docker login is only needed if you have private images on Docker Hub.
When we wrote our circle.yml file, only docker-compose 1.3 was available. This has probably been updated since.
I haven't tried this myself, but based on the info at https://circleci.com/docs/docker I guess it may work:
# circle.yml
machine:
  services:
    - docker

dependencies:
  pre:
    - pip install docker-compose

test:
  pre:
    - docker-compose up -d
Unfortunately, CircleCI installs an old version of Docker by default (1.9.1), which is not compatible with the latest version of docker-compose. To get the more recent Docker 1.10.0, you should use:
machine:
  pre:
    - curl -sSL https://s3.amazonaws.com/circle-downloads/install-circleci-docker.sh | bash -s -- 1.10.0
    - pip install docker-compose
  services:
    - docker

test:
  pre:
    - docker-compose up -d
Read more: https://discuss.circleci.com/t/docker-1-10-0-is-available-beta/2100
UPD: CircleCI version 2 has native Docker support.
Read more about how to switch to the new CircleCI version here: https://circleci.com/docs/2.0/migrating-from-1-2/
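For reference, a hedged sketch (not from the original answers) of the same idea on CircleCI 2.0, where the machine executor ships with Docker and docker-compose preinstalled:

# .circleci/config.yml -- hedged sketch, assuming a docker-compose.yml with a
# "web" service at the repository root.
version: 2
jobs:
  build:
    machine: true
    steps:
      - checkout
      - run: docker-compose up -d
      - run: docker-compose run web sh -c "python manage.py test"
workflows:
  version: 2
  build_and_test:
    jobs:
      - build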