Coveralls is not being submitted on a Django app with Docker

I'm working on a Django project using Docker. I have configured Travis CI and I want to submit test coverage to Coveralls. However, it is not working as expected. Any help will be highly appreciated.
Here is the error I'm getting:
Submitting coverage to coveralls.io...
No source for /mwibutsa/mwibutsa/settings.py
No source for /mwibutsa/mwibutsa/urls.py
No source for /mwibutsa/user/admin.py
No source for /mwibutsa/user/migrations/0001_initial.py
No source for /mwibutsa/user/models.py
No source for /mwibutsa/user/tests/test_user_api.py
No source for /mwibutsa/user/tests/test_user_model.py
Could not submit coverage: 422 Client Error: Unprocessable Entity for url: https://coveralls.io/api/v1/jobs
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/coveralls/api.py", line 177, in wear
response.raise_for_status()
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 422 Client Error: Unprocessable Entity for url: https://coveralls.io/api/v1/jobs
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/coveralls/cli.py", line 77, in main
result = coveralls.wear()
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/coveralls/api.py", line 180, in wear
raise CoverallsException('Could not submit coverage: {}'.format(e))
coveralls.exception.CoverallsException: Could not submit coverage: 422 Client Error: Unprocessable Entity for url: https://coveralls.io/api/v1/jobs
**Here is my .travis.yml file**
language: python
python:
  - "3.7"
services: docker
before_script: pip install docker-compose
script:
  - docker-compose run web sh -c "coverage run manage.py test && flake8 && coverage report"
after_success:
  - coveralls
My Dockerfile
FROM python:3.7-alpine
LABEL description="Mwibutsa Floribert"
ENV PYTHONUNBUFFERED 1
RUN mkdir /mwibutsa
WORKDIR /mwibutsa
COPY requirements.txt /mwibutsa/
RUN apk add --update --no-cache postgresql-client jpeg-dev
RUN apk add --update --no-cache --virtual .tmp-build-deps gcc libc-dev linux-headers postgresql-dev musl-dev zlib zlib-dev
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
RUN apk del .tmp-build-deps
COPY . /mwibutsa/
My docker-compose.yml
version: '3.7'
services:
  web:
    build: .
    command: >
      sh -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    environment:
      - DB_HOST=db
      - DB_NAME=postgres
      - DB_PASSWORD=password
      - DB_USER=postgres
      - DB_PORT=5432
    volumes:
      - .:/mwibutsa
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:12-alpine
    environment:
      - POSTGRES_NAME=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=postgres
      - POSTGRES_PORT=5432

To understand why the coverage is not being submitted, you have to understand how Docker containers operate.
A container is created to mimic a separate and independent unit: commands run in the host context are different from commands run inside the container's context.
In your case, you are running the tests and generating the coverage report inside the container's context, then trying to submit the report to Coveralls from the host context.
Since the coverage data file is inside the container, the coveralls command on the host cannot find it, and hence nothing gets submitted.
You may refer to the answer provided here to solve this:
Coveralls: Error- No source for in my application using Docker container
Or check out the documentation provided by travis on how to submit to coveralls from travis using docker:
https://docs.travis-ci.com/user/coveralls/#using-coveralls-with-docker-builds
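The isolation is easy to see with a throwaway container (illustrative commands, assuming Docker is installed locally):

```sh
# Write a file inside a container's filesystem
docker run --name demo alpine sh -c "echo data > /tmp/report.txt"

# The file is not on the host, because the container has its own filesystem.
# It can only be reached by going through the container, e.g.:
docker cp demo:/tmp/report.txt ./report.txt
docker rm demo
```

The same applies to the .coverage file: it lives inside the web container unless the directory is volume-mounted or the upload happens from inside the container.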

You have to run coveralls inside the container so it can send the data file generated by coverage to coveralls.io. You have to run coverage again in the after_success command so the .coverage data file is present in the container when coveralls runs. You also have to pass the Coveralls repo token in as an environment variable that you set in Travis: https://docs.travis-ci.com/user/environment-variables#defining-variables-in-repository-settings
.travis.yml
language: python
python:
  - "3.7"
services: docker
before_script: pip install docker-compose
script:
  - docker-compose run web sh -c "coverage run manage.py test && flake8 && coverage report"
after_success:
  - docker-compose run web sh -c "coverage run manage.py test && TRAVIS_JOB_ID=$TRAVIS_JOB_ID TRAVIS_BRANCH=$TRAVIS_BRANCH COVERALLS_REPO_TOKEN=$COVERALLS_REPO_TOKEN coveralls"
You need to make sure your git repo files are copied into the container for coveralls to accurately report the branch and have the badge work. You might also need to install git in the container.
Dockerfile (line 10):
RUN apk add --update --no-cache postgresql-client jpeg-dev git

Related

How to make the Django dev server run every time I create a Docker container, instead of when I build the image

tl;dr version: how do I do X every time I start a container, instead of every time I build a new image?
I'm building a very basic Docker Django example. When I do docker-compose build, everything works as I want:
version: '3.9'
services:
  app:
    build:
      context: .
    command: sh -c "python manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    volumes:
      - ./app:/app
    environment:
      - SECRET_KEY=devsecretkey
      - DEBUG=1
This runs the Django dev server, however only while the image is being built. The containers created from the image do nothing, but actually I want them to run the Django dev server. So I figured I should just move the command: sh -c "python manage.py runserver 0.0.0.0:8000" from docker-compose to my Dockerfile as an entrypoint.
Below is my Dockerfile:
FROM python:3.9-alpine3.13
LABEL maintainer="madshit.com"
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
COPY ./app /app
WORKDIR /app
EXPOSE 8000
RUN python -m venv /py && \
    /py/bin/pip install --upgrade pip && \
    /py/bin/pip install -r /requirements.txt && \
    adduser --disabled-password --no-create-home app
ENV PATH="/py/bin:$PATH"
USER app
# I added this because I thought it would be called every time my Docker
# environment was finished setting up. No dice :(
ENTRYPOINT python manage.py runserver
The bottom section of the image below is a screenshot of the logs of my image from Docker Desktop. Strangely, the last command it accepted was to set the user, nothing to do with the entrypoint. Maybe it ignored the entrypoint and that's the problem? The top section shows the logs of the instance created from this image (kinda bare).
What do I need to do to make the Django web server run in each container when deployed?
Why doesn't the entrypoint seem to get called? (It's not in the logs.)
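For context (general Docker behaviour, not specific to this thread): RUN instructions execute once, while the image is being built, whereas CMD and ENTRYPOINT only execute when a container is started from the finished image. A minimal sketch:

```dockerfile
FROM python:3.9-alpine3.13

# Executes once, at build time; the result is baked into the image
RUN echo "built" > /build-marker.txt

# Executes every time a container starts from this image;
# exec form avoids wrapping the command in an extra shell
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```

Note the 0.0.0.0 bind: the shell-form ENTRYPOINT in the Dockerfile above omits it, so the dev server would listen only on the container's localhost and be unreachable through the published port.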
I would recommend changing your environment variable logic slightly.
environment:
  - SECRET_KEY=devsecretkey
  - DEBUG=1            <-- replace this
  - SERVER='localhost' <-- or other env like staging or live
And then in your settings file you can do:
SERVER = os.environ.get('SERVER')
And then you can set variables based on the string like so:
if SERVER == 'production':
    DEBUG = False
else:
    DEBUG = True
This is a very common practice so that we can customise all kinds of settings, and there are plenty of use cases for this method.
If that still doesn't work, we may have to look at other issues that might be causing these symptoms.
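A minimal sketch of that settings logic as a testable helper (the function name and the 'localhost' default are illustrative, not from the answer above):

```python
import os


def get_debug(environ=os.environ):
    """Derive Django's DEBUG flag from a SERVER environment variable.

    Mirrors the if/else shown above: anything other than 'production'
    keeps DEBUG on.
    """
    server = environ.get('SERVER', 'localhost')
    return server != 'production'
```

In settings.py you would then write DEBUG = get_debug(), and the value follows whatever SERVER is set to in docker-compose.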

Docker-compose executes Django twice

I am running on Windows 10, and trying to set up a project via docker-compose and Django.
If you are interested, it will take you 3 minutes to follow this tutorial and you will get the same error as me: docs.docker.com/samples/django
When I run
docker-compose run app django-admin startproject app_settings .
I get the following error
CommandError: /app/manage.py already exists. Overlaying a project into an existing directory won't replace conflicting files.
Or when I do this
docker-compose run app python manage.py startapp core
I get the following error
CommandError: 'core' conflicts with the name of an existing Python module and cannot be used as an
app name. Please try another name.
Seems like the command is maybe executed twice? Not sure why?
Dockerfile
FROM python:3.9-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install
RUN apt-get install -y \
    libpq-dev \
    gcc \
    && apt-get clean
COPY ./requirements.txt .
RUN pip install -r requirements.txt
RUN mkdir /app
WORKDIR /app
COPY ./app /app
docker-compose.yml
version: "3.9"
services:
  compute:
    container_name: compute
    build: ./backend
    # command: python manage.py runserver 0.0.0.0:8000
    # volumes:
    #   - ./backend/app:/app
    ports:
      - "8000:8000"
    environment:
      - POSTGRES_NAME=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    depends_on:
      - db
Try running your image without any arguments; you are already using the command keyword in your docker-compose.yml. Or just remove that line from the file.

Postgres authentication error while using Docker Compose, Python Django and Gitlab CI

I use Gitlab CI to make a pipeline that builds a Docker image with my Django app. I saved some .env variables as Gitlab variables. They are successfully picked up and working, but there is
psycopg2.OperationalError: FATAL: password authentication failed for user
I have checked all passwords and variables, and they are correct.
.gitlab-ci.yml
image: docker:stable
services:
- docker:18.09.7-dind
before_script:
- apk add py-pip python3-dev libffi-dev openssl-dev gcc libc-dev make
- pip3 install docker-compose
stages:
- test
test:
stage: test
script:
- docker build -t myapp:$CI_COMMIT_SHA .
- docker-compose -f docker-compose.test.yml run --rm myapp ./manage.py test
- docker-compose -f docker-compose.test.yml run --rm myapp ./manage.py check
- docker-compose -f docker-compose.test.yml down -v
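Two common causes of this error (general Postgres-in-Docker behaviour, not findings from this question; the service and variable names below are illustrative): the credentials the postgres container is initialised with do not match what Django connects with, or a persisted volume still holds an old password, since the POSTGRES_* variables are only applied the first time the data directory is created. A minimal docker-compose.test.yml sketch:

```yaml
version: "3"
services:
  db:
    image: postgres:12-alpine
    environment:
      # Credentials the database is initialised with on first run
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
  myapp:
    build: .
    environment:
      # Must match the values above exactly
      - DB_USER=postgres
      - DB_PASSWORD=password
    depends_on:
      - db
```

The down -v in the pipeline already removes volumes between runs; when running the compose file locally, a leftover volume with stale credentials can cause the same FATAL error.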

PermissionError: [Errno 13] Permission denied: '/app/manage.py' when trying to create project with docker-compose

I was following a tutorial on how to create a Django REST Framework API with Docker and succeeded in running the project on the first attempt, but then it's not possible to recreate it due to a PermissionError.
The directory structure looks in the following way:
project_directory
- Dockerfile
- docker-compose.yml
- requirements.txt
- app/ # this directory was created manually
Successful configuration looks this way:
Dockerfile:
FROM python:3.7-alpine
LABEL author="aqv"
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache postgresql-client
RUN apk add --update --no-cache --virtual .tmp-build-deps \
    gcc libc-dev linux-headers postgresql-dev
RUN pip install -r /requirements.txt
RUN apk del .tmp-build-deps
RUN mkdir /app
WORKDIR /app
COPY ./app /app
RUN adduser -D user
USER user
requirements.txt:
Django>=2.1.3,<2.2.0
djangorestframework>=3.9.0,<3.10.0
psycopg2>=2.7.5,<2.8.0
docker-compose.yml:
version: "3"
services:
app:
build:
context: .
ports:
- "3005:8000"
volumes:
- ./app:/app
command: >
sh -c "python manage.py wait_for_db &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
environment:
- DB_HOST=db
- DB_NAME=app
- DB_USER=postgresuser
- DB_PASS=<pass>
depends_on:
- db
db:
image: postgres:10-alpine
environment:
- POSTGRES_DB=app
- POSTGRES_USER=postgresuser
- POSTGRES_PASSWORD=<pass>
First step was running (1) docker build . in the project directory, then came (2) docker-compose build (which made the 1st command redundant, but didn't break anything) and finally (3) docker-compose run app sh -c "django-admin.py startproject app .".
The last command now ends up with:
Starting project_t_db_1 ... done
Traceback (most recent call last):
File "/usr/local/bin/django-admin.py", line 5, in <module>
management.execute_from_command_line()
File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 3.7, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 353, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python3.7/site-packages/django/core/management/commands/startproject.py", line 20, in handle
super().handle('project', project_name, target, **options)
File "/usr/local/lib/python3.7/site-packages/django/core/management/templates.py", line 155, in handle
with open(new_path, 'w', encoding='utf-8') as new_file:
PermissionError: [Errno 13] Permission denied: '/app/manage.py'
The /app directory is empty; there are only the files listed in the project_directory above, so /app/manage.py doesn't exist.
The attempts to re-run creation of the project were made on a Windows 10 machine (both CMD and PowerShell, including run as admin), Ubuntu on Windows, and a remote Ubuntu server. Ownership of files and directories was checked with both root and a regular user.
All Docker containers were killed (docker kill $(docker ps -q)), images were removed (docker rm $(docker ps -a -q) and docker rmi $(docker images -q)) and creation of the environment was run from scratch.
I observed, that I can successfully create the project on newly set up server. But when trying to create another one or replace the existing one with another, the issue comes up again.
What would you suggest to check?
I'm creating the app directory as the user's home directory; this way the user has the correct permissions for it:
RUN adduser \
    --disabled-password \
    --gecos "" \
    --home /app \
    app
USER app
WORKDIR /app
COPY ./app/ /app/
I faced a similar issue on Mac. The only change I made was to the volumes setup in docker-compose.yml, from
volumes:
  - ./app /app
to
volumes:
  - ./app:/app
It worked for me. The simple docker-compose.yml file looks like below:
version: "3"
services:
  app:
    build:
      context: .
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    command: >
      sh -c "python manage.py runserver 0.0.0.0:8000"
I faced the same issue. I found out that this was due to the Windows (or a third party's) firewall blocking file sharing between Windows and Docker. When I tried to share the 'C' and 'D' drives of Windows in Docker's settings, an error message from Docker appeared and I could not share the drives.
In Docker's documentation, they suggest opening TCP port 445 for Docker to allow file sharing. I used Kaspersky for security, so I looked for ways to open a TCP port in Kaspersky's firewall. I found my solution in this link. You can also find other solutions for this problem in this Stack Overflow page as well.
After I successfully shared drives between Windows and Docker, the problem was solved.
This was bugging me for 2 days, until I decided to reinstall Docker, which solved the problem for me.
Before:
Docker --version: Docker version 20.10.20, build 9fdeb9c
After:
Docker --version: Docker version 20.10.21, build baeda1f
uname -a
Linux fedora 6.0.9-602.inttf.fc37.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 18 16:20:56 EET 2022 x86_64 x86_64 x86_64 GNU/Linux
I used an old project which I have in production right now, so I knew it should work, but I got the same permission denied error on my brand new Fedora 37 development machine.
Steps to solve the issue (install Docker from the repository as documented here: https://docs.docker.com/engine/install/fedora/#install-using-the-repository):
sudo dnf remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-selinux \
    docker-engine-selinux \
    docker-engine
sudo dnf -y install dnf-plugins-core
download https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf config-manager --add-repo path/to/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo systemctl start docker

Setting up Docker for Django, Vue.js, RabbitMQ

I'm trying to add Docker support to my project.
My structure looks like this:
front/Dockerfile
back/Dockerfile
docker-compose.yml
My Dockerfile for Django:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y python-software-properties software-properties-common
RUN add-apt-repository ppa:ubuntugis/ubuntugis-unstable
RUN apt-get update && apt-get install -y python3 python3-pip binutils libproj-dev gdal-bin python3-gdal
ENV APPDIR=/code
WORKDIR $APPDIR
ADD ./back/requirements.txt /tmp/requirements.txt
RUN ./back/pip3 install -r /tmp/requirements.txt
RUN ./back/rm -f /tmp/requirements.txt
CMD $APPDIR/run-django.sh
My Dockerfile for Vue.js:
FROM node:9.11.1-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
and my docker-compose.yml:
version: '2'
services:
rabbitmq:
image: rabbitmq
api:
build:
context: ./back
environment:
- DJANGO_SECRET_KEY=${SECRET_KEY}
volumes:
- ./back:/app
rabbit1:
image: "rabbitmq:3-management"
hostname: "rabbit1"
ports:
- "15672:15672"
- "5672:5672"
labels:
NAME: "rabbitmq1"
volumes:
- "./enabled_plugins:/etc/rabbitmq/enabled_plugins"
django:
extends:
service: api
command:
./back/manage.py runserver
./back/uwsgi --http :8081 --gevent 100 --module websocket --gevent-monkey-patch --master --processes 4
ports:
- "8000:8000"
volumes:
- ./backend:/app
vue:
build:
context: ./front
environment:
- HOST=localhost
- PORT=8080
command:
bash -c "npm install && npm run dev"
volumes:
- ./front:/app
ports:
- "8080:8080"
depends_on:
- django
Running docker-compose fails with:
ERROR: for chatapp2_django_1 Cannot start service django: b'OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \\"./back/manage.py\\": stat ./back/manage.py: no such file or directory": unknown'
ERROR: for rabbit1 Cannot start service rabbit1: b'driver failed programming external connectivity on endpoint chatapp2_rabbit1_1 (05ff4e8c0bc7f24216f2fc960284ab8471b47a48351731df3697c6d041bbbe2f): Error starting userland proxy: listen tcp 0.0.0.0:15672: bind: address already in use'
ERROR: for django Cannot start service django: b'OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \\"./back/manage.py\\": stat ./back/manage.py: no such file or directory": unknown'
ERROR: Encountered errors while bringing up the project.
I don't understand what this 'unknown' error is referring to, or what directory it's trying to get. Have I set this all up right for my project structure?
For the Django part, you're missing a copy of your code for the Django app, which I'm assuming is in back. You'll need to add ADD /back /code. You should probably also use the python alpine Docker image instead of the Ubuntu one, as it will significantly reduce build times and container size.
This is what I would do:
# change this to whatever python version your app is targeting (mine is typically 3.6)
FROM python:3.6-alpine
ADD /back /code
# whatever other dependencies you'll need; I run with the psycopg2-binary build so I need
# these (the nice part of the python-alpine image is that you don't need to install any of
# those python-specific packages you were installing before)
RUN apk add --virtual .build-deps gcc musl-dev postgresql-dev
RUN pip install -r /code/requirements.txt
# expose whatever port you need for your Django app (default is 8000; we use a non-default
# port but you can do whatever you need)
EXPOSE 8000
WORKDIR /code
# don't need /code in the path here since WORKDIR is effectively a change directory
RUN chmod +x run-django.sh
RUN apk add --no-cache bash postgresql-libs
CMD ["./run-django.sh"]
We have a similar run-django.sh script in which we call python manage.py makemigrations and python manage.py migrate. I'm assuming yours is similar.
Long story short, you weren't copying the code from back into code.
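For reference, a run-django.sh along those lines might look like the following sketch (the exact steps and the port are assumptions, not taken from the answer):

```sh
#!/bin/sh
# Stop on the first failing command
set -e

# Bring the database schema up to date
python manage.py makemigrations
python manage.py migrate

# Bind to all interfaces so the server is reachable through the published port
python manage.py runserver 0.0.0.0:8000
```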
Also, in your docker-compose you don't have a build context for the django service like you do for the vue service.
As for your rabbitmq container failure, you need to stop the service associated with rabbit on your machine. I get this error when I try to expose a postgresql or redis container; I have to run /etc/init.d/postgresql stop or /etc/init.d/redis stop to stop the service running on my machine, so there are no collisions on that service's default port.