I am trying to use the coverage tool to measure the code coverage of my Django app. When I test locally it works fine, but when I pushed to GitHub, I got some errors in Travis CI:
Traceback (most recent call last):
File "/usr/local/bin/coverage", line 10, in <module>
sys.exit(main())
File "/usr/local/lib/python3.6/site-packages/coverage/cmdline.py", line 756, in main
status = CoverageScript().command_line(argv)
File "/usr/local/lib/python3.6/site-packages/coverage/cmdline.py", line 491, in command_line
return self.do_run(options, args)
File "/usr/local/lib/python3.6/site-packages/coverage/cmdline.py", line 641, in do_run
self.coverage.save()
File "/usr/local/lib/python3.6/site-packages/coverage/control.py", line 782, in save
self.data_files.write(self.data, suffix=self.data_suffix)
File "/usr/local/lib/python3.6/site-packages/coverage/data.py", line 680, in write
data.write_file(filename)
File "/usr/local/lib/python3.6/site-packages/coverage/data.py", line 467, in write_file
with open(filename, 'w') as fdata:
PermissionError: [Errno 13] Permission denied: '/backend/.coverage'
The command "docker-compose run backend sh -c "coverage run manage.py test"" exited with 1.
My .travis.yml:
language: python
python:
- "3.6"
services:
- docker
before_script: pip install docker-compose
script:
- docker-compose run backend sh -c "python manage.py test && flake8"
- docker-compose run backend sh -c "coverage run manage.py test"
after_success:
- coveralls
And my Dockerfile:
FROM python:3.6-alpine
ENV PYTHONUNBUFFERED 1
# Install dependencies
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache postgresql-client jpeg-dev
RUN apk add --update --no-cache --virtual .tmp-build-deps \
gcc libc-dev linux-headers postgresql-dev musl-dev zlib zlib-dev
RUN pip install -r /requirements.txt
RUN apk del .tmp-build-deps
# Setup directory structure
RUN mkdir /backend
WORKDIR /backend
COPY scripts/start_dev.sh /
RUN dos2unix /start_dev.sh
COPY . /backend
RUN mkdir -p /vol/web/media
RUN mkdir -p /vol/web/static
RUN adduser -D user
RUN chown -R user:user /vol/
RUN chmod -R 755 /vol/web
USER user
My docker-compose.yml:
backend:
container_name: backend_dev_blog
build: ./backend
command: "sh -c /start_dev.sh"
volumes:
- ./backend:/backend
ports:
- "8000:8000"
networks:
- main
environment:
- DB_HOST=db
- DB_NAME=blog
- DB_USER=postgres
- DB_PASS=supersecretpassword
depends_on:
- db
So after seeing the lack of permissions on /backend/.coverage, I simply ran chmod +x .coverage; however, this made no difference, and I still received the exact same error.
Your permission issue is most likely due to the fact that you have a volume (./backend:/backend) and that you are using a user in your container (USER user) which does not have permission to write to this volume.
Since you probably cannot change the permissions of the Travis CI directory ./backend, you could instead change the user that is used to write files to that location. This is easy to do with docker-compose:
backend:
container_name: backend_dev_blog
build: ./backend
command: "sh -c /start_dev.sh"
user: root
volumes:
- ./backend:/backend
ports:
- "8000:8000"
networks:
- main
environment:
- DB_HOST=db
- DB_NAME=blog
- DB_USER=postgres
- DB_PASS=supersecretpassword
depends_on:
- db
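If you would rather not run the whole container as root, a lighter-touch alternative (a sketch, assuming the image has a world-writable path such as /tmp) is to point coverage.py somewhere else via its standard COVERAGE_FILE environment variable, so nothing has to be written to the mounted /backend directory:
# .travis.yml script step, a sketch: COVERAGE_FILE tells coverage.py
# where to write its data file, and /tmp is assumed writable by "user"
- docker-compose run -e COVERAGE_FILE=/tmp/.coverage backend sh -c "coverage run manage.py test"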
Related
I am trying to run python manage.py collectstatic in Docker, but nothing works. My Python project is missing some icons, and this command would solve the issue, but I can't figure out where to place the command. I have read several questions here, but no luck.
Below is my docker-compose.ci.stag.yml file:
version: "3.7"
services:
web:
build:
context: .
dockerfile: Dockerfile.staging
cache_from:
- gingregisrty.azurecr.io/guio-tag:tag
image: gingregisrty.azurecr.io/guio-tag:tag
expose:
- 7000
env_file: .env
Then my docker-compose.staging.yml file:
version: '3.5'
# sudo docker login -p <password> -u <username>
services:
api:
container_name: api
image: gingregisrty.azurecr.io/guio-tag:tag
ports:
- 7000:7000
restart: unless-stopped
env_file:
- .env
networks:
- app-network
watchtower:
image: containrrr/watchtower
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /root/.docker/config.json:/config.json
command: --interval 30
environment:
- WATCHTOWER_CLEANUP=true
networks:
- app-network
nginx-proxy:
container_name: nginx-proxy
image: jwilder/nginx-proxy:0.9
restart: always
ports:
- 443:443
- 90:90
volumes:
- certs:/etc/nginx/certs
- html:/usr/share/nginx/html
- vhost:/etc/nginx/vhost.d
- /var/run/docker.sock:/tmp/docker.sock:ro
depends_on:
- api
networks:
- app-network
nginx-proxy-letsencrypt:
image: jrcs/letsencrypt-nginx-proxy-companion
env_file:
- .env.prod.proxy-companion
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- certs:/etc/nginx/certs
- html:/usr/share/nginx/html
- vhost:/etc/nginx/vhost.d
- acme:/etc/acme.sh
depends_on:
- nginx-proxy
networks:
- app-network
networks:
app-network:
driver: bridge
volumes:
certs:
html:
vhost:
acme:
Then my Dockerfile.staging file:
# ./django-docker/app/Dockerfile
FROM python:3.7.5-buster
# set work directory
WORKDIR /opt/app
# Add current directory code to working directory
ADD . /opt/app/
# set environment variables
# Prevents Python from writing pyc files to disc.
ENV PYTHONDONTWRITEBYTECODE 1
# Prevents Python from buffering stdout and stderr.
ENV PYTHONUNBUFFERED 1
# Copy firebase file
# COPY afro-mobile-test-firebase-adminsdk-cspoa.json
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
tzdata \
python3-setuptools \
python3-pip \
python3-dev \
python3-venv \
git \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# install environment dependencies
RUN pip3 install --upgrade pip
# Install project dependencies
RUN pip3 install -r requirements.txt
EXPOSE 7000
# copy project
COPY . /opt/app/
CMD ["bash", "start-app.sh"]
Then my start-app.sh file:
#Run migrations
python manage.py migrate
#run tests
# python manage.py test
# run collect statics
python manage.py collectstatic
echo 'COLLECT STATIC DONE ********'
echo $PORT
# Start server
# python manage.py runserver 0.0.0.0:$PORT
gunicorn server.wsgi:application --bind 0.0.0.0:$PORT
I'm using GitLab CI to automate the pipeline, so here is my .gitlab-ci.yml build script:
# Build and Deploy to Azure.
build-dev:
stage: build
before_script:
- export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
script:
- apk add --no-cache bash
- chmod +x ./setup_env.sh
- bash ./setup_env.sh
- docker login $AZ_REGISTRY_IMAGE -u $AZ_USERNAME_REGISTRY -p $AZ_PASSWORD_REGISTRY
- docker pull $AZ_REGISTRY_IMAGE/guio-tag:tag || true
- docker-compose -f docker-compose.ci.stag.yml build
- docker push $AZ_REGISTRY_IMAGE/guio-tag:tag
only:
- develop
- test-branch
The build runs successfully, but I am sure python manage.py collectstatic is not run. How best can I do this?
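A hedged observation (an assumption, since the job above only builds and pushes the image): CMD ["bash", "start-app.sh"] runs when a container is started from the image, not during docker-compose build, so collectstatic in start-app.sh never executes in this pipeline. If the static files should be baked into the image itself, one sketch of a fix is a RUN step in Dockerfile.staging after the dependencies are installed (this assumes collectstatic needs no runtime-only variables from .env; --noinput suppresses the interactive confirmation prompt):
# Dockerfile.staging, run at image build time
RUN python manage.py collectstatic --noinput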
I am trying to fire up a separate Redis container which will work as a broker for Celery. Can someone help me understand why the Docker user is not able to open the UNIX socket? I have even tried making the user root, but it doesn't seem to work. Please find below the Dockerfile, docker-compose file, and redis.conf file.
Dockerfile:
FROM centos/python-36-centos7
USER root
ENV DockerHOME=/home/django
RUN mkdir -p $DockerHOME
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PATH=/home/django/.local/bin:$PATH
COPY ./oracle-instantclient18.3-basiclite-18.3.0.0.0-3.x86_64.rpm /home/django
COPY ./oracle.conf /home/django
RUN yum install -y dnf
RUN dnf install -y libaio libaio-devel
RUN rpm -i /home/django/oracle-instantclient18.3-basiclite-18.3.0.0.0-3.x86_64.rpm && \
cp /home/django/oracle.conf /etc/ld.so.conf.d/ && \
ldconfig && \
ldconfig -p | grep client64
COPY ./requirements /home/django/requirements
WORKDIR /home/django
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r ./requirements/development.txt
COPY . .
RUN chmod 777 /home/django
EXPOSE 8000
ENTRYPOINT ["/bin/bash", "-e", "docker-entrypoint.sh"]
Docker-compose file:
version: '3.8'
services:
app:
build: .
volumes:
- .:/django
- cache:/var/run/redis
image: app_name:django
container_name: app_name
ports:
- 8000:8000
depends_on:
- db
- redis
db:
image: postgres:10.0-alpine
volumes:
- postgres_data:/var/lib/postgresql/data
ports:
- 5432:5432
environment:
- POSTGRES_USER=app_name
- POSTGRES_PASSWORD=app_password
- POSTGRES_DB=app_db
labels:
description: "Postgres Database"
container_name: app_name-db-1
redis:
image: redis:alpine
command: redis-server /etc/redis/redis.conf
restart: unless-stopped
ports:
- 6379:6379
volumes:
- ./redis/data:/var/lib/redis
- ./redis/redis-server.log:/var/log/redis/redis-server.log
- cache:/var/run/redis/
- ./redis/redis.conf:/etc/redis/redis.conf
container_name: redis
healthcheck:
test: redis-cli ping
interval: 1s
timeout: 3s
retries: 30
volumes:
postgres_data:
cache:
static-volume:
docker-entrypoint.sh:
# run migration first
python manage.py migrate
python manage.py preload_sites -uvasas -l
python manage.py preload_endpoints -uvasas -l
python manage.py collectstatic --noinput
#start celery
export C_FORCE_ROOT='true'
celery multi start 1 -A realm -l INFO -c4
# start the server
python manage.py runserver 0:8000
redis.conf:
unixsocket /var/run/redis/redis.sock
unixsocketperm 770
logfile /var/log/redis/redis-server.log
I am new to Docker, so apologies if I have missed something obvious or have not followed some of the best practices.
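A hedged aside rather than a definitive answer: with unixsocketperm 770, only the socket's owner and members of its group can connect, so the process in the app container must run as a user that shares the Redis server's group on the cache volume. One quick way to confirm this is the cause, for local debugging only, is to temporarily relax the socket permission in redis.conf:
# diagnostic sketch only: 777 lets any user in any container that
# mounts the cache volume open the socket
unixsocket /var/run/redis/redis.sock
unixsocketperm 777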
I'm building a Django+Angular web application which is deployed on a server using docker-compose, and I need to periodically run one Django management command. I searched SO a bit and tried the following:
docker-compose.yml:
version: '3.7'
services:
db:
restart: always
image: postgres:12-alpine
environment:
POSTGRES_DB: ${DB_NAME}
POSTGRES_USER: ${DB_USER}
POSTGRES_PASSWORD: ${DB_PASSWORD}
ports:
- "5432:5432"
volumes:
- ./db:/var/lib/postgresql/data
api:
restart: always
image: registry.gitlab.com/*******/price_comparison_tool/backend:${CI_COMMIT_REF_NAME:-latest}
build: ./backend
ports:
- "8000:8000"
volumes:
- ./backend:/code
environment:
- SUPERUSER_PASSWORD=********
- DB_HOST=db
- DB_PORT=5432
- DB_NAME=price_tool
- DB_USER=price_tool
- DB_PASSWORD=*********
depends_on:
- db
web:
restart: always
image: registry.gitlab.com/**********/price_comparison_tool/frontend:${CI_COMMIT_REF_NAME:-latest}
build:
context: ./frontend
dockerfile: Dockerfile
volumes:
- .:/frontend
ports:
- "80:80"
depends_on:
- api
volumes:
backend:
db:
Dockerfile (backend):
FROM python:3.8.3-alpine
ENV PYTHONUNBUFFERED 1
RUN apk add postgresql-dev gcc python3-dev musl-dev && pip3 install psycopg2
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
ADD entrypoint.sh /entrypoint.sh
ADD crontab_task /crontab_task
ADD run_boto.sh /run_boto.sh
RUN chmod a+x /entrypoint.sh
RUN chmod a+x /run_boto.sh
RUN /usr/bin/crontab /crontab_task
RUN pip install -r requirements.txt
ADD . /code/
RUN mkdir -p db
RUN mkdir -p logs
ENTRYPOINT ["/entrypoint.sh"]
CMD ["gunicorn", "-w", "3", "--timeout", "300", "--bind", "0.0.0.0:8000", "--access-logfile", "-", "price_tool_project.wsgi>
crontab_task:
*/1 * * * * /run_boto.sh > /proc/1/fd/1 2>/proc/1/fd/2
run_boto.sh:
#!/bin/bash -e
cd price_comparison_tool/backend/
python manage.py boto.py
But when I run docker-compose up --build I get the following messages in the terminal:
api_1 | /bin/ash: python manage.py boto > /proc/1/fd/1 2>/proc/1/fd/2: not found
api_1 | /bin/ash: /run_boto.sh: not found
The project structure is the following:
.
├── backend
├── db
├── docker-compose.yml
└── frontend
Can anybody give me advice on how to fix this issue and run the management command periodically? Thanks in advance!
EDIT
I made the following update:
crontab_task:
*/1 * * * * /code/run_boto.sh > /proc/1/fd/1 2>/proc/1/fd/2
and now the run_boto.sh path is correct, but I get the following error:
/bin/ash: /code/run_boto.sh: Permission denied
If you are running this application as a non-root user, then the problem is that cron/crontab cannot be used by a non-root user.
You can take a look at this answer, which I got when I was facing the same problem.
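A further hedged note on the Permission denied error: the chmod a+x in the Dockerfile was applied to /run_boto.sh, but the copy that cron invokes, /code/run_boto.sh, comes from the bind mount and therefore keeps the host file's permissions. Setting the executable bit on the host copy is one sketch of a fix (the path assumes ./backend is the directory mounted at /code):
# run on the host, in the project root
chmod +x backend/run_boto.sh
Note also that run_boto.sh declares #!/bin/bash while python:3.8.3-alpine ships only /bin/ash, so bash would have to be installed in the image (for example, RUN apk add --no-cache bash) or the shebang changed to #!/bin/sh.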
I run the following YAML file with the docker-compose up command. I'm developing a REST API with Django (Udemy "Build a Backend REST API with Python & Django - Advanced").
System: OS X 10.10.5
Docker version: 18.03.0-ce
docker-compose version: 1.20.1
Q: I am unable to access the server on localhost (127.0.0.1).
docker-compose.yml
version: "2.2"
services:
app:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app:/app
command: >
sh -c "python manage.py wait_for_db &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
environment:
- DB_HOST=db
- DB_NAME=app
- DB_USER=postgres
- DB_PASS=supersecretpassword
depends_on:
- db
db:
image: postgres:10-alpine
environment:
- POSTGRES_DB=app
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=supersecretpassword
Dockerfile
FROM python:3.7-alpine
#python unbuffered environment variable
ENV PYTHONUNBUFFERED 1
#copy requirements.txt from the directory adjacent to
#the Dockerfile into the docker image's /requirements.txt
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache postgresql-client
RUN apk add --update --no-cache --virtual .tmp-build-deps \
gcc libc-dev linux-headers postgresql-dev
RUN pip install -r /requirements.txt
RUN apk del .tmp-build-deps
#create an empty folder named app in docker image
RUN mkdir /app
WORKDIR /app
COPY ./app /app
RUN adduser -D user
USER user
Some suggest adding CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"] to the Dockerfile; that didn't work in this case. In Django settings.py, ALLOWED_HOSTS = ['192.168.99.100'].
Console output (excerpt):
File "/usr/local/lib/python3.7/site-packages/django/db/backends/postgresql/base.py",
line 178, in get_new_connection connection = Database.connect(conn_params)
File "/usr/local/lib/python3.7/site-packages/psycopg2/init.py",
line 130, in connect conn = _connect(dsn, connection_factory=connection_factory, kwasync)
django.db.utils.OperationalError: could not connect to server: Connection refused **
Is the server running on host "db" (172.19.0.2) and accepting
TCP/IP connections on port 5432?
I'm super new to this stuff; I appreciate your help and suggestions.
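A hedged guess, prompted by ALLOWED_HOSTS = ['192.168.99.100']: on OS X 10.10 Docker typically runs through Docker Toolbox inside a docker-machine VM, so published ports are reachable at the VM's IP rather than at 127.0.0.1:
# print the VM's address; "default" is the usual machine name under Docker Toolbox
docker-machine ip default
# then browse to http://<printed-ip>:8000/ instead of http://127.0.0.1:8000/
The console output is likely a separate, transient issue: Postgres was not yet accepting connections when Django first tried, which is presumably what the custom wait_for_db command in the compose file is there to handle.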
I've modified the django-cookiecutter default production template to make the Caddy web server serve static files. I'm using volumes to map the ./static directories in the django and caddy containers through the host's ./static directory, but I'm getting a permission error when Docker executes python manage.py collectstatic --noinput while trying to create a subfolder of ./static.
However, if I don't switch to the django user in the django container's Dockerfile, and hence execute collectstatic as root, everything works perfectly. I guess the django user in the container is not allowed to write to the host directory, despite the fact that chown -R django /app/static was successfully executed.
Traceback (most recent call last):
File "/app/manage.py", line 30, in <\module>
execute_from_command_line(sys.argv)
...
File "/usr/local/lib/python3.6/site-packages/collectfast/management/commands/collectstatic.py", line 111, in copy_file
self.do_copy_file(args)
File "/usr/local/lib/python3.6/site-packages/collectfast/management/commands/collectstatic.py",
line 100, in do_copy_file
path, prefixed_path, source_storage)
File "/usr/local/lib/python3.6/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 354, in copy_file
self.storage.save(prefixed_path, source_file)
File "/usr/local/lib/python3.6/site-packages/django/core/files/storage.py",
line 49, in save
return self._save(name, content)
File "/usr/local/lib/python3.6/site-packages/django/core/files/storage.py", line 236, in _save
os.makedirs(directory)
File "/usr/local/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
PermissionError:
[Errno 13] Permission denied: '/app/static/sass'
I tried chown -R systemd-timesync:root static on the host, creating the ./static folder beforehand on the host as root, and adding RUN mkdir /app/static && chown -R django /app/static to the django container's Dockerfile (to execute as the container's root user).
docker-compose.yml
version: '3'
volumes:
production_postgres_data: {}
production_postgres_data_backups: {}
production_caddy: {}
services:
django:
build:
context: .
dockerfile: ./compose/production/django/Dockerfile
volumes:
- ./static:/app/static
depends_on:
- postgres
- redis
env_file:
- ./.envs/.production/.django
- ./.envs/.production/.postgres
command: /start
postgres:
build:
context: .
dockerfile: ./compose/production/postgres/Dockerfile
volumes:
- production_postgres_data:/var/lib/postgresql/data
- production_postgres_data_backups:/backups
env_file:
- ./.envs/.production/.postgres
caddy:
build:
context: .
dockerfile: ./compose/production/caddy/Dockerfile
depends_on:
- django
volumes:
- production_caddy:/root/.caddy
- ./static:/srv/static
env_file:
- ./.envs/.production/.caddy
ports:
- "0.0.0.0:80:80"
- "0.0.0.0:443:443"
redis:
image: redis:3.2
django container Dockerfile:
FROM nickgryg/alpine-pandas
ENV PYTHONUNBUFFERED 1
RUN apk update \
# psycopg2 dependencies
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev \
# Pillow dependencies
&& apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \
# CFFI dependencies
&& apk add libffi-dev py-cffi \
# lxml dependencies
&& apk add libxml2-dev libxslt-dev
RUN addgroup -S django \
&& adduser -S -G django django
# Requirements are installed here to ensure they will be cached.
COPY ./requirements /requirements
RUN pip install --no-cache-dir -r /requirements/production.txt \
&& rm -rf /requirements
COPY ./compose/production/django/entrypoint /entrypoint
RUN sed -i 's/\r//' /entrypoint
RUN chmod +x /entrypoint
RUN chown django /entrypoint
COPY ./compose/production/django/start /start
RUN sed -i 's/\r//' /start
RUN chmod +x /start
RUN chown django /start
COPY . /app
RUN chown -R django /app
USER django
WORKDIR /app
ENTRYPOINT ["/entrypoint"]
django container start script:
#!/bin/sh
set -o errexit
set -o pipefail
set -o nounset
python /app/manage.py collectstatic --noinput
/usr/local/bin/gunicorn config.wsgi --bind 0.0.0.0:5000 --chdir=/app
I don't want my container to run as root, so I'm looking for any solutions or ideas.
I finally found a workaround other than executing collectstatic as root.
As I suspected, the problem was with Docker permissions: we need to grant the container's django user permission to create folders in the host's ./static directory (which is owned by the django user inside the django Docker container). We can do that, knowing that the UID is the same between the host system and the container, by running
docker-compose run django id -u django
It outputs the UID of the django user inside the container. For instance, the UID is 100. Then run (I'm not sure about the GID, but it works when gid = uid + 1)
chown -R 100:101 /static
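Rather than guessing that gid = uid + 1, the GID can be read the same way as the UID, using another one-off container (a small sketch; id -g prints the numeric group ID):
docker-compose run django id -g django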
If we run ls -lh, we can see that the static folder is now owned by systemd-network, which is simply the host user name that happens to map to uid = 100:
drwxr-xr-x 4 root root 4.0K Sep 27 11:23 compose
drwxr-xr-x 3 root root 4.0K Nov 27 12:09 config
drwxr-xr-x 3 root root 4.0K Nov 14 02:04 docs
drwxr-xr-x 2 root root 4.0K Sep 27 11:23 locale
-rwxr-xr-x 1 root root 1.1K Sep 27 12:56 manage.py
...
drwxr-xr-x 11 systemd-network systemd-journal 4.0K Nov 21 22:15 static
drwxr-xr-x 2 root root 4.0K Nov 27 13:37 utils
This should solve the problem. Beware that after rebuilding the container, the UID of the django user may change and the error will appear again, so you would have to repeat this.
Anyone who understands in more depth how Docker works is welcome to explain what happens here, and I will accept their answer.
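For completeness, a sketch of a more rebuild-proof variant, which is my addition rather than part of the workaround above: pin the container user's UID at build time so it matches a known host UID and no longer drifts between rebuilds. UID=1000 is a placeholder; substitute the UID that should own ./static on the host (passed, for example, with docker-compose build --build-arg UID=$(id -u)).
# django container Dockerfile: busybox adduser accepts -u to fix the UID
ARG UID=1000
RUN addgroup -S django \
&& adduser -S -G django -u "${UID}" django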