The title speaks for itself. Everything works perfectly except the tests. After pulling and running the test service (test stage), it reports Ran 0 tests in 0.000s, but I have some basic tests and they run perfectly on my system. I just can't quite figure out what is wrong.
I have a docker-compose file like this:
version: '2'
services:
  postgres:
    image: "postgres"
    ports:
      - 5432:5432
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=solo
      - POSTGRES_DB=solo
      - POSTGRES_PASSWORD=solo
      - POSTGRES_HOST_AUTH_METHOD=trust
  web:
    image: $CI_REGISTRY:$CI_COMMIT_REF_NAME
    build: .
    environment:
      - SECRET_KEY
    command: bash -c "cd src && gunicorn solo.wsgi:application --bind 0.0.0.0:8000"
    volumes:
      - static:/app/static
    expose:
      - 8000
    depends_on:
      - postgres
    restart: always
  test:
    image: $CI_REGISTRY:$CI_COMMIT_REF_NAME
    command: bash -c "python /app/src/manage.py test"
    depends_on:
      - postgres
  nginx:
    build: ./nginx
    volumes:
      - static:/app/static
    ports:
      - 80:80
    depends_on:
      - web
volumes:
  static:
  postgres_data:
And .gitlab-ci.yml like this:
image: docker:latest
services:
  - docker:dind
stages:
  - build
  - test
  - deploy
before_script:
  - apk add --no-cache py-pip
  - pip install docker-compose==1.9.0
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
build:
  stage: build
  script:
    - docker-compose build
    - docker-compose push
test:
  stage: test
  script:
    - docker-compose pull test
    - docker-compose run test
deploy:
  stage: deploy
  only:
    - master
    - cicddev
  before_script:
    - apk add --update alpine-sdk
    - apk add python3-dev libffi-dev libressl-dev
    - apk add --no-cache openssh-client py-pip bash rsync
    - pip install Fabric3==1.14.post1
    - eval $(ssh-agent -s)
    - bash -c 'echo "${DEPLOY_KEY}" | ssh-add -'
    - mkdir -p ~/.ssh
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  script:
    - fab -H ${DEPLOY_ADDRESS} deploy
Any thoughts or suggestions?
Found the solution here: Django test runner not finding tests.
Django won't find tests if you try to run them from a parent folder, so I had to change command: bash -c "python /app/src/manage.py test" to command: bash -c "cd src && python manage.py test".
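For reference, a minimal sketch of the corrected test service (same image and CI variables as in the compose file above):

test:
  image: $CI_REGISTRY:$CI_COMMIT_REF_NAME
  # run from the directory that contains manage.py so Django's test
  # discovery starts at the project root instead of a parent folder
  command: bash -c "cd src && python manage.py test"
  depends_on:
    - postgres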
I am trying to run python manage.py collectstatic in Docker, but nothing works: my Python project is missing some icons, and this command would solve the issue, but I can't figure out where to place the command. I have read several questions here, but no luck.
Below is my docker-compose.ci.stag.yml file:
version: "3.7"
services:
web:
build:
context: .
dockerfile: Dockerfile.staging
cache_from:
- gingregisr*ty.azurecr.io/guio-tag:tag
image: gingregisrty.azurecr.io/guio-tag:tag
expose:
- 7000
env_file: .env
Then my docker-compose.staging.yml file:
version: '3.5'
# sudo docker login -p <password> -u <username>
services:
  api:
    container_name: api
    image: gingregisrty.azurecr.io/guio-tag:tag
    ports:
      - 7000:7000
    restart: unless-stopped
    env_file:
      - .env
    networks:
      - app-network
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /root/.docker/config.json:/config.json
    command: --interval 30
    environment:
      - WATCHTOWER_CLEANUP=true
    networks:
      - app-network
  nginx-proxy:
    container_name: nginx-proxy
    image: jwilder/nginx-proxy:0.9
    restart: always
    ports:
      - 443:443
      - 90:90
    volumes:
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - api
    networks:
      - app-network
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    env_file:
      - .env.prod.proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - acme:/etc/acme.sh
    depends_on:
      - nginx-proxy
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  certs:
  html:
  vhost:
  acme:
Then my Dockerfile.staging file:
# ./django-docker/app/Dockerfile
FROM python:3.7.5-buster
# set work directory
WORKDIR /opt/app
# Add current directory code to working directory
ADD . /opt/app/
# set environment variables
# Prevents Python from writing pyc files to disc.
ENV PYTHONDONTWRITEBYTECODE 1
# Prevents Python from buffering stdout and stderr.
ENV PYTHONUNBUFFERED 1
# Copy firebase file
# COPY afro-mobile-test-firebase-adminsdk-cspoa.json
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    tzdata \
    python3-setuptools \
    python3-pip \
    python3-dev \
    python3-venv \
    git \
    && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
# install environment dependencies
RUN pip3 install --upgrade pip
# Install project dependencies
RUN pip3 install -r requirements.txt
EXPOSE 7000
# copy project
COPY . /opt/app/
CMD ["bash", "start-app.sh"]
Then my start-app.sh file:
#Run migrations
python manage.py migrate
#run tests
# python manage.py test
# run collectstatic
python manage.py collectstatic --noinput
echo 'COLLECT STATIC DONE ********'
echo $PORT
# Start server
# python manage.py runserver 0.0.0.0:$PORT
gunicorn server.wsgi:application --bind 0.0.0.0:$PORT
I am using GitLab CI to automate the pipeline, so here is my gitlab.yml build script:
# Build and Deploy to Azure.
build-dev:
  stage: build
  before_script:
    - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
  script:
    - apk add --no-cache bash
    - chmod +x ./setup_env.sh
    - bash ./setup_env.sh
    - docker login $AZ_REGISTRY_IMAGE -u $AZ_USERNAME_REGISTRY -p $AZ_PASSWORD_REGISTRY
    - docker pull $AZ_REGISTRY_IMAGE/guio-tag:tag || true
    - docker-compose -f docker-compose.ci.stag.yml build
    - docker push $AZ_REGISTRY_IMAGE/guio-tag:tag
  only:
    - develop
    - test-branch
The build runs successfully, but I am sure python manage.py collectstatic is not run. How best can I do this?
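One observation, offered as a sketch rather than a definitive fix: the build-dev job only builds and pushes the image, so start-app.sh (and the collectstatic call inside it) never executes during the pipeline; it only runs when a container is started from the image. If the statics should be baked into the image at build time instead, a minimal sketch would be to add a line like this to Dockerfile.staging after the dependencies are installed (assuming your Django settings do not need runtime-only environment variables to run collectstatic):

# collect static files into the image at build time
RUN python3 manage.py collectstatic --noinput

Either way, the --noinput flag matters: without it, collectstatic prompts for confirmation and can hang a non-interactive build or startup script.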
I am trying to fire up a separate Redis container to act as a broker for Celery. Can someone help me understand why the Docker user is not able to open the UNIX socket? I have even tried making the user root, but it doesn't seem to work. Please find below the Dockerfile, docker-compose file, and redis.conf file.
Dockerfile:
FROM centos/python-36-centos7
USER root
ENV DockerHOME=/home/django
RUN mkdir -p $DockerHOME
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PATH=/home/django/.local/bin:$PATH
COPY ./oracle-instantclient18.3-basiclite-18.3.0.0.0-3.x86_64.rpm /home/django
COPY ./oracle.conf /home/django
RUN yum install -y dnf
RUN dnf install -y libaio libaio-devel
RUN rpm -i /home/django/oracle-instantclient18.3-basiclite-18.3.0.0.0-3.x86_64.rpm && \
    cp /home/django/oracle.conf /etc/ld.so.conf.d/ && \
    ldconfig && \
    ldconfig -p | grep client64
COPY ./requirements /home/django/requirements
WORKDIR /home/django
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r ./requirements/development.txt
COPY . .
RUN chmod 777 /home/django
EXPOSE 8000
ENTRYPOINT ["/bin/bash", "-e", "docker-entrypoint.sh"]
Docker-compose file:
version: '3.8'
services:
  app:
    build: .
    volumes:
      - .:/django
      - cache:/var/run/redis
    image: app_name:django
    container_name: app_name
    ports:
      - 8000:8000
    depends_on:
      - db
      - redis
  db:
    image: postgres:10.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=app_name
      - POSTGRES_PASSWORD=app_password
      - POSTGRES_DB=app_db
    labels:
      description: "Postgres Database"
    container_name: app_name-db-1
  redis:
    image: redis:alpine
    command: redis-server /etc/redis/redis.conf
    restart: unless-stopped
    ports:
      - 6379:6379
    volumes:
      - ./redis/data:/var/lib/redis
      - ./redis/redis-server.log:/var/log/redis/redis-server.log
      - cache:/var/run/redis/
      - ./redis/redis.conf:/etc/redis/redis.conf
    container_name: redis
    healthcheck:
      test: redis-cli ping
      interval: 1s
      timeout: 3s
      retries: 30
volumes:
  postgres_data:
  cache:
  static-volume:
docker-entrypoint.sh:
# run migration first
python manage.py migrate
python manage.py preload_sites -uvasas -l
python manage.py preload_endpoints -uvasas -l
python manage.py collectstatic --noinput
#start celery
export C_FORCE_ROOT='true'
celery multi start 1 -A realm -l INFO -c4
# start the server
python manage.py runserver 0:8000
redis.conf
unixsocket /var/run/redis/redis.sock
unixsocketperm 770
logfile /var/log/redis/redis-server.log
I am new to Docker, so apologies if I have missed something very obvious or have not followed some of the best practices.
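A quick way to see whether the socket and its permissions actually line up (hypothetical diagnostic commands, assuming the compose file above is up):

# inspect the socket's owner, group, and mode inside the redis container
docker-compose exec redis ls -l /var/run/redis/redis.sock
# with unixsocketperm 770, whoever connects from the app container must be
# root or share the socket's group
docker-compose exec app id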
I'm trying to make sure my Django app waits for my Postgres db to start, so I don't get the error django.db.utils.OperationalError: FATAL: the database system is starting up. I've read https://docs.docker.com/compose/startup-order/, and here is what I have so far:
docker-compose.yml
version: "3.9"
services:
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 5s
timeout: 5s
retries: 5
environment:
- POSTGRES_DB=${DB_NAME}
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASSWORD}
backend:
build: ./backend
command: python3 manage.py runserver
volumes:
- ./backend:/code
ports:
- "8000:8000"
command: ["./wait-for-it.sh", "db", "bash", "entrypoint.sh"]
depends_on:
- db
wait-for-it.sh
#!/bin/sh
# wait-for-it.sh
set -e
host="$1"
shift
cmd="$#"
# postgres
until PGPASSWORD=$DB_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
>&2 echo "Postgres is unavailable - sleeping"
sleep 1
done
>&2 echo "Postgres is up - executing command"
exec $cmd
Dockerfile
# syntax=docker/dockerfile:1
FROM python:3.9.6-alpine3.14
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN \
    apk add --no-cache postgresql-libs && \
    apk add --no-cache --virtual .build-deps gcc musl-dev postgresql-dev && \
    python3 -m pip install -r requirements.txt --no-cache-dir && \
    apk --purge del .build-deps
COPY . /code/
RUN chmod u+x ./wait-for-it.sh
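A side note on the Dockerfile, offered as an observation rather than a definitive diagnosis: wait-for-it.sh calls psql, but postgresql-libs does not ship the psql binary. A sketch of the missing install, assuming the package name on this Alpine release is postgresql-client:

# psql lives in the client package, not in postgresql-libs
RUN apk add --no-cache postgresql-client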
EDIT #1:
This is my directory structure (shown in the original post as two screenshots: the root directory and the backend directory).
You are trying to combine several different solutions.
First of all, if you use pg_isready, you don't need any custom wait-for-it.sh script, because pg_isready already does that job. So just remove your wait-for-it.sh file.
Also, if you use a healthcheck in docker-compose.yml, you don't need to run any check script manually before your entrypoint.sh, but you do need to add a condition to the depends_on section. So change your docker-compose.yml to the following:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${DB_USER}"]
interval: 5s
timeout: 5s
retries: 5
environment:
- POSTGRES_DB=${DB_NAME}
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASSWORD}
backend:
build: ./backend
volumes:
- ./backend:/code
ports:
- "8000:8000"
command: entrypoint.sh
depends_on:
db:
condition: service_healthy
Note that I also changed the test command in the healthcheck section and removed the first command from the backend service.
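For completeness, a minimal sketch of what entrypoint.sh could look like in this arrangement (hypothetical; the actual script may differ):

#!/bin/sh
set -e
# db is already healthy when this runs, so migrations are safe to apply
python3 manage.py migrate --noinput
# exec so the server replaces the shell as PID 1 and receives signals
exec python3 manage.py runserver 0.0.0.0:8000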
I'm building a Django + Angular web application which is deployed on a server using docker-compose, and I need to periodically run one Django management command. I searched SO a bit and tried the following:
docker-compose:
version: '3.7'
services:
  db:
    restart: always
    image: postgres:12-alpine
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    ports:
      - "5432:5432"
    volumes:
      - ./db:/var/lib/postgresql/data
  api:
    restart: always
    image: registry.gitlab.com/*******/price_comparison_tool/backend:${CI_COMMIT_REF_NAME:-latest}
    build: ./backend
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/code
    environment:
      - SUPERUSER_PASSWORD=********
      - DB_HOST=db
      - DB_PORT=5432
      - DB_NAME=price_tool
      - DB_USER=price_tool
      - DB_PASSWORD=*********
    depends_on:
      - db
  web:
    restart: always
    image: registry.gitlab.com/**********/price_comparison_tool/frontend:${CI_COMMIT_REF_NAME:-latest}
    build:
      context: ./frontend
      dockerfile: Dockerfile
    volumes:
      - .:/frontend
    ports:
      - "80:80"
    depends_on:
      - api
volumes:
  backend:
  db:
Dockerfile (backend):
FROM python:3.8.3-alpine
ENV PYTHONUNBUFFERED 1
RUN apk add postgresql-dev gcc python3-dev musl-dev && pip3 install psycopg2
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
ADD entrypoint.sh /entrypoint.sh
ADD crontab_task /crontab_task
ADD run_boto.sh /run_boto.sh
RUN chmod a+x /entrypoint.sh
RUN chmod a+x /run_boto.sh
RUN /usr/bin/crontab /crontab_task
RUN pip install -r requirements.txt
ADD . /code/
RUN mkdir -p db
RUN mkdir -p logs
ENTRYPOINT ["/entrypoint.sh"]
CMD ["gunicorn", "-w", "3", "--timeout", "300", "--bind", "0.0.0.0:8000", "--access-logfile", "-", "price_tool_project.wsgi>
crontab_task:
*/1 * * * * /run_boto.sh > /proc/1/fd/1 2>/proc/1/fd/2
run_boto.sh:
#!/bin/bash -e
cd price_comparison_tool/backend/
python manage.py boto.py
But when I run docker-compose up --build, I get the following messages in the terminal:
api_1 | /bin/ash: python manage.py boto > /proc/1/fd/1 2>/proc/1/fd/2: not found
api_1 | /bin/ash: /run_boto.sh: not found
Project structure is following:
.
├── backend
├── db
├── docker-compose.yml
└── frontend
Can anybody give me advice on how to fix this issue and run the management command periodically? Thanks in advance!
EDIT
I made the following update:
crontab_task:
*/1 * * * * /code/run_boto.sh > /proc/1/fd/1 2>/proc/1/fd/2
and now the run_boto.sh path is correct, but I get the following error:
/bin/ash: /code/run_boto.sh: Permission denied
If you are running this application as a non-root user, the problem is that cron and crontab cannot be used by a non-root user.
You can take a look at this answer, which I got when I was facing the same problem.
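One more detail worth checking, as a sketch under the assumption of a typical Linux checkout: because ./backend is bind-mounted over /code, the executable bit on run_boto.sh comes from the host file, so any chmod done in the image is shadowed at runtime. Setting it on the host side would look like:

# make the script executable in the host checkout that gets mounted to /code
chmod +x backend/run_boto.sh
docker-compose up --build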
I'm trying to set up a Docker container with Django. My Dockerfile is:
FROM python:3.7-alpine
MAINTAINER Freshness Productions
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache postgresql-client
RUN apk add --update --no-cache --virtual .tmp-build-deps \
    gcc libc-dev linux-headers postgresql-dev
RUN pip install -r /requirements.txt
RUN apk del .tmp-build-deps
RUN mkdir /app
WORKDIR /app
COPY ./app /app
RUN adduser -D user
USER user
RUN rabbitmqctl add_user test testpass1
RUN rabbitmqctl add_vhost myvhost
RUN rabbitmqctl set_permissions -p myvhost test ".*" ".*" ".*"
RUN rabbitmq-server
And my docker-compose.yml is:
version: "3"
services:
app:
build:
context: .
image: &app app
ports:
- "8000:8000"
volumes:
- ./app:/app
command: >
sh -c "python manage.py wait_for_db &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000
export C_FORCE_ROOT='true'
celery -A app beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler"
env_file: &envfile
- env.env
depends_on:
- db
- broker
db:
image: postgres:10-alpine
environment:
- POSTGRES_DB=app
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=supersecretpassword
worker:
build: .
image: *app
restart: "no"
env_file: *envfile
command: sh -c "celery -A app worker --loglevel=info"
volumes:
- ./app:/app
depends_on:
- broker
broker:
image: rabbitmq:3
env_file: *envfile
ports:
- 5672:5672
I don't see what I need to do, other than that I'm using two different images in my docker-compose. Does that matter? When I try to run docker-compose up, I get the error:
[ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
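A note on the error itself: inside a container, 127.0.0.1 is that container's own loopback, not the host or another service, so Celery has to be pointed at the broker by its compose service name. A minimal sketch, assuming RabbitMQ's default guest credentials and the broker service defined above (hypothetical settings.py line):

# point Celery at the rabbitmq service via compose DNS, not localhost
CELERY_BROKER_URL = "amqp://guest:guest@broker:5672//"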
Here you can find a configuration that works perfectly with RabbitMQ, Django, MySQL, Celery, and Celery Beat.
docker-compose.yml
version: "3.7"
services:
django:
container_name: django_python
image: python:3.6
command: bash -c "pip3 install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"
volumes:
- ${APP_PATH}var/www/api.mylaser.fr/mylazer_backend:/app
- ${APP_PATH}var/www/static.mylaser.fr:/app/static
depends_on:
- mysql
working_dir: /app
environment:
- PYTHONUNBUFFERED=1
- MYSQL_HOST=mysql
- MYSQL_USER=${MYSQL_USER}
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
- MYSQL_DATABASE=${MYSQL_DATABASE}
- CELERY_BROKER=${CELERY_BROKER}
mysql:
container_name: python_mysql
image: mariadb:latest
#ports:
# - "3306:3306"
volumes:
- ${APP_PATH}var/mysql:/var/lib/mysql
- ${APP_PATH}etc/mysql:/etc/mysql
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
- MYSQL_DATABASE=${MYSQL_DATABASE}
- MYSQL_USER=${MYSQL_USER}
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
- TZ=Europe/Paris
rabbitmq:
image: rabbitmq:3.8.2-management
container_name: rabbitmq
volumes:
- ${APP_PATH}etc/rabbitmq:/etc/rabbitmq/:rw
- ${APP_PATH}var/rabbitmq:/var/lib/rabbitmq/:rw
- ${APP_PATH}var/log/rabbitmq:/var/log/rabbitmq/:rw
environment:
HOSTNAME: ${RABBITMQ_HOSTNAME}
RABBITMQ_ERLANG_COOKIE: ${RABBITMQ_ERLANG_COOKIE}
RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER}
RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS}
ports:
- ${RABBITMQ_PORT}:5672
- ${RABBITMQ_MGT_PORT}:15672
worker:
image: python:3.6
container_name: worker
command: bash -c "pip3 install -r requirements.txt && celery -A ${PROJECT_NAME} worker -l info"
working_dir: /app
volumes:
- ${APP_PATH}var/www/api.mylaser.fr/mylazer_backend:/app
environment:
- PYTHONUNBUFFERED=1
- MYSQL_HOST=mysql
- MYSQL_USER=${MYSQL_USER}
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
- MYSQL_DATABASE=${MYSQL_DATABASE}
- CELERY_BROKER=${CELERY_BROKER}
depends_on:
- mysql
- rabbitmq
beat-worker:
image: python:3.6
container_name: beat-worker
command: bash -c "pip3 install -r requirements.txt && celery -A ${PROJECT_NAME} beat -l info"
working_dir: /app
volumes:
- ${APP_PATH}var/www/api.mylaser.fr/mylazer_backend:/app
environment:
- PYTHONUNBUFFERED=1
- MYSQL_HOST=mysql
- MYSQL_USER=${MYSQL_USER}
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
- MYSQL_DATABASE=${MYSQL_DATABASE}
- CELERY_BROKER=${CELERY_BROKER}
depends_on:
- mysql
- rabbitmq
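The compose file expects a .env next to it; a hypothetical sketch with placeholder values (every variable name is taken from the file above, the values are illustrative only):

# paths and project
APP_PATH=/srv/
PROJECT_NAME=mylaser
# database
MYSQL_ROOT_PASSWORD=change-me
MYSQL_DATABASE=mylaser
MYSQL_USER=mylaser
MYSQL_PASSWORD=change-me
# broker
RABBITMQ_HOSTNAME=rabbitmq
RABBITMQ_ERLANG_COOKIE=change-me
RABBITMQ_DEFAULT_USER=admin
RABBITMQ_DEFAULT_PASS=change-me
RABBITMQ_PORT=5672
RABBITMQ_MGT_PORT=15672
CELERY_BROKER=amqp://admin:change-me@rabbitmq:5672//

With that in place, docker-compose up -d brings up the whole stack.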