docker-compose how to reference files in other directories - django

Having this Dockerfile:
FROM python:3.8.3-alpine
ENV MICRO_SERVICE=/home/app/microservice
# RUN addgroup -S $APP_USER && adduser -S $APP_USER -G $APP_USER
# set work directory
RUN mkdir -p $MICRO_SERVICE
RUN mkdir -p $MICRO_SERVICE/static
# where the code lives
WORKDIR $MICRO_SERVICE
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev gcc python3-dev musl-dev \
&& apk del build-deps \
&& apk --no-cache add musl-dev linux-headers g++
# install dependencies
RUN pip install --upgrade pip
# copy project
COPY . $MICRO_SERVICE
RUN pip install -r requirements.txt
COPY ./entrypoint.sh $MICRO_SERVICE
CMD ["/bin/bash", "/home/app/microservice/entrypoint.sh"]
and the following docker-compose.yml file:
version: "3.7"
services:
nginx:
build: ./nginx
ports:
- 1300:80
volumes:
- static_volume:/home/app/microservice/static
depends_on:
- web
restart: "on-failure"
web:
build: . #build the image for the web service from the dockerfile in parent directory
command: sh -c "python manage.py collectstatic --no-input &&
gunicorn djsr.wsgi:application --bind 0.0.0.0:${APP_PORT}"
volumes:
- .:/microservice:rw # map data and files from parent directory in host to microservice directory in docker containe
- static_volume:/home/app/microservice/static
env_file:
- .env
image: wevbapp
expose:
- ${APP_PORT}
restart: "on-failure"
volumes:
static_volume:
In the docker-compose.yml file I need to reference the following files, which live in other directories rather than in .devcontainer:
manage.py
requirements.txt
.env
This is my folder structure:
An easy solution would be to move the Dockerfile, docker-compose.yml, and .env into the Django directory djsr, but I am trying to keep the files structured like this. How can I reference those files in docker-compose.yml?

It is fairly common to put the couple of Docker-related files in the project root directory, and that can potentially save you some trouble; I'd recommend that as a first choice.
If you do want to keep it all in a subdirectory, it's possible, though. When you run docker-compose, you can specify the location of the configuration file. It will consider all paths as relative to this file's directory.
# Either:
docker-compose -f .devcontainer/docker-compose.yml up
# Or:
cd .devcontainer && docker-compose up
When you go to build the image, the build reads in a context directory, and COPY statements are always interpreted relative to this directory. For your setup, you need the context directory to be the top of your source tree, and then specify an alternate Dockerfile in a subdirectory.
services:
  web:
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile
For the most part the Dockerfile itself is fine, but since the entrypoint script is in a subdirectory, the COPY command needs to reflect that too. Since you're copying the entire source directory, you could also rearrange things inside the image to be the layout you want.
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . ./
# Either:
COPY .devcontainer/entrypoint.sh ./
# Or:
RUN mv .devcontainer/entrypoint.sh .
# Or:
CMD ["./.devcontainer/entrypoint.sh"]
I don't recommend the volume structure you have, but if you want to keep it, you also need to change the source path of the bind mount to be the parent directory. (Note particularly, in the previous Dockerfile fragment, a couple of the options involve moving files inside the image, and a bind mount will hide that change.)
services:
  web:
    volumes:
      # Ignore the application built into the container, and use
      # whatever's checked out on the host system instead.
      - ..:/home/app/microservice
      # Further ignore the static assets on the host system and
      # use the content in a named volume instead.
      - static_volume:/home/app/microservice/static

Why don't you mount these files the same way you did with the folders? From the Docker documentation on bind mounts:
The source of the mount. For bind mounts, this is the path to the file or directory on the Docker daemon host. May be specified as source or src.
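For instance, a minimal sketch of mounting those three files individually, assuming docker-compose.yml stays in .devcontainer (so host paths are relative to that directory) and the container paths follow the Dockerfile's $MICRO_SERVICE layout:
services:
  web:
    volumes:
      # bind-mount single files from the parent directory into the container
      - ../manage.py:/home/app/microservice/manage.py
      - ../requirements.txt:/home/app/microservice/requirements.txt
    env_file:
      # env_file paths are resolved relative to the compose file's directory
      - ../.env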

Related

Why is my docker image not running when using docker run (image), but I can run containers generated by docker-compose up?

My docker-compose setup creates 3 containers: django, celery and rabbitmq. When I run docker-compose build and docker-compose up, the containers run successfully.
However, I am having issues with deploying the image. The image generated has an image ID of 24d7638e2aff. For whatever reason, if I just run the command below, nothing happens and the container exits with status 0. Both the django and celery applications have the same image ID.
docker run 24d7638e2aff
This is not good, as I am unable to deploy this image on Kubernetes. My only thought is that the Dockerfile has been configured wrongly, but I cannot figure out the cause.
docker-compose.yaml
version: "3.9"
services:
django:
container_name: testapp_django
build:
context: .
args:
build_env: production
ports:
- "8000:8000"
command: >
sh -c "python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/code
links:
- rabbitmq
- celery
rabbitmq:
container_name: testapp_rabbitmq
restart: always
image: rabbitmq:3.10-management
ports:
- "5672:5672" # specifies port of queue
- "15672:15672" # specifies port of management plugin
celery:
container_name: testapp_celery
restart: always
build:
context: .
args:
build_env: production
command: celery -A testapp worker -l INFO -c 4
depends_on:
- rabbitmq
Dockerfile
ARG PYTHON_VERSION=3.9-slim-buster
# define an alias for the specific python version used in this file.
FROM python:${PYTHON_VERSION} as python
# Python build stage
FROM python as python-build-stage
ARG build_env
# Install apt packages
RUN apt-get update && apt-get install --no-install-recommends -y \
# dependencies for building Python packages
build-essential \
# psycopg2 dependencies
libpq-dev
# Requirements are installed here to ensure they will be cached.
COPY ./requirements .
# Create Python Dependency and Sub-Dependency Wheels.
RUN pip wheel --wheel-dir /usr/src/app/wheels \
-r ${build_env}.txt
# Python 'run' stage
FROM python as python-run-stage
ARG build_env
ARG APP_HOME=/app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV BUILD_ENV ${build_env}
WORKDIR ${APP_HOME}
RUN addgroup --system appuser \
&& adduser --system --ingroup appuser appuser
# Install required system dependencies
RUN apt-get update && apt-get install --no-install-recommends -y \
# psycopg2 dependencies
libpq-dev \
# Translations dependencies
gettext \
# git for GitPython commands
git-all \
# cleaning up unused files
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
&& rm -rf /var/lib/apt/lists/*
# Absolute destination paths in COPY ignore the WORKDIR instruction; relative destinations are resolved against WORKDIR
# copy python dependency wheels from python-build-stage
COPY --from=python-build-stage /usr/src/app/wheels /wheels/
# use wheels to install python dependencies
RUN pip install --no-cache-dir --no-index --find-links=/wheels/ /wheels/* \
&& rm -rf /wheels/
COPY --chown=appuser:appuser ./docker_scripts/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
# copy application code to WORKDIR
COPY --chown=appuser:appuser . ${APP_HOME}
# make appuser owner of the WORKDIR directory as well.
RUN chown appuser:appuser ${APP_HOME}
USER appuser
EXPOSE 8000
ENTRYPOINT ["/entrypoint"]
entrypoint
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
exec "$#"
How do I build images of these containers so that I can deploy them to k8s?
The Compose command: overrides the Dockerfile CMD. docker run doesn't look at the docker-compose.yml file at all, and docker run with no particular command runs the image's CMD. You haven't declared anything for that, which is why the container exits immediately.
Leave the entrypoint script unchanged (or even delete it entirely, since it doesn't really do anything). Add a CMD line to the Dockerfile:
CMD python manage.py migrate && python manage.py runserver 0.0.0.0:8000
Now plain docker run as you've shown it will attempt to start the Django server. For the Celery container, you can still pass a command override:
docker run -d --net ... your-image \
celery -A testapp worker -l INFO -c 4
If you do deploy to Kubernetes, and you keep the entrypoint script, then you need to use args: in your pod spec to provide the alternate command, not command:.
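A sketch of what that could look like in a pod spec fragment (the image reference is a placeholder):
containers:
  - name: celery
    image: registry.example.com/testapp:latest # placeholder image reference
    # args: overrides the image CMD and is passed to the ENTRYPOINT script as "$@";
    # command: would replace the ENTRYPOINT itself and bypass the script
    args: ["celery", "-A", "testapp", "worker", "-l", "INFO", "-c", "4"]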
I think that is because the commands to run the Django server are in the docker-compose.yml.
You should move these commands into the entrypoint:
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
python manage.py migrate && python manage.py runserver 0.0.0.0:8000
exec "$@"
Pay attention that the command python manage.py runserver 0.0.0.0:8000 will start the application with Django's development server, which cannot be used for production purposes.
You should look at gunicorn or similar.
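For example, a minimal sketch, assuming the project's WSGI module is testapp.wsgi and gunicorn has been added to the requirements:
# run a production WSGI server instead of the development server
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "testapp.wsgi:application"]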

Building on Docker Hub ignores context path when copying files

I am trying to upload a Django app to Docker Hub. On the local machine (Ubuntu 18.04) everything works fine, but on Docker Hub the requirements.txt file cannot be found.
Local machine:
sudo docker-compose build --no-cache
Result (it's okay):
Step 5/7 : COPY . .
---> 5542d55caeae
Step 6/7 : RUN file="$(ls -1 )" && echo $file
---> Running in b85a55aa2640
Dockerfile db.sqlite3 hello_django manage.py requirements.txt venv
Removing intermediate container b85a55aa2640
---> 532e91546d41
Step 7/7 : RUN pip install -r requirements.txt
---> Running in e940ebf96023
Collecting Django==3.2.2....
But, Docker Hub:
Step 5/7 : COPY . .
---> 852fa937cb0a
Step 6/7 : RUN file="$(ls -1 )" && echo $file
---> Running in 281d9580d608
README.md app config docker-compose.yml
Removing intermediate container 281d9580d608
---> 99eaafb1a55d
Step 7/7 : RUN pip install -r requirements.txt
---> Running in d0e180d83772
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
Removing intermediate container d0e180d83772
The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1
app/Dockerfile
FROM python:3.8.3-alpine
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
WORKDIR /code
COPY . .
RUN file="$(ls -1 )" && echo $file
RUN pip install -r requirements.txt
docker-compose.yml
version: '3'
services:
  web:
    build:
      context: app
      dockerfile: Dockerfile
    volumes:
      - ./app/:/code/
    ports:
      - "8000:8000"
    env_file:
      - ./config/.env.dev
    command: python manage.py runserver 0.0.0.0:8000
Project Structure:
UPDATE:
Docker Hub is building from GitHub.
The requirements.txt file is in the GitHub repository (in the app folder), but for some reason during the build Docker Hub copies the files from the project root folder and not the contents of the app folder.
GitHub:
https://github.com/sigalglebru/django-on-docker
The problem is that you need to tell Docker Hub where to find your build context.
When you run docker-compose build locally, docker-compose reads your docker-compose.yml file and knows to build inside the app directory, because you've explicitly set the build context:
build:
  context: app
  dockerfile: Dockerfile
When you build on Docker Hub, by default it will assume the build context is the top level of your repository. If you set the path to your Dockerfile to, e.g., app/Dockerfile, this is equivalent to running:
docker build -f app/Dockerfile .
If you try that, you'll see it fail the same way. Rather than setting the path to the Dockerfile, you need to set the path to the build context to the app directory in the automated build settings (look at the "Build Context" column). When configured correctly, your repository builds on Docker Hub without errors.
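The command-line equivalent of the corrected settings would be:
# use the app directory as the build context; Docker then
# finds app/Dockerfile there by default
docker build app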
Thank you, I found a solution:
I just copy the files from ./app explicitly and changed the context a little, but I still don't understand why it worked fine on the local machine.
Dockerfile:
FROM python:3.8.3-alpine
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
WORKDIR /code
COPY ./app .
RUN pip install -r requirements.txt
docker-compose.yml
version: "3.6"
services:
python:
restart: always
build:
context: .
dockerfile: docker/Dockerfile
expose:
- 8000
ports:
- 8000:8000
command: "python manage.py runserver 0.0.0.0:8000"

Docker ERROR: Couldn't find env file: /home/sam/code/docker/.env.dev

I'm learning how to use Docker with Django. The first step is to set up the Dockerfile, and here's the content of the file:
FROM python:3.8.0-alpine
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# copy project
COPY . /usr/src/app/
And here is the content of the docker-compose.yml file:
version: '3.7'
services:
  web:
    build: ./project
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./project/:/usr/src/app/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
Now, these two files are in the folder docker, which also contains my Django project folder, called project. Keeping it simple :)
When I run docker-compose build I get back the error:
ERROR: Couldn't find env file: /home/sam/code/docker/.env.dev
You are specifying that the .env file can be found in the directory where your docker-compose.yml file is located:
env_file:
  - ./.env.dev
Make sure that your .env file is actually available there. If you don't need any environment variables to be set, just remove those lines from the docker-compose.yml file.
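If the file actually lives elsewhere, point env_file at that location instead; a sketch, assuming the file sits inside the project folder:
env_file:
  # resolved relative to the directory containing docker-compose.yml
  - ./project/.env.dev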

Setting up docker for django, vue.js, rabbitmq

I'm trying to add Docker support to my project.
My structure looks like this:
front/Dockerfile
back/Dockerfile
docker-compose.yml
My Dockerfile for django:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y python-software-properties software-properties-common
RUN add-apt-repository ppa:ubuntugis/ubuntugis-unstable
RUN apt-get update && apt-get install -y python3 python3-pip binutils libproj-dev gdal-bin python3-gdal
ENV APPDIR=/code
WORKDIR $APPDIR
ADD ./back/requirements.txt /tmp/requirements.txt
RUN ./back/pip3 install -r /tmp/requirements.txt
RUN ./back/rm -f /tmp/requirements.txt
CMD $APPDIR/run-django.sh
My Dockerfile for Vue.js:
FROM node:9.11.1-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
and my docker-compose.yml:
version: '2'
services:
rabbitmq:
image: rabbitmq
api:
build:
context: ./back
environment:
- DJANGO_SECRET_KEY=${SECRET_KEY}
volumes:
- ./back:/app
rabbit1:
image: "rabbitmq:3-management"
hostname: "rabbit1"
ports:
- "15672:15672"
- "5672:5672"
labels:
NAME: "rabbitmq1"
volumes:
- "./enabled_plugins:/etc/rabbitmq/enabled_plugins"
django:
extends:
service: api
command:
./back/manage.py runserver
./back/uwsgi --http :8081 --gevent 100 --module websocket --gevent-monkey-patch --master --processes 4
ports:
- "8000:8000"
volumes:
- ./backend:/app
vue:
build:
context: ./front
environment:
- HOST=localhost
- PORT=8080
command:
bash -c "npm install && npm run dev"
volumes:
- ./front:/app
ports:
- "8080:8080"
depends_on:
- django
Running docker-compose fails with:
ERROR: for chatapp2_django_1 Cannot start service django: b'OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \\"./back/manage.py\\": stat ./back/manage.py: no such file or directory": unknown'
ERROR: for rabbit1 Cannot start service rabbit1: b'driver failed programming external connectivity on endpoint chatapp2_rabbit1_1 (05ff4e8c0bc7f24216f2fc960284ab8471b47a48351731df3697c6d041bbbe2f): Error starting userland proxy: listen tcp 0.0.0.0:15672: bind: address already in use'
ERROR: for django Cannot start service django: b'OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \\"./back/manage.py\\": stat ./back/manage.py: no such file or directory": unknown'
ERROR: Encountered errors while bringing up the project.
I don't understand which directory it's trying to find. Have I set this all up right for my project structure?
For the django part you're missing a copy of your code for the Django app, which I'm assuming is in back. You'll need to add ADD /back /code. You probably also want to use the python-alpine image instead of Ubuntu, as it will significantly reduce build times and container size.
This is what I would do:
# change this to whatever Python version your app is targeting (mine is typically 3.6)
FROM python:3.6-alpine
ADD /back /code
# whatever other dependencies you'll need; I run with the psycopg2-binary build so I need these
# (the nice part of the python-alpine image is that you don't need to install any of those
# Python-specific packages you were installing before)
RUN apk add --virtual .build-deps gcc musl-dev postgresql-dev
RUN pip install -r /code/requirements.txt
# expose whatever port you need for your Django app (default is 8000; we use a non-default port but you can do whatever you need)
EXPOSE 8000
WORKDIR /code
# don't need /code here since WORKDIR is effectively a change directory
RUN chmod +x run-django.sh
RUN apk add --no-cache bash postgresql-libs
CMD ["./run-django.sh"]
We have a similar run-django.sh script in which we call python manage.py makemigrations and python manage.py migrate. I'm assuming yours is similar.
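For reference, a minimal sketch of what such a run-django.sh might look like (the exact commands are an assumption):
#!/bin/bash
set -e
# apply database migrations, then start the server
python manage.py makemigrations
python manage.py migrate
python manage.py runserver 0.0.0.0:8000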
Long story short, you weren't copying the code from back into /code.
Also, in your docker-compose.yml the django service doesn't have a build context the way the vue service does.
As for your rabbitmq container failure, you need to stop the service associated with RabbitMQ on your host machine. I get this error when I try to expose a PostgreSQL or Redis container and have to run /etc/init.d/postgresql stop or /etc/init.d/redis stop to stop the service running on my machine, in order to avoid collisions on that service's default port.
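Alternatively, instead of stopping the host service, you could remap the host side of the published ports so they don't collide; a sketch:
rabbit1:
  image: "rabbitmq:3-management"
  ports:
    # host:container - unused host ports avoid "address already in use"
    - "25672:5672"
    - "35672:15672"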

Dockerizing an already existing app and database

I am trying to Dockerize an app that is already created (database included).
I've got the proper files in place:
docker-compose.yml
dockerfile
requirements.txt
I'm having trouble with the database part: how do I configure the docker-compose.yml file to point to the database that is already created?
Here's why I ask: my understanding of Docker is that you create your base app and then "Dockerize" it, i.e. package it into an image that you can distribute. I'm a beginner at this, so that may be why I'm not understanding.
Here is my current docker-compose.yml:
version: '2'
services:
  db:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=qwerty
      - POSTGRES_DB=ar_db
    ports:
      - "5433:5433"
  web:
    build: .
    command: python2.7 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
and the Dockerfile:
############################################################
# Dockerfile to run a Django-based web application
# Based on a Debian image
############################################################
# Set the base image to Debian
FROM debian:8.8
# Set the file maintainer (your name - the file's author)
MAINTAINER HeatherJ
# Update the default application repository sources list
RUN apt-get update && apt-get -y upgrade
RUN apt-get install -y python python-pip libpq-dev python-dev
#install git
RUN apt-get update && apt-get install -y --no-install-recommends \
    git && rm -rf /var/lib/apt/lists/*
# Set env variables used in this Dockerfile (add a unique prefix, such as DOCKYARD)
# Local directory with project source
ENV DOCKYARD_SRC=EPIC_AR
# Directory in container for all project files
ENV DOCKYARD_SRVHOME=/EPIC_AR
# Directory in container for project source files
ENV DOCKYARD_SRVPROJ=/home/epic/EPIC_AR/EPIC_AR
# Create application subdirectories
WORKDIR $DOCKYARD_SRVHOME
RUN mkdir media static logs
VOLUME ["$DOCKYARD_SRVHOME/media/", "$DOCKYARD_SRVHOME/logs/"]
# Copy application source code to SRCDIR
COPY $DOCKYARD_SRC $DOCKYARD_SRVPROJ
# Install Python dependencies
RUN pip install -r $DOCKYARD_SRVPROJ/requirements.txt
# Copy entrypoint script into the image
WORKDIR $DOCKYARD_SRVPROJ
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]