I am trying to Dockerize an app that is already created (database included).
I've got the proper files in place:
docker-compose.yml
dockerfile
requirements.txt
I'm having trouble with the database part:
How do I configure the docker-compose.yml file to point to the database that is already created?
Here's why I ask: my understanding of Docker is that you create your base app, then "Dockerize" it, i.e. package it into an image that you can distribute. I'm a beginner at this, so that may be why I'm not understanding.
Here is my current docker-compose.yml:
version: '2'
services:
  db:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=qwerty
      - POSTGRES_DB=ar_db
    ports:
      - "5433:5432"
  web:
    build: .
    command: python2.7 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
and dockerfile:
############################################################
# Dockerfile to run a Django-based web application
# Based on a Debian image
############################################################
# Set the base image to Debian
FROM debian:8.8
# Set the file maintainer (your name - the file's author)
MAINTAINER HeatherJ
# Update the default application repository sources list
RUN apt-get update && apt-get -y upgrade
RUN apt-get install -y python python-pip libpq-dev python-dev
#install git
RUN apt-get update && apt-get install -y --no-install-recommends \
    git \
 && rm -rf /var/lib/apt/lists/*
# Set env variables used in this Dockerfile (add a unique prefix, such as DOCKYARD)
# Local directory with project source
ENV DOCKYARD_SRC=EPIC_AR
# Directory in container for all project files
ENV DOCKYARD_SRVHOME=/EPIC_AR
# Directory in container for project source files
ENV DOCKYARD_SRVPROJ=/home/epic/EPIC_AR/EPIC_AR
# Create application subdirectories
WORKDIR $DOCKYARD_SRVHOME
RUN mkdir media static logs
VOLUME ["$DOCKYARD_SRVHOME/media/", "$DOCKYARD_SRVHOME/logs/"]
# Copy application source code to SRCDIR
COPY $DOCKYARD_SRC $DOCKYARD_SRVPROJ
# Install Python dependencies
RUN pip install -r $DOCKYARD_SRVPROJ/requirements.txt
# Copy entrypoint script into the image
WORKDIR $DOCKYARD_SRVPROJ
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
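For what it's worth, the usual compose answer to "point at an existing database" is to keep the data in a volume on the db service and have web reach the database by its service name. A minimal sketch against the files above (the DATABASE_* variable names are illustrative; settings.py has to read them explicitly):
services:
  db:
    image: postgres:9.6
    volumes:
      # keep/reuse the database files across container rebuilds;
      # a bind mount to an existing data directory also works
      - pgdata:/var/lib/postgresql/data
  web:
    environment:
      # inside the compose network the database is reachable at the
      # service name "db" on container port 5432 (not the host-mapped port)
      - DATABASE_HOST=db
      - DATABASE_PORT=5432
volumes:
  pgdata: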
Related
I'm new to Docker and EB (Elastic Beanstalk) deployment, and I want to deploy Django with Docker on EB.
Here's what I've done so far:
created a Dockerfile
# Pull base image
FROM python:3.9.16-slim-buster
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update &&\
apt-get install -y binutils libproj-dev gdal-bin python-gdal python3-gdal libpq-dev python-dev libcurl4-openssl-dev libssl-dev gcc
# install dependencies
COPY . /code
WORKDIR /code/
RUN pip install -r requirements.txt
# set work directory
WORKDIR /code/app
then in docker-compose.yml
version: '3.7'
services:
  web:
    build: .
    command: python /code/hike/manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
    volumes:
      - .:/code
It runs locally, but on deployment it fails, and when I get to the logs it says:
pg_config is required to build psycopg2 from source.
It's as if it's not using the Dockerfile. I read somewhere that I should set up Dockerrun.aws.json, but I have no idea what to write in it!
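If you do need a Dockerrun.aws.json (e.g. to deploy a prebuilt image rather than let EB build the Dockerfile at the root of the bundle), the single-container v1 format looks roughly like this; the image name and port here are placeholders:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "your-account/your-image:latest"
  },
  "Ports": [
    {
      "ContainerPort": "8000"
    }
  ]
}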
I am running on Windows 10 and trying to set up a project via docker-compose and Django.
If you are interested, it will take you 3 minutes to follow this tutorial and you will get the same error as me: docs.docker.com/samples/django
When I run
docker-compose run app django-admin startproject app_settings .
I get the following error
CommandError: /app/manage.py already exists. Overlaying a project into an existing directory won't replace conflicting files.
Or when I do this
docker-compose run app python manage.py startapp core
I get the following error
CommandError: 'core' conflicts with the name of an existing Python module and cannot be used as an app name. Please try another name.
It seems like the command is maybe executed twice? I'm not sure why.
Dockerfile
FROM python:3.9-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y \
    libpq-dev \
    gcc \
 && apt-get clean
COPY ./requirements.txt .
RUN pip install -r requirements.txt
RUN mkdir /app
WORKDIR /app
COPY ./app /app
docker-compose.yml
version: "3.9"
compute:
container_name: compute
build: ./backend
# command: python manage.py runserver 0.0.0.0:8000
# volumes:
# - ./backend/app:/app
ports:
- "8000:8000"
environment:
- POSTGRES_NAME=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
depends_on:
- db
Try running your image without any arguments, since you are already using the command keyword in your docker-compose.yml; or just remove that command line from the file.
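For example, with the compute service above (a sketch):
# with the command: line removed from docker-compose.yml, this runs the
# image's default command:
docker-compose run --rm compute
# one-off management commands are then passed after the service name:
docker-compose run --rm compute python manage.py migrate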
Having this Dockerfile:
FROM python:3.8.3-alpine
ENV MICRO_SERVICE=/home/app/microservice
# RUN addgroup -S $APP_USER && adduser -S $APP_USER -G $APP_USER
# set work directory
RUN mkdir -p $MICRO_SERVICE
RUN mkdir -p $MICRO_SERVICE/static
# where the code lives
WORKDIR $MICRO_SERVICE
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev gcc python3-dev musl-dev \
&& apk del build-deps \
&& apk --no-cache add musl-dev linux-headers g++
# install dependencies
RUN pip install --upgrade pip
# copy project
COPY . $MICRO_SERVICE
RUN pip install -r requirements.txt
COPY ./entrypoint.sh $MICRO_SERVICE
CMD ["/bin/bash", "/home/app/microservice/entrypoint.sh"]
and the following docker-compose.yml file:
version: "3.7"
services:
nginx:
build: ./nginx
ports:
- 1300:80
volumes:
- static_volume:/home/app/microservice/static
depends_on:
- web
restart: "on-failure"
web:
build: . #build the image for the web service from the dockerfile in parent directory
command: sh -c "python manage.py collectstatic --no-input &&
gunicorn djsr.wsgi:application --bind 0.0.0.0:${APP_PORT}"
volumes:
- .:/microservice:rw # map data and files from parent directory in host to microservice directory in docker containe
- static_volume:/home/app/microservice/static
env_file:
- .env
image: wevbapp
expose:
- ${APP_PORT}
restart: "on-failure"
volumes:
static_volume:
I need to reference the following files in the docker-compose.yml file, which live in other directories rather than in .devcontainer:
manage.py
requirements.txt
.env
This is my folder structure (screenshot omitted): the Docker files live under .devcontainer, and the Django project lives under djsr.
An easy solution would be to move the Dockerfile, docker-compose.yml, and .env into the Django directory djsr, but I am trying to keep the files structured like this. How can I reference those files in docker-compose.yml?
It is fairly common to put the couple of Docker-related files in the project root directory, and that can potentially save you some trouble; I'd recommend that as a first choice.
If you do want to keep it all in a subdirectory, it's possible, though. When you run docker-compose, you can specify the location of the configuration file. It will consider all paths as relative to this file's directory.
# Either:
docker-compose -f .devcontainer/docker-compose.yml up
# Or:
cd .devcontainer && docker-compose up
When you go to build the image, the build reads in a context directory, and COPY statements are always interpreted relative to this directory. For your setup, you need the context directory to be the top of your source tree, and then specify an alternate Dockerfile in a subdirectory.
services:
  web:
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile
For the most part the Dockerfile itself is fine, but where the entrypoint script is in a subdirectory, the COPY command needs to reflect that too. Since you're copying the entire source directory, you could also rearrange things inside the image to be the layout you want.
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . ./
# Either:
COPY .devcontainer/entrypoint.sh ./
# Or:
RUN mv .devcontainer/entrypoint.sh .
# Or:
CMD ["./.devcontainer/entrypoint.sh"]
I don't recommend the volume structure you have, but if you want to keep it, you also need to change the source path of the bind mount to be the parent directory. (Note particularly, in the previous Dockerfile fragment, a couple of the options involve moving files inside the image, and a bind mount will hide that change.)
services:
  web:
    volumes:
      # Ignore the application built into the container, and use
      # whatever's checked out on the host system instead.
      - ..:/home/app/microservice
      # Further ignore the static assets on the host system and
      # use the content in a named volume instead.
      - static_volume:/home/app/microservice/static
Why don't you mount these files the same way you did with the folders? From the docs:
The source of the mount. For bind mounts, this is the path to the file or directory on the
Docker daemon host. May be specified as source or src.
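In other words, individual files can be bind-mounted just like directories; a sketch relative to the compose file in .devcontainer (the host paths are assumptions, adjust to the real layout):
services:
  web:
    env_file:
      - ../.env   # paths here are relative to this compose file
    volumes:
      # a single file mounted over the container path
      - ../djsr/manage.py:/home/app/microservice/manage.py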
From the top-level maps directory, I'm able to install the gunicorn package ...
(venv) localhost:maps davea$ pip3 install gunicorn
Collecting gunicorn
Downloading gunicorn-20.0.4-py2.py3-none-any.whl (77 kB)
|████████████████████████████████| 77 kB 1.2 MB/s
Requirement already satisfied: setuptools>=3.0 in ./web/venv/lib/python3.7/site-packages (from gunicorn) (45.1.0)
Installing collected packages: gunicorn
Successfully installed gunicorn-20.0.4
Below is my docker-compose.yml file
version: '3'
services:
  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000
  apache:
    restart: always
    build: ./apache/
    ports:
      - "80:80"
    #volumes:
    #  - web-static:/www/static
    links:
      - web:web
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3306"
    volumes:
      - my-db:/var/lib/mysql
volumes:
  my-db:
And then I have web/Dockerfile as follows ...
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc \
&& rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
RUN mkdir -p /app/
WORKDIR /app/
RUN pip3 freeze > requirements.txt
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . /app/
However, when I build/start my docker instance, I'm told it can't find my "gunicorn" command ...
(venv) localhost:maps davea$ docker-compose up
Starting maps_web_1 ...
Starting maps_web_1 ... error
ERROR: for maps_web_1 Cannot start service web: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/usr/local/bin/gunicorn\": stat /usr/local/bin/gunicorn: no such file or directory": unknown
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/usr/local/bin/gunicorn\": stat /usr/local/bin/gunicorn: no such file or directory": unknown
ERROR: Encountered errors while bringing up the project.
Your Docker container is a totally isolated environment. Nothing you install on the host is visible inside the container; nothing that happens inside the container is accessible on the host. There are ways to bridge this boundary (with docker run -v bind mounts), but that's not possible during the docker build phase.
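A sketch of where that boundary can and cannot be crossed (the image name is a placeholder):
# at *run* time, a bind mount bridges host and container:
docker run -v "$PWD":/app myimage
# at *build* time there is no equivalent; the image only sees what
# COPY/ADD bring in from the build context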
In this example your local source tree has a requirements.txt file that lists out the packages that need to be installed when the container is created. (The RUN pip3 freeze line has no effect: the COPY on the line after it overwrites its output with the requirements.txt from your local source tree.) It's enough to add the dependency to the requirements.txt file:
gunicorn
In your development environment, you can re-run pip install -r requirements.txt to update the packages installed in your virtual environment. When you re-run docker build, having this line in the requirements.txt file will cause gunicorn to be installed when the image is built.
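That edit-and-rebuild loop looks something like this (the image tag is just an example):
pip install -r requirements.txt   # update the host virtualenv
docker build -t maps-web ./web    # rebuild the image; pip runs inside the build
docker-compose up --build         # or let compose rebuild and restart everything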
You can clean up the Dockerfile a little bit. The resulting Dockerfile would be a pretty typical one for Python packages with C dependencies:
# Start from a totally clean environment with Python installed,
# but no non-system libraries and nothing from your host system.
FROM python:3.7-slim
# Install C dependencies.
# It's important to do apt-get update and install in the
# same command. It's more efficient to only do it once.
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
gcc \
libmariadb-dev \
libmariadb-dev-compat
# Update pip
RUN python -m pip install --upgrade pip
# Create the application directory and point there
# (WORKDIR will implicitly create it)
WORKDIR /app/
# Install all of the Python dependencies. These are
# listed, one to a line, in the requirements.txt file,
# possibly with version constraints. Having this as
# a separate block allows Docker to not repeat it if
# only your application code changes.
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
# Copy in the rest of the application.
COPY . .
# Specify what port your application uses, and the
# default command to use when launching the container.
EXPOSE 8000
CMD /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000
I'm trying to add Docker support to my project.
My structure looks like this:
front/Dockerfile
back/Dockerfile
docker-compose.yml
My Dockerfile for django:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y python-software-properties software-properties-common
RUN add-apt-repository ppa:ubuntugis/ubuntugis-unstable
RUN apt-get update && apt-get install -y python3 python3-pip binutils libproj-dev gdal-bin python3-gdal
ENV APPDIR=/code
WORKDIR $APPDIR
ADD ./back/requirements.txt /tmp/requirements.txt
RUN ./back/pip3 install -r /tmp/requirements.txt
RUN ./back/rm -f /tmp/requirements.txt
CMD $APPDIR/run-django.sh
My Dockerfile for Vue.js:
FROM node:9.11.1-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
and my docker-compose.yml:
version: '2'
services:
  rabbitmq:
    image: rabbitmq
  api:
    build:
      context: ./back
    environment:
      - DJANGO_SECRET_KEY=${SECRET_KEY}
    volumes:
      - ./back:/app
  rabbit1:
    image: "rabbitmq:3-management"
    hostname: "rabbit1"
    ports:
      - "15672:15672"
      - "5672:5672"
    labels:
      NAME: "rabbitmq1"
    volumes:
      - "./enabled_plugins:/etc/rabbitmq/enabled_plugins"
  django:
    extends:
      service: api
    command:
      ./back/manage.py runserver
      ./back/uwsgi --http :8081 --gevent 100 --module websocket --gevent-monkey-patch --master --processes 4
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/app
  vue:
    build:
      context: ./front
    environment:
      - HOST=localhost
      - PORT=8080
    command:
      bash -c "npm install && npm run dev"
    volumes:
      - ./front:/app
    ports:
      - "8080:8080"
    depends_on:
      - django
Running docker-compose fails with:
ERROR: for chatapp2_django_1 Cannot start service django: b'OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \\"./back/manage.py\\": stat ./back/manage.py: no such file or directory": unknown'
ERROR: for rabbit1 Cannot start service rabbit1: b'driver failed programming external connectivity on endpoint chatapp2_rabbit1_1 (05ff4e8c0bc7f24216f2fc960284ab8471b47a48351731df3697c6d041bbbe2f): Error starting userland proxy: listen tcp 0.0.0.0:15672: bind: address already in use'
ERROR: for django Cannot start service django: b'OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \\"./back/manage.py\\": stat ./back/manage.py: no such file or directory": unknown'
ERROR: Encountered errors while bringing up the project.
I don't understand what this 'unknown' directory is that it's trying to find. Have I set this all up right for my project structure?
For the django part, you're missing a copy of your code for the django app, which I'm assuming is in back. You'll need to add ADD /back /code. You probably also want to run the python alpine docker build instead of the ubuntu one, as it will significantly reduce build times and container size.
This is what I would do:
# change this to whatever python version your app is targeting (mine is typically 3.6)
FROM python:3.6-alpine
ADD /back /code
# whatever other dependencies you'll need; I run with the psycopg2-binary build so I need these (the nice part of the python-alpine image is you don't need to install any of those Python-specific packages you were installing before)
RUN apk add --virtual .build-deps gcc musl-dev postgresql-dev
RUN pip install -r /code/requirements.txt
# expose whatever port you need for your Django app (the default is 8000; we use a non-default port, but you can do whatever you need)
EXPOSE 8000
WORKDIR /code
RUN apk add --no-cache bash postgresql-libs
# no /code prefix needed here, since WORKDIR is effectively a change of directory
RUN chmod +x run-django.sh
CMD ["./run-django.sh"]
We have a similar run-django.sh script in which we call python manage.py makemigrations and python manage.py migrate. I'm assuming yours is similar.
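A minimal sketch of such a script (the runserver line is my assumption; swap in whatever your script actually starts):
#!/bin/bash
# apply database migrations before starting the app
python manage.py makemigrations
python manage.py migrate
# start the dev server (an assumption; a production setup would exec gunicorn/uwsgi here)
python manage.py runserver 0.0.0.0:8000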
Long story short, you weren't copying in the code from back into /code.
Also, in your docker-compose.yml you don't have a build context for the django service like you do for the vue service.
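Something like this (a sketch):
  django:
    build:
      context: ./back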
As for your rabbitmq container failure, you need to stop the host service that is already bound to that port. I get this error when I try to expose a postgresql or redis container while the host service is running, and I have to run /etc/init.d/postgresql stop or /etc/init.d/redis stop so there are no collisions on that service's default port.
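For example (service names vary by machine; these are illustrative):
# see what is already bound to the port rabbitmq wants
sudo lsof -i :5672
# then stop the conflicting host service, e.g.:
sudo /etc/init.d/rabbitmq-server stop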