Starting Django's runserver in a Docker container through docker-compose

I would like to have Django's runserver command running when I call docker-compose up.
Here is what I tried. First, my image starts from a Python image, customized following this Dockerfile:
# Dockerfile
FROM python:3.8
MAINTAINER geoffroy
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Ports exposure
EXPOSE 8000
VOLUME /data
# Install dependencies
RUN apt-get update && apt-get install -y \
vim \
git \
&& rm -rf /var/lib/apt/lists/*
# Set up Python dependencies
RUN git clone https://github.com/blondelg/auto.git
WORKDIR /auto
RUN cd /auto
RUN pip install --no-cache-dir -r requirements.txt
# Build the secret key generator
RUN echo "import random" > generate_key.py
RUN echo "print(''.join(random.SystemRandom().choice('abcdefghijklmnopqrstuvwxyz0123456789!#$^&*(-_=+)') for i in range(50)))" >> generate_key.py
# Setup environment configuration
RUN cp auto/config_sample.ini auto/config.ini
RUN sed -i "s/SECRET_KEY_PATTERN/$(python generate_key.py)/gI" auto/config.ini
RUN sed -i "s/django.db.backends.sqlite3/django.db.backends.mysql/gI" auto/config.ini
RUN sed -i 's|{BASE_DIR}/db.sqlite3|autodb|gI' auto/config.ini
RUN sed -i "s/USER_PATTERN/root/gI" auto/config.ini
RUN sed -i "s/PASSWORD_PATTERN/root/gI" auto/config.ini
RUN sed -i "s/HOST_PATTERN/database/gI" auto/config.ini
RUN sed -i "s/PORT_PATTERN/3306/gI" auto/config.ini
Then I have my docker-compose.yml, designed as follows:
# docker-compose.yml
version: "3"
services:
  database:
    image: mariadb
    container_name: database
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=autodb
    hostname: 'database'
  runserver:
    build: .
    command: python /auto/manage.py runserver 0.0.0.0:8000
    container_name: runserver
    ports:
      - "8000:8000"
    depends_on:
      - database
Then I run
docker-compose up --build -d
My two containers (runserver + database) are up, but going to http://127.0.0.1:8000 returns an error page, whereas I should see the Django start page.
Also, when I go into the container (docker exec -ti runserver bash) and run python manage.py runserver 0.0.0.0:8000, I can access the Django start page through http://127.0.0.1:8000.
What could be wrong here?

In your Dockerfile, there is this line: WORKDIR /auto. It means that you are already in the /auto folder. So, in your docker-compose file, you should say
python manage.py runserver 0.0.0.0:8000
instead of /auto/manage.py.

Following this documentation https://docs.docker.com/compose/django/, I fixed the way I was declaring the volume in my docker-compose.yml:
volumes:
  - .:/auto
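Putting both fixes together, the runserver service would look roughly like this (a sketch; the bind mount assumes the project source sits in the compose build context):
runserver:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  container_name: runserver
  volumes:
    - .:/auto
  ports:
    - "8000:8000"
  depends_on:
    - database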

Related

Why is my docker image not running when using docker run (image), but I can run containers generated by docker-compose up?

My docker-compose creates 3 containers: django, celery, and rabbitmq. When I run docker-compose build and docker-compose up, the containers run successfully.
However, I am having issues deploying the image. The image generated has an image ID of 24d7638e2aff. For whatever reason, if I just run the command below, nothing happens and the container exits with code 0. Both the django and celery applications have the same image ID.
docker run 24d7638e2aff
This is not good, as I am unable to deploy this image on Kubernetes. My only thought is that the Dockerfile has been configured wrongly, but I cannot figure out the cause.
docker-compose.yaml
version: "3.9"
services:
django:
container_name: testapp_django
build:
context: .
args:
build_env: production
ports:
- "8000:8000"
command: >
sh -c "python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/code
links:
- rabbitmq
- celery
rabbitmq:
container_name: testapp_rabbitmq
restart: always
image: rabbitmq:3.10-management
ports:
- "5672:5672" # specifies port of queue
- "15672:15672" # specifies port of management plugin
celery:
container_name: testapp_celery
restart: always
build:
context: .
args:
build_env: production
command: celery -A testapp worker -l INFO -c 4
depends_on:
- rabbitmq
Dockerfile
ARG PYTHON_VERSION=3.9-slim-buster
# define an alias for the specific Python version used in this file.
FROM python:${PYTHON_VERSION} as python
# Python build stage
FROM python as python-build-stage
ARG build_env
# Install apt packages
RUN apt-get update && apt-get install --no-install-recommends -y \
  # dependencies for building Python packages
  build-essential \
  # psycopg2 dependencies
  libpq-dev
# Requirements are installed here to ensure they will be cached.
COPY ./requirements .
# Create Python Dependency and Sub-Dependency Wheels.
RUN pip wheel --wheel-dir /usr/src/app/wheels \
  -r ${build_env}.txt
# Python 'run' stage
FROM python as python-run-stage
ARG build_env
ARG APP_HOME=/app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV BUILD_ENV ${build_env}
WORKDIR ${APP_HOME}
RUN addgroup --system appuser \
  && adduser --system --ingroup appuser appuser
# Install required system dependencies
RUN apt-get update && apt-get install --no-install-recommends -y \
  # psycopg2 dependencies
  libpq-dev \
  # Translations dependencies
  gettext \
  # git for GitPython commands
  git-all \
  # cleaning up unused files
  && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
  && rm -rf /var/lib/apt/lists/*
# All absolute dir copies ignore the workdir instruction. All relative dir copies are relative to the workdir instruction.
# copy python dependency wheels from python-build-stage
COPY --from=python-build-stage /usr/src/app/wheels /wheels/
# use wheels to install python dependencies
RUN pip install --no-cache-dir --no-index --find-links=/wheels/ /wheels/* \
  && rm -rf /wheels/
COPY --chown=appuser:appuser ./docker_scripts/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
# copy application code to WORKDIR
COPY --chown=appuser:appuser . ${APP_HOME}
# make appuser owner of the WORKDIR directory as well.
RUN chown appuser:appuser ${APP_HOME}
USER appuser
EXPOSE 8000
ENTRYPOINT ["/entrypoint"]
entrypoint
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
exec "$#"
How do I build images of these containers so that I can deploy them to k8s?
The Compose command: overrides the Dockerfile CMD. docker run doesn't look at the docker-compose.yml file at all, and docker run with no particular command runs the image's CMD. You haven't declared anything for that, which is why the container exits immediately.
Leave the entrypoint script unchanged (or even delete it entirely, since it doesn't really do anything). Add a CMD line to the Dockerfile:
CMD python manage.py migrate && python manage.py runserver 0.0.0.0:8000
Now plain docker run as you've shown it will attempt to start the Django server. For the Celery container, you can still pass a command override:
docker run -d --net ... your-image \
  celery -A testapp worker -l INFO -c 4
If you do deploy to Kubernetes, and you keep the entrypoint script, then you need to use args: in your pod spec to provide the alternate command, not command:.
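For reference, a minimal sketch of what that args: override might look like in a pod spec (the image reference is a placeholder, not from the original post):
containers:
  - name: celery
    image: registry.example.com/testapp:latest # hypothetical image reference
    args: ["celery", "-A", "testapp", "worker", "-l", "INFO", "-c", "4"]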
I think that is because the commands to run the Django server are in the docker-compose.yml.
You should move these commands inside the entrypoint:
set -o errexit
set -o pipefail
set -o nounset
python manage.py migrate && python manage.py runserver 0.0.0.0:8000
exec "$#"
Note that python manage.py runserver 0.0.0.0:8000 starts the application with a development server that cannot be used for production purposes.
You should look at gunicorn or similar.
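For example, a production-oriented CMD could look like the following sketch (testapp.wsgi:application is an assumption based on the project name in the compose file; adjust to your actual WSGI module):
# hypothetical gunicorn CMD; the module path is an assumption
CMD ["gunicorn", "testapp.wsgi:application", "--bind", "0.0.0.0:8000", "--workers", "3"]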

Docker "is not a valid port number or address:port pair."

I am new to Docker. I want to run my Django app on port 9000. After the command docker-compose up I am getting this message:
Creating motion_full_version_backend_1 ... done
Attaching to motion_full_version_backend_1
backend_1 | Unknown command: 'makemigrations\r'. Did you mean makemigrations?
backend_1 | Type 'manage.py help' for usage.
backend_1 | Unknown command: 'migrate\r'. Did you mean migrate?
backend_1 | Type 'manage.py help' for usage.
" is not a valid port number or address:port pair.*
My Dockerfile:
FROM continuumio/miniconda3:latest
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
RUN apt-get update && apt-get upgrade -y && apt-get install -qqy \
  wget \
  bzip2 \
  graphviz
RUN mkdir -p /backend
COPY ./backend/requirements.yml /backend/requirements.yml
# Create environment
RUN /opt/conda/bin/conda env create -f /backend/requirements.yml
# Add env path to environment
ENV PATH /opt/conda/envs/backend/bin:$PATH
# Activate interpreter
RUN echo "source activate backend" >~/.bashrc
# Create a scripts folder
RUN mkdir -p /scripts
COPY ./scripts /scripts
RUN chmod +x ./scripts*
COPY ./backend /backend
WORKDIR /backend
I am using a script to run it:
python manage.py makemigrations
python manage.py migrate
python manage.py runserver 0.0.0.0:9000
My docker-compose.yml looks like this:
version: '3'
services:
  backend:
    image: backend:latest
    restart: always
    env_file:
      - ./envs/dev.env
    command: 'sh /scripts/dev.sh'
    ports:
      - "9000:9000"
Any ideas what I am doing wrong?
Your script has Windows line endings in it, e.g. the \r you see in the output:
makemigrations\r
Since Docker runs this command in a Linux environment, you need to convert the script to Linux line endings (\n without any \r). Linux sees those carriage returns as part of the command string you are running or the port number you are passing. Adjust the line endings in your editor or with a tool like dos2unix.
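For example, a one-off fix from the project root could look like this (assuming the script lives at scripts/dev.sh, matching the COPY ./scripts line in the Dockerfile):
# strip Windows carriage returns in place
sed -i 's/\r$//' scripts/dev.sh
# or, if dos2unix is installed:
dos2unix scripts/dev.sh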

Built Docker image cannot reach PostgreSQL

I am using Django and PostgreSQL in separate containers for a project. When I run the containers using docker-compose up, my Django application can connect to PostgreSQL. But when I build the Docker image on its own with docker build . --file Dockerfile --tag rengine:$(date +%s), the image builds successfully, yet in entrypoint.sh it is unable to resolve the host db.
My docker-compose file is
version: '3'
services:
  db:
    restart: always
    image: "postgres:12.3-alpine"
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_PORT=5432
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    networks:
      - rengine_network
  web:
    restart: always
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
    networks:
      - rengine_network
networks:
  rengine_network:
volumes:
  postgres_data:
Entrypoint
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z db 5432; do
      sleep 0.1
    done
    echo "PostgreSQL started"
fi
python manage.py migrate
# Load default engine types
python manage.py loaddata fixtures/default_scan_engines.json --app scanEngine.EngineType
exec "$@"
and the Dockerfile:
# Base image
FROM python:3-alpine
# Labels and Credits
LABEL \
  name="reNgine" \
  author="Yogesh Ojha <yogesh.ojha11@gmail.com>" \
  description="reNgine is an automated pipeline of recon process, useful for information gathering during web application penetration testing."
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apk update \
  && apk add --virtual build-deps gcc python3-dev musl-dev \
  && apk add postgresql-dev \
  && apk add chromium \
  && apk add git \
  && pip install psycopg2 \
  && apk del build-deps
# Copy requirements
COPY ./requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt
# Download and install go 1.13
COPY --from=golang:1.13-alpine /usr/local/go/ /usr/local/go/
# Environment vars
ENV DATABASE="postgres"
ENV GOROOT="/usr/local/go"
ENV GOPATH="/root/go"
ENV PATH="${PATH}:${GOROOT}/bin"
ENV PATH="${PATH}:${GOPATH}/bin"
# Download Go packages
RUN go get -u github.com/tomnomnom/assetfinder github.com/hakluke/hakrawler github.com/haccer/subjack
RUN GO111MODULE=on go get -u -v github.com/projectdiscovery/httpx/cmd/httpx \
  github.com/projectdiscovery/naabu/cmd/naabu \
  github.com/projectdiscovery/subfinder/cmd/subfinder \
  github.com/lc/gau
# Make directory for app
RUN mkdir /app
WORKDIR /app
# Copy source code
COPY . /app/
# Collect Static
RUN python manage.py collectstatic --no-input --clear
RUN chmod +x /app/tools/get_subdomain.sh
RUN chmod +x /app/tools/get_dirs.sh
RUN chmod +x /app/tools/get_urls.sh
RUN chmod +x /app/tools/takeover.sh
# run entrypoint.sh
ENTRYPOINT ["/app/docker-entrypoint.sh"]
When I run the built image, it fails in the entrypoint.sh file saying there is no such host db.
Can somebody help me figure out where I am going wrong?
Based on a comment, the error appeared during
docker run rengine:1234
so the error is expected in this case: the hostname db will only resolve inside the docker-compose network. Within a docker-compose stack, one service can communicate with another using its service name, but with docker run each container runs in an isolated environment.
You have two options to resolve this issue.
The first is to run the DB container and use a legacy link to connect the application container to it:
docker run -it --name db mydb_image
# now link application container
docker run -it --link db:db rengine:1234
Now the container will be able to communicate with the host db.
The second option is to create a Docker network and then run both containers on the same network:
docker network create mynetwork
docker run -itd --network=mynetwork --name db mydb_image
docker run -itd --network=mynetwork rengine:1234
But since you are already using docker-compose, it is better to do your testing with docker-compose.
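For example, using the service names from the compose file above:
# rebuild the web image and start it (plus its db dependency) on the compose network
docker-compose build web
docker-compose up web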

How to attach graph-tool to Django using Docker

I need to use some graph-tool calculations in my Django project. So I started with docker pull tiagopeixoto/graph-tool and then added it to my docker-compose file:
version: '3'
services:
  db:
    image: postgres
  graph-tool:
    image: dcagatay/graph-tool
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
      - graph-tool
When I run docker-compose up, I get this line:
project_graph-tool_1_87e2d144b651 exited with code 0
And finally, when my Django project starts, I cannot import modules from graph-tool, like:
from graph_tool.all import *
If I try to work directly in this Docker image using:
docker run -it -u user -w /home/user tiagopeixoto/graph-tool ipython
everything goes fine.
What am I doing wrong, and how can I fix it to finally attach graph-tool to Django? Thanks!
Rather than using a separate Docker image for graph-tool, I think it's better to install it within the same Dockerfile you are using for Django. For example, update your current Dockerfile:
# using ubuntu image
FROM ubuntu:16.04
ENV PYTHONUNBUFFERED 1
ENV C_FORCE_ROOT true
# python3-graph-tool specific requirements for installation in Ubuntu from documentation
RUN echo "deb http://downloads.skewed.de/apt/xenial xenial universe" >> /etc/apt/sources.list && \
    echo "deb-src http://downloads.skewed.de/apt/xenial xenial universe" >> /etc/apt/sources.list
RUN apt-key adv --keyserver pgp.skewed.de --recv-key 612DEFB798507F25
# Install dependencies
RUN apt-get update \
    && apt-get install -y python3-pip python3-dev \
    && apt-get install --yes --no-install-recommends --allow-unauthenticated python3-graph-tool \
    && cd /usr/local/bin \
    && ln -s /usr/bin/python3 python \
    && pip3 install --upgrade pip
# Project specific setups
# These steps might be different in your project
RUN mkdir /code
WORKDIR /code
ADD . /code
RUN pip3 install -r requirements.pip
Now update your docker-compose file as well:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    container_name: djcon # <-- preferred over a generated name
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
That's it. Now open a shell in the web service with docker exec -ti djcon bash (or whatever generated name you have instead of djcon) and start the Django shell with python manage.py shell. Then type from graph_tool.all import * and it will not throw any import error.
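As a quick smoke test inside that Django shell, a minimal sketch using the graph-tool API:
# build a tiny graph to confirm graph-tool works end to end
from graph_tool.all import Graph

g = Graph(directed=True)
v1 = g.add_vertex()
v2 = g.add_vertex()
g.add_edge(v1, v2)
print(g.num_vertices(), g.num_edges())  # expected: 2 1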

docker-compose conflicts with Dockerfile entrypoint script

I'm trying to create a Docker image with my Django application, but unfortunately I'm running into trouble with my entrypoint script.
Docker exits with error code 127 and displays the following message:
/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
You will find the respective configuration files below:
Dockerfile
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /web/src
ADD . /web/src
WORKDIR /web/src
RUN pip install -U pip
RUN pip install -r requirements.txt -U
RUN chmod u+x docker-entrypoint.sh
ENTRYPOINT ["/bin/bash", "docker-entrypoint.sh"]
docker-entrypoint.sh
#!/bin/bash
python manage.py migrate
python manage.py collectstatic --noinput
touch /srv/logs/gunicorn.log
touch /srv/logs/access.log
tail -n 0 -f /srv/logs/*.log &
echo Starting Gunicorn...
exec gunicorn config.wsgi:application \
    --name django_server \
    --bind 0.0.0.0:8000 \
    --workers 3 \
    --log-level=info \
    --log-file=/srv/logs/gunicorn.log \
    --access-logfile=/srv/logs/access.log \
    "$@"
docker-compose.yml
version: '2.0'
services:
  db:
    container_name: db_server
    image: postgres
  web:
    container_name: django_server
    build: .
    volumes:
      - .:/web/src
    environment:
      - SECRET_KEY=k3jghf1jk%$JH^1GJH5#YUTR#!MBMB<5=7DXXG)JHSX=
      - PGDATABASE=postgres
      - PGUSER=postgres
      - PGPASSWORD=''
      - PGHOST=db
      - DJANGO_ENV=development
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    links:
      - db
After reproducing the problem locally: docker build . builds the image successfully, but when trying to start it with docker-compose up I got the same exec: gunicorn: not found error the OP mentioned above. Based on this thread, I could solve the problem by running docker-compose build. So, to sum up, the following three commands should solve the problem:
docker build .
docker-compose build
docker-compose up
Although this solves the problem for me, I'm still confused: why do I need to run build twice? Something must be wrong somewhere, because as far as I understand, docker-compose build should do the same work as docker build .. (The likely explanation is that docker build . produces an untagged image, while docker-compose up reuses the previously built image tagged <project>_web; docker-compose build rebuilds and retags exactly that image.)
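A sketch of an equivalent single rebuild, under the assumption that the Compose project directory is named myproject (Compose tags service images as <project>_<service>):
# build under the tag Compose expects, then start the stack without rebuilding
docker build -t myproject_web .
docker-compose up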