Built Docker image cannot reach PostgreSQL - Django

I am using Django and PostgreSQL as separate containers for a project. When I run the containers using docker-compose up, my Django application can connect to PostgreSQL. But when I build the Docker image with docker build . --file Dockerfile --tag rengine:$(date +%s), the image builds successfully, yet entrypoint.sh is unable to find the host db.
My docker-compose file is:
version: '3'
services:
  db:
    restart: always
    image: "postgres:12.3-alpine"
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_PORT=5432
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    networks:
      - rengine_network
  web:
    restart: always
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
    networks:
      - rengine_network
networks:
  rengine_network:
volumes:
  postgres_data:
Entrypoint
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z db 5432; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi
python manage.py migrate
# Load default engine types
python manage.py loaddata fixtures/default_scan_engines.json --app scanEngine.EngineType
exec "$@"
and the Dockerfile:
# Base image
FROM python:3-alpine
# Labels and Credits
LABEL \
    name="reNgine" \
    author="Yogesh Ojha <yogesh.ojha11@gmail.com>" \
    description="reNgine is an automated pipeline of recon process, useful for information gathering during web application penetration testing."
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apk update \
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    && apk add chromium \
    && apk add git \
    && pip install psycopg2 \
    && apk del build-deps
# Copy requirements
COPY ./requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt
# Download and install Go 1.13
COPY --from=golang:1.13-alpine /usr/local/go/ /usr/local/go/
# Environment vars
ENV DATABASE="postgres"
ENV GOROOT="/usr/local/go"
ENV GOPATH="/root/go"
ENV PATH="${PATH}:${GOROOT}/bin"
ENV PATH="${PATH}:${GOPATH}/bin"
# Download Go packages
RUN go get -u github.com/tomnomnom/assetfinder github.com/hakluke/hakrawler github.com/haccer/subjack
RUN GO111MODULE=on go get -u -v github.com/projectdiscovery/httpx/cmd/httpx \
    github.com/projectdiscovery/naabu/cmd/naabu \
    github.com/projectdiscovery/subfinder/cmd/subfinder \
    github.com/lc/gau
# Make directory for app
RUN mkdir /app
WORKDIR /app
# Copy source code
COPY . /app/
# Collect static files
RUN python manage.py collectstatic --no-input --clear
RUN chmod +x /app/tools/get_subdomain.sh
RUN chmod +x /app/tools/get_dirs.sh
RUN chmod +x /app/tools/get_urls.sh
RUN chmod +x /app/tools/takeover.sh
# Run entrypoint.sh
ENTRYPOINT ["/app/docker-entrypoint.sh"]
When I run the built image, it says no such host db in the entrypoint.sh file.
Can somebody help me see where I am going wrong?

Based on a comment, the error appeared during
docker run rengine:1234
so the error is expected in this case: the hostname db will only work inside the docker-compose network. Within a docker-compose stack, one service can reach another using its service name, but with docker run each container runs in an isolated environment.
You have two options to resolve this issue.
First, run the DB container and use a legacy link to connect the application container to it:
docker run -it --name db mydb_image
# now link the application container
docker run -it --link db:db rengine:1234
Now the application container will be able to communicate with the host db.
The second option is to create a docker network and run both containers on the same network:
docker network create mynetwork
docker run -itd --network=mynetwork --name db mydb_image
docker run -itd --network=mynetwork rengine:1234
But you are already using docker-compose, so it is better to do your testing with docker-compose.
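For example, a quick way to sanity-check connectivity without leaving compose (a sketch, assuming the compose file above with its db and web services):
# start the database, then run a one-off web container on the compose network
docker-compose up -d db
docker-compose run --rm web sh -c 'nc -z db 5432 && echo "db reachable"'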

Related

Why is my docker image not running when using docker run (image), but I can run containers generated by docker-compose up?

My docker-compose setup creates 3 containers: django, celery and rabbitmq. When I run docker-compose build and docker-compose up, the containers run successfully.
However, I am having issues with deploying the image. The image generated has an image ID of 24d7638e2aff. For whatever reason, if I just run the command below, the container exits immediately with code 0. Both the django and celery applications have the same image ID.
docker run 24d7638e2aff
This is not good, as I am unable to deploy this image on Kubernetes. My only thought is that the Dockerfile has been configured wrongly, but I cannot figure out the cause.
docker-compose.yaml
version: "3.9"
services:
  django:
    container_name: testapp_django
    build:
      context: .
      args:
        build_env: production
    ports:
      - "8000:8000"
    command: >
      sh -c "python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    links:
      - rabbitmq
      - celery
  rabbitmq:
    container_name: testapp_rabbitmq
    restart: always
    image: rabbitmq:3.10-management
    ports:
      - "5672:5672" # specifies port of queue
      - "15672:15672" # specifies port of management plugin
  celery:
    container_name: testapp_celery
    restart: always
    build:
      context: .
      args:
        build_env: production
    command: celery -A testapp worker -l INFO -c 4
    depends_on:
      - rabbitmq
Dockerfile
ARG PYTHON_VERSION=3.9-slim-buster
# define an alias for the specific python version used in this file.
FROM python:${PYTHON_VERSION} as python
# Python build stage
FROM python as python-build-stage
ARG build_env
# Install apt packages
RUN apt-get update && apt-get install --no-install-recommends -y \
    # dependencies for building Python packages
    build-essential \
    # psycopg2 dependencies
    libpq-dev
# Requirements are installed here to ensure they will be cached.
COPY ./requirements .
# Create Python Dependency and Sub-Dependency Wheels.
RUN pip wheel --wheel-dir /usr/src/app/wheels \
    -r ${build_env}.txt
# Python 'run' stage
FROM python as python-run-stage
ARG build_env
ARG APP_HOME=/app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV BUILD_ENV ${build_env}
WORKDIR ${APP_HOME}
RUN addgroup --system appuser \
    && adduser --system --ingroup appuser appuser
# Install required system dependencies
RUN apt-get update && apt-get install --no-install-recommends -y \
    # psycopg2 dependencies
    libpq-dev \
    # Translations dependencies
    gettext \
    # git for GitPython commands
    git-all \
    # cleaning up unused files
    && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
    && rm -rf /var/lib/apt/lists/*
# All absolute dir copies ignore the workdir instruction. All relative dir copies are wrt the workdir instruction
# copy python dependency wheels from python-build-stage
COPY --from=python-build-stage /usr/src/app/wheels /wheels/
# use wheels to install python dependencies
RUN pip install --no-cache-dir --no-index --find-links=/wheels/ /wheels/* \
    && rm -rf /wheels/
COPY --chown=appuser:appuser ./docker_scripts/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
# copy application code to WORKDIR
COPY --chown=appuser:appuser . ${APP_HOME}
# make appuser owner of the WORKDIR directory as well.
RUN chown appuser:appuser ${APP_HOME}
USER appuser
EXPOSE 8000
ENTRYPOINT ["/entrypoint"]
entrypoint
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset

exec "$@"
How do I build images of these containers so that I can deploy them to k8s?
The Compose command: overrides the Dockerfile CMD. docker run doesn't look at the docker-compose.yml file at all, and docker run with no particular command runs the image's CMD. You haven't declared anything for that, which is why the container exits immediately.
Leave the entrypoint script unchanged (or even delete it entirely, since it doesn't really do anything), and add a CMD line to the Dockerfile:
CMD python manage.py migrate && python manage.py runserver 0.0.0.0:8000
Now plain docker run as you've shown it will attempt to start the Django server. For the Celery container, you can still pass a command override:
docker run -d --net ... your-image \
    celery -A testapp worker -l INFO -c 4
If you do deploy to Kubernetes, and you keep the entrypoint script, then you need to use args: in your pod spec to provide the alternate command, not command:.
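For instance, a minimal sketch of the relevant pod spec fields (the container and image names here are placeholders): in Kubernetes, command: replaces the image's ENTRYPOINT and args: replaces its CMD, so with the entrypoint script kept, only args: is set:
spec:
  containers:
    - name: celery
      image: your-image  # hypothetical tag
      # command: would override the /entrypoint script, so leave it unset
      args: ["celery", "-A", "testapp", "worker", "-l", "INFO", "-c", "4"]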
I think that is because the commands to run the Django server live in the docker-compose.yml.
You should move these commands into the entrypoint:
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset

python manage.py migrate && python manage.py runserver 0.0.0.0:8000
exec "$@"
Note that python manage.py runserver 0.0.0.0:8000 starts the application with a development server that cannot be used for production purposes.
You should look at gunicorn or similar.
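As a sketch of that (assuming the WSGI module is testapp.wsgi, matching the celery -A testapp command above), the final line of the entrypoint could become:
# hypothetical production command; the module path and worker count are assumptions
gunicorn testapp.wsgi:application --bind 0.0.0.0:8000 --workers 4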

Django migrations issues with Postgres

I'm working on a Django project which is dockerized and uses Postgres for the database. We are facing migration issues: every time someone changes the models, the next dev who pulls from git and runs python manage.py migrate hits errors like "table already exists" or "table doesn't exist", because the migration files are already committed. So every time I need to apply the migrations using --fake, but I guess migrating with the --fake flag every time is not a good approach.
docker-compose.yml
version: "3.8"
services:
  db:
    container_name: db
    image: "postgres"
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - dev.env
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=POSTGRES_DB
      - POSTGRES_USER=POSTGRES_USER
      - POSTGRES_PASSWORD=POSTGRES_PASSWORD
  app:
    container_name: app
    build:
      context: .
    command: bash -c "python manage.py runserver 0.0.0.0:8000"
    volumes:
      - ./core:/app
      - ./data/web:/vol/web
    env_file:
      - dev.env
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  postgres_data:
Dockerfile
FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /app
EXPOSE 8000
COPY ./core/ /app/
COPY ./scripts /scripts
# installing nano and cron service
RUN apt-get update
RUN apt-get install -y cron
RUN apt-get install nano
RUN pip install --upgrade pip
COPY requirements.txt /app/
# install dependencies and manage assets
RUN pip install -r requirements.txt && \
    mkdir -p /vol/web/static && \
    mkdir -p /vol/web/media
# files for cron logs
RUN mkdir /cron
RUN touch /cron/django_cron.log
# start cron service
RUN service cron start
RUN service cron restart
RUN chmod +x /scripts/run.sh
CMD ["/scripts/run.sh"]
run.sh
#!/bin/sh
set -e
ls -la /vol/
ls -la /vol/web
whoami
python manage.py collectstatic --noinput
python manage.py migrate
service cron start
service cron restart
python manage.py crontab add
printenv > env.txt
cat /var/spool/cron/crontabs/root >> env.txt
cat env.txt > /var/spool/cron/crontabs/root
uwsgi --socket :9000 --workers 4 --master --enable-threads --module alectify.wsgi
Django offers the ability to create updated migrations when the models change (see https://docs.djangoproject.com/en/4.0/topics/migrations/#workflow for more information). You can generate and then apply the updated migrations using:
python manage.py makemigrations
python manage.py migrate
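In a dockerized setup like the one above, a sketch of that workflow (assuming the service names from your docker-compose.yml) would be:
# generate migration files inside the running app container, then apply them;
# commit the generated files so every dev applies the same migrations
docker-compose exec app python manage.py makemigrations
docker-compose exec app python manage.py migrate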

How to execute docker-entrypoint-initdb.d/init.sql files AFTER database is created?

I have a Django app with Docker.
I want to initialize my database from an init.sql file when running docker-compose up.
The 2 containers are correctly built and the init.sql file is available in db_container,
but docker logs db_container shows an error indicating that the database has not been migrated yet:
ERROR: relation table1 does not exist
The database is created when the entrypoint.sh file is executed (command python manage.py migrate).
I do not understand: when is init.sql executed?
Dockerfile
FROM python:3.8.3-alpine
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev
RUN apk --update add libxml2-dev libxslt-dev libffi-dev gcc musl-dev libgcc openssl-dev curl
RUN apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev
RUN pip3 install psycopg2 psycopg2-binary
COPY requirements/ requirements/
RUN pip install --upgrade pip && pip install -r requirements/dev.txt
COPY ./entrypoint.sh .
COPY . .
# Run entrypoint.sh
ENTRYPOINT [ "/usr/src/app/entrypoint.sh" ]
entrypoint.sh
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    # nc = netcat, -z = scan without sending data
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi
python manage.py flush --no-input
python manage.py migrate
exec "$@"
docker-compose.yml
version: '3.7'
services:
  web:
    ...
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    restart: always
    volumes:
      - postgres_data:/var/lib/postgres/data/
      - ./imports/init.sql:/docker-entrypoint-initdb.d/init.sql
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=user
      - POSTGRES_DB=db_dev
volumes:
  postgres_data:
init.sql
\c db_dev
INSERT INTO table1 (field1,field2,field3) VALUES ('value1','value2','value3');
...
/docker-entrypoint-initdb.d/init.sql is executed the moment your database container starts running, while your entrypoint.sh is executed the moment your web container starts running. Since your web container depends on your database container, the SQL script will always be executed ahead of your entrypoint.
In other words, what you want is impossible.
You need to either create the database and table1 in init.sql and tell Django not to attempt creating them if they already exist, or somehow add your INSERT.. to the list of migrations to be run.
I've never used Django, so I do not know if either of the above is possible.
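For what it's worth, Django does ship a flag aimed at the first option: migrate --fake-initial skips an app's initial migration when all the tables it would create already exist:
python manage.py migrate --fake-initial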
Things will generally run in the following order:
1) The web build: block and the Dockerfile you show execute. This will never have access to the database.
2) The db container is created.
3) The web container is created.
4) The application image's entrypoint script gets up to the nc -z loop, and effectively pauses there.
5) The postgres image's entrypoint script starts a temporary PostgreSQL instance.
6) The postgres image's entrypoint script creates the db_dev database, as named in the POSTGRES_DB environment variable.
7) The postgres image's entrypoint script runs all of the files in /docker-entrypoint-initdb.d (including the init.sql script you show).
8) The postgres image's entrypoint script stops the temporary PostgreSQL instance and starts the real network-accessible one.
9) The application image's entrypoint script succeeds in connecting to PostgreSQL and proceeds past the nc loop.
10) The application image's entrypoint script runs python manage.py migrate.
11) The application image's entrypoint script runs exec "$@" to launch the main container command.
Put another way: the Django migration command won't run until after the PostgreSQL container has fully completed its first-time startup, including running the docker-entrypoint-initdb.d scripts. That means the tables that migrations create do not exist yet at the time init.sql runs.
Data that needs to get loaded into a database at initialization time, for example to run tests or for well-known "admin" users, is generally referred to as "seed data". How to seed Django project ? - insert a bunch of data into the project for initialization has examples of adding a custom manage.py command or a fixture file to load the seed data. In the entrypoint script you show, you'd add the python manage.py seed or loaddata command after running the migrations, but before launching the main server at the end.
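For example, a minimal sketch of that ordering in the entrypoint (the fixture name seed_data.json is a placeholder):
python manage.py migrate
# load seed data once the schema exists, before starting the main server
python manage.py loaddata seed_data.json
exec "$@"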
Thanks to both.
I used data migrations (https://docs.djangoproject.com/en/3.1/howto/writing-migrations/)
and removed the docker-entrypoint-initdb.d scripts.
My database is now initialized when the entrypoint is executed (manage.py migrate applies all the migrations).
https://github.com/ray-chunkit-chung/essential-postgresql
1) Firstly, create an init script in a folder db
db\01-init.sh
The init script is to create the db if not exists. You can add data insert here as well
#!/bin/bash
set -e
export PGPASSWORD=$POSTGRES_PASSWORD;
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
  CREATE USER $APP_DB_USER WITH PASSWORD '$APP_DB_PASS';
  CREATE DATABASE $APP_DB_NAME;
  GRANT ALL PRIVILEGES ON DATABASE $APP_DB_NAME TO $APP_DB_USER;
  \connect $APP_DB_NAME $APP_DB_USER
  BEGIN;
    CREATE TABLE IF NOT EXISTS event (
      id CHAR(26) NOT NULL CHECK (CHAR_LENGTH(id) = 26) PRIMARY KEY,
      xxx INT,
      yyy JSON NOT NULL,
      UNIQUE(xxx)
    );
  COMMIT;
EOSQL
2) Then, in the docker-compose file, use volumes to map the host db folder to the container's docker-entrypoint-initdb.d so the init script is shared:
version: '3'
services:
  sql_app:
    ... ## Other settings
    depends_on:
      - postgres
    networks:
      - postgres-network
  postgres:
    image: postgres:13.1
    ... ## Other settings
    restart: always
    environment:
      - POSTGRES_USER=foooooo
      - POSTGRES_PASSWORD=foooooo
      - APP_DB_USER=foooooo
      - APP_DB_PASS=foooooo
      - APP_DB_NAME=foooooo
    volumes:
      - ./db:/docker-entrypoint-initdb.d/
    ports:
      - 5432:5432
    networks:
      - postgres-network
networks:
  postgres-network:
    driver: bridge
3) Remove the entrypoint/migration script from the Dockerfile:
FROM python:3.8.3-alpine
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev
RUN apk --update add libxml2-dev libxslt-dev libffi-dev gcc musl-dev libgcc openssl-dev curl
RUN apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev
RUN pip3 install psycopg2 psycopg2-binary
COPY requirements/ requirements/
RUN pip install --upgrade pip && pip install -r requirements/dev.txt
COPY . .
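One caveat worth noting: the official postgres image only runs docker-entrypoint-initdb.d scripts when the data directory is empty, so after changing db/01-init.sh you would have to recreate the database's data to re-trigger it (a sketch, assuming the data directory lives on a compose-managed volume):
# WARNING: -v deletes the compose-managed volumes and all database data
docker-compose down -v
docker-compose up --build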

Starting Django's runserver in a Docker container through docker-compose

I would like to have Django's runserver command running when I call docker-compose up.
Here is what I tried. First, my image starts from a Python image customized following this Dockerfile:
# Dockerfile
FROM python:3.8
MAINTAINER geoffroy
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Ports exposure
EXPOSE 8000
VOLUME /data
# Install dependencies
RUN apt-get update && apt-get install -y \
    vim \
    git \
    && rm -rf /var/lib/apt/lists/*
# Setup python dependencies
RUN git clone https://github.com/blondelg/auto.git
WORKDIR /auto
RUN cd /auto
RUN pip install --no-cache-dir -r requirements.txt
# Build the secret key generator
RUN echo "import random" > generate_key.py
RUN echo "print(''.join(random.SystemRandom().choice('abcdefghijklmnopqrstuvwxyz0123456789!#$^&*(-_=+)') for i in range(50)))" >> generate_key.py
# Setup environment configuration
RUN cp auto/config_sample.ini auto/config.ini
RUN sed -i "s/SECRET_KEY_PATTERN/$(python generate_key.py)/gI" auto/config.ini
RUN sed -i "s/django.db.backends.sqlite3/django.db.backends.mysql/gI" auto/config.ini
RUN sed -i 's|{BASE_DIR}/db.sqlite3|autodb|gI' auto/config.ini
RUN sed -i "s/USER_PATTERN/root/gI" auto/config.ini
RUN sed -i "s/PASSWORD_PATTERN/root/gI" auto/config.ini
RUN sed -i "s/HOST_PATTERN/database/gI" auto/config.ini
RUN sed -i "s/PORT_PATTERN/3306/gI" auto/config.ini
Then, I have my docker-compose.yml, designed as follows:
# docker-compose.yml
version: "3"
services:
  database:
    image: mariadb
    container_name: database
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=autodb
    hostname: 'database'
  runserver:
    build: .
    command: python /auto/manage.py runserver 0.0.0.0:8000
    container_name: runserver
    ports:
      - "8000:8000"
    depends_on:
      - database
Then I run
docker-compose up --build -d
My two containers (runserver + database) are up, but going to http://127.0.0.1:8000 returns an error page, whereas I should see the Django start page.
Also, when I go into the container (docker exec -ti runserver bash) and run python manage.py runserver 0.0.0.0:8000, I can access the Django start page through http://127.0.0.1:8000.
What could be wrong here?
In your Dockerfile, there is the line WORKDIR /auto, which means that you are already in the /auto folder. So, in your docker-compose file, you should say
python manage.py runserver 0.0.0.0:8000
instead of /auto/manage.py.
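In other words, a sketch of the corrected service entry (other fields as in the question):
runserver:
  build: .
  command: python manage.py runserver 0.0.0.0:8000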
Following this documentation https://docs.docker.com/compose/django/, I fixed the way I was declaring the volume in my docker-compose.yml:
volumes:
  - .:/auto

How to attach graph-tool to Django using Docker

I need to use some graph-tool calculations in my Django project, so I started with docker pull tiagopeixoto/graph-tool and then added it to my docker-compose file:
version: '3'
services:
  db:
    image: postgres
  graph-tool:
    image: dcagatay/graph-tool
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
      - graph-tool
When I run docker-compose up, I get the line:
project_graph-tool_1_87e2d144b651 exited with code 0
And finally, when my Django project starts, I cannot import modules from graph-tool, like:
from graph_tool.all import *
If I work directly in this docker image using:
docker run -it -u user -w /home/user tiagopeixoto/graph-tool ipython
everything goes fine.
What am I doing wrong, and how can I fix it and finally attach graph-tool to Django? Thanks!
Rather than using a separate docker image for graph-tool, I think it's better to install it within the same Dockerfile you are using for Django. For example, update your current Dockerfile:
# using ubuntu image
FROM ubuntu:16.04
ENV PYTHONUNBUFFERED 1
ENV C_FORCE_ROOT true
# python3-graph-tool specific requirements for installation in Ubuntu from documentation
RUN echo "deb http://downloads.skewed.de/apt/xenial xenial universe" >> /etc/apt/sources.list && \
    echo "deb-src http://downloads.skewed.de/apt/xenial xenial universe" >> /etc/apt/sources.list
RUN apt-key adv --keyserver pgp.skewed.de --recv-key 612DEFB798507F25
# Install dependencies
RUN apt-get update \
    && apt-get install -y python3-pip python3-dev \
    && apt-get install --yes --no-install-recommends --allow-unauthenticated python3-graph-tool \
    && cd /usr/local/bin \
    && ln -s /usr/bin/python3 python \
    && pip3 install --upgrade pip
# Project specific setups
# These steps might be different in your project
RUN mkdir /code
WORKDIR /code
ADD . /code
RUN pip3 install -r requirements.pip
Now update your docker-compose file as well:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    container_name: djcon # <-- preferred over generated name
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
That's it. Now go to your web service's shell with docker exec -ti djcon bash (or whatever generated name you have instead of djcon) and open the Django shell with python manage.py shell. Type from graph_tool.all import * and it will not throw any import error.
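As a quick sanity check (a sketch, reusing the djcon container name from the compose file above):
# verify the import works without opening a Django shell
docker exec -ti djcon python3 -c "import graph_tool; print(graph_tool.__version__)"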