Installing geospatial libraries in Docker - Django

Django's official documentation lists three dependencies needed to start developing a PostGIS application, in a table that varies by database.
I use Docker for my local development, and I am confused about which of those packages should be installed in the Django container and which in the PostgreSQL container. I am guessing some of them should be in both.
I would appreciate your help with this.

You need to install the geospatial libraries only in the Django container, because they are used for interacting with a spatially enabled database (such as PostgreSQL with PostGIS). You can deploy such a database from a ready-made image built for that purpose, such as kartoza/postgis.
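For context, the Django side then only needs the PostGIS engine in its settings (with 'django.contrib.gis' added to INSTALLED_APPS). Here is a minimal sketch of the relevant settings.py fragment; the credentials are placeholders, and the host matches the compose service name used further below:

DATABASES = {
    'default': {
        # GeoDjango's PostGIS backend; this is why the geospatial
        # libraries must live in the Django container
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': 'postgis',
        'USER': 'user_name',      # placeholder
        'PASSWORD': 'user_pass',  # placeholder
        'HOST': 'postgis',        # the DB service/container name
        'PORT': '5432',
    }
}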
Here is a nice example of a Dockerfile that uses python:3.6-slim as a base and builds the GDAL dependencies into the container. The part of the Dockerfile you need is the following:
FROM python:3.6-slim

ENV PYTHONUNBUFFERED=1

# Add unstable repo to allow us to access latest GDAL builds
# Existing binutils causes a dependency conflict; the correct version will be installed when GDAL gets installed
RUN echo deb http://deb.debian.org/debian testing main contrib non-free >> /etc/apt/sources.list && \
    apt-get update && \
    apt-get remove -y binutils && \
    apt-get autoremove -y

# Install GDAL dependencies
RUN apt-get install -y libgdal-dev g++ --no-install-recommends && \
    pip install pipenv && \
    pip install whitenoise && \
    pip install gunicorn && \
    apt-get clean -y

# Update C env vars so compiler can find gdal
ENV CPLUS_INCLUDE_PATH=/usr/include/gdal
ENV C_INCLUDE_PATH=/usr/include/gdal
ENV LC_ALL="C.UTF-8"
ENV LC_CTYPE="C.UTF-8"
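If GeoDjango still fails to locate the libraries at runtime, Django's GDAL_LIBRARY_PATH and GEOS_LIBRARY_PATH settings let you point at them explicitly. A hedged sketch follows; the settings themselves exist in Django, but the exact .so paths are assumptions that vary by image:

# settings.py -- only needed if the automatic lookup fails;
# verify the real paths in your image, e.g. with: find / -name "libgdal*"
GDAL_LIBRARY_PATH = '/usr/lib/libgdal.so'                      # assumed path
GEOS_LIBRARY_PATH = '/usr/lib/x86_64-linux-gnu/libgeos_c.so'   # assumed path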
You can deploy both the Django app and the DB using docker-compose, using the following docker-compose.yaml (from the same repo as the Dockerfile):
# Sample compose file for a django app and postgis
version: '3'
services:
  postgis:
    image: kartoza/postgis:9.6-2.4
    volumes:
      - postgis_data:/var/lib/postgresql
    environment:
      ALLOW_IP_RANGE: 0.0.0.0/0
      POSTGRES_PASS: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: postgis
  web:
    image: intelligems/geodjango:latest
    command: python manage.py runserver 0.0.0.0:8000
    environment:
      DEBUG: "True"
      SECRET_KEY: ${SECRET_KEY}
      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgis:5432/postgis
      SENTRY_DSN: ${SENTRY_DSN}
    ports:
      - 8000:8000
    depends_on:
      - postgis
volumes:
  postgis_data: {}
In this repository, you can find more info and interesting bits of configuration for your issue: https://github.com/intelligems/docker-library/tree/master/geodjango (the Dockerfile snippet above is from that repo).
As a note:
If you want to create a PostgreSQL database with PostGIS enabled as a "local DB" to interact with your local Django, you can deploy the previously mentioned kartoza/postgis image directly:
Create a Volume:
$ docker volume create postgresql_data
Deploy the container:
$ docker run \
    --name=postgresql-with-postgis -d \
    -e POSTGRES_USER=user_name \
    -e POSTGRES_PASS=user_pass \
    -e ALLOW_IP_RANGE=0.0.0.0/0 \
    -p 5433:5432 \
    -v postgresql_data:/var/lib/postgresql \
    --restart=always \
    kartoza/postgis:9.6-2.4
Connect to the default DB (postgres) of the container and create your DB:
$ psql -h localhost -p 5433 -U user_name -d postgres
postgres=# CREATE DATABASE database_name;
Enable the PostGIS extension on the database:
postgres=# \connect database_name
database_name=# CREATE EXTENSION postgis;
This will result in a database named database_name listening on port 5433 of your localhost (mapped to the container's 5432), and you can connect to it from your local Django app.
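To verify the connection from the host before wiring up Django, a minimal psycopg2 check works; note the published port is 5433, and the user and database names match the commands above:

import psycopg2

# Connect through the published host port (5433 -> container's 5432)
conn = psycopg2.connect(
    host='localhost', port=5433,
    user='user_name', password='user_pass',
    dbname='database_name',
)
with conn.cursor() as cur:
    cur.execute('SELECT PostGIS_Version();')
    print(cur.fetchone()[0])  # e.g. "2.4 USE_GEOS=1 USE_PROJ=1 USE_STATS=1"
conn.close()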

Related

Built Docker image cannot reach PostgreSQL

I am using Django and PostgreSQL as different containers for a project. When I run the containers using docker-compose up, my Django application can connect to PostgreSQL. But when I build the Docker image with docker build . --file Dockerfile --tag rengine:$(date +%s), the image builds successfully, yet in entrypoint.sh it is unable to find the host db.
My docker-compose file is:
version: '3'
services:
  db:
    restart: always
    image: "postgres:12.3-alpine"
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_PORT=5432
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    networks:
      - rengine_network
  web:
    restart: always
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
    networks:
      - rengine_network
networks:
  rengine_network:
volumes:
  postgres_data:
Entrypoint:
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z db 5432; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi

python manage.py migrate

# Load default engine types
python manage.py loaddata fixtures/default_scan_engines.json --app scanEngine.EngineType

exec "$@"
and Dockerfile:
# Base image
FROM python:3-alpine

# Labels and Credits
LABEL \
    name="reNgine" \
    author="Yogesh Ojha <yogesh.ojha11@gmail.com>" \
    description="reNgine is an automated pipeline of recon process, useful for information gathering during web application penetration testing."

ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

RUN apk update \
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    && apk add chromium \
    && apk add git \
    && pip install psycopg2 \
    && apk del build-deps

# Copy requirements
COPY ./requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt

# Download and install go 1.13
COPY --from=golang:1.13-alpine /usr/local/go/ /usr/local/go/

# Environment vars
ENV DATABASE="postgres"
ENV GOROOT="/usr/local/go"
ENV GOPATH="/root/go"
ENV PATH="${PATH}:${GOROOT}/bin"
ENV PATH="${PATH}:${GOPATH}/bin"

# Download Go packages
RUN go get -u github.com/tomnomnom/assetfinder github.com/hakluke/hakrawler github.com/haccer/subjack
RUN GO111MODULE=on go get -u -v github.com/projectdiscovery/httpx/cmd/httpx \
    github.com/projectdiscovery/naabu/cmd/naabu \
    github.com/projectdiscovery/subfinder/cmd/subfinder \
    github.com/lc/gau

# Make directory for app
RUN mkdir /app
WORKDIR /app

# Copy source code
COPY . /app/

# Collect Static
RUN python manage.py collectstatic --no-input --clear

RUN chmod +x /app/tools/get_subdomain.sh
RUN chmod +x /app/tools/get_dirs.sh
RUN chmod +x /app/tools/get_urls.sh
RUN chmod +x /app/tools/takeover.sh

# run entrypoint.sh
ENTRYPOINT ["/app/docker-entrypoint.sh"]
When I run the built image, it says no such host db in entrypoint.sh.
Can somebody help me see where I am going wrong?
Based on a comment, the error appeared during
docker run rengine:1234
so the error is expected in this case: the hostname db only works inside the docker-compose network. Within a docker-compose stack, one service can reach another by its service name, but with a plain docker run each container runs in an isolated environment.
You have two options to resolve this issue.
The first option is to run the DB container and use a legacy link to connect the application container to it:
docker run -it --name db mydb_image
# now link the application container
docker run -it --link db:db rengine:1234
Now the container will be able to communicate with the host db.
The second option is to create a Docker network and run both containers on the same network:
docker network create mynetwork
docker run -itd --network=mynetwork --name db mydb_image
docker run -itd --network=mynetwork rengine:1234
But since you are already using docker-compose, it is better to do your testing with docker-compose.
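If you want to check connectivity the same way the entrypoint's nc loop does, here is a small Python equivalent you could drop into either setup to confirm that the db hostname resolves and the port is open (a sketch, not part of reNgine):

import socket
import sys
import time

def wait_for(host, port, timeout=30.0):
    """Poll until a TCP connection to host:port succeeds (like `nc -z`)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.1)
    return False

if __name__ == '__main__':
    # "db" resolves only on a shared network (compose, --link, or --network)
    ok = wait_for('db', 5432)
    print('PostgreSQL reachable' if ok else 'cannot resolve/reach db:5432')
    sys.exit(0 if ok else 1)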

What is wrong with my Gulp Watch task with Docker? [duplicate]

This question already has an answer here: Docker bound mount - can not see changes on browser (1 answer).
Closed 3 years ago.
EDIT: Gulp watch doesn't work on Windows with mounted volumes because no "file change" event is sent to the container. My current solution is to run Docker Windows Volume Watcher on my local machine while I see if I can integrate that approach into my code.
I'm trying to run a gulp watch task in my Dockerfile, and gulp isn't catching when my files change.
Quick Notes:
This set up works when I use it for my locally hosted wordpress installs
The file changes reflect in my docker container according to pycharm's docker service
Running the "styles" gulp task works, it's just the file watching that does not
It's clear to me that there's some sort of disconnect between how gulp watches for changes, and how Docker is letting that happen.
Github link
Edit: It looks possible to do what I want; here's a link to someone doing it slightly differently.
gulpfile excerpt:
export const watchForChanges = () => {
  watch('scss-js/scss/**/*.scss', gulp.series('styles'));
  watch('scss-js/js/**/*.js', scripts);
  watch('scss-js/scss/*.scss', gulp.series('styles'));
  watch('scss-js/js/*.js', scripts);
  // Try absolute path to see if it works
  watch('scss-js/scss/bundle.scss', gulp.series('styles'));
}
...
// Compile SCSS through styles command
export const styles = () => {
  // Want more than one SCSS file? Just turn the below string into an array
  return src('scss-js/scss/bundle.scss')
    // If we're in dev, init sourcemaps. Any plugins below need to be compatible with sourcemaps.
    .pipe(gulpif(!PRODUCTION, sourcemaps.init()))
    // Throw errors
    .pipe(sass().on('error', sass.logError))
    // In production use auto-prefixer, fix general grid and flex issues.
    .pipe(
      gulpif(
        PRODUCTION,
        postcss([
          autoprefixer({
            grid: true
          }),
          require("postcss-flexbugs-fixes"),
          require("postcss-preset-env")
        ])
      )
    )
    .pipe(gulpif(PRODUCTION, cleanCss({compatibility: 'ie8'})))
    // In dev write source maps
    .pipe(gulpif(!PRODUCTION, sourcemaps.write()))
    // TODO: Update this source folder
    .pipe(dest('blog/static/blog/'))
    .pipe(server.stream());
}
...
export const dev = series(parallel(styles, scripts), watchForChanges);
Docker-Compose:
version: "3.7"
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8002:8000"
      - "3001:3001"
      - "3000:3000"
    volumes:
      - ./django_project:/django_project
    command: >
      sh -c "python manage.py runserver 0.0.0.0:8000"
    restart: always
    depends_on:
      - db
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example1
    ports:
      - "5432:5432"
    restart: always
Dockerfile:
FROM python:3.8-buster
MAINTAINER Austin

ENV PYTHONUNBUFFERED 1

# Install node
RUN apt-get update && apt-get -y install nodejs
RUN apt-get install npm -y

# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh

# Update Node
# Install base dependencies
RUN apt-get update && apt-get install -y -q --no-install-recommends \
    apt-transport-https \
    build-essential \
    ca-certificates \
    curl \
    git \
    libssl-dev \
    wget

ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 12.14.0
WORKDIR $NVM_DIR

RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash \
    && . $NVM_DIR/nvm.sh \
    && nvm install $NODE_VERSION \
    && nvm alias default $NODE_VERSION \
    && nvm use default

ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH

# What PIP installs need to get done?
COPY django_project/requirements.txt /requirements.txt
RUN pip install -r /requirements.txt

# Copy local directory to target new docker directory
RUN mkdir -p /django_project
WORKDIR /django_project
COPY ./django_project /django_project

# Make Postgres Work
EXPOSE 5432/tcp

WORKDIR /django_project
RUN npm install gulp-cli -g
What do you think could be going on?
My guess is you are running on Windows, right?
If so, take a look at the following answer:
https://stackoverflow.com/a/58969398/12153397
Below is the gist of the linked answer.
Issue identified
Bind mounting does not work for Docker Toolbox: file change events in mounted folders of the host are not propagated to the container by Docker for Windows.
Solution
This script is intended to be the answer to this issue: docker-windows-volume-watcher.
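For reference, the core idea of that workaround is simple enough to sketch in a few lines of Python: poll file mtimes on the Windows host and re-touch the corresponding path inside the container, so an inotify event fires for gulp. Everything below (container name, paths) is an assumption for illustration, not the linked script's actual code:

import subprocess
import time
from pathlib import Path

HOST_DIR = Path('./scss-js')                # host folder to watch (assumption)
CONTAINER = 'my_app_container'              # container name (assumption)
CONTAINER_DIR = '/django_project/scss-js'   # mount target in container (assumption)

mtimes = {}
while True:
    for f in HOST_DIR.rglob('*'):
        if not f.is_file():
            continue
        m = f.stat().st_mtime
        if f in mtimes and mtimes[f] != m:
            rel = f.relative_to(HOST_DIR).as_posix()
            # touch the same file inside the container to emit the
            # change event that the bind mount swallowed
            subprocess.run(['docker', 'exec', CONTAINER,
                            'touch', f'{CONTAINER_DIR}/{rel}'], check=False)
        mtimes[f] = m
    time.sleep(1)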

How to attach graph-tool to Django using Docker

I need to use some graph-tool calculations in my Django project. So I started with docker pull tiagopeixoto/graph-tool and then added it to my docker-compose file:
version: '3'
services:
  db:
    image: postgres
  graph-tool:
    image: dcagatay/graph-tool
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
      - graph-tool
When I bring the stack up with docker-compose up, I get this line:
project_graph-tool_1_87e2d144b651 exited with code 0
And when my Django project starts, I cannot import modules from graph-tool, e.g.:
from graph_tool.all import *
If I work directly in the docker image using
docker run -it -u user -w /home/user tiagopeixoto/graph-tool ipython
everything goes fine.
What am I doing wrong and how can I fix it and finally attach graph-tool to Django? Thanks!
Rather than using a separate Docker image for graph-tool, I think it's better to install it within the same Dockerfile you are using for Django. For example, update your current Dockerfile:
# Using the Ubuntu image
FROM ubuntu:16.04

ENV PYTHONUNBUFFERED 1
ENV C_FORCE_ROOT true

# python3-graph-tool specific requirements for installation in Ubuntu from documentation
RUN echo "deb http://downloads.skewed.de/apt/xenial xenial universe" >> /etc/apt/sources.list && \
    echo "deb-src http://downloads.skewed.de/apt/xenial xenial universe" >> /etc/apt/sources.list

RUN apt-key adv --keyserver pgp.skewed.de --recv-key 612DEFB798507F25

# Install dependencies
RUN apt-get update \
    && apt-get install -y python3-pip python3-dev \
    && apt-get install --yes --no-install-recommends --allow-unauthenticated python3-graph-tool \
    && cd /usr/local/bin \
    && ln -s /usr/bin/python3 python \
    && pip3 install --upgrade pip

# Project specific setups
# These steps might be different in your project
RUN mkdir /code
WORKDIR /code
ADD . /code
RUN pip3 install -r requirements.pip
Now update your docker-compose file as well:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    container_name: djcon  # <-- preferred over generated name
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
That's it. Now open a shell in your web service with docker exec -ti djcon bash (or the generated name instead of djcon) and start the Django shell with python manage.py shell. Then type from graph_tool.all import * and it will not throw any import error.
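As a quick, minimal sanity check (a sketch, independent of any Django model), you can build a tiny graph right in that shell:

# run inside `python manage.py shell` in the web container
from graph_tool.all import Graph

g = Graph(directed=True)
v1, v2 = g.add_vertex(), g.add_vertex()
g.add_edge(v1, v2)
print(g.num_vertices(), g.num_edges())  # expected output: 2 1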

How to deploy a docker image created with version 2 on AWS

I am new to Docker. I somehow created a Docker project with a version 2 docker-compose. The following is my docker-compose.yml:
version: "2"
services:
  # Configuration for php web server
  webserver:
    image: inshastri/laravel-adminpanel:latest
    restart: always
    ports:
      - '8080:80'
    networks:
      - web
    volumes:
      - ./:/var/www/html
      - ./apache.conf:/etc/apache2/sites-available/000-default.conf
    depends_on:
      - db
    links:
      - db
      # - redis
    environment:
      DB_HOST: db
      DB_DATABASE: phpapp
      DB_USERNAME: root
      DB_PASSWORD: toor
  # Configuration for mysql db server
  db:
    image: "mysql:5"
    volumes:
      - ./mysql:/etc/mysql/conf.d
    environment:
      MYSQL_ROOT_PASSWORD: toor
      MYSQL_DATABASE: phpapp
    networks:
      - web
    restart: always
  # Configuration for phpmyadmin (optional)
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_PORT: 3306
      PMA_HOST: db
      PMA_USER: root
      PMA_PASSWORD: toor
    ports:
      - "8004:80"
    restart: always
    depends_on:
      - db
  redis:
    image: redis:4.0-alpine
# Network connecting the whole app
networks:
  web:
    driver: bridge
and with the Dockerfile below:
FROM ubuntu:16.04

RUN apt-get update \
    && apt-get install -qy language-pack-en-base \
    && locale-gen en_US.UTF-8

ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8

RUN apt-get -y install apache2
RUN a2enmod headers
RUN a2enmod rewrite

# add PPA for PHP 7
RUN apt-get install -y --no-install-recommends apt-utils
RUN apt-get install -y software-properties-common python-software-properties
RUN add-apt-repository -y ppa:ondrej/php

# Adding php 7
RUN apt-get update
RUN apt-get install -y php7.1 php7.1-fpm php7.1-cli php7.1-common php7.1-mbstring php7.1-gd php7.1-intl php7.1-xml php7.1-mysql php7.1-mcrypt php7.1-zip
RUN apt-get -y install libapache2-mod-php7.1 php7.1 php7.1-cli php-xdebug php7.1-mbstring sqlite3 php7.1-mysql php-imagick php-memcached php-pear curl imagemagick php7.1-dev php7.1-phpdbg php7.1-gd npm nodejs-legacy php7.1-json php7.1-curl php7.1-sqlite3 php7.1-intl apache2 vim git-core wget libsasl2-dev libssl-dev
RUN apt-get -y install libsslcommon2-dev libcurl4-openssl-dev autoconf g++ make openssl libssl-dev libcurl4-openssl-dev pkg-config libsasl2-dev libpcre3-dev
RUN apt-get install -y imagemagick graphicsmagick

RUN a2enmod headers
RUN a2enmod rewrite

ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2

RUN ln -sf /dev/stdout /var/log/apache2/access.log && \
    ln -sf /dev/stderr /var/log/apache2/error.log

RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR

# Update application repository list and install the Redis server.
RUN apt-get update && apt-get install -y redis-server

# Allow Composer to be run as root
ENV COMPOSER_ALLOW_SUPERUSER 1

# Setup the Composer installer
RUN curl -o /tmp/composer-setup.php https://getcomposer.org/installer \
    && curl -o /tmp/composer-setup.sig https://composer.github.io/installer.sig \
    && php -r "if (hash('SHA384', file_get_contents('/tmp/composer-setup.php')) !== trim(file_get_contents('/tmp/composer-setup.sig'))) { unlink('/tmp/composer-setup.php'); echo 'Invalid installer' . PHP_EOL; exit(1); }" \
    && php /tmp/composer-setup.php \
    && chmod a+x composer.phar \
    && mv composer.phar /usr/local/bin/composer

# Install composer dependencies
RUN echo pwd: `pwd` && echo ls: `ls`
# RUN composer install

EXPOSE 80
# Expose default port
EXPOSE 6379

VOLUME [ "/var/www/html", "./mysql:/etc/mysql/conf.d" ]

WORKDIR /var/www/html

ENTRYPOINT [ "/usr/sbin/apache2" ]
CMD ["-D", "FOREGROUND"]

COPY . /var/www/html
COPY ./apache.conf /etc/apache2/sites-available/000-default.conf
Now there are 2 things which I cannot understand after googling a lot:
1) When I give the image to my friend and he pulls and runs it, it comes up without the other services like mysql and phpmyadmin.
2) How should I deploy this application to Amazon EC2?
There are lots of options, like EC2 and Beanstalk, but I cannot understand any of them.
Please guide me through a simple upload of my image to AWS and running it there, and also through how my friend can run my image on his PC. I thought Docker was a container management system, so shouldn't it bring up all my services when my friend or anyone else pulls my image?
For reference, my image is inshastri/laravel-adminpanel.
Please help; thanks in advance.

Dockerizing an already existing app and database

I am trying to Dockerize an app that is already created (database included).
I've got the proper files in place:
docker-compose.yml
dockerfile
requirements.txt
I'm having trouble with the database part:
How do I configure the docker-compose.yml file to point to the database that is already created?
Here's why I ask: my understanding of Docker is that you create your base app and then "Dockerize" it, packaging it into an image that you can distribute. I'm a beginner at this, so that may be why I'm not understanding.
Here is my current docker-compose.yml:
version: '2'
services:
  db:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=qwerty
      - POSTGRES_DB=ar_db
    ports:
      - "5433:5433"
  web:
    build: .
    command: python2.7 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
and dockerfile:
############################################################
# Dockerfile to run a Django-based web application
# Based on an Ubuntu Image
############################################################

# Set the base image to use to Ubuntu
FROM debian:8.8

# Set the file maintainer (your name - the file's author)
MAINTAINER HeatherJ

# Update the default application repository sources list
RUN apt-get update && apt-get -y upgrade
RUN apt-get install -y python python-pip libpq-dev python-dev

# install git
RUN apt-get update && apt-get install -y --no-install-recommends \
    git && rm -rf /var/lib/apt/lists/*

# Set env variables used in this Dockerfile (add a unique prefix, such as DOCKYARD)
# Local directory with project source
ENV DOCKYARD_SRC=EPIC_AR
# Directory in container for all project files
ENV DOCKYARD_SRVHOME=/EPIC_AR
# Directory in container for project source files
ENV DOCKYARD_SRVPROJ=/home/epic/EPIC_AR/EPIC_AR

# Create application subdirectories
WORKDIR $DOCKYARD_SRVHOME
RUN mkdir media static logs
VOLUME ["$DOCKYARD_SRVHOME/media/", "$DOCKYARD_SRVHOME/logs/"]

# Copy application source code to SRCDIR
COPY $DOCKYARD_SRC $DOCKYARD_SRVPROJ

# Install Python dependencies
RUN pip install -r $DOCKYARD_SRVPROJ/requirements.txt

# Copy entrypoint script into the image
WORKDIR $DOCKYARD_SRVPROJ
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]