How to attach graph-tool to Django using Docker - django

I need to use some graph-tool calculations in my Django project, so I started with docker pull tiagopeixoto/graph-tool and then added it to my docker-compose file:
version: '3'
services:
  db:
    image: postgres
  graph-tool:
    image: dcagatay/graph-tool
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
      - graph-tool
When I bring the stack up with docker-compose up, I get this line:
project_graph-tool_1_87e2d144b651 exited with code 0
And when my Django project starts, I cannot import modules from graph-tool, like:
from graph_tool.all import *
If I work directly in the docker image using:
docker run -it -u user -w /home/user tiagopeixoto/graph-tool ipython
everything goes fine.
What am I doing wrong, and how can I fix it so that graph-tool is finally attached to Django? Thanks!

Rather than using a separate Docker image for graph-tool, I think it's better to install it in the same Dockerfile you are using for Django. For example, update your current Dockerfile:
# Using the Ubuntu image
FROM ubuntu:16.04
ENV PYTHONUNBUFFERED 1
ENV C_FORCE_ROOT true
# python3-graph-tool specific requirements for installation on Ubuntu, from the documentation
RUN echo "deb http://downloads.skewed.de/apt/xenial xenial universe" >> /etc/apt/sources.list && \
    echo "deb-src http://downloads.skewed.de/apt/xenial xenial universe" >> /etc/apt/sources.list
RUN apt-key adv --keyserver pgp.skewed.de --recv-key 612DEFB798507F25
# Install dependencies
RUN apt-get update \
    && apt-get install -y python3-pip python3-dev \
    && apt-get install --yes --no-install-recommends --allow-unauthenticated python3-graph-tool \
    && cd /usr/local/bin \
    && ln -s /usr/bin/python3 python \
    && pip3 install --upgrade pip
# Project-specific setup
# These steps might be different in your project
RUN mkdir /code
WORKDIR /code
ADD . /code
RUN pip3 install -r requirements.pip
Now update your docker-compose file as well:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    container_name: djcon # <-- preferred over a generated name
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
That's it. Now open a shell in the web service with docker exec -ti djcon bash (or the generated name instead of djcon) and start the Django shell with python manage.py shell. Typing from graph_tool.all import * will no longer throw an import error.
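If you prefer a non-interactive check, something along these lines should work (a sketch, assuming the stack above is up):
docker-compose exec web python3 -c "import graph_tool; print(graph_tool.__version__)"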

Related

Docker-compose executes django twice

I am running on Windows 10 and trying to set up a project via docker-compose and Django.
If you are interested, it will take you 3 minutes to follow this tutorial and you will get the same error as me: docs.docker.com/samples/django
When I run
docker-compose run app django-admin startproject app_settings .
I get the following error:
CommandError: /app/manage.py already exists. Overlaying a project into an existing directory won't replace conflicting files.
Or when I do this:
docker-compose run app python manage.py startapp core
I get the following error:
CommandError: 'core' conflicts with the name of an existing Python module and cannot be used as an app name. Please try another name.
It seems like the command is maybe executed twice? Not sure why.
Dockerfile
FROM python:3.9-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y \
    libpq-dev \
    gcc \
    && apt-get clean
COPY ./requirements.txt .
RUN pip install -r requirements.txt
RUN mkdir /app
WORKDIR /app
COPY ./app /app
docker-compose.yml
version: "3.9"
services:
  compute:
    container_name: compute
    build: ./backend
    # command: python manage.py runserver 0.0.0.0:8000
    # volumes:
    #   - ./backend/app:/app
    ports:
      - "8000:8000"
    environment:
      - POSTGRES_NAME=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    depends_on:
      - db
Try running your image without any arguments: you are already using the command keyword in your docker-compose file, so either let that command run on its own or remove that line from the file.
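To confirm what the container actually executes, and whether your arguments get appended to an existing entrypoint (one plausible way a command can appear to run twice), you can inspect the built image. The image name here is illustrative; compose derives it from your project directory and service name:
docker image inspect myproject_compute --format '{{.Config.Entrypoint}} {{.Config.Cmd}}'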

django migrations issues with postgres

I'm working on a Django project which is dockerized and uses Postgres for the database, but we are facing migration issues. Every time someone changes a model, the next dev who pulls from git and runs python manage.py migrate hits errors, because the migration files already exist: sometimes "table already exists", sometimes "table doesn't exist". So every time I have to apply migrations using --fake, but I guess migrating with the --fake flag every time is not a good approach.
docker-compose.yml
version: "3.8"
services:
  db:
    container_name: db
    image: "postgres"
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - dev.env
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=POSTGRES_DB
      - POSTGRES_USER=POSTGRES_USER
      - POSTGRES_PASSWORD=POSTGRES_PASSWORD
  app:
    container_name: app
    build:
      context: .
    command: bash -c "python manage.py runserver 0.0.0.0:8000"
    volumes:
      - ./core:/app
      - ./data/web:/vol/web
    env_file:
      - dev.env
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  postgres_data:
Dockerfile
FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /app
EXPOSE 8000
COPY ./core/ /app/
COPY ./scripts /scripts
# Install nano and the cron service
RUN apt-get update
RUN apt-get install -y cron
RUN apt-get install -y nano
RUN pip install --upgrade pip
COPY requirements.txt /app/
# Install dependencies and manage assets
RUN pip install -r requirements.txt && \
    mkdir -p /vol/web/static && \
    mkdir -p /vol/web/media
# Files for cron logs
RUN mkdir /cron
RUN touch /cron/django_cron.log
# Start the cron service
RUN service cron start
RUN service cron restart
RUN chmod +x /scripts/run.sh
CMD ["/scripts/run.sh"]
run.sh
#!/bin/sh
set -e
ls -la /vol/
ls -la /vol/web
whoami
python manage.py collectstatic --noinput
python manage.py migrate
service cron start
service cron restart
python manage.py crontab add
printenv > env.txt
cat /var/spool/cron/crontabs/root >> env.txt
cat env.txt > /var/spool/cron/crontabs/root
uwsgi --socket :9000 --workers 4 --master --enable-threads --module alectify.wsgi
Django offers the ability to create updated migrations when the models change (see https://docs.djangoproject.com/en/4.0/topics/migrations/#workflow for more information). You can generate and then apply the updated migrations using:
python manage.py makemigrations
python manage.py migrate
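That said, the alternating "table already exists" / "table doesn't exist" errors usually mean the migration files and the database have drifted apart between machines. A minimal sketch of a workflow that avoids --fake, assuming migration files are committed to git rather than regenerated on every machine (the paths are illustrative):
# Only the developer who changed models.py generates migrations and commits them:
docker-compose run --rm app python manage.py makemigrations
git add core/*/migrations/*.py && git commit -m "add migrations"
# Everyone else just pulls and applies:
git pull
docker-compose run --rm app python manage.py migrate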

How to start cron service on Dockerfile [duplicate]

This question already has answers here:
How to run a cron job inside a docker container?
(29 answers)
Docker Compose - How to execute multiple commands?
(20 answers)
Closed last year.
I have installed django-crontab==0.7.1 and added it to the INSTALLED_APPS Django setting. I'm trying to start the cron service during the Docker image build and register the cron tasks with python manage.py crontab add, but nothing happens.
Dockerfile:
FROM python:3.8-slim-buster
LABEL maintainer="info@albertosanmartinmartinez.es" version="1.0.0"
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update -y && apt-get install -y build-essential postgresql python-scipy python-numpy python-pandas libgdal-dev && apt-get clean && rm -rf /var/lib/apt/lists/*
RUN mkdir /industrialareas
COPY ./project /industrialareas/
COPY ./requirements.txt /industrialareas/
WORKDIR /industrialareas
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 8000
CMD service cron start
docker-compose.yml:
version: '3.7'
services:
  django:
    container_name: industrialareas_django_ctnr
    build:
      context: .
      dockerfile: Dockerfile-django
    restart: unless-stopped
    env_file: ./project/project/settings/.env
    command: python manage.py check
    command: python manage.py collectstatic --noinput
    command: python manage.py runserver 0.0.0.0:8000
    command: python manage.py crontab add
    volumes:
      - ./project:/industrialareas
    depends_on:
      - postgres
    ports:
      - 8000:8000
But when I go into the container and run service cron status, I get this error:
[FAIL] cron is not running ... failed!
Could anybody help me, please?
Thanks in advance.
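No answer was posted before the question was closed, but the linked duplicates cover both problems here: repeated command: keys in docker-compose override one another (only the last takes effect), and RUN service cron start only starts cron in a temporary build-time layer, not in the final container. A minimal sketch of a single command that chains the steps at container start (an assumption that cron should run alongside the dev server):
command: bash -c "service cron start && python manage.py crontab add && python manage.py runserver 0.0.0.0:8000"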

docker-compose.yml for production - Django and Celery

I'm looking to deploy a simple application which uses Django and Celery.
docker-compose.yml:
version: "3.8"
services:
  django:
    build: .
    container_name: django
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/usr/src/app/
    ports:
      - "8000:8000"
    environment:
      - DEBUG=1
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_BACKEND=djcelery.backends.database:DatabaseBackend
    depends_on:
      - redis
  celery:
    build: .
    command: celery -A core worker -l INFO
    volumes:
      - .:/usr/src/app
    environment:
      - DEBUG=1
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_BACKEND=djcelery.backends.database:DatabaseBackend
    depends_on:
      - django
      - redis
  redis:
    image: "redis:alpine"
volumes:
  pgdata:
Dockerfile:
FROM python:3.7
WORKDIR /app
ADD . /app
# Install dependencies for PyODBC
RUN apt-get update \
    && apt-get install unixodbc -y \
    && apt-get install unixodbc-dev -y \
    && apt-get install tdsodbc -y \
    && apt-get clean -y
# Install the ODBC driver in the docker image
RUN apt-get update \
    && curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
    && curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list \
    && apt-get update \
    && ACCEPT_EULA=Y apt-get install --yes --no-install-recommends msodbcsql17 \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* \
    && rm -rf /tmp/*
# Install requirements
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 5000
ENV NAME OpentoAll
CMD ["python", "app.py"]
Project directories: [screenshot from the original post omitted]
When I run "docker-compose up" locally, the celery worker is run and I am able to go to localhost:8000 to access the API to make asynchronous requests to a celery task.
Now I'm wondering how can I deploy this to the cloud environment? What would be the image I would need to build and deploy? Thanks
You will need to install an application server (e.g. gunicorn) in your Django container and then run it on, say, port 8000. You'll also need a web server (e.g. nginx), either in a container or installed on the host. The web server acts as a reverse proxy for gunicorn and also serves your static Django content.
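A minimal sketch of what the django service could look like with gunicorn (the core.wsgi module name is an assumption based on the celery -A core flag above; the nginx configuration is omitted):
django:
  build: .
  command: gunicorn core.wsgi:application --bind 0.0.0.0:8000 --workers 3
  expose:
    - "8000"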

Built Docker image cannot reach PostgreSQL

I am using Django and PostgreSQL in separate containers for a project. When I run the containers with docker-compose up, my Django application can connect to PostgreSQL. But when I build the image with docker build . --file Dockerfile --tag rengine:$(date +%s), the image builds successfully, yet in entrypoint.sh it is unable to resolve the host db.
My docker-compose file is
version: '3'
services:
  db:
    restart: always
    image: "postgres:12.3-alpine"
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_PORT=5432
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    networks:
      - rengine_network
  web:
    restart: always
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
    networks:
      - rengine_network
networks:
  rengine_network:
volumes:
  postgres_data:
Entrypoint
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z db 5432; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi
python manage.py migrate
# Load default engine types
python manage.py loaddata fixtures/default_scan_engines.json --app scanEngine.EngineType
exec "$@"
and Dockerfile
# Base image
FROM python:3-alpine
# Labels and credits
LABEL \
    name="reNgine" \
    author="Yogesh Ojha <yogesh.ojha11@gmail.com>" \
    description="reNgine is an automated pipeline of the recon process, useful for information gathering during web application penetration testing."
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apk update \
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    && apk add chromium \
    && apk add git \
    && pip install psycopg2 \
    && apk del build-deps
# Copy requirements
COPY ./requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt
# Download and install Go 1.13
COPY --from=golang:1.13-alpine /usr/local/go/ /usr/local/go/
# Environment vars
ENV DATABASE="postgres"
ENV GOROOT="/usr/local/go"
ENV GOPATH="/root/go"
ENV PATH="${PATH}:${GOROOT}/bin"
ENV PATH="${PATH}:${GOPATH}/bin"
# Download Go packages
RUN go get -u github.com/tomnomnom/assetfinder github.com/hakluke/hakrawler github.com/haccer/subjack
RUN GO111MODULE=on go get -u -v github.com/projectdiscovery/httpx/cmd/httpx \
    github.com/projectdiscovery/naabu/cmd/naabu \
    github.com/projectdiscovery/subfinder/cmd/subfinder \
    github.com/lc/gau
# Make directory for the app
RUN mkdir /app
WORKDIR /app
# Copy source code
COPY . /app/
# Collect static files
RUN python manage.py collectstatic --no-input --clear
RUN chmod +x /app/tools/get_subdomain.sh
RUN chmod +x /app/tools/get_dirs.sh
RUN chmod +x /app/tools/get_urls.sh
RUN chmod +x /app/tools/takeover.sh
# Run entrypoint.sh
ENTRYPOINT ["/app/docker-entrypoint.sh"]
When I run the built image, the entrypoint.sh script fails with "no such host db".
Can somebody help me, where am I going wrong?
Based on a comment, the error appeared during
docker run rengine:1234
so the error is expected in this case: the hostname db only resolves inside the docker-compose network. Within a docker-compose stack, one service can reach another by its service name, but with docker run each container runs in an isolated environment.
You have two options to resolve this issue.
First, run the DB container and use a legacy --link to connect the application container to it:
docker run -it --name db mydb_image
# now link the application container
docker run -it --link db:db rengine:1234
Now the application container will be able to resolve the host db.
The second option is to create a docker network and run both containers in the same network:
docker network create mynetwork
docker run -itd --network=mynetwork --name db mydb_image
docker run -itd --network=mynetwork rengine:1234
But since you are already using docker-compose, it is better to test with docker-compose.
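For completeness, a quick connectivity check using the compose file shown above (a sketch; note that the image's entrypoint will run its migrate and loaddata steps before the given command):
docker-compose up -d db
docker-compose run --rm web sh -c 'nc -z db 5432 && echo "db is reachable"'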