I'm new to Docker and Elastic Beanstalk (EB) deployment, and I want to deploy Django with Docker on EB.
Here's what I've done so far.
I created a Dockerfile:
# Pull base image
FROM python:3.9.16-slim-buster
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update && \
    apt-get install -y binutils libproj-dev gdal-bin python-gdal python3-gdal libpq-dev python-dev libcurl4-openssl-dev libssl-dev gcc
# install dependencies
COPY . /code
WORKDIR /code/
RUN pip install -r requirements.txt
# set work directory
WORKDIR /code/app
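As a quick local sanity check, the image can be built and run like this (a sketch: the hike-web tag is a placeholder, and the manage.py path is taken from the compose command below):
docker build -t hike-web .
docker run --rm -p 8000:8000 hike-web python /code/hike/manage.py runserver 0.0.0.0:8000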
Then, in docker-compose.yml:
version: '3.7'
services:
  web:
    build: .
    command: python /code/hike/manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
    volumes:
      - .:/code
It runs locally, but on deployment it fails, and when I check the logs, it says:
pg_config is required to build psycopg2 from source.
It's as if it's not using the Dockerfile. I read somewhere that I should set up Dockerrun.aws.json, but I have no idea what to write in it!
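For reference, a minimal single-container Dockerrun.aws.json (version 1) looks roughly like this when the image is built from the Dockerfile in the bundle; the ContainerPort should match the port the app listens on:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "8000"
    }
  ]
}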
I am running Windows 10 and trying to set up a project via docker-compose and Django.
If you are interested, it will take you 3 minutes to follow this tutorial and you will get the same error as me: docs.docker.com/samples/django
When I run
docker-compose run app django-admin startproject app_settings .
I get the following error
CommandError: /app/manage.py already exists. Overlaying a project into an existing directory won't replace conflicting files.
Or when I do this
docker-compose run app python manage.py startapp core
I get the following error
CommandError: 'core' conflicts with the name of an existing Python module and cannot be used as an
app name. Please try another name.
It seems like the command is maybe being executed twice? I'm not sure why.
Dockerfile:
FROM python:3.9-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y \
    libpq-dev \
    gcc \
    && apt-get clean
COPY ./requirements.txt .
RUN pip install -r requirements.txt
RUN mkdir /app
WORKDIR /app
COPY ./app /app
docker-compose.yml:
version: "3.9"
services:
  compute:
    container_name: compute
    build: ./backend
    # command: python manage.py runserver 0.0.0.0:8000
    # volumes:
    #   - ./backend/app:/app
    ports:
      - "8000:8000"
    environment:
      - POSTGRES_NAME=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    depends_on:
      - db
Try running your image without any arguments, since you are already using the command keyword in your docker-compose file; alternatively, just remove that line from the file.
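A rough sketch of both options, assuming the service name compute from the compose file above:
# Option 1: run the service with no extra arguments:
docker-compose run --rm compute
# Option 2: with the command: line removed from docker-compose.yml,
# run the one-off command on its own:
docker-compose run --rm compute django-admin startproject app_settings .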
I'm trying to deploy my Django project. I almost did it, but Django can't see the Pillow package installed in the Docker container. I'm sure it's installed; pip tells me this:
sudo docker-compose -f docker-compose.prod.yml exec web pip install pillow
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: pillow in /usr/local/lib/python3.8/site-packages (8.1.0)
But when I try to migrate the DB, I see this:
ERRORS:
history_main.Exhibit.image: (fields.E210) Cannot use ImageField because Pillow is not installed.
HINT: Get Pillow at https://pypi.org/project/Pillow/ or run command "python -m pip install
Pillow".
history_main.MainUser.avatar: (fields.E210) Cannot use ImageField because Pillow is not installed.
Here are the parts of the Dockerfile where Pillow gets installed:
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev jpeg-dev zlib-dev build-base
RUN pip install --upgrade pip
COPY ./req.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r req.txt
...
RUN apk update && apk add libpq
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/req.txt .
RUN pip install --no-cache /wheels/*
docker-compose:
version: '3.7'
services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    env_file:
      - ./.env.prod
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env.prod.db
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      - 1337:80
    depends_on:
      - web
volumes:
  postgres_data:
  static_volume:
  media_volume:
The full Dockerfile.prod for web:
###########
# BUILDER #
###########
# pull official base image
FROM python:3.8.3-alpine as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
# lint
RUN pip install --upgrade pip
RUN pip install flake8
COPY . .
RUN flake8 --ignore=E501,F401 .
# install dependencies
COPY ./req.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r req.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.8.3-alpine
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup -S app && adduser -S app -G app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
# install dependencies
RUN apk update && apk add libpq
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/req.txt .
RUN pip install --no-cache /wheels/*
# copy entrypoint-prod.sh
COPY ./entrypoint.prod.sh $APP_HOME
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.prod.sh"]
I fixed this problem by replacing ImageField with FileField in the models.py file.
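A minimal sketch of that swap, reusing the Exhibit model named in the error output (the upload_to value is just an assumption):
# models.py (sketch)
from django.db import models

class Exhibit(models.Model):
    # Before: models.ImageField(...) triggered fields.E210 because it requires Pillow.
    # FileField skips the Pillow check, at the cost of image-specific validation.
    image = models.FileField(upload_to="exhibits/")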
I am a newbie to Docker. I have created one Django project and can run it in Docker. However, I have started a second project and have encountered a problem.
I created a virtual env and entered it
pipenv install django~=3.1.0 && pipenv shell
I created a Django project
django-admin startproject config .
I ran it within the virtualenv
python manage.py runserver
and could see the Django spaceship
I then exited the virtualenv and created a Dockerfile
Dockerfile
# Pull base image
FROM python:3.8
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY Pipfile Pipfile.lock /code/
RUN pip install pipenv && pipenv install --system
# Copy project
COPY . /code/
I ran
docker build .
and it reported a successful build
I created a docker-compose.yml file
docker-compose.yml
version: '3.8'
services:
  web:
    build: .
    command: python /code/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - 8000:8000
When I run
docker-compose up
it complains
ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
I have read in the comments to this question that virtual envs should not be used in Dockerfiles, so I replaced
RUN pip install pipenv && pipenv install --system
with
RUN pip install django~=3.1.0
but I still get the same error.
What is wrong?
Have you tried installing your list of requirements from a separate file, something like this?
COPY requirements.txt /code/requirements.txt
WORKDIR /code
RUN pip install -r requirements.txt
Once you have it installed, you can run docker-compose run web /bin/sh to start a shell and then use django-admin startproject to create a Django project, as sketched below. You may need to change the path in the docker-compose file so that it reflects where your manage.py file ended up (I moved mine to the root).
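A rough sketch of that one-off session (the project name config mirrors the question and is otherwise an assumption):
docker-compose run web /bin/sh
# inside the container:
django-admin startproject config .
exit
I was able to get it working with the following files: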
requirements.txt
django==3.1.0
docker-compose.yml
version: '3.8'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - 8000:8000
Dockerfile
# Pull base image
FROM python:3.8
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY requirements.txt /code/requirements.txt
RUN pip install -r requirements.txt
# Copy project
COPY . /code/
File tree looks like this: (directory-tree screenshot omitted)
From the top-level maps directory, I'm able to install the gunicorn package:
(venv) localhost:maps davea$ pip3 install gunicorn
Collecting gunicorn
Downloading gunicorn-20.0.4-py2.py3-none-any.whl (77 kB)
|████████████████████████████████| 77 kB 1.2 MB/s
Requirement already satisfied: setuptools>=3.0 in ./web/venv/lib/python3.7/site-packages (from gunicorn) (45.1.0)
Installing collected packages: gunicorn
Successfully installed gunicorn-20.0.4
Below is my docker-compose.yml file
version: '3'
services:
  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000
  apache:
    restart: always
    build: ./apache/
    ports:
      - "80:80"
    #volumes:
    #  - web-static:/www/static
    links:
      - web:web
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3406"
    volumes:
      - my-db:/var/lib/mysql
volumes:
  my-db:
And then I have web/Dockerfile as follows ...
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc \
&& rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
RUN mkdir -p /app/
WORKDIR /app/
RUN pip3 freeze > requirements.txt
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . /app/
However, when I build/start my docker instance, I'm told it can't find my "gunicorn" command ...
(venv) localhost:maps davea$ docker-compose up
Starting maps_web_1 ...
Starting maps_web_1 ... error
ERROR: for maps_web_1 Cannot start service web: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/usr/local/bin/gunicorn\": stat /usr/local/bin/gunicorn: no such file or directory": unknown
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/usr/local/bin/gunicorn\": stat /usr/local/bin/gunicorn: no such file or directory": unknown
ERROR: Encountered errors while bringing up the project.
Your Docker container is a totally isolated environment. Nothing you install on the host is visible inside the container; nothing that happens inside the container is accessible on the host. There are ways to bridge this boundary (with docker run -v bind mounts), but that's not possible during the docker build phase.
In this example your local source tree has a requirements.txt file that lists out the packages that need to be installed when the container is created. (The RUN pip freeze line has no effect; the COPY on the line after it copies the file from your local source tree.) It's enough to add the dependency to the requirements.txt file:
gunicorn
In your development environment, you can re-run pip install -r requirements.txt to update the packages installed in your virtual environment. When you re-run docker build, having this line in the requirements.txt file will cause gunicorn to be installed when the image is built.
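Concretely, that loop looks something like this (paths assumed from the question's layout, with requirements.txt under web/):
# on the host, inside your virtual environment:
pip install -r web/requirements.txt
# then rebuild the image so the container picks up the new dependency:
docker-compose build web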
You can clean up the Dockerfile a little bit. The resulting Dockerfile would be a pretty typical one for Python packages with C dependencies:
# Start from a totally clean environment with Python installed,
# but no non-system libraries and nothing from your host system.
FROM python:3.7-slim
# Install C dependencies.
# It's important to do apt-get update and install in the
# same command. It's more efficient to only do it once.
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
gcc \
libmariadb-dev \
libmariadb-dev-compat
# Update pip
RUN python -m pip install --upgrade pip
# Create the application directory and point there
# (WORKDIR will implicitly create it)
WORKDIR /app/
# Install all of the Python dependencies. These are
# listed, one to a line, in the requirements.txt file,
# possibly with version constraints. Having this as
# a separate block allows Docker to not repeat it if
# only your application code changes.
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
# Copy in the rest of the application.
COPY . .
# Specify what port your application uses, and the
# default command to use when launching the container.
EXPOSE 8000
CMD /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000
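If you prefer, the exec form of CMD does the same thing here while avoiding an extra shell wrapper:
CMD ["gunicorn", "maps.wsgi:application", "-w", "2", "-b", ":8000"]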
I need to use some graph-tool calculations in my Django project. So I started with docker pull tiagopeixoto/graph-tool and then added it to my Docker-compose file:
version: '3'
services:
  db:
    image: postgres
  graph-tool:
    image: dcagatay/graph-tool
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
      - graph-tool
When I bring up my docker-compose I get this line:
project_graph-tool_1_87e2d144b651 exited with code 0
And finally, when my Django project starts, I cannot import modules from graph-tool, like:
from graph_tool.all import *
If I try to work directly in this docker image using:
docker run -it -u user -w /home/user tiagopeixoto/graph-tool ipython
everything goes fine.
What am I doing wrong and how can I fix it and finally attach graph-tool to Django? Thanks!
Rather than using a separate docker image for graph-tool, I think it's better to install it within the same Dockerfile you are using for Django. For example, update your current Dockerfile:
# Using the Ubuntu base image
FROM ubuntu:16.04
ENV PYTHONUNBUFFERED 1
ENV C_FORCE_ROOT true
# python3-graph-tool specific requirements for installation in Ubuntu from documentation
RUN echo "deb http://downloads.skewed.de/apt/xenial xenial universe" >> /etc/apt/sources.list && \
echo "deb-src http://downloads.skewed.de/apt/xenial xenial universe" >> /etc/apt/sources.list
RUN apt-key adv --keyserver pgp.skewed.de --recv-key 612DEFB798507F25
# Install dependencies
RUN apt-get update \
&& apt-get install -y python3-pip python3-dev \
&& apt-get install --yes --no-install-recommends --allow-unauthenticated python3-graph-tool \
&& cd /usr/local/bin \
&& ln -s /usr/bin/python3 python \
&& pip3 install --upgrade pip
# Project specific setups
# These steps might be different in your project
RUN mkdir /code
WORKDIR /code
ADD . /code
RUN pip3 install -r requirements.pip
Now update your docker-compose file as well:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    container_name: djcon # <-- preferred over a generated name
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
That's it. Now open your web service's shell with docker exec -ti djcon bash (or whatever generated name you have instead of djcon) and start the Django shell with python manage.py shell. Then type from graph_tool.all import * and it will not throw any import error.
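As a quick non-interactive check, a sketch using Django's shell -c option:
docker exec -ti djcon python3 manage.py shell -c "from graph_tool.all import Graph; print(Graph())"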