django with docker doesn't see pillow - django

I'm trying to deploy my Django project. I'm almost done, but Django can't see the Pillow installed in the Docker container. I'm sure it's installed; pip reports this:
sudo docker-compose -f docker-compose.prod.yml exec web pip install pillow
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: pillow in /usr/local/lib/python3.8/site-packages (8.1.0)
But when I try to migrate the database I see this:
ERRORS:
history_main.Exhibit.image: (fields.E210) Cannot use ImageField because Pillow is not installed.
HINT: Get Pillow at https://pypi.org/project/Pillow/ or run command "python -m pip install Pillow".
history_main.MainUser.avatar: (fields.E210) Cannot use ImageField because Pillow is not installed.
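A quick way to see whether this is an install problem or an import problem: the check behind fields.E210 is essentially an attempt to import PIL, so running that import with the interpreter inside the container shows whether the environment Django actually uses can load Pillow (a wheel that installed fine but is missing a shared library will fail here):

```shell
# The fields.E210 check boils down to "can this interpreter import PIL?".
# Prints exactly one line either way.
python3 -c "from PIL import Image" 2>/dev/null \
    && echo "Pillow importable" \
    || echo "Pillow NOT importable"
```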
Here are the parts of the Dockerfile where Pillow gets installed:
RUN apk update \
    && apk add postgresql-dev gcc python3-dev musl-dev jpeg-dev zlib-dev build-base
RUN pip install --upgrade pip
COPY ./req.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r req.txt
...
RUN apk update && apk add libpq
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/req.txt .
RUN pip install --no-cache /wheels/*
docker-compose.yml:
version: '3.7'

services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    env_file:
      - ./.env.prod
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env.prod.db
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      - 1337:80
    depends_on:
      - web

volumes:
  postgres_data:
  static_volume:
  media_volume:
Dockerfile for web:
###########
# BUILDER #
###########
# pull official base image
FROM python:3.8.3-alpine as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
    && apk add postgresql-dev gcc python3-dev musl-dev
# lint
RUN pip install --upgrade pip
RUN pip install flake8
COPY . .
RUN flake8 --ignore=E501,F401 .
# install dependencies
COPY ./req.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r req.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.8.3-alpine
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup -S app && adduser -S app -G app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
# install dependencies
RUN apk update && apk add libpq
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/req.txt .
RUN pip install --no-cache /wheels/*
# copy entrypoint-prod.sh
COPY ./entrypoint.prod.sh $APP_HOME
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.prod.sh"]

I fixed this problem by replacing ImageField with FileField in the models.py file.
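Note that swapping ImageField for FileField sidesteps Django's check rather than fixing the import. A plausible root cause with this multi-stage setup (an assumption, not confirmed in the post): the Pillow wheel is built in the builder stage, which has jpeg-dev and zlib-dev, but the final Alpine stage only installs libpq, so PIL fails at import time for lack of the JPEG/zlib shared libraries, and Django reports that as "Pillow is not installed". A sketch of the final-stage fix:

```dockerfile
# FINAL stage: install the runtime libraries Pillow links against.
# (jpeg-dev/zlib-dev were only present in the builder stage; without
# their .so files here, "from PIL import Image" raises ImportError.)
RUN apk update && apk add libpq jpeg-dev zlib-dev
```

After rebuilding, importing PIL inside the running container should succeed and the E210 check should pass with ImageField restored.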

Related

Django Docker ElasticBeanstalk fails

I'm new to Docker and EB deployment. I want to deploy Django with Docker on EB. Here's what I've done so far.
I created a Dockerfile:
# Pull base image
FROM python:3.9.16-slim-buster
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update && \
    apt-get install -y binutils libproj-dev gdal-bin python-gdal python3-gdal libpq-dev python-dev libcurl4-openssl-dev libssl-dev gcc
# install dependencies
COPY . /code
WORKDIR /code/
RUN pip install -r requirements.txt
# set work directory
WORKDIR /code/app
Then in docker-compose.yml:
version: '3.7'

services:
  web:
    build: .
    command: python /code/hike/manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
    volumes:
      - .:/code
It runs locally, but the deployment fails, and when I check the logs, it says:
pg_config is required to build psycopg2 from source.
as if it's not using the Dockerfile. I read somewhere that I should set up Dockerrun.aws.json, but I have no idea what to write in it!
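Two things worth checking (both assumptions, since the full EB logs aren't shown). First, the pg_config error means psycopg2 is being compiled from source; listing psycopg2-binary in requirements.txt avoids needing pg_config at all. Second, for the single-container Docker platform, a minimal version-1 Dockerrun.aws.json only needs to declare the exposed port. A sketch, assuming the container listens on 8000:

```json
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": 8000
    }
  ]
}
```

With a Dockerfile at the application root, Elastic Beanstalk builds the image itself; if the logs show pip running outside Docker, the environment may have been created on the Python platform rather than the Docker platform, which is worth verifying in the EB console.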

Docker socket is not accessible in Dockerfile.prod

I have the following docker-compose file, which builds and starts 4 containers. One of them is a Django container, for which I mount /var/run/docker.sock in volumes so that the Django container can access the host Docker engine.
version: '3.8'

services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
      - /var/run/docker.sock:/var/run/docker.sock
    expose:
      - 8000
    env_file:
      - ./.env.prod
    depends_on:
      - postgresdb
    restart: always
  postgresdb:
    container_name: postgresdb
    image: timescale/timescaledb:latest-pg11
    volumes:
      - ./:/imports
      - postgres_data:/var/lib/postgresql/data/
    command: 'postgres -cshared_preload_libraries=timescaledb'
    ports:
      - "5432:5432"
    env_file:
      - ./.env.prod.db
    restart: always
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      - 80:80
    depends_on:
      - web
    restart: always
  volttron1:
    container_name: volttron1
    hostname: volttron1
    build:
      context: ./volttron
      dockerfile: Dockerfile
    image: volttron/volttron:develop
    volumes:
      - ./volttron/platform_config.yml:/platform_config.yml
      - ./volttron/configs:/home/volttron/configs
      - ./volttron/volttronThingCerts:/home/volttron/volttronThingCerts
    environment:
      - CONFIG=/home/volttron/configs
      - LOCAL_USER_ID=1000
    network_mode: host
    restart: always
    mem_limit: 700m
    cpus: 1.5

volumes:
  postgres_data:
  static_volume:
  media_volume:
The content of Dockerfile.prod for the Django web container is the following:
###########
# BUILDER #
###########
# pull official base image
FROM python:3.9.6-alpine as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev
RUN apk add libc-dev
RUN apk add --update-cache
RUN apk add --update alpine-sdk && apk add libffi-dev openssl-dev && apk --no-cache --update add build-base
# lint
RUN pip install -U pip
RUN pip install flake8==3.9.2
COPY . .
RUN flake8 --ignore=E501,F401 ./hello_django
# install dependencies
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.9.6-alpine
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup -S app && adduser -S app -G app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
RUN mkdir $APP_HOME/mediafiles
WORKDIR $APP_HOME
# install dependencies
RUN apk update && apk add libpq
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements.txt .
RUN pip install --no-cache /wheels/*
# copy entrypoint.prod.sh
COPY ./entrypoint.prod.sh .
RUN sed -i 's/\r$//g' $APP_HOME/entrypoint.prod.sh
RUN chmod +x $APP_HOME/entrypoint.prod.sh
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
RUN chmod 666 /var/run/docker.sock
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.prod.sh"]
The problem is the statement RUN chmod 666 /var/run/docker.sock, which raises the following error:
chmod: cannot access "/var/run/docker.sock": No such file or directory
But why am I getting this error when I have mounted /var/run/docker.sock in the docker-compose.yml file?
You're trying to chmod the docker.sock file while building the image, but the volume is only mounted and used when the container runs. If needed, you'll probably have to change the permissions of the socket file on the host instead.
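Concretely, any logic that touches the socket has to move from a build-time RUN step to something executed at container start, e.g. the top of entrypoint.prod.sh. A minimal sketch (note the image switches to the unprivileged app user, so a chmod there would still fail; permissions usually have to be adjusted on the host or via the container user's group membership):

```shell
#!/bin/sh
# Runs when the container starts, which is the earliest point at which
# the docker-compose volume mount (/var/run/docker.sock) can exist.
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
    echo "docker.sock is mounted"
else
    echo "docker.sock is missing"
fi
```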

docker-compose up runs the image, but docker run with the same image does not?

docker-compose up is working fine (a screenshot was attached).
Here is the docker-compose file
version: '3.0'

services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:9090
    ports:
      - 9090:9090
    env_file:
      - .env
Dockerfile
FROM python:3.7-alpine
RUN mkdir -p /app
COPY . /app
COPY .env /app
WORKDIR /app/
RUN apk --update add python3-dev
RUN apk add mariadb-dev mariadb-client
RUN apk --update add python3 py-pip openssl ca-certificates py-openssl wget
RUN apk update && \
    apk upgrade --no-cache && \
    apk add --no-cache \
    gcc g++ make libc-dev linux-headers
RUN pip install --upgrade pip
RUN pip install uwsgi
RUN pip install -r requirements.txt --default-timeout=10000 --no-cache-dir
EXPOSE 9090
docker run testbackend_web:latest
The above command is not working with the same build. Can someone help? (A screenshot of the container error was attached.)
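One difference worth ruling out: docker-compose up applies env_file and ports from docker-compose.yml, while a bare docker run does not, so the container starts without the variables from .env and without a published port. A sketch of a docker run invocation closer to what Compose does (image name as given in the post):

```shell
# Pass the same environment file and port mapping that docker-compose
# would have applied from docker-compose.yml.
docker run --env-file .env -p 9090:9090 testbackend_web:latest
```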

docker-compose build not picking up updated requirements.txt

I want to use Docker to publish my Django project.
I have created a docker-compose.yml file, a .dockerignore and a Dockerfile like this one:
FROM python:3.6-alpine
RUN apk add --no-cache gcc musl-dev linux-headers
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev
RUN mkdir /code
COPY requirements.txt /code
WORKDIR /code
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "127.0.0.1:8000"]
When I first run docker-compose I get an error installing a package listed in my requirements.txt file. At this point, I remove the package from the file and run:
docker-compose down
docker-compose build --no-cache
Here is my docker-compose.yml:
version: '3'

networks:
  mynetwork:
    driver: bridge

services:
  db:
    image: postgres
    restart: always
    ports:
      - "5432:5432"
    networks:
      - mynetwork
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypass
      POSTGRES_DB: mydb
    volumes:
      - ./data:/var/lib/postgresql/data
  web:
    build: .
    networks:
      - mynetwork
    volumes:
      - .:/DEV
    ports:
      - "8000:8000"
    depends_on:
      - db
When pip install -r requirements.txt executes again, the package that causes the issue is still there... how can I clear the cache and use my newly saved requirements.txt file?
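If the stale package is coming from Docker's layer cache, or from an old image that up keeps reusing, a sequence like this forces a clean rebuild (a sketch; it's also worth confirming that the edited requirements.txt sits inside the build context and is not excluded by .dockerignore):

```shell
docker-compose down
# rebuild the web image, ignoring all cached layers
docker-compose build --no-cache web
# start from the freshly built image instead of reusing old containers
docker-compose up --force-recreate
```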

Dockerizing an already existing app and database

I am trying to Dockerize an app that is already created (database included).
I've got the proper files in place:
docker-compose.yml
dockerfile
requirements.txt
I'm having trouble with the database part -
How do I configure the docker-compose.yml file to point to the database that is already created?
Here's why I ask: my understanding of Docker is that you create your base app, then "Dockerize" it, i.e. package it into an image that you can distribute. I'm a beginner at this, so that may be why I'm not understanding.
Here is my current docker-compose.yml:
version: '2'

services:
  db:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=qwerty
      - POSTGRES_DB=ar_db
    ports:
      - "5433:5433"
  web:
    build: .
    command: python2.7 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
and Dockerfile:
############################################################
# Dockerfile to run a Django-based web application
# Based on an Ubuntu Image
############################################################
# Set the base image to use to Ubuntu
FROM debian:8.8
# Set the file maintainer (your name - the file's author)
MAINTAINER HeatherJ
# Update the default application repository sources list
RUN apt-get update && apt-get -y upgrade
RUN apt-get install -y python python-pip libpq-dev python-dev
#install git
RUN apt-get update && apt-get install -y --no-install-recommends \
    git && rm -rf /var/lib/apt/lists/*
# Set env variables used in this Dockerfile (add a unique prefix, such as DOCKYARD)
# Local directory with project source
ENV DOCKYARD_SRC=EPIC_AR
# Directory in container for all project files
ENV DOCKYARD_SRVHOME=/EPIC_AR
# Directory in container for project source files
ENV DOCKYARD_SRVPROJ=/home/epic/EPIC_AR/EPIC_AR
# Create application subdirectories
WORKDIR $DOCKYARD_SRVHOME
RUN mkdir media static logs
VOLUME ["$DOCKYARD_SRVHOME/media/", "$DOCKYARD_SRVHOME/logs/"]
# Copy application source code to SRCDIR
COPY $DOCKYARD_SRC $DOCKYARD_SRVPROJ
# Install Python dependencies
RUN pip install -r $DOCKYARD_SRVPROJ/requirements.txt
# Copy entrypoint script into the image
WORKDIR $DOCKYARD_SRVPROJ
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
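As for pointing at the already-created database, there are two common patterns, depending on where the data lives. The official postgres image executes any *.sql/*.sh files found in /docker-entrypoint-initdb.d the first time it starts with an empty data directory, so a pg_dump of the existing database can seed the container; alternatively, Django's settings can point at a database running outside Compose entirely. A sketch of the seeding variant (the dump filename is a placeholder):

```yaml
db:
  image: postgres:9.6
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=qwerty
    - POSTGRES_DB=ar_db
  volumes:
    # ar_db_dump.sql is a placeholder name; pg_dump output mounted here
    # runs automatically on first start with an empty data directory
    - ./ar_db_dump.sql:/docker-entrypoint-initdb.d/ar_db_dump.sql
  ports:
    # host port 5433 -> container port 5432 (postgres listens on 5432)
    - "5433:5432"
```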