How to check out branches when files were created inside a Docker image? - django

In my pet project I set up docker-compose for development. The issue is that I created Django migrations inside the Docker container and committed them. After checking out the main branch I see an error; the files become untracked and I cannot merge the sub-branch into main.
git checkout master
warning: unable to unlink 'apps/app_name/migrations/0001_initial.py': Permission denied
warning: unable to unlink 'apps/app_name/migrations/0002_auto_20190127_1815.py': Permission denied
warning: unable to unlink 'apps/app_name/migrations/__init__.py': Permission denied
Switched to branch 'master'
I also tried it with sudo. All the new files then appear as untracked in the main branch, but no new commits are added (according to git log).
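For context, a quick way to confirm what is going on (a sketch, not part of the original post): the container in the setup below runs as root, so the migrations directory it creates in the bind-mounted source tree is owned by root on the host.
# Check ownership of the files git complains about (run on the host).
ls -ln apps/app_name/migrations/
# Entries owned by uid/gid 0 (root) were written by the container. If root also owns the
# migrations directory itself, your user cannot remove entries from it, which is exactly
# what the "unable to unlink ... Permission denied" warnings report.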
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
  web:
    build:
      dockerfile: ./compose/Dockerfile.dev
      context: .
    command: /start
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db:db
Dockerfile
FROM python:3.6.8-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update \
# psycopg2 dependencies
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev \
# Pillow dependencies
&& apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \
# CFFI dependencies
&& apk add libffi-dev py-cffi \
# Translations dependencies
&& apk add gettext \
# https://docs.djangoproject.com/en/dev/ref/django-admin/#dbshell
&& apk add postgresql-client
RUN mkdir /code
WORKDIR /code
COPY /requirements /code/requirements/
RUN pip install -r requirements/dev.txt
COPY . /code/
COPY ./compose/start /start
RUN sed -i 's/\r//' /start
RUN chmod +x /start
start.sh
#!/bin/sh
set -o errexit
set -o pipefail
set -o nounset
python manage.py migrate
python manage.py runserver_plus 0.0.0.0:8000

Dockerfile
FROM python:3.6.8-alpine
ENV PYTHONUNBUFFERED 1
ARG CONTAINER_USER="python"
ARG CONTAINER_UID="1000"
ARG CONTAINER_GID="1000"
ARG WORKSPACE=/home/"${CONTAINER_USER}"/code
RUN apk update \
# psycopg2 dependencies
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev \
# Pillow dependencies
&& apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \
# CFFI dependencies
&& apk add libffi-dev py-cffi \
# Translations dependencies
&& apk add gettext \
# https://docs.djangoproject.com/en/dev/ref/django-admin/#dbshell
&& apk add postgresql-client && \
addgroup -g "${CONTAINER_GID}" -S "${CONTAINER_USER}" && \
adduser -s /bin/ash -u "${CONTAINER_UID}" -G "${CONTAINER_USER}" -h /home/"${CONTAINER_USER}" -D "${CONTAINER_USER}"
USER "${CONTAINER_USER}"
WORKDIR "${WORKSPACE}"
COPY ./requirements/dev.txt "${WORKSPACE}"/requirements.txt
RUN pip install -r requirements.txt
It is bad practice to run anything in a Docker container as the root user, just as you wouldn't do it on your own computer. I added a user python that has the same uid as your user on the host, assuming your operating system user has uid 1000, as is normal on Linux machines. If you are on another OS this may not hold and you will need to find the equivalent for your specific OS.
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
  web:
    build:
      dockerfile: ./compose/Dockerfile.dev
      context: .
      args:
        CONTAINER_UID: ${UID:-1000}
        CONTAINER_GID: ${GID:-1000}
    command: ./compose/start
    volumes:
      - .:/home/python/code
    ports:
      - "8000:8000"
    depends_on:
      - db
links is deprecated and has been replaced by depends_on, so it is not necessary to use both.
In order to build the container with the same filesystem permissions as your user, I have added args to the build section of the compose file and use the OS values for $UID and $GID; if they are not set, they default to 1000.
On Linux you can see your own values with id -u for $UID and id -g for $GID.
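A minimal sketch (not part of the original answer) of how those values can actually reach docker-compose: in bash, UID is set but not exported and GID does not exist at all, so one option is to write them into the .env file that docker-compose reads for variable substitution.
# Run once on the host, next to docker-compose.yml.
echo "UID=$(id -u)" >> .env
echo "GID=$(id -g)" >> .env
docker-compose build web
# Verify the container user matches your host user:
docker-compose run --rm web id   # expect something like uid=1000(python) gid=1000(python)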
Shell Script
Make it executable in your repo and commit the change so that you don't need to do it each time you build the docker image.
chmod 700 ./compose/start
I don't use +x because that is bad practice in terms of security, since it would allow everyone to execute the script.
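A sketch of how the mode change can be committed so a fresh clone keeps it (note that git itself only stores the executable bit, i.e. the file ends up as mode 755 in the index regardless of the exact chmod value):
chmod 700 ./compose/start
git add ./compose/start                      # git records the executable bit
git commit -m "Make compose/start executable"
# On filesystems that do not track the executable bit (e.g. Git for Windows on NTFS),
# the bit can be set directly in the index instead:
git update-index --chmod=+x ./compose/start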
Summary
Any files created inside the container from now on will have your uid and gid (1000 by default), so no more permission conflicts should occur.
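Files that the old root-running container already created keep their root ownership, so a one-time cleanup on the host may still be needed before git can unlink them; a sketch, assuming the migrations live under apps/ as in the error messages:
sudo chown -R "$(id -u)":"$(id -g)" apps/
git checkout master    # the "unable to unlink ... Permission denied" warnings should be gone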

Related

How to add django-crontab in a Docker container with a non-root user (Django project)

I am working on a Django project which runs in a Docker container based on python:3.9-alpine3.13.
FROM python:3.9-alpine3.13
LABEL maintainer=<do not want to show>
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
COPY ./app /app
COPY ./scripts /scripts
WORKDIR /app
EXPOSE 8000
RUN python -m venv /py && \
apk add --update --no-cache postgresql-client && \
apk add --update --no-cache --virtual .tmp-deps \
build-base postgresql-dev musl-dev gcc python3-dev bash openssl-dev libffi-dev libsodium-dev linux-headers && \
apk add jpeg-dev zlib-dev libjpeg && \
apk add --update busybox-suid && \
apk --no-cache add dcron libcap && \
/py/bin/pip install --upgrade pip && \
/py/bin/pip install -r /requirements.txt && \
apk del .tmp-deps && \
adduser --disabled-password --no-create-home app &&\
mkdir -p /vol/web/static && \
chown -R app:app /vol && \
chmod -R 755 /vol && \
chmod -R +x /scripts
ENV PATH="/scripts:/py/bin:$PATH"
USER app
CMD ["run.sh"]
I used this tutorial for the implementation, and I don't think the error is caused by it. This is the error I am getting:
sumit#LAPTOP-RT539Q9C MINGW64 ~/Desktop/RentYug/rentyug-backend-deployment (main)
$ docker-compose run --rm app sh -c "python manage.py crontab show"
WARNING: Found orphan containers (rentyug-backend-deployment_proxy_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Creating rentyug-backend-deployment_app_run ... done
/bin/sh: /usr/bin/crontab: Permission denied
Currently active jobs in crontab:
I used these lines for that
apk add --update busybox-suid && \
apk --no-cache add dcron libcap && \
I found my answer: cron should run as the root user. I found that answer there.
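For reference, a quick way to see why crontab is refused for the non-root app user (a sketch, not from the original question): classic cron implementations rely on the crontab binary being setuid root so it can write to the system spool, and that bit may be missing for the user the container runs as.
# Inspect the crontab binary inside the container (service name "app" as in the question).
docker-compose run --rm --user root app sh -c "ls -l /usr/bin/crontab"
# An 's' in the owner bits (e.g. -rwsr-xr-x root root) means setuid root is set; a plain
# 'rwx' means a non-root user gets "Permission denied" when installing a crontab.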

django with docker doesn't see pillow

I'm trying to deploy my Django project. I almost did it, but Django can't see Pillow installed in the Docker container. I'm sure that it is installed; pip tells me this:
sudo docker-compose -f docker-compose.prod.yml exec web pip install pillow
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: pillow in /usr/local/lib/python3.8/site-packages (8.1.0)
But when I try to migrate the DB I see this:
ERRORS:
history_main.Exhibit.image: (fields.E210) Cannot use ImageField because Pillow is not installed.
HINT: Get Pillow at https://pypi.org/project/Pillow/ or run command "python -m pip install
Pillow".
history_main.MainUser.avatar: (fields.E210) Cannot use ImageField because Pillow is not installed.
Here are the parts of the Dockerfile where Pillow gets installed:
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev jpeg-dev zlib-dev build-base
RUN pip install --upgrade pip
COPY ./req.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r req.txt
...
RUN apk update && apk add libpq
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/req.txt .
RUN pip install --no-cache /wheels/*
docker-compose:
version: '3.7'
services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    env_file:
      - ./.env.prod
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env.prod.db
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      - 1337:80
    depends_on:
      - web
volumes:
  postgres_data:
  static_volume:
  media_volume:
dockerfile for web:
###########
# BUILDER #
###########
# pull official base image
FROM python:3.8.3-alpine as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
# lint
RUN pip install --upgrade pip
RUN pip install flake8
COPY . .
RUN flake8 --ignore=E501,F401 .
# install dependencies
COPY ./req.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r req.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.8.3-alpine
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup -S app && adduser -S app -G app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
# install dependencies
RUN apk update && apk add libpq
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/req.txt .
RUN pip install --no-cache /wheels/*
# copy entrypoint-prod.sh
COPY ./entrypoint.prod.sh $APP_HOME
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.prod.sh"]
I fixed this problem by replacing ImageField with FileField in the models.py file.
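That works around the check rather than fixing the install; a hedged way to verify whether the interpreter Django actually uses can import Pillow (service and file names taken from the question):
docker-compose -f docker-compose.prod.yml exec web python -c "import PIL; print(PIL.__version__)"
# If this import fails while pip reports Pillow as satisfied, pip and Django are most
# likely looking at different site-packages (e.g. a user install vs. the system one).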

docker-compose up runs the image fine, but docker run with the same image does not?

docker-compose up is working fine (screenshot attached).
Here is the docker-compose file
version: '3.0'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:9090
    ports:
      - 9090:9090
    env_file:
      - .env
Dockerfile
FROM python:3.7-alpine
RUN mkdir -p /app
COPY . /app
COPY .env /app
WORKDIR /app/
RUN apk --update add python3-dev
RUN apk add mariadb-dev mariadb-client
RUN apk --update add python3 py-pip openssl ca-certificates py-openssl wget
RUN apk update && \
apk upgrade --no-cache && \
apk add --no-cache \
gcc g++ make libc-dev linux-headers
RUN pip install --upgrade pip
RUN pip install uwsgi
RUN pip install -r requirements.txt --default-timeout=10000 --no-cache-dir
EXPOSE 9090
docker run testbackend_web:latest
The above command is not working with the built image. Can someone help with this?
(Screenshot: error in the container.)
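For comparison, roughly what docker-compose up adds on top of a bare docker run for the compose file above (a sketch: this Dockerfile defines no CMD or ENTRYPOINT, so a plain docker run falls back to the base image's default python3, which exits almost immediately when no TTY is attached):
docker run --rm \
  --env-file .env \
  -p 9090:9090 \
  testbackend_web:latest \
  python manage.py runserver 0.0.0.0:9090
# command, ports and env_file from docker-compose.yml have to be supplied by hand here.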

Docker compose throws error during build: Service 'postgres' failed to build: exec: \"/bin/sh\": stat /bin/sh: no such file or directory": unknown

I pulled a fresh cookiecutter-django template and I want to use Docker. When I run docker-compose -f local.yml build I get this error:
Service 'postgres' failed to build: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/bin/sh\": stat /bin/sh: no such file or directory": unknown
Researching the problem, I found that it could be caused by corrupted images, so I deleted all containers and images and pruned the system with:
docker rm -vf $(docker ps -a -q)
docker rmi -f $(docker images -a -q)
docker system prune
Then I also did:
docker-compose -f local.yml down
docker-compose -f local.yml up
I restarted docker, restarted my computer....
When I list all containers and images they are all gone. Then I build it again and this confuses me, because I get:
fc7181108d40: Already exists
81cfa12d39e9: Already exists
793d305ca761: Already exists
41e3ced3a2aa: Already exists
a300bc9d5405: Already exists
3c6a5c3830ed: Already exists
fb8c79b24338: Already exists
fcda1144379f: Already exists
476a22a819cc: Downloading [===============> ] 25.23MB/82.14MB
78b36b49bb24: Download complete
6a096a28591f: Download complete
c0cb89b5217b: Download complete
778f1469a309: Download complete
7c4413fcad87: Download complete
So there is still something that exists? I assume something is not getting deleted. Then everything fails with:
ERROR: Service 'postgres' failed to build: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/bin/sh\": stat /bin/sh: no such file or directory": unknown
So I guess something is wrong with my postgres image... I just don't know what else to try. I have another project which works perfectly fine with docker and cookiecutter. Thus I don't think reinstalling docker will help.
Any ideas on what else I could try? I'm not a docker expert and this is pretty much the end of my troubleshoot knowledge... Help is greatly appreciated, thanks!
The following was generated automatically by cookiecutter:
Compose file:
version: '3'
volumes:
  local_postgres_data: {}
  local_postgres_data_backups: {}
services:
  django:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: nginx_local_django
    depends_on:
      - postgres
    volumes:
      - .:/app
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: nginx_production_postgres
    volumes:
      - local_postgres_data:/var/lib/postgresql/data
      - local_postgres_data_backups:/backups
    env_file:
      - ./.envs/.local/.postgres
Dockerfile:
FROM python:3.7-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update \
# psycopg2 dependencies
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev \
# Pillow dependencies
&& apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \
# CFFI dependencies
&& apk add libffi-dev py-cffi \
# Translations dependencies
&& apk add gettext \
# https://docs.djangoproject.com/en/dev/ref/django-admin/#dbshell
&& apk add postgresql-client
# Requirements are installed here to ensure they will be cached.
COPY ./requirements /requirements
RUN pip install -r /requirements/local.txt
COPY ./compose/production/django/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
COPY ./compose/local/django/start /start
RUN sed -i 's/\r$//g' /start
RUN chmod +x /start
WORKDIR /app
ENTRYPOINT ["/entrypoint"]
My postgres dockerfile
FROM postgres:11.3
COPY ./compose/production/postgres/maintenance /usr/local/bin/maintenance
RUN chmod +x /usr/local/bin/maintenance/*
RUN mv /usr/local/bin/maintenance/* /usr/local/bin \
&& rmdir /usr/local/bin/maintenance
Try adding this to your postgres Dockerfile (change apk if your base image uses some other Linux distro):
RUN apk add --no-cache dos2unix
RUN dos2unix YOUR_SCRIPT_NAME
RUN chmod +x YOUR_SCRIPT_NAME
and make sure the first line of the script is #!/bin/sh.
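Since the non-Alpine postgres:11.3 base image is Debian-based, apk is not available there and apt-get would be the equivalent; alternatively the line endings can be fixed on the host before building, which is a sketch of the same fix (the maintenance path comes from the postgres Dockerfile above, the .gitattributes rule is only a suggested way to keep it fixed):
# A carriage return after "#!/bin/sh" makes the kernel look for an interpreter named "/bin/sh\r".
dos2unix compose/production/postgres/maintenance/*
# Optionally force LF for these scripts in the repo so the problem does not come back:
printf '%s\n' 'compose/** text eol=lf' >> .gitattributes
git add --renormalize .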

How to set up docker-compose, pytest, the Selenium driver and geckodriver/chromedriver correctly

I am testing my Django app with pytest. Now I want to use Selenium for system tests. I am also using Docker, so to execute my tests I run docker-compose -f local.yml run django pytest
When I try to use Selenium I get the message that the 'geckodriver' executable needs to be in PATH.
I read many questions here on SO and followed the instructions. Yet I think my path can't be found because I am executing everything in a Docker container. Or am I mistaken?
I added the geckodriver path to my bash profile after checking the path with
which geckodriver. The result of this is: /usr/local/bin/geckodriver
Then I added this to my profile, and when I type echo $PATH I can see /usr/local/bin/geckodriver/bin:, so I assume everything is set correctly. I also tried setting it manually with:
self.selenium = webdriver.Firefox(executable_path=r'/usr/local/bin/geckodriver')
Since I still get the error, I assume it has something to do with Docker. That's why my specific question is: how can I set up the PATH for geckodriver with pytest, Selenium and docker-compose?
Any help or hint into the right direction is very much appreciated! Thanks in advance!
Here is my Dockerfile, as requested:
FROM python:3.6-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update \
# psycopg2 dependencies
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev \
# Pillow dependencies
&& apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \
# CFFI dependencies
&& apk add libffi-dev py-cffi \
# Translations dependencies
&& apk add gettext \
# https://docs.djangoproject.com/en/dev/ref/django-admin/#dbshell
&& apk add postgresql-client
# Requirements are installed here to ensure they will be cached.
COPY ./requirements /requirements
RUN pip install -r /requirements/local.txt
COPY ./compose/production/django/entrypoint /entrypoint
RUN sed -i 's/\r//' /entrypoint
RUN chmod +x /entrypoint
COPY ./compose/local/django/start /start
RUN sed -i 's/\r//' /start
RUN chmod +x /start
WORKDIR /app
ENTRYPOINT ["/entrypoint"]
When I debug with
for p in sys.path:
    print(p)
I get:
/usr/local/bin
/usr/local/lib/python36.zip
/usr/local/lib/python3.6
/usr/local/lib/python3.6/site-packages
So my geckodriver is not there I assume?
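A note and a quick check (not from the original post): sys.path is Python's module search path, not the shell PATH that Selenium searches for geckodriver, and a PATH set in the host's bash profile does not exist inside the container; whether the driver is present in the image can be checked directly, with the service name django taken from the test command above.
docker-compose -f local.yml run --rm django sh -c "command -v geckodriver || echo 'geckodriver is not installed in this image'"
# If it is missing, it has to be installed in the Dockerfile (or the tests pointed at a
# separate Selenium/browser container) rather than added to the host's PATH.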