Can't modify files created in docker container - django

I've got a container with a Django application running in it, and I sometimes go into the container's shell and run ./manage.py makemigrations to create migrations for my app.
Files are created successfully and synchronized between host and container.
However, on my host machine I am not able to modify any file created in the container.
This is my Dockerfile
FROM python:3.8-alpine3.10
LABEL maintainer="Marek Czaplicki <marek.czaplicki>"
WORKDIR /app
COPY ./requirements.txt ./requirements.txt
RUN set -ex; \
apk update; \
apk upgrade; \
apk add libpq libc-dev gcc g++ libffi-dev linux-headers python3-dev musl-dev pcre-dev postgresql-dev postgresql-client swig tzdata; \
apk add --virtual .build-deps build-base linux-headers; \
apk del .build-deps; \
pip install pip -U; \
pip --no-cache-dir install -r requirements.txt; \
rm -rf /var/cache/apk/*; \
adduser -h /app -D -u 1000 -H uwsgi_user
ENV PYTHONUNBUFFERED=TRUE
COPY . .
ENTRYPOINT ["sh", "./entrypoint.sh"]
CMD ["sh", "./run_backend.sh"]
and run_backend.sh
./manage.py collectstatic --noinput
./manage.py migrate && exec uwsgi --strict uwsgi.ini
What can I do to be able to modify these files on my host machine? I don't want to chmod every file or directory every time one is created.
For some reason there is one project in which files created in the container are editable on the host machine, but I cannot find any difference between the two.

By default, Docker containers run as root. This has two issues:
In development, as you can see, the files are owned by root, which is often not what you want.
In production this is a security risk (https://pythonspeed.com/articles/root-capabilities-docker-security/).
For development purposes, docker run --user $(id -u) yourimage or the Compose example given in the other answer will match the user to your host user.
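For example, a minimal sketch of the development case (yourimage is a placeholder; passing the group id as well keeps group ownership in sync):
# run the container as your host uid/gid so files it creates stay editable on the host
docker run --user "$(id -u):$(id -g)" yourimage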
For production, you'll want to create a user inside the image; see the page linked above for details.

Usually files created inside a docker container are owned by the container's root user.
You could try this inside your container:
chown 1000:1000 file-you-want-to-edit-outside
You could also add this as a RUN instruction at the end of your Dockerfile.
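For example, as a sketch (the /app path and uid/gid 1000 are assumptions taken from the question's Dockerfile):
# hand ownership of the app directory to uid/gid 1000, the typical first host user
RUN chown -R 1000:1000 /app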
Edit:
If you are using docker-compose, you can add a user to your service:
services:
  container:
    user: ${CURRENT_HOST_USER}
And have CURRENT_HOST_USER be equal to $(id -u):$(id -g)
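For example, one way to set it before starting the stack (the variable name comes from the snippet above):
# export the current host user/group ids, then start the services
export CURRENT_HOST_USER="$(id -u):$(id -g)"
docker-compose up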

The solution was to add
USER uwsgi_user
to the Dockerfile and then simply run docker exec -it container-name sh
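As a sketch, the tail of the Dockerfile then looks something like this (uwsgi_user is the account created by the adduser line in the Dockerfile above, with uid 1000 matching the host user):
COPY . .
# drop root so files created at runtime belong to uid 1000, matching the host user
USER uwsgi_user
ENTRYPOINT ["sh", "./entrypoint.sh"]
CMD ["sh", "./run_backend.sh"]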

Related

/entrypoint.sh: line 8: syntax error: unexpected end of file

The Docker project was created on a Linux machine. I'm running Windows and I can't get docker-compose up to work. I've read through Stack Overflow's answers and so far I've tried the following (none have worked):
Using Visual Studio Code I saved as "LF" instead of CRLF
Deleted the file entirely, created a new one using Visual Studio Code, typed the words
Cut the entire file, pasted it in Notepad so that formatting gets cleared, copied and pasted back
Added various forms of #!/bin/bash to the start of the entrypoint.sh
Changed the Dockerfile to use COPY instead of ADD
At this point I'm not sure what else to try. Any ideas?
Edit
entrypoint.sh
if [ "$1" == 'celery' ]; then
celery -A vicmun worker -l info --uid=celery --gid=celery
else
./../wait_for_it.sh db:5433 --timeout=10
python manage.py migrate
python manage.py runserver 0.0.0.0:8000
fi
Dockerfile
FROM python:3.9
ENV PYTHONUNBUFFERED 1
ARG APP_ENV=${APP_ENV}
RUN mkdir /src
RUN mkdir /static
WORKDIR /src
ADD ./src /src
ADD entrypoint-${APP_ENV}.sh /entrypoint.sh
ADD wait_for_it.sh /wait_for_it.sh
RUN addgroup --system celery && adduser --system --ingroup celery celery
RUN ["chmod", "+x", "/wait_for_it.sh"]
RUN apt-get -y update
RUN apt-get -y install ffmpeg
RUN pip install -r requirements.txt
ENTRYPOINT ["bash", "/entrypoint.sh"]
I don't know your config, but I could resolve that problem by running the script via CMD.
In my case, I could execute a script with docker as follows:
Dockerfile
FROM python:3.10-alpine3.15
ENV PYTHONUNBUFFERED=1
WORKDIR /app
RUN apk update \
&& apk add --no-cache gcc musl-dev postgresql-dev python3-dev libffi-dev \
&& pip install --upgrade pip
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
COPY . .
CMD [ "sh", "entrypoint.sh" ]
entrypoint.sh
#!/bin/sh
python manage.py makemigrations
python manage.py migrate
python manage.py runserver 0.0.0.0:8000
Well, I feel the cringe for this. It turns out the solution was something I had already done, but it didn't take effect until I rebuilt with the --no-cache option.
The solution was to:
save the file as "LF" instead of CRLF using Visual Studio Code
and run docker-compose build --no-cache
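If you prefer fixing the line endings from a shell rather than the editor, a small sketch (sed is just one option among many):
# strip carriage returns so the script is plain LF
sed -i 's/\r$//' entrypoint.sh
# rebuild without cache so the corrected file actually lands in the image
docker-compose build --no-cache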

How to copy a Django SQLite database to a Docker container?

I'm new to Docker. I am trying to dockerize a Django app.
My project contains a Wagtail app and I'm using the Dockerfile auto-generated by Wagtail.
FROM python:3.8.6-slim-buster
RUN useradd wagtail
EXPOSE 8000
ENV PYTHONUNBUFFERED=1 \
PORT=8000
RUN apt-get update --yes --quiet && apt-get install --yes --quiet --no-install-recommends \
build-essential \
libpq-dev \
libmariadbclient-dev \
libjpeg62-turbo-dev \
zlib1g-dev \
libwebp-dev \
nano \
vim \
&& rm -rf /var/lib/apt/lists/*
RUN pip install "gunicorn==20.0.4"
COPY requirements.txt /
RUN pip install -r /requirements.txt
WORKDIR /app
RUN chown wagtail:wagtail /app
COPY --chown=wagtail:wagtail . .
USER wagtail
RUN python manage.py collectstatic --noinput --clear
CMD set -xe; python manage.py migrate --noinput; gunicorn mysite.wsgi:application
It's working well, but my SQLite database is empty and I'd like to run my container with the Wagtail pages that I create locally.
How can I change the Dockerfile for that endeavor?
Just dump the data locally and reload it inside the Docker container.
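A minimal sketch of that flow (the container name mysite and the /app workdir are assumptions based on the Dockerfile above):
# on the host: export the local data to JSON
python manage.py dumpdata --natural-foreign --exclude contenttypes > datadump.json
# copy the dump into the running container and load it there
docker cp datadump.json mysite:/app/datadump.json
docker exec mysite python manage.py loaddata datadump.json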
I found out it's pretty bad to use sqlite3 in a Docker container anyway.
I'm better off with the Docker + Postgres solution: https://docs.docker.com/samples/django/

Running the django development server in a docker container

So I'm trying to run the Django development server in a container, but I can't access it through my browser. I have two containers using the same docker network, one with Postgres and the other with Django. I can ping both containers, successfully connect the two together, and run ./manage.py runserver fine, but I can't curl it or open it in a browser.
Here is my Django Dockerfile
FROM alpine:latest
COPY ./requirements.txt .
ADD ./parking/ /parking
RUN apk add --no-cache --virtual .build-deps python3-dev gcc py3-pip postgresql-dev py3-virtualenv musl-dev libc-dev linux-headers
RUN virtualenv /.env
RUN /.env/bin/pip install -r /requirements.txt
WORKDIR /parking
EXPOSE 8000 5432
The postgres container I pulled from Docker Hub.
I ran django with
docker run --name=django --network=app -p 127.4.3.1:6969:8000 -it dev/django:1.0
I ran postgres with
docker run --name=some-postgres --network=app -p 127.2.2.2:6969:5432 -e POSTGRES_PASSWORD=123 -e POSTGRES_DB=parking postgres
Any help would be great. Thank you
I think you forgot to add the command that runs your application at the end of the Dockerfile. When you run this image it just sets up the virtualenv and installs the Python dependencies from requirements.txt, but the Django application is never started. You need to put something like this at the end:
CMD "python parking/manage.py runserver"
This will keep your container running on the chosen port and make your application accessible at 127.4.3.1:6969.
Okay, so I managed to figure it out. I looked at @leonardo-alves-dos-santos' answer and came to the conclusion that I should run CMD python parking/manage.py runserver 0.0.0.0:8000. Now I can access my Django app at 127.4.3.1:6969 from the host and at 172.18.0.2:8000 from the docker network.
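For completeness, a quick way to verify both routes (addresses taken from the docker run mapping above; curlimages/curl is just one convenient curl image):
# from the host, via the published port
curl http://127.4.3.1:6969/
# from another container on the same user-defined network, via container-name DNS
docker run --rm --network=app curlimages/curl http://django:8000/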

How to checkout branches if there are files created in docker image?

In my pet project I set up docker-compose for development. The issue is that I created Django migrations inside the Docker image and committed them. After checking out the main branch I see an error. The files become untracked and I cannot merge the feature branch into main.
git checkout master
warning: unable to unlink 'apps/app_name/migrations/0001_initial.py': Permission denied
warning: unable to unlink 'apps/app_name/migrations/0002_auto_20190127_1815.py': Permission denied
warning: unable to unlink 'apps/app_name/migrations/__init__.py': Permission denied
Switched to branch 'master'
I also tried it with sudo. All the new files appear untracked in the main branch, but no new commits are added (based on git log).
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
  web:
    build:
      dockerfile: ./compose/Dockerfile.dev
      context: .
    command: /start
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db:db
Dockerfile
FROM python:3.6.8-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update \
# psycopg2 dependencies
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev \
# Pillow dependencies
&& apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \
# CFFI dependencies
&& apk add libffi-dev py-cffi \
# Translations dependencies
&& apk add gettext \
# https://docs.djangoproject.com/en/dev/ref/django-admin/#dbshell
&& apk add postgresql-client
RUN mkdir /code
WORKDIR /code
COPY /requirements /code/requirements/
RUN pip install -r requirements/dev.txt
COPY . /code/
COPY ./compose/start /start
RUN sed -i 's/\r//' /start
RUN chmod +x /start
start.sh
#!/bin/sh
set -o errexit
set -o pipefail
set -o nounset
python manage.py migrate
python manage.py runserver_plus 0.0.0.0:8000
Dockerfile
FROM python:3.6.8-alpine
ENV PYTHONUNBUFFERED 1
ARG CONTAINER_USER="python"
ARG CONTAINER_UID="1000"
ARG CONTAINER_GID="1000"
ARG WORKSPACE=/home/"${CONTAINER_USER}"/code
RUN apk update \
# psycopg2 dependencies
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev \
# Pillow dependencies
&& apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \
# CFFI dependencies
&& apk add libffi-dev py-cffi \
# Translations dependencies
&& apk add gettext \
# https://docs.djangoproject.com/en/dev/ref/django-admin/#dbshell
&& apk add postgresql-client && \
addgroup -g "${CONTAINER_GID}" -S "${CONTAINER_USER}" && \
adduser -s /bin/ash -u "${CONTAINER_UID}" -G "${CONTAINER_USER}" -h /home/"${CONTAINER_USER}" -D "${CONTAINER_USER}"
USER "${CONTAINER_USER}"
WORKDIR "${WORKSPACE}"
COPY ./requirements/dev.txt "${WORKSPACE}"/requirements.txt
RUN pip install -r requirements.txt
It is bad practice to run anything in a docker container as the root user, just as you wouldn't do it on your own computer. I added a user python that will have the same uid as your user on the host, assuming your operating system user has uid 1000, as is normal on Linux machines. If you are on another OS this may not work, and you will need to find the solution for your specific OS.
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
  web:
    build:
      dockerfile: ./compose/Dockerfile.dev
      context: .
      args:
        CONTAINER_UID: ${UID:-1000}
        CONTAINER_GID: ${GID:-1000}
    command: ./compose/start
    volumes:
      - .:/home/python/code
    ports:
      - "8000:8000"
    depends_on:
      - db
links is deprecated and has been replaced by depends_on, so it is not necessary to use both.
In order to build the container with the same permissions as your user on the host filesystem, I added args to the build section of the compose file, using the OS values for $UID and $GID; if they are not set, they default to 1000.
You can see what they are on your Linux OS with id -u for the $UID and id -g for the $GID.
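For example (output varies per machine; 1000 is typical for the first user on a Linux install):
id -u   # prints your uid, e.g. 1000
id -g   # prints your gid, e.g. 1000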
Shell Script
Make it executable in your repo and commit the change so that you don't need to do it each time you build the docker image.
chmod 700 ./compose/start
I don't use +x because that is bad practice in terms of security, since it would allow everyone to execute the script.
Summary
Any files created inside the container will now have uid and gid 1000, so no more permission conflicts should occur.
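A quick way to verify this, as a sketch (web is the service name from the compose file above; test_file is just a throwaway name):
# create a file from inside the container, then check its owner on the host
docker-compose run --rm web touch test_file
ls -l test_file   # should show your own user/group instead of root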

How to update a django app deployed with docker-compose? It seems to be running an old version of the code

I have a django web application based on cookiecutter-django. The stack is built from several containers running: django, redis, celerybeat, celery worker, celery flower, postgres, caddy. When I launched the application in a production-like environment on a VPS I experienced strange behavior - django seems to be running an old version of the code (e.g. using an old version of a form) despite my checking out new code from the git repository. I have tried a few actions to "force" a refresh of the application code:
docker-compose down and then rebuilding all containers with docker-compose build, followed by docker-compose up
a similar rebuild as above, but only for the django container.
When I inspect the code inside the django container, it is the proper version.
I checked the app with Django Debug Toolbar, and it seems that pages are not loaded from cache (there are no calls to the cache backend, and there is a number of database queries, which suggests that pages are not served from cache).
I was expecting django to automatically detect the code change and restart running the new code; additionally an interpreter restart could be needed (which should be covered by putting the containers down and rebuilding). Any ideas what else to check or try? Removing all containers, images and volumes helped, but that is not my preferred way to roll out each update.
I went through the solutions from Why does docker-compose build not reflect my django code changes? and After docker-compose build the docker-compose up run old not updated containers,
but none worked for me except "nuke everything". Is there a way to do a "soft reload"?
Here is Dockerfile for django container:
# Dockerfile for django container
FROM python:3.6-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update \
# psycopg2 dependencies
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev \
# Pillow dependencies
&& apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \
# CFFI dependencies
&& apk add libffi-dev py-cffi
RUN addgroup -S django \
&& adduser -S -G django django
# Requirements are installed here to ensure they will be cached.
COPY ./requirements /requirements
RUN pip install --no-cache-dir -r /requirements/production.txt \
&& rm -rf /requirements
COPY ./compose/production/django/entrypoint /entrypoint
RUN sed -i 's/\r//' /entrypoint
RUN chmod +x /entrypoint
RUN chown django /entrypoint
COPY ./compose/production/django/start /start
RUN sed -i 's/\r//' /start
RUN chmod +x /start
RUN chown django /start
COPY ./compose/production/django/celery/worker/start /start-celeryworker
RUN sed -i 's/\r//' /start-celeryworker
RUN chmod +x /start-celeryworker
RUN chown django /start-celeryworker
COPY ./compose/production/django/celery/beat/start /start-celerybeat
RUN sed -i 's/\r//' /start-celerybeat
RUN chmod +x /start-celerybeat
RUN chown django /start-celerybeat
COPY ./compose/production/django/celery/flower/start /start-flower
RUN sed -i 's/\r//' /start-flower
RUN chmod +x /start-flower
COPY . /app
RUN chown -R django /app
USER django
WORKDIR /app
ENTRYPOINT ["/entrypoint"]
In case someone comes across this: the answer is that you have to rebuild the container every time you push new code.
Just run
docker-compose -f production.yml build
to update the production version
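Note that a rebuilt image only takes effect once the containers are recreated from it, so a fuller update sequence might look like this sketch (production.yml is the compose file named above):
docker-compose -f production.yml build
# up -d recreates any container whose image has changed
docker-compose -f production.yml up -d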