Setting USER in Dockerfile prevents saving file fields (e.g. ImageField) in Django - django

I am trying to containerize Django with the Dockerfile and docker-compose.yml defined below. I built the Dockerfile as fiifidev/postgres:test for the compose file. Everything works fine, except that any time I try to save a model with a file field (e.g. ImageField or FileField), I get PermissionError: [Errno 13] Permission denied from Docker.
I suspect I am not setting up the right permissions when creating the user (useradd) in the Dockerfile, but I'm not sure. When I remove the USER instruction, everything works fine.
How can I fix this? Any help will be much appreciated.
FROM python:3.10-slim-bullseye AS base
# Setup env
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONFAULTHANDLER 1
FROM base AS python-deps
# Install pipenv and compilation dependencies
RUN pip install pipenv
RUN apt-get update && apt-get install -y --no-install-recommends gcc
# Install python dependencies in /.venv
COPY Pipfile .
COPY Pipfile.lock .
RUN PIPENV_VENV_IN_PROJECT=1 pipenv install --deploy
FROM base AS runtime
# Copy virtual env from python-deps stage
COPY --from=python-deps /.venv /.venv
ENV PATH="/.venv/bin:$PATH"
# Create and switch to a new user
RUN useradd --create-home appuser
WORKDIR /home/appuser/src
USER appuser
# Install application into container
COPY --chown=appuser:appuser . .
version: "3.9"
services:
web:
image: fiifidev/postgres:test
command: sh -c "python manage.py makemigrations &&
python manage.py migrate &&
python manage.py initiate_admin &&
python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/home/appuser/src
networks:
postgres-network:
env_file:
- .env
ports:
- ${APP_PORT}:8000
restart: "on-failure"
networks:
postgres-network:
external: true

From Dockerfile:
WORKDIR /home/appuser/src
USER appuser
# Install application into container
COPY --chown=appuser:appuser . .
Here you are creating a src directory and copying your code into it. This is baked into the resulting image.
From docker-compose.yml:
volumes:
  - .:/home/appuser/src
Here you are mounting the current directory on your host on top of the src directory. A mount takes precedence over the image's idea of what a directory contains, so this effectively means your COPY and chown have no effect. (Those files are still there in the image, but the mount hides them; they are not available.)
How this behaves with respect to permissions varies by platform. On Windows, I don't know what would happen. Using Docker Desktop on Mac it would "just work", because Docker Desktop more or less ignores the permissions in this case and makes everything happy.
On Linux, however, the file ownership inside the container and outside the container must match for you to be able to write to it. What's probably happening is that the files in your mounted directory are owned by a different uid (yours) than the 'appuser' in the container has. Therefore, you get permission errors because you don't have permission to write to the files/directory.
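You can confirm this by comparing the IDs on both sides. A quick check, assuming the compose service is named web as above (the output shown is illustrative):
# On the Linux host: note your own uid and gid
id -u && id -g
# Inside the container: see which uid appuser was given
docker-compose run --rm web id appuser
# e.g. uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)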
There are two solutions you could try:
Chown the files in the host directory (outside the container) to the same uid that appuser has inside the container.
Or specify the uid of appuser when creating it during the image build, giving it the same uid as your user on the Linux host. It wouldn't matter that the user names differ; if the uid number is the same, you will have permission (see the sketch below).
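A minimal sketch of the second option, using a build argument for the uid (the default of 1000 here is an assumption; pass your own value via id -u):
# Dockerfile: create appuser with a uid supplied at build time
ARG UID=1000
RUN useradd --create-home --uid $UID appuser
Then build the image with:
docker build --build-arg UID=$(id -u) -t fiifidev/postgres:test .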
EDIT: Adding this from the comments. The OP ended up solving this in docker-compose.yml by adding:
user: "${UID}:${GID}"
This sets the user/group for the service to the UID and GID from the environment of the user calling docker-compose. That assumes the shell has set the UID and GID environment variables; not all shells or OSes behave the same way.
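If your shell does not provide them (bash sets UID but does not export it, and typically does not set GID at all), you can pass them explicitly on the command line, or pin them in the .env file, for example:
UID=$(id -u) GID=$(id -g) docker-compose up
# or in .env (read automatically by docker-compose)
UID=1000
GID=1000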

Related

How to make the Django dev server run every time I create a Docker container, instead of when I build the image

tl;dr version: how do I do X every time I create a container, instead of every time I build a new image?
I'm building a very basic Docker Django example. When I do docker-compose build, everything works as I want:
version: '3.9'
services:
  app:
    build:
      context: .
    command: sh -c "python manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    volumes:
      - ./app:/app
    environment:
      - SECRET_KEY=devsecretkey
      - DEBUG=1
This runs the Django dev server, but only while the image is being built; the containers created from the image do nothing, even though I actually want them to run the dev server. So I figured I should just move command: sh -c "python manage.py runserver 0.0.0.0:8000" from docker-compose to my Dockerfile as an ENTRYPOINT.
Below is my Dockerfile:
FROM python:3.9-alpine3.13
LABEL maintainer="madshit.com"
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
COPY ./app /app
WORKDIR /app
EXPOSE 8000
RUN python -m venv /py && \
    /py/bin/pip install --upgrade pip && \
    /py/bin/pip install -r /requirements.txt && \
    adduser --disabled-password --no-create-home app
ENV PATH="/py/bin:$PATH"
USER app
# I added this because I thought it would be called every time my docker
# environment was finished setting up. No dice :(
ENTRYPOINT python manage.py runserver
The bottom section of the image below is a screenshot of the logs of my image from Docker Desktop. Strangely, the last command it accepted was the one setting the user, nothing to do with the entrypoint. Maybe it ignored ENTRYPOINT and that's the problem? The top section shows the logs of the instance created from this image (kinda bare).
What do I need to do to make the Django web server run in each container when deployed?
Why doesn't ENTRYPOINT seem to get called? (It's not in the logs.)
I would recommend changing your environment variable logic slightly.
environment:
  - SECRET_KEY=devsecretkey
  - DEBUG=1            <-- replace this
  - SERVER='localhost' <-- or another env like staging or live
And then in your settings file you can do:
SERVER = os.environ.get('SERVER')
And then you can set variables based on the string like so:
if SERVER == 'production':
    DEBUG = False
else:
    DEBUG = True
This is a very common practice so that we can customise all kinds of settings, and there are plenty of use cases for this method.
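For example, a production deployment could then flip the switch with nothing but an environment change. A sketch, using a hypothetical docker-compose.prod.yml override file:
# docker-compose.prod.yml (hypothetical)
version: '3.9'
services:
  app:
    environment:
      - SECRET_KEY=${SECRET_KEY}
      - SERVER=production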
If that still doesn't work, we may have to look at other issues that might be causing these symptoms.

docker-compose doesn't mount volumes correctly for django container

Running Docker on Windows 10 with WSL 2 Ubuntu on it. I have the following Dockerfile:
FROM ubuntu
#base directory
ENV HOME /root
#subdirectory name for the REST project
ENV PROJECT_NAME django_project
#subdirectory name of the users app
ENV APP_NAME users
#set the working directory
WORKDIR $HOME
#install Python 3, the Django REST framework and the Cassandra Python driver
RUN apt-get update
RUN apt -y install python3-pip 2> /dev/null
RUN pip3 install djangorestframework
RUN pip3 install cassandra-driver
#initialize the project (blank project) and create a folder called $PROJECT_NAME
#with manage.py in its root directory
RUN django-admin startproject $PROJECT_NAME .
#install an app in the project and create a folder named after it
RUN python3 manage.py startapp $APP_NAME
ENV CASSANDRA_SEEDS cas1
ENTRYPOINT ["python3","manage.py", "runserver", "0.0.0.0:8000"]
I build an image with docker build -t django-img . and then I have the following .yaml:
version: '3'
services:
  django_c:
    container_name: django_c
    image: django-img
    environment:
      - CASSANDRA_SEEDS='cas1'
    ports:
      - '8000:8000'
    volumes:
      - /mnt/c/Users/claud/docker-env/django/django_project:/django_project
When I run docker-compose up -d inside the django-project folder (the .yml and Dockerfile are there), I get the container running, but I can't see any files from the container on the host. If I run ls in the container, however, I see all the files are there.
How am I supposed to edit the container files using an editor in my host?
p.s.: I've already tested the volume slashes ("/") with another container and they work fine since I'm using WSL.
ADDITION
Here is the content of my container folders using relative paths. I tried
volumes:
  - /mnt/c/Users/claud/docker-env/django/django_project:/root/django_project
but it still did not show the files in the host.
I think the issue is that your volume mount refers to the absolute path /django_project, but your Dockerfile specifies WORKDIR $HOME, which is /root. An additional clue is that you see your files when you run ls -la ./django_project in the container using a relative path.
I'll bet you can fix the problem by updating your docker-compose.yml django_c service definition to specify /root/django_project as your volume mount instead:
volumes:
  - /mnt/c/Users/claud/docker-env/django/django_project:/root/django_project
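After recreating the container, a quick sanity check is that both sides now list the same files (using the container_name from the compose file above):
# On the host
ls /mnt/c/Users/claud/docker-env/django/django_project
# Inside the container
docker exec django_c ls /root/django_project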

Docker Compose, Django: role "_" does not exist

Context
I am trying to run my Django application and Postgres database in a docker development environment using docker-compose (it's my first time using Docker).
I want to use my application with a custom role and database both named teddycrepineau (as opposed to using the default postgres user and db).
Goal
My goal is to deploy a web app powered on the front end by React and on the backend by a Django REST API, the whole thing running in Docker.
System/Version
python: 3.7
django: 2.1
OS: Mac OS High Sierra
What error am I getting
When running docker-compose up with my custom role and db, I am getting the following error: django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist. When running the same command with the default role and db postgres, Django starts normally.
My understanding was that running docker-compose up would create the role and db passed as environment variables.
What I have tried so far
I read multiple threads on this site, GitHub, and Docker:
Tried to delete my container and rebuild it with the formatting suggested here
Went through this GitHub issue
Tried to move my environment variables from the .env file to the environment section inside my docker-compose.yml file and rebuild my container
Files
docker-compose.yml
version: '3'
volumes:
  postgres_data: {}
services:
  postgres:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
    env_file: .env
    ports:
      - "5432"
  django:
    build:
      context: teddycrepineau-backend
      dockerfile: teddycrepineau-root/Dockerfile
    command: ./teddycrepineau-backend/teddycrepineau-root/start.sh
    env_file: .env
    volumes:
      - .:/teddycrepineau-backend
    ports:
      - "8000:8000"
    depends_on:
      - postgres
Dockerfile
FROM python:3.7
ENV PYTHONUNBUFFERED 1
WORKDIR /teddycrepineau-backend/
ADD ./teddycrepineau-root/requirements.txt /teddycrepineau-backend/
RUN pip install -r requirements.txt
ADD . /teddycrepineau-backend/
RUN chmod +x ./teddycrepineau-root/start.sh
start.sh
#!/usr/bin/env bash
python3 ./teddycrepineau-backend/teddycrepineau-root/manage.py runserver
.env
POSTGRES_PASSWORD=
POSTGRES_USER=teddycrepineau
POSTGRES_DB=teddycrepineau
EDIT
My file structure is as follows:
root
|___ teddycrepineau-backend
|    |___ teddycrepineau-root
|         |___ teddycrepineau
|         |___ Dockerfile
|         |___ manage.py
|         |___ start.sh
|___ teddycrepineau-frontend
|    |___ React-App
|___ .env
|___ docker-compose.yml
When I move my docker-compose.yml file inside my backend folder, it starts as expected with the custom user and db (though I am not able to access my site when going to 127.0.0.1:8000, but that is mostly a different issue). When I put my docker-compose.yml file in my root folder, I get the error django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist.
This happens because your postgres db was launched without any envs. The postgres Docker image only uses the envs the first time the database volume is initialized; after that it won't recreate the DB and users.
The solution is to remove the postgres volume so that the next time you docker-compose up you get a fresh db with the envs read. A simple way to do it is docker-compose down -v.
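For example (note that -v also deletes the postgres_data volume, so any existing data in it is lost):
docker-compose down -v   # stop everything and remove named volumes
docker-compose up        # postgres re-initializes and creates the role/db from .env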
Change your env order like this:
POSTGRES_DB=teddycrepineau
POSTGRES_USER=teddycrepineau
POSTGRES_PASSWORD=
I found it in this issue. I hope it works.
When you run
sudo docker-compose exec web python manage.py migrate
you will of course receive
django.db.utils.OperationalError: FATAL: role "user" does not exist
First you need to run
sudo docker-compose down -v
sudo docker system prune
Check the containers; they should be deleted:
sudo docker ps -a
Then check the images:
sudo docker image ls
Don't forget to delete the images:
sudo docker image rm 3e57319a7a3a
Go back to the project folder and check with
python manage.py migrate
If that didn't work, run
python manage.py migrate --run-syncdb
and
sudo docker-compose up -d --build
sudo docker-compose exec web python manage.py collectstatic --no-input
sudo docker-compose exec web python manage.py makemigrations
sudo docker-compose exec web python manage.py migrate auth
sudo docker-compose exec web python manage.py migrate --run-syncdb
I encountered the issue due to a mismatch between the $POSTGRES_DB and $POSTGRES_USER variables. By default, psql will attempt to connect to a database with the same name as the user logging in, so when there is a mismatch between the variables it fails with an error along the lines of:
psql: FATAL: database "root" does not exist
I had to edit the init script that I was writing to include the -d "$POSTGRES_DB" option like so:
#!/bin/bash
set -e

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" -d "$POSTGRES_DB" <<-EOSQL
    CREATE USER docker;
    CREATE DATABASE docker;
    GRANT ALL PRIVILEGES ON DATABASE docker TO docker;
EOSQL
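For reference, the official postgres image only runs such init scripts on first initialization of the data directory, and they need to be mounted into /docker-entrypoint-initdb.d. A sketch, assuming the script above is saved as init-user-db.sh next to the compose file:
services:
  postgres:
    image: postgres
    volumes:
      - ./init-user-db.sh:/docker-entrypoint-initdb.d/init-user-db.sh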

Docker Compose with multiple Dockerfiles and images (Django+Angular)

I'm building an app that uses Django and Angular that is split up into two different repositories and docker images.
My file structure is:
.
  docker-compose.yml
  djangoapp/
    Dockerfile
    manage.py
    ...
  angularapp/
    Dockerfile
    ...
I've been unable to get it to work properly; all the documentation I've found on this seems to expect you to have the docker-compose.yml file together with the Dockerfile.
I've tried multiple different variations but my current (not working) docker-compose.yml file looks like this:
version: '3'
services:
  web:
    build: ./djangoapp
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
This gives me the error can't open file 'manage.py': [Errno 2] No such file or directory.
If I go into the djangoapp/ directory and create a docker-compose.yml file there according to the official Docker docs, it works fine. So there's nothing wrong with the actual build; the issue is accessing it from outside like I'm trying to do.
Update: I also decided to add my Dockerfile located at ./djangoapp/Dockerfile.
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
From what I can see, it seems just to be a simple typo:
version: '3'
services:
  web:
    build: ./djangoapp
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
That ./ is all you're missing, I think. I'm pretty sure without it the docker-compose file doesn't go into the directory specified.
Update
Now that I've seen your Dockerfile, I've noticed that you haven't added manage.py to your container. The same way you added your requirements.txt is what you need to do with manage.py. When a Docker container is built, it only has what you give it plus whatever file structure comes with the base image.
You can either add it in the Dockerfile or you can have a shared volume between a local directory and a container directory. I think adding it in the Dockerfile would be easiest though.
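A sketch of the Dockerfile approach, mirroring the existing requirements.txt lines (note that the ADD . /code/ line already copies the whole build context, so this only matters if manage.py does not sit next to the Dockerfile in djangoapp/):
# Copy manage.py explicitly, the same way requirements.txt is copied
ADD manage.py /code/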
This issue was solved by rebuilding my Docker image with the docker-compose build web command. It seems that at the stage when I was having the error, the build didn't include the files needed. So nothing is wrong with the code I show above in the question; it just needed to be rebuilt.

Files created inside docker are write protected on host

I am using Docker containers for Rails and Ember. I am mounting the source from my local machine into the containers, so all the changes I make locally are reflected in the container.
Now I want to use generators to create files. The files are created, but they are write-protected on my machine.
When I do docker-compose run frontend bash, I get a root@061e4159d4ef:/frontend# superuser prompt inside the container. I can create files in this mode, but these files are write-protected on my host.
I have also tried docker-compose run --user "$(id -u):$(id -g)" frontend bash, which gives me I have no name!@31bea5ae977c:/frontend$, and I am unable to create any files in this mode. Below is the error message that I get.
I have no name!#31bea5ae977c:/frontend$ ember g template about
/frontend/node_modules/ember-cli/node_modules/configstore/node_modules/mkdirp/index.js:90
throw err0;
^
Error: EACCES: permission denied, mkdir '/.config'
at Error (native)
at Object.fs.mkdirSync (fs.js:916:18)
at sync (/frontend/node_modules/ember-cli/node_modules/configstore/node_modules/mkdirp/index.js:71:13)
at Function.sync (/frontend/node_modules/ember-cli/node_modules/configstore/node_modules/mkdirp/index.js:77:24)
at Object.create.all.get (/frontend/node_modules/ember-cli/node_modules/configstore/index.js:39:13)
at Object.Configstore (/frontend/node_modules/ember-cli/node_modules/configstore/index.js:28:44)
at clientId (/frontend/node_modules/ember-cli/lib/cli/index.js:22:21)
at module.exports (/frontend/node_modules/ember-cli/lib/cli/index.js:65:19)
at /usr/local/lib/node_modules/ember-cli/bin/ember:26:3
at /usr/local/lib/node_modules/ember-cli/node_modules/resolve/lib/async.js:44:21
Here is my Dockerfile:
FROM node:6.2
ENV INSTALL_PATH /frontend
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
# Copy package.json separately so it's recreated when package.json
# changes.
COPY package.json ./package.json
RUN npm install
COPY . $INSTALL_PATH
RUN npm install -g phantomjs bower ember-cli ;\
bower --allow-root install
EXPOSE 4200
EXPOSE 49152
CMD [ "ember", "server" ]
Here is my docker-compose.yml file; please note it is not in the current directory, but in the parent.
frontend:
  build: "frontend/"
  dockerfile: "Dockerfile"
  environment:
    - EMBER_ENV=development
  ports:
    - "4200:4200"
    - "49152:49152"
  volumes:
    - ./frontend:/frontend
I want to know: how can I use generators? I am new to learning Docker. Any help is appreciated.
You get the I have no name! because of this: $(id -u):$(id -g)
The user id and group id from your host are not linked to any user in your container.
Solution:
Execute chown UID:GID -R /frontend inside the container if it's already running and you cannot stop it for some reason. Otherwise you could just run the chown command on the host and then run your container again. Note that the UID and GID must belong to a user inside the container.
Example: chown 101:101 -R /frontend, where 101:101 is the UID:GID of www-data.
If there is no user except root in your container, you will have to create a new one. To do so, create a Dockerfile with something like this:
FROM your_image_name
RUN useradd -ms /bin/bash newuser
More information about Dockerfiles can be found here or just by googling it.
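A short sketch combining that with the uid-matching idea from earlier, so files created in the mounted /frontend stay editable on the host (the uid 1000 is an assumption; use the output of id -u on your host):
FROM node:6.2
# Create a user whose uid matches the host user
RUN useradd -m -u 1000 -s /bin/bash newuser
USER newuser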