Docker compose could not open directory permission denied - django

I am totally a newbie when it comes to Docker, and I am trying to understand it with a dummy project.
I have a Django project, and my Dockerfile is inside the Django project's root folder. My docker-compose.yml file is under the top-level folder, which contains the Django project folder and other config files.
my docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    container_name: dummy_project_postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
  event_planner:
    build: ./dummy_project
    container_name: dummy_project
    volumes:
      - .:/web
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db:postgres
and my Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /web
WORKDIR /web
ADD requirements.txt /web/
RUN pip install -r requirements.txt
ADD . /web/
I am trying to run the following commands
# stop and remove the existing containers
docker-compose stop
docker-compose rm -f
# up and run the container
docker-compose build
docker-compose up -d
docker-compose exec dummy_project bash
When I do docker-compose up -d, I see this error.
docker-compose up -d
dummy_project_postgres is up-to-date
Starting dummy_project ... done
warning: could not open directory 'data/db/': Permission denied
I know this question has been asked before, but I didn't quite get the solution I need, and I have been stuck for hours now.
EDIT: I have all the permissions for all the folders under the top folder.
EDIT2: sudo docker-compose up -d also results in the same error.

I solved it by adding ":z" to the end of the volume definition:
version: '3'
services:
  db:
    image: postgres
    container_name: dummy_project_postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data:z
  event_planner:
    build: ./dummy_project
    container_name: dummy_project
    volumes:
      - .:/web
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db:postgres
What ":z" means
Labeling systems like SELinux require that proper labels are placed on
volume content mounted into a container. Without a label, the security
system might prevent the processes running inside the container from
using the content. By default, Docker does not change the labels set
by the OS.
To change the label in the container context, you can add either of
two suffixes :z or :Z to the volume mount. These suffixes tell Docker
to relabel file objects on the shared volumes. The z option tells
Docker that two containers share the volume content. As a result,
Docker labels the content with a shared content label. Shared volume
labels allow all containers to read/write content. The Z option tells
Docker to label the content with a private unshared label. Only the
current container can use a private volume.
https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container---volumes-from
what is 'z' flag in docker container's volumes-from option?
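For reference, the same suffix can be used with a plain docker run bind mount; a minimal sketch using the paths from the compose file above (note that docker run needs an absolute host path, hence $(pwd)):
docker run -d --name dummy_project_postgres \
  -v "$(pwd)/data/db:/var/lib/postgresql/data:z" \
  postgres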

You're trying to mount ./data/db at /var/lib/postgresql/data, and you're executing docker-compose as a non-privileged user.
So, there are two possibilities:
Problem with ./data/db permissions.
Problem with /var/lib/postgresql/data permissions.
The simplest solution is to execute docker-compose as a privileged user (root), but if you don't want to do that, you can try this:
Give permissions to ./data/db (I see from your EDIT that you've already done this).
Give permissions to /var/lib/postgresql/data.
How can you give /var/lib/postgresql/data permissions? Read the following lines:
First, note that /var/lib/postgresql/data is auto-generated by the postgres image, so you need to define a new Dockerfile which modifies these permissions. After that, you also need to modify docker-compose.yml to use this new Dockerfile.
./docker-compose.yml
version: '3'
services:
  db:
    build:
      context: ./mypostgres
      dockerfile: Dockerfile_mypostgres
    container_name: dummy_project_postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
  event_planner:
    build: ./dummy_project
    container_name: dummy_project
    volumes:
      - .:/web
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db:postgres
./dummy_project/Dockerfile --> Without changes
./mypostgres/Dockerfile_mypostgres
FROM postgres
RUN mkdir -p /var/lib/postgresql/data
RUN chmod -R 777 /var/lib/postgresql/data
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
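With this Dockerfile in place, you would rebuild only the db service and bring the stack back up, roughly:
docker-compose build db
docker-compose up -d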

This solution is for the case where your user is not present in the docker group.
First, check whether your user is in the docker group:
grep 'docker' /etc/group
Add your user to the docker group:
If the command output is empty, first create the docker group:
sudo groupadd docker
Otherwise, if your user is not listed in the output, add them to the group:
sudo usermod -aG docker $USER
Reboot your system
Test it again:
docker run hello-world
Tip: Remember to have the docker service started
If it works, try your docker-compose command again.

Related

Docker pull Django image and run container

So, I have followed this tutorial by Docker to create a Django image.
It completely works on my local machine by just running a docker-compose up command from the root directory of my project.
But after pushing the image to Docker Hub (https://hub.docker.com/repository/docker/vivanks/firsttry),
I am pulling the image on another machine and then running:
docker run -p 8020:8020 vivanks/firsttry
But it's not starting, and it shows this status:
EXITED(0)
Can anyone help me on how to pull this image and run it?
My Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
My docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
As @larsks mentioned in his answer, your problem is that your command is in the Compose file rather than in the Dockerfile.
To run your project on another machine as-is, use the following docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: vivanks/firsttry:latest
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db
If you already added CMD python manage.py runserver 0.0.0.0:8000 to your Dockerfile and rebuilt the image, the above can be further simplified to:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: vivanks/firsttry:latest
    ports:
      - "8000:8000"
    depends_on:
      - db
Using docker run will fail in either case, since it won't set up a database.
Edit:
OP, I admire your persistence, but at the same time I do not understand the insistence on using the Docker CLI rather than docker-compose. I recommend using one of the above docker-compose.yml files to start your app.
Nevertheless, I accept the challenge of running it without docker-compose.
Your application fails to start when you use the docker run command because it tries to connect to the database on host db, which does not exist. In your (and my) docker-compose.yml there is a definition of a service called db. Docker Compose uses that definition to set up a database container for you and makes it available to your application under the hostname db.
To start your application without using docker-compose, you need to manually do everything it does for you automatically (the commands below assume you have added CMD ... to your Dockerfile):
docker network create --driver bridge django-test-network
docker run --detach --env POSTGRES_DB=postgres --env POSTGRES_USER=postgres --env POSTGRES_PASSWORD=postgres --network django-test-network --name db postgres:latest
docker run -it --rm --network django-test-network --publish 8080:8000 vivanks/firsttry:latest
The above three commands create a new bridged network, create and start a detached (background) container with a properly configured database connected to that network, and finally create and start an attached (foreground) container based on your image, also attached to that new network. Since both containers are on the same, non-default bridged network, your application will be able to resolve the hostname db to the internal IP address of the database container and start properly.
Once you shut it down with Ctrl+C, the container with your application will delete itself (as it was started with the --rm option), but you also need to clean up the rest manually. To do so, run the following commands:
docker stop db
docker rm -v db
docker network remove django-test-network
The first one stops the database container, the second one removes it and its anonymous volume and the third one removes the network.
I hope this explains everything.
Your Dockerfile doesn't specify a CMD or ENTRYPOINT. When you run...
docker run -p 8020:8020 vivanks/firsttry
...the container has nothing to do (it will actually try to start a Python interactive shell, but since you're not allocating a terminal with -t, the shell just exits, successfully). In your docker-compose.yml, you're passing in an explicit command:
command: python manage.py runserver 0.0.0.0:8000
So the equivalent docker run command line would look like:
docker run -p 8020:8020 vivanks/firsttry python manage.py runserver 0.0.0.0:8000
But you probably want to bake that into your Dockerfile like this:
CMD python manage.py runserver 0.0.0.0:8000
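Putting it together, a sketch of the Dockerfile from the question with the command baked in (everything except the last line is taken from the question as-is):
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
# Default command, so plain docker run starts the server without extra arguments
CMD python manage.py runserver 0.0.0.0:8000
After rebuilding (docker build -t vivanks/firsttry:latest .) and pushing the image again, docker run on the other machine will at least start the server, though it will still need the db service described above to actually work.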

Duplicate images on docker-compose build. How to properly push two services of docker-compose.yml to Docker hub registry?

I have a docker-compose.yml defined as follows with two services (the database and the app):
version: '3'
services:
  db:
    build: .
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=(adminname)
      - POSTGRES_PASSWORD=(adminpassword)
      - CLOUDINARY_URL=(cloudinarykey)
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
The reason I have build: . in both services is that you can't do docker-compose push unless every service has a build entry. However, this means that both services refer to the same Dockerfile, which builds the entire app. So after I run docker-compose build and look at the available images, I see this:
$ docker images
REPOSITORY    TAG       IMAGE ID       CREATED          SIZE
mellon_app    latest    XXXXXXXXXXXX   27 seconds ago   1.14GB
postgres      latest    XXXXXXXXXXXX   27 seconds ago   1.14GB
The IMAGE ID is exactly the same for both images, and the size is exactly the same for both images. This makes me think I've definitely done some unnecessary duplication, as they're both just running the same Dockerfile. I don't want to take up any unnecessary space, so how do I do this properly?
This is my Dockerfile:
FROM (MY FRIENDS ACCOUNT)/django-npm:latest
RUN mkdir usr/src/mprova
WORKDIR /usr/src/mprova
COPY frontend ./frontend
COPY backend ./backend
WORKDIR /usr/src/mprova/frontend
RUN npm install
RUN npm run build
WORKDIR /usr/src/mprova/backend
ENV DJANGO_PRODUCTION=True
RUN pip3 install -r requirements.txt
EXPOSE 8000
CMD python3 manage.py collectstatic && \
python3 manage.py makemigrations && \
python3 manage.py migrate && \
gunicorn mellon.wsgi --bind 0.0.0.0:8000
What is the proper way to push the images to my Docker hub registry without this duplication?
The proper way is to do:
1. docker build -f {path-to-dockerfile} -t {desired-docker-image-name} .
2. docker tag {desired-docker-image-name}:latest {desired-remote-image-name}:latest (or not latest, but whatever you want, like a datetime in int format)
3. docker push {desired-remote-image-name}:latest
and cleanup:
4. docker rmi {desired-docker-image-name}:latest {desired-remote-image-name}:latest
The whole purpose of docker-compose is to help your local development, so it's easier to start several containers and combine them in a local docker-compose network, etc...
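As a concrete sketch with the image name from the question (the Docker Hub repository myhubuser/mellon_app is a made-up placeholder):
docker build -f Dockerfile -t mellon_app .
docker tag mellon_app:latest myhubuser/mellon_app:latest
docker push myhubuser/mellon_app:latest
# cleanup
docker rmi mellon_app:latest myhubuser/mellon_app:latest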

How do I actually package and move my code, say for a production deployment using Docker?

I realize there are all sorts of different strategies and DevOps architectures for deploying Docker containers to production, but I'm still learning some of the basics of Docker and I'm really looking for the most straightforward way to move the following example into, say, AWS or another remote site. I think I understand how to set up projects in various containers, link them, etc. But I don't understand how to save or package my existing code and deploy it elsewhere.
Here is a sample template that I am trying to work off of.
Docker-Compose
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
    - redis:redis
  volumes:
    - /usr/src/app
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
postgres:
  restart: always
  image: postgres:latest
  ports:
    - "5432:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data/
redis:
  restart: always
  image: redis:latest
  ports:
    - "6379:6379"
  volumes:
    - redisdata:/data
Dockerfile
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
This Dockerfile just creates a directory called 'code', which is where I assume the actual development will take place; i.e., run docker-compose, containers are created, and then development starts in the code directory. So let's say I'm done with whatever I'm working on and now I deploy to production using the exact same template above: won't this Dockerfile just create an empty 'code' directory in AWS? What about the work I'm trying to deploy from the code directory on my local machine? How do I package the actual code directory to move it to production?
The result of your docker build should be an image that contains all of the commands, scripts, compiled code, and interpreted code needed to run your application anywhere. Your developers may inject their own copy of this code into the image with a volume mount while they are making changes to avoid constantly rebuilding the entire container (though docker's layer caching does shorten the build time).
Once built, you docker push your image up to a registry server, e.g. Docker Hub, or you can run your own (docker has a registry image in addition to their own commercial offering) or use one provided by a third party like AWS. Then your production environment should docker pull and run this image when you deploy to production.
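A rough sketch of that workflow (the registry user, image name, and tag are placeholders):
# on the build machine
docker build -t myuser/docker_django:1.0 .
docker push myuser/docker_django:1.0
# on the production host
docker pull myuser/docker_django:1.0
docker run -d --name web myuser/docker_django:1.0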

docker-compose volumes with a custom docker file

I am trying to mount my code directory using docker volume, but unable to do so.
Here's the relevant section of my docker-compose file.
web:
  build: ./web
  dockerfile: Dockerfile
  links:
    - db:db
    - elasticsearch:elasticsearch
  volumes:
    - ./web:/usr/src/app
The web folder has a Dockerfile with the following instructions.
FROM python:2.7
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt
docker-compose up is working without any issues, but I don't see any volumes after startup.
root@test-new:/home/django/test# docker inspect test_web | grep -i volume
    "Volumes": null,
    "Volumes": null,
Here's the rest of my stack, if that is relevant.
elasticsearch
nginx
postgres db
The following syntax works:
volumes:
  - ${PWD}/web:/usr/src/app
web:
  build: ./web
  dockerfile: Dockerfile-aside
  links:
    - db:db
    - elasticsearch:elasticsearch
  volumes:
    - ${PWD}/web:/usr/src/app
  env_file: .env
One of the reasons the earlier migrations were not being copied is that I was running the code inside a new container, instead of the container that is currently up.
Instead of
docker-compose run --rm web /bin/bash
when I tried
docker exec -it my_container_name /bin/bash
and then ran the migrate command, the newly generated database migration files were present in the source directory. It would be ideal to use docker-compose exec, but the current version I am using is below 1.7, so I am using the above interim solution in the meantime.
https://github.com/d11wtq/dockerpty/pull/48
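For reference, on Compose 1.7 or newer the same thing can be done without the interim workaround, along these lines (the service name web is taken from the compose file above):
docker-compose exec web /bin/bash
# then, inside the container
python manage.py makemigrations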

Auto-reloading of code changes with Django development in Docker with Gunicorn

I'm using a Docker container for Django development, and the container runs Gunicorn with Nginx. I'd like code changes to auto-load, but the only way I can get them to load is by rebuilding with docker-compose (docker-compose build). The problem with "build" is that it re-runs all my pip installs.
I'm using the Gunicorn --reload flag, which is apparently supposed to do what I want. Here are my Docker config files:
## Dockerfile:
FROM python:3.4.3
RUN mkdir /code
WORKDIR /code
ADD . /code/
RUN pip install -r /code/requirements/docker.txt
## docker-compose.yml:
web:
  restart: always
  build: .
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn myapp.wsgi:application -w 2 -b :8000 --reload
nginx:
  restart: always
  build: ./config/nginx
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
postgres:
  restart: always
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  ports:
    - "5432:5432"
I've tried some of the other Docker commands (docker-compose restart, docker-compose up), but the code won't refresh.
What am I missing?
Thanks to kikicarbonell, I looked into having a volume for my code, and after looking at the Docker Compose recommended Django setup, I added volumes: - .:/code to my web container in docker-compose.yml, and now any code changes I make automatically apply.
## docker-compose.yml:
web:
  restart: always
  build: .
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
    - .:/code
  env_file: .env
  command: /usr/local/bin/gunicorn myapp.wsgi:application -w 2 -b :8000 --reload
Update: for a thorough example of using Gunicorn and Django with Docker, check out this example project from Rackspace, which also shows how to use docker-machine to launch the setup on remote servers like Rackspace Cloud.
Caveat: currently, this method does not work when your code is stored locally and the Docker host is remote (e.g., on a cloud provider like Digital Ocean or Rackspace). This also applies to virtual machines if your local file system is not mounted on the VM. Note that there are separate volume drivers (e.g., flocker), and there might be something out there to address this need. For now, the "fix" is to rsync/scp your files up to a directory on the remote Docker host. Then, the --reload flag will auto-reload Gunicorn after any scp/rsync.
Update: if pushing code to a remote Docker host, I find it far easier to just rebuild the Docker container (e.g., docker-compose build web && docker-compose up -d). This can be slower than the rsync approach if your src folder is large, though.
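For the rsync approach mentioned above, something along these lines would do (the remote host and destination path are placeholders):
rsync -avz --exclude '.git' ./ user@remote-docker-host:/srv/myapp/
With the source mounted from /srv/myapp/ on the remote host, gunicorn --reload then picks up the synced changes.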
You have another problem: Docker caches each layer that it builds. You shouldn't have to re-run pip install every time!
ADD . /code/
RUN pip install -r /code/requirements/docker.txt
This is your problem: Docker checks every ADD statement to see if any files have changed, and invalidates the cache for it and every later step if they have. The correct way to do this is...
ADD ./requirements/docker.txt /code/requirements/
RUN pip install -r /code/requirements/docker.txt
ADD . /code/
...which will only invalidate your pip install line if your requirements file changes!
It seems like you need to match the WORKDIR/COPY commands in your Dockerfile with the volume paths in your docker-compose.yml when creating the volume. Here is an example:
Dockerfile
WORKDIR /app
COPY . /app
docker-compose.yml
app:
  # ... other commands ...
  volumes:
    - ./app:/app
I faced a very similar problem trying to configure auto-reload of the project with a slightly different setup. I set up volumes, but it did not work anyway. After an hour of googling and a thorough examination of my code, I figured out that the volume paths in the Dockerfile and docker-compose.yml simply did not match. Make sure that they are the same.
My Dockerfile
FROM python:3.6.9-alpine3.10
COPY ./requirements/local.txt /app/requirements/local.txt
RUN set -ex \
&& apk add --no-cache --virtual .build-deps postgresql-dev git gcc libgcc musl-dev jpeg-dev zlib-dev build-base \
&& python -m venv /env \
&& /env/bin/pip install --upgrade pip \
&& /env/bin/pip install --no-cache-dir -r /app/requirements/local.txt \
&& runDeps="$(scanelf --needed --nobanner --recursive /env \
| awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
| sort -u \
| xargs -r apk info --installed \
| sort -u)" \
&& apk add --virtual rundeps $runDeps \
&& apk del .build-deps
### Here is the path to the project
COPY . /app
WORKDIR /app/project
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
EXPOSE 8088
My docker-compose.yml
version: '3'
services:
  web:
    build:
      context: ../..
      dockerfile: compose/local/Dockerfile
    restart: on-failure
    command: python manage.py runserver 0.0.0.0:8088 --settings=project.settings.local
    volumes:
      # - .:/var/www/app # messed up path
      - .:/app # correct path
    env_file:
      - ../../.env.local
    depends_on:
      - db
    ports:
      - "8000:8000"
Since I never found a desirable solution, consider this interesting hack. By posting it here I wanted to see if anyone has had similar/good/bad experiences with this workaround.
To make code reload locally for development, I simply created a view that immediately calls exit(). The exit will crash Django and a reload will occur, after which the code changes are available. The reboot takes a fraction of a second and can be triggered via a tab in the browser, a requests.get call, or any other similar call. The reload is not automatic, but it does skip any Docker lag such as a restart.
When the exit is called you will see the PID increment (if tailing logs):
web | [2019-07-15 18:29:52 +0000] [22] [INFO] Worker exiting (pid: 22)
web | [2019-07-15 18:29:52 +0000] [24] [INFO] Booting worker with pid: 24
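A minimal sketch of such a view (the view name and URL route are made up for illustration):
# views.py
def force_reload(request):
    # Deliberately crash the Gunicorn worker; a fresh worker boots with the new code.
    exit()
# urls.py: wire it up to any path, e.g. path('force-reload/', views.force_reload)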
I hope this helps others and/or gets feedback on this approach.
If you use docker-compose:
Dockerfile: when you build the image from the Dockerfile you need to add a directory to hold your code (in my case /api/):
WORKDIR /api/ -> important
COPY . . -> important
docker-compose: your docker-compose file has your app service with the Django image you just built from the Dockerfile; now you need to add a volume that mounts onto the same WORKDIR you used in the Dockerfile:
volumes:
  - .:/api/ -> important (it must match the WORKDIR from the Dockerfile)
And that is all.