Many questions have been asked on here about similar issues, such as this, this, this and this, but none of the solutions there solve my problem. Please don't close this question.
Problem:
I am running django with nginx and postgres on docker. Secret information is stored in an .env file. My postgres data is not persisting with docker-compose up/start and docker-compose down/stop/restart.
This is my docker-compose file:
version: '3.7'

services:
  web:
    build: ./app
    command: gunicorn umngane_project.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app/
    expose:
      - 8000
    environment:
      - SECRET_KEY=${SECRET}
      - SQL_ENGINE=django.db.backends.postgresql
      - SQL_DATABASE=postgres
      - SQL_USER=${POSTGRESQLUSER}
      - SQL_PASSWORD=${POSTGRESQLPASSWORD}
      - SQL_HOST=db
      - SQL_PORT=5432
      - SU_NAME=${SU_NAME}
      - SU_EMAIL=${SU_EMAIL}
      - SU_PASSWORD=${SU_PASSWORD}
    depends_on:
      - db
  db:
    image: postgres:11.2-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/usr/src/app/assets
    ports:
      - 1337:80
    depends_on:
      - web

volumes:
  postgres_data:
    external: true # I tried running without this and the result is the same
  static_volume:
My entrypoint script is this:
python manage.py flush --no-input
python manage.py makemigrations
python manage.py migrate
python manage.py createsuperuser --user "${SU_NAME}" --email "${SU_EMAIL}" --password "${SU_PASSWORD}"
python manage.py collectstatic --no-input
exec "$@"
where createsuperuser is a custom module that creates a superuser in the application.
This setup is not persisting the information in postgres_data.
Additional information:
Before doing anything, I check to see that there is no volume named postgres_data using docker volume ls and get just that.
At which point I run docker-compose up -d/docker-compose up -d --build and everything works out fine with no errors.
I run docker inspect postgres_data and it shows "CreatedAt": "X1"
I am able to login as the superuser. I proceed to create admin users, logout as the superuser and then login as any of the admin users with no problem. I run docker exec -it postgres_data psql -U <postgres_user> to make sure the admin users are in the database and find just that.
At which point I proceed to run docker-compose down/docker-compose stop with no problem. I run docker volume ls and it shows that postgres_data is still there.
I run docker inspect postgres_data and it shows "CreatedAt": "X2"
To test that everything works as expected I run docker-compose up -d/docker-compose up -d --build/docker-compose start/docker-compose restart.
I run docker inspect postgres_data and it shows "CreatedAt": "X3"
At which point I proceed to try and login as an admin user and am not able to. I run docker exec -it postgres_data psql -U <postgres_user> again but this time only see the superuser, no admin users.
(Explanation: I am here using the forward slash to show all the different things I tried on different attempts. I tried every combination of commands shown here.)
The issue is that you run "flush" in your entrypoint script, which clears the database. The entrypoint runs whenever you boot or recreate the container.
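A minimal sketch of the entrypoint with the destructive flush step removed (keeping the asker's other commands, including the custom createsuperuser command, as they were):
python manage.py makemigrations
python manage.py migrate
python manage.py createsuperuser --user "${SU_NAME}" --email "${SU_EMAIL}" --password "${SU_PASSWORD}"
python manage.py collectstatic --no-input
exec "$@"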
One way to get persistent data is to bind-mount an actual path on disk instead of using a named volume:
...
  db:
    image: postgres:11.2-alpine
    volumes:
      - "/local/path/to/postgres/data:/var/lib/postgresql/data/"
...
This maps the container's postgres data location to a path you specify, so the data persists directly on disk unless purposely deleted.
A docker volume, as far as I know, is going to be removed on container removal.
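If you go the bind-mount route, you may want to create the host directory yourself beforehand (otherwise Docker will create it owned by root), for example:
mkdir -p /local/path/to/postgres/data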
Related
I have an application running in a Docker container and a psql database running in a Docker container as well. I want to dump the database while in the Django container. I know Django has dumpdata, but that command takes a long time; I also tried docker exec pg_dump, but that command doesn't work inside the Django container.
services:
  db_postgres:
    image: postgres:10.5-alpine
    restart: always
    volumes:
      - pgdata_invivo:/var/lib/postgresql/data/
    env_file:
      - .env
  django:
    build: .
    restart: always
    volumes:
      - ./static:/static
      - ./media:/media
    ports:
      - 8000:8000
    depends_on:
      - db_postgres
    env_file:
      - .env
Is there any way to do pg_dump without using docker exec pg_dump while in django container?
While your containers are running, type:
docker-compose down -v
This will remove the volumes, and with them all the data stored in the container's database.
Now run
docker-compose up --build
docker-compose exec django python manage.py migrate
to create your tables again.
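As an aside on the original pg_dump question: if the postgres client tools happen to be installed in the django image, a dump can also be taken over the network from inside the django container instead of via docker exec. A rough sketch, assuming the service name db_postgres and the standard POSTGRES_* variables from your .env:
PGPASSWORD="$POSTGRES_PASSWORD" pg_dump -h db_postgres -U "$POSTGRES_USER" "$POSTGRES_DB" > /tmp/dump.sql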
So, I have followed this tutorial by Docker to create a Django image.
It completely works on my local machine by just running a docker-compose up command from the root directory of my project.
But, after pushing the image to docker hub https://hub.docker.com/repository/docker/vivanks/firsttry
I am pulling the image to another machine and then running:
docker run -p 8020:8020 vivanks/firsttry
But it's not getting started and showing this error:
EXITED(0)
Can anyone help me on how to pull this image and run it?
My Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
My docker-compose.yml
version: '3'

services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
As @larsks mentioned in his answer, your problem is that your command is in the Compose file rather than in the Dockerfile.
To run your project on another machine as-is, use the following docker-compose.yml:
version: '3'

services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: vivanks/firsttry:latest
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db
If you already added CMD python manage.py runserver 0.0.0.0:8000 to your Dockerfile and rebuilt the image, the above can be further simplified to:
version: '3'

services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: vivanks/firsttry:latest
    ports:
      - "8000:8000"
    depends_on:
      - db
Using docker run will fail in either case, since it won't set up a database.
Edit:
OP, I admire your persistence, but at the same time do not understand the insistence on using Docker CLI rather than docker-compose. I recommend using one of the above docker-compose.yml files to start your app.
Nevertheless, I accept the challenge of running it without docker-compose.
Your application fails to start when you use the docker run command because it tries to connect to the database on host db, which does not exist. In your (and my) docker-compose.yml there is a definition of a service called db. Docker-compose uses that definition to set up a database container for you and makes it available to your application under the hostname db.
To start your application without using docker-compose, you need to manually do everything it does for you automatically (the commands below assume you have added CMD... to your Dockerfile):
docker network create --driver bridge django-test-network
docker run --detach --env POSTGRES_DB=postgres --env POSTGRES_USER=postgres --env POSTGRES_PASSWORD=postgres --network django-test-network --name db postgres:latest
docker run -it --rm --network django-test-network --publish 8080:8000 vivanks/firsttry:latest
The above 3 commands create a new bridged network, create and start a detached (background) container with properly configured database connected to that network and finally create and start an attached (foreground) container based on your image, also attached to that new network. Since both containers are on the same, non-default bridged network, your application will be able to resolve hostname db to internal IP address of the database container and start properly.
Once you shut it down with Ctrl+C, the container with your application will delete itself (as it was started with option --rm), but you need to also manually clean up the rest. To do so run the following commands:
docker stop db
docker rm -v db
docker network remove django-test-network
The first one stops the database container, the second one removes it and its anonymous volume and the third one removes the network.
I hope this explains everything.
Your Dockerfile doesn't specify a CMD or ENTRYPOINT. When you run...
docker run -p 8020:8020 vivanks/firsttry
...the container has nothing to do (which means it will actually try to start a Python interactive shell, but since you're not allocating a terminal with -t, the shell just exits. Successfully). In your docker-compose.yml, you're passing in an explicit command:
command: python manage.py runserver 0.0.0.0:8000
So the equivalent docker run command line would look like:
docker run -p 8020:8020 vivanks/firsttry python manage.py runserver 0.0.0.0:8000
But you probably want to bake that into your Dockerfile like this:
CMD python manage.py runserver 0.0.0.0:8000
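If you prefer the exec form, which lets signals reach the Python process directly instead of going through a shell, an equivalent sketch would be:
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]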
I would like to run a script (to populate my MySQL Docker container) only when my Docker containers are first built. I'm running the following docker-compose.yml file, which contains a Django container.
version: '3'

services:
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3406"
    volumes:
      - my-db:/var/lib/mysql
  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000
    depends_on:
      - mysql
  apache:
    restart: always
    build: ./apache/
    ports:
      - "80:80"
    #volumes:
    #  - web-static:/www/static
    links:
      - web:web

volumes:
  my-db:
I have this web/Dockerfile
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc \
&& rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
RUN mkdir -p /app/
WORKDIR /app/
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY entrypoint.sh /app/
COPY . /app/
RUN ["chmod", "+x", "/app/entrypoint.sh"]
ENTRYPOINT ["/app/entrypoint.sh"]
and these are the contents of my entrypoint.sh file
#!/bin/bash
set -e
python manage.py migrate maps
python manage.py loaddata maps/fixtures/country_data.yaml
python manage.py loaddata maps/fixtures/seed_data.yaml
exec "$@"
The issue is that when I repeatedly run docker-compose up, the entrypoint.sh script runs its commands every time. I would prefer the commands only run when the container is first built, but they seem to always run when the container is restarted. Is there any way to adjust what I have to achieve this?
An approach that I've used before is to wrap your loaddata calls in your own management command, which first checks if there's any data in the database, and if there is, doesn't do anything. Something like this:
# your_app/management/commands/maybe_init_data.py
from django.core.management import call_command
from django.core.management.base import BaseCommand
from address.models import Country

class Command(BaseCommand):
    def handle(self, *args, **options):
        if not Country.objects.exists():
            self.stdout.write('Seeding initial data')
            call_command('loaddata', 'maps/fixtures/country_data.yaml')
            call_command('loaddata', 'maps/fixtures/seed_data.yaml')
And then change your entrypoint script to:
python manage.py migrate
python manage.py maybe_init_data
(Assumption here that you have a Country model - replace with a model that you do actually have in your fixtures.)
Seeding your database on the first run is a very common case. As others have suggested, you can change your entrypoint.sh script and add some conditional logic to it to make it work the way you want.
But I think it is better practice to separate the logic for seeding the database from the logic for running the services, rather than keeping them tangled together; that coupling might cause unwanted behavior in the future.
I was going to suggest a workaround using docker-compose and started searching for syntax to exclude some services when doing docker-compose up, but found out this is still an open issue. However, I found this Stack Overflow answer which suggests a very nice approach.
version: '3'

services:
  all-services:
    image: docker4w/nsenter-dockerd # you want to put there some small image
    command: sh -c "echo start"
    depends_on:
      - mysql
      - web
      - apache
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3406"
    volumes:
      - my-db:/var/lib/mysql
  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000
    depends_on:
      - mysql
  apache:
    restart: always
    build: ./apache/
    ports:
      - "80:80"
    #volumes:
    #  - web-static:/www/static
    links:
      - web:web
  seed:
    build: ./web
    env_file: .env
    environment:
      DEBUG: 'true'
    entrypoint: /bin/bash -c "/bin/bash -c \"$${@}\""
    command: |
      /bin/bash -c "
      set -e
      python manage.py loaddata maps/fixtures/country_data.yaml
      python manage.py loaddata maps/fixtures/seed_data.yaml
      /bin/bash || exit 0
      "
    depends_on:
      - mysql

volumes:
  my-db:
If you use something like the above, you will be able to run the seeding stage before bringing up the rest of the stack.
To seed your database, run:
docker-compose up seed
To run your whole stack, use:
docker-compose up -d all-services
I think this is a clean approach and it can be extended to many different scenarios and use cases.
UPDATE
If you really want to be able to run the whole stack together and also prevent unexpected behavior caused by running the loaddata command multiple times, I would suggest you define a new Django management command that checks for existing data. Look at this:
checkseed.py
from django.core.management.base import BaseCommand, CommandError
from project.models import Country # or whatever model you have seeded

class Command(BaseCommand):
    help = 'Check if seed data already exists'

    def handle(self, *args, **options):
        if Country.objects.all().count() > 0:
            self.stdout.write(self.style.WARNING('Data already exists .. skipping'))
            # Raising CommandError exits with a non-zero status, so the
            # `checkseed && loaddata ...` chain in the seed service skips the loaddata calls.
            raise CommandError('Seed data already present')
        # do all the checks for your data integrity
        self.stdout.write(self.style.SUCCESS('Nothing exists'))
And after this, you can change your seed part of docker-compose as below:
  seed:
    build: ./web
    env_file: .env
    environment:
      DEBUG: 'true'
    entrypoint: /bin/bash -c "/bin/bash -c \"$${@}\""
    command: |
      /bin/bash -c "
      set -e
      python manage.py checkseed &&
      python manage.py loaddata maps/fixtures/country_data.yaml
      python manage.py loaddata maps/fixtures/seed_data.yaml
      /bin/bash || exit 0
      "
    depends_on:
      - mysql
This way, you can be sure that if anyone runs docker-compose up -d by mistake, it will not cause integrity errors or problems like that.
Instead of using the entrypoint.sh file, why not just run the commands in the web/Dockerfile?
RUN python manage.py migrate maps
RUN python manage.py loaddata maps/fixtures/country_data.yaml
RUN python manage.py loaddata maps/fixtures/seed_data.yaml
That way these changes will be baked into the image and, when you start the image, these changes will already have been executed.
I had a similar case recently. Since the ENTRYPOINT contains the command that is executed every time the container starts, a solution would be to include some logic in the entrypoint.sh script to skip the updates (in your case the migration and the data load) if the effects of these operations are already present in the database.
For example:
#!/bin/bash
set -e

# Function that verifies whether the effects of the migration and data load are already present in the database
function checkEffects() {
    IS_UPDATED=0
    # Check effects and set IS_UPDATED to 1 if the effects are not present
}

checkEffects

if [[ $IS_UPDATED == 0 ]]
then
    echo "Database already initialized. Nothing to do"
else
    echo "Database is clean. Initializing it"
    python manage.py migrate maps
    python manage.py loaddata maps/fixtures/country_data.yaml
    python manage.py loaddata maps/fixtures/seed_data.yaml
fi

exec "$@"
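For illustration only, one hypothetical way to fill in checkEffects for the MySQL stack above is to probe a table that the fixtures are expected to populate; the host name, credentials, database and table name below are assumptions rather than something taken from the question, and the mysql client would need to be available in the web image:
function checkEffects() {
    IS_UPDATED=0
    # Hypothetical probe: count rows in a table the fixtures are expected to populate.
    # Falls back to 0 if the query fails (e.g. the table does not exist yet).
    ROWS=$(mysql -N -h mysql -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" maps_data \
        -e 'SELECT COUNT(*) FROM maps_country' 2>/dev/null || echo 0)
    if [ "$ROWS" -eq 0 ]; then
        IS_UPDATED=1   # nothing seeded yet, so run the migration and loaddata steps
    fi
}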
However, the scenario is more complex, because verifying the effects that let you decide whether or not to proceed with the updates can be quite difficult when they involve multiple tables and datasets.
Moreover, it becomes even more complex if you think about how the containers will be upgraded over time.
Example: today you're working with a local Dockerfile for your web service, but in production you'll probably start versioning this service and uploading it to a Docker registry. So when you upload your first release (for example version 1.0.0) you'll specify the following in your docker-compose.yml:
  web:
    restart: always
    image: <DOCKER_REGISTRY_HOST>:<DOCKER_REGISTRY_PORT>/web:1.0.0
    ports: # to access the container from outside
      - "8000:8000"
Then you'll release version 1.2.0 of the web service container, which includes other changes to the schema, for example loading additional data in entrypoint.sh:
#1.0.0 updates
python manage.py migrate maps
python manage.py loaddata maps/fixtures/country_data.yaml
python manage.py loaddata maps/fixtures/seed_data.yaml
#1.2.0 updates
python manage.py loaddata maps/fixtures/other_seed_data.yaml
Here you'll have 2 scenarios (let's ignore for now the need to check for effects in the script):
1- You deploy your services for the first time with web:1.2.0: as you start from a clean database, you should make sure that all updates are executed (both the 1.0.0 and the 1.2.0 ones).
This case is easy, because you can just execute all updates.
2- You upgrade the web container to 1.2.0 in an existing environment where 1.0.0 was running: as your database has already been initialized with the 1.0.0 updates, you should make sure that only the 1.2.0 updates are executed.
This is harder, because you need to be able to check which version has already been applied to the database in order to skip the 1.0.0 updates. That means you would have to store the web version somewhere in the database, for example.
Given all this, I think the best solution is to work directly on the scripts that create the schema and populate the data, making these instructions idempotent and paying particular attention to the upgrade ones.
Some examples:
1- Create a table
Instead of creating the table as follows:
CREATE TABLE country
use IF NOT EXISTS to avoid a "table already exists" error:
CREATE TABLE IF NOT EXISTS country
2- Insert default data
Instead of inserting data without the primary key specified:
INSERT INTO maps.country (name) VALUES ("USA");
Include primary key in order to avoid duplicates:
INSERT INTO maps.country (id,name) VALUES (1,"USA");
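Note that even with the primary key specified, running the script a second time will hit a duplicate-key error; in MySQL you can make the insert safe to re-run with INSERT IGNORE (or ON DUPLICATE KEY UPDATE), e.g.:
INSERT IGNORE INTO maps.country (id,name) VALUES (1,"USA");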
Usually build and deploy steps are separated.
Your ENTRYPOINT is part of deploy.
If you want to decide manually which deploy runs should execute the migrate commands and which should just replace the containers with new ones (maybe from a fresh image), then you can split it into separate commands:
start database (if not running)
docker-compose -p production -f docker-compose.yml up -d mysql
migrate
docker run \
  --rm \
  --network production_default \
  --env-file docker.env \
  --entrypoint python \
  my-backend-image-name:prod manage.py migrate maps
and then deploy fresh image
docker-compose -p production -f docker-compose.yml up -d
And each time, decide manually whether you should run the migrate step or not.
Context
I am trying to run my Django application and Postgres database in a docker development environment using docker-compose (it's my first time using Docker).
I want to use my application with a custom role and database both named teddycrepineau (as opposed to using the default postgres user and db).
Goal
My goal is to deploy a web app powered on the front end by react and the backend by django restapi, the whole running in a docker.
System/Version
python: 3.7
django: 2.1
OS: Mac OS High Sierra
What error am I getting
When running docker-compose up with my custom role and db, I am getting the following error django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist. When running the same command with the default role and db postgres Django is able to start normally.
My understanding was that running docker-compose up would create the role and db passed as environment variables.
What I have tried so far
I read multiple threads on this site, GitHub, and Docker:
Tried to delete my container and rebuild it with the formatting suggested here
Went through this GitHub issue
Tried to move my environment variables from the .env file to the environment section inside my docker-compose.yml file and rebuild my container
Files
docker-compose.yml
version: '3'

volumes:
  postgres_data: {}

services:
  postgres:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
    env_file: .env
    ports:
      - "5432"
  django:
    build:
      context: teddycrepineau-backend
      dockerfile: teddycrepineau-root/Dockerfile
    command: ./teddycrepineau-backend/teddycrepineau-root/start.sh
    env_file: .env
    volumes:
      - .:/teddycrepineau-backend
    ports:
      - "8000:8000"
    depends_on:
      - postgres
Dockerfile
FROM python:3.7
ENV PYTHONUNBUFFERED 1
WORKDIR /teddycrepineau-backend/
ADD ./teddycrepineau-root/requirements.txt /teddycrepineau-backend/
RUN pip install -r requirements.txt
ADD . /teddycrepineau-backend/
RUN chmod +x ./teddycrepineau-root/start.sh
start.sh
#!/usr/bin/env bash
python3 ./teddycrepineau-backend/teddycrepineau-root/manage.py runserver
.env
POSTGRES_PASSWORD=
POSTGRES_USER=teddycrepineau
POSTGRES_DB=teddycrepineau
EDIT
My file structure is as follow
root
|___ teddycrepineau-backend
|___ teddycrepineau-root
|___ teddycrepineau
|___ Dockerfile
|___ manage.py
|___ start.sh
|___ teddycrepineau-frontend
|___ React-App
|___ .env
|___ docker-compose.yml
When I move my docker-compose.yml file inside my backend folder, it starts as expected with the custom user and db (though I am not able to access my site when going to 127.0.0.1:8000, but that is mostly a different issue). When I put my docker-compose.yml file in my root folder, I get the error django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist.
This happens because your pgsql db was launched without any envs. The pgsql Docker image only uses the envs the first time the container and its data volume are created; after that it won't recreate the DB and users.
The solution is to remove the pgsql volume so that the next time you docker-compose up you will have a fresh db with the envs read. A simple way to do it is docker-compose down -v.
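A sketch of that sequence (note that this deletes the existing database data):
docker-compose down -v   # removes the containers and the named postgres volume
docker-compose up -d     # postgres re-runs initdb and creates the role/db from POSTGRES_USER / POSTGRES_DB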
Change your env order like this.
POSTGRES_DB=teddycrepineau
POSTGRES_USER=teddycrepineau
POSTGRES_PASSWORD=
I found it in this issue. I hope it works.
When you run
sudo docker-compose exec web python manage.py migrate
you will of course receive
django.db.utils.OperationalError: FATAL: role "user" does not exist
First you need to run
sudo docker-compose down -v
sudo docker system prune
Check the containers; they should be deleted:
sudo docker ps -a
Then check the images:
sudo docker image ls
Don't forget to delete the images:
sudo docker image rm 3e57319a7a3a
Go to the project folder and then run:
python manage.py migrate
If it didn't work, run
python manage.py migrate --run-syncdb
and
sudo docker-compose up -d --build
sudo docker-compose exec web python manage.py collectstatic --no-input
sudo docker-compose exec web python manage.py makemigrations
sudo docker-compose exec web python manage.py migrate auth
sudo docker-compose exec web python manage.py migrate --run-syncdb
I encountered the issue due to a mismatch between the $POSTGRES_DB and $POSTGRES_USER variables. By default, psql will attempt to connect to a database with the same name as the user logging in, so when there is a mismatch between the variables it fails with an error along the lines of:
psql: FATAL: database "root" does not exist
I had to edit the init script that I was writing to include the -d "$POSTGRES_DB" option like so:
#!/bin/bash
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" -d "$POSTGRES_DB" <<-EOSQL
CREATE USER docker;
CREATE DATABASE docker;
GRANT ALL PRIVILEGES ON DATABASE docker TO docker;
EOSQL
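For reference, the official postgres image only runs init scripts like this on the very first initialization of the data directory, when they are mounted into /docker-entrypoint-initdb.d. A sketch (the file name here is just an example):
volumes:
  - ./init-user-db.sh:/docker-entrypoint-initdb.d/init-user-db.sh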
I am totally a newbie when it comes to Docker. And I am trying to understand it with a dummy project.
I have a django project and my Dockerfile is inside the Django project's root folder. And my docker-compose.yml file is under the top root folder which contains django project folder and other config files.
my docker-compose.yml
version: '3'

services:
  db:
    image: postgres
    container_name: dummy_project_postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
  event_planner:
    build: ./dummy_project
    container_name: dummy_project
    volumes:
      - .:/web
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db:postgres
and my Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /web
WORKDIR /web
ADD requirements.txt /web/
RUN pip install -r requirements.txt
ADD . /web/
I am trying to run the following commands
# stop and remove the existing containers
docker-compose stop
docker-compose rm -f
# up and run the container
docker-compose build
docker-compose up -d
docker-compose exec dummy_project bash
When I do docker-compose up -d, I see this error.
docker-compose up -d
dummy_project_postgres is up-to-date
Starting dummy_project ... done
warning: could not open directory 'data/db/': Permission denied
I know this question has been asked before, but I didn't quite get the solution I need and I have been stuck for hours now.
EDIT: I have all the permissions for all the folders under the top folder
EDIT2: sudo docker-compose up -d also results the same error.
I solved it by adding ":z" to the end of the volume definition:
version: '3'

services:
  db:
    image: postgres
    container_name: dummy_project_postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data:z
  event_planner:
    build: ./dummy_project
    container_name: dummy_project
    volumes:
      - .:/web
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db:postgres
What ":z" means
Labeling systems like SELinux require that proper labels are placed on
volume content mounted into a container. Without a label, the security
system might prevent the processes running inside the container from
using the content. By default, Docker does not change the labels set
by the OS.
To change the label in the container context, you can add either of
two suffixes :z or :Z to the volume mount. These suffixes tell Docker
to relabel file objects on the shared volumes. The z option tells
Docker that two containers share the volume content. As a result,
Docker labels the content with a shared content label. Shared volume
labels allow all containers to read/write content. The Z option tells
Docker to label the content with a private unshared label. Only the
current container can use a private volume.
https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container---volumes-from
what is 'z' flag in docker container's volumes-from option?
You're trying to mount ./data/db in /var/lib/postgresql/data and you're executing docker-compose with a non-privileged user.
So, we can have two possibilities:
Problem with ./data/db permissions.
Problem with /var/lib/postgresql/data
The simplest solution is to execute docker-compose as a privileged user (root), but if you don't want to do that, you can try this:
Give permissions to ./data/db (I see your EDIT that you've already done it).
Give permissions to /var/lib/postgresql/data
How can you give /var/lib/postgresql/data permissions? Read the following lines:
First, note that /var/lib/postgresql/data is auto-generated by the postgres Docker image, so you need to define a new Dockerfile that modifies these permissions. After that, you also need to modify docker-compose to use this new Dockerfile.
./docker-compose.yml
version: '3'

services:
  db:
    build:
      context: ./mypostgres
      dockerfile: Dockerfile_mypostgres
    container_name: dummy_project_postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
  event_planner:
    build: ./dummy_project
    container_name: dummy_project
    volumes:
      - .:/web
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db:postgres
./dummy_project/Dockerfile --> Without changes
./mypostgres/Dockerfile_mypostgres
FROM postgres
RUN mkdir -p /var/lib/postgresql/data
RUN chmod -R 777 /var/lib/postgresql/data
ENTRYPOINT docker-entrypoint.sh
This solution is for the case where your user is not in the docker group.
First check whether your user is in the docker group:
grep 'docker' /etc/group
Add your user to the docker group:
If the command output is empty, first create the docker group:
sudo groupadd docker
Otherwise, if your user does not appear in the command output, add them to the group:
sudo usermod -aG docker $USER
Reboot your system
Test it again:
docker run hello-world
Tip: Remember to have the docker service started
If it works, try your docker-compose command again.