I'm trying to import a PostgreSQL dump into a Docker container, but it doesn't work.
Dockerfile:
FROM postgres
COPY postgres.sql /docker-entrypoint-initdb.d/
docker-compose.yml:
version: "3.9"
services:
  db:
    build: ./DB
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=gamenews
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=321678
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    environment:
      - POSTGRES_NAME=gamenews
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=321678
    depends_on:
      - db
The project structure and the logs from docker compose up were attached as screenshots (not reproduced here).
I would suggest that you use a proper Postgres image:
postgres:
  image: postgres:13
  volumes:
    - '.:/app:rw'
    - 'postgres:/var/lib/postgresql/data'
Here's a list of all the tags you can use: https://hub.docker.com/_/postgres
Just spin that up. The volume maps the data directory to your hard drive, so it isn't ephemeral inside the container, and then you can run pg_restore on your dump file.
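As a rough sketch of that last step (the dump file name is a placeholder; the service name comes from the snippet above and the user/database from the question's environment):

# copy the dump into the running container, then restore it
docker compose cp ./dump.custom postgres:/tmp/dump.custom
docker compose exec postgres pg_restore -U postgres -d gamenews /tmp/dump.custom

# for a plain SQL dump, feed it to psql instead of pg_restore
docker compose exec -T postgres psql -U postgres -d gamenews < ./postgres.sql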
I created a Dockerfile and a docker-compose.yml file.
I built the image with the tag "django-image".
Dockerfile:
FROM python:3
WORKDIR /code-django
COPY . /code-django
RUN pip3 install -r requirements.txt
docker-compose.yml:
services:
  db:
    image: postgres
    container_name: db-money
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    image: django-image
    container_name: money
    volumes:
      - .:/code-django
    ports:
      - "8000:8000"
    environment:
      - POSTGRES_NAME=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    depends_on:
      - db
In docker compose I have two services, "db" and "web". Docker Compose creates the "db" container as "container_name: db-money" and starts it, but it does not create the second container with "container_name: money". Why doesn't the second "container_name" work?
Just execute:
docker compose up
inside the folder that contains docker-compose.yaml, or pass -f pathToYourDockerComposeFile to point at it.
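For example (the path is a placeholder):

docker compose -f /path/to/docker-compose.yml up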
My goal is to keep the data even after deleting and rebuilding the containers.
More specifically, I want the data to survive commands like docker-compose down or docker-compose up -d --build.
My environment is
Docker
Django
PostgreSQL
Nginx
Gunicorn
docker-compose.prod.yml
version: '3.8'
services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn ecap.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    env_file:
      - ./.env.prod
    depends_on:
      - db
  db:
    image: postgres:13.0-alpine
    volumes:
      - db_data:/var/lib/postgresql/data/
    env_file:
      - ./.env.prod.db
  nginx:
    build: ./nginx
    volumes:
      - ./static_volume:/home/app/web/staticfiles
      - ./media_volume:/home/app/web/mediafiles
    ports:
      - 80
    depends_on:
      - web
volumes:
  db_data:
  static_volume:
  media_volume:
.env.prod.db
POSTGRES_USER=nita
POSTGRES_PASSWORD=*******
POSTGRES_DB=ecap_prod
I assume the problem is how the volumes are written in the yml file. Although I followed the approach shown, the data is not kept.
The problem was in the entrypoint.sh file.
It contains the line python manage.py flush --no-input.
When I removed that line, the data persisted.
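For reference, a minimal sketch of what such an entrypoint.sh typically looks like (the wait loop is an assumption, not taken from the post); the flush line is the one that wipes the database on every start:

#!/bin/sh
# wait until Postgres accepts connections (assumed helper loop, not from the original post)
while ! nc -z "$SQL_HOST" "$SQL_PORT"; do
  sleep 0.1
done

# this is the line that cleared all data on every container start
# python manage.py flush --no-input

python manage.py migrate
exec "$@"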
I have a problem with the initial launch of docker-compose up: the DB is not initialized yet and Django throws an error.
I tried to use restart_policy, but it didn't help; the web service restarted almost immediately, before the db service was ready, no matter what restart delay I set.
version: "3.9"
services:
  web:
    build: .
    command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    deploy:
      restart_policy:
        condition: on-failure
        delay: 15s
    environment:
      - POSTGRES_NAME=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_HOST=db
    depends_on:
      - db
  db:
    container_name: db_pg
    image: postgres
    hostname: postgres
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_HOST: db
    volumes:
      - ./data/db:/var/lib/postgresql/data
    restart: unless-stopped
  pgadmin:
    image: dpage/pgadmin4
    depends_on:
      - db
    ports:
      - "5555:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin4@pgadmin.org
      PGADMIN_DEFAULT_PASSWORD: admin
    volumes:
      - ./data/pgadmin:/var/lib/pgadmin/data
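One approach worth trying (not part of the original post) is to give the db service a healthcheck and make web wait for it to become healthy, instead of relying on restart_policy. Recent versions of Docker Compose support the long form of depends_on:

services:
  db:
    image: postgres
    healthcheck:
      # pg_isready exits 0 once Postgres accepts connections
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 5s
      timeout: 5s
      retries: 5
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy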
I have built my Postgres and Django application using the following:
version: "3.8"
services:
  django:
    build: .
    container_name: django
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/usr/src/app
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres
    container_name: pgdb
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
When I check Docker Desktop, I see two containers, "django" and "pgdb".
When I check the django container, it says:
django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty.
Originally, on my Windows 10 machine, I stored the secret key in a Windows environment variable. How should I set up docker-compose so that it gets the secret key? In settings.py I have:
SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY')
You would need to create a .env file with the SECRET_KEY.
In django_secrets.env you can store it like this:
SECRET_KEY=my_secret_key
Then in the docker-compose.yml file you can specify the django_secrets.env file:
version: "3.8"
services:
  django:
    build: .
    container_name: django
    command: python manage.py runserver 0.0.0.0:8000
    env_file:
      - ./django_secrets.env
    volumes:
      - .:/usr/src/app
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres
    container_name: pgdb
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
And then you can get the value in the settings.py file like this:
SECRET_KEY = os.environ.get("SECRET_KEY", 'my_default_secret_key')
You can keep the django_secrets.env file in any path; you just need to specify that path in the docker-compose.yml file. You can also name it whatever you like.
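Alternatively (not part of the original answer), since the key already lives in a host environment variable, Compose can substitute it directly, using the variable name from the question:

services:
  django:
    build: .
    environment:
      # Compose fills in ${DJANGO_SECRET_KEY} from the host environment
      - DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY}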
Assume this simple docker-compose file:
version: "3.9"
services:
  redis:
    image: redis:alpine
    ports:
      - 6379:6379
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - redis
How can I add a django-q worker to handle tasks from the web container?
I could probably build the same image with a different command, such as python manage.py qcluster, but I don't think that solution is elegant. Could you suggest an approach for doing this?
Probably the easiest thing you can do is add a new service for the qcluster.
Something like this:
version: "3.9"
services:
  redis:
    image: redis:alpine
    ports:
      - 6379:6379
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - redis
  djangoq:
    build: .
    command: python manage.py qcluster
    volumes:
      - .:/code
    # no ports mapping needed: the qcluster worker doesn't serve HTTP, and
    # binding host port 8000 again would clash with the web service
    depends_on:
      - redis
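Both web and djangoq run the same image, so they share the project's Django-Q settings. A minimal, assumed Q_CLUSTER configuration pointing at the redis service would look roughly like this in settings.py:

Q_CLUSTER = {
    "name": "myproject",  # placeholder name
    "workers": 2,
    # host "redis" resolves to the redis service on the Compose network
    "redis": {"host": "redis", "port": 6379, "db": 0},
}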