I need to run my project in Docker containers and mount an existing database into the Postgres container. I have the following in my docker-compose.yml:
services:
  web:
    build: .
    env_file: .env
    command: bash -c "python manage.py collectstatic --no-input && python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    ports:
      - "8000:8000"
    depends_on:
      - redis
      - postgres
    restart: always
    volumes:
      - static:/static
    expose:
      - 8000
    environment:
      - .env
    links:
      - postgres
      - redis
  redis:
    image: "redis:alpine"
  postgres:
    image: "postgres:10"
    env_file:
      - .env
    volumes:
      - POSTGRES_DATA:/var/lib/postgresql/data
    ports:
      - "5433:5432"
    expose:
      - 5433
volumes:
  POSTGRES_DATA:
  static:
From my .env file:
POSTGRES_NAME=dbname
POSTGRES_USER=dbuser
POSTGRES_PASSWORD=dbpassword
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DATA=/var/lib/postgresql/10/main
But inside my web container I get the following logs:
File "/usr/local/lib/python3.7/site-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: FATAL: database "dbname" does not exist
It means that the database mount failed, but I really cannot find the reason why it happens.
I'm not sure, but you can try this command: docker-compose down -v to remove the volumes along with the containers.
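A minimal sketch of that reset flow, assuming the compose file above and that whatever is currently in the named volume can be discarded:

  # Removing the named volume forces the postgres image to re-initialize its data
  # directory from whichever POSTGRES_* variables it recognizes on the next start.
  docker-compose down -v
  docker-compose up --build -d
  docker-compose logs -f postgres   # confirm in the init logs that the database was created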
Related
I want to run a PostgreSQL database with Docker, so I created a docker-compose file like below:
django:
  restart: always
  build: .
  ports:
    - "8000:8000"
  depends_on:
    - pgdb
  #environment:
  #  - url=https://api.backend.example.com
  #command: "gunicorn config.wsgi:application --bind 0.0.0.0:8000"
  #networks:
  #  - bridge
pgdb:
  image: postgres
  container_name: pgdb
  environment:
    - POSTGRES_DB=hbys_filyos
    - POSTGRES_USER=healmedy
    - POSTGRES_PASSWORD=mhacare1
After building, I run docker run -p 80:8000 surgery4:dev &.
I am getting the following error in the terminal:
django.db.utils.OperationalError: could not translate host name "pgdb" to address: Try again
There is an indentation issue in your docker-compose file: django should be in its proper place.
django:
  restart: always
  build: .
  ports:
    - "8000:8000"
  depends_on:
    - pgdb
  #environment:
  #  - url=https://api.backend.example.com
  #command: "gunicorn config.wsgi:application --bind 0.0.0.0:8000"
  #networks:
  #  - bridge
pgdb:
  image: postgres
  container_name: pgdb
  volumes:
    - pg-data/:/var/lib/postgresql
  environment:
    - POSTGRES_DB=hbys_filyos
    - POSTGRES_USER=healmedy
    - POSTGRES_PASSWORD=mhacare1
Also, you just need to execute docker-compose up -d from the directory where your docker-compose file is located.
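A minimal sketch of that workflow, assuming the file above sits in the current directory (the network name in the last command is a guess based on Compose's usual <project>_default naming):

  # docker-compose puts django and pgdb on the same project network, where the
  # hostname "pgdb" resolves by service name.
  docker-compose up -d --build
  docker-compose logs -f django

  # A container started with a bare `docker run` lands on the default bridge network
  # instead and cannot resolve "pgdb". If you really need docker run, attach it to the
  # compose network explicitly:
  # docker run --network <project>_default -p 80:8000 surgery4:dev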
I hope you are doing well!
I am having a problem. I am trying to run my Django tests inside my container with Docker Compose, using the command sudo docker compose run --rm app sh -c 'python3 manage.py test', but I am receiving these logs
I am not quite sure what is happening here. I am not noticing anything weird in my docker-compose file either. I have tried cleaning the volumes, deleting all the images, and building again just in case I did something wrong along the way, but it didn't fix my problem. I will share the file just in case.
version: "3.9"
services:
app:
build:
context: .
args:
- DEV=true
ports:
- "8000:8000"
volumes:
- ./app:/app
command: >
sh -c "python manage.py wait_for_db &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
environment:
- DB_HOST=db
- DB_NAME=dev
- DB_USER=devuser
- DB_PASS=changeme
container_name: django_container
depends_on:
- db
db:
image: postgres
container_name: postgresql_db
restart: always
volumes:
- dev-db-data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=dev
- POSTGRES_USER=devuser
- POSTGRES_PASSWORD=changeme
volumes:
dev-db-data:
I have a Django project which works with a PostgreSQL db, both in Docker containers.
It works OK, but under high load it sometimes gives the
django.db.utils.OperationalError: could not translate host name "db" to address: Temporary failure in name resolution
error.
Is there a way to tune the Django db connector for more retries or a longer timeout to solve this case?
docker-compose.yml (with some changes):
version: '3.5'
services:
  django:
    build:
      context: ./django
      dockerfile: Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - django_volume:/home/django/django/files/
    ports:
      - 8000:8000
    env_file:
      - env.django
    depends_on:
      - db
  db:
    build:
      context: ./db
      dockerfile: Dockerfile
    volumes:
      - db_volume:/var/lib/postgresql/data/
    env_file:
      - env.db
volumes:
  django_volume:
  db_volume:
The django container's entrypoint.sh contains this loop:
while ! nc -z $DB_HOST $DB_PORT ; do
    sleep 0.1
done
exec "$@"
This is to be sure that the db has completely started before django starts.
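As a minimal sketch of the "more retries / more timeout" idea at the entrypoint level (same DB_HOST/DB_PORT variables as above; the retry count and sleep interval are arbitrary), the loop can be given an explicit budget instead of spinning forever. This only covers startup ordering, not retries inside Django's connector:

  #!/bin/sh
  # Wait for the database, but give up after a bounded number of attempts.
  retries=30
  until nc -z "$DB_HOST" "$DB_PORT"; do
    retries=$((retries - 1))
    if [ "$retries" -le 0 ]; then
      echo "database $DB_HOST:$DB_PORT still unreachable, giving up" >&2
      exit 1
    fi
    sleep 1
  done
  exec "$@"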
My Django app is failing to connect to the psql container with the standard connection refused error. I used django-cookiecutter, which supplies the psql username and password automatically via environment variables; this, I gather, is then passed back into Django via a .env file that holds a DATABASE_URL string.
Error
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
When I set a breakpoint in django settings I can see that the DATABASE_URL seems to be converted appropriately into the standard db dict:
{'NAME': 'hustlestat', 'USER': 'HjhPLEwuVjUIIKEHebPqNG<redacted>', 'PASSWORD': 'I43443fR42wRkUaaQ8mkd<redacted>', 'HOST': 'postgres', 'PORT': 5432, 'ENGINE': 'django.db.backends.postgresql'}
When I exec into the psql container with psql hustlestat -U HjhPLEwuVjUIIKEHebPqN<redacted> I can connect to the db using that username. I'm not 100% sure about the password, as it isn't asking me for one when I try to connect.
Here is the docker-compose file, which is generated automatically by cookiecutter:
version: '3'
volumes:
  local_postgres_data: {}
  local_postgres_data_backups: {}
services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: hustlestat_local_django
    container_name: django
    depends_on:
      - postgres
      - mailhog
    volumes:
      - .:/app:z
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: hustlestat_production_postgres
    container_name: postgres
    volumes:
      - local_postgres_data:/var/lib/postgresql/data:Z
      - local_postgres_data_backups:/backups:z
    env_file:
      - ./.envs/.local/.postgres
  docs:
    image: hustlestat_local_docs
    container_name: docs
    build:
      context: .
      dockerfile: ./compose/local/docs/Dockerfile
    env_file:
      - ./.envs/.local/.django
    volumes:
      - ./docs:/docs:z
      - ./config:/app/config:z
      - ./hustlestat:/app/hustlestat:z
    ports:
      - "7000:7000"
    command: /start-docs
  mailhog:
    image: mailhog/mailhog:v1.0.0
    container_name: mailhog
    ports:
      - "8025:8025"
  redis:
    image: redis:5.0
    container_name: redis
  celeryworker:
    <<: *django
    image: hustlestat_local_celeryworker
    container_name: celeryworker
    depends_on:
      - redis
      - postgres
      - mailhog
    ports: []
    command: /start-celeryworker
  celerybeat:
    <<: *django
    image: hustlestat_local_celerybeat
    container_name: celerybeat
    depends_on:
      - redis
      - postgres
      - mailhog
    ports: []
    command: /start-celerybeat
  flower:
    <<: *django
    image: hustlestat_local_flower
    container_name: flower
    ports:
      - "5555:5555"
    command: /start-flower
  node:
    build:
      context: .
      dockerfile: ./compose/local/node/Dockerfile
    image: hustlestat_local_node
    container_name: node
    depends_on:
      - django
    volumes:
      - .:/app:z
      # http://jdlm.info/articles/2016/03/06/lessons-building-node-app-docker.html
      - /app/node_modules
    command: npm run dev
    ports:
      - "3000:3000"
      # Expose browsersync UI: https://www.browsersync.io/docs/options/#option-ui
      - "3001:3001"
The only oddity I have noticed is that despite django being given a name in the docker-compose file, when I view the running containers it has a random name such as:
hustlestat_django_run_37888ff2c9ca
Not sure if that is relevant.
Thanks for any help!
Okay, I have figured this out. I set a DATABASE_URL environment variable because I was originally getting an error saying it was unset. After googling, I came across a cookiecutter doc that said to set it, but I didn't read it carefully enough to realise that the instruction was intended for non-Docker setups. Mine is Docker.
The reason I was getting that error is that I was exec'ing into the container and running management commands like this:
docker exec -it django bash then python manage.py migrate
The way this project and its environment variables are set up, you can't do that; you have to use this method from outside the container:
docker-compose -f local.yml run --rm django python manage.py migrate
I thought the two methods were interchangeable but they are not. Everything works now.
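For anyone comparing the two invocations, a minimal sketch of the pattern that worked here (createsuperuser is just another management command, used as an example):

  # One-off containers created with `run --rm` pick up the env_file values declared on
  # the django service in local.yml:
  docker-compose -f local.yml run --rm django python manage.py migrate
  docker-compose -f local.yml run --rm django python manage.py createsuperuser

  # The pattern that did not work in this setup, for comparison:
  # docker exec -it django bash   # then: python manage.py migrate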
Hello, I am trying to get a project working, but when I run the following instruction:
docker-compose up
I get the following error:
ERROR: The Compose file is invalid because:
Service makemigrations has neither an image nor a build context specified. At least one must be provided.
Here is the yml file, docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    expose:
      - "5432"
    environment:
      - "POSTGRES_HOST_AUTH_METHOD=trust"
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/myproject/
    ports:
      - "8000:8000"
      - "3000:3000"
    depends_on:
      - db
      - makemigrations
      - migrations
  makemigrations:
    command: python manage.py makemigrations --noinput
    volumes:
      - .:/myproject/
    depends_on:
      - db
  migrations:
    command: python manage.py migrate --noinput
    volumes:
      - .:/myproject/
    depends_on:
      - db
Could you help me, please?