Docker Compose cannot create containers for PostgreSQL and Redis - Django

I have a problem with docker-compose: it cannot create containers for the postgres and redis services. After I run docker-compose up -d I get this error:
ERROR: for dockerizingdjango_postgres_1 Cannot create container for service postgres: b'invalid port specification: "None"'
Creating dockerizingdjango_redis_1 ...
Creating dockerizingdjango_redis_1 ... error
ERROR: for dockerizingdjango_redis_1 Cannot create container for service redis: b'invalid port specification: "None"'
ERROR: for postgres Cannot create container for service postgres: b'invalid port specification: "None"'
ERROR: for redis Cannot create container for service redis: b'invalid port specification: "None"'
ERROR: Encountered errors while bringing up the project.
The docker-compose.yml file looks like this:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
    - redis:redis
  volumes:
    - /usr/src/app
    - /usr/src/app/static
  env_file: .env
  environment:
    DEBUG: 'true'
  command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000
apache:
  restart: always
  build: ./apache/
  ports:
    - "80:80"
  volumes:
    - /www/static
    - /www/media
  volumes_from:
    - web
  links:
    - web:web
postgres:
  restart: always
  image: postgres:latest
  ports:
    - "5432:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data/
redis:
  restart: always
  image: redis:latest
  ports:
    - "6379:6379"
  volumes:
    - redisdata:/data
I'm using this version of docker-compose:
docker-compose version 1.13.0, build 1719ceb
docker-py version: 2.3.0
CPython version: 3.4.3
OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
I'm also using Python 3.5 in the web container, since it serves Django. Can someone explain what is going on here and how to solve it? Thanks.

It seems that you are hitting this issue: https://github.com/docker/compose/issues/4729. The workaround mentioned there is to downgrade docker-compose or upgrade the Python it runs under.

Related

django.db.utils.OperationalError: could not translate host name "db" to address: Name does not resolve. How to solve this issue?

Can somebody help me solve this issue? Why am I getting this error? I have db as the host in .env, and I have links and a network in my docker-compose file too. I can't figure out where the issue is being raised.
Here is my docker-compose file.
version: "3.9"
volumes:
  dbdata:
networks:
  django:
    driver: bridge
services:
  web:
    build:
      context: .
    volumes:
      - .:/home/django
    ports:
      - "8000:8000"
    command: gunicorn Django.wsgi:application --bind 0.0.0.0:8000
    container_name: django_web
    restart: always
    env_file: .env
    depends_on:
      - db
    links:
      - db:db
    networks:
      - django
  db:
    image: postgres
    volumes:
      - dbdata:/var/lib/postgresql
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    ports:
      - 5430:5432
    networks:
      - django
    container_name: django_db
Here is my .env with the database settings:
DB_USER=admin
DB_NAME=test
DB_PASSWORD=admin
DB_HOST=db
DB_PORT=5432
DB_SCHEMA=public
CONN_MAX_AGE=60
The problem happens when you are using:
docker compose up --build
As the Docker Compose documentation describes the --build flag: "Build images before starting containers."
That means that during the build there is no "db" container running, so a "db" host name cannot exist yet.
A suggestion would be to not run any DB transactions during the build phase. You can do any database seeding as part of your app's startup instead.
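As an illustration (a sketch only, reusing the service names and the gunicorn command from the question's compose file), the migration/seeding step can be moved out of the image build and into the container's start command:

```yaml
services:
  web:
    build:
      context: .
    # Run migrations when the container starts, i.e. when the "db" host name
    # is resolvable on the compose network - not at build time, when it isn't.
    command: sh -c "python manage.py migrate && gunicorn Django.wsgi:application --bind 0.0.0.0:8000"
    depends_on:
      - db
```

Note that depends_on only waits for the db container to start, not for Postgres to accept connections, so a retry loop or wait script may still be needed before the migrate step.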

Environment variable ElasticBeanstalk Multicontainer

I'm trying to deploy 2 containers using docker-compose on Elastic Beanstalk, with the new Docker platform running on 64bit Amazon Linux 2 (v3). When I add the env_file directive to the compose file I get this error:
Stop running the command. Error: no Docker image specified in either Dockerfile or Dockerrun.aws.json. Abort deployment
My working compose:
version: '3.9'
services:
  backend:
    image: my_docker_hub_image_backend
    container_name: backend
    restart: unless-stopped
    ports:
      - '8080:5000'
  frontend:
    image: my_docker_hub_image_frontend
    container_name: frontend
    restart: unless-stopped
    ports:
      - '80:5000'
And here is the version after which the error occurs:
version: '3.9'
services:
  backend:
    image: my_docker_hub_image_backend
    env_file: .env
    container_name: backend
    restart: unless-stopped
    ports:
      - '8080:5000'
  frontend:
    image: my_docker_hub_image_frontend
    container_name: frontend
    restart: unless-stopped
    ports:
      - '80:5000'
What am I doing wrong?
The environment properties are added under "Software" → "Environment properties".
Per the documentation (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html#docker-env-cfg.env-variables), you are right.
But the error in eb-engine.log will look like: "Couldn't find env file: /opt/elasticbeanstalk/deployment/.env"
Please try using an absolute path:
env_file:
  - /opt/elasticbeanstalk/deployment/env.list
The problem was that the server could not pull the images from a private Docker Hub repository without authorization.

Docker auto reload not working on remote server

I've set up a Django project that runs in Docker. I want to have my dev environment on a remote server and have it auto reload as I change my files locally. I've set up remote deployment in PyCharm and it works fine. All local changes are reflected on the remote server, and I can also see that the files change inside the Docker container, as I've set up a volume in my docker-compose file. However, auto reloading is not working, and I cannot figure out why.
My docker-compose file:
version: '3.7'
services:
  web:
    container_name: my_project_ws
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    command: gunicorn my_project.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/my_project/staticfiles
      - media_volume:/my_project/mediafiles
      - .:/my_project
    expose:
      - 8000
    env_file:
      - .env
    depends_on:
      - db
  db:
    container_name: my_project_db
    restart: always
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - .env
  nginx:
    restart: always
    build: ./nginx
    volumes:
      - static_volume:/my_project/staticfiles
      - media_volume:/my_project/mediafiles
    ports:
      - 80:80
    depends_on:
      - web
volumes:
  postgres_data:
  static_volume:
  media_volume:
Running Gunicorn with --reload solved the problem.
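In docker-compose terms that is just a change to the web service's command (a sketch based on the compose file above):

```yaml
services:
  web:
    # --reload makes gunicorn watch the source files mounted at /my_project
    # and restart its workers when they change; use it for development only.
    command: gunicorn my_project.wsgi:application --bind 0.0.0.0:8000 --reload
```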

Keep Redis data alive between docker-compose down and up in Docker container

My question is about keeping Redis data alive between docker-compose up and docker-compose down.
In the docker-compose.yaml file below, the db service uses the postgres_data:/var/lib/postgresql/data/ volume to keep its data alive.
I would like to do something similar for the redis service, but I cannot find a working solution. The only way I have managed to achieve this is to store the data in a local directory: ./storage/redis/data:/data. All experiments with an external volume gave no results.
So the question is: is it possible to store Redis data between docker-compose down and docker-compose up in a volume, the same way it is done for the db service?
Sorry if the question is naive…
Thanks
version: '3.8'
services:
  web:
    build: .
    command: python /code/manage.py runserver 0.0.0.0:8000
    env_file:
      - ./series/.env
    volumes:
      - .:/code
    ports:
      - 8000:8000
    depends_on:
      - db
      - redis
  db:
    build:
      context: .
      dockerfile: postgres.dockerfile
    restart: always
    env_file:
      - ./series/.env
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=1q2w3e
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - target: 5432
        published: 5433
        protocol: tcp
        mode: host
  redis:
    image: redis:alpine
    command: redis-server --appendonly yes
    ports:
      - target: 6379
        published: 6380
        protocol: tcp
        mode: host
    volumes:
      - ./storage/redis/data:/data
    restart: always
    environment:
      - REDIS_REPLICATION_MODE=master
volumes:
  postgres_data:
You just need to add a named volume for Redis data next to the postgres_data:
volumes:
  postgres_data:
  redis_data:
Then change host path to the named volume:
redis:
  ...
  volumes:
    - redis_data:/data
If Redis was saving its data with the host path, then the above will work for you too. I mention that because you have to configure Redis to enable persistent storage (see the Redis page on Docker Hub: https://hub.docker.com/_/redis).
Beware: running docker-compose down -v will destroy the named volumes as well.
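Putting the answer together with the persistence note, a minimal sketch of a persistent Redis service would look like this (the question's compose file already passes --appendonly yes, so only the volume changes):

```yaml
services:
  redis:
    image: redis:alpine
    # appendonly enables the AOF log, so Redis actually writes its data to /data
    command: redis-server --appendonly yes
    volumes:
      # a named volume survives docker-compose down (without -v), like postgres_data
      - redis_data:/data

volumes:
  redis_data:
```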

Beginner Docker docker-compose

I'm going through this tutorial and I've successfully got the stack up and running.
What's bugging me is that when I change my code (in the web service) on my host, the changes automatically show up when I reload the page in the browser. I don't understand why that happens. Here's my docker-compose.yml file:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
    - redis:redis
  volumes:
    - ./web:/usr/src/app
    - ./web/static:/usr/src/app/static
  env_file: .env
  environment:
    DEBUG: 'true'
  command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
postgres:
  restart: always
  image: postgres:latest
  ports:
    - "5432:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data/
redis:
  restart: always
  image: redis:latest
  ports:
    - "6379:6379"
  volumes:
    - redisdata:/data
I didn't think gunicorn was doing the reloading, because I believe gunicorn needs the --reload flag to actually do hot reloading.
These lines mean that you are mapping locations on your host to locations within your web container:
volumes:
  - ./web:/usr/src/app
  - ./web/static:/usr/src/app/static
Therefore, any time you change code in the ./web directory, it is updated within the container. If you don't want that to happen, you need to copy those directories into the image when you build it, by specifying that in the Dockerfile for that container.
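For instance (a sketch only, keeping the service layout from the question), dropping the host-side paths from the volume mappings stops host edits from reaching the container; the code then only enters the image via the Dockerfile at build time:

```yaml
web:
  restart: always
  # the image is built once from ./web; the code must be COPYed in its Dockerfile
  build: ./web
  expose:
    - "8000"
  # no "./web:/usr/src/app" bind mount here, so editing files on the host
  # no longer changes what the running container sees
  volumes:
    - /usr/src/app/static
```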