Django Docker/Kubernetes Postgres data not appearing

I just tried switching from docker-compose to docker stacks/kubernetes. In compose I was able to specify where the postgres data volume was and the data persisted nicely.
volumes:
  - ./postgres-data:/var/lib/postgresql/data
I tried doing the same thing with the stack file. I can connect to the pod and use psql to see the schema, but none of the data entered under docker-compose is there.
Any ideas why this might be?
Here's the stack.yml
version: '3.3'
services:
  django:
    image: image
    build:
      context: .
      dockerfile: docker/Dockerfile
    deploy:
      replicas: 5
    environment:
      - DJANGO_SETTINGS_MODULE=config.settings.local
      - SECRET_KEY=password
      - NAME=postgres
      - USER=postgres
      - HOST=db
      - PASSWORD=password
      - PORT=5432
    volumes:
      - .:/application
    command: ["gunicorn", "--bind", "0.0.0.0:8000", "config.wsgi"]
    ports:
      - "8000:8000"
    links:
      - db
  db:
    image: mdillon/postgis:9.6-alpine
    volumes:
      - ./postgres-data:/var/lib/postgresql/data

You didn't mention how your cluster is provisioned or where it is running, so I will assume we're talking about local tests here. If so, you probably have local docker/docker-compose and minikube installed.
If that is the case, keep in mind that minikube runs in its own VM, so it will not be affected by changes you make on your host (e.g. with docker), since it has its own filesystem inside the VM.
Hint: you can run docker against minikube's Docker daemon if you first run eval $(minikube docker-env)
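For example, a minimal session (the image tag and Dockerfile path are the ones from the stack file above):
# Point the local docker CLI at minikube's Docker daemon
eval $(minikube docker-env)
# Images built now land inside the minikube VM, where the cluster can see them
docker build -t image -f docker/Dockerfile .
# Revert to the host daemon when done
eval $(minikube docker-env --unset)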

For docker stacks, run docker service inspect; it should show the mount source of the Postgres container. Note that it takes a service name (typically <StackName>_db), not the stack name itself:
docker service inspect --format='{{range .Spec.TaskTemplate.ContainerSpec.Mounts}} {{.Source}}{{end}}' <StackName>_db
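If the task's container is already running, you can also check its mounts directly; a hedged example, assuming the stack was deployed as mystack (find the exact container name with docker ps):
docker inspect -f '{{ json .Mounts }}' mystack_db.1.<task-id>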

Fixed in the latest Docker Edge update.


Docker-compose - Cannot connect to Postgres

It looks like a common issue: can't connect to Postgres from a Django app in Docker Compose.
I tried several solutions from the web, but I'm probably missing something I cannot see.
The error I get is:
django.db.utils.OperationalError: could not translate host name "db" to address: Try again
where "db" is the name of the docker-compose service, which must be set in the .env.
My docker-compose.yml:
version: '3.3'
services:
  web:
    build: .
    container_name: drf_app
    volumes:
      - ./src:/drf
    links:
      - db:db
    ports:
      - 9090:8080
    env_file:
      - /.env
    depends_on:
      - db
  db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypass
      - POSTGRES_DB=mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - 5432:5432
volumes:
  postgres_data:
My .env:
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=mydb
SQL_USER=myuser
SQL_PASSWORD=mypass
SQL_HOST=db  # this one should match the service name
SQL_PORT=5432
As far as I know, web and db should automatically see each other on the same network, but this doesn't happen.
Inspecting the IP addresses with ifconfig in each container: the Django app has 172.17.0.2 and the db has 172.19.0.2. They are not able to ping each other.
The result of the docker ps command:
400879d47887 postgres:13-alpine "docker-entrypoint.s…" 38 minutes ago Up 38 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp backend_db_1
I really cannot figure out the issue, so am I missing something?
I'm writing this to save anyone in the future from the same issue.
After countless tries, I started thinking that nothing was wrong from the pure Docker perspective: I was right.
SOLUTION: My only suspect was the execution inside a virtual machine, and indeed, running the same Docker image on the host worked like a charm!
The networking issue was related to the VM (VirtualBox, Ubuntu 20.04).
I do not know if there is a way to make docker-compose work inside a VM, so any suggestion is appreciated.
You said in a comment:
The command I run is the following: docker run -it --entrypoint /bin/sh backend_web
Docker Compose creates several Docker resources, including a default network. If you separately docker run a container it doesn't see any of these resources. The docker run container is on the "default bridge network" and can't use present-day Docker networking capabilities, but the docker-compose container is on a separate "user-defined bridge network" named backend_default. That's why you're seeing a couple of the symptoms you do: the two networks have separate IPv4 CIDR ranges, and Docker's container DNS resolution only happens for the current network.
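If you really do need a one-off docker run container that can reach the Compose services, you can attach it to the Compose network explicitly; a sketch, assuming the project directory is named backend so the network is backend_default:
docker run -it --network backend_default --entrypoint /bin/sh backend_web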
There's no reason to start a container with an interactive shell and then start your application within that (any more than you would normally run python and then manually call main() from its REPL). Just start the entire application:
docker-compose up -d
If you do happen to need an interactive shell to debug your container setup or to run some manual tasks like database migrations, you can use docker-compose run for this. This honors most, but not all, of the settings in the docker-compose.yml file. In particular you can't docker-compose run an interactive shell and start your application from it, since it ignores ports:.
# Typical use: run database migrations
docker-compose run web \
  ./manage.py migrate

# For debugging: run an interactive shell
docker-compose run web bash

Docker creates a new volume every time I do docker compose up

I have a docker-compose file that spins up several services. I just got an error stating that I ran out of disk space, so I typed docker system df and saw I have 21 volumes. If I have 3 Docker containers, each with a volume attached, why is it showing a total count of 21 volumes?
I am using AWS EC2. This is my compose file; is there anything wrong with the way I have the volumes set? The Postgres data is persisted when I shut down and restart. I am just confused about the volume size and the message that I cannot rebuild due to no space on a T2 Large instance.
version: "3"
services:
nftapi:
env_file:
- .env
build:
context: .
ports:
- '443:5000'
depends_on:
- postgres
volumes:
- .:/app
- /app/node_modules
networks:
- postgres
postgres:
container_name: postgres
image: postgres:latest
ports:
- "5432:5432"
volumes:
- /data/postgres:/data/postgres
env_file:
- docker.env
networks:
- postgres
pgadmin:
links:
- postgres:postgres
container_name: pgadmin
image: dpage/pgadmin4
ports:
- "8080:80"
volumes:
- /data/pgadmin:/root/.pgadmin
env_file:
- docker.env
networks:
- postgres
networks:
postgres:
driver: bridge
A Docker image's Dockerfile can contain a VOLUME directive. This is an instruction to Docker that tells it that some container directory contains data that needs to be persisted, and Docker should always ensure a volume of some sort is mounted on that directory.
More specifically, the postgres image declares
VOLUME /var/lib/postgresql/data
Your Compose setup doesn't mount anything on that specific directory. Because of this, Docker creates an anonymous volume and mounts it there for you. This isn't specific to the postgres image and other containers in your stack may have similar local data directories. Those anonymous volumes are what you're seeing in the docker system df output (docker volume ls will also show them).
In a later question you also note that Compose has trouble finding these anonymous volumes, and it's better not to rely on this functionality. Make sure you're mounting a host directory or named volume on these data directories via Compose volumes:.
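A sketch of what that could look like for the postgres service here (the volume name pgdata is arbitrary; the container path is the one the image declares):
services:
  postgres:
    image: postgres:latest
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata: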
If your main containers are up and running, run
docker volume prune
and it should remove any volumes that are detached or unused by any container.
I make it a habit to periodically run the following on my AWS instance:
docker system prune
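Note that by default docker system prune does not touch volumes; add the --volumes flag if you want unused volumes removed as well:
docker system prune --volumes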

Deploying an ECS application to AWS using docker compose

I am following the AWS tutorial on deploying an ECS application using docker compose.
When I run docker compose up, I only receive the message docker UpdateInProgress User Initiated, but nothing else happens:
[+] Running 0/0
- docker UpdateInProgress User Initiated 0.0s
Previously, this worked fine and all the ECS resources (cluster, task definitions, services, load balancer) had been created.
For some reason it no longer works, although I have not changed my docker-compose.yml file.
docker-compose.yml:
version: '3'
services:
  postgres:
    image: ${AWS_DOCKER_REGISTRY}/postgres
    networks:
      - my-network
    ports:
      - "5432:5432"
    volumes:
      - postgres:/data/postgres
  server:
    image: ${AWS_DOCKER_REGISTRY}/server
    networks:
      - my-network
    env_file:
      - .env
    ports:
      - "${PORT}:${PORT}"
    depends_on:
      - postgres
    entrypoint: "/server/run.sh"
  pgadmin:
    image: ${AWS_DOCKER_REGISTRY}/pgadmin
    networks:
      - my-network
    depends_on:
      - postgres
    volumes:
      - pgadmin:/root/.pgadmin
    ports:
      - "${PGADMIN_PORT:-5050}:${PGADMIN_PORT:-5050}"
networks:
  my-network:
    #driver: bridge
volumes:
  postgres:
  pgadmin:
I also switched to the correct Docker context before (docker context use my-aws-context).
And I have updated to the latest version of Docker Desktop for Windows and AWS CLI.
Has anyone had a similar problem?
From the message it appears that you are trying to compose up a stack that already exists (on AWS), so Compose is trying to update the existing CloudFormation stack. Can you check if this is the case? If so, you have a couple of options:
1) delete the CFN stack (either in AWS or with docker compose down), or
2) launch docker compose up with the flag --project-name string (where string is an arbitrary name of your choice).
By default Compose will use the directory name as the project name, so if you compose up twice it will try to work on the same stack.
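For example (the project name fresh-stack is arbitrary):
# Option 1: tear down the existing CloudFormation stack
docker compose down
# Option 2: deploy the same file under a fresh project name
docker compose --project-name fresh-stack up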

docker-compose returns error when trying to start on AWS

I want to deploy my server to an AWS EC2 instance. When I enter 'sudo docker-compose up' in the ssh console, I get the following error:
ERROR: for nginx Cannot start service nginx: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\"/home/ubuntu/site/nginx/default.conf\\" to rootfs \\"/var/lib/docker/overlay2/b24f64910c6ab7727a4cb08afac0d034bb759baa4bfd605466ca760359f411c2/merged\\" at \\"/var/lib/docker/overlay2/b24f64910c6ab7727a4cb08afac0d034bb759baa4bfd605466ca760359f411c2/merged/etc/nginx/conf.d/default.conf\\" caused \\"not a directory\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
This is my docker-compose.yml file:
version: '2'
networks:
  SRVR:
services:
  nginx:
    image: nginx:stable-alpine
    container_name: SRVR_nginx
    ports:
      - "8080:80"
    volumes:
      - ./code:/code
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ./nginx/logs:/var/log/nginx
    depends_on:
      - php
    networks:
      - SRVR
  php:
    build: ./php
    container_name: SRVR_php
    volumes:
      - ./code:/code
    ports:
      - "9000:9000"
    networks:
      - SRVR
The same docker-compose.yml works fine on my local computer, which runs Ubuntu. The EC2 instance also runs Ubuntu.
Your problem is with ./nginx/default.conf: Docker treats that host path as a folder, while /etc/nginx/conf.d/default.conf inside the container is a file, so the mount fails.
I was too hasty to ask. Here's what's going on: the default.conf file has to exist on the host before the mount. If it doesn't, bringing the nginx service up creates a folder called default.conf in its place. Once I manually copied my original default.conf file to the appropriate location, everything worked fine.
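A minimal sketch of the fix on the EC2 host, assuming the config lives next to the compose file (adjust the source path to wherever your real default.conf is):
# If docker already created a directory named default.conf, remove it first
rm -rf nginx/default.conf
# Recreate it as a file before starting the stack
cp /path/to/your/default.conf nginx/default.conf
sudo docker-compose up -d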

How to mount docker volume which is running in EC2 instance to EBS volume? What are the steps for that?

Currently an EC2 instance is running, and inside it Docker containers are running.
I have 2 containers running in the EC2 instance:
1 - Application container.
2 - Database (CouchDB) container.
I need to store the database's data on EBS volumes.
I have a docker-compose file and I bring the containers up with the docker-compose up command.
version: '2.1'
services:
  web:
    build: .
    ports:
      - "4200:4200"
      - "7020:7020"
    depends_on:
      couchdb:
        condition: "service_healthy"
    volumes:
      - ./app:/usr/src/app/app
  couchdb:
    image: "couchdb:1.7.1"
    ports:
      - "5984:5984"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5984/"]
      interval: 10s
      timeout: 90s
      retries: 9
You need to bind an EC2 instance directory into the CouchDB container.
Where to Store Data
Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the couchdb images to familiarize themselves with the options available.
Create a data directory on a suitable volume on your host system, e.g. /home/ec2-user/data. Start your couchdb container like this:
$ docker run --name mycouchdb -v /home/ec2-user/data:/opt/couchdb/data -d couchdb:tag
The -v /home/ec2-user/data:/opt/couchdb/data part of the command mounts the /home/ec2-user/data directory from the underlying host system as /opt/couchdb/data inside the container, where CouchDB by default will write its data files.
CouchDB Store Data
Update:
In the case of docker-compose:
couchdb:
  image: "couchdb:1.7.1"
  volumes:
    - /home/ec2-user/data:/opt/couchdb/data
  ports:
    - "5984:5984"