How to use Docker Volumes? - django

I am completely new to Docker. Right now I am using an open-source tool that ships as a Docker application. I need to make some changes to the existing application for my requirements, and those changes should be reflected in the running application. I did a lot of searching and found that this can be done with the help of Docker volumes, but I am unable to follow any of the articles on the web or the documentation. Any suggestions will be appreciated.
docker-compose.yml:
version: "3.3"
services:
cvat_db:
container_name: cvat_db
image: postgres:10-alpine
networks:
default:
aliases:
- db
restart: always
environment:
POSTGRES_USER: root
POSTGRES_DB: cvat
POSTGRES_HOST_AUTH_METHOD: trust
volumes:
- cvat_db:/var/lib/postgresql/data
cvat_redis:
container_name: cvat_redis
image: redis:4.0-alpine
networks:
default:
aliases:
- redis
restart: always
cvat:
container_name: cvat
image: cvat/server
restart: always
depends_on:
- cvat_redis
- cvat_db
build:
context: .
args:
http_proxy:
https_proxy:
no_proxy: nuclio,${no_proxy}
socks_proxy:
USER: "django"
DJANGO_CONFIGURATION: "production"
TZ: "Etc/UTC"
CLAM_AV: "no"
environment:
DJANGO_MODWSGI_EXTRA_ARGS: ""
ALLOWED_HOSTS: '*'
CVAT_REDIS_HOST: "cvat_redis"
CVAT_POSTGRES_HOST: "cvat_db"
volumes:
- cvat_data:/home/django/data
- cvat_keys:/home/django/keys
- cvat_logs:/home/django/logs
- cvat_models:/home/django/models
cvat_ui:
container_name: cvat_ui
image: cvat/ui
restart: always
build:
context: .
args:
http_proxy:
https_proxy:
no_proxy:
socks_proxy:
dockerfile: Dockerfile.ui
networks:
default:
aliases:
- ui
depends_on:
- cvat
cvat_proxy:
container_name: cvat_proxy
image: nginx:stable-alpine
restart: always
depends_on:
- cvat
- cvat_ui
environment:
CVAT_HOST: localhost
ports:
- "8080:80"
volumes:
- ./cvat_proxy/nginx.conf:/etc/nginx/nginx.conf:ro
- ./cvat_proxy/conf.d/cvat.conf.template:/etc/nginx/conf.d/cvat.conf.template:ro
command: /bin/sh -c "envsubst '$$CVAT_HOST' < /etc/nginx/conf.d/cvat.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
networks:
default:
ipam:
driver: default
config:
- subnet: 172.28.0.0/24
volumes:
cvat_db:
cvat_data:
cvat_keys:
cvat_logs:
cvat_models:

Docker volumes are mostly used as a way to save data outside of your container. If you mount a volume and store data in it, the data will not be erased when the container is destroyed. To mount a volume, add -v <directory on your machine>:<directory in your container> to your docker run command. That may fulfill your requirements, but it most likely won't be enough.
If your assignment requires you to change, for instance, the behaviour of the application, then you have to rebuild the Docker image and use the new image in your docker-compose.yml.
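For example, with the compose file above you could either bind-mount your edited sources into the container for quick experiments, or rebuild the image after changing the code. This is only a sketch; the host path and the path inside the image are placeholders and may differ for your setup:
# Option 1: bind-mount a host directory with your changes into the container
docker run -v "$(pwd)/my_changes:/home/django/my_changes" cvat/server
# Option 2: rebuild the image and recreate the container after editing the source
docker-compose build cvat
docker-compose up -d cvat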

Related

AWS ElasticBeanstalk failed to deploy Django/Postgres app

I'm having a hard time deploying my app, built with Django, Postgres, Django Q, Redis and Elasticsearch, on AWS Elastic Beanstalk using docker-compose.yml.
I've used the EB CLI (eb init, eb create) to do it, and it shows the environment is successfully launched, but I still have the following problems.
On the EC2 instance there are no postgres, django-q or es containers built as specified in the docker-compose file below. Only the django, redis and nginx containers are found on the EC2 instance.
The environment variables that I specified in the docker-compose.yml file aren't being configured in the django container on EC2, so I can't run Django there.
I'm pretty lost and not sure where to even start fixing these problems. Any insight will be very much appreciated.
version: '3'
services:
django:
build:
context: .
dockerfile: docker/Dockerfile
command: gunicorn --bind 0.0.0.0:5000 etherscan_project.wsgi:application
env_file: .env
volumes:
- $PWD:/srv/app/:delegated
depends_on:
- redis
- db
- es
django-q:
build:
context: .
dockerfile: docker/Dockerfile
command: >
sh -c "python manage.py makemigrations &&
python manage.py migrate &&
python manage.py qcluster"
env_file: .env
volumes:
- $PWD:/srv/app/:delegated
depends_on:
- redis
- db
- django
- es
db:
image: postgres:latest
expose:
- 5432
env_file: .env
volumes:
- ./docker/volumes/postgres:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U $POSTGRES_DB"]
interval: 10s
timeout: 5s
retries: 5
redis:
image: redis:latest
expose:
- 6379
ports:
- 6379:6379
volumes:
- ./docker/volumes/redis:/data
nginx:
container_name: nginx
image: nginx:1.13
ports:
- 80:80
depends_on:
- db
- django
- redis
volumes:
- ./docker/nginx:/etc/nginx/conf.d
- $PWD:/srv/app/:delegated
es:
image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
ports:
- 9200:9200
- 9300:9300
expose:
- 9200
- 9300
environment:
- discovery.type=single-node
- xpack.security.enabled=false
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- ./docker/volumes/es:/usr/share/elasticsearch/data
volumes:
app-files:
driver_opts:
type: nfs
device: $PWD
o: bind
Can you confirm that your environment variables are being used correctly? A common mistake with EB and docker-compose is assuming that your .env file works the same way in EB as it does with docker-compose, when it does not. I have made that mistake before. Check out the docs: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html#docker-env-cfg.env-variables
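If the variables from your .env file are not reaching the containers on EB, one workaround (a sketch; the variable names below are placeholders) is to set them as environment properties of the EB environment, for example with the EB CLI:
# set environment properties on the running EB environment
eb setenv POSTGRES_DB=mydb POSTGRES_USER=myuser POSTGRES_PASSWORD=changeme
# the same properties can also be edited in the EB console under
# Configuration > Software > Environment properties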

Undefined volume when deploying docker container to ECS

I'm following this guide and currently trying to run my compose app using docker ecs compose up but I'm getting this error:
% docker ecs compose up
service "feature-test-web" refers to undefined volume : invalid compose project
Here's my docker-compose.yml:
version: '3.7'
x-web:
&web
build: ./web
volumes:
- ./web:/app
- /app/node_modules
x-api:
&api
build: ./api
volumes:
- ./api:/app
env_file:
- ./api/.env
depends_on:
- postgres
- redis
links:
- mailcatcher
services:
web:
<< : *web
environment:
- API_HOST=http://localhost:3000
ports:
- "1234:1234"
depends_on:
- api
api:
<< : *api
ports:
- "3000:3000"
stdin_open: true
tty: true
postgres:
image: postgres:11.2-alpine
volumes:
- ./pgdata:/var/lib/postgresql/data
environment:
- POSTGRES_USER=portal
- POSTGRES_PASS=portal
ports:
- 8000:5432
restart: on-failure
healthcheck:
test: "exit 0"
redis:
image: redis:5.0.4-alpine
ports:
- '6379:6379'
sidekiq:
build: ./api
env_file:
- ./api/.env
depends_on:
- postgres
- redis
mailcatcher:
image: schickling/mailcatcher
ports:
- '1080:1080'
feature-test-api:
<< : *api
depends_on:
- selenium
- feature-test-web
feature-test-web:
<< : *web
environment:
- API_HOST=http://feature-test-api:3210
selenium:
image: selenium/standalone-chrome-debug:3.141.59-neon
volumes:
- /dev/shm:/dev/shm
ports:
- 5901:5900
What am I missing? Running docker compose up by itself works, and I'm able to go to localhost:1234 to see the app running. I'm trying to deploy this to AWS, but it's been very difficult; if I'm doing this wrong, any pointers to the right way would be much appreciated as well.
As mentioned in the comments section, the volume mounts won't work on ECS because the cluster won't have a local copy of your code.
So, as a first step, remove the entire volumes section.
Second, you'll need to build a Docker image out of your code, push it to a Docker registry of your liking, and then reference that image in your docker-compose file as follows:
x-api:
&api
image: 12345.abcd.your-region.amazonaws.com/your-docker-repository
env_file:
- ./api/.env
depends_on:
- postgres
- redis
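The build-and-push step could look roughly like this, assuming ECR as the registry; the account ID, region and repository name below are placeholders:
# build the api image from its Dockerfile
docker build -t your-docker-repository ./api
# log in to ECR and push the image (replace the account ID and region)
aws ecr get-login-password --region your-region | docker login --username AWS --password-stdin 123456789012.dkr.ecr.your-region.amazonaws.com
docker tag your-docker-repository:latest 123456789012.dkr.ecr.your-region.amazonaws.com/your-docker-repository:latest
docker push 123456789012.dkr.ecr.your-region.amazonaws.com/your-docker-repository:latest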
My answer here could give you more insight into how I deploy to ECS.

Docker: How do I access a service that's in another container from the frontend?

I'm running a RESTful Django project on port 8000 and a React project on port 3000.
For development I've had all my URLs on the frontend as
href='localhost:8000/api/name' or href='localhost:8000/api/address'.
Now that I'm going into production, I want my hrefs to be href='mysite.com/api/name' or href='mysite.com/api/address'. I can't figure out how to do this. How do I access my RESTful data, which is in another container?
I found this article but don't think it's correct for production.
docker-compose.yml
version: "3.2"
services:
backend:
build: ./backend
volumes:
- ./backend:/app/backend
ports:
- "8000:8000"
stdin_open: true
tty: true
command: python3 manage.py runserver 0.0.0.0:8000
depends_on:
- db
- cache
links:
- db
frontend:
build: ./frontend
volumes:
- ./frontend:/app
#One-way volume to use node_modules from inside image
- /app/node_modules
ports:
- "3000:3000"
environment:
- NODE_ENV=development
- CHOKIDAR_USEPOLLING=true
depends_on:
- backend
tty: true
command: npm start
db:
image: mysql:latest
command: --default-authentication-plugin=mysql_native_password
volumes:
- "./mysql:/var/lib/mysql"
- "./.data/conf:/etc/mysql/conf.d"
ports:
- "3306:3306"
restart: always
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: temp
MYSQL_USER: root
MYSQL_PASSWORD: root
volumes:
mysql: {}
You can pass an API_URL ("mysite.com/api/" for prod and "localhost:8000/api/" for dev) as an environment variable to React and use it to build the URLs.
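A minimal sketch of what that could look like in the compose file above; the variable name REACT_APP_API_URL is only an example (Create React App picks up variables prefixed with REACT_APP_):
frontend:
  build: ./frontend
  environment:
    # available in the React code as process.env.REACT_APP_API_URL
    - REACT_APP_API_URL=http://mysite.com/api/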

Data from postgres mysteriously getting deleted

I am using cookiecutter-django (https://github.com/pydanny/cookiecutter-django) for one of my live projects, and over the last few days I have been observing the database data randomly getting deleted in parts. I checked the logs but found nothing. I don't know how to approach the issue to resolve it and will really appreciate any guidance. I am using a Docker setup with Traefik, Postgres, Redis, Celery and Django. The code is deployed on a Digital Ocean bucket, and only I have access to the bucket, which rules out the possibility of another person doing this.
version: '3'
volumes:
production_postgres_data: {}
production_postgres_data_backups: {}
production_traefik: {}
services:
django: &django
build:
context: .
dockerfile: ./compose/production/django/Dockerfile
image: fancy_tsunami_production_django
depends_on:
- postgres
- redis
env_file:
- ./.envs/.production/.django
- ./.envs/.production/.postgres
command: /start
postgres:
build:
context: .
dockerfile: ./compose/production/postgres/Dockerfile
image: fancy_tsunami_production_postgres
volumes:
- production_postgres_data:/var/lib/postgresql/data
- production_postgres_data_backups:/backups
env_file:
- ./.envs/.production/.postgres
traefik:
build:
context: .
dockerfile: ./compose/production/traefik/Dockerfile
image: fancy_tsunami_production_traefik
depends_on:
- django
volumes:
- production_traefik:/etc/traefik/acme
ports:
- "0.0.0.0:80:80"
- "0.0.0.0:443:443"
redis:
image: redis:3.2
celeryworker:
<<: *django
image: fancy_tsunami_production_celeryworker
command: /start-celeryworker
celerybeat:
<<: *django
image: fancy_tsunami_production_celerybeat
command: /start-celerybeat
flower:
<<: *django
image: fancy_tsunami_production_flower
ports:
- "5555:5555"
command: /start-flower

In my Dockerized Django application, my Celery task does not update the SQLite database (in another container). What should I do?

This is my docker-compose.yml.
version: "3"
services:
nginx:
image: nginx:latest
container_name: nginx_airport
ports:
- "8080:8080"
volumes:
- ./:/app
- ./docker_nginx:/etc/nginx/conf.d
- ./timezone:/etc/timezone
depends_on:
- web
rabbit:
image: rabbitmq:latest
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=asdasdasd
ports:
- "5672:5672"
- "15672:15672"
web:
build:
context: .
dockerfile: Dockerfile
command: /app/start_web.sh
container_name: django_airport
volumes:
- ./:/app
- ./timezone:/etc/timezone
expose:
- "8080"
depends_on:
- celerybeat
celerybeat:
build:
context: .
dockerfile: Dockerfile
command: /app/start_celerybeat.sh
volumes:
- ./:/app
- ./timezone:/etc/timezone
depends_on:
- celeryd
celeryd:
build:
context: .
dockerfile: Dockerfile
command: /app/start_celeryd.sh
volumes:
- ./:/app
- ./timezone:/etc/timezone
depends_on:
- rabbit
Normally, I have a task that executes every minute and updates the database located in "web". Everything works fine in the development environment. However, "celerybeat" and "celeryd" don't update my database when run via docker-compose. What went wrong?