I am trying to deploy my Django application to a DigitalOcean droplet, using the least expensive one, which gives me 512 MB of RAM, 1 CPU, and 10 GB of SSD. After setting everything up, I run docker-compose up --build to check that everything works, and all the services launch. My Compose setup runs a Postgres instance, a Redis instance, a Celery worker, and the Django application I wrote. In case it matters, here is the docker-compose file:
version: "3.9"
services:
  db:
    container_name: my_table_postgres
    image: postgres
    ports:
      - 5432/tcp
    volumes:
      - my_table_postgres_db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=my_table_postgres
      - POSTGRES_USER=dev
      - POSTGRES_PASSWORD=blablabla
  redis:
    container_name: redis
    image: redis
    ports:
      - 6379:6379/tcp
    environment:
      - REDIS_HOST=redis-oauth-user-service
    volumes:
      - redis_data:/var/lib/redis/data/
  my_table:
    container_name: my_table
    build: .
    command: python manage.py runserver 0.0.0.0:5000
    volumes:
      - .:/api
    ports:
      - "5000:5000"
    depends_on:
      - db
      - redis
  celery:
    image: celery
    container_name: celery
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    command: ['python', '-m', 'celery', '-A', 'mytable', 'worker', '-l', 'INFO']
    volumes:
      - .:/api
    depends_on:
      - redis
      - my_table
    links:
      - redis
volumes:
  my_table_postgres_db:
  redis_data:
Everything starts up quite slowly, but when I make a request from something like Postman, the docker compose terminal reports that my_table exited with code 247. Can you please tell me why? Do I need to change some setting, or is the droplet's RAM too low?
Thank you a lot
It was just a sizing problem: the droplet had too little RAM to support everything the stack requires, and the container was most likely killed when it ran out of memory. After resizing the droplet, everything worked fine.
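If resizing is not an option right away, per-service memory caps can at least keep one container from starving the others. A minimal sketch against the compose file above; the limit value is an illustrative assumption, not a recommendation:

```yaml
services:
  my_table:
    build: .
    command: python manage.py runserver 0.0.0.0:5000
    deploy:
      resources:
        limits:
          # Illustrative cap; tune to your workload. Honored by
          # `docker compose` (v2); the older docker-compose v1 needs
          # the --compatibility flag for this to take effect.
          memory: 256m
```

Running docker stats alongside the services shows per-container memory usage, which helps decide whether the stack can fit in 512 MB at all.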
I have a Dockerized Django project. Uploaded images are stored on the server, but whenever I push changes and run "docker-compose up --build", every image I uploaded is lost: the paths still point to the images, but I can no longer view them. I don't know what happened. Does anyone have an idea how to fix this? Is there anything I need to add to the docker-compose file?
Below is my docker compose file.
version: '3'
volumes:
  web-django:
  web-static:
services:
  redis:
    image: "redis:alpine"
  web:
    build: .
    image: project_web
    command: python manage.py runserver 0.0.0.0:8000
    restart: always
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/mediafiles
    ports:
      - "8000:8000"
    depends_on:
      - redis
  celery:
    image: project_web
    command: celery -A project_name worker -l info
    restart: always
    depends_on:
      - web
      - redis
  celery-beat:
    image: citc_web
    command: celery -A project_name beat -l info
    restart: always
    depends_on:
      - web
      - redis
What am I missing in the file above, and how can I resolve this?
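One common approach for uploads is to bind-mount a host directory rather than rely on named volumes, so the files live outside any image or container lifecycle and survive rebuilds. This is a sketch, not a diagnosis of this exact setup: it assumes MEDIA_ROOT resolves to /usr/src/app/mediafiles, as the volume mapping above suggests:

```yaml
services:
  web:
    build: .
    image: project_web
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      # Host directory for uploads: survives `docker-compose up --build`
      # and even `docker-compose down -v`, unlike named volumes.
      - ./mediafiles:/usr/src/app/mediafiles
```

It is also worth checking that nothing in the deployment runs docker-compose down -v (which deletes named volumes) between rebuilds.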
Can somebody help me solve this issue? Why am I getting this error? I have db as the host in my .env, and links and a network in the docker-compose file too, but I cannot figure out where the issue is coming from.
Here is my docker-compose file.
version: "3.9"
volumes:
  dbdata:
networks:
  django:
    driver: bridge
services:
  web:
    build:
      context: .
    volumes:
      - .:/home/django
    ports:
      - "8000:8000"
    command: gunicorn Django.wsgi:application --bind 0.0.0.0:8000
    container_name: django_web
    restart: always
    env_file: .env
    depends_on:
      - db
    links:
      - db:db
    networks:
      - django
  db:
    image: postgres
    volumes:
      - dbdata:/var/lib/postgresql
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    ports:
      - 5430:5432
    networks:
      - django
    container_name: django_db
Here is my .env with the database settings:
DB_USER=admin
DB_NAME=test
DB_PASSWORD=admin
DB_HOST=db
DB_PORT=5432
DB_SCHEMA=public
CONN_MAX_AGE=60
The problem is that when you are using:
docker compose up --build
As the Docker Compose documentation describes, the --build flag means "Build images before starting containers."
That means that at build time there is no "db" container running, so the "db" hostname cannot be resolved yet.
A suggestion would be to not run any database transactions during the build phase.
You can perform any database "rendering" as a seeding step when your app starts instead.
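For example, migrations (or any other seeding) can be moved out of the Dockerfile and into the container's start command, where "db" is reachable. A sketch against the service names and gunicorn invocation from the question; the exact command is an assumption about what the Dockerfile currently does at build time:

```yaml
services:
  web:
    build:
      context: .
    env_file: .env
    depends_on:
      - db
    # Run migrations at start-up, when the "db" hostname resolves,
    # instead of in a RUN step of the Dockerfile.
    command: sh -c "python manage.py migrate && gunicorn Django.wsgi:application --bind 0.0.0.0:8000"
```

Note that depends_on only orders container start-up; if Postgres is slow to accept connections, a wait-for loop or connection retry is still needed before migrate.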
When I run docker-compose build on the following docker-compose file, which is for a Django server with Celery, it builds an identical image four times (for the web service, celeryworker, celerybeat, and flower).
The entire process is repeated four times
I thought the point of inheriting from other service descriptions in docker-compose was so that you could reuse the same image for different services?
How can I reuse the web image in the other services, to reduce my build time by 75%?
version: '3'
services:
  web: &django
    image: myorganisation/myapp
    container_name: myapp_web
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
      # This is a multistage build installing private dependencies, hence this arg is needed
      args:
        PERSONAL_ACCESS_TOKEN_GITHUB: ${PERSONAL_ACCESS_TOKEN_GITHUB}
    command: /start
    volumes:
      - .:/app
    ports:
      - 8000:8000
    depends_on:
      - db
      - redis
    environment:
      - DJANGO_SETTINGS_MODULE=backend.settings.local
      - DATABASE_URL=postgres://postgres_user:postgres_password@db/postgres_db
      - REDIS_URL=redis://:redis_password@redis:6379
      - CELERY_FLOWER_USER=flower_user
      - CELERY_FLOWER_PASSWORD=flower_password
    env_file:
      - ./.env
  celeryworker:
    <<: *django
    container_name: myapp_celeryworker
    depends_on:
      - redis
      - db
    ports: []
    command: /start-celeryworker
  celerybeat:
    <<: *django
    container_name: myapp_celerybeat
    depends_on:
      - redis
      - db
    ports: []
    command: /start-celerybeat
  flower:
    <<: *django
    container_name: myapp_flower
    ports:
      - 5555:5555
    command: /start-flower
volumes:
  postgres_data:
    driver: local
  pgadmin_data:
    driver: local
Because you specify the build section in all the services (via the django anchor), the image gets built once per service.
If you want to use the same image for all services but build it only once, specify the build section in only one service (the one started first, i.e. with no dependencies on the others), and in the other services specify just the image field without a build section, making them depend on the service that builds the image.
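A minimal sketch of that pattern, using the image name from the question. Note that the YAML merge key (`<<: *django`) copies the anchor's build section too, which is why every service was building; here the anchor is dropped and the shared image name is repeated instead:

```yaml
version: '3'
services:
  web:
    image: myorganisation/myapp   # built once, here
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    command: /start
  celeryworker:
    image: myorganisation/myapp   # reuses the image built by web
    depends_on:
      - web
    command: /start-celeryworker
  celerybeat:
    image: myorganisation/myapp
    depends_on:
      - web
    command: /start-celerybeat
```

Alternatively, keep the anchor for the shared environment and volumes but leave build out of it, so only web defines a build section.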
I have dockerized a Django project with Postgres, Gunicorn, and Nginx following this tutorial.
Now I want to move the application to Azure Container Instances. Can I simply create a container group following this tutorial and expect the container images to communicate the right way?
To run the project locally I use docker-compose -f docker-compose.prod.yml up -d --build. But how is the communication between the containers handled in Azure Container Instances?
The docker-compose.prod.yml looks like this:
version: '3.7'
services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    env_file:
      - ./.env.prod
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env.prod.db
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      - 1337:80
    depends_on:
      - web
volumes:
  postgres_data:
  static_volume:
  media_volume:
The containers will be able to communicate with each other using the service names (web, db, nginx), because they are part of the container group's local network. Also, take a look at the documentation, as you can't use docker-compose files directly unless you use the edge version of Docker Desktop.
On another note, upon restart you will lose whatever is stored in your volumes, because you are not using any kind of external storage. Look at the documentation.
I have noticed that when developing with Celery in a container, with something like this:
celeryworker:
  build: .
  user: django
  command: celery -A project.celery worker -Q project -l DEBUG
  links:
    - redis
    - postgres
  depends_on:
    - redis
    - postgres
  env_file: .env
  environment:
    DJANGO_SETTINGS_MODULE: config.settings.celery
if I make changes to a Celery task, I have to completely rebuild the Docker image to pick up the latest changes.
So I tried:
docker-compose -f celery.yml down
docker-compose -f celery.yml up
Nothing changed, then:
docker-compose -f celery.yml down
docker-compose -f celery.yml build
docker-compose -f celery.yml up
I have the new changes.
Is this the way to do it? It seems very slow to me, rebuilding the image every time. Is it better, then, to run Celery locally, outside the Docker containers?
Mount your . (that is, your working copy) as a volume within the container you're developing in.
That way you're using the fresh code from your working directory without having to rebuild (unless, say, you're changing dependencies or something else that requires a rebuild).
The idea is explained here by Heroku, emphasis mine:
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    env_file: .env
    depends_on:
      - db
    volumes:
      - ./webapp:/opt/webapp # <--- Whatever code your Celery workers need should be here
  db:
    image: postgres:latest
    ports:
      - "5432:5432"
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
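Applying the same idea to the Celery service from the question (a sketch; the container path /opt/webapp follows the Heroku example above and is an assumption about where your code lives in the image):

```yaml
celeryworker:
  build: .
  user: django
  command: celery -A project.celery worker -Q project -l DEBUG
  env_file: .env
  depends_on:
    - redis
    - postgres
  volumes:
    # Mount the working copy so task changes are visible without a rebuild.
    - .:/opt/webapp
```

One caveat: Celery workers do not auto-reload task code, so after editing a task you still need a quick docker-compose restart celeryworker, which is much faster than rebuilding the image.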