Django and AWS: Which is better, Lambda or Fargate? - django

I currently use a docker-compose.yml file to deploy my Django application on an AWS EC2 instance, but I feel the need for scaling and load balancing.
I have two choices:
AWS Lambda (using Zappa, though I have heard it is no longer maintained). There are limitations on memory and execution time: the execution-time limit was recently increased to 15 minutes from the earlier 5, and memory is capped at 3 GB (CPU scales proportionally with memory). Also, if the function is invoked only sporadically, it may need to be pre-warmed (called on a schedule) for acceptable performance.
AWS Fargate (I am not sure how to use the docker-compose.yml file here).
My Django application requires some big libraries like pandas.
Currently I use a docker-compose.yml file to run the Django application; it is shown below.
I use the following images: django-app, reactjs-app, postgres, redis, and nginx.
# Docker version: 18.09.4-ce
# Compose file format 3.7 requires Docker Engine 18.06.0+
version: "3.7"
services:
  nginx:
    image: nginx:1.19.0-alpine
    ports:
      - 80:80
    volumes:
      - ./nginx/localhost/configs:/etc/nginx/configs
    networks:
      - nginx_network
  postgresql:
    image: "postgres:13-alpine"
    restart: always
    volumes:
      - type: bind
        source: ../DO_NOT_DELETE_postgres_data
        target: /var/lib/postgresql/data
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      PGDATA: "/var/lib/postgresql/data/pgdata"
    networks:
      - postgresql_network
  redis:
    image: "redis:5.0.9-alpine3.11"
    command: redis-server
    environment:
      - REDIS_REPLICATION_MODE=master
    networks: # connect to the bridge
      - redis_network
  celery_worker:
    image: "python3.9_django_image"
    command: sh -c "celery -A celery_worker.celery worker --pool=solo --loglevel=debug"
    depends_on:
      - redis
    networks: # connect to the bridge
      - redis_network
      - postgresql_network
  webapp:
    image: "python3.9_django-app"
    command: sh -c "python manage.py runserver 0.0.0.0:8000"
    depends_on:
      - postgresql
    stdin_open: true
    tty: true
    networks:
      - postgresql_network
      - nginx_network
      - redis_network
  node_reactjs:
    image: "node16_reactjs-app"
    command: sh -c "yarn run dev"
    stdin_open: true
    tty: true
    networks:
      - nginx_network
networks:
  postgresql_network:
    driver: bridge
  redis_network:
    driver: bridge
  nginx_network:
    driver: bridge
and access it using:
domain-name:80 for the React app
api.domain-name:80 for the Django REST APIs
both of which I configured in nginx.
So in my scenario, how can I shift to AWS?

Related

Getting error code 247 while deploying django app

I am trying to deploy my Django application to a DigitalOcean droplet, using the least expensive one, which gives me 512 MB of RAM, 1 CPU, and 10 GB of SSD. After setting everything up, I run docker-compose up --build to see if everything is fine, and it launches it all. In my docker-compose file I use a Postgres instance, a Redis instance, a Celery worker, and the Django application I wrote. If it matters, here is the docker-compose file:
version: "3.9"
services:
  db:
    container_name: my_table_postgres
    image: postgres
    ports:
      - 5432/tcp
    volumes:
      - my_table_postgres_db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=my_table_postgres
      - POSTGRES_USER=dev
      - POSTGRES_PASSWORD=blablabla
  redis:
    container_name: redis
    image: redis
    ports:
      - 6379:6379/tcp
    environment:
      - REDIS_HOST=redis-oauth-user-service
    volumes:
      - redis_data:/var/lib/redis/data/
  my_table:
    container_name: my_table
    build: .
    command: python manage.py runserver 0.0.0.0:5000
    volumes:
      - .:/api
    ports:
      - "5000:5000"
    depends_on:
      - db
      - redis
  celery:
    image: celery
    container_name: celery
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    command: ['python', '-m', 'celery', '-A', 'mytable', 'worker', '-l', 'INFO']
    volumes:
      - .:/api
    depends_on:
      - redis
      - my_table
    links:
      - redis
volumes:
  my_table_postgres_db:
  redis_data:
Then everything starts up quite slowly, and after I make a request from something like Postman, the Django app's main process in the docker-compose terminal says that my_table exited with code 247. Can you please tell me why? Do I need to change some setup, or is the droplet's RAM too low?
Thank you a lot
It was just a sizing problem: the droplet had too little RAM to support all the services. After moving to a larger droplet, everything worked fine.
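Exit codes in the high 24x range from docker-compose generally indicate the container was killed for running out of memory. If resizing the droplet is not an option, one mitigation (a sketch only; the limit values are illustrative and would need tuning) is to cap each service's memory so a single container cannot starve the host:

```yaml
# Sketch: per-service memory caps under the Compose v2 file format,
# where mem_limit is honored directly (under v3 it moves to deploy.resources).
version: "2.4"
services:
  my_table:
    build: .
    command: python manage.py runserver 0.0.0.0:5000
    mem_limit: 200m   # hard cap for the Django app (illustrative value)
  redis:
    image: redis
    mem_limit: 64m
  db:
    image: postgres
    mem_limit: 128m
```

With caps in place an over-budget service is OOM-killed individually and restarts (if a restart policy is set), instead of taking the whole droplet down.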

In AWS ECS logs I am getting the error "Failed opening the RDB file crontab (in server root dir /etc) for saving: Permission denied", and my Redis data is also being cleared

I am getting a Redis error in the AWS ECS logs.
version: "3"
services:
  cache:
    image: redis:5.0.13-buster
    container_name: redis
    restart: always
    ports:
      - 6379:6379
    command: redis-server --save 20 1 --loglevel warning
    volumes:
      - ./redis/data:/data
      - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
  node-app:
    build: .
    depends_on:
      - cache
    ports:
      - 9000:9000
    environment:
      PORT: 9000
      REDIS_PORT: 6379
      REDIS_HOST: redis
    links:
      - cache
    container_name: node_api
I have tried changing the Redis version and also giving the volumes a different location, but none of this works.
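That specific message, Redis trying to save an RDB file named crontab into /etc, is a well-known signature of an exposed Redis instance under attack: with port 6379 reachable from the internet and no password, bots issue CONFIG SET dir / CONFIG SET dbfilename to write cron entries, and often run FLUSHALL first, which would also explain the data "clearing". A sketch of a more defensive setup follows; the password is a placeholder, and it assumes node-app only needs Redis over the internal Compose network (relative host paths like ./redis/data also do not translate to ECS, hence the named volume):

```yaml
version: "3"
services:
  cache:
    image: redis:5.0.13-buster
    restart: always
    # No host port mapping: node-app reaches Redis by service name internally.
    expose:
      - 6379
    # requirepass plus disabling CONFIG; CHANGE_ME is a placeholder.
    command: >
      redis-server --save 20 1 --loglevel warning
      --dir /data
      --requirepass CHANGE_ME
      --rename-command CONFIG ""
    volumes:
      - redis-data:/data
volumes:
  redis-data:
```

The application would then authenticate with the same password (for example via a REDIS_PASSWORD environment variable on node-app).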

Why does this docker compose file build the same image four times?

When I run docker-compose build on the following docker-compose file, which is for a Django server with Celery, it builds an identical image four times (for the web, celeryworker, celerybeat, and flower services).
The entire process is repeated four times
I thought the point of inheriting from other service descriptions in docker-compose was so that you could reuse the same image for different services?
How can I reuse the web image in the other services, to reduce my build time by 75%?
version: '3'
services:
  web: &django
    image: myorganisation/myapp
    container_name: myapp_web
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
      # This is a multistage build installing private dependencies, hence this arg is needed
      args:
        PERSONAL_ACCESS_TOKEN_GITHUB: ${PERSONAL_ACCESS_TOKEN_GITHUB}
    command: /start
    volumes:
      - .:/app
    ports:
      - 8000:8000
    depends_on:
      - db
      - redis
    environment:
      - DJANGO_SETTINGS_MODULE=backend.settings.local
      - DATABASE_URL=postgres://postgres_user:postgres_password@db/postgres_db
      - REDIS_URL=redis://:redis_password@redis:6379
      - CELERY_FLOWER_USER=flower_user
      - CELERY_FLOWER_PASSWORD=flower_password
    env_file:
      - ./.env
  celeryworker:
    <<: *django
    container_name: myapp_celeryworker
    depends_on:
      - redis
      - db
    ports: []
    command: /start-celeryworker
  celerybeat:
    <<: *django
    container_name: myapp_celerybeat
    depends_on:
      - redis
      - db
    ports: []
    command: /start-celerybeat
  flower:
    <<: *django
    container_name: myapp_flower
    ports:
      - 5555:5555
    command: /start-flower
volumes:
  postgres_data:
    driver: local
  pgadmin_data:
    driver: local
Because the build section is included in every service (via the django anchor), the image is built once per service.
If you want to use the same image for all services but build it only once, keep the build section in just one service that starts first (i.e., the service with no dependencies), specify only the image field (without a build section) in the other services, and make those services depend on the one that builds the image.
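One way to express this, as a sketch that keeps the service and image names from the compose file above but drops the anchor so that only web carries a build section:

```yaml
version: '3'
services:
  web:
    image: myorganisation/myapp
    build:                          # the only service that builds the image
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    command: /start
  celeryworker:
    image: myorganisation/myapp     # reuse the image built by web; no build section
    depends_on:
      - web
    command: /start-celeryworker
  celerybeat:
    image: myorganisation/myapp
    depends_on:
      - web
    command: /start-celerybeat
  flower:
    image: myorganisation/myapp
    depends_on:
      - web
    command: /start-flower
```

The trade-off is that the YAML-anchor inheritance of volumes, environment, and env_file is lost, so any settings the workers genuinely share would have to be repeated or moved into an extension field.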

How to open a Postgres database created with Docker in a database client?

I have a question about how to open a database, created with Docker using https://github.com/cookiecutter/cookiecutter, in a database client.
Local compose file:
version: '3'
volumes:
  local_postgres_data: {}
  local_postgres_data_backups: {}
services:
  django:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: tienda_local_django
    depends_on:
      - postgres
    volumes:
      - .:/app
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: tienda_production_postgres
    volumes:
      - local_postgres_data:/var/lib/postgresql/data
      - local_postgres_data_backups:/backups
    env_file:
      - ./.envs/.local/.postgres
Is the port on the container mapped? Try using 127.0.0.1 (assuming this is on the same machine) as the host in your local client instead of the container name. If that doesn't work, can you share your docker-compose.yaml?
You don't have a network between the containers/services in docker-compose. You can achieve this in a number of ways:
1. Link the containers/services. This is deprecated but, depending on your Docker version, may still work. Add to your docker-compose file:
django:
  ...
  links:
    - postgres
2. Connect the services to the same network. Add to both service definitions:
networks:
  - django
You also need to define the django network in the docker-compose file:
networks:
  django:
3. Connect your services via the host network. Just add to both service definitions:
network_mode: host
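As the earlier comment suggests, for a database client running on the host (outside the Compose network) the missing piece is usually a published port on the postgres service rather than inter-container networking. A sketch, reusing the service definition from the compose file above:

```yaml
postgres:
  build:
    context: .
    dockerfile: ./compose/production/postgres/Dockerfile
  image: tienda_production_postgres
  ports:
    - "5432:5432"   # publish Postgres so a host client can reach 127.0.0.1:5432
```

The client would then connect to host 127.0.0.1, port 5432, with the user, password, and database name from ./.envs/.local/.postgres.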

Is it possible to specify sourcePath for AWS ECS in yml?

I have a docker-compose file which I am trying to upload to AWS ECS using ecs-cli. I run ecs-cli compose up and everything works fine, except that I cannot define host.sourcePath for named Docker volumes. I want to do it in ecs-params.yml, but there is no information about this in the ECS documentation.
docker-compose.yml:
version: '3.0'
services:
  nginx:
    image: nginx
    ports:
      - 80:80
    depends_on:
      - php
    volumes:
      - app:/app
      - nginx-config:/etc/nginx/conf.d
  php:
    image: ...
    restart: always
    working_dir: /app
    volumes:
      - app:/app
      - php-config:/usr/local/etc/php/conf.d
volumes:
  app:
  nginx-config:
  php-config:
ecs-params.yml:
version: 1
task_definition:
  services:
    nginx:
      cpu_shares: 256
      mem_limit: 512MB
    php:
      cpu_shares: 768
      mem_limit: 512MB
I encountered the same question and found the answer in this ecs-cli GitHub issue:
https://github.com/aws/amazon-ecs-cli/issues/318
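For reference, my understanding of the workaround (an assumption worth verifying against the issue above): ecs-cli translates a Compose bind-mount entry with a host path on the left into a task-definition volume with host.sourcePath, so instead of a named volume plus ecs-params configuration, the host path can be written directly in docker-compose.yml. A sketch, with /ecs/... as placeholder host paths:

```yaml
version: '3.0'
services:
  nginx:
    image: nginx
    ports:
      - 80:80
    volumes:
      # Host-path bind mounts; ecs-cli should map these to host.sourcePath
      # in the generated task definition (paths are placeholders).
      - /ecs/app:/app
      - /ecs/nginx-config:/etc/nginx/conf.d
```

Note that host.sourcePath only makes sense on EC2-backed ECS, where tasks share the instance's filesystem; it does not apply to Fargate.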