Is it possible to specify sourcePath for AWS ECS in yml? - amazon-web-services

I have a docker-compose file that I'm trying to upload to AWS ECS using ecs-cli. I run ecs-cli compose up and everything works fine, except that I cannot define host.sourcePath for Docker named volumes. I want to do it in ecs-params.yml, but there is no information about it in the ECS documentation.
docker-compose.yml:
version: '3.0'
services:
  nginx:
    image: nginx
    ports:
      - 80:80
    depends_on:
      - php
    volumes:
      - app:/app
      - nginx-config:/etc/nginx/conf.d
  php:
    image: ...
    restart: always
    working_dir: /app
    volumes:
      - app:/app
      - php-config:/usr/local/etc/php/conf.d
volumes:
  app:
  nginx-config:
  php-config:
ecs-params.yml:
version: 1
task_definition:
  services:
    nginx:
      cpu_shares: 256
      mem_limit: 512MB
    php:
      cpu_shares: 768
      mem_limit: 512MB

I encountered the same question and found the answer in the ecs-cli GitHub issues:
https://github.com/aws/amazon-ecs-cli/issues/318
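To illustrate, here is a hedged sketch of two approaches commonly used for this (not taken verbatim from the issue): bind-mount the host paths directly in docker-compose.yml, which ecs-cli translates into host.sourcePath volumes in the task definition, or, on newer ecs-cli versions that support it, declare the volumes under docker_volumes in ecs-params.yml. The host paths and driver options below are assumptions for illustration only.

# Option 1: bind mounts in docker-compose.yml (host paths are illustrative)
services:
  nginx:
    volumes:
      - /opt/app:/app
      - /opt/nginx-config:/etc/nginx/conf.d

# Option 2: docker_volumes in ecs-params.yml (only on ecs-cli versions that support it)
version: 1
task_definition:
  docker_volumes:
    - name: app
      scope: task
      driver: local
      driver_opts:
        type: none
        device: /opt/app   # illustrative host path
        o: bind

Either way, you can check the registered task definition with aws ecs describe-task-definition to confirm how the volumes ended up being configured.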

Related

Getting an error in AWS ECS logs: "Failed opening the RDB file crontab (in server root dir /etc) for saving: Permission denied". Also my Redis keeps getting cleared

Getting a Redis error in AWS ECS logs
version: "3"
services:
cache:
image: redis:5.0.13-buster
container_name: redis
restart: always
ports:
- 6379:6379
command: redis-server --save 20 1 --loglevel warning
volumes:
- ./redis/data:/data
- ./redis/redis.conf:/usr/local/etc/redis/redis.conf
node-app:
build: .
depends_on:
- cache
ports:
- 9000:9000
environment:
PORT: 9000
REDIS_PORT: 6379
REDIS_HOST: redis
links:
- cache
container_name: node_api
I have tried changing the Redis version and giving the volumes a different location, but none of this works.

Django and AWS: which is better, Lambda or Fargate?

I currently use docker-compose.yml to deploy my Django application on an AWS EC2 instance, but I feel the need for scaling and load balancing.
I have two choices:
AWS Lambda (using Zappa, but I've heard it is no longer maintained. There are limits on memory and execution time: the timeout was recently increased to 15 minutes from the earlier 5 minutes, and there is a 3 GB memory limit (CPU scales proportionally). Also, if it is used sporadically, it may need to be pre-warmed (invoked on a schedule) for extra performance.)
AWS Fargate (not sure how to use the docker-compose.yml file here)
My Django application requires some big libraries like pandas.
Currently I use a docker-compose.yml file to run the Django application. My docker-compose.yml file is below.
I have used the following images: django-app, reactjs-app, postgres, redis and nginx.
# My version of Docker = 18.09.4-ce
# Compose file format 3.7 is supported from Docker 18.06.0+
version: "3.7"
services:
  nginx:
    image: nginx:1.19.0-alpine
    ports:
      - 80:80
    volumes:
      - ./nginx/localhost/configs:/etc/nginx/configs
    networks:
      - nginx_network
  postgresql:
    image: "postgres:13-alpine"
    restart: always
    volumes:
      - type: bind
        source: ../DO_NOT_DELETE_postgres_data
        target: /var/lib/postgresql/data
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      PGDATA: "/var/lib/postgresql/data/pgdata"
    networks:
      - postgresql_network
  redis:
    image: "redis:5.0.9-alpine3.11"
    command: redis-server
    environment:
      - REDIS_REPLICATION_MODE=master
    networks: # connect to the bridge
      - redis_network
  celery_worker:
    image: "python3.9_django_image"
    command: sh -c "celery -A celery_worker.celery worker --pool=solo --loglevel=debug"
    depends_on:
      - redis
    networks: # connect to the bridge
      - redis_network
      - postgresql_network
  webapp:
    image: "python3.9_django-app"
    command: sh -c "python manage.py runserver 0.0.0.0:8000"
    depends_on:
      - postgresql
    stdin_open: true # Add this line into your service
    tty: true # Add this line into your service
    networks:
      - postgresql_network
      - nginx_network
      - redis_network
  node_reactjs:
    image: "node16_reactjs-app"
    command: sh -c "yarn run dev"
    stdin_open: true # Add this line into your service
    tty: true # Add this line into your service
    networks:
      - nginx_network
networks:
  postgresql_network:
    driver: bridge
  redis_network:
    driver: bridge
  nginx_network:
    driver: bridge
and I access it using
domain-name:80 for the React app
api.domain-name:80 for the Django REST APIs
which I configured in nginx.
So in my scenario, how can I shift to AWS?

Environment variables with Elastic Beanstalk multicontainer Docker

I'm trying to deploy 2 containers using docker-compose on Elastic Beanstalk with the new "Docker running on 64bit Amazon Linux 2 (v3)" platform. When I add the env_file directive to the compose file, I get the error
Stop running the command. Error: no Docker image specified in either Dockerfile or Dockerrun.aws.json. Abort deployment
My working compose:
version: '3.9'
services:
  backend:
    image: my_docker_hub_image_backend
    container_name: backend
    restart: unless-stopped
    ports:
      - '8080:5000'
  frontend:
    image: my_docker_hub_image_frontend
    container_name: frontend
    restart: unless-stopped
    ports:
      - '80:5000'
And the version after which the error occurs:
version: '3.9'
services:
  backend:
    image: my_docker_hub_image_backend
    env_file: .env
    container_name: backend
    restart: unless-stopped
    ports:
      - '8080:5000'
  frontend:
    image: my_docker_hub_image_frontend
    container_name: frontend
    restart: unless-stopped
    ports:
      - '80:5000'
What am I doing wrong?
In "Software" "Environment properties" are added
Per the documentation (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html#docker-env-cfg.env-variables), you are right.
But the error in eb-engine.log will look like: "Couldn't find env file: /opt/elasticbeanstalk/deployment/.env"
Please try using an absolute path:
env_file:
  - /opt/elasticbeanstalk/deployment/env.list
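If it helps, here is a minimal sketch of how the backend service from the question could reference that file; the /opt/elasticbeanstalk/deployment/env.list path is the one suggested in the answer above, not something verified independently:

version: '3.9'
services:
  backend:
    image: my_docker_hub_image_backend
    container_name: backend
    restart: unless-stopped
    # absolute path on the Elastic Beanstalk host, per the suggestion above
    env_file:
      - /opt/elasticbeanstalk/deployment/env.list
    ports:
      - '8080:5000'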
The problem was that the server could not pull images from private docker hub without authorization.

Why does this docker compose file build the same image four times?

When I run docker-compose build on the following docker-compose file, which is for a Django server with Celery, it builds an identical image four times: the entire build process is repeated once each for the web, celeryworker, celerybeat and flower services.
I thought the point of inheriting from other service descriptions in docker-compose was so that you could reuse the same image for different services?
How can I reuse the web image in the other services, to reduce my build time by 75%?
version: '3'
services:
  web: &django
    image: myorganisation/myapp
    container_name: myapp_web
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
      # This is a multistage build installing private dependencies, hence this arg is needed
      args:
        PERSONAL_ACCESS_TOKEN_GITHUB: ${PERSONAL_ACCESS_TOKEN_GITHUB}
    command: /start
    volumes:
      - .:/app
    ports:
      - 8000:8000
    depends_on:
      - db
      - redis
    environment:
      - DJANGO_SETTINGS_MODULE=backend.settings.local
      - DATABASE_URL=postgres://postgres_user:postgres_password@db/postgres_db
      - REDIS_URL=redis://:redis_password@redis:6379
      - CELERY_FLOWER_USER=flower_user
      - CELERY_FLOWER_PASSWORD=flower_password
    env_file:
      - ./.env
  celeryworker:
    <<: *django
    container_name: myapp_celeryworker
    depends_on:
      - redis
      - db
    ports: []
    command: /start-celeryworker
  celerybeat:
    <<: *django
    container_name: myapp_celerybeat
    depends_on:
      - redis
      - db
    ports: []
    command: /start-celerybeat
  flower:
    <<: *django
    container_name: myapp_flower
    ports:
      - 5555:5555
    command: /start-flower
volumes:
  postgres_data:
    driver: local
  pgadmin_data:
    driver: local
Because you are specifying the build section in all the services (via the &django anchor), the image gets built once per service.
If you want to use the same image for all services but build it only once, specify the build section in only the service that starts first (i.e. the one with no dependencies), give the other services just the image field without a build section, and make them depend on the service that builds the image, as sketched below.
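As an illustration, here is a trimmed-down sketch of that layout based on the compose file in the question; shared settings such as environment, volumes and env_file are omitted for brevity and would still need to be carried over (for example via a YAML anchor that does not include the build key):

version: '3'
services:
  web:
    image: myorganisation/myapp
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
      args:
        PERSONAL_ACCESS_TOKEN_GITHUB: ${PERSONAL_ACCESS_TOKEN_GITHUB}
    command: /start
    ports:
      - 8000:8000
  celeryworker:
    image: myorganisation/myapp   # reuses the image built by web, no build section
    depends_on:
      - web
    command: /start-celeryworker
  celerybeat:
    image: myorganisation/myapp
    depends_on:
      - web
    command: /start-celerybeat
  flower:
    image: myorganisation/myapp
    depends_on:
      - web
    ports:
      - 5555:5555
    command: /start-flower

With this layout, docker-compose build only builds the web service, and the other services start from the already-built myorganisation/myapp image.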

Best practices for working with docker-compose on AWS ECS for continuous deployment

I'm new to ECS and I'm somewhat confused about how to deploy to AWS ECS Fargate automatically from a docker-compose file with multiple services.
I was able to go end-to-end from a git push to the deployment of a single container with the following steps:
Create an AWS ECR
Tag the docker image
Create CodeCommit
Create CodeBuild
Create CodeDeploy
Create a Cluster with a Task Definition
Create a Pipeline to join everything above and automate it end to end
Done
But what happens when you have multiple services?
Do I have to modify the docker-compose file to be compatible with ECS? If so, how can I separate the repository if the entire project is in a single folder (pydanny cookiecutter structure)?
Do I have to create an ECR repository for each service of my docker-compose?
What are the steps to automate the tag and push of each ECR and then its respective deploy to achieve the complete End-to-End process?
How can I modify the volumes of the docker-compose to work on ECS?
I use the following docker-compose file generated by the pydanny cookiecutter and it has 7 services:
Django + Postgres + Redis + Celery + Celeryworker + Celerybeat + Flower
docker-compose.yml
version: '3'
volumes:
  local_postgres_data: {}
  local_postgres_data_backups: {}
services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: test_cd_django
    depends_on:
      - postgres
    volumes:
      - .:/app
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: test_cd_postgres
    volumes:
      - local_postgres_data:/var/lib/postgresql/data
      - local_postgres_data_backups:/backups
    env_file:
      - ./.envs/.local/.postgres
  redis:
    image: redis:3.2
  celeryworker:
    <<: *django
    image: test_cd_celeryworker
    depends_on:
      - redis
      - postgres
    ports: []
    command: /start-celeryworker
  celerybeat:
    <<: *django
    image: test_cd_celerybeat
    depends_on:
      - redis
      - postgres
    ports: []
    command: /start-celerybeat
  flower:
    <<: *django
    image: test_cd_flower
    ports:
      - "5555:5555"
    command: /start-flower
Thank you very much for any help.
It depends on whether you want to use your docker-compose file to perform all the operations. If you want to build, push and pull using docker-compose, the image fields in docker-compose.yml need to match the ECR repository address.
e.g.
image: ${ID}.dkr.ecr.${region}.amazonaws.com/${image_name}:${image_tag:-latest}
Do I have to create an ECR repository for each service of my docker-compose?
You don't have to create an ECR repository for each service but for each image you build. In your case, you don't have to create a repo for redis but you'll have to do it for django and postgres since you're building them using your Dockerfiles. celeryworker and celerybeat are using the django image to start so you won't need to create an extra repo for them.
What are the steps to automate the tag and push of each ECR and then its respective deploy to achieve the complete End-to-End process?
Here I can only provide some suggestions; it all depends on your setup. I tend to remain as cloud-service-agnostic as possible.
You can have images in the docker-compose.yml defined as follow:
services:
  postgres:
    image: ${ID}.dkr.ecr.${region}.amazonaws.com/my_postgres:${image_tag:-latest}
  django:
    image: <theID>.dkr.ecr.<theRegion>.amazonaws.com/my_django:${image_tag:-latest}
and then simply prepare a .env file on the fly during the build containing the info you need. e.g.
image_tag=1.2.0
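Since the pipeline described above already includes CodeBuild, one way to automate the tag-and-push step is a buildspec that writes that .env file, logs Docker in to ECR and pushes everything defined in the compose file. This is only a sketch; ACCOUNT_ID, AWS_REGION and IMAGE_TAG are assumed to be supplied as CodeBuild environment variables, and the .env keys match the ${ID}, ${region} and ${image_tag} placeholders used in the image fields above.

# buildspec.yml (sketch; ACCOUNT_ID, AWS_REGION and IMAGE_TAG are assumed CodeBuild environment variables)
version: 0.2
phases:
  pre_build:
    commands:
      # write the variables referenced by the image fields into a .env file
      - printf "ID=%s\nregion=%s\nimage_tag=%s\n" "$ACCOUNT_ID" "$AWS_REGION" "$IMAGE_TAG" > .env
      # authenticate the Docker client against ECR
      - aws ecr get-login-password --region "$AWS_REGION" | docker login --username AWS --password-stdin "$ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com"
  build:
    commands:
      - docker-compose build
  post_build:
    commands:
      - docker-compose push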
How can I modify the volumes of the docker-compose to work on ECS?
Unfortunately I can't answer this one myself, but I found the following discussion:
https://devops.stackexchange.com/questions/6228/using-volumes-on-aws-fargate