How to push a compose-built image to ECR manually? - amazon-web-services

I have a docker-compose file:
version: '3.4'
services:
  nginx:
    container_name: some-nginx
    image: nginx:latest
    restart: always
    ports:
      - 80:80
      - 443:443
  mongodb:
    container_name: some-mongo
    image: mongo:latest
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    ports:
      - 27017:27017
    command: mongod --smallfiles --logpath=/dev/null # --quiet
I want to push to Amazon Elastic Container Registry (ECR) using the command:
docker tag testapper:latest 619625705037.dkr.ecr.us-east-2.amazonaws.com/testapper:latest
But I got this message:
Error response from daemon: No such image: testapper:latest
When I run docker-compose build I get this message:
nginx uses an image, skipping
mongodb uses an image, skipping
What does this mean? How do I push my images to ECR?

Your docker containers are all using existing images (the image keyword):
services:
  nginx:
    image: nginx:latest
  mongodb:
    image: mongo:latest
therefore there is nothing for docker-compose build to build, which is what the "uses an image, skipping" message means.
I believe ECS can pull these official images from Docker Hub by itself, so you should not need to push them to your private repo (ECR). (not 100% sure)
In case you do want to push a custom-built image, the general flow is:
docker build -t your_image_name:tag path
docker tag your_image_name:tag 619625705037.dkr.ecr.us-east-2.amazonaws.com/your_image_name:tag
# or
docker build -t 619625705037.dkr.ecr.us-east-2.amazonaws.com/your_image_name:tag path
docker push 619625705037.dkr.ecr.us-east-2.amazonaws.com/your_image_name:tag
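Note that the flow above assumes Docker is already authenticated against your ECR registry and that the repository exists; a minimal sketch with the AWS CLI v2, reusing the account and region from the question:
# one-time: create the repository that will hold the image
aws ecr create-repository --repository-name your_image_name
# log Docker in to the registry (the token is valid for 12 hours)
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 619625705037.dkr.ecr.us-east-2.amazonaws.com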
You can use docker-compose build to build and tag at the same time if your compose file looks like:
services:
  nginx:
    image: 619625705037.dkr.ecr.us-east-2.amazonaws.com/your_image_name:tag
    build: ./my-nginx-path
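With the ECR address in the image field like this, docker-compose can also do the upload itself; after logging in as above, something like:
docker-compose build
docker-compose push
pushes every image the build produced to its registry.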

Related

Environment variable ElasticBeanstalk Multicontainer

I'm trying to deploy 2 containers by using docker-compose on ElasticBeanstalk with the new Docker platform running on 64bit Amazon Linux 2 (v3). When I add the env_file directive in the compose file I get the error
Stop running the command. Error: no Docker image specified in either Dockerfile or Dockerrun.aws.json. Abort deployment
My working compose:
version: '3.9'
services:
  backend:
    image: my_docker_hub_image_backend
    container_name: backend
    restart: unless-stopped
    ports:
      - '8080:5000'
  frontend:
    image: my_docker_hub_image_frontend
    container_name: frontend
    restart: unless-stopped
    ports:
      - '80:5000'
And this is the version with which the error occurs:
version: '3.9'
services:
  backend:
    image: my_docker_hub_image_backend
    env_file: .env
    container_name: backend
    restart: unless-stopped
    ports:
      - '8080:5000'
  frontend:
    image: my_docker_hub_image_frontend
    container_name: frontend
    restart: unless-stopped
    ports:
      - '80:5000'
What am I doing wrong? The "Environment properties" are set under "Software".
Per the documentation (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html#docker-env-cfg.env-variables) you are right.
But the error in eb-engine.log will look like: "Couldn't find env file: /opt/elasticbeanstalk/deployment/.env"
Please try using an absolute path:
env_file:
  - /opt/elasticbeanstalk/deployment/env.list
The problem turned out to be that the server could not pull the images from the private Docker Hub repository without authorization.

Docker Compose Up works locally, fails to deploy to AWS

I am trying to deploy my docker-compose application (two services, each with its own image) to AWS. I can successfully run docker compose up locally, and that builds and runs the containers on my local Docker.
I have set up a new context for ECS and switched to it. However, when I run docker compose up (which I believe should now deploy to AWS), I get the error docker.io/xxxx/concordejs_backend:latest: not found.
My docker-compose.yml file looks like this:
version: '3'
services:
  backend:
    image: xxxx/concordejs_backend
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    container_name: concorde-backend
    ports:
      - "5000:5000"
  frontend:
    image: xxxx/concordejs_frontend
    build:
      context: ./frontend
      dockerfile: ./Dockerfile
    container_name: concorde-frontend
    ports:
      - "3001:3000"
The images have been built on your local machine and are subsequently retrieved from there each time you launch docker-compose locally.
The AWS service is trying to retrieve the images from the public registry docker.io (Docker Hub), which doesn't have the images you built locally.
One solution is to push your local images to Docker Hub so ECS can access them, or you can use AWS's own registry service, ECR: https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_on_ECS.html
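The ECR route looks roughly like this (a sketch assuming the AWS CLI v2; <account-id> and <region> are placeholders, not values from the question):
aws ecr create-repository --repository-name concordejs_backend
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
docker tag xxxx/concordejs_backend:latest <account-id>.dkr.ecr.<region>.amazonaws.com/concordejs_backend:latest
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/concordejs_backend:latest
The image: fields in docker-compose.yml would then point at the ECR URIs instead of the xxxx/... names.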

Best practices for working with docker-compose on AWS ECS for Continuous Deployment

I'm new to ECS and I'm somewhat confused about how to deploy automatically to AWS ECS Fargate from a docker-compose file with multiple services.
I was able to perform an End-to-End from a git push to the deploy of a single container, with the following steps:
Create an AWS ECR
Tag the docker image
Create CodeCommit
Create CodeBuild
Create CodeDeploy
Create a Cluster with a Task Definition
Create Pipeline to join everything before and automate until the end.
Done
But what happens when you have multiple services?
Do I have to modify the docker-compose file to be compatible with ECS? If so, how can I separate the repository if the entire project is in a single folder (pydanny cookiecutter structure)?
Do I have to create an ECR repository for each service of my docker-compose?
What are the steps to automate the tag and push of each ECR and then its respective deploy to achieve the complete End-to-End process?
How can I modify the volumes of the docker-compose to work on ECS?
I use the following docker-compose file generated by the pydanny cookiecutter and it has 7 services:
Django + Postgres + Redis + Celery + Celeryworker + Celerybeat + Flower
docker-compose.yml
version: '3'
volumes:
  local_postgres_data: {}
  local_postgres_data_backups: {}
services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: test_cd_django
    depends_on:
      - postgres
    volumes:
      - .:/app
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: test_cd_postgres
    volumes:
      - local_postgres_data:/var/lib/postgresql/data
      - local_postgres_data_backups:/backups
    env_file:
      - ./.envs/.local/.postgres
  redis:
    image: redis:3.2
  celeryworker:
    <<: *django
    image: test_cd_celeryworker
    depends_on:
      - redis
      - postgres
    ports: []
    command: /start-celeryworker
  celerybeat:
    <<: *django
    image: test_cd_celerybeat
    depends_on:
      - redis
      - postgres
    ports: []
    command: /start-celerybeat
  flower:
    <<: *django
    image: test_cd_flower
    ports:
      - "5555:5555"
    command: /start-flower
Thank you very much for any help.
It depends on whether you want to use your docker-compose file to perform all the operations. If you want to build, push and pull using docker-compose, you'll need the image blocks in docker-compose.yml to match the ECR address.
e.g.
image: ${ID}.dkr.ecr.${region}.amazonaws.com/${image_name}:${image_tag:-latest}
Do I have to create an ECR repository for each service of my docker-compose?
You don't have to create an ECR repository for each service but for each image you build. In your case, you don't have to create a repo for redis but you'll have to do it for django and postgres since you're building them using your Dockerfiles. celeryworker and celerybeat are using the django image to start so you won't need to create an extra repo for them.
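For this compose file that means two repositories, e.g. (a sketch; the repository names simply mirror the image names above):
aws ecr create-repository --repository-name test_cd_django
aws ecr create-repository --repository-name test_cd_postgres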
What are the steps to automate the tag and push of each ECR and then its respective deploy to achieve the complete End-to-End process?
Here I can only provide some suggestions; it all depends on your setup. I tend to remain as cloud-service-agnostic as possible.
You can have the images in the docker-compose.yml defined as follows:
services:
  postgres:
    image: ${ID}.dkr.ecr.${region}.amazonaws.com/my_postgres:${image_tag:-latest}
  django:
    image: <theID>.dkr.ecr.<theRegion>.amazonaws.com/my_django:${image_tag:-latest}
and then simply prepare a .env file on the fly during the build containing the info you need. e.g.
image_tag=1.2.0
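The build-and-push step of the pipeline can then stay very small; a sketch, assuming ID and region are already exported in the CI environment and Docker is logged in to ECR:
# write the tag on the fly, e.g. derived from the git tag
echo "image_tag=1.2.0" > .env
docker-compose build
docker-compose push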
How can I modify the volumes of the docker-compose to work on ECS?
Unfortunately I can't answer this question myself, but I found the following answer:
https://devops.stackexchange.com/questions/6228/using-volumes-on-aws-fargate

Is it possible to specify sourcePath for AWS ECS in yml?

I have a docker-compose file which I'm trying to upload to AWS ECS. I'm using ecs-cli to upload it. I run ecs-cli compose up and everything works fine, except that I cannot define host.sourcePath for docker named volumes. I want to do it in ecs-params.yml, but there is no information about it in the ECS documentation.
docker-compose.yml:
version: '3.0'
services:
  nginx:
    image: nginx
    ports:
      - 80:80
    depends_on:
      - php
    volumes:
      - app:/app
      - nginx-config:/etc/nginx/conf.d
  php:
    image: ...
    restart: always
    working_dir: /app
    volumes:
      - app:/app
      - php-config:/usr/local/etc/php/conf.d
volumes:
  app:
  nginx-config:
  php-config:
ecs-params.yml:
version: 1
task_definition:
  services:
    nginx:
      cpu_shares: 256
      mem_limit: 512MB
    php:
      cpu_shares: 768
      mem_limit: 512MB
I encountered the same question as you and found the answer in the ecs-cli GitHub issues:
https://github.com/aws/amazon-ecs-cli/issues/318
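For reference, the issue points towards the docker_volumes section of ecs-params.yml. The following is only a sketch from my reading of the ecs-cli docs; the bind-mount emulation of host.sourcePath via the local driver is an assumption, so verify it against the issue above:
version: 1
task_definition:
  services:
    nginx:
      cpu_shares: 256
      mem_limit: 512MB
    php:
      cpu_shares: 768
      mem_limit: 512MB
docker_volumes:
  - name: app
    scope: shared
    autoprovision: true
    driver: local
    driver_opts:
      type: none
      device: /srv/app   # hypothetical host path standing in for host.sourcePath
      o: bind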

How to go from a development docker-compose.yml to a deployed docker-compose.yml in AWS

I have the following docker-compose.yml:
version: '3'
services:
  server:
    build:
      context: ../../
      dockerfile: ./packages/website/Dockerfile
    command: yarn serve
    environment:
      PORT: 3000
      NODE_ENV: production
    restart: always
  nginx:
    build:
      context: ./
      dockerfile: ./nginx/Dockerfile
    command: nginx -c /etc/nginx/nginx.conf -g "daemon off;"
    depends_on:
      - server
    ports:
      - "80:80"
    restart: always
This works fantastically locally, but now I want to deploy it to a t2.micro or some other paid service, and I don't know how I would go about it.
I think I would need to create a separate docker-compose.yml file which references images rather than physical Dockerfile(s).
Can anyone shed any light on how I would go about this?
It will work if you put the entire directory onto the cloud machine (and keep the directory structure).
If you just want to upload the docker-compose.yml file to the cloud without anything else, you need to modify it by removing the build field and adding an image: xxx field.
The question then becomes "how can I refer to an image in my docker-compose.yml file?".
Two ways to achieve this:
Build the image and put it in some container registry; this can be Docker Hub or a private one. Then refer to it with registry-url/image-name:image-tag. If you're using Docker Hub, you can omit the registry-url/ part.
Build the image, export it and copy it to the cloud machine, then refer to it there with image-name:image-tag (see the sketch below).
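A sketch of the second option using docker save and docker load (host name and paths are placeholders):
docker save image-name:image-tag | gzip > image.tar.gz
scp image.tar.gz user@cloud-host:/tmp/
ssh user@cloud-host 'gunzip -c /tmp/image.tar.gz | docker load'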