Docker Compose Up works locally, fails to deploy to AWS

I am trying to deploy my Docker Compose application (two services, each with its own image) to AWS. I can successfully run docker compose up locally, and that builds the images and runs the containers on my local Docker.
I then set up a new Docker context for ECS and switched to it. However, when I run docker compose up (which I believe should now deploy to AWS), I get the error docker.io/xxxx/concordejs_backend:latest: not found.
My docker-compose.yml file looks like this:
version: '3'
services:
  backend:
    image: xxxx/concordejs_backend
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    container_name: concorde-backend
    ports:
      - "5000:5000"
  frontend:
    image: xxxx/concordejs_frontend
    build:
      context: ./frontend
      dockerfile: ./Dockerfile
    container_name: concorde-frontend
    ports:
      - "3001:3000"

The image has been built on your local machine and is subsequently retrieved from there each time you launch docker-compose locally.
The AWS service is trying to retrieve the image from the public registry docker.io (Docker Hub), since it doesn't have the image you built locally.
One solution is to push your local image to Docker Hub so that ECS can access it, or you can use AWS's own registry service, ECR: https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_on_ECS.html
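For example, a minimal sketch of pushing the backend image to ECR with the AWS CLI (the account ID, region, and repository name below are placeholders):
# Authenticate the local Docker client with your ECR registry
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com

# Tag the locally built image with its ECR repository URI and push it
# (assumes an ECR repository named concordejs_backend already exists)
docker tag xxxx/concordejs_backend:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/concordejs_backend:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/concordejs_backend:latest
The image: entries in docker-compose.yml would then need to point at those ECR URIs so the ECS context can pull them.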

Related

How to pull and use existing image from Azure ACR through Dockerfile

I am performing an AWS-to-Azure services migration.
I am using a CentOS VM and am trying to pull an existing image from ACR and create a container from it, using a Dockerfile. I have created an image on Azure ACR; I need help pulling this image and creating a container on the CentOS VM.
Earlier, I was doing this with images on AWS ECR (possibly by using AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY) as shown below, but I am not sure how this can be done with Azure ACR. How do I give the application containing the Dockerfile and docker-compose.yml below access to Azure? Do I need to use an access/secret key pair similar to AWS? If so, how do I create this pair on Azure?
Below are the files I was using when dealing with container creation on CentOS with the AWS image.
Dockerfile:
FROM 12345.ecrImageUrl/latestImages/pk-image-123:latest
RUN yum update -y
docker-compose.yml:
version: '1.2'
services:
  initn:
    <<: *mainnode
    entrypoint: ""
    command: "npm i"
  bldn:
    <<: *mainnode
    entrypoint: ""
    command: "npm run watch:build"
  runn:
    <<: *mainnode
    entrypoint: ""
    command: "npm run watch:run"
    links:
      - docker-host
    ports:
      - "8011:8080"
    environment:
      - AWS_ACCESS_KEY=${AWS_ACCESS_KEY}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
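For what it's worth, the rough ACR equivalent of the AWS key pair is a service principal; a minimal sketch, assuming a registry named myregistry (all names and credentials below are placeholders):
# Create a service principal with pull rights on the registry; note the appId and password it prints
az ad sp create-for-rbac --name acr-pull-sp --role acrpull \
  --scopes $(az acr show --name myregistry --query id --output tsv)

# Log Docker in to ACR with those credentials before building
docker login myregistry.azurecr.io --username <appId> --password <password>

# The Dockerfile's FROM line then points at the ACR image, e.g.
# FROM myregistry.azurecr.io/latestImages/pk-image-123:latest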

Docker Compose to Cloud Run

I created a docker-compose file containing a Django app and PostgreSQL, and it runs perfectly. Now I'm confused about whether I can deploy this docker-compose setup to Google Container Registry in order to run it on Cloud Run.
version: "3.8"
services:
app:
build: .
volumes:
- .:/app
ports:
- 8000:8000
image: django-app
container_name: django_container
command: >
bash -c "python manage.py migrate
&& python manage.py runserver 0.0.0.0:8000"
depends_on:
- db
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=nukacola
- POSTGRES_PASSWORD=as938899
container_name: postgres_db
Thank you for answering my question.
You cannot run a docker-compose configuration on Cloud Run. Cloud Run only supports individual containers.
To run your Django app on Cloud Run, you can do the following (a command sketch follows these steps).
Build your docker image for Django locally using the docker build command.
Push the image to GCR using the docker push command.
Create a new Cloud Run service and use the newly pushed Docker image.
Create a Cloud SQL Postgres instance and use its credentials as environment variables in your Cloud Run service.
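A minimal sketch of those steps with the gcloud and docker CLIs (the project ID, image name, region, and credential values are placeholders):
# 1. Build the Django image locally and tag it for Google Container Registry
docker build -t gcr.io/my-project/django-app:latest .

# 2. Let Docker authenticate against GCR, then push the image
gcloud auth configure-docker
docker push gcr.io/my-project/django-app:latest

# 3. + 4. Deploy the pushed image to Cloud Run, passing the database credentials as environment variables
gcloud run deploy django-app \
  --image gcr.io/my-project/django-app:latest \
  --region us-central1 \
  --platform managed \
  --set-env-vars POSTGRES_DB=postgres,POSTGRES_USER=nukacola,POSTGRES_PASSWORD=<password>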
You can also host your own Compute Engine instance and run docker-compose on it but I would not recommend that.
You can also create a GKE cluster and run Django and Postgres on it, but that requires knowledge of Kubernetes (deployments, statefulsets, services, etc.).

Best practices for working with docker-compose on AWS ECS for Continuous Deployment

I'm new to ECS and I'm somewhat confused about how to deploy to AWS ECS Fargate automatically with a docker-compose file that has multiple services.
I was able to build an end-to-end flow, from a git push to the deployment of a single container, with the following steps:
Create an AWS ECR
Tag the docker image
Create CodeCommit
Create CodeBuild
Create CodeDeploy
Create a Cluster with a Task Definition
Create a Pipeline to tie all of the above together and automate the process end to end.
Done
But what happens when you have multiple services?
Do I have to modify the docker-compose file to be compatible with ECS? If so, how can I separate the repositories if the entire project lives in a single folder (pydanny cookiecutter structure)?
Do I have to create an ECR repository for each service of my docker-compose?
What are the steps to automate the tag and push of each ECR and then its respective deploy to achieve the complete End-to-End process?
How can I modify the volumes of the docker-compose to work on ECS?
I use the following docker-compose file generated by the pydanny cookiecutter and it has 7 services:
Django + Postgres + Redis + Celery + Celeryworker + Celerybeat + Flower
docker-compose.yml
version: '3'
volumes:
  local_postgres_data: {}
  local_postgres_data_backups: {}
services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: test_cd_django
    depends_on:
      - postgres
    volumes:
      - .:/app
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: test_cd_postgres
    volumes:
      - local_postgres_data:/var/lib/postgresql/data
      - local_postgres_data_backups:/backups
    env_file:
      - ./.envs/.local/.postgres
  redis:
    image: redis:3.2
  celeryworker:
    <<: *django
    image: test_cd_celeryworker
    depends_on:
      - redis
      - postgres
    ports: []
    command: /start-celeryworker
  celerybeat:
    <<: *django
    image: test_cd_celerybeat
    depends_on:
      - redis
      - postgres
    ports: []
    command: /start-celerybeat
  flower:
    <<: *django
    image: test_cd_flower
    ports:
      - "5555:5555"
    command: /start-flower
Thank you very much for any help.
It depends on whether you want to use your docker-compose file to perform all the operations. If you want to build, push and pull using docker-compose, you'll need the image blocks in the docker-compose.yml to match the ECR address.
e.g.
image: ${ID}.dkr.ecr.${region}.amazonaws.com/${image_name}:${image_tag:-latest}
Do I have to create an ECR repository for each service of my docker-compose?
You don't have to create an ECR repository for each service but for each image you build. In your case, you don't have to create a repo for redis but you'll have to do it for django and postgres since you're building them using your Dockerfiles. celeryworker and celerybeat are using the django image to start so you won't need to create an extra repo for them.
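For this compose file, a minimal sketch of the repositories to create with the AWS CLI (the names simply mirror the image names above and are only an illustration):
# One ECR repository per image that docker-compose actually builds
aws ecr create-repository --repository-name test_cd_django
aws ecr create-repository --repository-name test_cd_postgres
# redis:3.2 comes straight from Docker Hub, so it needs no repository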
What are the steps to automate the tag and push of each ECR and then its respective deploy to achieve the complete End-to-End process?
Here I can only provide some suggestions, it all depends on your setup. I tend to remain as cloud service agnostic as possible.
You can have the images in the docker-compose.yml defined as follows:
services:
  postgres:
    image: ${ID}.dkr.ecr.${region}.amazonaws.com/my_postgres:${image_tag:-latest}
  django:
    image: <theID>.dkr.ecr.<theRegion>.amazonaws.com/my_django:${image_tag:-latest}
and then simply prepare a .env file on the fly during the build containing the info you need, e.g.:
image_tag=1.2.0
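For example, a hypothetical CI step could write that .env and then build and push everything in one shot (all values are placeholders):
# Write the .env that docker-compose uses for variable substitution
cat > .env <<EOF
ID=123456789012
region=eu-west-1
image_tag=1.2.0
EOF

# Build the images under their ECR names and push them
docker-compose build
docker-compose push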
How can I modify the volumes of the docker-compose to work on ECS?
Unfortunately I can't answer this question myself, but I found the following answer:
https://devops.stackexchange.com/questions/6228/using-volumes-on-aws-fargate

How to push a compose-built image to ECR manually?

I have a docker-compose file:
version: '3.4'
services:
  nginx:
    container_name: some-nginx
    image: nginx:latest
    restart: always
    ports:
      - 80:80
      - 443:443
  mongodb:
    container_name: some-mongo
    image: mongo:latest
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    ports:
      - 27017:27017
    command: mongod --smallfiles --logpath=/dev/null # --quiet
I want to push them to Amazon Elastic Container Registry (ECR) using the command:
docker tag testapper:latest 619625705037.dkr.ecr.us-east-2.amazonaws.com/testapper:latest
But I got this message:
Error response from daemon: No such image: testapper:latest
When I run docker-compose build, I got this message:
nginx uses an image, skipping
mongodb uses an image, skipping
What does this mean? How do I push my images to ECR?
Your docker containers are all using existing images (the image keyword):
services:
  nginx:
    image: nginx:latest
  mongodb:
    image: mongo:latest
therefore you do not need to build them.
I believe ECS will find these official images by itself, so you should not need to push them to your private repo (ECR), though I'm not 100% sure.
In case you do want to push a custom built image, the general flow is
docker build -t your_image_name:tag path
docker tag your_image_name:tag 619625705037.dkr.ecr.us-east-2.amazonaws.com/your_image_name:tag
# or
docker build -t 619625705037.dkr.ecr.us-east-2.amazonaws.com/your_image_name:tag path
docker push 619625705037.dkr.ecr.us-east-2.amazonaws.com/your_image_name:tag
You can use docker-compose build to build and tag at the same time if your compose file looks like this:
services:
  nginx:
    image: 619625705037.dkr.ecr.us-east-2.amazonaws.com/your_image_name:tag
    build: ./my-nginx-path
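With a compose file like that, a hypothetical end-to-end push would then be:
# Log in to ECR, then build and push the image defined in the compose file
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 619625705037.dkr.ecr.us-east-2.amazonaws.com
docker-compose build nginx
docker-compose push nginx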

How to go from a development docker-compose.yml to a deployed docker-compose.yml in AWS

I have the following docker-compose.yml:
version: '3'
services:
  server:
    build:
      context: ../../
      dockerfile: ./packages/website/Dockerfile
    command: yarn serve
    environment:
      PORT: 3000
      NODE_ENV: production
    restart: always
  nginx:
    build:
      context: ./
      dockerfile: ./nginx/Dockerfile
    command: nginx -c /etc/nginx/nginx.conf -g "daemon off;"
    depends_on:
      - server
    ports:
      - "80:80"
    restart: always
This works fantastically locally, but now I want to deploy it to a t2.micro or some other paid service, and I don't know how I would go about it.
I think I would need to create a separate docker-compose.yml file that references images rather than physical Dockerfile(s).
Can anyone shed any light on how I would go about this?
It will work if you put the entire directory onto the cloud host (and keep the directory structure).
If you somehow just want to upload the docker-compose.yml file to the cloud without anything else, you need to modify it by removing the build fields and adding image: xxx fields.
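For example, a deployment-oriented variant of the file above might look like this (the registry and image names are placeholders):
version: '3'
services:
  server:
    image: registry-url/website-server:latest   # replaces the build: block
    command: yarn serve
    environment:
      PORT: 3000
      NODE_ENV: production
    restart: always
  nginx:
    image: registry-url/website-nginx:latest    # replaces the build: block
    command: nginx -c /etc/nginx/nginx.conf -g "daemon off;"
    depends_on:
      - server
    ports:
      - "80:80"
    restart: always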
The question now becomes "how can I refer to an image in my docker-compose.yml file?"
Two ways to achieve this:
Build the image and push it to some container registry; this can be Docker Hub or a private one. Then refer to it as registry-url/image-name:image-tag. If you're using Docker Hub, you can omit the registry-url/ part.
Build the image, export it, and scp it to the cloud host, then refer to it as image-name:image-tag (see the sketch below).
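A minimal sketch of the second option (the host and image names are placeholders):
# Export the locally built image to a tarball
docker save -o website-server.tar website-server:latest

# Copy the tarball to the cloud host and load it into the remote Docker daemon
scp website-server.tar user@my-cloud-host:/tmp/
ssh user@my-cloud-host "docker load -i /tmp/website-server.tar"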