Deploying Docker image to AWS elastic beanstalk - amazon-web-services

I have created a Spring Boot app following this tutorial, and I managed to dockerize it with the command:
docker-compose up
My docker-compose.yml file:
version: '3'
services:
  app:
    build: .
    volumes:
      - .:/app
      - ~/.m2:/root/.m2
    working_dir: /app
    ports:
      - 8080:8080
    command: mvn clean spring-boot:run
When I check for Docker images afterwards, I see the new image for the app. Now I want to deploy this app to AWS Elastic Beanstalk. When creating the environment with the Docker platform, what do I need to upload for the application code? How do I upload my Docker image to AWS? I can't find a good tutorial on how to deploy a dockerized app/image to AWS like this. I am new to Docker, so any help would be appreciated!

Update Oct 2020
Docker-compose is now officially supported by EB:
AWS Elastic Beanstalk Adds Support for Running Multi-Container Applications on AL2 based Docker Platform
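With the AL2 Docker platform, this means the application bundle you upload can simply contain the docker-compose.yml itself. A minimal sketch with the EB CLI (application and environment names here are placeholders, not values from the question):

# Run from the project root that contains docker-compose.yml
eb init -p docker my-app    # 'my-app' is a placeholder application name
eb create my-env            # 'my-env' is a placeholder environment name
eb deploy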
Original answer below
EB does not support docker-compose. To run your container (is it a single- or multi-container setup?) you have to use either the single-container or the multi-container EB platform.
In both cases you have to translate your docker-compose.yml into Dockerrun.aws.json. The file takes a different form depending on whether you are using a single- or multi-container setup.
How do I upload my Docker image to AWS?
If it's a single-container EB setup, you can just provide your Dockerfile to EB and it will take care of everything for you. For multi-container EB, you can store your images in a public repo such as Docker Hub, or a private repo such as ECR.
To translate your docker-compose.yml file into Dockerrun.aws.json, you can try the container-transform tool. It can be helpful, though you will most likely need to make further manual adjustments to the generated file.
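For orientation, a single-container Dockerrun.aws.json (AWSEBDockerrunVersion 1) corresponding to a compose file like the one in the question might look roughly like this. JSON allows no comments, so note here that the image name is a placeholder to replace with your own repository:

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "your-dockerhub-user/your-app:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 8080
    }
  ]
}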

Related

Dockerized Spring Boot on AWS Beanstalk not accessible

I have deployed a Spring Boot app to AWS Beanstalk through a GitHub action, but it is not accessible. I set up Spring Boot to run on port 5000 and exposed it because, from my understanding, Beanstalk opens port 5000. Watching the AWS logs, I see that Spring Boot correctly starts on port 5000. Below are my configuration files:
Dockerfile.dev
FROM eclipse-temurin:17-jdk-alpine
VOLUME /tmp
ADD /target/demoCI-CD-0.0.1-SNAPSHOT.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
EXPOSE 5000
This is the link not working: http://dockerreact-env.eba-v2y3spbp.eu-west-3.elasticbeanstalk.com/test
When the project contains a docker-compose.yml, Beanstalk takes it into consideration, and that is where the port-mapping issue was. Below is the correct port mapping in docker-compose.yml.
version: "3"
services:
web:
build:
context: .
dockerfile: Dockerfile.dev
ports:
- "80:8080"

Elastic Beanstalk rebuilding Docker image when using AWS CodePipeline

I am currently working on a CI/CD pipeline (AWS CodePipeline with CodeBuild) for deploying the production stage of our application to Elastic Beanstalk (Docker platform). I am using docker-compose for building and deploying.
This is my docker-compose.yml so far:
version: '3'
services:
  api:
    container_name: "api"
    image: "***.dkr.ecr.***.amazonaws.com/api"
    build:
      context: .
    ports:
      - "80:5000"
However, the problem is that Elastic Beanstalk builds the Docker image again when a build statement is defined within the docker-compose.yml. Is there a way to avoid this, since the image is already built and pushed to ECR during the build stage of the CI/CD pipeline?
A workaround, which worked for me, was to add a separate docker-compose-build.yml and exclude the build statement from docker-compose.yml, but the problem here is that this does not work for our other stages. In detail, our test stage is usually deployed to Elastic Beanstalk using the eb CLI, and there we need Elastic Beanstalk to build the images locally without ECR. The split might look like the sketch below.
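For illustration, a minimal version of that split, based on the compose file above (the ECR URI stays redacted as in the question):

# docker-compose.yml — deployed to EB; only pulls the prebuilt image
version: '3'
services:
  api:
    container_name: "api"
    image: "***.dkr.ecr.***.amazonaws.com/api"
    ports:
      - "80:5000"

# docker-compose-build.yml — used only in the CodeBuild stage, e.g.
# docker-compose -f docker-compose-build.yml build
# docker-compose -f docker-compose-build.yml push
version: '3'
services:
  api:
    image: "***.dkr.ecr.***.amazonaws.com/api"
    build:
      context: .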
Thank you!

Is it possible to deploy a Keycloak service on GCP's Cloud Run?

I'm trying to deploy a Keycloak service on Cloud Run using a Postgres database on Cloud SQL. The Dockerfile I'm using looks as follows:
# syntax=docker/dockerfile:1
FROM quay.io/keycloak/keycloak:latest
ENV DB_VENDOR postgres
ENV DB_ADDR <IP_ADDRESS>
ENV DB_DATABASE <DB_NAME>
ENV DB_SCHEMA public
ENV DB_USER postgres
ENV DB_PASSWORD postgres
ENV KEYCLOAK_USER admin
ENV KEYCLOAK_PASSWORD admin
ENV PROXY_ADDRESS_FORWARDING true
ENV PORT 8080
EXPOSE ${PORT}
Running this image on my localhost (via docker-compose) works smoothly without issues; however, once I deploy it using the GCP SDK, it fails with the following issue, which I've been unable to fix. Has anyone run into an issue like this one?
ERROR: (gcloud.run.services.update) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
[UPDATE]
After reviewing the logs, I realized that I was having an error connecting to my Postgres database. However, even using two CPUs and 4Gi of memory, the deployed service seems very slow to respond compared with the same configuration deployed on App Engine.
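For comparison, a deploy sketch for this kind of setup might look like the following. All names below (service, project, region, Cloud SQL instance) are placeholders, not values from the question, and how Keycloak reaches Cloud SQL (private IP vs. the Cloud SQL connector socket) is a separate choice:

# Sketch only: every name below is a placeholder.
gcloud run deploy keycloak \
  --image gcr.io/my-project/keycloak:latest \
  --port 8080 \
  --cpu 2 \
  --memory 4Gi \
  --add-cloudsql-instances my-project:europe-west1:keycloak-db \
  --set-env-vars DB_VENDOR=postgres,DB_USER=postgres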

Run multiple task definitions using docker-compose.yml and ecs-params.yml files in AWS ECS with different launch types and volume mounting

I have 4 Docker images which I want to run on ECS. For my local system I use a docker-compose file with multiple services.
I want to do something similar with Docker Compose on ECS.
I want my database image to run on EC2 and the rest on Fargate, host the database volume on EC2, and make sure each container can communicate with the others by name.
How do I configure my docker-compose.yml and ecs-params.yml files?
My sample docker-compose.yml file
version: '2.1'
services:
  first:
    image: first:latest
    ports:
      - "2222:2222"
    depends_on:
      database:
        condition: service_healthy
  second:
    image: second:latest
    ports:
      - "8090:8090"
    depends_on:
      database:
        condition: service_healthy
  third:
    image: third:latest
    ports:
      - "3333:3333"
  database:
    image: database
    environment:
      MYSQL_ROOT_PASSWORD: abcde
      MYSQL_PASSWORD: abcde
      MYSQL_USER: user
    ports:
      - "3306:3306"
    volumes:
      - ./datadir/mysql:/var/lib/mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 5s
      retries: 5
I don't see how you connect the containers with each other. depends_on just tells Docker Compose the order to use when containers are started. You may have the actual connections hardcoded inside the containers, which is not good. Given the Docker Compose file you shared, containers can reach each other using their aliases from the Compose file. For example, the third container can use the database domain name to reach the database container. So if you have such names hardcoded in your containers, it will work. However, people usually configure connection points (URLs) as environment variables in the Docker Compose file; in that case nothing is hardcoded, as sketched below.
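A minimal sketch of that pattern, using the services from the question (the DB_HOST/DB_PORT variable names are assumptions; use whatever names your application actually reads):

  third:
    image: third:latest
    ports:
      - "3333:3333"
    environment:
      DB_HOST: database   # the Compose service name resolves on the shared network
      DB_PORT: "3306"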
Hosting the DB volume on EC2 can be a bad idea. An EC2 instance has two types of storage: EBS volumes and instance store. Instance storage is destroyed when the EC2 instance is terminated, while data on EBS persists independently of the instance. So you should use EBS storage (not instance store); S3 is not suitable for your need here.
Hosting a DB in a container is a very bad idea. You can find the same warning in the descriptions of many DB images on Docker Hub. Instead, you can use MySQL as a managed service via AWS RDS.
The problem you have now has nothing to do with AWS or ECS. Once you have Docker Compose running fine locally, you will get the same behavior on the ECS side. You can see an example of configuration via a Compose file here.
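As for the ecs-params.yml half of the question, which the answer does not cover, a minimal Fargate-style sketch for the ecs-cli tooling might look like this. The subnet and security-group IDs are placeholders, and mixing EC2 and Fargate launch types would require separate task definitions:

version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole   # assumes this role already exists
  ecs_network_mode: awsvpc
  task_size:
    cpu_limit: 512
    mem_limit: 2GB
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - subnet-0123456789abcdef0    # placeholder
      security_groups:
        - sg-0123456789abcdef0        # placeholder
      assign_public_ip: ENABLED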

Deploy Node Express API via Docker Compose on EC2

On my EC2 instance, I pulled my Docker images from my ECR: API + WEB.
I then start both of them up via Docker Compose.
They seem to start fine, but I don't know why I can't seem to reach my API.
I can go to my site.
When I go to my API, I see this.
I already opened up port 3002 in my EC2 inbound rules.
docker-compose.yml
version: "2"
services:
iproject-api:
image: '616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-script:latest'
ports:
- '3002:3002'
iproject-web:
image: '616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-web-script:latest'
ports:
- '80:8080'
links:
- iproject-api
Did I forget to restart any service?
Inbound rule looks fine. Check your API container status on the EC2 instance with docker logs {API_Container_Id}, then telnet localhost 3002.
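Spelled out, those checks might look like this on the instance (the container ID is a placeholder, as in the answer):

# On the EC2 instance
docker ps                         # confirm both containers are actually running
docker logs {API_Container_Id}    # look for startup errors in the API
telnet localhost 3002             # verify something is listening on port 3002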