I am currently working on a CI/CD pipeline (AWS CodePipeline with CodeBuild) for deploying the production stage of our application to Elastic Beanstalk (Docker platform). I am using docker-compose for building and deploying.
This is my docker-compose.yml so far:
version: '3'
services:
  api:
    container_name: "api"
    image: "***.dkr.ecr.***.amazonaws.com/api"
    build:
      context: .
    ports:
      - "80:5000"
However, the problem is that Elastic Beanstalk builds the Docker image again because of the build statement in docker-compose.yml. Is there a way to avoid this, since the image is already built and pushed to ECR during the build stage of the CI/CD pipeline?
A workaround, which worked for me, was to add a separate docker-compose-build.yml and exclude the build statement from docker-compose.yml, but the problem is that this does not work for our other stages. In detail, our test stage is usually deployed to Elastic Beanstalk using the eb-cli, and there we need Elastic Beanstalk to build the images locally without ECR.
Thank you!
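One direction that keeps a single source of truth, sketched under the assumption that Compose's multiple -f files mechanism fits your pipeline (docker-compose.build.yml is a placeholder name): leave only the image reference in the base docker-compose.yml, which is what the production bundle ships, and move the build section into an override file that is passed explicitly wherever a local build is wanted.

# docker-compose.build.yml: override that re-adds the build section
version: '3'
services:
  api:
    build:
      context: .

# CI build stage: build and push the image to ECR
docker-compose -f docker-compose.yml -f docker-compose.build.yml build
docker-compose -f docker-compose.yml -f docker-compose.build.yml push

# test stage via eb-cli: render the merged file and ship it as the
# bundle's docker-compose.yml so Elastic Beanstalk builds locally
docker-compose -f docker-compose.yml -f docker-compose.build.yml config > bundle/docker-compose.yml

The production bundle then contains only the base file, so Elastic Beanstalk pulls the prebuilt image from ECR instead of rebuilding it.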
Related
I have deployed a Spring Boot app to AWS Beanstalk through a GitHub action, but it is not accessible. I set up Spring Boot to run on port 5000 and exposed it because, from my understanding, Beanstalk opens port 5000. Watching the AWS logs I see that Spring Boot correctly starts on port 5000. Below are my configuration files:
Dockerfile.dev
FROM eclipse-temurin:17-jdk-alpine
VOLUME /tmp
ADD /target/demoCI-CD-0.0.1-SNAPSHOT.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
EXPOSE 5000
This is the link, which does not work: http://dockerreact-env.eba-v2y3spbp.eu-west-3.elasticbeanstalk.com/test
With a docker-compose.yml in the project, Beanstalk takes it into consideration, and that is where the port mapping issue was. Below is the correct port mapping in docker-compose.yml.
version: "3"
services:
web:
build:
context: .
dockerfile: Dockerfile.dev
ports:
- "80:8080"
I am trying to deploy a web application to AWS Fargate as well as AWS Beanstalk.
My docker-compose file looks like this (just an example, please focus on the ports):
services:
  application-gateway:
    image: "gcr.io/docker-public/application:latest"
    container_name: application-name
    ports:
      - "443:9443"
      - "8443:8443"
**Issue with AWS Fargate**
I need to know how to map these ports. Bridge networking does not get enabled, and all I can see is how to change the host port.
I can see that once I deploy the public Docker image it gets deployed in Fargate, but how do I access the application's DNS URL?
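A hedged note on the Fargate side: Fargate tasks use the awsvpc network mode, in which the host port must equal the container port, so a 443-to-9443 remap cannot be expressed at the task level; the remap belongs on the load balancer listener instead. In compose terms the port section would look like this:

# Sketch: on Fargate (awsvpc) host port == container port
services:
  application-gateway:
    image: "gcr.io/docker-public/application:latest"
    ports:
      - "9443:9443"
      - "8443:8443"

The application is then reached through the load balancer's DNS name (or the task's public IP for a quick test), with a listener on 443 forwarding to target port 9443.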
**Issue faced with AWS Beanstalk**
I was able to deploy the application on a single instance; however, I am unable to deploy it in a load-balanced environment. Again, I suspect the issue is with the ports on the load balancer, although I have opened these ports in the security group.
Thanks,
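For the load-balanced Beanstalk environment, one possible direction, sketched with placeholder values: listener ports on the environment's ALB are configured through option settings, not through the compose file or security groups alone. Something along these lines in .ebextensions:

# .ebextensions/alb.config (sketch; the certificate ARN is a placeholder)
option_settings:
  aws:elbv2:listener:443:
    ListenerEnabled: true
    Protocol: HTTPS
    SSLCertificateArns: arn:aws:acm:eu-west-1:111122223333:certificate/placeholder
  aws:elasticbeanstalk:environment:process:default:
    Port: 9443
    Protocol: HTTPS

This tells the load balancer to listen on 443 and forward to the instance process on 9443, which matches the container's 9443 port.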
I am deploying a compose to an AWS ECS context with the following docker-compose.yml
x-aws-loadbalancer: "${LOADBALANCER_ARN}"
services:
  webapi:
    image: ${DOCKER_REGISTRY-}webapi
    build:
      context: .
      dockerfile: webapi/Dockerfile
    environment:
      ASPNETCORE_URLS: http://+:80
      ASPNETCORE_ENVIRONMENT: Development
    ports:
      - target: 80
        x-aws-protocol: http
When I create a load balancer using these instructions, it is assigned the default security group of the default VPC. That apparently doesn't match the ingress rules for the Docker services, because if I look at the task in ECS I see it being killed over and over for failing an ELB health check.
The only way to fix it is to go into the AWS Console and assign the security group that docker compose created to represent the default network to the load balancer. But that's insane.
How do I create a load balancer with the correct minimal-access security group so that it will be able to talk to the compose-generated services created later?
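One possible way to avoid the console round-trip, sketched with placeholder IDs: create a minimal-access security group up front, attach it when creating the load balancer, and then pass that load balancer's ARN in as LOADBALANCER_ARN:

# placeholder VPC/subnet/group IDs; port 80 matches the service target
aws ec2 create-security-group \
  --group-name compose-lb-sg \
  --description "ingress for compose-generated services" \
  --vpc-id vpc-0123456789abcdef0

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0

aws elbv2 create-load-balancer \
  --name compose-lb \
  --type application \
  --security-groups sg-0123456789abcdef0 \
  --subnets subnet-0aaa1111 subnet-0bbb2222

Because compose still generates its own service security group per deployment, that generated group must allow traffic from this one (or reuse it), so this is a sketch of the shape of a fix rather than a turnkey answer.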
I have created a Spring Boot app following this tutorial. Following the tutorial, I managed to dockerize my app with the command:
docker-compose up
My docker-compose.yml file:
version: '3'
services:
  app:
    build: .
    volumes:
      - .:/app
      - ~/.m2:/root/.m2
    working_dir: /app
    ports:
      - 8080:8080
    command: mvn clean spring-boot:run
When I check for Docker images afterwards, I see the new image for the app. Now I want to deploy this app to AWS Elastic Beanstalk. When creating the environment with the Docker platform, what do I need to upload as the application code? How do I upload my Docker image to AWS? I can't find a good tutorial on how to deploy a dockerized app/image to AWS like this. I am new to Docker, so any help would be appreciated!
Update Oct 2020
Docker-compose is now officially supported by EB:
AWS Elastic Beanstalk Adds Support for Running Multi-Container Applications on AL2 based Docker Platform
Original answer below
EB does not support docker-compose. To run your container (is it a single or multi-container setup?) you have to use either the single-container or multi-container EB platform.
In both cases you have to translate your docker-compose.yml into Dockerrun.aws.json. The file takes a different form depending on whether you are using a single or multi-container setup.
How do I upload my docker image to aws?
If it's a single-container EB setup, you can just provide your Dockerfile to EB and it will take care of everything for you. For multi-container EB, you can store your images in a public repo such as Docker Hub, or a private repo such as ECR.
To translate your docker-compose.yml file into Dockerrun.aws.json, you can try the container-transform tool. It can be helpful, though you will most likely need to manually make further adjustments to the generated file.
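For orientation, a minimal single-container Dockerrun.aws.json sketch; the image name and port are placeholders, and the multi-container (version 2) format is structured differently:

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "youraccount/your-app:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": 8080 }
  ]
}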
We are using eb_deployer to deploy to Elastic Beanstalk and we would like to provision each node using .ebextensions and Ansible.
A package created for eb_deployer looks something like this (simplified); it is assembled on the control node with Ansible:
- Procfile
- application.jar
- .ebextensions
  - ansible.config
  - provision.yml
  - roles
    - appdynamics
      - tasks
        - main.yml
ansible.config installs ansible on the Beanstalk node and runs a single playbook:
packages:
  python:
    ansible: []
container_commands:
  ansible:
    command: "ansible-playbook .ebextensions/provision.yml"
provision.yml (simplified) only includes a single role:
- name: provision eb instance
  hosts: localhost
  connection: local
  gather_facts: yes
  roles:
    - role: appdynamics
      controller_host: "example.com"
      controller_port: 443
Now the problem is that the appdynamics role uses a variable appdynamics_accesskey, which is stored in the vault, but the vault password file lives on the control node.
We would like to avoid copying the vault password file from the control machine to the .ebextensions bundle on the S3 bucket and then to the Beanstalk node.
What would you do in such a scenario? Maybe there are other tools which are more appropriate in this case?
It appears that one way to solve this issue is to launch a temporary instance, configure it with Ansible running on the control machine only, create an image with the ec2_ami Ansible module, and use that image as a custom image for the autoscaling group.
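A minimal sketch of that bake step, assuming the amazon.aws collection and that the temporary instance was already provisioned from the control node (where the vault password stays); temp_instance_id is a placeholder:

# bake.yml: turn the provisioned temporary instance into an AMI
- name: bake pre-provisioned beanstalk image
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: create AMI from the temporary instance
      amazon.aws.ec2_ami:
        instance_id: "{{ temp_instance_id }}"  # placeholder
        name: "eb-appdynamics-{{ lookup('pipe', 'date +%s') }}"
        wait: yes
      register: baked_ami

    - name: print the AMI id to plug into the autoscaling group's custom image
      debug:
        var: baked_ami.image_id

The secret never leaves the control machine; the Beanstalk nodes boot from an image that already contains the provisioned agent.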