AWS CodePipeline + CodeDeploy to EC2 with docker-compose - amazon-web-services

Hi, I've been trying to get auto-deployment working on AWS. Whenever there's a merge or commit to the repo, CodePipeline detects it and has CodeDeploy update the tagged EC2 instance with the new changes. The app is a simple Node.js app which I want to start with docker-compose. I have already installed docker-compose + docker on the EC2 instance and enabled the CodeDeploy agent.
I tested the whole process and it mostly works, except that CodeDeploy fails the deployment because it is unable to run the command docker-compose up -d in the ApplicationStart hook of my appspec.yml. I get an error saying docker-compose cannot be found, which is odd because in the BeforeInstall script I download and install docker + docker-compose and set all the permissions. Is there something I'm missing, or is this just not meant to happen with CodeDeploy and EC2?
I can confirm that when I SSH into the EC2 instance and run docker-compose up -d in the project root directory it works, but as soon as I try to run the docker-compose command in the script portion of the appspec.yml it fails.
The project repo is here, just in case there's anything I missed: https://github.com/c3ho/simple_crud
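For reference, a rough sketch of the appspec.yml layout described above; the script names and destination path are illustrative, not taken from the repo:

version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/simple_crud
hooks:
  BeforeInstall:
    - location: scripts/install_docker.sh   # downloads/installs docker + docker-compose, sets permissions
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_app.sh        # runs docker-compose up -d, where the deployment fails
      timeout: 300
      runas: root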

Related

cdk deploy in a docker image

I am trying to run the AWS CDK in a CI/CD pipeline.
The pipeline runs everything in a docker container.
When the cdk deploy step runs, it requires the docker daemon to run docker build and push an image to ECR.
I am running into the error:
Error: write EPIPE
at afterWriteDispatched (internal/stream_base_commons.js:156:25)
at writeGeneric (internal/stream_base_commons.js:147:3)
at Socket._writeGeneric (net.js:787:11)
at Socket._write (net.js:799:8)
at writeOrBuffer (internal/streams/writable.js:358:12)
at Socket.Writable.write (internal/streams/writable.js:303:10)
at /usr/local/lib/node_modules/aws-cdk/node_modules/cdk-assets/lib/private/shell.ts:28:19
at new Promise (<anonymous>)
at Object.shell (/usr/local/lib/node_modules/aws-cdk/node_modules/cdk-assets/lib/private/shell.ts:26:10)
at Docker.execute (/usr/local/lib/node_modules/aws-cdk/node_modules/cdk-assets/lib/private/docker.ts:75:13)
I'm not sure how to get around this; is this a limitation of the AWS CDK?
Essentially, your CI/CD process needs to be able to handle Docker-in-Docker if you run all of your builds in containers. Create a mock CI/CD pipeline that just runs docker image ls or something like that to see whether Docker is working. To use Docker with the CDK, the machine the cdk command runs on must have access to a Docker daemon.
I've seen the EPIPE error when docker isn't even installed.
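One common workaround, assuming your CI runner allows it, is to mount the host's Docker socket into the build container so that cdk deploy can reach a real daemon; the image name below is illustrative:

docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD":/app -w /app \
  my-cdk-build-image \
  npx cdk deploy --require-approval never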

AWS SAM: Build Failed Error: Docker is unreachable. Docker needs to be running to build inside a container

I'm running AWS SAM; when I use sam build --use-container I get the following error.
Starting Build inside a container
Building function 'SamTutorialFunction'
Build Failed
Error: Docker is unreachable. Docker needs to be running to build inside a container
I run sudo service docker start beforehand and still get the same error.
I had the same issue. The problem was that Docker was installed to run as the root user, while AWS SAM tries to access it as your logged-in user. You can set Docker to run as a non-root user (without sudo) by adding your user to the docker group. See https://docs.docker.com/engine/install/linux-postinstall/
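The steps from that page boil down to the following commands:

# Create the docker group if it does not already exist
sudo groupadd docker
# Add the current user to the docker group
sudo usermod -aG docker $USER
# Apply the new group membership without logging out and back in
newgrp docker
# Verify that docker now works without sudo
docker run hello-world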
If you are running Ubuntu on WSL2, you need to enable integration between Docker and WSL2 in order to run
sam build --use-container
Steps:
Download Docker Desktop https://desktop.docker.com/win/main/amd64/Docker%20Desktop%20Installer.exe
Go to Settings => Resources => WSL Integration.
Check "Enable integration with additional distros" for your distro.

change ec2 instance to use ecr image and docker

I have an EC2 instance for testing. I deployed it using OpsWorks, and now I'm making a new Jenkins job to deploy automatically. What I want to do is:
when someone pushes to the branch
the Jenkins server builds a Docker image
pushes the image to ECR
the EC2 instance pulls the ECR image, creates a Docker container, and runs it
I have made a job that uses ECR and deploys to ECS Fargate, but I've never made one that uses ECR and deploys to a pre-existing EC2 instance. I wonder whether this is possible.
Pre-requisite
On your EC2 instance you first have to install Docker.
There are many ways you can do it.
Once Jenkins builds & pushes the Docker image to ECR, you can add a further build step: Jenkins SSHes into the EC2 instance, then pulls and runs the Docker image.
Alternatively, Jenkins can trigger a shell script file on the EC2 instance; that script has all the logic to pull the latest image, stop the existing container, and so on (a sketch of such a script follows below).
From Jenkins you can also do it via an Ansible script.
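A minimal sketch of such a pull-and-restart script, assuming a single container; the account ID, region, repository, container name, and port mapping are placeholders:

#!/bin/bash
set -euo pipefail

REGION=us-east-1                                              # placeholder region
REPO=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app      # placeholder ECR repository
TAG=latest
NAME=my-app

# Authenticate the local Docker daemon against ECR
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "${REPO%%/*}"

# Pull the newest image
docker pull "$REPO:$TAG"

# Stop and remove the currently running container, if any
docker rm -f "$NAME" 2>/dev/null || true

# Start the new container (port mapping is an assumption)
docker run -d --name "$NAME" -p 80:3000 "$REPO:$TAG"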

TeamCity Agent - AWS CLI

I have deployed TeamCity server and Agent to AWS using JetBrains Stack Template (https://www.jetbrains.com/help/teamcity/running-teamcity-stack-in-aws.html)
All seems to be good, my server starts, agent is functional, I have created several builds, etc.
I came to a point where I want to deploy my application to an AWS environment using aws-cli commands.
I am struggling to enable/install aws-cli on the agent. My build steps are erroring out with aws: command not found
Does anyone have any ideas?
My progress so far: I have connected to the agent EC2 machine via an SSH bastion, and I am able to invoke aws --version as ec2-user, but the build agent cannot see aws.
Turns out, my TeamCity agent runs in AWS ECS via the docker image https://hub.docker.com/r/jetbrains/teamcity-agent
What I ended up doing is creating my own docker image, using the JetBrains one as a base.
I uploaded my docker image to an AWS ECR repository. Afterwards I created a new revision of the original task definition. This new revision uses my image instead of the original one, so aws-cli is available there.
I then added my AWS profile to the EC2 host machine and added a volume to the docker container (via the task definition) so that the container can access the .aws/credentials file.
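A sketch of the relevant parts of such a task definition revision; the family, image, memory, and paths are placeholders:

{
  "family": "teamcity-agent",
  "volumes": [
    { "name": "aws-credentials", "host": { "sourcePath": "/home/ec2-user/.aws" } }
  ],
  "containerDefinitions": [
    {
      "name": "teamcity-agent",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/teamcity-agent-awscli:latest",
      "memory": 2048,
      "mountPoints": [
        { "sourceVolume": "aws-credentials", "containerPath": "/root/.aws", "readOnly": true }
      ]
    }
  ]
}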
The Dockerfile looks like this:
FROM jetbrains/teamcity-agent
# Install pip so the AWS CLI can be installed
RUN apt-get update && apt-get install -y python-pip
# --user installs the CLI under /root/.local when building as root
RUN pip install awscli --upgrade --user
# "~" is not expanded inside PATH, so use the absolute path
ENV PATH="/root/.local/bin:${PATH}"
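A sketch of how the image could then be built and pushed to ECR (the account ID, region, and repository name are placeholders):

docker build -t teamcity-agent-awscli .
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag teamcity-agent-awscli:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/teamcity-agent-awscli:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/teamcity-agent-awscli:latest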
I added the AWS CLI to the TeamCity agent over a Remote Desktop connection, since I used a Windows agent. In the build steps I used the Command Line runner type and executed the aws commands.
For more information you can refer to the link below, where I answered this question:
How to deploy to AWS Elastic Beanstalk on successful Teamcity build

Deploy Docker container in EC2 base on Repository URI

I have an EC2 instance on AWS.
I tried
SSH into that box
install Docker
pull the Docker image from my repository URI:
docker pull bheng-api-revision-test:latest 616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-revision-test:latest
tag it
docker tag bheng-api-revision-test:latest 616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-revision-test:latest
I'm trying to build it, and I don't know what command I should use.
I've tried
docker build bheng-api-revision-test:latest 616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-revision-test:latest .
I kept getting an error.
How would one go about debugging this further?
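For what it's worth, docker build expects a Dockerfile and a build context rather than an image reference; an image that has already been built and pushed to ECR is normally started with docker run. A minimal sketch, assuming the app listens on port 3000 inside the container (the port and container name are guesses):

docker run -d --name bheng-api -p 80:3000 616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-revision-test:latest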