Deploying my ECR image to my ECS instance via aws cli - amazon-web-services

So far, in my buildspec.yml file I can create a Docker image and store it in the ECR repository (I am using CodePipeline). My question is: how do I deploy it to my ECS instance through the buildspec.yml using AWS CLI commands?

I am sharing my buildspec.yml file; have a look:
version: 0.1
phases:
  pre_build:
    commands:
      - echo Setting timestamp for container tag
      - echo `date +%s` > timestamp
      - echo Logging into Amazon ECR...
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION)
  build:
    commands:
      - echo Building and tagging container
      - docker build -t $REPOSITORY_NAME .
      - docker tag $REPOSITORY_NAME $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$REPOSITORY_NAME:$BRANCH-`cat ./timestamp`
  post_build:
    commands:
      - echo Pushing docker image
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$REPOSITORY_NAME:$BRANCH-`cat ./timestamp`
      - echo Preparing CloudFormation Artifacts
      - aws s3 cp s3://$ECS_Bucket/$ECS_SERVICE_KEY task-definition.template
      - aws s3 cp s3://$ECS_Bucket/$ECS_SERVICE_PARAMS_KEY cf-config.json
artifacts:
  files:
    - task-definition.template
    - cf-config.json
You can extend this with more commands for your ECS instance; I have written a template that goes to CloudFormation.
You can also write simple AWS CLI commands to create the cluster and pull the images; check the AWS CLI documentation: https://docs.aws.amazon.com/cli/latest/reference/ecs/index.html
I am sharing my own Git repo as well; check it out for more info: https://github.com/harsh4870/ECS-CICD-pipeline
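As a rough sketch (the cluster, service, and task definition names below are placeholders, and task-def.json is assumed to already reference the image you just pushed), these are the kinds of AWS CLI commands you could append to the post_build phase to roll the new image out to an existing ECS service:

# Register a new revision of the task definition that points at the freshly pushed image
aws ecs register-task-definition --cli-input-json file://task-def.json

# Point the existing service at the new revision; ECS replaces the running tasks
aws ecs update-service --cluster my-cluster --service my-service --task-definition my-task-def

# Or, if the task definition always uses a fixed tag, just force a new deployment
aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment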

Related

AWS CodePipeline Fails on Build

The CodeBuild portion of my pipeline keeps failing with the following error:
BUILD_CONTAINER_UNABLE_TO_PULL_IMAGE: Unable to pull customer's container image. CannotPullContainerError: Error response from daemon: pull access denied for 123456789.dkr.ecr.us-east-1.amazonaws.com/diag_test, repository does not exist or may require 'docker login': denied: User: CodeBuild
I did some initial research and saw that maybe the IAM role it was using didn't have enough permissions, so I attached the AmazonEC2ContainerRegistryFullAccess policy to the role and attempted again - same results.
I verified the URI is correct.
What am I missing?
buildspec.yaml below:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 12345678.dkr.ecr.us-east-1.amazonaws.com
      - REPOSITORY_URI=12345678.dkr.ecr.us-east-1.amazonaws.com/diag_test
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - echo Writing image definitions file...
      - printf '[{"name":"diag_test","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files: imagedefinitions.json
Thanks in advance for the assist! :)
If you pull the ECR image in the CodeBuild pipeline, you should add this line:
aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ACCOUNT_NUMBER.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
You need to log in, just as you would with docker login.
If you use a custom image for CodeBuild, you should also add an ECR repository policy so CodeBuild is allowed to pull it.
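For example, a minimal sketch using the AWS CLI to attach such a policy (the repository name my-codebuild-image is just a placeholder):

# Allow the CodeBuild service principal to pull the custom build image from ECR
aws ecr set-repository-policy --repository-name my-codebuild-image --policy-text '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CodeBuildAccess",
      "Effect": "Allow",
      "Principal": { "Service": "codebuild.amazonaws.com" },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}'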

CodePipeline unable to find the image definition file

I have created a CodeBuild project whose buildspec.yml is as follows (following the standard template given by AWS, with minor modifications):
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=xxx.amazonaws.com/projectName
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=test-cicd}
  build:
    commands:
      - echo Building the docker image...
      - docker build -t $REPOSITORY_URI:$COMMIT_HASH -t $REPOSITORY_URI:test-cicd .
      - echo Finish building the docker image.
  post_build:
    commands:
      - echo Pushing the docker images...
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - docker push $REPOSITORY_URI:test-cicd
      - echo Finish pushing the docker images.
      - echo Writing image definitions file...
      - printf '[{"name":"testcicd","imageUri":"%s"}]' $REPOSITORY_URI:test-cicd > imagedefinitions.json
      - cat imagedefinitions.json
artifacts:
  files: imagedefinitions.json
The CodeBuild project is successfully pushing the new Docker image to ECR and creating the output artifact in S3.
Next I tried to create a CodePipeline in which the source is ECR and the next stage performs a CodeDeploy deployment to ECS.
However, the pipeline status shows that the output artifact could not be found.
Yet I noticed that the output artifact is indeed in S3!?

CodeBuild failing to pull file from S3 in docker build

I have a CodeBuild Project which runs a docker build command with a buildspec.yml like this
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
However, during the docker build process I have a shell script that runs the aws s3 cp command. I have given the service role permissions on the bucket, and the error I get is:
+ aws s3 cp s3://bucket/file /var/www/html/filelocal
fatal error: Unable to locate credentials
Do roles not propagate through to Docker on Codebuild?
You are building an isolated file system. If you ran the docker build locally with the credentials on your local machine, you would see the same behavior: you would have to add credentials to the container to run those same operations.
With that said, you could add credentials to your container via build-args, or you could just use the CodeBuild role to gather the files you need and then copy them into the container during the build process. I would vote for the second way so you don't have to worry about cleaning up credentials before publishing the container. Although you can query the environment for the role's temporary credentials (which means you probably wouldn't have to worry about cleaning them up), you remove those concerns entirely by just letting the CodeBuild role fetch the files needed to build the container.
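A rough sketch of the second approach (the file names are taken from your example and the Dockerfile destination path is made up for illustration):

# In the buildspec pre_build commands, fetch the file using the CodeBuild role's credentials
aws s3 cp s3://bucket/file ./file

# Then in the Dockerfile, copy it from the build context instead of calling aws s3 cp inside the build:
# COPY file /var/www/html/file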

Flaky ECS Fargate deploys (An AppSpec file is required, but could not be found)

We have some flaky CodeDeploy errors that are frustrating. In about 10% of our deploys we get the following error : An AppSpec file is required, but could not be found in the revision.
The problem is that when we download the artifact zip file from s3 we clearly see a appspec.yaml file. Our build script doesn't change between deploys and when we rerun the pipeline on the same commit (using the "Release change" button), without changing anything, it works.
The error message isn't helpful and it seems like CodeDeploy isn't 100% reliable.
We use ECS Fargate using Blue/Green Deployment.
Our buildspec.yml file looks like this:
version: 0.2
env:
  parameter-store:
    BUILD_ENV: key-foo-site-node-env
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
      - IMAGE_TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-6)
      - ACCOUNT_ID=$(aws sts get-caller-identity --output text --query 'Account')
      - REPOSITORY_URI="$ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/foo-site"
      - echo Saving source version into version.txt...
      - echo $IMAGE_TAG >> version.txt
  build:
    commands:
      - echo Build started on `date`
      - echo Building the app Docker image...
      - docker build -t $REPOSITORY_URI/app:$IMAGE_TAG .
      - echo Building the nginx Docker image...
      - docker build -t $REPOSITORY_URI/nginx:$IMAGE_TAG docker/nginx
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI/app:$IMAGE_TAG
      - docker push $REPOSITORY_URI/nginx:$IMAGE_TAG
      # Create a valid json file that will be used to create a new task definition version
      # Using sed we need to replace $APP_IMAGE and $NGINX_IMAGE by image urls
      - echo Creating a task definition json
      - sed "s+\$APP_IMAGE+$REPOSITORY_URI/app:$IMAGE_TAG+g; s+\$NGINX_IMAGE+$REPOSITORY_URI/nginx:$IMAGE_TAG+g;" taskdef.$BUILD_ENV.json > register-task-definition.json
      # Using the aws cli we register a new task definition
      # We need the new task definition arn to create a valid appspec.yaml
      # If you need debugging, the next line is useful
      # - aws --debug ecs register-task-definition --cli-input-json "$(cat register-task-definition.json)" > task-definition.json
      - echo Creating an appspec.yaml file
      - TASK_DEFINITION_ARN=`aws ecs register-task-definition --cli-input-json "$(cat register-task-definition.json)" --query 'taskDefinition.taskDefinitionArn' --output text`
      - sed "s+\$TASK_DEFINITION_ARN+$TASK_DEFINITION_ARN+g" appspec.yml > appspec.yaml
artifacts:
  files:
    - appspec.yaml
    - register-task-definition.json
    - task-definition.json
Our appspec.yml file looks like this:
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "$TASK_DEFINITION_ARN"
        LoadBalancerInfo:
          ContainerName: "nginx"
          ContainerPort: "80"
Probably not relevant anymore, but it looks like your AppSpec file has a different suffix (.yml) than the one indicated in the artifacts definition of the buildspec file (.yaml).

Configuring bitbucket pipelines with Docker to connect to AWS

I am trying to set up Bitbucket pipelines to deploy to ECS as here: https://confluence.atlassian.com/bitbucket/deploy-to-amazon-ecs-892623902.html
These instructions say how to push to Docker hub, but I want to push the image to Amazon's image repo. I have set AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID in my Bitbucket parameters list and I can run these command locally with no problems (the keys defined in ~/.aws/credentials). However, I keep getting the error 'no basic auth credentials'. I am wondering if it is not recognising the variables somehow. The docs here: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html say that:
The AWS CLI uses a provider chain to look for AWS credentials in a number of different places, including system or user environment variables and local AWS configuration files. So I am not sure why it isn't working. My Bitbucket Pipelines configuration is as follows (I have not included anything unnecessary):
- export IMAGE_NAME=$AWS_REPO_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/my/repo-name:$BITBUCKET_COMMIT
# build the Docker image (this will use the Dockerfile in the root of the repo)
- docker build -t $IMAGE_NAME .
# authenticate with the AWS repo (this gets and runs the docker login command)
- eval $(aws ecr get-login --region $AWS_DEFAULT_REGION)
# push the new Docker image to the repo
- docker push $IMAGE_NAME
Is there a way of specifying the credentials for aws ecr get-login to use? I even tried this, but it doesn't work:
- mkdir -p ~/.aws
- echo -e "[default]\n" > ~/.aws/credentials
- echo -e "aws_access_key_id = $AWS_ACCESS_KEY_ID\n" >> ~/.aws/credentials
- echo -e "aws_secret_access_key = $AWS_SECRET_ACCESS_KEY\n" >> ~/.aws/credentials
Thanks
I use an alternative method to build and push Docker images to AWS ECR that requires no environment variables:
image: amazon/aws-cli
options:
  docker: true
oidc: true
aws:
  oidc-role: arn:aws:iam::123456789012:role/BitBucket-ECR-Access
pipelines:
  default:
    - step:
        name: Build and push to ECR
        script:
          - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
          - docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:0.0.1 .
          - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:0.0.1
You will need to update the role ARN to match a Role you have created in your AWS IAM console with sufficient permissions.
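As a rough guide (a sketch only; the role name, account ID, and repository are taken from the example above, and the exact policy you need may differ), the role would need ECR push permissions along these lines:

# Attach an inline policy to the role that allows pushing images to the ECR repository
aws iam put-role-policy --role-name BitBucket-ECR-Access --policy-name ecr-push --policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "ecr:GetAuthorizationToken", "Resource": "*" },
    { "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
      ],
      "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/myimage" }
  ]
}'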
Try this:
bitbucket-pipeline.yml
pipelines:
  custom:
    example-image-builder:
      - step:
          image: python:3
          script:
            - export CLONE_ROOT=${BITBUCKET_CLONE_DIR}/../example
            - export IMAGE_LOCATION=<ENTER IMAGE LOCATION HERE>
            - export BUILD_CONTEXT=${BITBUCKET_CLONE_DIR}/build/example-image-builder/dockerfile
            - pip install awscli
            - aws s3 cp s3://example-deployment-bucket/deploy-keys/bitbucket-read-key .
            - chmod 0400 bitbucket-read-key
            - ssh-agent bash -c 'ssh-add bitbucket-read-key; git clone --depth 1 git@bitbucket.org:example.git -b master ${CLONE_ROOT}'
            - cp ${CLONE_ROOT}/requirements.txt ${BUILD_CONTEXT}/requirements.txt
            - eval $(aws ecr get-login --region us-east-1 --no-include-email)
            - docker build --no-cache --file=${BUILD_CONTEXT}/dockerfile --build-arg AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} --build-arg AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} --tag=${IMAGE_LOCATION} ${BUILD_CONTEXT}
            - docker push ${IMAGE_LOCATION}
options:
  docker: true
dockerfile
FROM python:3
MAINTAINER Me <me@me.me>
COPY requirements.txt requirements.txt
ENV DEBIAN_FRONTEND noninteractive
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
RUN apt-get update && apt-get -y install stuff
ENTRYPOINT ["/bin/bash"]
I am running out of time, so for now I included more than just the answer to your question. But this would be a good enough template to work from. Ask questions in the comments if there is any line you don't understand and I will edit the answer.
I had the same issue. The error is mainly due to an old version of awscli.
You need to use a Docker image with a more recent awscli.
For my project I use linkmobility/maven-awscli.
You need to set the environment variables in Bitbucket.
Small changes to your pipeline:
image: Docker-Image-With-awscli
eval $(aws ecr get-login --no-include-email --region ${AWS_DEFAULT_REGION})
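Put together, a minimal sketch of what the step might look like with those two changes (linkmobility/maven-awscli is the image mentioned above; the image URI reuses the variables from your question):

image: linkmobility/maven-awscli

options:
  docker: true

pipelines:
  default:
    - step:
        script:
          - export IMAGE_NAME=$AWS_REPO_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/my/repo-name:$BITBUCKET_COMMIT
          - eval $(aws ecr get-login --no-include-email --region ${AWS_DEFAULT_REGION})
          - docker build -t $IMAGE_NAME .
          - docker push $IMAGE_NAME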