CodePipeline with remote ECR source - amazon-web-services

Is it possible to use a remote ECR Repository as a source in CodePipeline?
I get the following error:
The repository with name '12345.dkr.ecr.eu-central-1.amazonaws.com/ecrrepo' does not exist in the registry with id '67890'
(Account IDs have been intentionally changed)
However the remote repository definitely exists.
Whole picture: I have two accounts, dev and test. Now that I have a pipeline built and running in the dev account, I would like to do the same deployment in the test account, but using the same ECR repository.
Just as additional info: I am able to deploy to the ECS cluster of the test account manually using the dev account's repository.
CodeBuild definitely supports cross-account ECR image access; doesn't CodePipeline?
Any hints for a solution or workaround? (I can think of Lambda.)

At the moment, when ECR is selected in the CodePipeline source stage, you only have the option to use an ECR repository from the current AWS account.
A workaround would be to have a CodeBuild stage in the pipeline that can retrieve the cross-account ECR source:
https://aws.amazon.com/blogs/devops/how-to-use-cross-account-ecr-images-in-aws-codebuild-for-your-build-environment/
Your pipeline can still be started by CloudWatch Events when the ECR source changes in the other account:
CW Event Bus: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatchEvents-CrossAccountEventDelivery.html
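For that workaround to function (and for the manual cross-account deploys already working), the dev account's repository needs a resource policy that lets the test account pull images. A minimal sketch, reusing the redacted account ID from the question as a placeholder:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountPull",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::67890:root" },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}

This is applied on the repository itself (aws ecr set-repository-policy), so no Resource element is needed.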

Related

Constant Error When Entering the "Deploy" Phase of my CodePipeline with AWS

I am trying to create a CI pipeline with GitHub, AWS CodeBuild, CodePipeline, and CodeDeploy. I continually get the error shown below.
The S3 bucket that holds my artifacts is on an "allow all" policy for troubleshooting purposes, and I have full permissions to the GitHub repo I am pulling from. The "Release Change" on the pipeline always fails at the deploy phase, as shown in image 1 below, and it fails relatively quickly if that helps. For context, I am trying to set up CI to just one EC2 instance at the moment; that instance has the CodeDeploy agent running on it and working. Thank you all for your help!

How can a CodeBuild container run aws-cli commands without prior authentication?

Say I use the aws-cli locally on my machine; I'd need to authenticate with credentials prior to any operation.
How do AWS services give permission to other services on my behalf? And more specifically, how does a container run the aws-cli on my behalf without prior authentication?
I am asking this after running my first pipeline successfully in CodePipeline. My buildspec.yml runs the aws s3 sync command flawlessly, which made me wonder how AWS permissions work internally.
AWS CodeBuild uses an IAM Service Role to provide AWS permissions to the CodeBuild environment. You should have had to create a service role for your CodeBuild configuration.
When the AWS CLI runs and it hasn't been previously configured with API access keys, it will check whether it is running in an AWS environment like EC2 or Lambda; if so, it will use the AWS IAM role assigned to that runtime environment.
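Concretely, the service role is just an IAM role whose trust policy lets CodeBuild assume it; the CLI then picks up the resulting temporary credentials from the build environment automatically. A minimal trust-policy sketch:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "codebuild.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}

The S3 permissions that aws s3 sync needs (s3:ListBucket, s3:GetObject, s3:PutObject) are attached to this role as a permissions policy, not configured inside the container.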

How to use Terraform to create a pipeline that deploys a Lambda function

I am trying to use Terraform to create a CodePipeline in AWS that automatically deploys a Lambda function.
I have already created two stages to get the code from GitHub and build the artifact using CodeBuild, storing the artifact in S3.
But I can't seem to find a Terraform configuration for CodeDeploy to deploy the artifact from S3 to Lambda. I do see there is a deployment setting in the console where I can specify the details of the deployment.
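One possible direction, sketched with hypothetical names: CodeDeploy does support Lambda as a compute platform, and the corresponding Terraform resources would look roughly like this (the IAM role is assumed to exist elsewhere):

resource "aws_codedeploy_app" "lambda" {
  name             = "my-lambda-app" # hypothetical
  compute_platform = "Lambda"
}

resource "aws_codedeploy_deployment_group" "lambda" {
  app_name               = aws_codedeploy_app.lambda.name
  deployment_group_name  = "my-lambda-dg" # hypothetical
  service_role_arn       = aws_iam_role.codedeploy.arn # assumed to exist
  deployment_config_name = "CodeDeployDefault.LambdaAllAtOnce"

  # Lambda deployments are always blue/green with traffic shifting
  deployment_style {
    deployment_type   = "BLUE_GREEN"
    deployment_option = "WITH_TRAFFIC_CONTROL"
  }
}

Note that CodeDeploy shifts traffic between Lambda aliases via an appspec file rather than copying the artifact from S3; many pipelines instead deploy Lambda through a CloudFormation/SAM deploy action.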

CodePipeline: Insufficient permissions Unable to access the artifact with Amazon S3 object key

Hello, I created a CodePipeline project with the following configuration:
Source: code in S3, pulled from Bitbucket.
Build: CodeBuild, generating a Docker image and storing it in an Amazon ECR repository.
Deployment provider: Amazon ECS.
The whole process works fine until it tries to deploy; for some reason I am getting the following error during deployment:
Insufficient permissions Unable to access the artifact with Amazon S3 object key 'FailedScanSubscriber/MyAppBuild/Wmu5kFy' located in the Amazon S3 artifact bucket 'codepipeline-us-west-2-913731893217'. The provided role does not have sufficient permissions.
During the build phase, it is even able to push a new Docker image to the ECR repository.
I tried everything: changed IAM roles and policies, added full access to S3, even set the S3 bucket to public; nothing worked. I am out of options, so if someone could help me that would be wonderful. I have little experience with AWS, so any help is appreciated.
I was able to find a solution. The real issue is that when the deployment provider is set to Amazon ECS, we need to generate an output artifact containing the container name (as defined in the task definition) and the image URI, for example:
post_build:
  commands:
    - printf '[{"name":"your.container.name","imageUri":"%s"}]' $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
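For reference, the generated imagedefinitions.json would contain something like the following (values are illustrative); the "name" field must match the container name inside the task definition, not the task definition's own name:

[
  {
    "name": "my-container",
    "imageUri": "123456789012.dkr.ecr.us-west-2.amazonaws.com/my-repo:latest"
  }
]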
This happens when AWS CodeDeploy cannot find the build artifact from AWS CodeBuild. If you go into the S3 bucket and check the path, you would actually see that the artifact object is NOT THERE!
Even though the error message mentions a permissions issue, this can happen due to the absence of the artifact object.
Solution: properly configure the artifacts section in buildspec.yml, and configure the AWS CodePipeline stages properly, specifying input and output artifact names.
artifacts:
  files:
    - '**/*'
  base-directory: base_dir
  name: build-artifact-name
  discard-paths: no
Refer to this article: https://medium.com/@shanikae/insufficient-permissions-unable-to-access-the-artifact-with-amazon-s3-247f27e6cdc3
For me, the issue was that my CodeBuild step was encrypting the artifacts using the default AWS-managed S3 key.
My deploy step uses a cross-account role, and so it couldn't retrieve the artifact. Once I changed the CodeBuild encryption key to my CMK, as it should have been originally, my deploy step succeeded.
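The underlying reason is that the default AWS-managed S3 key cannot be shared with another account, while a customer-managed CMK can. The CMK's key policy then needs a statement along these lines (the deploy role ARN is hypothetical) so the other account's role can decrypt the artifacts:

{
  "Sid": "AllowCrossAccountArtifactDecrypt",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111122223333:role/deploy-role" },
  "Action": [
    "kms:Decrypt",
    "kms:DescribeKey"
  ],
  "Resource": "*"
}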

Dockerrun.aws.json structure for ECR Repo

We are switching from Docker Hub to ECR and I'm curious how to structure the Dockerrun.aws.json file to use this image. I attempted to modify the name as <my_ECR_URL>/<repo_name>:<image_tag> but this was not successful. I also saw the details of private registries using an authentication file on S3, but this doesn't seem like the correct route when aws ecr get-login is the recommended way to authenticate with ECR.
Can anyone point me to how I can use an ECR image in a Beanstalk Dockerrun.aws.json file?
If I look at the ECS Task Definition, there's a required attribute called com.amazonaws.ecs.capability.ecr-auth, but I'm not setting that anywhere in my Dockerrun.aws.json file and I'm not sure what needs to be there. Perhaps it is an S3 bucket? Something is needed, as every time I try to run the Elastic Beanstalk created tasks from ECS, I get:
Run tasks failed
Reasons : ATTRIBUTE
Any insights are greatly appreciated.
Update: I see from some other threads that this used to occur with earlier versions of the ECS agent, but I am currently running agent version 1.6.0 and Docker version 1.7.1, which I believe are the recommended versions. Is this possibly an issue with the Docker version?
So it turns out the ECS agent was only able to pull ECR images from version 1.7.0 onward, and that's where mine was falling short. Updating the agent resolved my issue; hopefully it helps someone else.
This is most likely an issue with IAM roles if you are using a role that was previously created for Elastic Beanstalk. Ensure that the role Elastic Beanstalk is running with has the AmazonEC2ContainerRegistryReadOnly managed policy attached.
Source: http://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_IAM_policies.html
Support for ECR was added in version 1.7.0 of the ECS Agent.
When using Elastic Beanstalk and ECR you don't need to authenticate. Just make sure the environment's instance profile has the AmazonEC2ContainerRegistryReadOnly policy.
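With that policy attached, the image can be referenced directly by its ECR URI. A minimal single-container Dockerrun.aws.json (version 1) sketch with placeholder values:

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:my-tag",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 80
    }
  ]
}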
You can store your custom Docker images in AWS with Amazon EC2 Container Registry (Amazon ECR). When you store your Docker images in Amazon ECR, Elastic Beanstalk automatically authenticates to the Amazon ECR registry with your environment's instance profile, so you don't need to generate an authentication file and upload it to Amazon Simple Storage Service (Amazon S3).
You do, however, need to provide your instances with permission to access the images in your Amazon ECR repository by adding permissions to your environment's instance profile. You can attach the AmazonEC2ContainerRegistryReadOnly managed policy to the instance profile to provide read-only access to all Amazon ECR repositories in your account, or grant access to a single repository by using the following template to create a custom policy:
Source: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html
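The template itself is on the linked page; as an illustration only (account, region, and repository name are placeholders), a single-repository read policy would look roughly like:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ],
      "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/my-repo"
    }
  ]
}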