CodeBuild (AWS) from CodePipeline (AWS) - amazon-web-services

I'm trying to trigger multiple builds with CodePipeline (AWS), and when the pipeline triggers a CodeBuild project, the build fails with the following error:
[Container] 2018/02/07 19:30:20 Waiting for DOWNLOAD_SOURCE
Message: Access Denied
Extra information:
The source is coming from Github
If I start the CodeBuild manually, it works perfectly.

I just discovered this the other day. I'm not sure if it's documented anywhere, but it's definitely not clear in the Code Pipeline UI.
Any CodeBuild project that CodePipeline initiates must have been created through the CodePipeline UI. It cannot be a "standalone" CodeBuild project.
When you create a CodeBuild project from the CodePipeline UI, the "Source Provider" setting is "AWS CodePipeline", which is not an available choice when you create the CodeBuild project yourself.
CodePipeline retrieves its own source code from GitHub. It then passes that source code to your CodeBuild project. If your project is getting its own source code from GitHub, that seems to cause the issue you describe:
[Container] 2018/02/06 14:58:37 Waiting for agent ping
[Container] 2018/02/06 14:58:37 Waiting for DOWNLOAD_SOURCE
To resolve this issue, you must edit your CodePipeline "build" stage, and choose "Create a new build project" under "AWS CodeBuild, Configure Your Project". You can copy most settings from your existing project and reuse the buildspec.yml file in your source code.
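Under the hood, what the wizard sets up is a project whose source and artifacts types are both CODEPIPELINE (the buildspec then comes from the source that CodePipeline passes in). A sketch of that fragment as `aws codebuild update-project --cli-input-json` input; the project name is a placeholder, and I haven't verified this path against every project configuration:

```json
{
  "name": "my-build-project",
  "source": { "type": "CODEPIPELINE" },
  "artifacts": { "type": "CODEPIPELINE" }
}
```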

I had the exact same error. CodeBuild worked fine when I ran it alone, but to make it work in CodePipeline I had to update my CodePipeline role to allow access to the S3 artifact bucket.
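For reference, a policy statement granting the role access to the artifact bucket might look like the following. The bucket name is a placeholder, and the exact action list is an assumption; scope it to your own pipeline's artifact bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:PutObject",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::codepipeline-us-east-1-123456789012",
        "arn:aws:s3:::codepipeline-us-east-1-123456789012/*"
      ]
    }
  ]
}
```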

The way to resolve this issue was to create the CodeBuild project through the CodePipeline creation wizard.
That way, the wizard grants CodeBuild the necessary privileges.

Related

Does CodeBuild only use "Source" for grabbing the buildspec if CodePipeline is configured to run the build job?

In my CodeBuild job config, source is connected to my BitBucket.
In my CodePipeline config, source is connected to my BitBucket through CodeStar (Full Clone perms).
I am using CodePipeline to run this build job in my Build stage, so I'm under the assumption that the CodeBuild job no longer needs its source configured, since CodePipeline provides the source connection itself.
Is the source config in CodeBuild only there to grab the buildspec file? (CodeBuild requires you to leave the buildspec file name empty if there is no source, even though we know there IS a source coming through CodePipeline.)

Constant Error When Entering the "Deploy" Phase of my CodePipeline with AWS

I am trying to create a CI pipeline with GitHub, AWS CodeBuild, CodePipeline, and CodeDeploy. I continually get the error shown below.
My S3 bucket holding the artifacts to be pushed is on an "allow all" policy for troubleshooting purposes, and I have full permissions to the GitHub repo I am pulling from. The "Release Change" on the pipeline always fails at the deploy phase, shown in image 1 below, and it fails relatively quickly, if that helps. For context, I am trying to set up CI to just one EC2 instance at the moment; that instance has the CodeDeploy agent running and working. Thank you all for your help!

How to invoke a pipeline based on another pipeline success using AWS CodeCommit, CodeBuild, CodePipeline

The desired behavior is as follows:
Push code change
Run unit tests for each Serverless component
Provided all tests are successful, deploy the components into Staging environment and mark build as successful
Listen to this change and run acceptance tests suite using Gherkin
Provided all tests are successful, deploy the components into UAT/Prod environment and mark build as successful
The desired solution would have two pipelines, the second one triggered by the first one's success.
If you have any other ideas, I'd be delighted to hear!
Thanks in advance
Assuming both CodePipelines run in the same account, you can add a "post_build" phase to your buildspec.yml.
In the post_build phase you can trigger the second CodePipeline with an AWS CLI command:
build:
  commands:
    # npm pack --dry-run is not needed but helps show what is going to be published
    - npm publish
post_build:
  commands:
    - aws codepipeline start-pipeline-execution --name <codepipeline_name>
The solution I propose for a second pipeline trigger is the following:
Have the second pipeline's source be S3 (not CodeCommit). This ensures that the pipeline starts only when a specifically named file (object key) is pushed to Amazon S3.
At the end of the first CodePipeline, add a Lambda function; by that point everything must have succeeded for it to be triggered.
Have that Lambda copy the artifact built by your first pipeline into the bucket, under the key referenced in the second pipeline's source.
To keep things clean, use a separate bucket for each pipeline.
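The Lambda in the steps above only has to do a single S3 copy and then report success back to CodePipeline. A minimal Python sketch, where the bucket names and keys are placeholders (not values from the question), and the S3 client is injectable so the copy logic can be exercised without AWS:

```python
# Sketch of the trigger Lambda: copy the first pipeline's output artifact
# to the S3 key that the second pipeline watches as its source.
# DEST_BUCKET and DEST_KEY are hypothetical names -- replace with your own.
DEST_BUCKET = "second-pipeline-source"
DEST_KEY = "trigger/artifact.zip"  # must match the second pipeline's S3 source key

def copy_artifact(source_bucket, source_key, s3=None):
    """Copy one object; `s3` is injectable so the logic is testable offline."""
    if s3 is None:
        import boto3  # imported lazily: only needed when running inside Lambda
        s3 = boto3.client("s3")
    s3.copy_object(
        Bucket=DEST_BUCKET,
        Key=DEST_KEY,
        CopySource={"Bucket": source_bucket, "Key": source_key},
    )
    return f"s3://{DEST_BUCKET}/{DEST_KEY}"

def lambda_handler(event, context):
    import boto3
    # CodePipeline passes the input artifact's S3 location in the job data.
    job = event["CodePipeline.job"]
    loc = job["data"]["inputArtifacts"][0]["location"]["s3Location"]
    dest = copy_artifact(loc["bucketName"], loc["objectKey"])
    # Report success so the pipeline's Lambda invoke action can complete.
    boto3.client("codepipeline").put_job_success_result(jobId=job["id"])
    return {"copied_to": dest}
```

Note that the Lambda's role needs s3:GetObject on the first pipeline's artifact bucket, s3:PutObject on the destination bucket, and codepipeline:PutJobSuccessResult.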

How to use TerraForm to create pipeline that deploys lambda function

I am trying to use Terraform to create a CodePipeline in AWS that automatically deploys a Lambda function.
I have already created two stages that get the code from GitHub, build the artifact using CodeBuild, and store the artifact in S3.
But I can't seem to find a Terraform configuration for CodeDeploy to deploy the artifact from S3 to Lambda. I do see a deployment setting in the console where I can specify the details of the deployment.
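For the CodeDeploy side, Terraform's AWS provider exposes `aws_codedeploy_app` and `aws_codedeploy_deployment_group`, which support the Lambda compute platform. A sketch under the assumption that a suitable CodeDeploy service role already exists in your configuration; all names are placeholders:

```hcl
# CodeDeploy application targeting Lambda (names and role are placeholders).
resource "aws_codedeploy_app" "lambda_app" {
  name             = "my-lambda-app"
  compute_platform = "Lambda"
}

resource "aws_codedeploy_deployment_group" "lambda_dg" {
  app_name               = aws_codedeploy_app.lambda_app.name
  deployment_group_name  = "my-lambda-dg"
  service_role_arn       = aws_iam_role.codedeploy.arn # assumed to exist elsewhere
  deployment_config_name = "CodeDeployDefault.LambdaAllAtOnce"

  # Lambda deployments are traffic-shifting blue/green deployments.
  deployment_style {
    deployment_option = "WITH_TRAFFIC_CONTROL"
    deployment_type   = "BLUE_GREEN"
  }
}
```

The pipeline's deploy action then points at this application and deployment group, and the build artifact needs an appspec that names the Lambda function and its current/target versions.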

Codepipeline: Insufficient permissions Unable to access the artifact with Amazon S3 object key

Hello, I created a CodePipeline project with the following configuration:
Source code in S3, pulled from Bitbucket.
Build with CodeBuild, generating a Docker image and storing it in an Amazon ECR repository.
Deployment provider Amazon ECS.
The whole process works until it tries to deploy; for some reason I am getting the following error during deployment:
Insufficient permissions Unable to access the artifact with Amazon S3
object key 'FailedScanSubscriber/MyAppBuild/Wmu5kFy' located in the
Amazon S3 artifact bucket 'codepipeline-us-west-2-913731893217'. The
provided role does not have sufficient permissions.
During the build phase, it is even able to push a new Docker image to the ECR repository.
I tried everything: changed IAM roles and policies, added full access to S3, and even set the S3 bucket as public; nothing worked. I am out of options, so if someone could help me that would be wonderful. I have little experience with AWS, so any help is appreciated.
I was able to find a solution. The real issue is that when the deployment provider is set to Amazon ECS, we need to generate an output artifact indicating the container name from the task definition and the image URI, for example:
post_build:
  commands:
    - printf '[{"name":"your.task.definition.name","imageUri":"%s"}]' $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files: imagedefinitions.json
This happens when AWS CodeDeploy cannot find the build artifact from AWS CodeBuild. If you go into the S3 bucket and check the path, you will often see that the artifact object is NOT THERE!
Even though the error talks about permissions, it can be caused by the absence of the artifact object.
Solution: properly configure the artifacts section in buildspec.yml, and configure the AWS CodePipeline stages correctly, specifying input and output artifact names.
artifacts:
  files:
    - '**/*'
  base-directory: base_dir
  name: build-artifact-name
  discard-paths: no
Refer to this article: https://medium.com/@shanikae/insufficient-permissions-unable-to-access-the-artifact-with-amazon-s3-247f27e6cdc3
For me the issue was that my CodeBuild step was encrypting the artifacts using the default AWS-managed S3 key.
My deploy step uses a cross-account role, so it couldn't retrieve the artifact. Once I changed the CodeBuild encryption key to my CMK, as it should have been originally, my deploy step succeeded.
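For reference, the encryption key is a top-level field on the CodeBuild project; a sketch of the fragment you would pass to `aws codebuild update-project --cli-input-json` (the project name and key ARN are placeholders):

```json
{
  "name": "my-build-project",
  "encryptionKey": "arn:aws:kms:us-west-2:111122223333:key/REPLACE-WITH-CMK-ID"
}
```

For the cross-account case, the CMK's key policy must also grant the deploying account's role kms:Decrypt (and the role itself needs matching IAM permissions) for the artifact to be readable.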