I'm using AWS CodeBuild to build an application, and it is configured to push the build artifacts to an AWS S3 bucket.
On inspecting the artifacts/objects in the S3 bucket I realised that the objects have been encrypted.
Is it possible to disable the encryption on the artifacts/objects?
There is now a checkbox named "Disable artifacts encryption" under the artifacts section which allows you to disable encryption when pushing artifacts to S3.
https://docs.aws.amazon.com/codebuild/latest/APIReference/API_ProjectArtifacts.html
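For reference, the console checkbox maps to the encryptionDisabled flag on the project's artifacts settings, documented in the API reference linked above. A minimal sketch of flipping it with the AWS SDK for JavaScript (the project and bucket names are placeholders):

import { CodeBuild } from 'aws-sdk';

// Update an existing project so its S3 artifacts are written unencrypted.
new CodeBuild().updateProject({
  name: 'my-project',               // placeholder project name
  artifacts: {
    type: 'S3',
    location: 'my-artifact-bucket', // placeholder bucket
    encryptionDisabled: true,       // same effect as the console checkbox
  },
}).promise().then(() => console.log('artifact encryption disabled'));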
I know this is an old post but I'd like to add my experience in this regard.
My requirement was to get front-end assets from a CodeCommit repository, build them, and put them in an S3 bucket. The S3 bucket is connected to CloudFront, which serves the static front-end content (written in React in my case).
I found that CloudFront is unable to serve KMS-encrypted content: I got a KMS.UnrecognizedClientException when I hit the CloudFront URL. Disabling encryption on the AWS CodeBuild artifacts turned out to be the easiest fix once I found this answer.
However, I wanted to manage this using aws-cdk. The following TypeScript snippet may come in handy if you're trying to solve the same issue with aws-cdk.
First, get the necessary imports. For this answer they are the following:
import * as codecommit from '@aws-cdk/aws-codecommit';
import * as codebuild from '@aws-cdk/aws-codebuild';
Then I used the following snippet in a class that extends cdk.Stack.
Note: The same should work if your class extends a cdk Construct.
// replace these according to your requirement
const frontEndRepo = codecommit.Repository
  .fromRepositoryName(this, 'ImportedRepo', 'FrontEnd');

const frontendCodeBuild = new codebuild.Project(this, 'FrontEndCodeBuild', {
  source: codebuild.Source.codeCommit({ repository: frontEndRepo }),
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      build: {
        commands: [
          'npm install && npm run build',
        ],
      },
    },
    artifacts: {
      files: 'build/**/*',
    },
  }),
  artifacts: codebuild.Artifacts.s3({
    bucket: this.bucket, // replace with s3 bucket object
    includeBuildId: false,
    packageZip: false,
    identifier: 'frontEndAssetArtifact',
    name: 'artifacts',
    encryption: false, // added this to disable the encryption on codebuild
  }),
});
Also, to ensure that a build is triggered every time I push code to the repository, I added the following snippet in the same class.
// add the following line to your imports if you're using this snippet
// import * as targets from '@aws-cdk/aws-events-targets';
frontEndRepo.onCommit('OnCommit', {
  target: new targets.CodeBuildProject(frontendCodeBuild),
});
Note: This may not be a perfect solution, but it has been working well for me so far. I'll update this answer if I find a better solution using aws-cdk.
Artifact encryption cannot be disabled in AWS CodeBuild
Related
I want to upload a local file in the repository to S3 after it has been processed by a custom Docker image with AWS CDK. I don't want to make the Docker image public (it's not a big restriction, though). Also, I don't want to build the image for each S3 deployment.
Since I don't want to build the Docker image for each bucket deployment, I created a DockerImageAsset and tried to pass the image URI to the BucketDeployment's bundling property. The code is below:
const image = new DockerImageAsset(this, "cv-builder-image", {
  directory: join(__dirname, "../"),
});

new BucketDeployment(this, "bucket-deployment", {
  destinationBucket: bucket,
  sources: [
    Source.asset(join(__dirname, "../"), {
      bundling: {
        image: DockerImage.fromRegistry(image.imageUri),
        command: [
          "bash",
          "-c",
          'echo "heloo" >> /asset-input/cv.html && cp /asset-input/cv.html /asset-output/cv.html',
        ],
      },
    }),
  ],
});
The DockerImageAsset deploys fine, but the following error is thrown during the BucketDeployment's deployment:
docker: invalid reference format: repository name must be lowercase
I can see the image being deployed to AWS.
Any help is appreciated. Have a nice day.
As far as I understand - to simplify - you have a Docker image which you use to launch a utility container that just takes a file and outputs an artifact (another file).
Then you want to upload the artifact to S3 using the BucketDeployment construct.
This is a common problem when dealing with compiling apps like Java to .jar artifacts or frontend applications (React, Angular) to static output (HTML, CSS, JS) files.
The way I've approached this in the past is: split the artifact generation into a separate step in your pipeline and THEN trigger cdk deploy as a subsequent step.
You would have fewer headaches and you'd control all parts of the process, including having access to the low-level Docker commands like docker build ... and docker run ..., and in effect leverage local layer caching in the best possible way. If you rely on CDK to do the bundling for you, there's a bit of magic behind the scenes that's not always obvious. I'm not saying it's impossible, it's just more "work". (The error you're seeing most likely occurs because image.imageUri is still an unresolved CDK token when bundling runs at synth time, so Docker is handed a placeholder string that isn't a valid image reference.)
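To make that concrete, here is a minimal sketch of the "split" approach as a Node/TypeScript driver script. Everything here (image tag, output folder, the echo command copied from the question) is illustrative, not a prescribed layout:

import { execSync } from 'child_process';
import * as path from 'path';

const run = (cmd: string) => execSync(cmd, { stdio: 'inherit' });

// 1. Build the utility image yourself, with full control and local layer caching.
run('docker build -t cv-builder .');

// 2. Run the container to produce the artifact into a local output folder.
const out = path.resolve(__dirname, 'dist');
run(`docker run --rm -v ${out}:/asset-output cv-builder ` +
    `bash -c 'echo "heloo" >> /asset-output/cv.html'`);

// 3. Only now deploy; BucketDeployment can use Source.asset('dist') directly,
//    with no bundling, so no Docker runs during `cdk deploy`.
run('npx cdk deploy --require-approval never');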
Is it possible to run an external build command as part of a CDK stack sequence? Intention: 1) create a REST API, 2) write the REST URL to a config file, 3) build and deploy a React app:
import apigateway = require('@aws-cdk/aws-apigateway');
import cdk = require('@aws-cdk/core');
import fs = require('fs');
import s3 = require('@aws-cdk/aws-s3'); // added: s3 is used below but was not imported
import s3deployment = require('@aws-cdk/aws-s3-deployment');

export class MyStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string) {
    super(scope, id);

    const restApi = new apigateway.RestApi(this, ..);

    fs.writeFileSync('src/app-config.json',
      JSON.stringify({ "api": restApi.deploymentStage.urlForPath('/myResource') }));

    // TODO locally run 'npm run build', create 'build' folder incl rest api config

    const websiteBucket = new s3.Bucket(this, ..);

    new s3deployment.BucketDeployment(this, .. {
      sources: [s3deployment.Source.asset('build')],
      destinationBucket: websiteBucket
    });
  }
}
Unfortunately, this is not possible, as the necessary references are only available after the deploy, and therefore after you try to write the file (the file would contain unresolved cdk tokens instead of the real URL).
I personally solved this problem by telling cdk to output the API Gateway URLs to a file and then parsing that file after the deploy to upload the configuration to an S3 bucket. To do it you need to:
deploy with the output file option, for example:
cdk deploy -O ./cdk.out/deploy-output.json
In ./cdk.out/deploy-output.json you will find a JSON object with a key for each stack that produced an output (e.g. your stack that contains an API gateway)
manually parse that JSON to get your API Gateway URL
create your configuration file and upload it to S3 (you can do it via aws-sdk)
Of course, you'd have the last steps in a custom script, which means that you have to wrap your cdk deploy. I suggest doing so with a Node.js script, so that you can leverage aws-sdk to upload your file to S3 easily.
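A minimal sketch of such a wrapper step in Node/TypeScript (the stack name, output key, and bucket are assumptions for illustration):

import * as fs from 'fs';
import { S3 } from 'aws-sdk';

// Shape of the --outputs-file result: { "MyStack": { "MyRestURL": "https://..." } }
const outputs = JSON.parse(fs.readFileSync('./cdk.out/deploy-output.json', 'utf8'));
const apiUrl = outputs['MyStack']['MyRestURL']; // assumed stack/output names

const config = JSON.stringify({ api: { invokeUrl: apiUrl } });

// Upload the generated configuration next to the React app.
new S3().putObject({
  Bucket: 'my-website-bucket', // placeholder bucket name
  Key: 'app-config.json',
  Body: config,
  ContentType: 'application/json',
}).promise().then(() => console.log('app-config.json uploaded'));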
Accepting that cdk doesn't support this, I split the logic into two cdk apps, accessed the API Gateway URL as a cdk output via the CLI, then wrapped everything in a bash script.
AWS CDK:
// API gateway
const api = new apigateway.RestApi(this, 'my-api', ..)
// output url
const myResourceURL = api.deploymentStage.urlForPath('/myResource');
new cdk.CfnOutput(this, 'MyRestURL', { value: myResourceURL });
Bash:
# deploy api gw
cdk deploy --app (..)
# read url via cli with --query
export rest_url=`aws cloudformation describe-stacks --stack-name (..) --query "Stacks[0].Outputs[?OutputKey=='MyRestURL'].OutputValue" --output text`
# configure React app
echo "{ \"api\" : { \"invokeUrl\" : \"$rest_url\" } }" > src/app-config.json
# build React app with url
npm run build
# run second cdk app to deploy React built output folder
cdk deploy --app (..)
Is there a better way?
I solved a similar issue:
Needed to build and upload a react-app as well
Supported dynamic configuration reading from the react-app - look here
Released my react-app with a specific version (in a separate flow)
Then, during CDK deployment of my app, it took the specific version of my react-app (version retrieved from local configuration) and uploaded its zip file to an S3 bucket using CDK BucketDeployment
Then, using AwsCustomResource, I generated a configuration file with references to Cognito and API Gateway and uploaded this file to S3 as well:
// create s3 bucket for react-app
const uiBucket = new Bucket(this, "ui", {
  bucketName: this.stackName + "-s3-react-app",
  blockPublicAccess: BlockPublicAccess.BLOCK_ALL
});

let confObj = {
  "myjsonobj": {
    "region": `${this.region}`,
    "identity_pool_id": `${props.CognitoIdentityPool.ref}`,
    "myBackend": `${apiGw.deploymentStage.urlForPath("/")}`
  }
};
const dataString = JSON.stringify(confObj, null, 4);

// upload the versioned react-app bundle
const bucketDeployment = new BucketDeployment(this, this.stackName + "-app", {
  destinationBucket: uiBucket,
  sources: [Source.asset(`reactapp-v1.zip`)]
});
bucketDeployment.node.addDependency(uiBucket);

// write the generated config next to the app via a custom resource
const s3Upload = new custom.AwsCustomResource(this, 'config-json', {
  policy: custom.AwsCustomResourcePolicy.fromSdkCalls({ resources: custom.AwsCustomResourcePolicy.ANY_RESOURCE }),
  onCreate: {
    service: "S3",
    action: "putObject",
    parameters: {
      Body: dataString,
      Bucket: `${uiBucket.bucketName}`,
      Key: "app-config.json",
    },
    physicalResourceId: PhysicalResourceId.of(`${uiBucket.bucketName}`)
  }
});
s3Upload.node.addDependency(bucketDeployment);
As others have mentioned, this isn't supported within CDK. So this is how we solved it in SST: https://github.com/serverless-stack/serverless-stack
On the CDK side, allow defining React environment variables using the outputs of other constructs.
// Create a React.js app
const site = new sst.ReactStaticSite(this, "Site", {
  path: "frontend",
  environment: {
    // Pass in the API endpoint to our app
    REACT_APP_API_URL: api.url,
  },
});
Spit out a config file while starting the local environment for the backend.
Then start React using sst-env -- react-scripts start, where sst-env is a simple CLI that reads from the config file and loads the values as build-time environment variables in React.
While deploying, replace these environment variables inside a custom resource based on the outputs.
We wrote about it here: https://serverless-stack.com/chapters/setting-serverless-environments-variables-in-a-react-app.html
And here's the source for the ReactStaticSite and StaticSite constructs for reference.
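The sst-env wrapper is conceptually tiny. A hypothetical sketch of such a launcher (the config file name and shape are assumptions, not SST's actual format):

import * as fs from 'fs';
import { spawn } from 'child_process';

// Read the outputs the backend wrote while starting the local environment.
const config = JSON.parse(fs.readFileSync('.sst/outputs.json', 'utf8'));
// e.g. { "REACT_APP_API_URL": "https://xyz.execute-api.us-east-1.amazonaws.com" }

// Start React with those values injected as environment variables.
spawn('npx', ['react-scripts', 'start'], {
  stdio: 'inherit',
  env: { ...process.env, ...config },
});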
In my case, I'm using Python for CDK. I have a Makefile which I invoke directly from my app.py like this: os.system("make"). I use make to build up a layer zip file per the AWS docs. Technically you can invoke whatever you'd like; you must import the os package, of course. Hope this helps.
I've been calling codebuild and manually overriding the buildspec like this:
aws codebuild start-build --cli-input-json file://servicea/custom.json
and then in custom.json
{
  "projectName": "myproject",
  "sourceVersion": "master",
  "buildspecOverride": "servicea/buildspec.yml"
}
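(For reference, the same call can be made through the AWS SDK for JavaScript; a sketch using the values from custom.json above:)

import { CodeBuild } from 'aws-sdk';

new CodeBuild().startBuild({
  projectName: 'myproject',
  sourceVersion: 'master',
  buildspecOverride: 'servicea/buildspec.yml',
}).promise().then(res => console.log('started build', res.build?.id));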
Now I want to use a Bitbucket trigger (or GitHub if Bitbucket is not supported) to build the service automatically after it's pushed to master.
I've been Googling and found this tutorial: https://docs.aws.amazon.com/codebuild/latest/userguide/sample-bitbucket-pull-request.html
However, I hit a roadblock: I couldn't build a specific folder with a specific buildspec.
e.g.
for servicea, the build should run if I push to master and change any files in the servicea folder, using servicea/buildspec.yaml as the buildspec
for serviceb, the build should run if I push to master and change any files in the serviceb folder, using serviceb/buildspec.yaml as the buildspec
There is a FILE_PATH filter in the trigger; however, I couldn't find a way to set a custom buildspec.
Is there any way to achieve this?
Note:
I want to use 1 codebuild project for all of my services
Bitbucket's webhook payload doesn't include the list of changed files, unlike GitHub's.
Workaround (a sketch follows these steps):
Set "git-credential-helper" to "yes" in your buildspec. Details in https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax
You can then fetch the list of files changed for the specific commit using the call mentioned in https://community.atlassian.com/t5/Bitbucket-questions/Bitbucket-How-to-get-modified-files-of-a-commit-in-JSON-format/qaq-p/704126
You can obtain the commit from the environment variable CODEBUILD_RESOLVED_SOURCE_VERSION and the branch from CODEBUILD_WEBHOOK_HEAD_REF. Details in https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-env-vars.html
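Putting those steps together, a dispatch script run at the start of the single buildspec might look like this Node/TypeScript sketch (Node 18+ for global fetch; the workspace, repo slug, and token variable are placeholders, and paging of the diffstat response is ignored for brevity):

// Commit being built, provided by CodeBuild.
const commit = process.env.CODEBUILD_RESOLVED_SOURCE_VERSION!;

async function changedServices(): Promise<Set<string>> {
  // Bitbucket 2.0 diffstat endpoint lists the files touched by the commit.
  const res = await fetch(
    `https://api.bitbucket.org/2.0/repositories/myworkspace/myrepo/diffstat/${commit}`,
    { headers: { Authorization: `Bearer ${process.env.BITBUCKET_TOKEN}` } }, // placeholder auth
  );
  const diff = await res.json();
  // Keep the top-level folder of every changed file, e.g. "servicea".
  return new Set<string>(
    diff.values.map((v: any) => (v.new?.path ?? v.old?.path).split('/')[0]),
  );
}

changedServices().then(services => {
  // Each changed service then gets its own build steps, e.g. via its buildspec.
  for (const s of services) console.log(`would run ${s}/buildspec.yml steps`);
});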
Context:
I have a CodePipeline set up that uses CodeCommit and CodeBuild as its source and build phases.
My build includes a plugin (com.zoltu.git-versioning) that uses the Git commit history to dynamically create a build version number.
Issue:
This fails on the AWS pipeline because it cannot find any Git information in the source used to perform the build.
Clearly the action used to check out the source uses an export, which omits the Git metadata and history.
Question:
How do I configure CodeCommit or CodePipeline to do a proper git clone? I've looked in the settings for both these components (as well as CodeBuild) and cannot find any configuration to set the command used by the checkout action.
Has anyone got CodePipeline builds working with a checkout containing full Git metadata?
This is currently not possible with the CodeCommit action in CodePipeline.
https://forums.aws.amazon.com/thread.jspa?threadID=248267
CodePipeline supports git full clone as of October:
https://aws.amazon.com/about-aws/whats-new/2020/09/aws-codepipeline-now-supports-git-clone-for-source-actions/
In your console, go to the source stage and edit.
You will have a new option to fully clone your git history.
(screenshot: full clone option)
In Terraform you will have to add it to the source action's configuration:
configuration = {
  RepositoryName       = var.repository_name
  BranchName           = "master"
  OutputArtifactFormat = "CODEBUILD_CLONE_REF"
}
More info:
https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-codecommit-gitclone.html
Yes, CodePipeline now supports a Git full clone.
You just need to do some extra steps: https://docs.aws.amazon.com/codepipeline/latest/userguide/troubleshooting.html#codebuild-role-connections
However, CodePipeline does not currently support dynamic branches or pull requests. See Dynamically change branches on AWS CodePipeline.
Therefore, if you need to extend your pipeline for pull requests, I'd recommend the approach posted by Timothy Jones above.
There's one more related thing worth mentioning: CodeBuild has the Full Clone option as well.
As long as you do not use the Local Source cache option, the Git history is there.
When I tried to use the above-mentioned cache option, I noticed that .git is not a directory. It's a file containing one line of text, e.g.:
gitdir: /codebuild/local-cache/workspace/9475b907226283405f08daf5401aba99ec6111f966ae2b921e23aa256f52f0aa/.git
(A .git file with a gitdir: pointer is standard Git behaviour for working trees whose repository lives elsewhere, which is presumably how the local cache relocates it.) Still, I don't know why it's implemented like this; it's confusing (at least for me) and I don't consider it the expected behavior.
Although CodePipeline doesn't natively support this, you can get the information by cloning the repository in CodeBuild.
To do this, you need to set the permissions correctly, then carefully clone the repository.
Permissions
To give the permissions to clone the repository you need to:
Give your CodeBuild role the codecommit:GitPull permission, with the resource ARN of your CodeCommit repository (a CDK sketch of this grant follows the next step)
Put git-credential-helper: yes in the env part of your buildspec file
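If you happen to manage the project with CDK, the pull permission from the first step might look like this sketch (project and repo are assumed to be your codebuild.Project and codecommit.Repository constructs):

import * as iam from '@aws-cdk/aws-iam';

// Allow the build role to pull from the CodeCommit repository.
project.addToRolePolicy(new iam.PolicyStatement({
  actions: ['codecommit:GitPull'],
  resources: [repo.repositoryArn],
}));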
Cloning the repo
To clone the repo, you'll need to:
know the clone URL and branch (CodeBuild doesn't know this information)
git reset back to the commit that CodeBuild is building (otherwise you'll have a race condition between commits and builds):
git reset "$CODEBUILD_RESOLVED_SOURCE_VERSION"
If you'd like examples, I've made a detailed writeup of the process, and published an example CodePipeline stack showing it in action.
I spent too much time on this poorly documented process, so I decided to create some documentation for myself and future developers. I hope it helps.
CodeBuild + CodePipeline
This will connect CodeBuild and CodePipeline so that changes to your GitHub repository trigger CodePipeline to do a full clone of your repository, which is then passed to CodeBuild. CodeBuild adjusts the local .git folder metadata to point to the correct branch, and then all of the source code plus the Git metadata is deployed to Elastic Beanstalk.
More information about this process can be found here.
Start creating a CodePipeline pipeline. In the middle of its creation, you will be prompted to create a CodeBuild project; do it.
Feel free to select a specific location for the Artifact store (custom S3 bucket).
Select GitHub (Version 2) as the source provider, check "Start the pipeline on source code change", and select Full clone as the output artifact format.
Select AWS CodeBuild as the Build provider.
For the Project Name, click on the "Create project" button and select the below options:
a. Environment image: Managed image
b. Operating system: Amazon Linux 2
c. Runtime(s): Standard
d. For the Buildspec, select "Insert build commands" and click on "Switch to editor". Then paste the below Buildspec code.
e. Enable CloudWatch logs.
In the Environment variables, insert:
BranchName: #{SourceVariables.BranchName} as Plaintext
CommitId: #{SourceVariables.CommitId} as Plaintext
Select Single build as the Build type.
Select AWS Elastic Beanstalk as the Deploy provider.
Review operation and create the pipeline.
Create and add a new policy to the newly created CodeBuildServiceRole role. Choose a name, like projectName-connection-permission and attach the following JSON to it (tutorial):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "codestar-connections:UseConnection",
      "Resource": "arn:aws:codestar-connections:eu-central-1:123456789123:connection/sample-1908-4932-9ecc-2ddacee15095"
    }
  ]
}
PS: Change the Resource value arn:aws:codestar-connections:eu-central-1:123456789123:connection/sample-1908-4932-9ecc-2ddacee15095 from the JSON to your connection ARN. To find the connection ARN for your pipeline, open your pipeline and click the (i) icon on your source action.
Create and add a new policy to the newly created CodeBuildServiceRole role. Choose a name, like projectName-s3-access and attach the following JSON to it:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::my-s3-bucket-codepipeline",
        "arn:aws:s3:::my-s3-bucket-codepipeline/*"
      ],
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetBucketAcl",
        "s3:GetBucketLocation"
      ]
    }
  ]
}
PS: Change the Resource values my-s3-bucket-codepipeline in the JSON to match your S3 bucket name for your CodePipeline.
Edit the inline policy for your CodePipelineServiceRole role by adding the following object to your Statement array:
{
  "Effect": "Allow",
  "Action": [
    "logs:*"
  ],
  "Resource": "*"
}
Done.
Buildspec code
version: 0.2

#env:
  #variables:
    # key: "value"
    # key: "value"
  #parameter-store:
    # key: "value"
    # key: "value"
  #secrets-manager:
    # key: secret-id:json-key:version-stage:version-id
    # key: secret-id:json-key:version-stage:version-id
  #exported-variables:
    # - variable
    # - variable
  #git-credential-helper: yes
#batch:
  #fast-fail: true
  #build-list:
  #build-matrix:
  #build-graph:
phases:
  #install:
    #If you use the Ubuntu standard image 2.0 or later, you must specify runtime-versions.
    #If you specify runtime-versions and use an image other than Ubuntu standard image 2.0, the build fails.
    #runtime-versions:
      # name: version
      # name: version
    #commands:
      # - command
      # - command
  #pre_build:
    #commands:
      # - command
      # - command
  build:
    commands:
      - echo Branch - $BranchName
      - echo Commit - $CommitId
      - echo Checking out branch - $BranchName
      - git checkout $BranchName
      # - command
      # - command
  #post_build:
    #commands:
      # - command
      # - command
#reports:
  #report-name-or-arn:
    #files:
      # - location
      # - location
    #base-directory: location
    #discard-paths: yes
    #file-format: JunitXml | CucumberJson
#artifacts:
  #files:
    # - location
    # - location
  #name: $(date +%Y-%m-%d)
  #discard-paths: yes
  #base-directory: location
artifacts:
  files:
    - '**/*'
#cache:
  #paths:
    # - paths
Additional Info
Never edit the inline policy that was created by CodePipeline! Only create and add new policies to a role. See this issue.
The Environment Variables for CodeBuild must be set from CodePipeline -> Edit: Build -> Environment variables - optional. If you set these variables in CodeBuild -> Edit -> Environment -> Additional configuration -> Environment variables it WON'T WORK!
For a bigger list of Environment variables during CodeBuild, see Variables List, Action Variables, and CodeBuild variables.
The Git Full clone option on CodePipeline is not available without CodeBuild. This is a known annoying limitation.
You can include the buildspec.yml in your root (top level) project directory. See this.
The full clone that CodePipeline does leaves the local repository's .git in a detached HEAD state, meaning that in order to get the branch name you will have to either retrieve it from CodePipeline via the CodeBuild environment variables above, or execute the following command (see this):
git branch -a --contains HEAD | sed -n 2p | awk '{ printf $1 }'
I've created a pipeline which does the following:
Git changes trigger the next action (CodeBuild)
CodeBuild builds a Docker image from the git source
Set the latest Docker container up on Elastic Beanstalk
The first 2 steps work fine: git changes initiate a build, CodeBuild builds a Docker image and then tries to set it up on Elastic Beanstalk, which fails. The following error is thrown:
Invalid action configuration: The action failed because either the artifact or the Amazon S3 bucket could not be found. Name of artifact bucket: MY_BUCKET_NAME. Verify that this bucket exists. If it exists, check the life cycle policy, then try releasing a change.
In my CodeBuild project, I've set the artifact location to MY_BUCKET_NAME and named the artifact aws-test-artifact. Is this all I have to do?
I've tried looking around and am unable to find anything on this issue.
I had the same problem. Just changed Input artifacts from BuildArtifact to SourceArtifact in the build stage, and everything worked.
As Adam Loving commented, we must add an artifacts section.
Adding this section to your buildspec.yml file will make this work:
artifacts:
  files:
    - '**/*'
From the documentation (https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec.artifacts.files): adding '**/*' will include all files in the build target.
So I found the fix to this issue! What I had to do was go to CodeBuild => edit project => Show advanced settings => Artifacts packaging.
From here I changed Artifacts packaging to Zip!