CloudFormation template for CodePipeline - amazon-web-services

We have an AWS setup with a test account and a production account.
Our CodeCommit repository (Java Lambdas) is in our test account, and we want to use CodePipeline to deploy code from there to our test and production accounts.
I was wondering if anyone is aware of any ready-made CloudFormation (or CDK) templates that can perform this work?
Thanks
Damien

I implemented this a few days ago using CDK. The idea is to create an IAM role in the target environment and assume that role when running the CodeBuild project (which runs as part of the pipeline).
In my case, since the CodeBuild run creates CDK stacks, I gave an AdministratorAccess policy to this role.
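As a rough sketch (the account ID, role name, and construct ID below are placeholders, not part of my original setup), the role in the target account could be defined in CDK like this:
// Deployed in the TARGET account: a role the CI account's CodeBuild can assume.
const deploymentRole = new iam.Role(scope, 'CrossAccountDeploymentRole', {
  roleName: 'cross-account-deployment-role', // placeholder name
  // trust the CI/test account where the pipeline runs
  assumedBy: new iam.AccountPrincipal('111111111111'),
});
// broad permissions because the CodeBuild run deploys CDK stacks
deploymentRole.addManagedPolicy(
  iam.ManagedPolicy.fromAwsManagedPolicyName('AdministratorAccess')
);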
Next, create the CodeBuild project and attach permissions to the CodeBuild project role:
// create the codebuild project used by the codepipeline
const codeBuildProject = new codebuild.PipelineProject(scope, `${props.environment}-${props.pipelineNamePrefix}-codebuild`, {
  projectName: `${props.environment}-${props.pipelineNamePrefix}`,
  buildSpec: codebuild.BuildSpec.fromSourceFilename('buildspec.yml'),
  environment: {
    buildImage: codebuild.LinuxBuildImage.AMAZON_LINUX_2,
    privileged: true,
    environmentVariables: buildEnvVariables,
    computeType: props.computeType
  },
})
// attach permissions to codebuild project role
codeBuildProject.addToRolePolicy(new PolicyStatement({
  effect: Effect.ALLOW,
  resources: [props.deploymentRoleArn],
  actions: ['sts:AssumeRole']
}));
Be aware that props.deploymentRoleArn is the ARN of the role you created in the target environment.
Then, create a new pipeline and pass codeBuildProject to codepipelineActions.CodeBuildAction as project:
// create codepipeline to deploy cdk changes
// (gitHubToken and pipelineSourceArtifact are assumed to be defined earlier, e.g.
//  const gitHubToken = SecretValue.secretsManager('github-token');
//  const pipelineSourceArtifact = new codepipeline.Artifact();)
const codePipeline = new codepipeline.Pipeline(scope, `${props.environment}-${props.pipelineNamePrefix}-codepipeline`, {
  restartExecutionOnUpdate: false,
  pipelineName: `${props.environment}-${props.pipelineNamePrefix}`,
  stages: [
    {
      stageName: 'Source',
      actions: [
        new codepipelineActions.GitHubSourceAction({
          branch: props.targetBranch,
          oauthToken: gitHubToken,
          owner: props.githubRepositoryOwner,
          repo: props.githubRepositoryName,
          actionName: 'get-sources',
          output: pipelineSourceArtifact,
        })
      ]
    },
    {
      stageName: 'Deploy',
      actions: [
        new codepipelineActions.CodeBuildAction({
          actionName: 'deploy-cdk',
          input: pipelineSourceArtifact,
          type: codepipelineActions.CodeBuildActionType.BUILD,
          project: codeBuildProject
        }),
      ]
    }
  ]
});
The relevant part of the above snippet is the Deploy stage. The other stage is only required if you want to get sources from GitHub - more info here.
This is the full solution; in case you want to implement something else, read more about CodePipeline actions here.

Related

Adding a role to CodeBuild to access ECR

I want to give CodeBuild a policy that allows it to push to an ECR repository.
However, what should I give the policy access to?
I can do this manually in the AWS web console, but it's not clear to me how to do it in CDK.
const buildProject = new codebuild.PipelineProject(this, 'buildproject', {
  environment: {
    buildImage: codebuild.LinuxBuildImage.STANDARD_4_0,
    privileged: true,
  },
  buildSpec: codebuild.BuildSpec.fromSourceFilename("./buildspec.yml")
});
buildProject.addToRolePolicy(new iam.PolicyStatement({
  resources: [/* what should be here? */],
  actions: ['ecr:GetAuthorizationToken']
}));
Simply myRepository.grantPullPush(buildProject).
Reference: https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ecr.Repository.html#grantwbrpullwbrpushgrantee
This will abstract away the content of the policy.
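For context, a minimal sketch (the repository import and names are assumptions, not from the question):
// Import an existing ECR repository and let the CodeBuild project's role
// pull and push images; this also covers ecr:GetAuthorizationToken.
const myRepository = ecr.Repository.fromRepositoryName(this, 'Repo', 'my-repo');
myRepository.grantPullPush(buildProject);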

How can I reference an existing codebuild project in codepipeline via CDK?

I am using AWS CDK to deploy CodePipeline and CodeBuild. What I am currently doing is creating the CodeBuild project in one CloudFormation stack and referencing it from the pipeline in a different CloudFormation stack.
Below is my code. I create a CodeBuild action like:
const action = new actions.CodeBuildAction({
  actionName: "MockEventBridge",
  type: actions.CodeBuildActionType.BUILD,
  input: input,
  project: new codebuild.PipelineProject(this, name, {
    projectName: mockName,
    environment: {
      computeType: codebuild.ComputeType.SMALL,
      buildImage,
      privileged: true,
    },
    role,
    buildSpec: codebuild.BuildSpec.fromSourceFilename(
      "cicd/buildspec/mockEventbridge.yaml"
    ),
  }),
  runOrder: 1,
})
...
const stages = [{
  stageName,
  actions: [action]
}]
Once the action is built, I use the code below to build the pipeline.
new codepipeline.Pipeline(this, name, {
  pipelineName: this.projectName,
  role,
  stages,
  artifactBucket
});
The problem is that both the CodeBuild project and the pipeline are built into one stack. If I build the CodeBuild project in a separate CloudFormation stack, how can I reference it from the pipeline?
When I look at the API reference https://docs.aws.amazon.com/cdk/api/v1/docs/#aws-cdk_aws-codepipeline.Pipeline.html, I can't find a way to reference the CodeBuild ARN in the pipeline instance.
Use the codebuild.Project.fromProjectArn static method to import an external Project resource using its ARN. It returns an IProject, which is what your pipeline's actions.CodeBuildAction props expect.
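A minimal sketch of that (the ARN, construct IDs, and action wiring are illustrative):
// Import the CodeBuild project created in the other stack by its ARN.
const importedProject = codebuild.Project.fromProjectArn(
  this,
  'ImportedProject',
  'arn:aws:codebuild:us-east-1:111111111111:project/my-project'
);

// Use the imported IProject in the pipeline action as before.
const action = new actions.CodeBuildAction({
  actionName: 'MockEventBridge',
  input: input,
  project: importedProject,
  runOrder: 1,
});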
You can use an exported value to share the CodeBuild project created in another stack.
The CodeBuild project exported from the first stack can then be imported into the new CodePipeline stack.
You can see this page for more info: https://lzygo1995.medium.com/how-to-export-and-import-stack-output-values-in-cdk-ff3e066ca6fc
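A short sketch of that export/import pattern (output and export names are illustrative):
// In the stack that owns the CodeBuild project:
new cdk.CfnOutput(this, 'BuildProjectArnOutput', {
  value: codeBuildProject.projectArn,
  exportName: 'BuildProjectArn',
});

// In the pipeline stack, import it back:
const importedProject = codebuild.Project.fromProjectArn(
  this,
  'ImportedProject',
  cdk.Fn.importValue('BuildProjectArn')
);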

Cross-Account AWS CodePipeline cannot access CloudFormation deploy artifacts

I have a cross-account pipeline running in an account CI deploying resources via CloudFormation in another account DEV.
After deploying I save the artifact outputs as a JSON file and want to access it in another pipeline action via CodeBuild.
CodeBuild fails in the phase DOWNLOAD_SOURCE with the following message:
CLIENT_ERROR: AccessDenied: Access Denied status code: 403, request
id: 123456789, host id: xxxxx/yyyy/zzzz/xxxx= for primary source and
source version arn:aws:s3:::my-bucket/my-pipeline/DeployArti/XcUNqOP
The problem is likely that CloudFormation, when executed in a different account, encrypts the artifacts with a different key than the pipeline itself.
Is it possible to give CloudFormation an explicit KMS key to encrypt the artifacts with, or is there any other way to access those artifacts back in the pipeline?
Everything works when executed from within a single account.
Here is my code snippet (deployed in the CI account):
MyCodeBuild:
  Type: AWS::CodeBuild::Project
  Properties:
    Artifacts:
      Type: CODEPIPELINE
    Environment: ...
    Name: !Sub "my-codebuild"
    ServiceRole: !Ref CodeBuildRole
    EncryptionKey: !GetAtt KMSKey.Arn
    Source:
      Type: CODEPIPELINE
      BuildSpec: ...

CrossAccountCodePipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    Name: "my-pipeline"
    RoleArn: !GetAtt CodePipelineRole.Arn
    Stages:
      - Name: Source
        ...
      - Name: StagingDev
        Actions:
          - Name: create-stack-in-DEV-account
            InputArtifacts:
              - Name: SourceArtifact
            OutputArtifacts:
              - Name: DeployArtifact
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Version: "1"
              Provider: CloudFormation
            Configuration:
              StackName: "my-dev-stack"
              ChangeSetName: !Sub "my-changeset"
              ActionMode: CREATE_UPDATE
              Capabilities: CAPABILITY_NAMED_IAM
              # this is the artifact I want to access from the next action
              # within this CI account pipeline
              OutputFileName: "my-DEV-output.json"
              TemplatePath: !Sub "SourceArtifact::stack/my-stack.yml"
              RoleArn: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cloudformation-role"
            RoleArn: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cross-account-role"
            RunOrder: 1
          - Name: process-DEV-outputs
            InputArtifacts:
              - Name: DeployArtifact
            ActionTypeId:
              Category: Build
              Owner: AWS
              Version: "1"
              Provider: CodeBuild
            Configuration:
              ProjectName: !Ref MyCodeBuild
            RunOrder: 2
    ArtifactStore:
      Type: S3
      Location: !Ref S3ArtifactBucket
      EncryptionKey:
        Id: !GetAtt KMSKey.Arn
        Type: KMS
CloudFormation generates the output artifact, zips it, and then uploads the file to S3.
It does not add an ACL granting access to the bucket owner, so you get a 403 when you try to use the CloudFormation output artifact further down the pipeline.
A workaround is to have one more action in your pipeline immediately after the CloudFormation action, e.g. a Lambda function that can assume the target account role and update the object ACL to bucket-owner-full-control.
mockora's answer is correct. Here is an example Lambda function in Python that fixes the issue, which you can configure as an Invoke action immediately after your cross account CloudFormation deployment.
In this example, you configure the Lambda invoke action's UserParameters setting as the ARN of the role you want the Lambda function to assume in the remote account to fix the S3 object ACLs. Obviously your Lambda function will need sts:AssumeRole permission for that role, and the remote account role will need s3:PutObjectAcl permission on the pipeline bucket artifact(s).
import os
import logging, datetime, json
import boto3
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch_all

# X-Ray
patch_all()

# Configure logging
logging.basicConfig()
log = logging.getLogger()
log.setLevel(os.environ.get('LOG_LEVEL', 'INFO'))

def format_json(data):
    return json.dumps(data, default=lambda d: d.isoformat() if isinstance(d, datetime.datetime) else str(d))

# Boto3 Client
client = boto3.client
codepipeline = client('codepipeline')
sts = client('sts')

# S3 Object ACLs Handler
def s3_acl_handler(event, context):
    log.info(f'Received event: {format_json(event)}')
    # Get Job
    jobId = event['CodePipeline.job']['id']
    jobData = event['CodePipeline.job']['data']
    # Ensure we return a success or failure result
    try:
        # Assume IAM role from user parameters
        credentials = sts.assume_role(
            RoleArn=jobData['actionConfiguration']['configuration']['UserParameters'],
            RoleSessionName='codepipeline',
            DurationSeconds=900
        )['Credentials']
        # Create S3 client from assumed role credentials
        s3 = client('s3',
            aws_access_key_id=credentials['AccessKeyId'],
            aws_secret_access_key=credentials['SecretAccessKey'],
            aws_session_token=credentials['SessionToken']
        )
        # Set S3 object ACL for each input artifact
        for inputArtifact in jobData['inputArtifacts']:
            s3.put_object_acl(
                ACL='bucket-owner-full-control',
                Bucket=inputArtifact['location']['s3Location']['bucketName'],
                Key=inputArtifact['location']['s3Location']['objectKey']
            )
        codepipeline.put_job_success_result(jobId=jobId)
    except Exception as e:
        logging.exception('An exception occurred')
        codepipeline.put_job_failure_result(
            jobId=jobId,
            failureDetails={'type': 'JobFailed', 'message': getattr(e, 'message', repr(e))}
        )
I've been using CodePipeline for cross account deployments for a couple of years now. I even have a GitHub project around simplifying the process using organizations. There are a couple of key elements to it.
Make sure your S3 bucket is using a CMK, not the default encryption key.
Make sure you grant access to that key to the accounts to which you are deploying. When you have a CloudFormation template, for example, that runs on a different account than where the template lives, the role that is being used on that account needs to have permissions to access the key (and the S3 bucket).
It's certainly more complex than that, but at no point do I run a lambda to change the object owner of the artifacts. Create a pipeline in CodePipeline that uses resources from another AWS account has detailed information on what you need to do to make it work.
CloudFormation should use the KMS encryption key provided in the artifact store definition of your pipeline: https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ArtifactStore.html#CodePipeline-Type-ArtifactStore-encryptionKey
Therefore, as long as you give it a custom key there and allow the other account to use that key too, it should work.
This is mostly covered in this doc: https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html
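Expressed in CDK terms, a minimal sketch of that setup (the DEV account ID and construct names are placeholders, not from the question):
// A customer-managed key for the pipeline's artifact store. Granting the
// DEV account encrypt/decrypt adds a statement to the key policy so roles
// in that account can read the artifacts.
const artifactKey = new kms.Key(this, 'ArtifactKey');
artifactKey.grantEncryptDecrypt(new iam.AccountPrincipal('222222222222')); // DEV account id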

Getting Commit ID in CodePipeline

I am using CodePipeline with CodeCommit. Builds are triggered automatically on a push to the master branch. In the CodePipeline console it is clearly visible that I am receiving the commit ID, but I need it in the build environment so I can add it as a tag to the ECS image when I build it. Is there a way to get it in the build environment?
You can use the CODEBUILD_RESOLVED_SOURCE_VERSION environment variable to retrieve the commit hash displayed in CodePipeline at build time.
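For instance, a Node.js build step could read it like this (a sketch; the tagging scheme is just an example):
// Available automatically inside the CodeBuild environment when the
// build runs as part of a pipeline.
const commitId = process.env.CODEBUILD_RESOLVED_SOURCE_VERSION ?? 'unknown';
const imageTag = commitId.substring(0, 8); // short hash as the ECS image tag
console.log(`image tag: ${imageTag}`);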
Adding an answer that explains how to achieve this in CloudFormation, as it took me a while to figure it out. You need to define your stage as:
Name: MyStageName
Actions:
  - Name: StageName
    InputArtifacts:
      - Name: InputArtifact
    ActionTypeId:
      Category: Build
      Owner: AWS
      Version: '1'
      Provider: CodeBuild
    OutputArtifacts:
      - Name: OutputArtifact
    Configuration:
      ProjectName: !Ref MyBuildProject
      EnvironmentVariables: '[{"name":"COMMIT_ID","value":"#{SourceVariables.CommitId}","type":"PLAINTEXT"}]'
In your actions you need this kind of syntax. Note that the EnvironmentVariables property of a CodePipeline action is different from the same property on an AWS::CodeBuild::Project resource. If you were to add #{SourceVariables.CommitId} as an env variable there, it wouldn't be resolved properly.
CodePipeline now also allows you to configure your pipeline with variables that are generated at execution time. In this example your CodeCommit action will produce a variable called CommitId that you can pass into a CodeBuild environment variable via the CodeBuild action configuration.
Here is a conceptual overview of the feature: https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-variables.html
For an example walk through of passing the commit id into your build action you can go here:
https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-variables.html
It would also be worth considering tagging the image with the CodePipeline execution id instead of the commit id, that way it prevents future builds with the same commit from overwriting the image. Using the CodePipeline execution id is also shown in the example above.
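If you are building the pipeline with CDK, the same idea looks roughly like this (a sketch; the repo, artifact, and project variables are assumed to exist elsewhere):
// The CodeCommit source action exposes execution-time variables, including
// the commit id, which can be mapped into a CodeBuild environment variable.
const sourceAction = new codepipelineActions.CodeCommitSourceAction({
  actionName: 'Source',
  repository: repo,
  output: sourceArtifact,
});

const buildAction = new codepipelineActions.CodeBuildAction({
  actionName: 'Build',
  project: buildProject,
  input: sourceArtifact,
  environmentVariables: {
    // resolves to #{SourceVariables.CommitId} at execution time
    COMMIT_ID: { value: sourceAction.variables.commitId },
  },
});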
Is this what you're looking for?
http://docs.aws.amazon.com/codepipeline/latest/userguide/monitoring-source-revisions-view.html#monitoring-source-revisions-view-cli
Most (if not all) of the language SDKs have this API built in also.
In addition to Bar's answer: just adding EnvironmentVariables is not enough; you also need to set Namespace.
For example:
pipeBackEnd:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    ...
    Stages:
      - Name: GitSource
        Actions:
          - Name: CodeSource
            ActionTypeId:
              Category: Source
              ...
            Configuration: (...)
            Namespace: SourceVariables # <<< === HERE, in Source
      - Name: Deploy
        Actions:
          - Name: BackEnd-Deploy
            ActionTypeId:
              Category: Build
              Provider: CodeBuild (...)
            Configuration:
              ProjectName: !Ref CodeBuildBackEnd
              EnvironmentVariables: '[{"name":"BranchName","value":"#{SourceVariables.BranchName}","type":"PLAINTEXT"},{"name":"CommitMessage","value":"#{SourceVariables.CommitMessage}","type":"PLAINTEXT"}]'
Also, this list of CodePipeline variables may be useful.

AWS Serverless Application Model (SAM) -- How do I change StageName?

I'm using AWS SAM (Serverless Application Model) to create a lambda with an API endpoint.
In my SAM template.yaml I have a getUser lambda with a /user endpoint.
template.yaml
Resources:
  GetUser:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./src
      Handler: handler.getUser
      Timeout: 300
      Runtime: nodejs6.10
      Events:
        GetUser:
          Type: Api
          Properties:
            Path: /user
            Method: get
When I deploy this using AWS CLI it successfully creates the lambda and endpoint, but with an API Gateway Stage confusingly named "Stage". I want to change stage name to something else, like "Prod". How do I change stage name?
Here's where stage name is defined in the cloudformation template after it is deployed. I want "StageName": "Stage" to be something like "StageName": "Prod".
"ServerlessRestApiDeployment": {
"Type": "AWS::ApiGateway::Deployment",
"Properties": {
"RestApiId": {
"Ref": "ServerlessRestApi"
},
"StageName": "Stage"
}
I haven't been able to remove the default "Stage" stage name, but when I deploy using SAM, I set a dynamic StageName in my API Gateway deployment using:
Properties:
  StageName: !Ref "STAGE_VARIABLE"
I have a different stack for each environment, so there is a prod API with a prod stage and a dev API with a dev stage. I found this easier than having multiple stage deployments of the same API.
To add another stage to an existing API, use a vanilla CloudFormation stage resource. Docs are here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html
When using the SAM CLI, a transform expands the API into raw CloudFormation before deployment, but it supports mixing in raw resources, and you can reference the dynamically generated deployment resource using a .Deployment suffix. You should be able to just add the resource and reference your API values via the Ref intrinsic. Check out the details here: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessapi
# assuming there is an AWS::Serverless::Api resource named Api
ProdApiStage:
  Type: AWS::ApiGateway::Stage
  Properties:
    StageName: prod
    RestApiId: !Ref Api
    DeploymentId: !Ref Api.Deployment
There was a bug in the SAM CLI that autogenerated a "Stage" stage. To remove the default generated "Stage" stage, upgrade your SAM CLI to the latest version and add a Globals section setting the OpenAPI version:
Globals:
  Api:
    OpenApiVersion: 3.0.1
See https://github.com/awslabs/serverless-application-model/issues/191 for the details. This will prevent new stages from being spawned, but you will have to delete the stage manually if it was already deployed, as SAM is stateless in nature.