I am using CodePipeline with CodeCommit. Builds are triggered automatically by a push to the master branch. In the CodePipeline console I can clearly see that the commit id is received, but I need to get it in the build environment so I can add it as a tag to the ECS image when I build it. Is there a way to get it in the build environment?
You can use the CODEBUILD_RESOLVED_SOURCE_VERSION environment variable to retrieve the commit hash displayed in CodePipeline at build time.
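For example, here is a minimal buildspec sketch that tags a Docker image with the commit hash at build time; the ECR_REPO_URI variable and the 8-character tag length are illustrative assumptions, not part of the original setup:

version: 0.2
phases:
  build:
    commands:
      # Shorten the resolved commit hash to use as the image tag
      - COMMIT_TAG=$(echo "$CODEBUILD_RESOLVED_SOURCE_VERSION" | cut -c 1-8)
      # ECR_REPO_URI is assumed to be set on the CodeBuild project
      - docker build -t "$ECR_REPO_URI:$COMMIT_TAG" .
      - docker push "$ECR_REPO_URI:$COMMIT_TAG"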
Adding an answer that explains how to achieve this in CloudFormation, as it took me a while to figure it out. You need to define your stage as:
- Name: MyStageName
  Actions:
    - Name: StageName
      InputArtifacts:
        - Name: InputArtifact
      ActionTypeId:
        Category: Build
        Owner: AWS
        Version: '1'
        Provider: CodeBuild
      OutputArtifacts:
        - Name: OutputArtifact
      Configuration:
        ProjectName: !Ref MyBuildProject
        EnvironmentVariables: '[{"name":"COMMIT_ID","value":"#{SourceVariables.CommitId}","type":"PLAINTEXT"}]'
In your actions you need to use this kind of syntax. Note that the EnvironmentVariables property of a CodePipeline action is different from the EnvironmentVariables property of an AWS::CodeBuild::Project. If you were to add #{SourceVariables.CommitId} as an environment variable on the CodeBuild project itself, it wouldn't be resolved properly.
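For contrast, here is a sketch of the CodeBuild-side property where the placeholder would not resolve (the Environment settings are illustrative):

MyBuildProject:
  Type: AWS::CodeBuild::Project
  Properties:
    Environment:
      ComputeType: BUILD_GENERAL1_SMALL
      Image: aws/codebuild/standard:5.0
      Type: LINUX_CONTAINER
      EnvironmentVariables:
        - Name: COMMIT_ID
          Value: '#{SourceVariables.CommitId}' # NOT resolved here; stays a literal string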
CodePipeline now also allows you to configure your pipeline with variables that are generated at execution time. In this example your CodeCommit action will produce a variable called CommitId that you can pass into a CodeBuild environment variable via the CodeBuild action configuration.
Here is a conceptual overview of the feature: https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-variables.html
For an example walkthrough of passing the commit id into your build action, see:
https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-variables.html
It would also be worth considering tagging the image with the CodePipeline execution id instead of the commit id; that way, future builds of the same commit won't overwrite the image. Using the CodePipeline execution id is also shown in the example above.
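As a sketch, the EnvironmentVariables JSON from the CloudFormation answer above could reference the built-in execution id variable instead (the EXECUTION_ID name is illustrative):

EnvironmentVariables: '[{"name":"EXECUTION_ID","value":"#{codepipeline.PipelineExecutionId}","type":"PLAINTEXT"}]'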
Is this what you're looking for?
http://docs.aws.amazon.com/codepipeline/latest/userguide/monitoring-source-revisions-view.html#monitoring-source-revisions-view-cli
Most (if not all) of the language SDKs also expose this API.
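For example, with the AWS CLI (the pipeline name is hypothetical):

aws codepipeline get-pipeline-state --name MyPipeline

The currentRevision.revisionId field under each action state in the output contains the commit id of the source revision.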
In addition to @Bar's answer: just adding EnvironmentVariables is not enough; you also need to set Namespace.
For example:
pipeBackEnd:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    ...
    Stages:
      - Name: GitSource
        Actions:
          - Name: CodeSource
            ActionTypeId:
              Category: Source
              ...
            Configuration: (...)
            Namespace: SourceVariables # <<< === HERE, in Source
      - Name: Deploy
        Actions:
          - Name: BackEnd-Deploy
            ActionTypeId:
              Category: Build
              Provider: CodeBuild (...)
            Configuration:
              ProjectName: !Ref CodeBuildBackEnd
              EnvironmentVariables: '[{"name":"BranchName","value":"#{SourceVariables.BranchName}","type":"PLAINTEXT"},{"name":"CommitMessage","value":"#{SourceVariables.CommitMessage}","type":"PLAINTEXT"}]'
Also, it may be useful: list of CodePipeline variables
I want to create a separate 'dev' AWS Lambda with my Serverless service.
I have deployed my production ('prod') environment and then tried to deploy a development ('dev') environment so that I can trial features without affecting the customer experience.
In order to deploy the 'dev' environment I have:
Created a new serverless-dev.yml file
Updated the stage and profile fields in my .yml file:
provider:
  name: aws
  runtime: nodejs14.x
  stage: dev
  region: eu-west-2
  profile: dev
  memorySize: 128
  timeout: 30
Updated the resources.Resources.<Logical Id>.Properties.RoleName value, because if I try to use the same role as my 'prod' Lambda, I get this message: clearbit-lambda-role-prod already exists in stack
resources:
  Resources:
    <My Logical ID>:
      Type: AWS::IAM::Role
      Properties:
        Path: /my/cust/path/
        RoleName: clearbit-lambda-role-dev # Change this name
Ran the command: sls deploy -c serverless-dev.yml
Is this the conventional method to achieve this? I can't find anything in the documentation.
Serverless Framework supports stages out of the box. You don't need a separate configuration file; you can just specify --stage <name-of-stage> when running e.g. sls deploy and it will automatically use that stage. All resources created by the Framework under the hood include the stage in their names or identifiers. If you define extra resources in the resources section, you need to adjust them yourself and make sure they include the stage in their names. You can get the current stage in configuration with ${sls:stage} and use that to construct names that are e.g. prefixed with the stage.
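As a minimal sketch of that approach, reusing the names from the question (and assuming Serverless Framework v3, where ${sls:stage} is available):

provider:
  name: aws
  runtime: nodejs14.x
  stage: ${opt:stage, 'dev'} # taken from --stage, defaults to dev

resources:
  Resources:
    <My Logical ID>:
      Type: AWS::IAM::Role
      Properties:
        Path: /my/cust/path/
        RoleName: clearbit-lambda-role-${sls:stage} # unique per stage

Then sls deploy --stage prod and sls deploy --stage dev create clearbit-lambda-role-prod and clearbit-lambda-role-dev from the same file.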
I wanted to spin up a CodePipeline on AWS with a Snyk Scan action through CloudFormation. The official documentation on how to do this is a little light on details and seems to be missing key bits of information, so I was hoping someone could shed some light on this issue. According to the Snyk action reference, only a few variables need to be configured, so I followed along and set up my CodePipeline CF template with the following configuration,
- Name: Scan
  Actions:
    - Name: Scan
      InputArtifacts:
        - Name: "source"
      ActionTypeId:
        Category: Invoke
        Owner: ThirdParty
        Version: 1
        Provider: Snyk
      OutputArtifacts:
        - Name: "source-scan"
However, it is unclear how CodePipeline authenticates with Snyk with just this configuration. Sure enough, when I tried to spin up this template, I got the following error through the CloudFormation console,
Action configuration for action 'Scan' is missing required configuration 'ClientId'
I'm not exactly sure what the ClientId is in this case, but I assume it is the Snyk ORG id. So, I added ClientId under the Configuration section of the template. When I spun the new template up, I got the following error,
Action configuration for action 'Scan' is missing required configuration 'ClientToken'
Again, there is no documentation (that I could find) on the AWS side for what this ClientToken is, but I assume it is a Snyk API token, so I went ahead and added that. My final template looks like this:
- Name: Scan
  Actions:
    - Name: Scan
      InputArtifacts:
        - Name: "source"
      ActionTypeId:
        Category: Invoke
        Owner: ThirdParty
        Version: 1
        Provider: Snyk
      OutputArtifacts:
        - Name: "source-scan"
      Configuration:
        ClientId: <id>
        ClientToken: <token>
The CloudFormation stack now goes up fine and without error, but the pipeline itself halts on the Scan stage, stalls for ten or so minutes, and then outputs an error that doesn't give you much information,
There was an error in the scan execution.
I assume I am not authenticating with Snyk correctly. I can set up the scan fine through the console, but that flow includes an OAuth page where I enter my username/password before Snyk can authorize AWS. Anyway, I need to be able to set up the scan through CloudFormation, as I will not have console access for the project I am working on.
I am looking for a solution and/or some documentation that covers this use case. If anyone could point me in the right direction, I would be much obliged.
I have the below CloudFormation template:
CodeBuildProject:
  Type: AWS::CodeBuild::Project
  Properties:
    Name: project
    ServiceRole: !Ref CodeBuildRole
    Artifacts:
      Type: CODEPIPELINE
    Source:
      Type: CODEPIPELINE
    ...
The Artifacts and Source -> Type are CODEPIPELINE. I am translating the above code to CDK but couldn't find the right API to specify these values.
I read this doc https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-codebuild.Source.html and https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-codebuild.IArtifacts.html but neither has a method to load the source from CODEPIPELINE.
You can use PipelineProject:
A convenience class for CodeBuild Projects that are used in CodePipeline.
An example of how the class can be used is in:
Creating a pipeline using the AWS CDK
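For illustration, a minimal TypeScript sketch (using the CDK v2 import path; the construct id matches the template above, and the snippet is assumed to run inside a Stack):

import * as codebuild from 'aws-cdk-lib/aws-codebuild';

// PipelineProject sets both Source and Artifacts types to CODEPIPELINE for you
const project = new codebuild.PipelineProject(this, 'CodeBuildProject', {
  projectName: 'project',
  // role: codeBuildRole, // optional; CDK creates a service role by default
});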
I have a CodeBuild project with buildspec that requires database password value to operate. I want this buildspec to be environment-agnostic, but each environment requires different database password. Database password value for each environment is stored in SSM store under it's own key.
What would be the better approach to pass the database password to the CodeBuild project in this scenario?
Using CodeBuild's env.parameter-store
It seems that the recommended approach is to use CodeBuild's built-in solution (env.parameter-store), but then I would have to load the passwords for every environment and select one in the build script:
# Supported Variables
#---------------------
# - ENVIRONMENT
#
version: 0.2
env:
  parameter-store:
    DB_PASSWORD_PROD: "/acme/prod/DB_PASSWORD"
    DB_PASSWORD_STAGE: "/acme/stage/DB_PASSWORD"
    DB_PASSWORD_QA: "/acme/qa/DB_PASSWORD"
phases:
  build:
    commands:
      - |-
        case "${ENVIRONMENT}" in
          "prod") DB_PASSWORD="${DB_PASSWORD_PROD}" ;;
          "stage") DB_PASSWORD="${DB_PASSWORD_STAGE}" ;;
          "qa") DB_PASSWORD="${DB_PASSWORD_QA}" ;;
        esac
      - echo "Doing something with \$DB_PASSWORD…"
This requires three requests to SSM and makes the buildspec more complex. Such an approach looks sub-optimal to me.
Maybe there is a way to construct the SSM key from the ENVIRONMENT variable in env.parameter-store?
Pass SSM parameters from CodePipeline
The other approach would be to pass the password from CodePipeline as an environment variable directly to the CodeBuild project. This would dramatically simplify the buildspec. But is it safe from a security perspective?
Get SSM parameters manually in CodeBuild script
Would it be better to call SSM manually from the script to load the required value?
# Supported Variables
#---------------------
# - ENVIRONMENT
#
version: 0.2
phases:
  build:
    commands:
      - >-
        DB_PASSWORD=$(
        aws ssm get-parameter
        --name "/acme/${ENVIRONMENT}/DB_PASSWORD"
        --with-decryption
        --query "Parameter.Value"
        --output text
        )
      - echo "Doing something with \$DB_PASSWORD…"
Would this approach be more secure?
Using CodeBuild's env.parameter-store
Looking at the documentation, there is no way to dynamically construct the SSM parameter key, and pre-loading parameters for every environment is just wrong: it affects performance, has a negative effect on API rate limits, and makes security audits harder.
Get SSM parameters manually in CodeBuild script
I guess this could work, but it would make the script more complex and couple it more tightly to the SSM parameter store, because the script would need to know about the store and its key-name structure.
Pass SSM parameters from CodePipeline
Looking at the documentation, there is a specific environment variable type called PARAMETER_STORE. It lets CodePipeline resolve a value from the SSM parameter store just before invoking the CodeBuild build project.
I believe this is the cleanest way to achieve the desired result, and it shouldn't negatively affect security, because the parameter is only resolved by CodePipeline when the build project is invoked:
- Name: stage-stage
  Actions:
    - Name: stage-stage-action
      RunOrder: 1
      ActionTypeId:
        Category: Build
        Provider: CodeBuild
        Owner: AWS
        Version: "1"
      Configuration:
        ProjectName: !Ref BuildProject
        EnvironmentVariables: |-
          [{
            "type":"PARAMETER_STORE",
            "name":"DB_PASSWORD",
            "value":"/acme/stage/DB_PASSWORD"
          }]
- Name: prod-stage
  Actions:
    - Name: prod-stage-action
      RunOrder: 1
      ActionTypeId:
        Category: Build
        Provider: CodeBuild
        Owner: AWS
        Version: "1"
      Configuration:
        ProjectName: !Ref BuildProject
        EnvironmentVariables: |-
          [{
            "type":"PARAMETER_STORE",
            "name":"DB_PASSWORD",
            "value":"/acme/prod/DB_PASSWORD"
          }]
- Name: qa-stage
  Actions:
    - Name: qa-stage-action
      RunOrder: 1
      ActionTypeId:
        Category: Build
        Provider: CodeBuild
        Owner: AWS
        Version: "1"
      Configuration:
        ProjectName: !Ref BuildProject
        EnvironmentVariables: |-
          [{
            "type":"PARAMETER_STORE",
            "name":"DB_PASSWORD",
            "value":"/acme/qa/DB_PASSWORD"
          }]
I'm doing a serverless app in lambda using CloudFormation.
In my CodeBuild project, I set it to zip up the output and place it in "myBucket\AWSServerless1.zip", and it does so correctly.
Now I'm working on my CodePipeline and referencing the original CodeBuild project. However, it now puts the artifact in codepipeline-us-west-##### instead. That's fine. The issue is that the .zip file has a RANDOM name; CodePipeline ignores the name I gave it in the CodeBuild project.
In the serverless.template, I have to specify the CodeUri (which seems to be the CodeBuild project output, for some odd reason). If I reference AWSServerless1.zip, it works fine (but nothing is building to there anymore, so it's stale code)... but...
Since CodePipeline calling CodeBuild gives it a random name, how am I supposed to reference the ACTUAL BuildArtifact in the serverless.template?
I know this is very weird; I was stuck with this behavior of CodePipeline too and had to rewrite the buildspec to make CodePipeline work. CodePipeline makes its own zip file, with a unique name, even if you create your own zip through CodeBuild.
But there is one way out: CodePipeline creates one zip file, but it unzips it while giving the artifact to CodeDeploy, so you need not worry about its name. CodeDeploy will get the unzipped version of your code. CodePipeline keeps track of the name and will always point to the newest one.
Suppose CodePipeline creates the artifact some-random-name.zip with this layout:

some-random-name
|- deploy/lib/lambda-code
|- some-file.yaml

Whenever CodePipeline gives the artifact to CodeDeploy, it unzips it, so you can always refer to the code inside some-random-name.zip.
So in your case, when you give CodeUri in the SAM template, just give the name of the folder where your lambda code is present, which here is deploy.
Resources:
  Hello:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: example.MyHandler
      Runtime: java8
      CodeUri: deploy
      Description: ''
      MemorySize: 512
      Timeout: 15
Hope this helps.
I was facing the same error and I managed to work around it by doing the following:
1- On the build specification (buildspec.yml), add a sam package command (this generates a package.yml that CloudFormation will use to deploy the lambda):
build:
  commands:
    - sam package
      --template-file ../template.yaml
      --output-template-file ../package.yml
      --s3-bucket onnera-ci-cd-bucket
2- Add the package.yml to the output artifacts:
artifacts:
  files:
    - DeviceProvisioning/package.yml
3- On the template.yaml that will be deployed, reference the CodeUri directly (internally this will be resolved to the bucket with the output artifacts from CodeBuild):
Resources:
  DeviceProvisioningFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: DeviceProvisioningFunction/target/DeviceProvisioningFunction-1.0.jar
4- On the pipeline, make the output of the build phase available to the deployment phase:
const buildOutput = [new codepipeline.Artifact()];
const buildAction = new codepipeline_actions.CodeBuildAction({
  actionName: 'CodeBuild',
  project: deviceProvisioning,
  input: sourceOutput,
  outputs: buildOutput,
});
5- Use the build output to specify the templatePath on the deploy action:
const deployAction = new codepipeline_actions.CloudFormationCreateUpdateStackAction({
  extraInputs: buildAction.actionProperties.outputs,
  actionName: "UpdateLambda",
  stackName: "DeviceProvisioningStack",
  adminPermissions: true,
  templatePath: buildOutput[0].atPath("package.yml"),
  cfnCapabilities: [CfnCapabilities.AUTO_EXPAND, CfnCapabilities.NAMED_IAM],
});
Make sure that the output artifacts from the build phase are available to the deploy phase.