Passing SSM parameters to CodeBuild project securely

I have a CodeBuild project with a buildspec that requires a database password value to operate. I want this buildspec to be environment-agnostic, but each environment requires a different database password. The database password for each environment is stored in the SSM Parameter Store under its own key.
What would be the best approach to pass the database password to the CodeBuild project in this scenario?
Using CodeBuild's env.parameter-store
It seems that the recommended approach is to use CodeBuild's built-in solution (env.parameter-store), but then I have to load the passwords for every environment and select one of them in the build script:
# Supported Variables
#---------------------
# - ENVIRONMENT
#
version: 0.2
env:
  parameter-store:
    DB_PASSWORD_PROD: "/acme/prod/DB_PASSWORD"
    DB_PASSWORD_STAGE: "/acme/stage/DB_PASSWORD"
    DB_PASSWORD_QA: "/acme/qa/DB_PASSWORD"
phases:
  build:
    commands:
      - |-
        case "${ENVIRONMENT}" in
          "prod")  DB_PASSWORD="${DB_PASSWORD_PROD}"  ;;
          "stage") DB_PASSWORD="${DB_PASSWORD_STAGE}" ;;
          "qa")    DB_PASSWORD="${DB_PASSWORD_QA}"    ;;
        esac
      - echo "Doing something with \$DB_PASSWORD…"
This requires three requests to SSM and makes the buildspec more complex; the approach looks suboptimal to me.
Maybe there is a way to somehow construct the SSM key from the ENVIRONMENT variable in env.parameter-store?
Pass SSM parameters from CodePipeline
The other approach would be to pass the password from CodePipeline as an environment variable directly to the CodeBuild project. This would dramatically simplify the buildspec. But is it safe from a security perspective?
Get SSM parameters manually in CodeBuild script
Would it be better to call SSM manually from the script to load the required value?
# Supported Variables
#---------------------
# - ENVIRONMENT
#
version: 0.2
phases:
  build:
    commands:
      - >-
        DB_PASSWORD=$(
        aws ssm get-parameter
        --name "/acme/${ENVIRONMENT}/DB_PASSWORD"
        --with-decryption
        --query "Parameter.Value"
        --output text
        )
      - echo "Doing something with \$DB_PASSWORD…"
Would this approach be more secure?

Using CodeBuild's env.parameter-store
Looking at the documentation, there is no way to dynamically construct the SSM parameter key, and pre-loading parameters for every environment is just wrong: it hurts performance, eats into API rate limits, and makes security audits harder.
Get SSM parameters manually in CodeBuild script
I guess this could work, but it makes the script more complex and couples it more tightly to the SSM Parameter Store, because the script has to know about the store and its key-name structure.
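It also requires the CodeBuild service role to be allowed to read and decrypt the parameters. A minimal policy statement might look like this (the ARN below is a sketch; the account ID and region are placeholders):

{
  "Effect": "Allow",
  "Action": ["ssm:GetParameter"],
  "Resource": "arn:aws:ssm:us-east-1:111111111111:parameter/acme/*/DB_PASSWORD"
}

For SecureString parameters, the role additionally needs kms:Decrypt on the key used to encrypt them.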
Pass SSM parameters from CodePipeline
Looking at the documentation, there is a specific environment variable type called PARAMETER_STORE. It lets the value be fetched from the SSM Parameter Store just before the CodeBuild project is invoked.
I believe this is the cleanest way to achieve the desired result, and it shouldn't negatively affect security, because the parameter is only resolved by CodePipeline when the build project is invoked:
- Name: stage-stage
  Actions:
    - Name: stage-stage-action
      RunOrder: 1
      ActionTypeId:
        Category: Build
        Provider: CodeBuild
        Owner: AWS
        Version: "1"
      Configuration:
        ProjectName: !Ref BuildProject
        EnvironmentVariables: |-
          [{
            "type":"PARAMETER_STORE",
            "name":"DB_PASSWORD",
            "value":"/acme/stage/DB_PASSWORD"
          }]
- Name: prod-stage
  Actions:
    - Name: prod-stage-action
      RunOrder: 1
      ActionTypeId:
        Category: Build
        Provider: CodeBuild
        Owner: AWS
        Version: "1"
      Configuration:
        ProjectName: !Ref BuildProject
        EnvironmentVariables: |-
          [{
            "type":"PARAMETER_STORE",
            "name":"DB_PASSWORD",
            "value":"/acme/prod/DB_PASSWORD"
          }]
- Name: qa-stage
  Actions:
    - Name: qa-stage-action
      RunOrder: 1
      ActionTypeId:
        Category: Build
        Provider: CodeBuild
        Owner: AWS
        Version: "1"
      Configuration:
        ProjectName: !Ref BuildProject
        EnvironmentVariables: |-
          [{
            "type":"PARAMETER_STORE",
            "name":"DB_PASSWORD",
            "value":"/acme/qa/DB_PASSWORD"
          }]

Related

CICD in AWS - GitHub to Lambda

I'm trying to build a CI/CD pipeline that supports the very simple process laid out below. I am trying to do this all in AWS (i.e., avoiding GitHub Actions), and I do not want to have to manually zip code or transfer anything.
Target process:
Git Push code to GitHub Repository.
AWS updates the code within the existing Lambda function and updates the $LATEST alias accordingly.
Progress so far
I have been able to link AWS CodePipeline to GitHub. When code is pushed to the repository, the pipeline triggers and a compressed file that contains the contents from GitHub is added to an S3 bucket.
Long term I will likely be interested in pre- and post-deployment testing, approvals, etc etc... but for now I just want a simple setup as described above.
Challenge
I cannot fathom how to actually update the Lambda function now that I have this compressed file in S3. I've tried various Build/Deploy actions from within CodePipeline, but I get various errors. I'm not even entirely sure this entire approach is the best way to go about what I want to do!
Ask
Is this a valid approach to implementing this kind of CI/CD pipeline? If not, please suggest an alternative and justify why you think it's better.
How do you automatically take the code from within the compressed S3 file and get it in to the Lambda function?
Thanks for your help!
Richard
What you could do is include an AWS SAM (CloudFormation) template in your repository. Then, in a build step, you can use AWS SAM's build and package steps, which will create a packaged.yaml CloudFormation template. That template is then usable with the CloudFormation deployment actions.
This is part of a CloudFormation template that sets up such a flow, some things are omitted for brevity:
CodeBuildProject:
  Type: AWS::CodeBuild::Project
  Properties:
    Name: codebuildproject
    Description: Package and Deploy
    Artifacts:
      Type: CODEPIPELINE
    Environment:
      Type: LINUX_CONTAINER
      ComputeType: BUILD_GENERAL1_SMALL
      Image: aws/codebuild/amazonlinux2-x86_64-standard:3.0
      EnvironmentVariables:
        - Name: IAC_BUCKET
          Type: PLAINTEXT
          Value: !Sub iac-${AWS::Region}-${AWS::AccountId} # Bucket needed for SAM deployment
    ServiceRole: !Ref CodeBuildServiceRole
    Source:
      Type: CODEPIPELINE
      BuildSpec: |
        version: 0.2
        phases:
          install:
            runtime-versions:
              python: 3.8
            commands:
              - 'pip install --upgrade --user aws-sam-cli'
          build:
            commands:
              - sam build
              - sam package --s3-bucket $IAC_BUCKET --output-template-file packaged.yaml
        artifacts:
          files:
            - 'packaged.yaml'
Pipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    ArtifactStore:
      Location: !Sub "codepipeline-${AWS::Region}-${AWS::AccountId}"
      Type: S3
    Name: deployment-pipeline
    RoleArn: !GetAtt PipelineExecutionRole.Arn
    Stages:
      - Name: Source
        Actions:
          - YourGithubSourceAction
      - Name: Package
        Actions:
          - Name: SamPackage
            ActionTypeId:
              Category: Build
              Owner: AWS
              Provider: CodeBuild
              Version: '1'
            Configuration:
              ProjectName: !Ref CodeBuildProject
            InputArtifacts:
              - Name: SourceZip
            OutputArtifacts:
              - Name: samArtifact
            RunOrder: 1
      - Name: Deployment
        Actions:
          - Name: CreateChangeSet
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Provider: CloudFormation
              Version: '1'
            Configuration:
              ActionMode: "CHANGE_SET_REPLACE"
              ChangeSetName: !Sub "${ApplicationName}-${Environment}-changeset"
              Capabilities: CAPABILITY_NAMED_IAM
              StackName: your-stack-name
              RoleArn: !GetAtt PipelineExecutionRole.Arn
              ParameterOverrides: !Sub '{ "Environment" : "${Environment}" }'
              TemplatePath: 'samArtifact::packaged.yaml'
            InputArtifacts:
              - Name: samArtifact
            RunOrder: 1
          - Name: ExecuteChangeSet
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Provider: CloudFormation
              Version: '1'
            Configuration:
              ActionMode: CHANGE_SET_EXECUTE
              ChangeSetName: !Sub ${ApplicationName}-${Environment}-changeset
              StackName: your-stack-name
            RunOrder: 2
Be sure to have a look at AWS SAM if you're not familiar with it to see all the possibilities and how to construct your template itself.
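If you're new to SAM, the template.yml you commit alongside your code can be as small as a single function resource. A minimal sketch (the handler name, runtime, and source directory are assumptions about your project):

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler  # assumed entry point in your repo
      Runtime: nodejs12.x     # pick the runtime your code targets
      CodeUri: ./src          # assumed location of the function code

The sam build and sam package commands in the buildspec above then upload the code to the IAC_BUCKET and rewrite CodeUri in packaged.yaml to point at the uploaded artifact.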

AWS Serverless, CloudFormation : Error, Trying to populate non string value into a string for variable

I am using the Serverless Framework to deploy my application to AWS.
https://serverless.com/
I want to use the AWS account ID in my serverless.yml file and export the account ID as an environment variable, so that it can be accessed from Lambda functions.
Based on this value, I want to create some resources (like IAM roles, etc.) that refer to this accountId variable.
But when I try to deploy the stack, I get the below error,
Trying to populate non string value into a string for variable
${self:custom.accountId}. Please make sure the value of the property
is a string.
My serverless.yml file is as follows:
custom:
  accountId: !Ref "AWS::AccountId"

provider:
  name: aws
  runtime: go1.x
  stage: dev
  region: us-east-1
  environment:
    ACCOUNT_ID: ${self:custom.accountId}
    myRoleArn: arn:aws:iam::${self:custom.accountId}:role/xxxxxxxx
Is there any way to refer to the value of the Account Id in the serverless.yml file?
You can't reference AWS::AccountId in your serverless.yml, because it doesn't quite translate when creating the CloudFormation template.
The workaround is to use the serverless plugin Pseudo Parameters.
You can install the plugin using npm.
npm install serverless-pseudo-parameters
You will also need to add the plugin to the serverless.yml file.
plugins:
- serverless-pseudo-parameters
Then you can reference your AccountId with #{AWS::AccountId}
functions:
  helloWorld:
    handler: index.handler
    events:
      - http:
          path: /
          method: get
    environment:
      ACCOUNT_ID: '#{AWS::AccountId}'
Note that the reference begins with a hash instead of a dollar sign (it's quoted above so YAML doesn't parse it as a comment).

Issues Creating Environments For AWS Lambda Service In CodeStar And CodePipeline

I used AWS CodeStar to create a new application with the "Express.js Aws Lambda Webservice" CodeStar template. This was great because it set me up with a simple CI/CD pipeline using AWS CodePipeline. By default the pipeline has three steps: grabbing the source code from a git repo, running the build step, and then deploying to the "dev" environment.
My issue is that I can't set it up so that my pipeline has multiple environments: dev, staging, and prod.
My current deploy step has two actions: GenerateChangeSet and ExecuteChangeSet. Here are the configurations for the actions in the original dev environment build step, which work great:
I've created a new deploy stage at the end of my pipeline to deploy to staging, but honestly I'm not sure how to change the configurations. Ultimately, I want to be able to go into the AWS Lambda section of the AWS console and see three independent Lambda functions: binance-bot-dev, binance-bot-staging, and binance-bot-prod. I could then set each of these up as a CloudWatch scheduled event or expose it with its own API Gateway URL.
This is the configuration that I tried to use for a new deployment stage:
I'm really not sure if this configuration is correct and what exactly I should change in order to deploy in the way I want.
For example, should I be changing "Stack name", or should I keep that as "awscodestar-binance-bot-lambda" or change it for each environment as I am here?
Also, I'm pointing to a different template.yml file in the project. The original template.yml looks like this:
AWSTemplateFormatVersion: 2010-09-09
Transform:
  - AWS::Serverless-2016-10-31
  - AWS::CodeStar
Parameters:
  ProjectId:
    Type: String
    Description: AWS CodeStar projectID used to associate new resources to team members
Resources:
  Dev:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs4.3
      Environment:
        Variables:
          NODE_ENV: dev
      Role:
        Fn::ImportValue:
          !Join ['-', [!Ref 'ProjectId', !Ref 'AWS::Region', 'LambdaTrustRole']]
      Events:
        GetEvent:
          Type: Api
          Properties:
            Path: /
            Method: get
        PostEvent:
          Type: Api
          Properties:
            Path: /
            Method: post
For template.staging.yml I use the exact same config, except that I changed "Dev:" to "Staging:" under "Resources:", and I also changed the value of the NODE_ENV environment variable. So I'm basically wondering: is this the correct configuration for what I'm trying to achieve?
Assuming that everything in the configuration is correct, I then need to troubleshoot this error. With everything set as described above, I can run my pipeline, but when it gets to my staging build step, the GenerateChange_Staging action fails with this error message:
Action execution failed
User: arn:aws:sts::954459734159:assumed-role/CodeStarWorker-binance-bot-CodePipeline/1524253307698 is not authorized to perform: cloudformation:DescribeStacks on resource: arn:aws:cloudformation:us-east-1:954459734159:stack/awscodestar-binance-bot-lambda-staging/*
(Service: AmazonCloudFormation; Status Code: 403; Error Code: AccessDenied; Request ID: dd801664-44d2-11e8-a2de-8fa6c42cbf86)
It seems to me from this error message that I need to add "cloudformation:DescribeStacks" to my "CodeStarWorker-binance-bot-CodePipeline" role, so I go to IAM -> Roles and click on the CodeStarWorker-binance-bot-CodePipeline role. However, when I drill into the policy information for CloudFormation, it looks like this role already has permission for "DescribeStacks"!
If anyone could point out what I'm doing wrong, or offer any guidance on understanding and thinking about how to do multiple environments with AWS CodePipeline, that would be great. Thanks!
UPDATE:
I changed the "Stack name" in my Deploy_To_Staging pipeline stage back to "awscodestar-binance-bot-lambda". However, I then get this error from the GenerateChange_Staging action:
Action execution failed
Invalid TemplatePath: binance-bot-BuildArtifact::template-export.staging.yml. Artifact binance-bot-BuildArtifact doesn't exist
UPDATE 2:
In the root of my project I have the buildspec.yml file that was generated by CodeStar. It looks like this:
version: 0.2
phases:
  install:
    commands:
      # Install dependencies needed for running tests
      - npm install
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
  pre_build:
    commands:
      # Discover and run unit tests in the 'tests' directory
      - npm test
  build:
    commands:
      # Use AWS SAM to package the application using AWS CloudFormation
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.yml
artifacts:
  type: zip
  files:
    - template-export.yml
I then added this to the CloudFormation section:
Then I added this to the "build: -> commands:" section:
- aws cloudformation package --template template.staging.yml --s3-bucket $S3_BUCKET --output-template template-export.staging.yml
- aws cloudformation package --template template.prod.yml --s3-bucket $S3_BUCKET --output-template template-export.prod.yml
And I added this to "files:":
- template-export.staging.yml
- template-export.prod.yml
HOWEVER, I am still getting an error that "binance-bot-BuildArtifact does not exist".
Here is the full error after making the buildspec.yml change:
Action execution failed
Invalid TemplatePath: binance-bot-BuildArtifact::template-export.staging.yml. Artifact binance-bot-BuildArtifact doesn't exist
It seems very strange to me that I can access "binance-bot-BuildArtifact" in one stage of the pipeline but not another. Could it be that the build artifact is only available to the pipeline stage directly after the build stage? Can someone please help me access this "binance-bot-BuildArtifact"? Thanks!
For example, should I be changing "Stack name", or should I keep that as "awscodestar-binance-bot-lambda" or change it for each environment as I am here?
You should use a unique stack name for each environment. If you didn't, you would be replacing your 'dev' environment with your 'staging' environment, and so forth.
So, I'm basically wondering is this the correct configuration for what I'm trying to achieve?
I don't think so. You should use the exact same template for each environment. In order to change the environment name for each of your deploys, you can use the 'Parameter Overrides' field to choose the correct value for your 'Environment' parameter.
it looks like this role already has permissions for "DescribeStacks"!
Could the issue here be that your IAM role only has DescribeStacks permission for the dev stack? It looks like it does not have permission to describe the staging stack. Maybe you can add a 'wildcard'/asterisk to the policy so that it matches all of your stack names?
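For example, the role's policy could scope DescribeStacks with a wildcard over the stack name. A sketch of such a statement, reusing the account and region from the error message above:

{
  "Effect": "Allow",
  "Action": "cloudformation:DescribeStacks",
  "Resource": "arn:aws:cloudformation:us-east-1:954459734159:stack/awscodestar-binance-bot-lambda*"
}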
Could it be that the build artifact is only available to the one pipeline stage directly after the build stage?
No, that has not been my experience with CodePipeline. Unfortunately I don't know why it's telling you that your artifact can't be found.
robrtsql has already provided some good advice in terms of using the same template in both stages.
You might find this walkthrough useful.
Basically, it describes adding a CloudFormation "template configuration", which allows you to specify parameters to the CloudFormation stack.
This will allow you to deploy the same template in both your dev and prod environments, but also allow you to tell the difference between a dev deployment and a prod deployment, by choosing a different template configuration in each stage.
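Concretely, a template configuration is just a JSON file that travels with your artifact and is referenced from the deploy action's TemplateConfiguration field. A sketch for a staging stage (the parameter name Environment is an assumption based on the discussion above):

{
  "Parameters": {
    "Environment": "staging"
  }
}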

Getting Commit ID in CodePipeline

I am using CodePipeline with CodeCommit. Builds are triggered automatically by a push to the master branch. In the CodePipeline console it is clearly visible that I am receiving the commit ID, but I need to get it in the build environment so I can add it as a tag to the ECS image when I build it. Is there a way to get it in the build environment?
You can use the CODEBUILD_RESOLVED_SOURCE_VERSION environment variable to retrieve the commit hash displayed in CodePipeline at build time.
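For example, a buildspec could shorten the hash and use it as the image tag. A sketch, where REPOSITORY_URI is a placeholder for your ECR repository and registry authentication is assumed to have happened already:

version: 0.2
phases:
  build:
    commands:
      # Take the first 8 characters of the resolved commit hash as the tag
      - COMMIT_ID=$(echo "$CODEBUILD_RESOLVED_SOURCE_VERSION" | cut -c1-8)
      - docker build -t "${REPOSITORY_URI}:${COMMIT_ID}" .
      - docker push "${REPOSITORY_URI}:${COMMIT_ID}"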
Adding an answer that explains how to achieve this in CloudFormation, as it took me a while to figure it out. You need to define your stage as:
- Name: MyStageName
  Actions:
    - Name: StageName
      InputArtifacts:
        - Name: InputArtifact
      ActionTypeId:
        Category: Build
        Owner: AWS
        Version: '1'
        Provider: CodeBuild
      OutputArtifacts:
        - Name: OutputArtifact
      Configuration:
        ProjectName: !Ref MyBuildProject
        EnvironmentVariables:
          '[{"name":"COMMIT_ID","value":"#{SourceVariables.CommitId}","type":"PLAINTEXT"}]'
Your actions need to use this kind of syntax. Note that the EnvironmentVariables property of a CodePipeline action is different from the property of the same name on AWS::CodeBuild::Project. If you were to add #{SourceVariables.CommitId} as an env variable there, it wouldn't be resolved properly.
CodePipeline now also allows you to configure your pipeline with variables that are generated at execution time. In this example your CodeCommit action will produce a variable called CommitId that you can pass into a CodeBuild environment variable via the CodeBuild action configuration.
Here is a conceptual overview of the feature: https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-variables.html
For an example walk through of passing the commit id into your build action you can go here:
https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-variables.html
It would also be worth considering tagging the image with the CodePipeline execution id instead of the commit id, that way it prevents future builds with the same commit from overwriting the image. Using the CodePipeline execution id is also shown in the example above.
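If you tag by execution ID instead, the built-in variable is referenced the same way as the namespaced source variables. A sketch of the CodeBuild action configuration:

EnvironmentVariables:
  '[{"name":"EXECUTION_ID","value":"#{codepipeline.PipelineExecutionId}","type":"PLAINTEXT"}]'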
Is this what you're looking for?
http://docs.aws.amazon.com/codepipeline/latest/userguide/monitoring-source-revisions-view.html#monitoring-source-revisions-view-cli
Most (if not all) of the language SDKs have this API built in also.
In addition to Bar's answer: just adding EnvironmentVariables is not enough; you also need to set Namespace.
For example:
pipeBackEnd:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    ...
    Stages:
      - Name: GitSource
        Actions:
          - Name: CodeSource
            ActionTypeId:
              Category: Source
              ...
            Configuration: (...)
            Namespace: SourceVariables # <<< === HERE, in Source
      - Name: Deploy
        Actions:
          - Name: BackEnd-Deploy
            ActionTypeId:
              Category: Build
              Provider: CodeBuild (...)
            Configuration:
              ProjectName: !Ref CodeBuildBackEnd
              EnvironmentVariables: '[{"name":"BranchName","value":"#{SourceVariables.BranchName}","type":"PLAINTEXT"},{"name":"CommitMessage","value":"#{SourceVariables.CommitMessage}","type":"PLAINTEXT"}]'
Also, it may be useful: list of CodePipeline variables

How to change aws credentials in Serverless 1.0?

I'm trying to use Serverless 1.0 with several AWS credentials.
(On my PC, 1.3.0 is installed.)
I found some descriptions on Stack Overflow and in GitHub issues saying that "admin.env" can change credentials, but I can't find how to write it or where to put admin.env.
Is there any good documentation for admin.env?
First, create different profiles. Use the CLI (this works from 1.3.0 but won't work in 1.0.0; not sure which you are using, since you mention both):
serverless config credentials --provider aws --key 1234 --secret 5678 --profile your-profile-name
Then, in your serverless.yml file, you can set the profile you want to use:
provider:
  name: aws
  runtime: nodejs4.3
  stage: dev
  profile: your-profile-name
If you want to automatically deploy with different profiles depending on the stage, you can define variables and reference them in your serverless.yml file:
provider:
  name: aws
  runtime: nodejs4.3
  stage: ${opt:stage, self:custom.defaultStage}
  profile: ${self:custom.profiles.${self:provider.stage}}

custom:
  defaultStage: dev
  profiles:
    dev: your-profile-name
    prod: another-profile-name
Or you can reference your profile name in any other way; read about variables in the Serverless Framework. You can get the name of the profile to use from another file, from the CLI, or from the same file (as in the example I gave).
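For example, with the configuration above, deploying to prod picks up another-profile-name automatically:

serverless deploy --stage prod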
More about the variables:
https://serverless.com/framework/docs/providers/aws/guide/variables/