How to handle multiple environments in CodePipeline? - amazon-web-services

I'm using CodePipeline to deploy my infrastructure and I would like to be able to deploy it to different environments (dev, staging, prod, ...).
I currently have a buildspec.yml file containing some "pip install" instructions and the "aws cloudformation package" command. I also created two pipelines, one for production and another for development, pointing to two different branches on GitHub. The problem I have is that since both branches contain similar resources, I get name collisions, on the S3 buckets for example.
When using the AWS CLI and cloudformation to create or update a stack you can pass parameters using the --parameters option. I would like to do something similar in the 2 pipelines I've created.
What would be the best solution to tackle this issue?
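For clarity, what I mean is something like this (the stack name and the "EnvSuffix" parameter are just examples; the template would need to declare a matching parameter):

```shell
# Hypothetical per-environment deployment via the AWS CLI;
# stack name and parameter are illustrative, not from my actual setup.
aws cloudformation create-stack \
  --stack-name image-processing-dev \
  --template-body file://new_image_processing_sam.yml \
  --parameters ParameterKey=EnvSuffix,ParameterValue=dev \
  --capabilities CAPABILITY_NAMED_IAM
```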
The final goal is to automate the deployment of our infrastructure. Our infrastructure is composed of Users, KMS keys, Lambdas (in Python), Groups and Buckets.
I've created two pipelines following the tutorial: http://docs.aws.amazon.com/lambda/latest/dg/automating-deployment.html
The first pipeline is linked to the master branch of the repo containing the code and the second one to the staging branch. My goal is to automate the deployment of the master branch in the production environment using the first pipeline and the staging branch in the staging environment using the second one.
My buildspec.yml file looks like:
version: 0.1
phases:
  install:
    commands:
      - pip install requests -t .
      - pip install simplejson -t .
      - pip install Image -t .
      - aws cloudformation package --template-file image_processing_sam.yml --s3-bucket package-bucket --output-template-file new_image_processing_sam.yml
artifacts:
  type: zip
  files:
    - new_image_processing_sam.yml
The image_processing_sam.yml file looks like:
AWSTemplateFormatVersion: "2010-09-09"
Transform: "AWS::Serverless-2016-10-31"
Description: Create a thumbnail for an image uploaded to S3
Resources:
  ThumbnailFunction:
    Type: "AWS::Serverless::Function"
    Properties:
      Role: !GetAtt LambdaExecutionRole.Arn
      Handler: create_thumbnail.handler
      Runtime: python2.7
      Timeout: 30
      Description: "A function computing the thumbnail for an image."
  LambdaSecretEncryptionKey:
    Type: "AWS::KMS::Key"
    Properties:
      Description: "A key used to encrypt secrets used in the Lambda functions"
      Enabled: True
      EnableKeyRotation: False
      KeyPolicy:
        Version: "2012-10-17"
        Id: "lambda-secret-encryption-key"
        Statement:
          - Sid: "Allow administration of the key"
            Effect: "Allow"
            Principal:
              AWS: "arn:aws:iam::xxxxxxxxxxxxx:role/cloudformation-lambda-execution-role"
            Action:
              - "kms:Create*"
              - "kms:Describe*"
              - "kms:Enable*"
              - "kms:List*"
              - "kms:Put*"
              - "kms:Update*"
              - "kms:Revoke*"
              - "kms:Disable*"
              - "kms:Get*"
              - "kms:Delete*"
              - "kms:ScheduleKeyDeletion"
              - "kms:CancelKeyDeletion"
            Resource: "*"
          - Sid: "Allow use of the key"
            Effect: "Allow"
            Principal:
              AWS:
                - !GetAtt LambdaExecutionRole.Arn
            Action:
              - "kms:Encrypt"
              - "kms:Decrypt"
              - "kms:ReEncrypt*"
              - "kms:GenerateDataKey*"
              - "kms:DescribeKey"
            Resource: "*"
  LambdaExecutionRole:
    Type: "AWS::IAM::Role"
    Properties:
      RoleName: "LambdaExecutionRole"
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - "lambda.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      Policies:
        - PolicyName: LambdaKMS
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: "Allow"
                Action:
                  - "kms:Decrypt"
                Resource: "*"
              - Effect: "Allow"
                Action:
                  - "lambda:InvokeFunction"
                Resource: "*"
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
  UserGroup:
    Type: "AWS::IAM::Group"
  LambdaTriggerUser:
    Type: "AWS::IAM::User"
    Properties:
      UserName: "LambdaTriggerUser"
  LambdaTriggerUserKeys:
    Type: "AWS::IAM::AccessKey"
    Properties:
      UserName:
        Ref: LambdaTriggerUser
  Users:
    Type: "AWS::IAM::UserToGroupAddition"
    Properties:
      GroupName:
        Ref: UserGroup
      Users:
        - Ref: LambdaTriggerUser
  Policies:
    Type: "AWS::IAM::Policy"
    Properties:
      PolicyName: UserPolicy
      PolicyDocument:
        Statement:
          - Effect: "Allow"
            Action:
              - "lambda:InvokeFunction"
            Resource:
              - !GetAtt DispatcherFunction.Arn
      Groups:
        - Ref: UserGroup
  PackageBucket:
    Type: "AWS::S3::Bucket"
    Properties:
      BucketName: "package-bucket"
      VersioningConfiguration:
        Status: "Enabled"
Outputs:
  LambdaTriggerUserAccessKey:
    Value:
      Ref: "LambdaTriggerUserKeys"
    Description: "AWSAccessKeyId of LambdaTriggerUser"
  LambdaTriggerUserSecretKey:
    Value: !GetAtt LambdaTriggerUserKeys.SecretAccessKey
    Description: "AWSSecretKey of LambdaTriggerUser"
I've added a deploy action in both pipelines to execute the change set computed during the beta action.
The first pipeline works like a charm and does everything I expect it to do. Each time I push code in the master branch, it's deployed.
The issue I'm facing is that when I push code to the staging branch, everything works in the pipeline until the deploy action is reached. The deploy action tries to create a new stack, but since it's exactly the same buildspec.yml and image_processing_sam.yml that are processed, I get name collisions like the ones below.
package-bucket already exists in stack arn:aws:cloudformation:eu-west-1:xxxxxxxxxxxx:stack/master/xxxxxx-xxxx-xxx-xxxx
LambdaTriggerUser already exists in stack arn:aws:cloudformation:eu-west-1:xxxxxxxxxxxx:stack/master/xxxxxx-xxxx-xxx-xxxx
LambdaExecutionRole already exists in stack arn:aws:cloudformation:eu-west-1:xxxxxxxxxxxx:stack/master/xxxxxx-xxxx-xxx-xxxx
...
Is there a way to parameterize the buildspec.yml to be able to add a suffix to the name of the resources in the image_processing_sam.yml? Any other idea to achieve this is welcome.
Best regards.

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-basic-walkthrough.html
Template configuration files are applied to the CloudFormation action in CodePipeline via a parameter file like this:
{
  "Parameters" : {
    "DBName" : "TestWordPressDB",
    "DBPassword" : "TestDBRootPassword",
    "DBRootPassword" : "TestDBRootPassword",
    "DBUser" : "TestDBuser",
    "KeyName" : "TestEC2KeyName"
  }
}
Place these files in the root of your repo and they can be referenced in at least 2 ways.
In your CodePipeline CloudFormation:
Configuration:
  ActionMode: REPLACE_ON_FAILURE
  RoleArn: !GetAtt [CFNRole, Arn]
  StackName: !Ref TestStackName
  TemplateConfiguration: !Sub "TemplateSource::${TestStackConfig}"
  TemplatePath: !Sub "TemplateSource::${TemplateFileName}"
Or enter the file name in the Template configuration field when editing the action in the console.
It is worth noting that the template configuration file format is different from the one used by the CloudFormation CLI's --parameters option.
--parameters uses this format:
[
  {
    "ParameterKey": "team",
    "ParameterValue": "AD-Student Life Applications"
  },
  {
    "ParameterKey": "env",
    "ParameterValue": "dev"
  },
  {
    "ParameterKey": "dataSensitivity",
    "ParameterValue": "public"
  },
  {
    "ParameterKey": "app",
    "ParameterValue": "events-list-test"
  }
]
CodePipeline CloudFormation template configuration files use this format:
{
  "Parameters" : {
    "DBName" : "TestWordPressDB",
    "DBPassword" : "TestDBRootPassword",
    "DBRootPassword" : "TestDBRootPassword",
    "DBUser" : "TestDBuser",
    "KeyName" : "TestEC2KeyName"
  }
}

Check Eric Nord's answer. It is the one you are looking for.
I asked the question on the AWS forum as well here.
Here is the solution provided by AWS:
Hi,
If your goal is to have different bucket names for staging and master, then another option is to use CloudFormation parameters.
When editing an existing pipeline if you edit an action you can expand the "Advanced" panel and enter parameter overrides to specify a different bucket prefix for each stage. You can also enter parameters as a separate .json file in your artifact.
There's more details on doing that here: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-parameter-override-functions.html
Here's a full walk through with a different stack configuration for test and production: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-basic-walkthrough.html
Tim.
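In practice, the walkthrough approach comes down to keeping one template configuration file per branch in the repository, e.g. prod-config.json and staging-config.json (the file name and the "EnvSuffix" parameter here are illustrative, and assume the template declares a matching parameter):

```json
{
  "Parameters": {
    "EnvSuffix": "staging"
  }
}
```

Each pipeline then points the TemplateConfiguration field of its CloudFormation deploy action at its own file, so the same template can be deployed with different resource names per environment.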
You should definitely follow the documentation referenced above. Below is the solution I came up with myself, which I wasn't satisfied with.
I added a script that runs at build time and modifies my template based on the ARN of the CodeBuild agent building the project.
I added "BRANCH_NAME" wherever a naming collision can occur. The image_processing_sam.yml is now:
AWSTemplateFormatVersion: "2010-09-09"
Transform: "AWS::Serverless-2016-10-31"
Description: Create a thumbnail for an image uploaded to S3
Resources:
  ThumbnailFunction:
    Type: "AWS::Serverless::Function"
    Properties:
      Role: !GetAtt LambdaExecutionRole.Arn
      Handler: create_thumbnail.handler
      Runtime: python2.7
      Timeout: 30
      Description: "A function computing the thumbnail for an image."
  LambdaSecretEncryptionKey:
    Type: "AWS::KMS::Key"
    Properties:
      Description: "A key used to encrypt secrets used in the Lambda functions"
      Enabled: True
      EnableKeyRotation: False
      KeyPolicy:
        Version: "2012-10-17"
        Id: "lambda-secret-encryption-keyBRANCH_NAME"
        Statement:
          - Sid: "Allow administration of the key"
            Effect: "Allow"
            Principal:
              AWS: "arn:aws:iam::xxxxxxxxxxxxx:role/cloudformation-lambda-execution-role"
            Action:
              - "kms:Create*"
              - "kms:Describe*"
              - "kms:Enable*"
              - "kms:List*"
              - "kms:Put*"
              - "kms:Update*"
              - "kms:Revoke*"
              - "kms:Disable*"
              - "kms:Get*"
              - "kms:Delete*"
              - "kms:ScheduleKeyDeletion"
              - "kms:CancelKeyDeletion"
            Resource: "*"
          - Sid: "Allow use of the key"
            Effect: "Allow"
            Principal:
              AWS:
                - !GetAtt LambdaExecutionRole.Arn
            Action:
              - "kms:Encrypt"
              - "kms:Decrypt"
              - "kms:ReEncrypt*"
              - "kms:GenerateDataKey*"
              - "kms:DescribeKey"
            Resource: "*"
  LambdaExecutionRole:
    Type: "AWS::IAM::Role"
    Properties:
      RoleName: "LambdaExecutionRoleBRANCH_NAME"
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - "lambda.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      Policies:
        - PolicyName: LambdaKMSBRANCH_NAME
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: "Allow"
                Action:
                  - "kms:Decrypt"
                Resource: "*"
              - Effect: "Allow"
                Action:
                  - "lambda:InvokeFunction"
                Resource: "*"
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
  UserGroup:
    Type: "AWS::IAM::Group"
  LambdaTriggerUser:
    Type: "AWS::IAM::User"
    Properties:
      UserName: "LambdaTriggerUserBRANCH_NAME"
  LambdaTriggerUserKeys:
    Type: "AWS::IAM::AccessKey"
    Properties:
      UserName:
        Ref: LambdaTriggerUser
  Users:
    Type: "AWS::IAM::UserToGroupAddition"
    Properties:
      GroupName:
        Ref: UserGroup
      Users:
        - Ref: LambdaTriggerUser
  Policies:
    Type: "AWS::IAM::Policy"
    Properties:
      PolicyName: UserPolicyBRANCH_NAME
      PolicyDocument:
        Statement:
          - Effect: "Allow"
            Action:
              - "lambda:InvokeFunction"
            Resource:
              - !GetAtt DispatcherFunction.Arn
      Groups:
        - Ref: UserGroup
  PackageBucket:
    Type: "AWS::S3::Bucket"
    Properties:
      BucketName: "package-bucketBRANCH_NAME"
      VersioningConfiguration:
        Status: "Enabled"
Outputs:
  LambdaTriggerUserAccessKey:
    Value:
      Ref: "LambdaTriggerUserKeys"
    Description: "AWSAccessKeyId of LambdaTriggerUser"
  LambdaTriggerUserSecretKey:
    Value: !GetAtt LambdaTriggerUserKeys.SecretAccessKey
    Description: "AWSSecretKey of LambdaTriggerUser"
The script.sh that replaces "BRANCH_NAME" in the template is:
#!/bin/bash
echo $CODEBUILD_AGENT_ENV_CODEBUILD_BUILD_ARN
if [[ "$CODEBUILD_AGENT_ENV_CODEBUILD_BUILD_ARN" == *"master"* ]]; then
    sed "s/BRANCH_NAME//g" image_processing_sam.yml > generated_image_processing_sam.yml;
fi
if [[ "$CODEBUILD_AGENT_ENV_CODEBUILD_BUILD_ARN" == *"staging"* ]]; then
    sed "s/BRANCH_NAME/staging/g" image_processing_sam.yml > generated_image_processing_sam.yml;
fi
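The substitution logic in script.sh can be sanity-checked locally by faking the build ARN; this sketch (the ARN value and template line are made up) shows the branch detection and the same sed replacement:

```shell
# Simulate the branch detection from script.sh with a made-up build ARN.
BUILD_ARN="arn:aws:codebuild:eu-west-1:123456789012:build/staging-project:42"

case "$BUILD_ARN" in
  *master*)  SUFFIX="" ;;         # master keeps the original resource names
  *staging*) SUFFIX="staging" ;;  # staging builds get a suffix appended
esac

# Apply the same sed substitution script.sh runs over the template.
RESULT=$(echo 'BucketName: "package-bucketBRANCH_NAME"' | sed "s/BRANCH_NAME/$SUFFIX/g")
echo "$RESULT"
```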
The buildspec.yml is now:
version: 0.1
phases:
  install:
    commands:
      # Install required modules for python
      - pip install requests -t .
      - pip install simplejson -t .
      - pip install Image -t .
      - bash ./script.sh
      # To be able to see any issue in the generated template
      - cat generated_image_processing_sam.yml
      # Package the generated cloudformation template in order to deploy
      - aws cloudformation package --template-file generated_image_processing_sam.yml --s3-bucket piximate-package-bucket --output-template-file new_image_processing_sam.yml
artifacts:
  type: zip
  files:
    - new_image_processing_sam.yml
I hope this can help. I would be glad if anyone can suggest an improvement or documentation that could help.

Related

Getting "Elasticsearch responded with an error: Forbidden" when connecting to ElasticSearch/OpenSearch datasource from Appsync

I am implementing a GraphQL endpoint in AWS AppSync to search data in ElasticSearch/OpenSearch.
I've implemented this using the Serverless Framework.
serverless.yml
service: SearchAPI
frameworkVersion: "2"
useDotenv: true
custom:
  stage: ${opt:stage, self:provider.stage}
  accountId: "#{AWS::AccountId}"
  appSync:
    name: "searchapi-${self:provider.stage}"
    authenticationType: API_KEY
    mappingTemplates:
      - dataSource: SearchDataSource
        type: Query
        field: searchTitles
    serviceRole: "AppSyncServiceRole"
    dataSources:
      - type: AMAZON_ELASTICSEARCH
        name: SearchDataSource
        description: "AWS OpenSearch Data Source"
        config:
          endpoint: "${env:OPEN_SEARCH_END_POINT}"
          serviceRoleArn: { Fn::GetAtt: [AppSyncESServiceRole, Arn] }
          awsRegion: "us-east-1"
provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: 20201221
  stage: dev
  region: us-east-1
resources:
  Resources:
    AppSyncESServiceRole:
      Type: "AWS::IAM::Role"
      Properties:
        RoleName: "appsync-es-role-${self:provider.stage}"
        AssumeRolePolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: "Allow"
              Principal:
                Service:
                  - "appsync.amazonaws.com"
              Action:
                - "sts:AssumeRole"
        Policies:
          - PolicyName: "appsync-es-role-${self:provider.stage}-Policy"
            PolicyDocument:
              Version: "2012-10-17"
              Statement:
                - Effect: "Allow"
                  Action:
                    - es:*
                  Resource:
                    - ${env:OPEN_SEARCH_ARN}/*
plugins:
  - serverless-bundle
  - serverless-appsync-plugin
I followed the exact steps in this example GitHub repository.
When trying to get data from the GraphQL endpoint, I am getting this error in the CloudWatch logs.
Did anyone face this issue? It would be great if someone could help figure this out.

AWS CloudFormation CodeBuild: Can I use an ECR image as the environment image?

I have created the following script, where I am trying to use a custom image from ECR:
Adminrole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: !Join
      - "."
      - - !Ref "AWS::StackName"
        - !Ref "AWS::Region"
        - "codebuild"
    Path: "/"
    AssumeRolePolicyDocument:
      Statement:
        - Action: ['sts:AssumeRole']
          Effect: Allow
          Principal:
            Service: [codebuild.amazonaws.com]
      Version: '2012-10-17'
    Policies:
      - PolicyName: "root"
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: "Allow"
              Action: "*"
              Resource: "*"
ProjectTerrafom:
  Type: AWS::CodeBuild::Project
  Properties:
    Name: !Join
      - "_"
      - - !Ref "AWS::StackName"
        - !Ref "AWS::Region"
        - "ProjectTerrafom"
    Description: Terraform deployment
    ServiceRole: !Ref Adminrole
    Artifacts:
      Type: no_artifacts
    Environment:
      Type: LINUX_CONTAINER
      ComputeType: BUILD_GENERAL1_SMALL
      Image: 111111.cer.ecr.eu-center-1.amazonaws.com/my_terraform
    Source:
      Location: !Ref "FullPathRepoNameTerraform"
      Type: GITHUB_ENTERPRISE
    TimeoutInMinutes: 10
    Tags:
      - Key: Project
        Value: "Run Terraform From CodeBuild"
When I run the CodeBuild project I get the following error:
BUILD_CONTAINER_UNABLE_TO_PULL_IMAGE: Unable to pull customer's container image.
CannotPullContainerError: Error response from daemon: pull access denied for
111111.cer.ecr.eu-center-1.amazonaws.com/my_terraform, repository does
not exist or may require 'docker login': denied: User: CodeBuild
Is this a permission issue or are we not allowed to use the ECR images for CodeBuild?
I solved the issue by adding the following line:
ImagePullCredentialsType: SERVICE_ROLE
The full part looks like this:
Environment:
  Type: LINUX_CONTAINER
  ComputeType: BUILD_GENERAL1_SMALL
  Image: 111111.cer.ecr.eu-center-1.amazonaws.com/my_terraform
  ImagePullCredentialsType: SERVICE_ROLE
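With SERVICE_ROLE pull credentials, CodeBuild pulls the image using the project's service role, so that role needs ECR pull permissions. The admin role in the question already grants everything; in a least-privilege setup, a statement along these lines would be needed (a sketch, not taken from the question):

```yaml
- Effect: "Allow"
  Action:
    - "ecr:GetAuthorizationToken"
    - "ecr:BatchCheckLayerAvailability"
    - "ecr:GetDownloadUrlForLayer"
    - "ecr:BatchGetImage"
  Resource: "*"
```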
You can use ECR images in CodeBuild, but according to that error you need to grant pull access. I suggest you cross-check your Docker credentials, confirm that the login works, and then double-check the image URL.

"An error occurred: GraphQlDsUsersRole - Syntax errors in policy." when using serverless-appsync-plugin

Following along with the tutorials, but changing some semantic things, I'm getting this error from Serverless when I deploy:
An error occurred: GraphQlDsUsersRole - Syntax errors in policy. (Service: AmazonIdentityManagement; Status Code: 400; Error Code: MalformedPolicyDocument; Request ID: 4b486514-e57d-4828-9edc-f9150d8806b4; Proxy: null).
Doing a search on my directory, this role seems to be autogenerated into the .serverless directory. How is this generated, and what could I have done to mess it up?
My serverless.yml file:
service: graphql-api
plugins:
  - serverless-appsync-plugin
  - serverless-pseudo-parameters
package:
  exclude:
    - node_modules/**
    - ./node_modules/**
provider:
  name: aws
  runtime: nodejs12.x
  region: us-east-1
custom:
  stage: dev
  appSync:
    name: ${self:service}-${self:custom.stage}
    authenticationType: API_KEY
    mappingTemplates:
      - dataSource: Users
        type: Query
        field: getUsers
        request: 'getUsers-request-mapping-template.txt'
        response: 'getUsers-response-mapping-template.txt'
    schema: schema.graphql
    dataSources:
      - type: AMAZON_DYNAMODB
        name: Users
        description: User Table
        config:
          tableName: { Ref: UserTable }
          serviceRoleARN: { Fn::GetAtt: [AppSyncDynamoDBServiceRole, Arn]}
          iamRoleStatements:
            - Effect: Allow
              Action:
                - 'dynamodb:*'
              Resources:
                - 'arn:aws:dynamodb:${self:provider.region}:#{AWS::AccountId}:table/Users'
                - 'arn:aws:dynamodb:${self:provider.region}:#{AWS::AccountId}:table/Users/*'
resources:
  - ${file(resources/roles.yml)}
  - ${file(resources/dynamodb.yml)}
My roles.yml file:
AppSyncDynamoDBServiceRole:
  Type: "AWS::IAM::Role"
  Properties:
    RoleName: "Dynamo-${self:service}-Role"
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: "Allow"
          Principal:
            Service:
              - "appsync.amazonaws.com"
              - "dynamodb.amazonaws.com"
          Action:
            - "sts:AssumeRole"
    Policies:
      - PolicyName: "Dynamo-${self:service}-Policy"
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: "Allow"
              Action:
                - "dynamodb:Query"
                - "dynamodb:BatchWriteItem"
                - "dynamodb:GetItem"
                - "dynamodb:DeleteItem"
                - "dynamodb:PutItem"
                - "dynamodb:Scan"
                - "dynamodb:UpdateItem"
              Resource:
                - "arn:aws:dynamodb:${self:provider.region}:#{AWS::AccountId}:table/Users"
                - "arn:aws:dynamodb:${self:provider.region}:#{AWS::AccountId}:table/Users/*"
Sorry for the late response. Not sure if you found a solution yet, but I will answer anyway in case it helps anyone.
I notice 2 things here:
serviceRoleARN is misspelled. It should be serviceRoleArn
serviceRoleArn and iamRoleStatements are mutually exclusive. serviceRoleArn takes the ARN of an already-existing role, while iamRoleStatements auto-generates the role for you with the provided policy statements. When both are provided, serviceRoleArn takes precedence.
So here, what was actually used was iamRoleStatements (because serviceRoleArn was misspelled).
The name of the generated resource confirms that: GraphQlDsUsersRole
As for why the policy is malformed, this is because Resource is misspelled too.
It has to be singular (even if you pass an array).
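Putting both fixes together, the corrected dataSources config would look roughly like this (same names as in the question; alternatively, a correctly spelled serviceRoleArn pointing at the existing role could be used instead of iamRoleStatements):

```yaml
dataSources:
  - type: AMAZON_DYNAMODB
    name: Users
    description: User Table
    config:
      tableName: { Ref: UserTable }
      iamRoleStatements:
        - Effect: Allow
          Action:
            - 'dynamodb:*'
          Resource:  # singular, even when passing a list of ARNs
            - 'arn:aws:dynamodb:${self:provider.region}:#{AWS::AccountId}:table/Users'
            - 'arn:aws:dynamodb:${self:provider.region}:#{AWS::AccountId}:table/Users/*'
```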

AWS CodePipeline error: Cross-account pass role is not allowed

I am trying to create an AWS CodePipeline that deploys the production code to a separate account. The code consists of a lambda function which is setup using a sam template and cloudformation. I have it currently deploying to the same account without error. I added another stage that has a manual approval action and after approval it should deploy to the other account. It fails with the following error:
Cross-account pass role is not allowed (Service: AmazonCloudFormation; Status Code: 403; Error Code: AccessDenied; Request ID: d880bdd7-fe3f-11e7-8a8c-7dcffeae19ae)
I have a role in the production account that has a trust relationship back to the dev account that has the pipeline. I gave the pipeline role and the production role administrator policies just to make sure it was not a policy issue. I edited the pipeline using the technique in this walkthrough. I am following the walkthrough loosely since they are setting their scenario up just slightly different from what I am doing.
The deploy section in my pipeline looks like:
{
  "name": "my-stack",
  "actionTypeId": {
    "category": "Deploy",
    "owner": "AWS",
    "provider": "CloudFormation",
    "version": "1"
  },
  "runOrder": 2,
  "configuration": {
    "ActionMode": "CHANGE_SET_REPLACE",
    "Capabilities": "CAPABILITY_IAM",
    "ChangeSetName": "ProductionChangeSet",
    "RoleArn": "arn:aws:iam::000000000000:role/role-to-assume",
    "StackName": "MyProductionStack",
    "TemplatePath": "BuildArtifact::NewSamTemplate.yaml"
  },
  "outputArtifacts": [],
  "inputArtifacts": [
    {
      "name": "BuildArtifact"
    }
  ]
}
I am able to assume the role in the production account using the console. I am not sure how PassRole is different, but from everything I have read it requires the same assume-role trust relationship.
How can I configure IAM for cross account pipelines?
Generally, if you want to do anything across multiple accounts, you have to allow it on both sides. This is done via role assuming.
The pipeline's distributed parts communicate via pipeline artifacts, which are saved in an S3 bucket and encrypted/decrypted with a KMS key. This key must be accessible from all the accounts the pipeline is distributed in.
key in the CI account
KMSKey:
Type: AWS::KMS::Key
Properties:
EnableKeyRotation: true
KeyPolicy:
Version: "2012-10-17"
Id: pipeline-kms-key
Statement:
- Sid: Allows admin of the key
Effect: Allow
Principal:
AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
Action: ["kms:*"]
Resource: "*"
- Sid: Allow use of the key from the other accounts
Effect: Allow
Principal:
AWS:
- !Sub "arn:aws:iam::${DevAccountId}:root"
- !GetAtt CodePipelineRole.Arn
Action:
- kms:Encrypt
- kms:Decrypt
- kms:ReEncrypt*
- kms:GenerateDataKey*
- kms:DescribeKey
Resource: "*"
KMSAlias:
Type: AWS::KMS::Alias
Properties:
AliasName: !Sub alias/codepipeline-crossaccounts
TargetKeyId: !Ref KMSKey
The S3 bucket must allow the access from different accounts via a policy:
pipeline stack in the CI account
S3ArtifactBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: !Ref S3ArtifactBucket
    PolicyDocument:
      Statement:
        - Action: ["s3:*"]
          Effect: Allow
          Resource:
            - !Sub "arn:aws:s3:::${S3ArtifactBucket}"
            - !Sub "arn:aws:s3:::${S3ArtifactBucket}/*"
          Principal:
            AWS:
              - !GetAtt CodePipelineRole.Arn
              - !Sub "arn:aws:iam::${DevAccountId}:role/cross-account-role"
              - !Sub "arn:aws:iam::${DevAccountId}:role/cloudformation-role"
CodePipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    ArtifactStore:
      Type: S3
      Location: !Ref S3ArtifactBucket
      EncryptionKey:
        Id: !Ref KMSKey
        Type: KMS
    ...
The pipeline (CI account) has to have a permission to assume a role in the other (DEV) account:
pipeline stack in the CI account
CodePipelinePolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyDocument:
      Statement:
        - Action: ["sts:AssumeRole"]
          Resource: !Sub "arn:aws:iam::${DevAccountId}:role/cross-account-role"
          Effect: Allow
    ...
And that role has to allow the pipeline to assume it:
pipeline stack in the DEV account
CrossAccountRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: cross-account-role
    Path: /
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Principal:
            AWS: !Sub "arn:aws:iam::${CIAccountId}:root"
          Action: sts:AssumeRole
CrossAccountPolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: CrossAccountPolicy
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Action:
            - cloudformation:*
            - codebuild:*
            - s3:*
            - iam:PassRole
          Resource: "*"
        - Effect: Allow
          Action: ["kms:Decrypt", "kms:Encrypt"]
          Resource: !Ref KMSKey
    Roles: [!Ref CrossAccountRole]
The pipeline (managed and executed from the CI account) must assume a role from the other account to execute the action from within the account:
pipeline stack in the CI account
CodePipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    Name: pipeline
    RoleArn: !GetAtt CodePipelineRole.Arn
    Stages:
      ...
      - Name: StagingDev
        Actions:
          - Name: create-changeset
            InputArtifacts:
              - Name: BuildArtifact
            OutputArtifacts: []
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Version: "1"
              Provider: CloudFormation
            Configuration:
              StackName: app-stack-dev
              ActionMode: CHANGE_SET_REPLACE
              ChangeSetName: app-changeset-dev
              Capabilities: CAPABILITY_NAMED_IAM
              TemplatePath: "BuildArtifact::template.yml"
              RoleArn: !Sub "arn:aws:iam::${DevAccountId}:role/cloudformation-role" # the action will be executed with this role
            RoleArn: !Sub "arn:aws:iam::${DevAccountId}:role/cross-account-role" # the pipeline assumes this role to execute this action
      ...
The code above shows how to execute a CloudFormation action in a different account, the approach is the same for different actions like CodeBuild or CodeDeploy.
There is a nice sample https://github.com/awslabs/aws-refarch-cross-account-pipeline from AWS team.
Another example is here https://github.com/adcreare/cloudformation/tree/master/code-pipeline-cross-account
Or you can take a look at my whole working code here https://github.com/ttulka/aws-samples/tree/master/cross-account-pipeline
I think the issue is that your CloudFormation role is in the other account but your action role is not. Only the pipeline role is allowed to assume an action role in a different account.
The action role is the one located directly under the ActionDeclaration.
Basically your roles should be configured as follows:
Pipeline role: Account A
Action role: Account B
CloudFormation role: Account B
There's some information on setting up cross-account actions here: https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html
Here's where the action role is defined: https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ActionDeclaration.html

What is the best way to get a CloudFormation Stack to self-delete?

I'm trying to get my CloudFormation stack to delete itself when it is complete. When I try the following code in my template, the logs show me that the file or command was not found.
When I use runuser to execute other AWS CLI commands I have no problem (as long as the command doesn't require options that start with "--").
I am using the basic AWS IAM.
"06_delete_stack": { "command": { "Fn::Join": [ "", [
"runuser -u fhwa 'aws cloudformation delete-stack --stack-name ", { "Ref": "StackName" }, "'"
] ] },
"cwd": "/var/log"}
Expanding upon Joel's answer, here's a minimal CloudFormation stack that self-destructs from an EC2 instance by running aws cloudformation delete-stack, with an AWS::IAM::Role granting least privilege to delete itself:
Description: Cloudformation stack that self-destructs
Mappings:
  # amzn-ami-hvm-2016.09.1.20161221-x86_64-gp2
  RegionMap:
    us-east-1:
      "64": "ami-9be6f38c"
Resources:
  EC2Role:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub "EC2Role-${AWS::StackName}"
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: [ ec2.amazonaws.com ]
            Action: [ "sts:AssumeRole" ]
      Path: /
      Policies:
        - PolicyName: EC2Policy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - "cloudformation:DeleteStack"
                Resource: !Ref "AWS::StackId"
              - Effect: Allow
                Action: [ "ec2:TerminateInstances" ]
                Resource: "*"
                Condition:
                  StringEquals:
                    "ec2:ResourceTag/aws:cloudformation:stack-id": !Ref AWS::StackId
              - Effect: Allow
                Action: [ "ec2:DescribeInstances" ]
                Resource: "*"
              - Effect: Allow
                Action:
                  - "iam:RemoveRoleFromInstanceProfile"
                  - "iam:DeleteInstanceProfile"
                Resource: !Sub "arn:aws:iam::${AWS::AccountId}:instance-profile/*"
              - Effect: Allow
                Action:
                  - "iam:DeleteRole"
                  - "iam:DeleteRolePolicy"
                Resource: !Sub "arn:aws:iam::${AWS::AccountId}:role/EC2Role-${AWS::StackName}"
  RootInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Path: /
      Roles: [ !Ref EC2Role ]
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !FindInMap [ RegionMap, !Ref "AWS::Region", 64 ]
      InstanceType: m3.medium
      IamInstanceProfile: !Ref RootInstanceProfile
      UserData:
        "Fn::Base64":
          !Sub |
            #!/bin/bash
            aws cloudformation delete-stack --stack-name ${AWS::StackId} --region ${AWS::Region}
Note that if you add any additional resources to the template, you'll need to add the corresponding 'delete' IAM permission to the EC2Policy statement list.
I was able to get the stack to delete itself.
I had the stack build an additional shell script that contained the AWS CLI command to delete the stack. I then adjusted the runuser command to execute the shell script.
I then had to add the IAM permissions to delete the stack to the role for the generated user.
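Based on the approach described, the cfn-init metadata might look something like the following sketch; the file path, mode, and ownership are assumptions for illustration, while the user, runuser command, and StackName reference come from the question:

```json
"files": {
  "/home/fhwa/delete_stack.sh": {
    "content": { "Fn::Join": [ "", [
      "#!/bin/bash\n",
      "aws cloudformation delete-stack --stack-name ", { "Ref": "StackName" }, "\n"
    ] ] },
    "mode": "000755",
    "owner": "fhwa"
  }
},
"commands": {
  "06_delete_stack": {
    "command": "runuser -u fhwa /home/fhwa/delete_stack.sh",
    "cwd": "/var/log"
  }
}
```

Putting the CLI command in a script sidesteps the problem the question describes with passing "--" options through runuser's quoted command string.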