Use the AWS CLI in a CDK ShellStep (pipeline) step - amazon-web-services

I have a CDK Pipeline stack that synths and deploys some infrastructure. After the infrastructure is created, I want to build a frontend react app that knows the URL to the newly constructed API Gateway. Once the app is built, I want to move the built files to a newly created S3 bucket.
I have the first two steps working no problem. I use a CfnOutput to get the API URL and the bucket name. I then use envFromCfnOutputs in my shell step to build the react app with the right env variable set up.
I can't figure out how to move my files to an S3 bucket. I've tried for days to figure out something using s3deploy, but I run into various permission issues. I thought I could try to just use the AWS CLI and move the files manually, but I don't know how to give the CLI command permission to add and delete objects. To make things a bit more complicated, my infrastructure is deployed to a separate account from where my pipeline lives.
Any idea how I can use the CLI or another thought on how I can move the built files to a bucket?
// set up pipeline
const pipeline = new CodePipeline(this, id, {
  crossAccountKeys: true,
  pipelineName: id,
  synth: mySynthStep
});
// add a stage with all my constructs
const pipelineStage = pipeline.addStage(myStage)
// create a shellstep that builds and moves the frontend assets
const frontend = new ShellStep('FrontendBuild', {
  input: source,
  commands: [
    'npm install -g aws-cli',
    'cd frontend',
    'npm ci',
    'VITE_API_BASE_URL="$AWS_API_BASE_URL" npm run build',
    'aws s3 sync ./dist/ s3://$AWS_FRONTEND_BUCKET_NAME/ --delete'
  ],
  envFromCfnOutputs: {
    AWS_API_BASE_URL: myStage.apiURL,
    AWS_FRONTEND_BUCKET_NAME: myStage.bucketName
  }
})
// add my step as a post step to my stage.
pipelineStage.addPost(frontend);

I want to give this a shot and also suggest a solution for cross-account pipelines.
You have already figured out the first half: building the webapp works by passing the CloudFormation outputs (e.g. the API endpoint URL) into the environment of a shell step that builds the app.
You could now add permissions to a CodeBuildStep and attach a policy there to allow the step to call certain actions. That should work if your pipeline and your bucket are in the same account (and also cross-account, with a lot more fiddling). But there is a problem with scoping those permissions:
The pipeline is created (or self-updates) first, so at that point you do not know the bucket name or anything else about the deployed resources. Only afterwards does it deploy the resources to its own account or to another account. So you need to assign a name which is known beforehand. This is a general problem, and it broadens if you e.g. also need to create a CloudFront invalidation and so on.
My approach is the following (in my case for a cross-account deployment):
1. Create a Role alongside the resources with a predefined name, which allows the role to do the required things (e.g. read/write the S3 bucket, create a CloudFront invalidation, ...), and allow a matching principal to assume that role (in my case an Account principal)
Code snippet
const deploymentRole = new IAM.Role(this, "DeploymentRole", {
  roleName: "WebappDeploymentRole",
  assumedBy: new IAM.AccountPrincipal(pipelineAccountId),
});
// Grant permissions
bucket.grantReadWrite(deploymentRole);
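If the deployment step should also create CloudFront invalidations (as in the commands further below), the same role can be granted that too. A minimal sketch, assuming `distribution` is the webapp's CloudFront distribution construct and `Stack` is imported from the CDK core library; the `webappDeploymentRoleName` referenced in the later snippets is assumed to be a shared constant holding the predefined name "WebappDeploymentRole":
Code snippet
// Sketch only: `distribution` is a placeholder for the webapp's CloudFront distribution.
deploymentRole.addToPolicy(new IAM.PolicyStatement({
  actions: ["cloudfront:CreateInvalidation"],
  resources: [
    `arn:aws:cloudfront::${Stack.of(this).account}:distribution/${distribution.distributionId}`,
  ],
}));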
2. Create a `CodeBuildStep` which has permissions to assume that role (by a pre-defined name)
Code snippet
new CodeBuildStep("Deploy Webapp", {
  rolePolicyStatements: [
    new PolicyStatement({
      actions: ["sts:AssumeRole"],
      resources: [
        `arn:aws:iam::${devStage.account}:role/${webappDeploymentRoleName}`,
      ],
      effect: Effect.ALLOW,
    }),
  ],
  ...
});
3. In the `commands` I call `aws sts assume-role` with the predefined role name and save the credentials to the environment for the following calls to use
Code snippet
envFromCfnOutputs: {
  bucketName: devStage.webappBucketName,
  cloudfrontDistributionID: devStage.webbappCloudfrontDistributionId,
},
commands: [
  "yarn run build-webapp",
  // Assume role, see https://stackoverflow.com/questions/63241009/aws-sts-assume-role-in-one-command
  `export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s" $(aws sts assume-role --role-arn arn:aws:iam::${devStage.account}:role/${webappDeploymentRoleName} --role-session-name WebappDeploySession --query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" --output text))`,
  `aws s3 sync ${webappPath}/build s3://$bucketName`,
  `aws cloudfront create-invalidation --distribution-id $cloudfrontDistributionID --paths "/*"`,
],
4. I then call other AWS CLI actions like `aws s3 sync ...` with the credentials from step 3, which are now correctly scoped to the actions needed

The ShellStep is likely running under the IAM permissions/role of the Pipeline. Add additional permissions to the Pipeline's role and that should trickle down to the AWS CLI call.
You'll also probably need to call `buildPipeline()` before you try to do this:
pipeline.buildPipeline();
pipeline.pipeline.addToRolePolicy(...)
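For illustration, a rough sketch of what that could look like, assuming the bucket name is known up front ("my-frontend-bucket" is a placeholder, and PolicyStatement/Effect come from aws-cdk-lib/aws-iam):
pipeline.buildPipeline();
// Sketch: allow the pipeline role to perform the S3 calls that `aws s3 sync --delete` needs.
// "my-frontend-bucket" is a hypothetical, pre-defined bucket name.
pipeline.pipeline.addToRolePolicy(new PolicyStatement({
  effect: Effect.ALLOW,
  actions: ["s3:ListBucket", "s3:PutObject", "s3:DeleteObject"],
  resources: [
    "arn:aws:s3:::my-frontend-bucket",
    "arn:aws:s3:::my-frontend-bucket/*",
  ],
}));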

Related

How to use a single IAM role to deploy resources in multiple accounts using CDK CLI

Is it possible to use a single IAM role (which can access another role) to deploy resources with environment variables CDK_DEFAULT_ACCOUNT and CDK_DEFAULT_REGION?
For example: Below is a piece of code from a Jenkinsfile, which uses a role to deploy resources in the account of which it is a part.
script
{
    withCredentials([string(credentialsId: "sample-role-arn", variable: 'ARN'), string(credentialsId: "sample-role-extid", variable: 'EXT_ID')])
    {
        withAWS(role: "${ARN}", externalId: "${EXT_ID}", region: "${AWS_REGION}"){
            sh '''
            cdk deploy --all
            '''
        }
    }
}
In this code sample-role-arn is defined in the account into which cdk deploy --all will deploy the resources. If CDK_DEFAULT_ACCOUNT and CDK_DEFAULT_REGION are set to different values of which sample-role-arn is not a part, cdk deploy --all will throw the error: Could not assume role in target account using current credentials (which are for account xxxxxx) User: arn:aws:sts::xxxx:assumed-role/sample-role-arn/xxx is not authorized to perform: sts:AssumeRole on resource, which is expected.
However, if a role is created in the account set by CDK_DEFAULT_ACCOUNT and CDK_DEFAULT_REGION with sample-role-arn as a trusted entity, the same error as above is still encountered, despite the fact that sample-role-arn is a trusted entity.
Could someone please advise if this is possible?

How to run integration tests with AWS CDK CodePipeline

I am using CodePipeline to deploy a Lambda function.
After deployment, I would like to run integration tests.
I have a CodePipeline created and have deployed my Lambda stack successfully, but running the integration tests fails because of wrong permissions. My pipeline is in account A and the deployment goes to account B. That works, but if I add a ShellStep or a CodeBuildStep as a post step with stage.add_post on the StageDeployment object, invoking the Lambda function through my tests.sh is not possible, because the CodeBuildStep/ShellStep has different permission settings, although I deployed it in the previous step. To access the Lambda function ARN in my tests, I set a CfnOutput in my lambdaStackStage and passed it to my CodeBuildStep/ShellStep as env_from_cfn_outputs.
lambdaStackStage = Deploy(
    self,
    id='Deployment',
    env={
        'account': 123456789,
        'region': region
    },
)
stage = pipe.add_stage(
    stage=lambdaStackStage
)
stage.add_post(
    CodeBuildStep(
        "IntegrationTest",
        commands=["./tests.sh"],
        env_from_cfn_outputs={"LAMBDA_FUNCTION": lambdaStackStage.lambda_arn},
    )
)
How can I set the CodeBuildStep to use the same role as the deployment step uses? Or why isn't this step using the same role?
The lambdaStackStage was created with the account number for account B and the region, but passing this env to my CodeBuildStep did not change the permission problem.
stage.add_post(
    CodeBuildStep(
        "IntegrationTest",
        commands=["./tests.sh"],
        env_from_cfn_outputs={"LAMBDA_FUNCTION": lambdaStackStage.lambda_arn},
        env={
            'account': 123456789,
            'region': region
        }
    )
)
Here is a possible solution, where it is necessary to give the CodeBuildStep an assume-role policy and then assume the role using the AWS CLI.

Need to perform AWS calls for account xxx, but no credentials have been configured

I'm trying to deploy my stack to AWS using cdk deploy my-stack. When doing it in my terminal window it works perfectly, but when I'm doing it in my pipeline I get this error: Need to perform AWS calls for account xxx, but no credentials have been configured. I have run aws configure and inserted the correct keys for the IAM user I'm using.
So again, it only works when I'm doing it manually in the terminal but not when the pipeline is doing it. Anyone got a clue as to why I get this error?
I encountered the same error message on my Mac.
I had ~/.aws/config and credentials files set up. My credentials file had a user that didn't exist in IAM.
For me, the solution was to go back into IAM in the AWS Console and create a new dev-admin user with AdministratorAccess privileges.
Then I updated my ~/.aws/credentials file with the new [dev-admin] user and added the keys, which are available under the "Security Credentials" tab on the user's Summary page. The credentials entry looks like this:
[dev-admin]
aws_access_key_id=<your access key here>
aws_secret_access_key=<your secret access key here>
I then went back into my project root folder and ran
cdk deploy --profile dev-admin -v
Not sure if this is the 'correct' approach but it worked for me.
If you are using a named profile other than 'default', you might want to pass the name of the profile with the --profile flag.
For example:
cdk deploy --all --profile mynamedprofile
If you are deploying a stack or a stage you can explicitly specify the environment you are deploying resources in. This is important for cdk-pipelines because the AWS account where the Pipeline construct is created can be different from where the resources get deployed. For example (C#):
Env = new Amazon.CDK.Environment()
{
    Account = "123456789",
    Region = "us-east-1"
}
See the docs
If you get this error you might need to bootstrap the account in question. And if you have a tools/ops account you need to trust it from the "deployment" accounts.
Here is an example with dev, prod and tools:
cdk bootstrap <tools-account-no>/<region> --profile=tools;
cdk bootstrap <dev-account-no>/<region> --profile=dev;
cdk bootstrap <prod-account-no>/<region> --profile=prod;
cdk bootstrap --trust <tools-account-no> --profile=dev --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess';
cdk bootstrap --trust <tools-account-no> --profile=prod --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess';
cdk bootstrap --trust <tools-account-no> --profile=tools --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess';
Note that you need to commit the changes to cdk.context.json
The only way that worked for me is to make sure that neither the ~/.aws/config nor the ~/.aws/credentials file has a default profile section.
So if you remove the default profile from both files, it should work fine for you :)
Here is a sample of my ~/.aws/config (note: I don't use a default profile at all):
[profile myProfile]
sso_start_url = https://hostname/start#/
sso_region = REPLACE_ME_WITH_YOURS
sso_account_id = REPLACE_ME_WITH_YOURS
sso_role_name = REPLACE_ME_WITH_YOURS
region = REPLACE_ME_WITH_YOURS
output = yaml
And this is my ~/.aws/credentials (note: I don't use a default profile at all):
[myProfile]
aws_access_key_id=REPLACE_ME_WITH_YOURS
aws_secret_access_key=REPLACE_ME_WITH_YOURS
aws_session_token=REPLACE_ME_WITH_YOURS
source_profile=myProfile
Note: if it still doesn't work, try using only one profile in config and credentials to hold your AWS configuration and credentials.
I'm also new to this. I was adding sudo before the cdk bootstrap command. Removing sudo made it work.
You can also run aws configure list to list all the profiles and check whether credentials are being created and stored properly.
If using a CI tool, check the output of cdk <command> --verbose for hints at the root cause of the missing credentials.
In one case, the issue was simply that the ~/.aws/credentials file was missing (although it is not technically required if running on EC2) - more details in this answer.
I too had this issue.
When I checked ~/.aws/credentials, it had some older account details, so I just deleted that file and ran:
aws configure
cdk bootstrap aws://XXXXXX/ap-south-1
It worked.

Invalid Terraform AWS provider credentials when passing AWS system manager parameter store variables

Background:
I'm using an AWS CodeBuild buildspec.yml to iterate through directories from a GitHub repo to apply IaC using Terraform. To access the credentials needed for the Terraform AWS provider, I used AWS system manager parameter store to retrieve the access and secret key within the buildspec.yml.
Problem:
The Systems Manager Parameter Store masks the access and secret key env values, so when they are inherited by the Terraform AWS provider, the provider reports that the credentials are invalid:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: xxxx
To reproduce the problem:
Create Systems Manager Parameter Store variables (TF_AWS_ACCESS_KEY_ID=access, TF_AWS_SECRET_ACCESS_KEY=secret)
Create AWS CodeBuild project with:
"source": {
"type": "NO_SOURCE",
}
"environment": {
"type": "LINUX_CONTAINER",
"image": "aws/codebuild/standard:4.0",
"computeType": "BUILD_GENERAL1_SMALL"
}
buildspec.yml with the following: (modified to create .tf files instead of sourcing from github)
version: 0.2
env:
  shell: bash
  parameter-store:
    TF_VAR_AWS_ACCESS_KEY_ID: TF_AWS_ACCESS_KEY_ID
    TF_VAR_AWS_SECRET_ACCESS_KEY: TF_AWS_SECRET_ACCESS_KEY
phases:
  install:
    commands:
      - wget https://releases.hashicorp.com/terraform/0.12.28/terraform_0.12.28_linux_amd64.zip -q
      - unzip terraform_0.12.28_linux_amd64.zip && mv terraform /usr/local/bin/
      - printf "provider \"aws\" {\n\taccess_key = var.AWS_ACCESS_KEY_ID\n\tsecret_key = var.AWS_SECRET_ACCESS_KEY\n\tversion = \"~> 3.2.0\"\n}" >> provider.tf
      - printf "variable \"AWS_ACCESS_KEY_ID\" {}\nvariable \"AWS_SECRET_ACCESS_KEY\" {}" > vars.tf
      - printf "resource \"aws_s3_bucket\" \"test\" {\n\tbucket = \"test\"\n\tacl = \"private\"\n}" >> s3.tf
      - terraform init
      - terraform plan
Attempts:
Passing creds through the terraform -var option:
terraform plan -var="AWS_ACCESS_KEY_ID=$TF_VAR_AWS_ACCESS_KEY_ID" -var="AWS_SECRET_ACCESS_KEY=$TF_VAR_AWS_SECRET_ACCESS_KEY"
but I get the same invalid credentials error
Export system manager parameter store credentials within buildspec.yml:
commands:
  - export AWS_ACCESS_KEY_ID=$TF_VAR_AWS_ACCESS_KEY_ID
  - export AWS_SECRET_ACCESS_KEY=$TF_VAR_AWS_SECRET_ACCESS_KEY
which results in duplicate masked variables and the same error above. printenv output within buildspec.yml:
AWS_ACCESS_KEY_ID=***
TF_VAR_AWS_ACCESS_KEY_ID=***
AWS_SECRET_ACCESS_KEY=***
TF_VAR_AWS_SECRET_ACCESS_KEY=***
Possible solution routes:
Somehow pass the MASKED parameter store credential values into Terraform successfully (preferred)
Pass sensitive credentials into the Terraform AWS provider using a different method e.g. AWS secret manager, IAM role, etc.
Unmask the parameter store variables to pass into the aws provider (probably defeats the purpose of using aws system manager in the first place)
I experienced this same issue when working with Terraform on Ubuntu 20.04.
I had configured the AWS CLI using the aws configure command with an IAM credential for the terraform user I created on AWS.
However, when I run the command:
terraform plan
I get the error:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: 17268b96-6451-4527-8b17-0312f49eec51
Here's how I fixed it:
The issue was caused by a misconfiguration of the AWS CLI via the aws configure command: I had entered the AWS Access Key ID where the AWS Secret Access Key belonged, and the AWS Secret Access Key where the AWS Access Key ID belonged.
I had to run the command below to re-configure the AWS CLI correctly with an IAM credential for the terraform user I created on AWS:
aws configure
You can confirm that it is fine by running a simple AWS CLI command:
aws s3 ls
If you get an error like the one below, then you know you're still not setup correctly yet:
An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records.
That's all.
I hope this helps
Pass sensitive credentials into the Terraform AWS provider using a different method e.g. AWS secret manager, IAM role, etc.
Generally you wouldn't need to hard-code AWS credentials for Terraform to work. Instead, the CodeBuild IAM role should be enough for Terraform, as explained in the Terraform docs.
With this in mind, I verified that the following works and creates the requested bucket using Terraform from a CodeBuild project. The default CodeBuild role was modified with S3 permissions to allow creation of the bucket.
version: 0.2
phases:
  install:
    commands:
      - wget https://releases.hashicorp.com/terraform/0.12.28/terraform_0.12.28_linux_amd64.zip -q
      - unzip terraform_0.12.28_linux_amd64.zip && mv terraform /usr/local/bin/
      - printf "resource \"aws_s3_bucket\" \"test\" {\n\tbucket = \"test-43242-efdfdfd-4444334\"\n\tacl = \"private\"\n}" >> s3.tf
      - terraform init
      - terraform plan
      - terraform apply -auto-approve
Well, my case was quite foolish, but it might help:
After downloading the .csv file I copy-pasted the keys with aws configure.
In the middle of the secret key there was a "+". In the editor I use, double-click-to-copy stops at a non-alphanumeric character, meaning that only the first part of the secret access key was copied.
Make sure that you have dutifully copied the full secret key.
I had a 403 error.
The issue is that you should remove the {} from the example code.
provider "aws" {
access_key = "{YOUR ACCESS KEY}"
secret_key = "{YOUR SECRET KEY}"
region = "eu-west-1"
}
it should look like,
provider "aws" {
access_key = "YOUR ACCESS KEY"
secret_key = "YOUR SECRET KEY"
region = "eu-west-1"
}
I have faced this issue multiple times.
The solution is to create a user in AWS from the IAM Management Console, and the error will be fixed.

How to deploy AWS CDK stacks to multiple accounts?

AWS CDK stacks target an account or region based on an environment; details here. Here is an example of an app that deploys one stack into multiple target accounts:
const envEU = { account: '2383838383', region: 'eu-west-1' };
const envUSA = { account: '8373873873', region: 'us-west-2' };
new MyFirstStack(app, 'first-stack-eu', { env: envEU });
new MyFirstStack(app, 'first-stack-us', { env: envUSA });
My question is how to deploy these 2 stacks - is it possible to deploy them as a single operation? If so, what credentials are used and what roles are required on the 2 accounts?
Ideally, I'd like to be able to do a single command to deploy all stacks across all accounts:
cdk deploy ...
Or is the deployment only possible via 2 steps?
cdk deploy first-stack-eu --profile=profile_for_account_2383838383
cdk deploy first-stack-us --profile=profile_for_account_8373873873
I ended up using the cdk-assume-role-credential-plugin to perform the task. The description of that plugin states:
This plugin allows the CDK CLI to automatically obtain AWS credentials from a stack's target AWS account. This means that you can run a single command (i.e. cdk synth) with a set of AWS credentials, and the CLI will determine the target AWS account for each stack and automatically obtain temporary credentials for the target AWS account by assuming a role in the account.
I wrote up a detailed tutorial on how to use this plugin to perform AWS cross-account deployments using CDK here: https://johntipper.org/aws-cdk-cross-account-deployments-with-cdk-pipelines-and-cdk-assume-role-credential-plugin/
In CloudFormation you can use StackSets for multi-account and multi-region deployments.
However, this is not yet supported in CDK according to the GitHub issue:
Support for CloudFormation StackSets #66
As of v2 of CDK this is available by default:
Now by default when you bootstrap an AWS account it will create a set of IAM roles for you, which the CDK will assume when performing actions in that account.
If you have multiple stacks in your app you have to pass every stack into the cdk deploy command, e.g. cdk deploy WmStackRouteCertStack004BE231 WmStackUploadStackF8C20A98.
I don't know of a way to deploy all stacks in an app; I don't like this behavior, and it's the reason I try to avoid creating multiple stacks.