Purpose and scope of AWS CDK bootstrap stack? - amazon-web-services

The docs on AWS CDK bootstrapping say of the cdk bootstrap command:
cdk bootstrap
Deploys a CDKToolkit CloudFormation stack into the specified environment(s), that provides an S3 bucket that cdk deploy will use to store synthesized templates and the related assets, before triggering a CloudFormation stack update. The name of the deployed stack can be configured using the --toolkit-stack-name argument.
$ # Deploys to all environments
$ cdk bootstrap --app='node bin/main.js'
$ # Deploys only to environments foo and bar
$ cdk bootstrap --app='node bin/main.js' foo bar
However, how often does CDK need to be bootstrapped? Is it:
once for each AWS account?
once for each application in each AWS account?
once for each application in each AWS account that requires assets?
something else?

background:
cdk bootstrap is a tool in the AWS CDK command-line interface
responsible for populating a given environment (that is, a combination
of AWS account and region) with resources required by the CDK to
perform deployments into that environment.
When you run cdk bootstrap, the CDK deploys the CDK Toolkit stack into an AWS environment.
The bootstrap command creates a CloudFormation stack in the environment passed on the command line. Currently, the only resource in that stack is an S3 bucket that holds the file assets and the resulting CloudFormation template to deploy.
The cdk bootstrap command is run once per account/region combination, i.e. once per environment.
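For example, an app that deploys into two regions of the same account needs both environments bootstrapped; the account ID and regions below are placeholders:
$ # Bootstrap each environment (account/region) the app deploys into
$ cdk bootstrap aws://111111111111/us-east-1
$ cdk bootstrap aws://111111111111/eu-west-1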
Simple scenario to sum-up:
Run cdk bootstrap - creates a new S3 bucket, IAM roles, etc.
Run cdk deploy - deploys your stack for the first time; the new template is added to the bootstrap S3 bucket.
Apply any change to your CDK stack.
Run cdk diff - to view differences.
Behind the scenes, CDK generates the new template and compares it with the CDK template that exists in the bootstrap bucket (the same flow is shown as commands below).
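Expressed as commands, that scenario looks roughly like this (MyStack is just a placeholder stack name):
$ cdk bootstrap          # one-time per account/region: creates the CDKToolkit stack
$ cdk deploy MyStack     # first deployment; synthesized template and assets land in the bootstrap bucket
$ # ...change something in your CDK code...
$ cdk diff MyStack       # shows what would change before the next deploy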
More about cdk bootstrap.

Related

AWS SAM deploy command with `sam pipeline init` failed to persist CodePipeline

Following the AWS SAM documentation, I created the pipeline step by step as instructed, up to the final step, where I copied the command from step 4 to connect to CodeCommit and ran:
sam deploy -t codepipeline.yaml --stack-name prod --capabilities=CAPABILITY_IAM
I can see CloudFormation events being generated in the shell (with Successfully created/updated stack - prod in None), as well as the CodePipeline being created and running its deployment stages.
However, as soon as the deployment is done, that pipeline is missing from the AWS Developer Tools console.
Shouldn't the pipeline be retained, so that every time a new commit is merged to the branch it runs automatically? Why does my pipeline get removed right after the deployment is done?
After digging into the logic of the deploy command and its internals, I have managed to retain the CodePipeline in the AWS console so it runs as my automated CI/CD pipeline.
Posting this here so anyone facing a similar issue can refer to it. I would also suggest updating the guide to reflect this.
Here is what I found.
sam pipeline init
defines the two stage stack names that will be used when sam deploy is executed
in this case, I named them "prod" and "stage"
the stacks are not created yet; they will be created when sam deploy is executed
sam deploy -t codepipeline.yaml
generates two CloudFormation stack names for the two stages you defined in sam pipeline init
generates a CloudFormation stack with the name you defined in sam deploy, but this time with template.yaml
in this case, I defined "prod"
therefore, when sam deploy is called, it finds the CloudFormation stack that you indicated and modifies it
when sam deploy is run with the codepipeline.yaml template, a CloudFormation stack is created on the fly from your terminal to create the AWS CodePipeline, AWS CodeBuild and any other resources required for CI/CD, with the stack name you defined in sam deploy
once it is successfully created/updated, the CodePipeline runs, which includes creating/updating the CloudFormation stacks you defined during sam pipeline init
however, at this step, if you already have a CloudFormation stack with the same name, it will be modified
in this case, it modified the stack I had created for the CI/CD; since the CodePipeline is mostly deleted last, it is removed as soon as the deployment is done
The moral of this story: do not use the same stack name for sam pipeline init and sam deploy! (See the example below.)
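For example, keep the pipeline stack name distinct from the stage stack names chosen in sam pipeline init (prod-pipeline below is just a placeholder):
$ sam pipeline init        # stage stack names, e.g. "stage" and "prod"
$ sam deploy -t codepipeline.yaml --stack-name prod-pipeline --capabilities=CAPABILITY_IAM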

How to create a CloudFormation template from a SAM project?

I am trying to convert a SAM project to a CloudFormation template in order to call
cloudformation.createStack()
to create multiple stacks when a lambda is invoked. So far I can upload the SAM project with
sam build
sam package
But the size of the S3 upload is too big and I am getting errors. What are the steps to correctly upload the CloudFormation template?
These pre-reqs need to be met before continuing:
1. Install the SAM CLI.
2. Create an Amazon S3 bucket to store the serverless code artifacts that the SAM template generates. At a minimum, you will need permission to put objects into the bucket.
3. The permissions applied to your IAM identity must include iam:ListPolicies.
4. You must have AWS credentials configured either via the AWS CLI or in your shell's environment via the AWS_* environment variables.
5. Git installed.
6. Python 3.x installed.
7. (Optional) Install Python's virtualenvwrapper.
Source: https://www.packetmischief.ca/2020/12/30/converting-from-aws-sam-to-cloudformation/
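For reference, and assuming you substitute your own bucket name, the packaging step that turns a built SAM project into a template you can hand to CloudFormation looks roughly like this:
$ sam build
$ sam package --s3-bucket my-artifact-bucket --output-template-file packaged.yaml
$ # packaged.yaml references the uploaded code artifacts in S3; deploying it via
$ # createStack requires CAPABILITY_AUTO_EXPAND so the SAM transform can be expanded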

Does AWS CloudFormation have an equivalent Terraform destroy command

Having deployed an application using Amazon's AWS SAM framework (CloudFormation under the hood), I would now like to destroy all the resources it has created.
This would be easy enough to do had I been using Terraform, via the terraform destroy command. Is there an equivalent command using AWS SAM or even CloudFormation?
Thanks in adv.
Michael McD.
You can delete the CloudFormation stack (and therefore all resources contained within it) either through the CLI (https://docs.aws.amazon.com/cli/latest/reference/cloudformation/delete-stack.html) or through the console (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html).
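For example, assuming the stack is named my-sam-app:
$ aws cloudformation delete-stack --stack-name my-sam-app
$ # or, with recent versions of the SAM CLI:
$ sam delete --stack-name my-sam-app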

How to use Terraform to create a pipeline that deploys a lambda function

I am trying to use Terraform to create a CodePipeline in AWS that automatically deploys a lambda function.
I have already created 2 stages to get the code from GitHub and build the artifact using CodeBuild, storing the artifact in S3.
But I can't seem to find a Terraform configuration for CodeDeploy to deploy the artifact from S3 to Lambda. I do see there is a deployment setting in the console where I can specify the details of the deployment.

How to re-use CodePipeline to deploy different lambdas without replacing an existing lambda

I followed this tutorial https://aws.amazon.com/blogs/devops/aws-building-a-secure-cross-account-continuous-delivery-pipeline/ to set up cross account deployments of our lambdas using Cloudformation as my automation tool.
I'm using the pipeline in this repo: https://github.com/awslabs/aws-refarch-cross-account-pipeline/blob/master/ToolsAcct/code-pipeline.yaml (pipeline starts at line 207) and the pipeline in question is in the ToolsAccount/ directory;
I am able to successfully deploy the first lambda; however, any subsequent deployment replaces the old lambda, but I want to have lambda_1 and lambda_2 both present in the console, not just the latest one.
To deploy the second lambda, I rerun steps 4 and 5 of the tutorial's 6 steps, as below:
4. In the Tools account, which hosts AWS CodePipeline, execute this CloudFormation template. This creates a pipeline, but does not add permissions for the cross accounts (Dev, Test, and Prod):
aws cloudformation deploy --stack-name sample-lambda-pipeline \
--template-file ToolsAcct/code-pipeline.yaml \
--parameter-overrides DevAccount=ENTER_DEV_ACCT TestAccount=ENTER_TEST_ACCT \
ProductionAccount=ENTER_PROD_ACCT CMKARN=FROM_1st_STEP \
S3Bucket=FROM_1st_STEP \
--capabilities CAPABILITY_NAMED_IAM
5. In the Tools account, execute this CloudFormation template, which gives access to the role created in step 4. This role will be assumed by AWS CodeBuild to decrypt artifacts in the S3 bucket. This is the same template that was used in step 1, but with different parameters:
aws cloudformation deploy --stack-name pre-reqs \
--template-file ToolsAcct/pre-reqs.yaml \
--parameter-overrides CodeBuildCondition=true
After running both of these steps to deploy the second lambda, it successfully deploys it, but it replaces the other lambda that was already deployed earlier.
How can I keep the existing lambda while deploying new ones, and have all lambdas present in the console, not just the latest one that was deployed?
My guess would be that by rerunning steps 4 and 5, I'm creating a changeset over the previously deployed lambda, and thus it keeps replacing the old lambda in the console.
If my guess is correct, how can I re-use the same pipeline to deploy different lambdas without replacing the previously deployed ones?
Is there an attribute of the CloudFormation pipeline resource that I'm missing?
It sounds like you're trying to use a single pipeline to deploy multiple different independent services / projects. This will cause problems when you "switch" projects because the template won't contain resources from the other project and therefore CloudFormation will think these resources need to be removed.
You can either:
Add all the lambda functions together in a single template
Set up a separate pipeline per set of functions (sketched below)
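A minimal sketch of the second option, assuming each lambda project gets its own pipeline stack name (the names below are placeholders; the remaining parameters are the same as in step 4 above):
# Pipeline stack for the first lambda project
aws cloudformation deploy --stack-name lambda-one-pipeline \
--template-file ToolsAcct/code-pipeline.yaml \
--parameter-overrides DevAccount=ENTER_DEV_ACCT TestAccount=ENTER_TEST_ACCT \
ProductionAccount=ENTER_PROD_ACCT CMKARN=FROM_1st_STEP S3Bucket=FROM_1st_STEP \
--capabilities CAPABILITY_NAMED_IAM
# Repeat with a different stack name, e.g. lambda-two-pipeline, for the second project,
# so neither deployment removes the other's resources.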