How many times do I need to run cdk bootstrap?

I am trying to understand how cdk bootstrap works. I have read the design doc (https://github.com/aws/aws-cdk/blob/master/design/cdk-bootstrap.md) and tried running the command in my AWS account. I can see that a new CloudFormation stack named CDKToolkit is created, which includes an S3 bucket, IAM roles, etc.
My question is: do I need to run this command for every CDK project I have, or is it a one-time execution?
If I have projects using different CDK versions (v1 and v2), do they use the same CloudFormation stack? Will that cause version conflicts?

It's typically a one-time thing per account, per region. The infrastructure in that stack is shared among all of your CDK apps.
There was a change in the bootstrap stack's format a while ago that required an update of the stack, but since then it has remained largely unchanged.
The docs on bootstrapping are probably more helpful than the GitHub link: CDK Bootstrapping.
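Concretely, bootstrapping means running cdk bootstrap aws://ACCOUNT-ID/REGION once for each account/region pair you deploy into, and every app then picks up the shared resources automatically. The only time a stack needs to point at a specific bootstrap stack is when you bootstrapped with a custom qualifier; here is a minimal sketch, assuming a hypothetical qualifier named myapp:

```typescript
import * as cdk from 'aws-cdk-lib';

const app = new cdk.App();

// By default, every stack in every CDK v2 app uses the shared CDKToolkit
// bootstrap resources; no configuration is needed. A custom synthesizer is
// only relevant if you bootstrapped a separate stack with its own qualifier,
// e.g. `cdk bootstrap --qualifier myapp` (the qualifier here is hypothetical).
new cdk.Stack(app, 'MyStack', {
  synthesizer: new cdk.DefaultStackSynthesizer({
    qualifier: 'myapp', // must match the qualifier used when bootstrapping
  }),
});

app.synth();
```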
Each CloudFormation stack created by a CDK app belongs to exactly one CDK app; stacks shouldn't be shared between apps. A stack's outputs can be referenced from other apps, but each stack should belong to a single app.
That's why you can mix and match CDK versions across different stacks. Usually each CDK app maps to one or more CloudFormation stacks.
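To illustrate the cross-app referencing, here is a minimal sketch, assuming a hypothetical export name: one app publishes a value through a CloudFormation output and a second app consumes it.

```typescript
import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';

// App A exports a bucket name as a CloudFormation output/export.
const appA = new cdk.App();
const stackA = new cdk.Stack(appA, 'StackA');
const bucket = new s3.Bucket(stackA, 'SharedBucket');
new cdk.CfnOutput(stackA, 'SharedBucketName', {
  value: bucket.bucketName,
  exportName: 'shared-bucket-name', // hypothetical export name
});
appA.synth();

// App B (in practice a separate codebase, possibly on a different CDK
// version) references the export without owning App A's stack.
const appB = new cdk.App();
const stackB = new cdk.Stack(appB, 'StackB');
new cdk.CfnOutput(stackB, 'ConsumedBucketName', {
  value: cdk.Fn.importValue('shared-bucket-name'),
});
appB.synth();
```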

Related

AWS Lambda "Applications" and Deployment Environments

I have an AWS Lambda "Application" which was created from an AWS Lambda app template. This in turn created two stacks: the serverlessrepo-??-toolchain stack and the Lambda Application stack that has the actual application...
I've done the development and added Lambdas, permissions, and such, and have really evolved the template.yml and the buildspec.yml.
It all works and properly rebuilds the stack.
But in an AWS Lambda Application that is using CodeDeploy/CodePipeline, what is the best strategy for deploying additional environments? Let's assume the first one (the one made by the serverlessrepo-??-toolchain stack) is Dev. How do I create QA and Prod environments from my template.yml?
They need to be new stacks, yes? As in, each environment is its own stack.
Thank you.

CloudFormation Multi-Step Stack Deployment

We've built our entire platform using Cloudformation stacks and have dozens of services that follow the same deployment pattern:
Each service is broken out into two stacks. The first stack provisions an ECR repo for the app image and a CodeBuild project to build the image. The second stack provisions the service using the $latest image from the previously provisioned ECR repo.
Before the ECR repo can have an image, the first stack needs to be deployed and the CodeBuild project must be run. Only once this is done can the second stack be deployed (otherwise the service cannot be provisioned, because there is no starter image to use).
Having it broken out into these parts also gives us the ability to provision the same service with different configurations. Say I have an image processing service that creates image thumbnails. I want to configure that service differently if it is meant to handle real-time requests from users vs if it is a service that handles batch jobs. So my ImageProcessingService may be split into a UserImageProcessingService and BatchImageProcessingService but both services will use the shared ImageProcessingServiceRepo from ECR.
As a result, redeploying the same service to, say, another region cannot be easily automated without custom tools.
My question is:
How are you meant to automate deployment of CloudFormation stacks when multiple steps are needed, especially steps involving tasks like submitting a CodeBuild job?
Is there a better way to structure my CloudFormation stacks to achieve my goal, or is it not possible to do what I need without something like Terraform?
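For what it's worth, the kind of "custom tool" described above can be a fairly small script. A rough sketch using the AWS SDK for JavaScript v3, where the stack names, template files, and CodeBuild project name are all hypothetical:

```typescript
import { readFileSync } from 'fs';
import {
  CloudFormationClient,
  CreateStackCommand,
  waitUntilStackCreateComplete,
} from '@aws-sdk/client-cloudformation';
import {
  CodeBuildClient,
  StartBuildCommand,
  BatchGetBuildsCommand,
} from '@aws-sdk/client-codebuild';

const cfn = new CloudFormationClient({});
const codebuild = new CodeBuildClient({});

// Create a stack and block until CloudFormation reports CREATE_COMPLETE.
async function deployStack(name: string, templateFile: string): Promise<void> {
  await cfn.send(new CreateStackCommand({
    StackName: name,
    TemplateBody: readFileSync(templateFile, 'utf8'),
    Capabilities: ['CAPABILITY_NAMED_IAM'],
  }));
  await waitUntilStackCreateComplete(
    { client: cfn, maxWaitTime: 1800 },
    { StackName: name },
  );
}

// Start a CodeBuild run and poll until it finishes.
async function runBuild(projectName: string): Promise<void> {
  const { build } = await codebuild.send(new StartBuildCommand({ projectName }));
  for (;;) {
    const { builds } = await codebuild.send(
      new BatchGetBuildsCommand({ ids: [build!.id!] }),
    );
    const status = builds?.[0]?.buildStatus;
    if (status === 'SUCCEEDED') return;
    if (status !== 'IN_PROGRESS') throw new Error(`build ended with ${status}`);
    await new Promise((resolve) => setTimeout(resolve, 15_000));
  }
}

async function main(): Promise<void> {
  await deployStack('ImageProcessingServiceRepo', 'repo-stack.yml');    // ECR + CodeBuild
  await runBuild('image-processing-image-build');                       // push first image
  await deployStack('UserImageProcessingService', 'service-stack.yml'); // consumes the image
}

main().catch((err) => { console.error(err); process.exit(1); });
```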

CDK deployment and least privilege principle

We're (mostly happily ;)) using the AWS CDK to deploy our application stack to multiple environments (e.g. production, centralized dev, individual dev).
Now we want to increase security by applying the least-privilege principle to the deployment role. As the CDK code already has all the information about which services it will touch, is there a best practice for generating the role definition?
Obviously it can't be part of the stack itself, as it is needed to deploy the stack in the first place.
Is there any mechanism built into the CDK for this? For example: the construct CloudFrontDistribution is used, so the deployment role needs permission to create, update, and delete CloudFront distributions (possibly even scoped, once the distribution exists, to only that one distribution).
Any best practices on how to achieve that?
No. Sadly, there isn't currently (2022-Q3) a way to have the CDK code also produce an IAM policy that would grant you permission to deploy that template and nothing more.
However, everything is there to do it, and thanks to aspects it could probably be done relatively easily if you wanted to put in the legwork. I know many people in the community would love to have this.
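To sketch the aspects idea (this is not an existing CDK feature; mapping each resource type to the IAM actions a deploy role needs, and scoping them to ARNs, is the real work and is omitted here):

```typescript
import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import { IConstruct } from 'constructs';

// Walk the construct tree and record every CloudFormation resource type
// the app will deploy; a policy generator could start from this list.
class ResourceTypeCollector implements cdk.IAspect {
  public readonly types = new Set<string>();
  public visit(node: IConstruct): void {
    if (cdk.CfnResource.isCfnResource(node)) {
      this.types.add(node.cfnResourceType);
    }
  }
}

const app = new cdk.App();
const stack = new cdk.Stack(app, 'MyStack');
new s3.Bucket(stack, 'Artifacts'); // example resource

const collector = new ResourceTypeCollector();
cdk.Aspects.of(stack).add(collector);
app.synth(); // aspects run during synthesis

console.log([...collector.types]); // e.g. [ 'AWS::S3::Bucket' ]
```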
You run into a chicken-and-egg problem here. (We encounter a similar issue with Secrets Manager and initializing secrets.) Pretty much the only solution I've found that works is a first-time setup script that uses an SDK or the CLI to run the necessary commands, after which everything else can reference what it created.
However, it also depends on which roles you're talking about. cdk deploy pretty much needs access to any resource you may be setting up, but you can limit it through users. Your kept-in-a-secret-lockbox root admin setup script can create a single power user that is then used for initial cdk deploys. You can also set up additional user groups that are allowed to deploy with the CDK, or have that initial setup create a role that cdk deploy can assume.
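As a minimal sketch of that first-time setup script, using the AWS SDK for JavaScript v3 (the role name, account ID, and trusted principal are hypothetical, and the attached policy is deliberately far broader than least privilege):

```typescript
import {
  IAMClient,
  CreateRoleCommand,
  AttachRolePolicyCommand,
} from '@aws-sdk/client-iam';

const iam = new IAMClient({});

async function main(): Promise<void> {
  // Role that a CI principal (hypothetical) may assume for cdk deploy.
  await iam.send(new CreateRoleCommand({
    RoleName: 'cdk-deployer',
    AssumeRolePolicyDocument: JSON.stringify({
      Version: '2012-10-17',
      Statement: [{
        Effect: 'Allow',
        Principal: { AWS: 'arn:aws:iam::111111111111:user/ci' },
        Action: 'sts:AssumeRole',
      }],
    }),
  }));

  // Deliberately broad; tightening this is exactly the open problem
  // discussed above.
  await iam.send(new AttachRolePolicyCommand({
    RoleName: 'cdk-deployer',
    PolicyArn: 'arn:aws:iam::aws:policy/PowerUserAccess',
  }));
}

main().catch((err) => { console.error(err); process.exit(1); });
```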

Setting up CodePipeline with Terraform

I am new to Terraform and building a CI setup. When I want to create a CodePipeline that is connected to a GitHub repo, do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me? Or would I set this CodePipeline up manually inside the AWS console and hook it up to Terraform after the fact?
do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me?
Yes, you use the aws_codepipeline resource, which will create a new pipeline in AWS.
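A minimal sketch of that resource in HCL; the IAM role, artifact bucket, CodeStar connection, and CodeBuild project are assumed to be defined elsewhere, and all names are placeholders:

```hcl
resource "aws_codepipeline" "ci" {
  name     = "my-pipeline"
  role_arn = aws_iam_role.pipeline.arn # defined elsewhere

  artifact_store {
    location = aws_s3_bucket.artifacts.bucket # defined elsewhere
    type     = "S3"
  }

  stage {
    name = "Source"
    action {
      name             = "GitHub"
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeStarSourceConnection"
      version          = "1"
      output_artifacts = ["source_output"]
      configuration = {
        ConnectionArn    = aws_codestarconnections_connection.github.arn
        FullRepositoryId = "my-org/my-repo"
        BranchName       = "main"
      }
    }
  }

  stage {
    name = "Build"
    action {
      name            = "Build"
      category        = "Build"
      owner           = "AWS"
      provider        = "CodeBuild"
      version         = "1"
      input_artifacts = ["source_output"]
      configuration = {
        ProjectName = aws_codebuild_project.build.name
      }
    }
  }
}
```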
Or would I set this CodePipeline up manually inside AWS console and hook it up to Terraform after the fact?
You can also import existing resources into Terraform using the terraform import command.
I see you submitted this eight months ago, so I am pretty sure you have your answer by now, but for those who come across this question later, here are my thoughts on it.
As most of you have researched, Terraform is infrastructure as code (IaC). As IaC, it needs to be executed somewhere: either locally or inside a pipeline. A pipeline typically consists of containers that emulate a local environment and run commands on your behalf to deploy your code. There is more to it than that, but the premise of how Terraform runs remains the same.
So, to the magic question: Terraform is code, and if you intend to use a pipeline (Jenkins, AWS, GitLab, or another), you need a code repository to put all that code into; in this case, a repository where you store your Terraform code so a pipeline can consume it when deploying. There are other reasons to use a code repository, but your question is about Terraform and its usage with a pipeline.
Now the magnificent argument, the chicken or the egg: when do you create your pipeline, and how? To your original question, you could do both. You could store all your Terraform code in a repository (which I recommend), clone it down, and run Terraform locally to create your pipeline. This is ideal for saving time and leveraging automation. Newbies: you will need to research Terraform state files, which you must back up in some form once the pipeline has been deployed for you.
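A common way to handle that is a remote backend, so the state file lives in S3 instead of on whichever machine ran Terraform. A minimal sketch, where the bucket, key, and table names are placeholders:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"  # pre-existing bucket for state
    key            = "ci/codepipeline.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"     # optional: state locking
  }
}
```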
If you are not so comfortable with Terraform, the GUI in AWS is also fine, and you can easily configure your pipeline there and hook it into GitHub to run jobs.
In both scenarios, you must set up Terraform and AWS credentials, either locally on your machine or within the pipeline, to deploy your code. This article is pretty good and will give you a basic understanding of setting up Terraform.
Don't forget to configure AWS on your local machine. For you newbies using a pipeline, you can leverage some of the pipeline links to get started. Remember one thing: within AWS CodePipeline, you have to use IAM roles and not access keys. That will make more sense once you have gone through the first link. Please also go to YouTube and search for "Terraform for beginners in AWS"; various videos can provide a lot more substance to help you get started.

Nested stacks in AWS CloudFormation are not changing

I have more than 20 different services in AWS defined as stacks in one main template file (with references to the templates in .json), so these stacks are nested. Updates of this stack are triggered by CodePipeline, which is properly configured with GitHub and the production site. My problem is that when the CloudFormation script is updated, only the main, top-level resources are updated; unfortunately, I cannot see any changes linked to the nested stacks. Why?