I am trying to understand how cdk bootstrap works. I have read the doc: https://github.com/aws/aws-cdk/blob/master/design/cdk-bootstrap.md and tried to run the command in my AWS account. I can see that a new CloudFormation stack named CDKToolkit is created, which includes an S3 bucket, IAM roles, etc.
My question is whether I need to run this command for every CDK project I have, or is it a one-time execution?
If I have projects using different CDK versions (v1 and v2), do they use the same CloudFormation stack? Will that cause version conflicts?
It's typically a one-time thing per account and region. The infrastructure in that stack is shared among your CDK apps.
There was a change in format a while ago that required an update of the stack, but since then it has remained largely unchanged.
The docs on bootstrapping are probably more helpful than the GitHub link: CDK Bootstrapping.
Each CloudFormation stack created by a CDK app belongs to only one CDK app; stacks shouldn't be shared. The outputs can be referenced from other apps, but each stack should belong to one app.
That's why you can mix and match CDK versions across different stacks. Usually each CDK app maps to one or more CloudFormation stacks.
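For reference, a minimal sketch of the bootstrap command (the account ID and regions here are placeholders):
# run once per account/region combination your apps deploy to
cdk bootstrap aws://123456789012/us-east-1
# repeat for any additional target region
cdk bootstrap aws://123456789012/eu-west-1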
I'm learning SAM, and I created two projects.
The first one, example1, I created from the AWS web console, by going to Lambda > Applications and choosing this template:
After the wizard finishes creating the app, it looks like this:
I'm interested in the yellow-highlighted area because I don't understand it yet.
I tried to replicate this more or less manually by using sam init and created example2. It's easy to look at the template.yml it creates and see how the stuff under Resources is created, but how is the stuff under Infrastructure created?
When I deploy example2 with sam deploy --guided, indeed there's nothing in Infrastructure:
Given example2, how should I go about creating the same infrastructure that example1 had out of the box (and then changing it; for example, I want several environments: prod, staging, etc.)? Is this point-and-click in the AWS console, or can it be done with CloudFormation?
I tried adding a permissions boundary to example2, one of the things example1 has in Infrastructure. I created the policy in IAM (manually, in the console), added it to the template.yml, and deployed it, but it didn't show up in "Infrastructure".
Part 1: In which I answer your question
Where are these infrastructure entries coming from in AWS SAM?
I replicated your steps in the Lambda console to create a "Serverless API Backend" called super-app. When you press create, AWS creates two CloudFormation stacks, each with a YAML template. You can view the stack resources and the YAML templates in the CloudFormation console under Stacks > Template tab.
super-app: the "Resources" stack with the Lambda and DynamoDB resources you managed to replicate.
serverlessrepo-super-app-toolchain: the mystery stack with the "Infrastructure" CI/CD resources [1].
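If you prefer the CLI to the console, a sketch of pulling the same templates (stack names as above; region and profile are assumptions):
# dump the generated template for each stack
aws cloudformation get-template --stack-name super-app --region us-east-1 --profile sandbox --query TemplateBody
aws cloudformation get-template --stack-name serverlessrepo-super-app-toolchain --region us-east-1 --profile sandbox --query TemplateBody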
Is this point and click in the AWS console or can it be done with CloudFormation?
Yes and Yes. You can use sam deploy (or aws cloudformation deploy) to update the stacks. Or point and click.
Example: update the serverlessrepo-super-app-toolchain template with the SAM CLI:
# compile
sam build -t cicd_template.yaml --region us-east-1 --profile sandbox
# send changes to the cloud
sam deploy --stack-name serverlessrepo-super-app-toolchain --capabilities CAPABILITY_NAMED_IAM --region us-east-1 --profile sandbox
You must pass in values for the template parameters at deploy time. The current values for the parameters are in the console under CloudFormation > Stack > Parameters tab. You can pass them using the --parameter-overrides option of the deploy command (see the inline sketch after the samconfig example below). If the parameters are static, I find it easier to pass SAM parameter values in samconfig.toml, which sam deploy will use by default:
# samconfig.toml
version = 0.1
[default]
[default.deploy]
[default.deploy.parameters]
# template default parameters - fill in the template blanks
# Where do the values come from? the CloudFormation console, Parameters tab
AppId = "super-app"
AppResourceArns = "arn:aws:lambda:us-east-1:1xxxxxx:function..."
ConnectionArn = "arn:aws:codestar-connections:us-east-1:xxxxxx:connection/xxxx3c5c-f0fe-4eb9-8164-d3c2xxxxx6e2"
GitHubRepositoryOwner = "mygithuborg"
RepositoryName = "super-app"
SourceCodeBucketKey = "sample-apps/nodejs/14.x/javascript/sam/web-backend.zip"
SourceCodeBucketName = "prodiadstack-subsystemsn-apptemplatesbucket03axxx-96eem3xxxxxx"
UseCodeCommit = false
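Alternatively, the same values can be passed inline at deploy time; a sketch using a few of the parameters above (the rest are passed the same way):
# inline equivalent of the samconfig.toml values
sam deploy --stack-name serverlessrepo-super-app-toolchain \
  --capabilities CAPABILITY_NAMED_IAM --region us-east-1 --profile sandbox \
  --parameter-overrides AppId=super-app GitHubRepositoryOwner=mygithuborg RepositoryName=super-app UseCodeCommit=false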
If changes were made in the template, they will deploy. Success!
Part 2: In which I try to convince you to use the CDK instead
SAM and YAML templates are far from dead, but I think it's safe to say that for proficient developers starting out with AWS, the newer AWS Cloud Development Kit is a natural first choice for ambitious applications that need CI/CD and testing. For most of us, editing an 800-line YAML file is not a fun experience.
AWS Infrastructure-As-Code
There are lots of AWS and third-party IaC tools to deploy infra on AWS. Each abstraction is the best fit for somebody, sometimes. The important thing to remember is that no matter which higher-level IaC toolset you use, it ends up being deployed as a CloudFormation template. Here are the AWS approaches, oldest to newest:
CloudFormation YAML [2] templates
The OG, all-powerful, lowest-level approach is to hand-code YAML templates. The Cfn template reference docs are indispensable no matter what tool you use, because that's what ultimately gets deployed.
SAM YAML templates
With AWS SAM, you still hand-code YAML, but less [3]. A SAM template is a superset of CloudFormation with some higher-level abstractions for the main serverless components like Lambda functions, DynamoDB tables, and queues. The SAM CLI compiles the SAM template to Cfn. It has nifty features like local testing and deploy conveniences.
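A sketch of the typical SAM loop (the function logical ID and event file are assumptions):
# compile the template and code
sam build
# invoke a function locally against a sample event
sam local invoke MyFunction --event events/event.json
# push the generated changeset to the cloud
sam deploy --guided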
Cloud Development Kit
The newest, shiniest IaC approach is the CDK, now on v2. With the CDK, we write TypeScript/Python/Java/etc. instead of YAML. The CDK CLI compiles your language code to Cfn and deploys it with cdk deploy. It has a bigger set of high-level infra abstractions that go beyond serverless, and escape hatches to expose low-level Cfn constructs for advanced use cases. It natively supports testing and CI/CD.
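A sketch of the equivalent CDK loop (the language choice is just an example):
# scaffold a new app
cdk init app --language typescript
# compile the code to a CloudFormation template without deploying
cdk synth
# see what would change, then deploy
cdk diff
cdk deploy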
See the AWS CDK workshop (including testing and pipelines) and the many AWS CDK example apps.
[1] Note that CloudFormation is the ultimate source of this info. The Lambda console makes a CloudFormation DescribeStacks API call to fetch it.
[2] YAML or JSON.
[3] SAM also has a marketplace-like repository with reusable AWS and third-party components.
Edit:
If I understand correctly, you want to reproduce the deployment side of the SAM app. If that's the case, there is an AWS sample that covers the same approach.
It seems you are using some of CodeStar/CodeCommit/CodePipeline/CodeDeploy/Code... etc. from AWS to deploy your SAM application in example1.
At deploy time, these resources under infrastructure are created by the "Code" services family in order to authorize, instantiate, build, validate, store, and deploy your application to CloudFormation.
On the other hand, with example2, whenever you build your project on your local machine, the instantiation, build, validation, and storage (of the uploadable build artifacts) are handled by your own machine, so they don't need to be provisioned by AWS.
To answer your question briefly: no, you can't recreate these infrastructure resources on your own. But again, you wouldn't need to while deploying outside of AWS's Code services.
I have a SAM project to deploy my app; I deploy this stack using sam build and sam deploy.
I recently added a CodePipeline (with all its resources) to the template. The problem is that when I deployed the app, CodePipeline created another stack.
Is there a way to maintain only one stack?
If not, should I separate them as nested stacks or different stacks?
TL;DR: Adding CodePipeline to a SAM app necessitates an additional CloudFormation stack.
The CodePipeline stack is independent of the "app stacks". This loose coupling is helpful:
Can deploy the app manually via sam deploy for testing, while using the pipeline for prod.
Can clone the app to multiple regions or accounts with pipeline stages
Can add fancy test or approval actions in the pipeline without touching the app code
(It also seems like this setup helps AWS avoid the tricky chicken-and-egg dependency problem of having to bootstrap the pipeline before deploying the app resources.)
There is the following setup:
2 lambda functions, deployed using serverless.yml
custom domain (e.g. api.mydomain.com) attached to API Gateway
2 stages (dev and prod)
CNAME configuration in my domain to point to abcdefg.cloudfront.net
There's a high-level task to update two Lambda functions without downtime for the API that they are serving. How can this be done using the Serverless Framework?
Note: there are two ways to manage Lambda deployments: stages and aliases (versions). Currently, aliases do not work in Serverless (there's a fork that hotfixes the issue, but it does not matter atm).
There is no downtime when updating a lambda function using the Serverless Framework, simply by running sls deploy.
The function code is zipped and uploaded to Lambda, and when that is completed, CloudFormation will update the Lambda configuration to point to the new code. There is no downtime in this process.
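For example, a sketch of rolling the update through each stage in turn (stage names as in the question):
# redeploy dev first; the API keeps serving the old code until the update completes
sls deploy --stage dev
# once verified, promote the same change to prod
sls deploy --stage prod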
My current project depends heavily on AWS, using different services: SQS, Lambda, DynamoDB, S3, API Gateway ... and they interact with each other to perform a specific task (for example, an SQS queue triggers a Lambda, which stores processed data in DynamoDB and S3). After deploying and testing, everything works together very well, and I am planning to clone the current working environment for a new project.
The normal way I can think of is creating the new services one by one (because I have taken note of all the configurations for each service so far).
My question: Is there any good, automated way to clone the working AWS environment configuration?
Any suggestion is very appreciated.
You can use a CloudFormation template to design your architecture and then reuse it as needed.
Since you already have your architecture running, instead of doing it manually, you can run CloudFormer, which will allow you to create a CloudFormation template out of your existing infrastructure. You can pick which parts to include in the CloudFormation template via a web interface, and once done, you can launch another such architecture via that newly created CloudFormation template.
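Once the template has been generated, a sketch of launching another copy of the architecture (the stack and file names here are assumptions):
# launch a second copy of the architecture from the generated template
aws cloudformation create-stack \
  --stack-name my-project-clone \
  --template-body file://cloudformer-output.template \
  --capabilities CAPABILITY_NAMED_IAM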