Do not delete existing resources when destroying a stack in AWS-CDK - amazon-web-services

When working with aws-cdk, one often has to import existing resources into a stack. When we "destroy" the stack, we take it for granted that the existing resources we imported are not deleted along with everything else.
Is it possible to explicitly not destroy a resource during the destroy process?

Imported resources won't actually be part of your new stack (i.e. they won't appear as resources in the generated CloudFormation template), so if you are only concerned with those resources, you don't need to worry.
If you want to make sure something in the stack is not deleted when the stack is deleted, you can call applyRemovalPolicy(RemovalPolicy.RETAIN) on the resource.

Jason Wadsworth gives a good answer above regarding applyRemovalPolicy().
You can apply policies at the resource level and at the stack level.
You can also take care to set appropriate IAM policies for your users (including perhaps the API user that you use for the cdk) such that they couldn't delete your protected resources even if they wanted to.
You might want to look into stack termination protection, e.g. the --enable-termination-protection flag of the aws cloudformation update-termination-protection command in the AWS CLI.
Finally, a cheap and easy way to ensure that a given resource won't get inadvertently deleted, one that requires minimal AWS knowledge and CDK experience, is to simply define the resource outside the CDK, e.g. via the console, the AWS CLI, etc.
Starting out, this might offer some peace of mind that you or a colleague won't accidentally return something like an Elastic IP to Amazon's pool when, for example, a bunch of external dependencies and considerations like whitelists and third-party firewall rules are tied to it.
Welcome to StackOverflow, don't forget to "accept" the answer that you feel provides the best solution to your problem :).

Related

AWS CDK: What is the best way to implement multiple Stacks?

I have a few things to get clear, specifically regarding modeling architecture for a serverless application using AWS CDK.
I’m currently working on a serverless application developed using AWS CDK in TypeScript. As a convention, we also follow the rules below.
A stack should only have one table (dynamo)
A stack should only have one REST API (api-gateway)
A stack should not depend on any other stack (no cross-references), unless it's the Event-Stack (a stack dedicated to managing EventBridge operations)
The reason for that is so that each stack can be deployed independently without any interferences of other stacks. In a way, our stacks are equivalent to micro-services in a micro-service architecture.
At the moment all the REST APIs are public, and we have now decided to make them private by attaching custom Lambda authorizers to each API Gateway resource. In this custom Lambda authorizer, we have to perform certain operations (apart from token validation) in order to allow the user's request to proceed further. Those operations are:
Get the user’s role from DB using the user ID in the token
Get the user’s subscription plan (paid, free, etc.) from DB using the user ID in the token.
Get the user’s current payment status (due, no due, fully paid, etc.) from DB using the user ID in the token.
Get the scopes allowed for this user based on 1, 2, and 3.
Check whether the user can access this scope (the resource the user is currently requesting) based on 4.
This authorizer Lambda function needs to be used by all the other stacks to make their APIs private. But the problem is that roles, scopes, subscriptions, payments & user data live in different stacks, in their own dedicated DynamoDB tables. Because of the rules I explained before (especially rule number 3), we cannot depend on resources defined in other stacks. Hence we are unable to create the authorizer we want.
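For concreteness, the authorization logic in steps 1-5 could be sketched roughly like this (in Python; the in-memory dicts, scope names, and user IDs are all hypothetical stand-ins for the DynamoDB lookups):

```python
# Hypothetical stand-ins for the roles, subscriptions, and payments tables
# that live in other stacks; in the real authorizer these would be
# DynamoDB queries keyed by the user ID extracted from the token.
USER_ROLES = {"user-1": "admin", "user-2": "member"}
USER_PLANS = {"user-1": "paid", "user-2": "free"}
USER_PAYMENT_STATUS = {"user-1": "fully-paid", "user-2": "due"}

def scopes_for(role, plan, payment_status):
    """Step 4: derive the allowed scopes from role, plan and payment status."""
    scopes = {"public:read"}
    if payment_status == "due":
        return scopes  # users with dues keep only the public scope
    if plan == "paid":
        scopes.add("reports:read")
    if role == "admin":
        scopes.add("admin:write")
    return scopes

def authorize(user_id, requested_scope):
    """Steps 1-3: look up the user's data; step 5: check the requested scope."""
    role = USER_ROLES.get(user_id)
    if role is None:
        return False  # unknown user
    plan = USER_PLANS.get(user_id)
    payment_status = USER_PAYMENT_STATUS.get(user_id)
    return requested_scope in scopes_for(role, plan, payment_status)
```

The hard part in the question is not this logic but where the three lookups read their data from, which is what the layout discussion below is about.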
Solutions we could think of and their problems:
Since EventBridge isn't bidirectional, we cannot use it to fetch data from a resource in a different stack.
We can invoke a Lambda in a different stack using its ARN and get the required data from its response, but AWS has discouraged this as a CDK anti-pattern.
We cannot use a technology like gRPC because it requires a continuously running server, which is outside the scope of a serverless architecture.
There was also a proposal to re-design the CDK layout of our application. The main feature of this layout is moving from a no-cross-references pattern to a fully cross-referenced one. (Inspired by the layered architecture described in this AWS best practice.)
Based on that article, we came up with a layout like this.
Presentation Layer
Stack for deploying the consumer web app
Stack for deploying admin portal web app
Application Layer
Stack for REST API definitions using API Gateway
Stack for Lambda functions running business-specific operations (Ex: CRUDs)
Stack for Lambda functions that run on event triggers
Stack for Authorisation (Custom Lambda authorizer(s))
Stack for Authentication implementation (Cognito user pool and client)
Stack for Events (EventBuses)
Stack for storage (S3)
Data Layer
Stack containing all the database definitions
There could be another stack for reporting, data engineering, etc.
As you can see, stacks are now going to have multiple dependencies on other stacks' resources (but no circular dependencies, as shown in the attached image). While this pattern unblocks us from writing an effective custom Lambda authorizer, we are not sure whether it will become a problem in the long run as the application's scope increases.
I highly appreciate the help any one of you could give us to resolve this problem. Thanks!
Multiple options:
Use Parameter Store rather than CloudFormation exports
Split stacks into a layered architecture like you described in your answer and import things between stacks using SSM Parameter Store, as the other answer describes. This is the most obvious choice for breaking inter-stack dependencies. I use it all the time.
Use fixed resource names, easily referencable and importable
Stack A creates the S3 bucket "myapp-users"; Stack B imports the S3 bucket by its fixed name using Bucket.fromBucketName(this, 'Users', 'myapp-users'). Fixed resource names have their own downsides, so this should be used only for resources that are indeed shared between stacks. They prevent easy replacement of the resource, for example. Also, you need to enforce the correct stack deployment order; CDK will no longer help you with that, since there are no cross-stack dependencies to enforce it.
Combine the app into a single stack
This sounds extreme and counterintuitive, but I found that most real-life teams don't actually have a pressing need for multi-stack deployment. If your only concern is separating code owners of different parts of the application, you can get away with splitting the stack into multiple Constructs composed into a single stack, where each team takes care of their Construct and its children. Think of it as combining multiple Git repos into a monorepo. A lot of projects are doing that.
A strategy I use to avoid hard cross-references involves storing shared resource values in AWS Systems Manager.
In the exporting stack, we can save the name of an S3 Bucket for instance:
ssm.StringParameter(
    scope=self,
    id="/example_stack/example_bucket_name",
    string_value=self.example_bucket.bucket_name,
    parameter_name="/example_stack/example_bucket_name",
)
and then in the importing stack, retrieve the name and create an IBucket by using a .from_ method.
example_bucket_name = ssm.StringParameter.value_for_string_parameter(
    scope=self,
    parameter_name="/example_stack/example_bucket_name",
)
example_bucket = s3.Bucket.from_bucket_name(
    scope=self,
    id="example_bucket_from_ssm",
    bucket_name=example_bucket_name,
)
You'll have to figure out the right order to deploy your stacks but otherwise, I've found this to be a good strategy to avoid the issues encountered with stack dependencies.

AWS CDK multi stack or single stack

I use CDK to deploy a Lambda function (along with some IAM roles and a queue) and monitoring resources for the Lambda, the Lambda log group, and the queue. What I have right now is basically 2 classes: 1 class to create all the Lambda-related resources and another to create the monitoring resources, and they are all added into 1 deployment stack.
Recently I deployed this to a new account and I realized my stack fails to create because some of the monitoring stuff looks for the Lambda log group and can't find it, since it's not created yet.
So what is the better option:
have 2 deployment groups, 1 for Lambda-related resources and 1 for monitoring resources
use dependencies to create some ordering in my stack.
Both seem like possible solutions, but which is the better long-term solution?
Assuming you mean a Stack for your two classes, you are better off making them both cdk.NestedStacks and instantiating them in a single common stack. You can then expose constructs as class attributes in one stack and pass them into the other as parameters. Of course, this only works one way; if you have to go both ways, you need to re-evaluate how your stacks are organized.
The advantage of doing this is great: exposing constructs as attributes is best practice, as it gives you direct access to the construct before the CloudFormation data is generated for it. You have complete access to every part of that construct, from the various ARNs (like DynamoDB stream ARNs, which are difficult to import) to automatically knowing the layer versions for Lambda layers, among many other things.
In addition, you never run into a stack dependency problem: if they are separate top-level stacks and you share constructs between them, you can easily run into lock situations where attempting to change something in one stack creates a dependency lock and prevents the stack from deploying.
The downside is that they are all part of one deployment, so there is a potential for something to be updated when you didn't expect it to. That said, CDK uses the CloudFormation changeset system, so it should not update things that have no changes applied to them (though changes sometimes occur because of the way CDK generates tokens and such, which you may not be aware of).
If you do not go this route, you are stuck using the various from* methods on CDK constructs to import the existing construct into your stack. This causes some issues, as they can't import everything about a given construct at synth time (layer versions and DynamoDB stream ARNs are two notable ones I mentioned already). Plus, you need to know the name of the construct, and best practice says you shouldn't deliberately name your constructs, so that you can easily spin up ad-hoc versions of your app without naming issues.

How to create a Stack in AWS via Terraform?

My goal is to be able to create a 'stack' in AWS, i.e. a grouping of related resources I can update and change using Terraform.
I've been attempting to read the documentation but I'm a little confused as to how I could accomplish this in terraform.
I understand the concept of possibly writing modules which are reusable, but I'm used to dealing with CF stacks when using AWS.
Is there an idiomatic way to do this in Terraform? It seems that the concept of a stack is somewhat abstracted away, i.e. if I want to get an output from a resource, e.g. an RDS URL, I can reference it in the Terraform code and it will be evaluated and determined at runtime, rather than by reading a CF stack output value in AWS?
Is this correct?
From what I understand, you want to write a replica of a "stack" in Terraform and understand the concepts.
There are a great number of resources for seeing example stacks; take a look at the official Terraform AWS examples to get a feel for the notation.
You're describing modules etc., which are best practice, but start small. Add some simple infrastructure to your main.tf file and then build on that.
The best way to learn will be through doing, but take it at a steady pace.
And yes, you can reference your resources. Generally, everything is evaluated before you run terraform apply, and any resource dependencies will be created in order.

How to delete everything in an AWS account?

Is there a way to nuke all existing settings in an AWS account to begin again on a clean slate?
I am an AWS beginner, and after getting tangled up and my web site no longer loading, I need a clean slate to start afresh, i.e. delete all IAM, ECS, S3, load balancers, etc. all in one go.
I would suggest https://github.com/rebuy-de/aws-nuke - it can clean everything from every region and is the best tool I've found yet!
Probably not. This is a common safety + security mechanism in such complex and important systems: nobody should be able, by accident or otherwise, to quickly and easily delete everything. Using an infrastructure as code process, however, you would be able to do this by simply declaring the entire stack as unwanted. This is relatively safe for the simple reason that you can usually bring this kind of infrastructure back up again in a short time span, as long as care was taken during development to make sure that any permanent state cannot be irrevocably destroyed by infrastructure declaration changes.

Is CloudFormation idempotent?

I read in many places on internet that CloudFormation is not idempotent, but I cannot find any example that proves this fact.
Could you please provide an example involving a resource that proves CloudFormation is not idempotent?
The definition of idempotent according to Wikipedia is as follows:
In computer science, the term idempotent is used more comprehensively to describe an operation that will produce the same results if executed once or multiple times.
CloudFormation is considered not idempotent in several aspects of its behavior:
Calling the create API for a stack that already exists will result in an error
Calling the update API with an unchanged CloudFormation stack results in an error
Creating and deleting the same stack again will result in creating resources with different ARNs for IAM Users, Security Group IDs, EC2 Instance IDs, VPC IDs, etc...
Resources modified outside of CloudFormation will not be changed back to original values if existing stack is updated with existing content
However, from a high level one of the main reasons to use CloudFormation is so you represent your infrastructure as code so you can use it to produce the same infrastructure repeatedly. That is almost identical to the original definition of idempotent, but the distinction is on the multiple times part here. As listed above when using the same stack and applying on top of it or deleting a stack and recreating it, technically you are not getting the exact same results, but from a practical standpoint this is completely understandable and often perfectly acceptable.
I am not sure whether this answer will be useful, as the question was posted 2 years ago. Better late than never.
AWS CloudFormation has changed a lot in these 2 years. Right now, I can say for sure that its API calls are idempotent.
Have a look at these API calls:
CreateStack
UpdateStack
DeleteStack
You will find that there is an optional parameter called ClientRequestToken. This provides idempotency to the API calls. It's a token the client provides to tell the CloudFormation service that it is not making a new API call. As long as you use the same token and keep making the call with the rest of the parameters unchanged, CloudFormation knows that you are only retrying the call.
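A small illustration of that retry pattern with boto3 (the helper function and the token format are my own for the sketch; only the ClientRequestToken parameter itself comes from the CloudFormation API):

```python
import uuid

def create_stack_request(stack_name, template_body, token=None):
    """Build kwargs for CloudFormation's create_stack call. A retry should
    pass in the token from the first attempt so that CloudFormation treats
    both calls as one and the same request."""
    return {
        "StackName": stack_name,
        "TemplateBody": template_body,
        # Calls sharing a token are treated as retries, not new requests
        "ClientRequestToken": token or f"create-{stack_name}-{uuid.uuid4()}",
    }

first = create_stack_request("my-stack", '{"Resources": {}}')
# On a timeout or dropped connection, retry with the SAME token:
retry = create_stack_request(
    "my-stack", '{"Resources": {}}', token=first["ClientRequestToken"]
)
# first == retry, so CloudFormation will not start a second create
```

With boto3 you would then call boto3.client("cloudformation").create_stack(**first); the empty-Resources template body just keeps the sketch short and is not itself deployable.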
CloudFormation is idempotent provided you have not made updates to an already completed stack. If there are changes, it will update; updating a resource might require deleting and recreating it, or it may be an update without creation of a new resource.
To learn more, read about the cfn-hup process; this will help you.