Using template.yaml, the CloudFormation service created a stack containing three S3 buckets.
I then deleted one of the S3 buckets manually.
Say the stack name is stack1.
On running the same template.yaml again (unchanged), with stack name stack1, following this update procedure, does the CloudFormation service update the same stack and recreate the missing bucket? It is not recreating the missing S3 bucket in my case.
You can't create two stacks with the same name in the same region. If you were to run this in another region, it would create the bucket you deleted but fail to create the other buckets, all assuming you named your buckets in the template. If the buckets were not named (so CloudFormation generated the names for you), then it will create all three buckets, but the names will not be the same as they were before.
CloudFormation will not update a stack when you tell it to create a stack.
EDIT:
Based on your updated question, it seems you are asking if the bucket will be recreated. The answer to that is no. CloudFormation sees that nothing has changed in what you've asked for, so no action is taken. As a matter of fact, you should get an error when updating, saying something along the lines of "no changes".
There are exceptions to the above "no", but for your purposes here I think it's sufficient.
The easiest solution for you is to remove the S3 bucket that you deleted from the template, run the update (it will "delete" it even though it's already gone), and then add it back to the template and update again. That will cause it to be created again.
If you are worried about this sort of thing happening in the future consider using Drift Detection with CloudFormation.
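For context on the "named your buckets" distinction above, here is a minimal template sketch (the logical IDs and bucket name are hypothetical):

```yaml
Resources:
  # A named bucket: S3 bucket names are globally unique, so recreating
  # the stack elsewhere fails if a bucket with this name still exists.
  NamedBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-bucket-name
  # An unnamed bucket: CloudFormation generates a fresh unique name on
  # every create, so it never collides, but the name changes each time.
  UnnamedBucket:
    Type: AWS::S3::Bucket
```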
Related
I am using removalPolicy: cdk.RemovalPolicy.DESTROY.
The other two options are RETAIN and SNAPSHOT.
If I delete my table from the console and try to create it using the CDK, it gives an error saying it could not find the resource.
Question -- what option can I use so that if the script is unable to find the table, it creates it?
The RemovalPolicy has nothing to do with things you remove yourself outside of CDK. Doing that is a bad idea (and in some cases not permitted), as you are supposed to delete resources by updating your CDK code to remove that resource and then redeploying.
The RemovalPolicy tells CloudFormation what to do if you change your CDK code so that the resource is no longer part of your CDK stack.
For example, if you have an S3 bucket, you cannot rename it, but you can still change its name in your CDK stack. If you do, CDK will need to remove the old S3 bucket and create a new one with the new name. The same applies to a number of other resources that can't be renamed, such as DynamoDB tables.
How CDK handles removing this old resource is what the RemovalPolicy is for. If you set it to RETAIN it will just forget about it and leave it up to you to clean up manually later. Using the DESTROY policy tells CDK to try to delete your resource automatically along with all the data it contains.
Usually you would use DESTROY if the data is not important and can be easily recreated (e.g. cache data), and RETAIN if the data is important and you would not want to lose it (e.g. user data).
Often it is a good idea just to always use RETAIN. This way if you accidentally make a typo in your CDK stack and rename a resource by mistake, all the data in it won't get deleted!
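For context, CDK's removalPolicy is rendered into the synthesized CloudFormation template as the DeletionPolicy and UpdateReplacePolicy attributes; a sketch of the output (the table definition here is hypothetical):

```yaml
Resources:
  UserTable:
    Type: AWS::DynamoDB::Table
    # Produced by removalPolicy: cdk.RemovalPolicy.RETAIN - the table
    # survives both removal from the stack and replacement on rename.
    DeletionPolicy: Retain
    UpdateReplacePolicy: Retain
    Properties:
      AttributeDefinitions:
        - AttributeName: pk
          AttributeType: S
      KeySchema:
        - AttributeName: pk
          KeyType: HASH
      BillingMode: PAY_PER_REQUEST
```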
what option can I use so that if the script is unable to find the table, it creates it?
You just create the resource normally. When you write a CDK stack, you are not telling it what to do (create this S3 bucket, create this DynamoDB table) but rather you are telling it what you want (I want an S3 bucket with this name, and a DynamoDB table with that name). CDK will figure out which resources need to be created to meet your request, and if CDK has already created those resources in an earlier deploy, it will just update them if changes are needed or leave them untouched if no changes are required.
The reason you got an error after you deleted the resource manually is because CDK was trying to find it to figure out whether it needed to be updated or not. This is why you should never change any AWS resources manually that were configured by CDK - always update the CDK template and redeploy. If you fiddle with the resources manually it is very easy to break CDK, and in that situation the only solution is to destroy the stack, manually clean up any resources that couldn't be destroyed, then redeploy it from scratch (and then also reupload any user data you might have, often a big deal!)
CDK's RemovalPolicy is equivalent to CloudFormation's DeletionPolicy, which takes effect when a resource is removed from the CDK/CloudFormation code.
DESTROY: The default for most resources; deletes the actual resource if its code is removed from the CDK app.
RETAIN: Retains the actual resource if its code is removed from the CDK app.
SNAPSHOT: Also deletes the resource if its code is removed from the CDK app, but creates a snapshot before deleting (e.g., for an RDS cluster or EC2 volume).
These options apply when the resource is removed from the CDK code, not from AWS. If a resource created by CDK/CloudFormation is manually deleted, it can no longer be maintained by CDK and will result in errors unless its logical ID is changed (e.g., MyQueue is changed to MyQueueSomething). This results in the creation of a new queue and the removal of the old one; since the old queue no longer exists, its deletion is skipped.
```typescript
import * as sqs from 'aws-cdk-lib/aws-sqs';

// Changing the ID 'MyQueue' creates a new queue and removes the old one.
new sqs.Queue(this, 'MyQueue', {
  encryption: sqs.QueueEncryption.KMS_MANAGED,
});
```
If we mistakenly delete a resource manually outside CDK/CloudFormation and we want to continue managing it via CDK/CloudFormation, we have to manually create the resource with the same physical ID. Here are some more details.
Goal:
I need to create an AWS managed policy that contains ALLOW permissions for API actions on resources created in a pre-existing stack. No, I cannot modify the existing stack template and simply add a policy to it. I need to create a new stack that deploys a policy that enables actions on the existing stack's resources.
Solution:
Create a CDK project to generate and deploy this policy stack. Within this CDK project I want to load the existing stack and iterate over its resources adding permissions to my new stack's policy.
Problem:
I don't see any way to load an existing stack in CDK. I was hunting around for a "Stack.fromArn(...)" but don't see anything even similar.
Question:
Is there some obscure way to do this? Or is it simply not supported?
I have not tried it, but it looks like if you can access/look up at least one construct from the existing stack, you can use the method Stack.of(construct) https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_core.Stack.html#static-ofconstruct to look up the first stack scope in which the construct is defined. I am not sure, however, how you could iterate over the resources in the looked-up stack construct.
It might not be the best answer, but one option could be to export outputs for the resources in the existing stack that you want to include in the policy, and import these values in the new stack where you create the policy.
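In raw CloudFormation terms, that export/import approach could look roughly like this (the export name, table, and actions are hypothetical):

```yaml
# Template 1 - the existing stack: export the ARN of a resource to cover.
Outputs:
  TableArn:
    Value: !GetAtt MyTable.Arn
    Export:
      Name: existing-stack-TableArn
---
# Template 2 - the new policy stack: import the exported value.
Resources:
  ReadPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - dynamodb:GetItem
              - dynamodb:Query
            Resource: !ImportValue existing-stack-TableArn
```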
I am running a stack on CloudFormation that creates some resources like Route53, etc.
I want to be able to recreate only some of the resources with the same values.
For example, one of the stack events creates an image on ECR and I want to rebuild it: roll back that event and then create it again with the same parameters.
How can I do that?
It is not possible to specify parts of a stack to 'rebuild'.
For some resources, you can modify an attribute to trigger a redeployment. The documentation will say Update requires: Replacement.
For other resources, you could:
Remove the resource from the template file
Update the stack with the template, which will cause CloudFormation to attempt to remove the resource (if it still exists)
Restore the template to the previous contents
Update the stack again, which will cause CloudFormation to deploy the 'new' resources
Fairly new to CloudFormation templating, but all I am looking to do is create a template where I create an S3 bucket and import contents into that bucket from another S3 bucket in a different account (that is also mine). I realize CloudFormation does not natively support importing contents into an S3 bucket, and I have to use a custom resource. I could not find any references/resources that do such a task. Hoping someone could point out some examples, or maybe even some guidance as to how to tackle this.
Thank you very much!
Can't provide full code, but can provide some guidance. There are a few ways of doing this, but I will list one:
Create a bucket policy for the bucket in the second account. The policy should allow the first account (one with cfn) to read it. There are many resources on doing this. One from AWS is here.
Create a standalone Lambda function in the first account with an execution role allowing it to read the bucket from the second account. This is not a custom resource yet. The purpose of this Lambda function is to test the cross-account permissions and your code that reads objects from the bucket. It is a test function for sorting out all the permissions and polishing the object-copying code from one bucket to the other.
Once your Lambda function works as intended, modify it (or create a new one) as a custom resource in CFN. As a custom resource, it will need to take your newly created bucket in CFN as one of its arguments. For easier creation of custom resources, this aws helper can be used.
Note that the Lambda execution timeout is 15 minutes. Depending on how many objects you have, it may not be enough.
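Once the function works, the custom resource wiring in the template might look roughly like this (all names are hypothetical; the Lambda function resource, CopyFunction, and its role are omitted):

```yaml
Resources:
  DestinationBucket:
    Type: AWS::S3::Bucket

  # Custom resource that invokes the copy Lambda on create/update/delete.
  # Extra properties (the bucket names) are passed through to the function.
  CopyObjects:
    Type: Custom::S3Copy
    Properties:
      ServiceToken: !GetAtt CopyFunction.Arn
      SourceBucket: source-bucket-in-other-account
      DestinationBucket: !Ref DestinationBucket
```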
Hope this helps.
If Custom Resources scare you, then a simpler way is to launch an Amazon EC2 instance with a startup script specified via User Data.
The CloudFormation template can 'insert' the name of the new bucket into the script by referencing the bucket resource that was created. The script could then run an AWS CLI command to copy the files across.
Plus, it's not expensive. A t3.micro instance is about 1c/hour and it is charged per second, so it's pretty darn close to free.
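The "insert the name of the new bucket into the script" step might be sketched like this (the AMI ID, source bucket, and instance details are placeholders; an instance profile with S3 access would also be needed):

```yaml
Resources:
  ContentBucket:
    Type: AWS::S3::Bucket

  CopyInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      ImageId: ami-12345678   # placeholder AMI ID
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          # Copy the files into the newly created bucket, then shut down.
          aws s3 sync s3://source-bucket s3://${ContentBucket}
          shutdown -h now
```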
I have a CloudFormation template that creates an S3 bucket as part of a CloudFormation stack. In the new version of my template, I'm planning to migrate my application from S3 to EFS.
Is there a way to remove the S3 bucket resource from the template without having it deleted? Ideally, I would like my older users to still have the S3 bucket available after they upgrade, but new users to not have it at all. It looks like DeletionPolicies could help here, but the documentation says they only apply to stack deletion, not updates.
Going to elaborate on user3470009's answer.
The main, advertised purpose of the DeletionPolicy is to keep a resource when a stack is deleted. It's mentioned almost as an afterthought in the AWS docs for DeletionPolicy that it also functions during resource removal from a stack:
"Note that this capability also applies to stack update operations that lead to resources being deleted from stacks. For example, if you remove the resource from the stack template, and then update the stack with the template."
So the workflow to remove a resource from a stack without deleting the actual resource is:
Add "DeletionPolicy" : "Retain" to the resource declaration in your CF template
Apply changes by either saving in the UI or running aws cloudformation on the CLI or whatever other tool you use
Check in the UI that your resource has the correct changes. There are some gotchas about when CF doesn't update the metadata. See the docs link above
Remove the resource from your template
Apply changes. Watch the events log to see that it says DELETE_SKIPPED:
2018-10-15T15:32:32.956Z HostedZone DELETE_SKIPPED
Setting a DeletionPolicy of "Retain" will cause the bucket itself to remain after a stack update that deletes the resource.
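The one-line addition from step 1 of the workflow above, shown on a hypothetical bucket:

```yaml
Resources:
  LegacyBucket:
    Type: AWS::S3::Bucket
    # With Retain, removing this resource from the template later
    # produces DELETE_SKIPPED; the bucket itself is left in place.
    DeletionPolicy: Retain
```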
I came across this question with a slight variation: I needed to extract my bucket into another stack and could not delete it in the move. This method worked well:
Create a new stack with the bucket in question. (Note: you now have two stacks referencing the same bucket.)
Remove the bucket from the original stack. The resource is deleted from the original stack but not from S3, since it is still referenced in your new stack.
I also tested Houser's response above and confirmed the bucket will not be deleted if it contains files. While this works, it attempts to delete the bucket three times before it completes (and reports errors each time). Migrating to a new stack will not throw any errors.
When you remove a resource from your template, and update a stack from this template, the resources will be deleted. There is no way to avoid that.
Since your existing users will continue using the S3 bucket, I would recommend preserving the bucket in your template. Remove it when the bucket has been removed from your product completely.
If needed, you could version your template (old vs. new).
If you absolutely need to remove the bucket from your template, you may be able to use a loophole: when CloudFormation deletes a bucket, the bucket must be empty. If it is not empty, the bucket should be preserved while still being removed from your stack. You could experiment to see if this works for you; if it works in testing, you can try it in production.