I did some cleanup in my S3 buckets and deleted an S3 bucket with a weird name. Now my CDK stacks are in a weird state.
I have some CDK stacks running.
$ cdk ls shows:
LambdaHoroscrape
I destroyed the stack with these commands:
cdk destroy
cdk destroy LambdaHoroscrape
Are you sure you want to delete: LambdaHoroscrape (y/n)? y
LambdaHoroscrape: destroying...
LambdaHoroscrape: destroyed
However, the stack LambdaHoroscrape is still present; cdk ls confirms it.
How can I properly delete this CDK stack?
Context: I wanted to delete the stack because my deployment (cdk deploy) showed this cryptic message:
[%] fail: No bucket named 'cdktoolkit-stagingbucket-zd83596pa2cm'. Is account xxxxx bootstrapped?
I bootstrapped my account with
cdk bootstrap aws://{account_number}/{region}
Others have encountered this cryptic error as well:
https://github.com/aws/aws-cdk/issues/6808
In the end, because of this error and my eagerness to destroy the stack, I lost two weeks' worth of DynamoDB data.
The message is caused by the fact that you deleted the CDK asset bucket created during bootstrapping. You'll need to re-bootstrap your environment to deploy there.
As for deleting: CDK deploys CloudFormation stacks, so a sure way to delete something is to go to the CloudFormation console and delete the stack there.
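If you prefer the CLI, both steps can be done from the terminal. A minimal sketch, reusing the stack name and bootstrap command from the question (credentials and region are assumed to come from your default configuration):
aws cloudformation delete-stack --stack-name LambdaHoroscrape
cdk bootstrap aws://{account_number}/{region}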
Related
I am following this tutorial and I have a problem with cleaning up the infra in the target account. The flow in detail:
1. Developer commits to GitHub -> DeploymentAccount:Pipeline pulls the code, then deploys the AWS CloudFormation stack to TargetAccount:CloudFormation. The Test CDK Pipeline deploys the stack in the TestAccount, which is the desired behavior.
2. But when we want to clean up with cdk destroy --all, it only destroys the Test CDK Pipeline; the stacks in the Test account still remain.
So my question is: how do we destroy all stacks?
The solution is mentioned in the tutorial you linked:
Clean up
Delete stacks using the command cdk destroy --all. When you see the following text, enter y, and press enter/return.
ProdDataLakeCDKBlogInfrastructurePipeline,
DevDataLakeCDKBlogInfrastructurePipeline (y/n)?
Note: This operation deletes stacks only in the central deployment account
To delete stacks in development account, log onto Dev account, go to AWS CloudFormation console and delete the following stacks:
Dev-DevDataLakeCDKBlogInfrastructureVpc
Dev-DevDataLakeCDKBlogInfrastructureS3BucketZones
Dev-DevDataLakeCDKBlogInfrastructureIam
Note:
Deletion of Dev-DevDataLakeCDKBlogInfrastructureS3BucketZones will delete the S3 buckets (raw, conformed, and purpose-built). This
behavior can be changed by modifying the retention policy in s3_bucket_zones_stack.py
To delete stacks in test account, log onto ~~Dev~~ Test account, go to AWS CloudFormation console and delete the following stacks:
Test-TestDataLakeCDKBlogInfrastructureVpc
Test-TestDataLakeCDKBlogInfrastructureS3BucketZones
Test-TestDataLakeCDKBlogInfrastructureIam
Note:
The S3 buckets (raw, conformed, and purpose-built) have retention policies attached and must be removed manually when they are
no longer needed.
To delete stacks in prod account, log onto ~~Dev~~ Prod account, go to AWS CloudFormation console and delete the following stacks:
Prod-ProdDataLakeCDKBlogInfrastructureVpc
Prod-ProdDataLakeCDKBlogInfrastructureS3BucketZones
Prod-ProdDataLakeCDKBlogInfrastructureIam
Note:
The S3 buckets (raw, conformed, and purpose-built) have retention policies attached and must be removed manually when they are
no longer needed.
It erroneously says that you have to log into the Dev account for Test and Prod; I have corrected it in the quote.
Alternatively, you can call cdk destroy --all with the --profile flag and set it to the dev/test/prod AWS credentials profile.
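For example, assuming profiles named dev, test, and prod exist in your AWS credentials file (the profile names are placeholders):
cdk destroy --all --profile dev
cdk destroy --all --profile test
cdk destroy --all --profile prod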
I was testing the AWS SAM functionality and encountered an issue.
If you manually delete a resource that was originally created by the SAM template, subsequent SAM deployments fail. I understand that manually deleting a resource created by SAM is not good practice, but this was just a test.
Error
Is there any way to fix this?
AWS SAM uses CloudFormation underneath to create the various resources.
How do I update an AWS CloudFormation stack that's failing because of a resource that I manually deleted?
If you delete a resource from an AWS CloudFormation stack, then you must remove the resource from your AWS CloudFormation template. Otherwise, your stack fails to update, and you get an error message.
Similar post: Function not found after manually deleting a function in a SAM CloudFormation stack
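In practice that means deleting the orphaned resource's definition from the SAM template (template.yaml by default) so the template matches what actually exists, then redeploying:
sam deploy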
I'm currently trying to destroy a workspace. I know that some buckets have a 'do not destroy' type tag applied to them, so when I ran terraform destroy for the first time, I got an Instance cannot be destroyed error for two buckets:
Resource module.xxx.aws_s3_bucket.xxx_bucket has
lifecycle.prevent_destroy set, but the plan calls for this resource to be
destroyed. To avoid this error and continue with the plan, either disable
lifecycle.prevent_destroy or reduce the scope of the plan using the -target
flag.
So I navigated to the AWS console, deleted them manually, and then ran terraform destroy again. Now it's complaining about one of the buckets that I removed manually: Failed getting S3 bucket: NotFound: Not Found. The other one seems fine.
Does anyone know how to resolve this, please? Thanks.
If you removed the resource through an action outside of Terraform (in this situation, a bucket deleted manually through the console), then you need to update the Terraform state correspondingly. You can do this with the terraform state subcommand. Given your listed example of a resource named module.xxx.aws_s3_bucket.xxx_bucket, it would look like:
terraform state rm module.xxx.aws_s3_bucket.xxx_bucket
You can find more info in the documentation.
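If you are unsure of the exact resource address to remove, terraform state list prints every address tracked in the state; for example (the grep filter is just an illustration):
terraform state list | grep xxx_bucket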
I'm finding that cdk tries to recreate S3 buckets every time I deploy. If I don't specify a bucket name, it generates a new junk bucket name every time. If I do specify a name, it refuses to deploy because the bucket already exists. How can I make it "upsert" a bucket?
Here's the code I'm using:
const dataIngestBucket = new Bucket(this, 'data-lake', {
  bucketName: `${this.props.environmentName}-my-company-data-lake`
});
Since I can't see which language you want to use, I'll give an answer using Python. It can easily be adapted to any other language.
Please refer to the aws_cdk.aws_s3.Bucket class.
There you will find parameters to specify when creating the bucket that let you reach your goal, namely auto_delete_objects=True and removal_policy=cdk.RemovalPolicy.DESTROY.
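For comparison, here is a minimal sketch of the same idea in TypeScript, matching the code in the question (this assumes aws-cdk-lib v2; the bucket name is carried over from the question):
import { RemovalPolicy } from 'aws-cdk-lib';
import { Bucket } from 'aws-cdk-lib/aws-s3';

const dataIngestBucket = new Bucket(this, 'data-lake', {
  bucketName: `${this.props.environmentName}-my-company-data-lake`,
  // Delete the bucket when the stack is destroyed instead of retaining it.
  removalPolicy: RemovalPolicy.DESTROY,
  // Empty the bucket first, since CloudFormation cannot delete a non-empty bucket.
  autoDeleteObjects: true,
});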
CDK updates your stack's resources automatically when the CDK code changes.
For example, when you execute a CDK stack that creates a bucket for the first time, the bucket is created with the provided configuration.
When you then update your CDK code to, say, change the bucket's lifecycle policy or add CORS as part of the same stack, the stack update automatically updates the bucket; it is not recreated, because CloudFormation knows this is an update to an existing stack.
In your case, it seems the stack is being re-created after a removal while the stack's resources still exist. This causes CloudFormation to create a new stack and try to create resources that were not removed when the old stack was destroyed.
Generally, issues occur when a stack update fails and the stack is left in a rollback state, for example. In that case, a redeploy would try to create the bucket again and fail.
In that case, a possible option is to:
Delete the buckets
Delete the stack
Redeploy to re-create
Often we do not want to delete the resources, as they contain data. In that case, you can use another library such as boto3 (for Python) in the CDK code to check whether the resource exists, and only create it via CDK if it does not. This stops the CDK code from attempting to create the bucket when it already exists (CDK itself cannot be used to check whether, say, an S3 resource already exists; at least I have not seen how to achieve this).
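The answer suggests boto3 for Python; purely as an illustration, a similar pre-flight check in TypeScript could use the AWS SDK (the helper name bucketExists is hypothetical, and wiring the async check into your CDK app is up to you):
import { S3Client, HeadBucketCommand } from '@aws-sdk/client-s3';

// Hypothetical helper: returns true if the bucket exists and is accessible.
async function bucketExists(bucketName: string): Promise<boolean> {
  try {
    await new S3Client({}).send(new HeadBucketCommand({ Bucket: bucketName }));
    return true;
  } catch {
    return false;
  }
}

// Sketch of the idea: create the construct only when the bucket is missing,
// otherwise reference the existing bucket instead of re-creating it:
// if (await bucketExists(name)) { Bucket.fromBucketName(this, 'data-lake', name); }
// else { new Bucket(this, 'data-lake', { bucketName: name }); }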
Another important point is the removal policy associated with the resource; see troubleshooting_resource_not_deleted:
My S3 bucket, DynamoDB table, or other resource is not deleted when I
issue cdk destroy
By default, resources that can contain user data have a removalPolicy
(Python: removal_policy) property of RETAIN, and the resource is not
deleted when the stack is destroyed. Instead, the resource is orphaned
from the stack. You must then delete the resource manually after the
stack is destroyed. Until you do, redeploying the stack fails, because
the name of the new resource being created during deployment conflicts
with the name of the orphaned resource.
If you set a resource's removal policy to DESTROY, that resource will
be deleted when the stack is destroyed.
However, even with the removal policy set to DESTROY, CloudFormation cannot delete a non-empty bucket. Extract from the same link below:
AWS CloudFormation cannot delete a non-empty Amazon S3 bucket. If you
set an Amazon S3 bucket's removal policy to DESTROY, and it contains
data, attempting to destroy the stack will fail because the bucket
cannot be deleted. You can have the AWS CDK delete the objects in the
bucket before attempting to destroy it by setting the bucket's
autoDeleteObjects prop to true.
Best practice is to:
Design stack resources in such a manner that they receive minimal updates that could cause failures. For example, a stack can be created with mostly static resources such as ECR and S3, which do not change much and are generally independent of the main application deployment stack, which is more likely to fail.
Avoid manually deleting stack resources, which breaks the stack's consistency.
If a stack is deleted, ensure the stack's owned resources are also deleted.
Get rid of fixed names!
With
final IBucket myBucket = Bucket.Builder.create(this, "mybucket")
        .bucketName(PhysicalName.GENERATE_IF_NEEDED).build();
(Java, but the language doesn't matter) you get a bucket with a generated name.
Described here: https://docs.aws.amazon.com/cdk/latest/guide/resources.html
Use it like this in your template (here a nested stack):
@Nullable NestedStackProps templateProps = NestedStackProps.builder()
    .parameters(new HashMap<String, String>() {{
        put("S3Bucket", myBucket.getBucketName());
    }})
    .build();
Or, if you still have a fixed name (get rid of it!), get the bucket with:
final IBucket myBucket = Bucket.fromBucketName(this, "mybucket", "my-hold-bucket-name");
But you cannot do things like:
if (!myBucket) then create
(pseudocode)
There is no resource check at compile time or runtime!
When I deploy using CloudFormation (aws cloudformation deploy --region $region --stack-name ABC), I get the error:
An error occurred (ValidationError) when calling the CreateChangeSet
operation:
Stack:arn:aws:cloudformation:stack/service/7e1d8c70-d60f-11e9-9728-0a4501e4ce4c
is in ROLLBACK_COMPLETE state and can not be updated.
This happens when stack creation fails. By default the stack will remain in place with a status of ROLLBACK_COMPLETE. This means it's successfully rolled back (deleted) all the resources which the stack had created. The only thing remaining is the empty stack itself. You cannot update this stack; you must manually delete it, after which you can attempt to deploy it again.
If you set "Rollback on failure" to disabled in the console (or set --on-failure to DO_NOTHING in the CLI command, if using create-stack), stack creation failure will instead result in a status of CREATE_FAILED. Any resources created before the point of failure won't have been rolled back.
If instead you were deploying updates to an existing (successfully created) stack, and the updates failed but were successfully rolled back, it will go back into its previous valid state (with a status of UPDATE_ROLLBACK_COMPLETE), allowing you to reattempt updates.
As @SteffenOpel points out, you can now specify that a stack should be deleted on failure by setting the --on-failure option (for create-stack only, not deploy) to DELETE in the CLI. This option is not yet available in the console at the time of writing (13/11/20).
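For example, with create-stack (the template path here is a placeholder):
aws cloudformation create-stack --stack-name ABC --template-body file://template.yaml --on-failure DELETE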
Run the following AWS CLI command to delete your stack:
aws cloudformation delete-stack --stack-name <<stack-name>>
Deleting the stack usually takes less than a minute; then try re-deploying it.
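If you want to block until the deletion has actually finished before redeploying, the CLI also provides a waiter:
aws cloudformation wait stack-delete-complete --stack-name <<stack-name>>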
Two solutions:
1. Manually delete all the objects in the S3 bucket.
(If the error still occurs, Stack:arn:aws:cloudformation:eu-west-3:624140032431:stack/as*****cbucket/f57c54f0-618a-11ec-afd7-06fc90426f3e is in ROLLBACK_COMPLETE state and can not be updated., move to the second solution.)
2. Create a new bucket to continue.
The point is that S3 bucket names are globally unique. The same thing happened to me; I was getting the same error while using CloudFormation.
In my case, the S3 bucket name was not unique: it had already been created. I changed the name of the bucket and it worked.