Why does Amazon suggest including the region in AWS IAM resource names?

IAM resources are global, meaning they aren't isolated within specific AWS regions. However, the documentation for an IAM role includes a warning:
Important
Naming an IAM resource can cause an unrecoverable error if you reuse the same template in multiple regions. To prevent this, we recommend using Fn::Join and AWS::Region to create a region-specific name, as in the following example ...
What kind of "unrecoverable error" are they talking about? Will cloudformation just fail to create the resource, or will things get stuck in some weird state?
The stacks we have which include IAM resources only contain IAM resources, so I suspect I may be able to ignore this warning.

The problem they want you to be aware of is that IAM names live in a global namespace, which can lead to conflicts if you don't namespace resources yourself.
Here's an example:
Stack 1 in eu-central-1 creates a role with the name AppAdmin
Stack 2 in eu-west-1 creates a role with the name AppAdmin - this stack will fail to create or update
This failure is usually nothing serious; it just means your deployment will be broken.
If it's an update of an existing stack, a rollback will be performed.
If it's a new stack, the stack will fail to create, and you'll need to delete the failed stack manually before rolling it out again (by default, the resources created before the failure are rolled back and deleted).
You can simply avoid this problem by namespacing the resources as suggested:
Stack 1 in eu-central-1 creates a role with the name AppAdmin-eu-central-1
Stack 2 in eu-west-1 creates a role with the name AppAdmin-eu-west-1
Now everybody is happy!
(The same thing is true for S3 Bucket names and other resources with a global namespace)
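As an illustration of the documentation's suggestion, here is a minimal sketch of a region-qualified role name in a template (the role name, logical ID, and trust policy are made up for the example):

```yaml
Resources:
  AppAdminRole:
    Type: AWS::IAM::Role
    Properties:
      # Becomes e.g. "AppAdmin-eu-central-1", so the same template
      # can be deployed in several regions without a name clash
      RoleName: !Join ["-", ["AppAdmin", !Ref "AWS::Region"]]
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
```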

Related

CREATE_FAILED | AWS::S3::Bucket - the invisible bucket already exists?

I am using aws-cdk and $ cdk deploy to deploy some stacks.
However, I get an error like this:
21:12:30 | CREATE_FAILED | AWS::S3::Bucket | S3BucketStaticResourceB341FA19
si2-s3-sbu-mytest-xxx-static-resource-5133297d-91 already exists
Normally with this kind of error, I can find the existing item in the AWS console.
In this case, however, $ aws s3 ls doesn't show a bucket with this name.
Why does this occur, and where should I fix it?
Generally, it is a good idea to avoid explicitly providing physical names for resources you create with CDK.
The documentation explains the reasoning:
Assigning physical names to resources has some disadvantages in AWS CloudFormation. Most importantly, any changes to deployed resources that require a resource replacement, such as changes to a resource's properties that are immutable after creation, will fail if a resource has a physical name assigned. If you end up in that state, the only solution is to delete the AWS CloudFormation stack, then deploy the AWS CDK app again. See the AWS CloudFormation documentation for details.
So if you introduce a change that requires your bucket to be replaced, you'll see the aforementioned error.
In your specific case, it is probably the S3-specific issue that bucket names are globally unique - across all accounts and regions - as stated by @Omar Rosadio in the comments. This makes naming your buckets yourself an especially bad idea.
If you don't pass the bucketName property when creating the bucket, CDK will generate a unique name for you, so you don't have to worry about this, and I suggest doing so.
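For comparison, in plain CloudFormation terms the same advice looks like this: omit the BucketName property, and CloudFormation derives a unique physical name from the stack name, the logical ID, and a random suffix (the logical ID below is illustrative):

```yaml
Resources:
  StaticResourceBucket:
    Type: AWS::S3::Bucket
    # No BucketName property: CloudFormation generates a physical
    # name like "<stack-name>-staticresourcebucket-<random-suffix>"
    Properties:
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
```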

Find or create s3 bucket in CDK?

I'm finding that cdk tries to recreate S3 buckets every time I deploy. If I don't specify a bucket name, it generates a new junk bucket name every time. If I do specify a name, it refuses to deploy because the bucket already exists. How can I make it "upsert" a bucket?
Here's the code I'm using:
const dataIngestBucket = new Bucket(this, 'data-lake', {
  bucketName: `${this.props.environmentName}-my-company-data-lake`
});
Since you haven't specified the language you want to use, I'll answer using Python. It can easily be adapted to other languages.
Please refer to aws_cdk.aws_s3.Bucket class.
There you will find parameters you can specify when creating the class which let you reach your goal, namely auto_delete_objects=True and removal_policy=cdk.RemovalPolicy.DESTROY.
CDK updates your stack's resources automatically when the CDK code changes.
For example, when you deploy a CDK stack that creates a bucket for the first time, the bucket is created with the provided configuration.
When you then update your CDK code - say, to change the bucket's lifecycle policy or add a CORS rule as part of the same stack - the stack update updates the bucket in place; it does not recreate the bucket, because CloudFormation knows it is updating an existing stack.
In your case, it seems the stack is being re-created after a removal while some of its resources still exist. This causes CloudFormation to try to create resources that were never removed when the stack was destroyed.
Issues also commonly occur when a stack update fails and the stack is left in a rollback state; a redeploy will then try to create the bucket again and fail.
In that case, possible option could be:
Delete the buckets
Delete the stack
Redeploy to re-create
Often we do not want to delete resources, as they contain data. In that case, you can use another library, such as boto3 for Python, inside the CDK code to check whether the resource exists, and only have CDK create it if it does not. (CDK itself cannot check whether, say, an S3 resource already exists - at least I have not seen a way to achieve this.)
Another important point is the removal policy associated with the resource. From the CDK troubleshooting guide ("My S3 bucket, DynamoDB table, or other resource is not deleted when I issue cdk destroy"):
By default, resources that can contain user data have a removalPolicy (Python: removal_policy) property of RETAIN, and the resource is not deleted when the stack is destroyed. Instead, the resource is orphaned from the stack. You must then delete the resource manually after the stack is destroyed. Until you do, redeploying the stack fails, because the name of the new resource being created during deployment conflicts with the name of the orphaned resource.
If you set a resource's removal policy to DESTROY, that resource will be deleted when the stack is destroyed.
However, even with the removal policy set to DESTROY, CloudFormation cannot delete a non-empty bucket. An extract from the same link:
AWS CloudFormation cannot delete a non-empty Amazon S3 bucket. If you set an Amazon S3 bucket's removal policy to DESTROY, and it contains data, attempting to destroy the stack will fail because the bucket cannot be deleted. You can have the AWS CDK delete the objects in the bucket before attempting to destroy it by setting the bucket's autoDeleteObjects prop to true.
Best practice is to:
Design stack resources so that they receive minimal updates that could cause failure. For example, a stack can be created with mostly static resources such as ECR and S3, which do not change much and are generally independent of the main application deployment stack, which is more likely to fail.
Avoid manually deleting stack resources, which breaks the stack's consistency.
If a stack is deleted, ensure the resources it owns are also deleted.
Get rid of fixed names!
With
final IBucket myBucket = Bucket.Builder.create(this, "mybucket")
    .bucketName(PhysicalName.GENERATE_IF_NEEDED)
    .build();
(Java, but the language doesn't matter) you get a randomly named bucket.
Described here: https://docs.aws.amazon.com/cdk/latest/guide/resources.html
Use it like this in your template (here a nested stack):
@Nullable NestedStackProps templateProps = NestedStackProps.builder()
    .parameters(new HashMap<String, String>() {{
        put("S3Bucket", myBucket.getBucketName());
    }})
    .build();
Or, if you still have a fixed name (get rid of it!), fetch the bucket with:
final IBucket myBucket = Bucket.fromBucketName(this, "mybucket", "my-hold-bucket-name");
But you cannot do things like:
if (!myBucket) then create
(pseudocode) - there is no resource check at compile time or runtime!

Cloudformation template fails due to S3Bucket resource already exists

I have created an S3 bucket with CloudFormation - let's say the bucket name is S3Bucket.
I don't want this bucket to be deleted if I delete the stack, so I added a DeletionPolicy of Retain.
Now the problem is that if I run the stack again, it complains that the S3Bucket name already exists.
If a bucket already exists, it should not complain.
What should I do about this?
Please help.
I faced this in the past, and what I did to resolve it was create a common CloudFormation template/stack that creates all our common resources, which are static (treat it like a bootstrap template).
I usually put the creation of S3 buckets, VPC, networking, databases, etc. in this template.
Then you can create other CloudFormation templates/stacks for the rest of your resources, which are dynamic and change often, such as Lambdas, EC2, API Gateway, etc.
S3 bucket names are globally unique (e.g. if I have a bucket named s3-test in my AWS account, you cannot have a bucket with the same name).
The only way to reuse the name is to delete the bucket, or to rework your CloudFormation template and use the newer CloudFormation feature to import the existing resource:
https://aws.amazon.com/blogs/aws/new-import-existing-resources-into-a-cloudformation-stack/
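As a rough sketch of the import flow (bucket name hypothetical): declare the existing bucket in the template with a DeletionPolicy, then import it via the console's "Import resources into stack" action or a change set of type IMPORT (aws cloudformation create-change-set --change-set-type IMPORT ...):

```yaml
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    # A DeletionPolicy is required on every resource being imported
    DeletionPolicy: Retain
    Properties:
      BucketName: my-existing-bucket   # hypothetical; must match the real bucket
```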

Only allow CloudFormation to delete resources created by CloudFormation

I want to create permissions for AWS CloudFormation.
I have to provide delete permission. How can I restrict in a way so that it can only delete resources which were created by CloudFormation?
AWS CloudFormation will only delete resources that it originally created.
When deploying a stack, CloudFormation will create resources using the permissions associated with the credentials that created the stack. Or, if an IAM Role is specified when the stack is created, it will use those credentials to create resources.
When deleting resources, it will use the same credentials.
It is not possible to create permissions that say "only delete resources that were created by CloudFormation" because the permissions are defined outside of CloudFormation.
I know that CloudFormation adds tags to most (all?) of the resources it creates, so you might be able to do some fancy stuff with tags, but it generally shouldn't be necessary because CloudFormation will only delete resources it originally created.
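If you do want to experiment with that tag-based approach, here is a hypothetical sketch of a managed policy that only allows deletion of resources carrying a CloudFormation stack tag. CloudFormation only applies its aws:cloudformation:* tags to resource types that support tagging, and not every service honors the aws:ResourceTag condition key for delete actions, so treat this as a starting point rather than a guarantee:

```yaml
Resources:
  CfnOnlyDeletePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          # Allow deleting security groups only if they carry the
          # tag CloudFormation applies to resources it creates
          - Effect: Allow
            Action:
              - ec2:DeleteSecurityGroup
            Resource: "*"
            Condition:
              StringLike:
                "aws:ResourceTag/aws:cloudformation:stack-name": "*"
```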

CloudFormation resource AWS::S3::Bucket doesn't show up in S3 console

Using my CloudFormation template, I was able to create two buckets and one bucket policy in my stack. It turns out my bucket policy had the wrong permissions, so I decided to delete the buckets and recreate them with a new template.
It doesn't look like CloudFormation has detected my deleted S3 buckets. The buckets still show up in my stack resources but are marked as "Deleted".
My stack is also marked as drifted. When I try to access the S3 buckets via the link in CloudFormation, I get "Error Data not found".
My stack has been in this state for about 16 hours. Any idea how to get CloudFormation to sync up with S3?
Your template isn't telling CloudFormation what resources to create; it's telling CloudFormation the state that you want.
It sounds like you created a stack with a template with a resource for a bucket.
You then realized a problem and deleted the bucket manually.
You then updated the stack with an updated template containing the same resource for the bucket (but with correct permissions).
When CloudFormation processed this updated template, it determined that it had already created the bucket and as a result it didn't recreate it.
You likely could have achieved your desired result without deleting the bucket by just updating the template.
Because you deleted the bucket, your stack is in a bad state. If you have the flexibility to do so, you could delete your stack and recreate it. When you delete it, CloudFormation may complain about not being able to delete the bucket; you may have to retry once, after which you'll get the option to skip that resource.