I'm trying to create an S3 bucket through CDK using the following code:
const myBucket = new Bucket(this, 'mybucket', {
  bucketName: `NewBucket`
});
Since S3 bucket names are globally unique, stack deployment fails when I try to deploy to another account.
I can change the bucket name manually every time I deploy, but is there a way to add 'NewBucket-${Stack.AWSaccountId}' dynamically, so that whenever the stack is deployed to any AWS account the bucket gets created without any error?
You can prepend the AWS account ID like this:
const myBucket = new Bucket(this, `${id}-bucket`, {
  // note: S3 bucket names must be lowercase
  bucketName: `${this.account}-newbucket`
});
But generally I'd recommend extending the default props, passing them into your stack, and providing a prefix/name if you want something specific for each environment, as the account ID is regarded by AWS as sensitive.
For example:
export interface Config extends cdk.StackProps {
  readonly params: {
    readonly environment: string;
  };
}
Then you can use ${props.params.environment} in your bucket name.
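For example, a minimal sketch (assuming CDK v2 imports; the stack shape and "my-project" prefix are illustrative):

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { Bucket } from 'aws-cdk-lib/aws-s3';

export class StorageStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: Config) {
    super(scope, id, props);

    new Bucket(this, 'DataBucket', {
      // e.g. "my-project-data-staging" or "my-project-data-production"
      bucketName: `my-project-data-${props.params.environment}`,
    });
  }
}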
If you do not specify the bucket name, CloudFormation will generate one for you that is unique across accounts.
Otherwise, generate your own hash and append it to the end of your bucket name string.
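For instance, a short deterministic suffix could be derived with Node's crypto module (a sketch; the inputs are illustrative):

import { createHash } from 'crypto';

// Derive a short, stable suffix from values unique to the deployment;
// hashing avoids putting the raw account ID into the bucket name itself.
function bucketSuffix(account: string, region: string): string {
  return createHash('sha256')
    .update(`${account}/${region}`)
    .digest('hex')
    .slice(0, 8);
}

// e.g. bucketName: `newbucket-${bucketSuffix(this.account, this.region)}`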
Edit: While you could programmatically pull the account number and feed it into the stack as a variable for your bucket name suffix, I wouldn't recommend attaching an account number to an S3 bucket name for security reasons.
I name my buckets projectprefix-name-stage, name the CloudFormation resource ProjectprefixNameStage (CamelCase), and avoid names with random suffixes.
So the CloudFormation name MyProjectDataBucketProduction becomes my-project-data-bucket-production.
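A tiny illustrative helper for that convention:

// Turn a CamelCase logical ID into the kebab-case bucket name convention.
function toBucketName(logicalId: string): string {
  return logicalId.replace(/([a-z0-9])([A-Z])/g, '$1-$2').toLowerCase();
}

console.log(toBucketName('MyProjectDataBucketProduction'));
// -> "my-project-data-bucket-production"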
Because of the global uniqueness requirement for S3 bucket names, using the optional BucketName property in the AWS::S3::Bucket resource is problematic. Essentially, if I insist on using BucketName, I need some way to attach a GUID to it.
I can avoid this pain if I omit the BucketName property entirely, so that CloudFormation reliably generates a unique name for me.
However, I face another problem: how do I work with this random bucket name in AWS Lambda/SAM/serverless.com/other code? I understand that CloudFormation templates can export the name, but how do I pass it to the Lambda code?
Is there a standard/recommended way of working with CloudFormation exports in AWS Lambda? The problem is not unique to S3 - e.g., AWS Amplify uses randomly generated DynamoDB table names, too.
If your Lambda is created through CloudFormation, you can pass the bucket name in using environment variables (the Environment key in SAM and CloudFormation). You can refer to the bucket name with !Ref if the bucket is defined in the same template, or with cross-stack references if it lives in a different stack. With cross-stack references, you won't be able to modify or delete the output value in the original stack until you remove all references to it; with !Ref, the Lambda is also updated if the bucket name changes.
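For example, a minimal SAM sketch passing the generated name to the function (logical IDs, runtime, and CodeUri are illustrative):

Transform: AWS::Serverless-2016-10-31
Resources:
  DataBucket:
    Type: AWS::S3::Bucket   # no BucketName, so CloudFormation generates one

  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      CodeUri: src/
      Environment:
        Variables:
          BUCKET_NAME: !Ref DataBucket   # resolves to the generated bucket name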
If your Lambda isn't created through CloudFormation, you can use the SSM Parameter Store, as mentioned by Ervin in his comment: create an SSM parameter and read its value in your Lambda code.
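A minimal Node.js (AWS SDK v3) sketch covering both cases; the BUCKET_NAME variable and the "/myapp/bucket-name" parameter name are illustrative:

import { SSMClient, GetParameterCommand } from "@aws-sdk/client-ssm";

const ssm = new SSMClient({});

async function resolveBucketName(): Promise<string> {
  // Set via the template's Environment key when deployed by CloudFormation/SAM.
  if (process.env.BUCKET_NAME) {
    return process.env.BUCKET_NAME;
  }
  // Fallback for Lambdas managed outside the stack: read an SSM parameter.
  const result = await ssm.send(
    new GetParameterCommand({ Name: "/myapp/bucket-name" })
  );
  return result.Parameter!.Value!;
}

export const handler = async (): Promise<string> => {
  const bucket = await resolveBucketName();
  console.log(`Using bucket: ${bucket}`);
  return bucket;
};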
I am saving Terraform state to an S3 bucket, following this doc: https://www.terraform.io/docs/language/settings/backends/s3.html
But it mentions that I can't use variables: "A backend block cannot refer to named values (like input variables, locals, or data source attributes)."
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}
The problem is that I need to run Terraform in different AWS accounts and regions, and my S3 bucket name includes the account ID and region. How can I make this work without manually updating the configuration file?
I can think of using Ansible for this scenario. It may not be the most efficient or canonical use of Ansible, but you can make use of Jinja2 templating. Create a Jinja2 template file like the one below.
terraform.tf.j2
terraform {
  backend "s3" {
    bucket = "{{ aws_account_id }}-{{ aws_region }}-terraform"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}
Then, when you create the infrastructure, you can feed the necessary values (here aws_account_id and aws_region) into terraform.tf.j2 and generate terraform.tf dynamically.
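For example, a minimal Ansible task that renders the template (paths and values are illustrative):

- name: Render Terraform backend configuration
  template:
    src: terraform.tf.j2
    dest: "{{ terraform_dir }}/terraform.tf"
  vars:
    aws_account_id: "123456789012"
    aws_region: "us-east-1"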
I have created an S3 bucket with a CloudFormation template. Let's say the bucket name is S3Bucket.
I don't want this bucket to be deleted if I delete the stack, so I added a DeletionPolicy of Retain.
Now the problem is: if I run the stack again, it complains that the S3Bucket name already exists.
If a bucket already exists, it should not complain.
What can I do about this? Please help.
I faced this in the past, and what I did to resolve it was to create a common CloudFormation template/stack that creates all our common resources, which are static (treat it like a bootstrap template).
I usually include in this template the creation of S3 buckets, VPCs, networking, databases, etc.
Then you can create other CloudFormation templates/stacks for the rest of your resources, which are dynamic and change often, like Lambdas, EC2 instances, API Gateway, etc.
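For the static bootstrap stack, the bucket can carry the retention setting from the question so it survives stack deletion; a minimal sketch:

Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain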
S3 bucket names are globally unique (e.g. if I have a bucket named s3-test in my AWS account, you cannot have a bucket with the same name in yours).
The only way to reuse the name is to delete the bucket, or to rewrite your CloudFormation template and use the newer CloudFormation feature to import existing resources:
https://aws.amazon.com/blogs/aws/new-import-existing-resources-into-a-cloudformation-stack/
I was trying to create a remote backend in an S3 bucket for my Terraform state.
provider "aws" {
version = "1.36.0"
profile = "tasdik"
region = "ap-south-1"
}
terraform {
backend "s3" {
bucket = "ops-bucket"
key = "aws/ap-south-1/homelab/s3/terraform.tfstate"
region = "ap-south-1"
}
}
resource "aws_s3_bucket" "ops-bucket" {
bucket = "ops-bucket"
acl = "private"
versioning {
enabled = true
}
lifecycle {
prevent_destroy = true
}
tags {
Name = "ops-bucket"
Environmet = "devel"
}
}
I haven't applied anything yet, so the bucket does not exist. Terraform asks me to do an init, but when I try to, I get:
$ terraform init
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error loading state: BucketRegionError: incorrect region, the bucket is not in 'ap-south-1' region
status code: 301, request id: , host id:
Terraform initialises the state backend before any other action, such as a plan or apply. Thus you can't have the creation of the S3 bucket that your state will be stored in defined at the same time as you define the state backend that uses it.
Terraform also won't create the S3 bucket for you to put your state in; you must create it ahead of time.
You can either do this outside of Terraform such as with the AWS CLI:
aws s3api create-bucket --bucket "${BUCKET_NAME}" --region "${BUCKET_REGION}" \
  --create-bucket-configuration LocationConstraint="${BUCKET_REGION}"
or you can create it via Terraform, as you are trying to do, by using local state for the first apply that creates the bucket, then adding the backend configuration and re-running terraform init so that Terraform migrates the state into your new S3 bucket.
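Roughly, that second approach looks like this (the behaviour of the migration prompt varies by Terraform version):

# 1. With no backend block in the configuration yet, create the state
#    bucket using local state.
terraform init
terraform apply

# 2. Add the backend "s3" block, then re-initialise; Terraform offers to
#    copy the local state into the bucket (newer versions also accept an
#    explicit -migrate-state flag).
terraform init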
As for the error message: S3 bucket names are globally unique across all regions and all AWS accounts. The error message is telling you that the GetBucketLocation call found a bucket named ops-bucket, but it is not in the ap-south-1 region, so someone (possibly in another account) already owns that name elsewhere. When creating your buckets, I recommend making them likely to be unique by concatenating the account ID, and possibly the region name, into the bucket name.
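If you manage the bucket itself with Terraform, the account ID and region can be pulled from data sources; a sketch (the "tfstate-" prefix is illustrative, and note the backend block itself cannot use these expressions):

data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

resource "aws_s3_bucket" "state" {
  # e.g. "tfstate-123456789012-ap-south-1"
  bucket = "tfstate-${data.aws_caller_identity.current.account_id}-${data.aws_region.current.name}"
}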
I created an S3 bucket using a CloudFormation template.
Now I want to access the S3 bucket name and endpoint from instance metadata.
Any help?
To enable applications running on an Amazon EC2 instance to access Amazon S3 (or any AWS service), you can create an IAM Role for the EC2 instance and assign it to the instance.
Applications on that instance that use the AWS SDK to make API calls to AWS will automatically have access to credentials with permissions described in the assigned role.
Your particular situation is slightly difficult because the CloudFormation template will create a bucket with a unique name, while your IAM Role will want to know the exact name of the Amazon S3 bucket. This can be accomplished by referring to the S3 bucket that was created within the CloudFormation template.
The template would need to create these resources:
AWS::S3::Bucket
AWS::IAM::Role to define the permissions
AWS::IAM::InstanceProfile to link the role to the EC2 instance
AWS::EC2::Instance that refers to the IAM Role
Within the definition of the IAM Role, the ARN for the S3 bucket would need to refer to the bucket created elsewhere in the template. This would require a bit of string manipulation to insert the correct value into the policy.
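For example, a minimal sketch of that wiring (logical IDs and the action list are illustrative):

Resources:
  AppBucket:
    Type: AWS::S3::Bucket

  InstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: AppBucketAccess
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:PutObject
                # !Sub splices the generated bucket's ARN into the policy
                Resource: !Sub "${AppBucket.Arn}/*"

  InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref InstanceRole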
If you run into difficulty, feel free to create another StackOverflow question showing the template that you have been working on, highlighting the part that is causing difficulty.