Working with S3 buckets having no BucketName in AWS Lambda

Because of the global uniqueness requirement for S3 bucket names, using the optional BucketName property in the AWS::S3::Bucket resource is problematic. Essentially, if I insist on using BucketName, I need some way to append a GUID to it.
I can avoid this pain by omitting the BucketName property entirely, in which case CloudFormation reliably generates a unique name for me.
However, I face another problem: how do I work with this random bucket name in AWS Lambda/SAM/serverless.com/other code? I understand that CloudFormation templates can export the name, but how do I pass it to the Lambda code?
Is there a standard/recommended way of working with CloudFormation exports in AWS Lambda? The problem is not unique to S3 - e.g., AWS Amplify uses randomly generated DynamoDB table names, too.

If your Lambda function is created through CloudFormation, you can pass the bucket name to it using environment variables (the Environment key in SAM and CloudFormation). You can refer to the bucket name with !Ref if the bucket is declared in the same template, or with cross-stack references if it lives in a different stack. If you use cross-stack references, you won't be able to modify or delete the output value in the original stack until you remove all references to it. If you use !Ref, the Lambda configuration is also updated whenever the bucket name changes.
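For illustration, a minimal SAM sketch of the environment-variable approach (the resource and variable names here are hypothetical):

Resources:
  MyBucket:
    Type: AWS::S3::Bucket     # no BucketName, so CloudFormation generates a unique one
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      Environment:
        Variables:
          BUCKET_NAME: !Ref MyBucket   # resolves to the generated bucket name

Inside the function you then read the name from the environment (for example, process.env.BUCKET_NAME in Node.js).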
If your Lambda function isn't created through CloudFormation, you can use the SSM Parameter Store, as mentioned by Ervin in his comment. You can create an SSM parameter and read its value in your Lambda code.
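A minimal sketch of that, assuming a parameter named /myapp/bucket-name and the AWS SDK for JavaScript v3:

import { SSMClient, GetParameterCommand } from "@aws-sdk/client-ssm";

const ssm = new SSMClient({});

export const handler = async () => {
  // Read the bucket name that was published to Parameter Store at deploy time.
  const result = await ssm.send(new GetParameterCommand({ Name: "/myapp/bucket-name" }));
  const bucketName = result.Parameter?.Value;
  // bucketName can now be used in S3 client calls.
};

In practice you would cache the value across invocations rather than calling SSM on every request.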

Related

Refer to the latest version ID of an S3 object in CloudFormation

I want to refer to the latest version ID of an S3 object in my CloudFormation template; how should I refer to it?
I have the below variables as parameters in the CloudFormation template:
the stored S3 bucket: LambdaS3
the stored S3 object name (a zip file): Lambdafilename
The below code is my existing intrinsic function reference; how should I fix it?
Version: !GetAtt
- !Sub "arn:${AWS::Partition}:s3::::${LambdaS3}/${IoTProvisioningLambdafilename}.zip.Versions[?IsLatest].[VersionId]"
You can't use Sub with GetAtt. From the docs:
For the Fn::GetAtt logical resource name, you can't use functions. You must specify a string that's a resource's logical ID.
You must hardcode the full name of your S3 object. The only way around that is to use a macro or a custom resource.
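To illustrate what the hardcoded form looks like, a sketch assuming the version ID ends up in the Code block of an AWS::Lambda::Function (the version ID below is a placeholder you would copy from the S3 console):

Code:
  S3Bucket: !Ref LambdaS3
  S3Key: !Sub "${Lambdafilename}.zip"
  S3ObjectVersion: "PLACEHOLDER-VERSION-ID"   # paste the object's actual version ID here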

Enable Amazon S3 bucket to Trigger Lambda function using SAM

I want to trigger a Lambda function whenever a file is uploaded to an Amazon S3 bucket with a certain prefix and suffix using SAM. Right now I'm using this code, but it's giving the error
"The ARN is not well formed (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument)"
Edit:
This is working, but it's not giving an option to add a suffix or prefix.
In the NotificationConfiguration you're simply using !Ref HelloWorld to reference your function. However, as the documentation for AWS::Serverless::Function states:
When the logical ID of this resource is provided to the Ref intrinsic function, it returns the resource name of the underlying Lambda function.
If we look at the documentation for the LambdaConfiguration it states:
The Amazon Resource Name (ARN) of the AWS Lambda function that Amazon S3 invokes when the specified event type occurs.
If you simply change the !Ref HelloWorld to !GetAtt HelloWorld.Arn it should pass in the correct value.
Beware, however, of the remark on the NotificationConfiguration documentation: if you create the bucket at the same time as the notification configuration, you might end up with a circular dependency, since you also need to add an AWS::Lambda::Permission (or use the Events of AWS::Serverless::Function) to allow S3 to invoke your Lambda function.
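As for the prefix/suffix limitation mentioned in the edit: the S3 event source of AWS::Serverless::Function accepts a key filter. A sketch, assuming the bucket is declared in the same template as SrcBucket and hypothetical prefix/suffix values:

HelloWorld:
  Type: AWS::Serverless::Function
  Properties:
    Handler: app.handler
    Runtime: python3.9
    Events:
      FileUpload:
        Type: S3
        Properties:
          Bucket: !Ref SrcBucket        # must be an AWS::S3::Bucket defined in this template
          Events: s3:ObjectCreated:*
          Filter:
            S3Key:
              Rules:
                - Name: prefix
                  Value: uploads/
                - Name: suffix
                  Value: .csv

With this form SAM also generates the AWS::Lambda::Permission for you, so you don't have to wire up the NotificationConfiguration by hand.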

AWS CDK update/add lifecycle to existing S3 bucket using custom resource

I created an S3 bucket, but now I would like to update/add lifecycle policies to it using CDK.
Currently I can import this bucket into a new stack file.
const testBucket = s3.Bucket.fromBucketAttributes(this, 'TestBucket', {
  bucketArn: 'arn:aws:s3:::xxxxxx',
});
How can I use AwsCustomResource to update/add lifecycle policies? For example, for prefix = long, I want those objects to expire in 7 days, and for prefix = short, I want them to expire in 3 days.
Or is there a general way of updating an existing S3 bucket in a new stack with CDK?
The best option for this is probably to add the bucket to the stack using resource importing. Don't confuse a custom resource with a CDK construct. A custom resource involves deploying a Lambda function and calling that function whenever the CloudFormation custom resource is created, updated, or deleted. A CDK construct is used to generate a CloudFormation template; it allows you to combine different resources into a single bit of code, and allows some logical decisions based on the input values.
Steps to import:
1. Make sure your current CDK code matches what is deployed. You cannot import resources when there are other changes.
2. Add the S3 bucket to your CDK code as if you were creating it for the first time (see the sketch below). Make sure all the settings are the same as what is currently deployed. This is important because the import process doesn't validate that what you have configured matches what you are importing.
3. Run cdk synth to generate the CloudFormation template that includes the S3 bucket.
4. In the CloudFormation console, locate the stack that represents the CDK stack you are working with.
5. Select Import resources into stack from the Stack actions menu.
6. Follow the prompts. When asked for the template, select Upload a template file and choose the template created by the cdk synth command (hint: it will be in cdk.out). You will be prompted for the bucket name; this should be the bucket you want to add to this stack.
Once that is done you can modify the bucket as if it were created by your stack.
One thing to note: CDK includes some metadata in the CloudFormation template. This value must be the same as what is currently deployed, or it will be seen as a change and you won't be able to perform the import. You can copy the value from what is currently deployed and manually edit the template created by cdk synth to match it.
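Once the import has gone through, adding the lifecycle rules from the question is an ordinary CDK change. A sketch of what that might look like (the bucket name is the placeholder from the question; note that for the import itself the declared settings must first match what is deployed, so the rules go in a follow-up deployment):

import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';

// The bucket is now managed by this stack, so it can be declared with the
// desired lifecycle configuration like any other owned resource.
const testBucket = new s3.Bucket(this, 'TestBucket', {
  bucketName: 'xxxxxx',   // the real name of the imported bucket
  lifecycleRules: [
    { prefix: 'short', expiration: cdk.Duration.days(3), enabled: true },
    { prefix: 'long', expiration: cdk.Duration.days(7), enabled: true },
  ],
});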
You need to access the CfnBucket reference of the testBucket and add lifecycle rules to it:
import { Duration } from 'aws-cdk-lib';

const testBucket = s3.Bucket.fromBucketAttributes(this, 'TestBucket', {
  bucketArn: 'arn:aws:s3:::xxxxxx',
});
testBucket.addLifecycleRule({ prefix: 'short', expiration: Duration.days(3), enabled: true });
testBucket.addLifecycleRule({ prefix: 'long', expiration: Duration.days(7), enabled: true });

Fetching secrets by just the name when using AWS SecretsManager in CDK

I am trying to fetch pre-existing secrets from the aws-secretsmanager module in CDK, and from the documentation here, the suggestion is:
If you need to use a pre-existing secret, the recommended way is to manually provision the secret in AWS SecretsManager and use the Secret.fromSecretArn or Secret.fromSecretAttributes method to make it available in your CDK application.
However, both methods require the ARN to fetch the secret. I am not sure it is a good idea to hardcode ARNs and check them into the git repo. Is there instead a way to fetch the secrets by name alone, since we already have the account details available in the profile used by the CDK?
At least as of the current version (1.38.0), it's not possible. An alternative is to save the secret ARN in the SSM Parameter Store and look it up by its SSM key in the code.
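A sketch of that workaround, assuming the ARN was stored under the hypothetical parameter name /myapp/db-secret-arn (fromSecretPartialArn is the successor of Secret.fromSecretArn in later CDK releases):

import * as ssm from 'aws-cdk-lib/aws-ssm';
import * as secretsmanager from 'aws-cdk-lib/aws-secretsmanager';

// Look up the secret's ARN from Parameter Store instead of hardcoding it.
const secretArn = ssm.StringParameter.valueForStringParameter(this, '/myapp/db-secret-arn');
const secret = secretsmanager.Secret.fromSecretPartialArn(this, 'DbSecret', secretArn);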
Putting full ARNs in CFN should not be a concern. Since you are creating these secrets ahead of time, their name, account, and region will be known. If you wish, however, you can still use the CFN pseudo parameters for partition, region, and account (AWS::Partition, AWS::Region, AWS::AccountId, or the CDK equivalent).
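In CDK terms that might look like the following sketch (the secret name my-app/db-password is hypothetical; only the name is hardcoded, while partition, region, and account come from the stack):

import * as cdk from 'aws-cdk-lib';
import * as secretsmanager from 'aws-cdk-lib/aws-secretsmanager';

const stack = cdk.Stack.of(this);
// Build a partial ARN (no random suffix) from the stack's pseudo parameters.
const secret = secretsmanager.Secret.fromSecretPartialArn(
  this,
  'DbSecret',
  `arn:${stack.partition}:secretsmanager:${stack.region}:${stack.account}:secret:my-app/db-password`,
);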

CloudFormation template fails because the S3Bucket resource already exists

I have created an S3 bucket with CloudFormation; let's say the bucket name is S3Bucket.
I don't want this bucket to get deleted if I delete the stack, so I added a DeletionPolicy of Retain.
Now the problem is: if I run the stack again, it complains that the S3Bucket name already exists.
If the bucket already exists, it should not complain.
What can I do about this?
Please help
I faced this in the past, and what I did to resolve it was to create a common AWS CloudFormation template/stack which creates all our common resources that are static (treat it like a bootstrap template).
Usually I put the creation of S3 buckets, VPCs, networking, databases, etc. in this template.
You can then create other AWS CloudFormation templates/stacks for the rest of your resources, which are dynamic and change often, like Lambdas, EC2, API Gateway, etc.
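A sketch of how the bootstrap stack can hand the bucket over to the dynamic stacks via an export (all names hypothetical):

# bootstrap template
Resources:
  SharedBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
Outputs:
  SharedBucketName:
    Value: !Ref SharedBucket
    Export:
      Name: shared-bucket-name

# dynamic template, e.g. as a Lambda environment variable
Environment:
  Variables:
    BUCKET_NAME: !ImportValue shared-bucket-name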
S3 bucket names are globally unique (e.g., if I have a bucket named s3-test in my AWS account, you cannot have a bucket with the same name in yours).
The only way to use the same name is to delete the bucket, or to rework your CloudFormation template and use the newer CloudFormation feature to import existing resources:
https://aws.amazon.com/blogs/aws/new-import-existing-resources-into-a-cloudformation-stack/