I am creating S3 buckets using AWS SAM and I want them to be populated with files after the SAM deploy. Is there a way to populate the S3 buckets with files by default from SAM? An idea I had was that there might be a way to trigger a Lambda function when the SAM application is deployed that can populate the bucket.
Look into using a CloudFormation custom resource.
This allows you to invoke your own code (i.e. a Lambda function) during CloudFormation stack creation, update, and deletion events. I have seen people use this to populate an S3 bucket, as well as to ensure all files are deleted from the bucket when you tear the stack down (since CloudFormation will fail to delete a bucket that still has files in it).
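As a rough sketch, the Lambda backing such a custom resource could look like the following (Python; the BucketName property, the seed file contents, and the handler name are all illustrative assumptions). On Create/Update it seeds the bucket, on Delete it empties it so the stack can remove the bucket, and in every case it reports the result to the pre-signed ResponseURL that CloudFormation passes in the event:

```python
# Sketch of a deploy-time seeding function (names and seed data are hypothetical).
import json
import urllib.request

import boto3

s3 = boto3.resource("s3")

SEED_FILES = {
    "config/default.json": b'{"env": "dev"}',
    "README.txt": b"Populated by the deploy-time custom resource.",
}


def send_response(event, context, status, reason=""):
    # Custom resources do not "return" normally: they PUT a JSON document to
    # the pre-signed URL CloudFormation supplies, or the stack waits until timeout.
    body = json.dumps({
        "Status": status,  # "SUCCESS" or "FAILED"
        "Reason": reason or f"See log stream {context.log_stream_name}",
        "PhysicalResourceId": event.get("PhysicalResourceId", context.log_stream_name),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    }).encode()
    req = urllib.request.Request(event["ResponseURL"], data=body, method="PUT")
    urllib.request.urlopen(req)


def handler(event, context):
    bucket = s3.Bucket(event["ResourceProperties"]["BucketName"])
    try:
        if event["RequestType"] in ("Create", "Update"):
            for key, data in SEED_FILES.items():
                bucket.put_object(Key=key, Body=data)
        elif event["RequestType"] == "Delete":
            bucket.objects.all().delete()  # empty the bucket so stack deletion can proceed
        send_response(event, context, "SUCCESS")
    except Exception as exc:
        send_response(event, context, "FAILED", reason=str(exc))
```

In the template you would then declare a Custom:: resource (e.g. a hypothetical Custom::SeedBucket) whose ServiceToken is this function's ARN and pass the bucket name as a property, so the seeding runs as part of every deploy.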
Related
I have created an S3 bucket and made it public using a SAM template. Is there any way I could upload objects to the bucket from the template?
I'm not familiar with SAM, but I know that CloudFormation cannot populate the contents of a bucket.
One workaround is to create a CloudFormation Custom Resource, which triggers an AWS Lambda function during the stack deployment. The Lambda function can then copy files, such as copying them between S3 buckets.
I have written such a function. If you do it well, you could pass a list of the files to copy as parameters in the CloudFormation template, so that the same function can be used in multiple templates. Writing your first custom resource function can be challenging, since it needs to 'return' differently from normal functions: it has to signal success or failure back to CloudFormation rather than simply returning a value.
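For illustration, a parameter-driven copy function might look roughly like this (Python). The property names SourceBucket, TargetBucket, and Keys are assumptions, and the cfnresponse helper shown here is only bundled automatically when the function code is written inline (ZipFile) in the template; otherwise you have to package it yourself:

```python
# Sketch: copy a list of keys passed in as custom resource properties.
import boto3
import cfnresponse  # helper bundled by CloudFormation for inline Lambda code

s3 = boto3.client("s3")


def handler(event, context):
    props = event["ResourceProperties"]
    try:
        if event["RequestType"] in ("Create", "Update"):
            for key in props["Keys"]:
                s3.copy_object(
                    Bucket=props["TargetBucket"],
                    Key=key,
                    CopySource={"Bucket": props["SourceBucket"], "Key": key},
                )
        # Even when there is nothing to do (e.g. on Delete), CloudFormation
        # still needs to hear back, or the stack operation will hang.
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as exc:
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(exc)})
```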
We have a pre-existing CloudFormation stack which created a few EC2 instances and a couple of S3 buckets along with their policies, but default encryption was not set on the buckets.
All I am trying to do is update the existing stack to set default encryption to AES-256 using the code below, but it is failing, stating "test-encryption-sbox4 already exists in stack". I'm not trying to create a new S3 bucket, just trying to update the existing bucket's configuration.
Is it valid to update the S3 encryption via the stack after the bucket has been created, or does it need to be taken care of at creation time?
Can anyone please suggest how to update an existing bucket's configuration via CloudFormation?
Code which I used:
How do you set SSE-S3 or SSE-KMS encryption on S3 buckets using Cloud Formation Template?
You are getting this error because your bucket is not under the control of CFN. Thus, CFN tries to re-create the bucket.
If the bucket was created outside of CFN, e.g. manually in the console, then you have to import it into the CloudFormation stack first. Only after that can you update it from CFN.
Without that, CFN will try to create the same bucket, which obviously results in your error.
I need to create an Amazon S3 bucket and a Lambda function using CloudFormation. I have the jar file locally. If I write resources for the S3 bucket and the Lambda function in a single template, I have to provide the S3 bucket and key in the Lambda resource. Stack creation fails, as the jar file doesn't exist in the bucket yet. So, does this mean that I have to create the bucket separately using a template, upload the jar file, and then create the Lambda function using another template?
Is there a way to create both the resources using one template?
For some resource properties that require an Amazon S3 location (a bucket name and filename), you can specify local references. Instead of manually uploading the files to an S3 bucket and then adding the location to your template, you can specify local artifacts in your template and then use the aws cloudformation package command to quickly upload them.
You can find more info here: Uploading Local Artifacts to an S3 Bucket
So, does this mean that I have to create a bucket separately using a template, upload the jar file, and then create a lambda function using another template?
Yes and no.
Normally, when you create a bucket it will be empty, and you can't populate it with vanilla CloudFormation. You would have to upload the jar file manually (e.g. using the AWS CLI, an SDK, or the console).
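For example, a one-off upload with the SDK could be as small as this (the bucket name and paths are placeholders):

```python
# Upload the local jar to the already-created bucket (names are placeholders).
import boto3

s3 = boto3.client("s3")
s3.upload_file("target/my-function.jar", "my-artifact-bucket", "lambda/my-function.jar")
```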
For a more advanced solution that keeps everything inside CloudFormation, you would have to look at creating your own custom resource, which would upload the jar file for you. For this, your jar would need to be available online so that it can be downloaded into your bucket.
So if you are just starting with CloudFormation, creating a custom resource will probably be too difficult at first.
Is there a way to add a trigger to a Lambda function in CloudFormation for S3 events, where the S3 bucket already exists (i.e., is not created by said template)?
I have tried to find an example of this online, but it appears that the only way to set this trigger in CF is by using the bucket notification configuration.
CloudFormation cannot do this directly. However, CloudFormation custom resources can call Lambda functions, and Lambda functions can do whatever you program them to do. You could write a Lambda function which creates or deletes some resource, or configures notifications on the existing bucket, based on whatever logic you want.
See more:
AWS Lambda-backed Custom Resources - AWS CloudFormation
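As a hedged sketch of that approach (Python), the custom resource's Lambda could call put_bucket_notification_configuration on the pre-existing bucket. BucketName and TargetLambdaArn are assumed property names, and the cfnresponse helper again assumes the function code is declared inline in the template:

```python
# Sketch: attach an s3:ObjectCreated:* notification to a bucket created outside the stack.
import boto3
import cfnresponse  # bundled for inline (ZipFile) Lambda code

s3 = boto3.client("s3")


def handler(event, context):
    props = event["ResourceProperties"]
    notification = {
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": props["TargetLambdaArn"],
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    }
    try:
        if event["RequestType"] in ("Create", "Update"):
            s3.put_bucket_notification_configuration(
                Bucket=props["BucketName"], NotificationConfiguration=notification
            )
        elif event["RequestType"] == "Delete":
            # Clear the notification again so the bucket is left as it was found.
            s3.put_bucket_notification_configuration(
                Bucket=props["BucketName"], NotificationConfiguration={}
            )
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as exc:
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(exc)})
```

Note that the target function also needs permission for S3 to invoke it (an AWS::Lambda::Permission with principal s3.amazonaws.com), otherwise S3 rejects the notification configuration.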
I want to automate the whole process: whenever a new image or video file arrives in my S3 bucket, I want to move those files to Akamai NetStorage using Lambda and Python boto3, or whatever the best possible way is.
You can execute a lambda based on s3 notifications (including file creation or deletion).
See aws walkthrough: https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html
Indeed, the Lambda function can be executed automatically as your file is dropped into the S3 bucket; there is a boto3-based blueprint and a trigger you can configure when creating the function. You can then read the content from the S3 bucket and propagate it to Akamai NetStorage, for example using this API: https://pypi.org/project/anesto/
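A minimal handler sketch for that trigger might look like the following (Python). Parsing the S3 event and downloading the object is standard; the actual NetStorage upload depends on whichever client you pick (e.g. the anesto package above), so it is only a labelled placeholder here:

```python
# Sketch: react to the S3 event, fetch the new object, hand it to a NetStorage upload step.
import urllib.parse

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        local_path = "/tmp/" + key.split("/")[-1]  # Lambda's writable scratch space
        s3.download_file(bucket, key, local_path)
        upload_to_netstorage(local_path, key)


def upload_to_netstorage(path, key):
    # Placeholder: swap in the upload call of the NetStorage client you use;
    # its exact API is not shown here.
    print(f"would upload {path} as {key}")
```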