Is there a way to add a trigger to a Lambda function in CloudFormation for S3 events, where the S3 bucket already exists? (i.e., it is not created by said template)
I have tried to find an example of this online, but it appears that the only way to set this trigger in CF is by using the bucket notification configuration.
CloudFormation cannot do this directly. However, CloudFormation custom resources can call Lambda functions, and Lambda functions can do whatever you program them to do. You could write a Lambda function that creates or deletes whatever you need based on your own logic, for example one that attaches a notification configuration to the existing bucket (see the sketch after the link below).
See more:
AWS Lambda-backed Custom Resources - AWS CloudFormation
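For illustration, here is a minimal sketch of such a custom-resource handler in Python. It assumes the function is defined inline in the template (so the cfnresponse helper is available), and that the bucket name and target function ARN arrive as BucketName and TargetLambdaArn resource properties — names chosen here only for the example.

```python
# Minimal sketch of a Lambda-backed custom resource that attaches an
# ObjectCreated notification for an existing bucket to a target Lambda.
# Assumes an AWS::Lambda::Permission already allows s3.amazonaws.com to
# invoke the target function.
import boto3
import cfnresponse  # available to inline (ZipFile) Lambda functions

s3 = boto3.client("s3")

def handler(event, context):
    props = event["ResourceProperties"]      # passed from the template
    bucket = props["BucketName"]             # the pre-existing bucket
    target_arn = props["TargetLambdaArn"]    # function to trigger
    try:
        if event["RequestType"] in ("Create", "Update"):
            # Note: this call replaces the bucket's entire notification
            # configuration, so merge with any existing config in real use.
            s3.put_bucket_notification_configuration(
                Bucket=bucket,
                NotificationConfiguration={
                    "LambdaFunctionConfigurations": [{
                        "LambdaFunctionArn": target_arn,
                        "Events": ["s3:ObjectCreated:*"],
                    }]
                },
            )
        elif event["RequestType"] == "Delete":
            # Clear the notification configuration on stack deletion.
            s3.put_bucket_notification_configuration(
                Bucket=bucket, NotificationConfiguration={})
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as err:
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(err)})
```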
Related
I am trying to set up a Lambda function that scans for a lifecycle policy on every new S3 bucket that is created. If the function finds there is no lifecycle policy set, it adds a default policy I have defined in the function. The aim is to use the CloudWatch S3 CreateBucket event as the trigger.
I am able to run tests successfully, but when I create new S3 buckets, the function does not place the default lifecycle policy on the bucket as intended. As a test, I attached full admin access to the Lambda function's IAM role to rule out any permission issues, but when I create new S3 buckets the CloudWatch event still fails to trigger the function.
It seems like I am missing something small, any suggestions? Thank you!
The problem was that CloudTrail was not configured to log the S3 API calls. I had to use my root account to create a trail for S3. Once CloudTrail was configured, CloudWatch Events was able to pass the logged S3 events to Lambda as a trigger.
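For reference, a minimal sketch of the handler side, assuming it is wired to a CloudWatch Events / EventBridge rule that matches the CloudTrail CreateBucket call. The default rule shown (abort incomplete multipart uploads after 7 days) is only a placeholder for whatever policy you define.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# Illustrative default; replace with your own lifecycle rules.
DEFAULT_LIFECYCLE = {
    "Rules": [{
        "ID": "default-abort-incomplete-multipart",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
    }]
}

def handler(event, context):
    # CloudTrail events delivered through CloudWatch Events / EventBridge
    # carry the API call parameters under "detail".
    bucket = event["detail"]["requestParameters"]["bucketName"]
    try:
        s3.get_bucket_lifecycle_configuration(Bucket=bucket)
        print(f"{bucket} already has a lifecycle policy; nothing to do")
    except ClientError as err:
        if err.response["Error"]["Code"] != "NoSuchLifecycleConfiguration":
            raise
        s3.put_bucket_lifecycle_configuration(
            Bucket=bucket, LifecycleConfiguration=DEFAULT_LIFECYCLE)
        print(f"Applied default lifecycle policy to {bucket}")
```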
I'm working in an environment where anyone with the necessary access is allowed to create an S3 bucket; however, it's getting to a point where we have a lot of buckets and it is hard to keep track of who created the bucket. I know it is possible to tag the buckets with the owner name, but I am looking for a more automated solution.
Is it possible to invoke a Lambda function every time a bucket is created? Or is it possible to track bucket creation with CloudTrail, so that system administrators get an SNS notification when an S3 bucket is created?
I know it is possible to configure an S3 event notification inside a bucket to trigger Lambda functions or CloudWatch metrics, but I need a trigger that covers the whole S3 service in the account, not a single bucket.
CloudTrail records all API actions occurring within an account. What you want to do is create a CloudWatch Events rule that triggers on the CreateBucket action, then have it invoke a Lambda function or publish an SNS notification.
See: Creating a CloudWatch Events Rule That Triggers on an AWS API Call Using AWS CloudTrail
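As a rough sketch of what that rule looks like when created with boto3 (the rule name and SNS topic ARN below are placeholders, and CloudTrail must already be logging S3 management events for the pattern to match anything):

```python
import json
import boto3

events = boto3.client("events")

# Match the CloudTrail CreateBucket API call.
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["CreateBucket"],
    },
}

events.put_rule(Name="notify-on-create-bucket",
                EventPattern=json.dumps(pattern),
                State="ENABLED")

# Point the rule at an SNS topic; the topic's access policy must allow
# events.amazonaws.com to publish. A Lambda target works the same way,
# but needs a lambda:AddPermission grant instead.
events.put_targets(
    Rule="notify-on-create-bucket",
    Targets=[{
        "Id": "sns-admins",
        "Arn": "arn:aws:sns:us-east-1:123456789012:bucket-created",  # placeholder
    }],
)
```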
You can use EventBridge to receive these events via CloudTrail; the example shown there is of a CreateBucket request.
I am creating S3 buckets using AWS SAM and I want them to be populated with files after the SAM deploy. Is there a way to populate the S3 buckets with files by default from SAM? An idea I had was to trigger a Lambda when the SAM application is deployed that can populate the bucket.
Look into using a CloudFormation custom resource.
This allows you to invoke your own code (i.e. a Lambda function) during CloudFormation stack create, update, and delete events. I have seen people use this to populate an S3 bucket, as well as to ensure all files are deleted from the bucket on teardown (since CloudFormation will fail to delete a bucket that still contains objects).
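A minimal sketch of such a custom-resource handler, assuming it is deployed inline so the cfnresponse helper is available, and that the bucket name arrives as a BucketName property from the SAM/CloudFormation template. The seed file content here is purely illustrative.

```python
import boto3
import cfnresponse  # available to inline (ZipFile) Lambda functions

s3 = boto3.resource("s3")

# Illustrative seed content; replace with whatever files you need.
SEED_FILES = {"config/default.json": b"{}"}

def handler(event, context):
    bucket = s3.Bucket(event["ResourceProperties"]["BucketName"])
    try:
        if event["RequestType"] in ("Create", "Update"):
            for key, body in SEED_FILES.items():
                bucket.put_object(Key=key, Body=body)
        elif event["RequestType"] == "Delete":
            # Empty the bucket so CloudFormation can delete it afterwards.
            # (For versioned buckets, delete object versions as well.)
            bucket.objects.all().delete()
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as err:
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(err)})
```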
I'm trying to create a Lambda function that will be triggered by any change made to any bucket in the S3 console. Is there a way to tie all create events from every bucket in S3 to my Lambda function?
It appears that in the creation of a Lambda function, you can only select one S3 bucket. Is there a way to do this programmatically, if not in the Lambda console?
There is at least one way: you can set up an S3 event notification for each bucket you want to monitor, all pointing to a single SQS queue.
That SQS queue can then be the event source for your Lambda function.
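A minimal sketch of that fan-in setup with boto3, assuming the queue already exists and its access policy allows s3.amazonaws.com to send messages (the queue ARN is a placeholder):

```python
import boto3

s3 = boto3.client("s3")
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:s3-create-events"  # placeholder

for bucket in s3.list_buckets()["Buckets"]:
    # Caution: this replaces any notification configuration already on the
    # bucket, so merge with the existing one in real use.
    s3.put_bucket_notification_configuration(
        Bucket=bucket["Name"],
        NotificationConfiguration={
            "QueueConfigurations": [{
                "QueueArn": QUEUE_ARN,
                "Events": ["s3:ObjectCreated:*"],
            }]
        },
    )
```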
If you are using an AWS SDK to upload to S3, there is a workaround: set up an API Gateway endpoint that triggers the Lambda whenever an upload to S3 succeeds.
By passing the bucket name and object key to the Lambda, you can also specify the destination bucket dynamically.
This is also helpful with nested prefixes.
e.g.
bucket/users/avatars/user1.jpg
bucket/users/avatars/thumbnails/user1-thumb.jpg
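A minimal sketch of that flow, assuming a hypothetical API Gateway endpoint URL and using only the standard library plus boto3:

```python
import json
import urllib.request
import boto3

s3 = boto3.client("s3")
# Placeholder for your own API Gateway endpoint that fronts the Lambda.
ENDPOINT = "https://example.execute-api.us-east-1.amazonaws.com/prod/uploaded"

def upload_and_notify(bucket, key, path):
    s3.upload_file(path, bucket, key)  # raises on failure, so we only notify on success
    payload = json.dumps({"bucket": bucket, "key": key}).encode()
    req = urllib.request.Request(ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)        # the Lambda receives bucket + key

upload_and_notify("bucket", "users/avatars/user1.jpg", "user1.jpg")
```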
Yes you can. Assuming you only want to trigger the Lambda when new objects are created in a few buckets, you can do that via the AWS Console, the CLI, boto3, or another SDK.
If new buckets are created over time and you also want to add them as event sources for the Lambda, you can create a CloudTrail-based event rule that triggers another Lambda, which programmatically adds these new buckets as event sources for the original Lambda (a sketch follows below).
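A minimal sketch of that second, "wiring" Lambda, assuming it is triggered by the CloudTrail CreateBucket event and that the original function's ARN is known (the ARN below is a placeholder):

```python
import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")
# Placeholder ARN of the original function that processes S3 events.
TARGET_ARN = "arn:aws:lambda:us-east-1:123456789012:function:original-function"

def handler(event, context):
    bucket = event["detail"]["requestParameters"]["bucketName"]

    # Allow S3 to invoke the original function for this bucket.
    lambda_client.add_permission(
        FunctionName=TARGET_ARN,
        StatementId=f"s3-invoke-{bucket}",
        Action="lambda:InvokeFunction",
        Principal="s3.amazonaws.com",
        SourceArn=f"arn:aws:s3:::{bucket}",
    )

    # Point the new bucket's ObjectCreated events at the original function.
    s3.put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration={
            "LambdaFunctionConfigurations": [{
                "LambdaFunctionArn": TARGET_ARN,
                "Events": ["s3:ObjectCreated:*"],
            }]
        },
    )
```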
I want to tag AWS resources like DynamoDB tables or EC2 instances right at the time of creation.
I will be using the id or name fields of the objects to tag the resources.
Is there any 'post-create' trigger available?
--
The current problem is that even though I have a script to tag AWS resources, I can't run it immediately after every resource is created, so I end up seeing a lot of untagged usage in billing.
You can do this through the AWS Service Catalog service, which has the capability to auto-tag provisioned resources (see the AWS reference). AutoTags are tags that identify the portfolio, product, and user that launched a product, and they are automatically applied by AWS Service Catalog to provisioned resources.
You can configure a Lambda function to write tags based on the CloudTrail event that is generated whenever a resource is created.
To get Lambda to run against CloudTrail events, you need to set up your trail to write its log files to an S3 bucket, then trigger the Lambda on the object-created event in that bucket.
The Lambda reads the bucket and key from the S3 event it receives, fetches the CloudTrail log object, and determines whether a tag needs to be applied.
Check the AWS Documentation for further detail about triggering Lambda from CloudTrail.
Also, GorillaStack has published an example on GitHub of using Lambda to auto-tag newly created resources. You could use this as a basis for your solution.
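As a rough sketch of the CloudTrail-log approach described above (handling only RunInstances here for brevity; CloudTrail delivers its log files as gzipped JSON):

```python
import gzip
import json
import urllib.parse
import boto3

s3 = boto3.client("s3")
ec2 = boto3.client("ec2")

def handler(event, context):
    for record in event["Records"]:                  # S3 object-created records
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Fetch and decompress the CloudTrail log file.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        trail = json.loads(gzip.decompress(body))

        for entry in trail.get("Records", []):
            if entry.get("eventName") != "RunInstances":
                continue
            creator = entry["userIdentity"].get("arn", "unknown")
            instance_ids = [
                item["instanceId"]
                for item in entry["responseElements"]["instancesSet"]["items"]
            ]
            # Tag the newly launched instances with their creator.
            ec2.create_tags(Resources=instance_ids,
                            Tags=[{"Key": "CreatedBy", "Value": creator}])
```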