I'm working in an environment where anyone with the necessary access is allowed to create an S3 bucket; however, we now have so many buckets that it is hard to keep track of who created each one. I know it is possible to tag the buckets with the owner's name, but I am looking for a more automated solution.
Is it possible to invoke a Lambda function every time a bucket is created? Or is it possible to track bucket creation with CloudTrail, so that system administrators get an SNS notification when an S3 bucket is created?
I know it is possible to configure S3 event notifications inside a bucket to trigger Lambda functions/CloudWatch metrics, but I need a trigger that covers every bucket in the account.
CloudTrail tracks all API actions occurring within an account. What you want to do is create a CloudWatch Events rule that triggers on the CreateBucket action, then have it invoke Lambda or publish an SNS notification.
See: Creating a CloudWatch Events Rule That Triggers on an AWS API Call Using AWS CloudTrail
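For illustration, a minimal boto3 sketch of such a rule (the rule name and topic ARN are placeholders, and the SNS topic's policy must separately allow events.amazonaws.com to publish, which is omitted here):

```python
import json
import boto3

events = boto3.client("events")  # CloudWatch Events / EventBridge

# Match CreateBucket calls recorded by CloudTrail.
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["CreateBucket"],
    },
}

events.put_rule(
    Name="notify-on-create-bucket",  # placeholder rule name
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)

# Route matching events to an SNS topic (placeholder ARN).
events.put_targets(
    Rule="notify-on-create-bucket",
    Targets=[{"Id": "sns-target",
              "Arn": "arn:aws:sns:us-east-1:123456789012:bucket-alerts"}],
)
```

Note that CloudTrail must actually be logging in the account for these events to fire at all.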
You can use EventBridge to get at these events via CloudTrail; the example there covers a CreateBucket request.
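On the Lambda side, a handler wired to that rule could pull the bucket name and creator out of the CloudTrail detail like this (a sketch; the field names follow the CloudTrail record format):

```python
def lambda_handler(event, context):
    # EventBridge delivers the CloudTrail record under "detail".
    detail = event["detail"]
    bucket = detail["requestParameters"]["bucketName"]
    creator = detail["userIdentity"].get("arn", "unknown")
    print(f"Bucket {bucket} created by {creator}")
```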
Scenario:
We have a S3 bucket in Region 1 (e.g. Oregon)
We have a Lambda function in Region 2 (e.g. Frankfurt)
We have configured S3 to send an event notification whenever an object is added to the bucket
Problem:
We need to invoke the Lambda in Region 2 using the S3 event notification generated in Region 1.
We are aware that cross-account S3 event configuration with Lambda is allowed, but how do we implement cross-region event delivery and Lambda invocation?
What we are thinking:
We thought of using SNS between S3 and Lambda, but we're not sure whether any other alternative is available or this is the only way. Any help is appreciated.
As you say, the most straightforward approach is to use SNS in the middle:
Region1 S3 notification has a Region1 SNS Topic Destination
Region2 Lambda has a cross-region subscription to the Region1 SNS Topic
S3 cannot send notifications directly cross-region.
You could also have S3 replicate the created objects to Region 2, at which point Region 2 S3 can notify the Region 2 Lambda directly. That may work if the objects are small and latency is not a big problem.
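A rough boto3 sketch of the SNS-in-the-middle wiring (names, regions, and ARNs are placeholders; the topic policy that lets S3 publish, and the Lambda permission that lets SNS invoke it, are omitted):

```python
import boto3

# Region 1 (e.g. Oregon): create the topic next to the bucket.
sns_r1 = boto3.client("sns", region_name="us-west-2")
topic_arn = sns_r1.create_topic(Name="s3-object-created")["TopicArn"]

# Cross-region subscription: the Region 2 (e.g. Frankfurt) Lambda.
sns_r1.subscribe(
    TopicArn=topic_arn,
    Protocol="lambda",
    Endpoint="arn:aws:lambda:eu-central-1:123456789012:function:processor",
)

# Point the bucket's notifications at the topic.
s3 = boto3.client("s3", region_name="us-west-2")
s3.put_bucket_notification_configuration(
    Bucket="my-region1-bucket",  # placeholder
    NotificationConfiguration={
        "TopicConfigurations": [
            {"TopicArn": topic_arn, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)
```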
I am trying to set up a Lambda function that scans for a lifecycle policy on every new S3 bucket that is created. If the function finds there is no lifecycle policy set, it adds a default policy I have defined in the function. The aim is to use the CloudWatch S3 CreateBucket event as the trigger.
I am able to run tests successfully, but when I create new S3 buckets, the function does not place the default lifecycle policy on them as intended. I added full admin access to the Lambda function's IAM role hoping to rule out any permission issues (as a test), but when I create new buckets, the CloudWatch event still fails to trigger the function.
It seems like I am missing something small, any suggestions? Thank you!
The problem was that CloudTrail was not configured to log the API calls. I had to use my root account to create a trail for S3. Once CloudTrail was configured, CloudWatch was able to deliver the logged S3 events to Lambda as a trigger.
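For anyone landing here, the function body itself can stay small. A sketch of the check-and-apply logic, assuming the CloudWatch CreateBucket event shape and an arbitrary 30-day expiration as the default:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# Placeholder default policy: expire objects after 30 days.
DEFAULT_POLICY = {
    "Rules": [{
        "ID": "default-expiry",
        "Filter": {"Prefix": ""},
        "Status": "Enabled",
        "Expiration": {"Days": 30},
    }]
}

def lambda_handler(event, context):
    bucket = event["detail"]["requestParameters"]["bucketName"]
    try:
        # Raises if the bucket has no lifecycle configuration yet.
        s3.get_bucket_lifecycle_configuration(Bucket=bucket)
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchLifecycleConfiguration":
            s3.put_bucket_lifecycle_configuration(
                Bucket=bucket, LifecycleConfiguration=DEFAULT_POLICY
            )
```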
I'm trying to create a Lambda function that will be triggered by any change made to any bucket in the S3 console. Is there a way to tie all create events from every bucket in S3 to my Lambda function?
It appears that in the creation of a Lambda function, you can only select one S3 bucket. Is there a way to do this programmatically, if not in the Lambda console?
There is at least one way: you can set up S3 event notifications, for each bucket you want to monitor, all pointing to a single SQS queue.
That SQS queue can then be the event source for your lambda function.
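A sketch of that fan-in, assuming the queue already exists and its access policy allows S3 to send messages (the queue ARN is a placeholder):

```python
import boto3

s3 = boto3.client("s3")
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:s3-events"  # placeholder

# Point every bucket's object-created events at the single queue.
# Caution: this call replaces any existing notification configuration.
for bucket in [b["Name"] for b in s3.list_buckets()["Buckets"]]:
    s3.put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration={
            "QueueConfigurations": [
                {"QueueArn": QUEUE_ARN, "Events": ["s3:ObjectCreated:*"]}
            ]
        },
    )
```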
If you are using an AWS SDK to upload to S3, there is a workaround: set up an API Gateway endpoint that triggers the Lambda whenever an upload to S3 succeeds.
By passing the bucket name and object key to the Lambda, you can also specify the destination bucket dynamically.
This is also helpful with nested prefixes.
e.g.
bucket/users/avatars/user1.jpg
bucket/users/avatars/thumbnails/user1-thumb.jpg
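The client side of that workaround might look like this (the endpoint URL is a made-up placeholder; the API Gateway route would proxy the payload to the Lambda):

```python
import boto3
import requests  # third-party: pip install requests

s3 = boto3.client("s3")
ENDPOINT = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/uploaded"  # placeholder

bucket, key = "bucket", "users/avatars/user1.jpg"
s3.upload_file("user1.jpg", bucket, key)

# Only notify the Lambda once the upload has actually succeeded.
requests.post(ENDPOINT, json={"bucket": bucket, "key": key})
```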
Yes you can. Assuming you only want to trigger the Lambda when new objects are created in a few buckets, you can do it via the AWS Console, the CLI, boto3, and other SDKs.
If new buckets are created over time and you also want to add them as event sources for the Lambda, you can create a CloudTrail API event source that triggers another Lambda to programmatically add these new buckets as event sources for the original Lambda, as sketched below.
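A sketch of that second Lambda, triggered by the CreateBucket CloudTrail event: it registers the new bucket against the original function (the function ARN is a placeholder, and the original Lambda must already grant s3.amazonaws.com permission to invoke it):

```python
import boto3

s3 = boto3.client("s3")
# Placeholder ARN of the original Lambda.
TARGET_LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:original"

def lambda_handler(event, context):
    bucket = event["detail"]["requestParameters"]["bucketName"]
    # Register the new bucket as an event source for the original Lambda.
    s3.put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration={
            "LambdaFunctionConfigurations": [
                {"LambdaFunctionArn": TARGET_LAMBDA_ARN,
                 "Events": ["s3:ObjectCreated:*"]}
            ]
        },
    )
```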
I am trying to figure out a way to trigger a Lambda on the creation or update of a Role in AWS.
The use case is that when a Role is created, we need to update our Identity Server with the new or changed Role.
I'm looking at CloudTrail and having mixed results. I could schedule a Lambda to run, but I'd prefer to make it more real-time.
Any ideas?
It sounds like CloudTrail is exactly the route AWS suggests. What issue did you run into?
1. AWS CloudTrail saves logs to an S3 bucket (object-created event).
2. Amazon S3 detects the object-created event.
3. Amazon S3 publishes the s3:ObjectCreated:* event to AWS Lambda by invoking the Lambda function, as specified in the bucket notification configuration. Because the Lambda function's access permissions policy includes permissions for Amazon S3 to invoke the function, Amazon S3 can invoke the function.
4. AWS Lambda executes the Lambda function by assuming the execution role that you specified at the time you created the Lambda function.
5. The Lambda function reads the Amazon S3 event it receives as a parameter, determines where the CloudTrail object is, reads the CloudTrail object, and then processes the log records in the CloudTrail object.
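For the Role use case, step 5 might look roughly like this: read the gzipped CloudTrail object named in the S3 event and pick out role-related records. This is only a sketch; the event names checked are the standard IAM ones, and the identity-server update is left as a print.

```python
import gzip
import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Each record is an S3 object-created notification for a CloudTrail log file.
    for rec in event["Records"]:
        bucket = rec["s3"]["bucket"]["name"]
        key = rec["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        trail = json.loads(gzip.decompress(body))
        for entry in trail["Records"]:
            if entry["eventName"] in ("CreateRole", "UpdateRole", "AttachRolePolicy"):
                # Replace this with the call that updates your identity server.
                print(entry["eventName"], entry["requestParameters"].get("roleName"))
```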
You can ensure that a log metric filter and alarm exist for IAM policy changes, and set up similar controls.
The alarm can publish a message to SNS, for example, and a Lambda can be triggered by that SNS message.
Real-time monitoring of API calls can be achieved by directing CloudTrail logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for changes made to Identity and Access Management (IAM) policies.
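If it helps, the filter and alarm could be created roughly like this (the log group name and topic ARN are placeholders, and the filter pattern is an abbreviated version that checks only two event names; a complete one would list more):

```python
import boto3

logs = boto3.client("logs")
cw = boto3.client("cloudwatch")

# Abbreviated pattern; add the remaining IAM event names as needed.
PATTERN = "{ ($.eventName = CreateRole) || ($.eventName = PutRolePolicy) }"

logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",  # placeholder
    filterName="iam-policy-changes",
    filterPattern=PATTERN,
    metricTransformations=[{
        "metricName": "IAMPolicyChanges",
        "metricNamespace": "Custom/Security",
        "metricValue": "1",
    }],
)

cw.put_metric_alarm(
    AlarmName="iam-policy-changes",
    MetricName="IAMPolicyChanges",
    Namespace="Custom/Security",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:iam-alerts"],  # placeholder
)
```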
I want to tag AWS resources like DynamoDB tables or EC2 instances right at the time of creation.
I will be using the id or name fields of the objects to tag the resources.
Is there any 'post-create' trigger available?
--
The current problem is that even though I have a script to tag AWS resources, I can't run it immediately after every resource creation, so I end up seeing a lot of untagged resources in billing.
You can do this through the AWS Service Catalog service, which has a capability for auto-tagging provisioned resources (see the AWS reference link). AutoTags are tags that identify the portfolio, product, and user that launched a product, and they are automatically applied by AWS Service Catalog to provisioned resources.
You can configure a lambda function to write a tag based on the CloudTrail event that is generated whenever a resource is created.
To get Lambda to run against a CloudTrail event, you need to set up CloudTrail to write events to an S3 bucket, then trigger the Lambda on the object-creation event in that bucket.
The Lambda reads the bucket name and key from the S3 event it receives, fetches the CloudTrail object, and determines whether a tag needs to be applied.
Check the AWS Documentation for further detail about triggering Lambda from CloudTrail.
Also, GorillaStack has published an example on GitHub of using Lambda to auto-tag newly created resources. You could use this as a basis for your solution.
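As a starting point, the tagging step at the heart of such a function might look like this. It is only a sketch for EC2 instances, and it assumes the surrounding code has already read the gzipped CloudTrail log object from S3, parsed it, and is iterating over its records:

```python
import boto3

ec2 = boto3.client("ec2")

def tag_from_record(record):
    """Apply an Owner tag based on a single parsed CloudTrail record."""
    if record["eventName"] != "RunInstances":
        return  # this sketch only handles newly launched EC2 instances
    owner = record["userIdentity"].get("arn", "unknown")
    ids = [item["instanceId"]
           for item in record["responseElements"]["instancesSet"]["items"]]
    # Tag each new instance with the identity that launched it.
    ec2.create_tags(Resources=ids, Tags=[{"Key": "Owner", "Value": owner}])
```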