EventBridge notification on an Amazon S3 folder

I need to start a Step Functions state machine execution when a file is uploaded to a folder inside a bucket. I know how to configure EventBridge at the S3 bucket level, but the same bucket can receive many uploads. I need to be notified only when an object is put into a particular folder inside the bucket. Is there any way to achieve this?

Here is another solution. Since folders do not technically exist in S3 and are merely a console convenience, "folders" in S3 are really just key prefixes.
You can trigger an EventBridge Notification on an S3 folder with the following event pattern:
{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "bucket": {
      "name": ["<bucket-name>"]
    },
    "object": {
      "key": [{
        "prefix": "<prefix/folder-name>"
      }]
    }
  }
}
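If you would rather wire this up programmatically, below is a minimal boto3 sketch of the same idea; the bucket name, prefix, state machine ARN, and role ARN are placeholders to replace with your own. Note that the bucket must have EventBridge notifications enabled (S3 console: Properties > Event notifications > Amazon EventBridge) for "Object Created" events to be delivered.

import json
import boto3

events = boto3.client("events")

# Placeholders -- replace with your own bucket, prefix, state machine, and IAM role.
BUCKET = "<bucket-name>"
PREFIX = "<prefix/folder-name>"
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:my-state-machine"
EVENTS_ROLE_ARN = "arn:aws:iam::123456789012:role/eventbridge-start-execution"

# The same pattern as above: "Object Created" events for keys under the prefix.
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {
        "bucket": {"name": [BUCKET]},
        "object": {"key": [{"prefix": PREFIX}]},
    },
}

events.put_rule(
    Name="start-sfn-on-upload",
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)

# The role must allow states:StartExecution on the state machine.
events.put_targets(
    Rule="start-sfn-on-upload",
    Targets=[{"Id": "sfn", "Arn": STATE_MACHINE_ARN, "RoleArn": EVENTS_ROLE_ARN}],
)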

Yes, you can use an EventBridge filter to deliver events only when the S3 object key's prefix matches your folder name.

Related

Using a wildcard in a custom event pattern for an S3 event

I want to send an SQS notification from an S3 bucket for a specific folder, but it seems wildcards are not supported in EventBridge or in S3 Event Notifications. Is there any way I can trigger my SQS queue?
Bucket name: my-s3-bucket
I want to send a notification when a file is added to a specific folder inside this bucket:
MyFolder/<username>/INPUT/abc.txt
and not to any other folder inside the bucket.
I also tried EventBridge with the event pattern below, but had no luck there either:
{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "bucket": {
      "name": ["my-s3-bucket"]
    },
    "object": {
      "key": [{"prefix": "*/INPUT/*"}]
    }
  }
}
Any suggestions?
If it's not directly supported, then you have to filter your events through a Lambda function:
S3 ---> Lambda (filter the events) ---> SQS
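As a rough sketch of that filtering Lambda (the queue URL is a placeholder, and the key test simply mirrors the MyFolder/<username>/INPUT/ shape from the question), the handler could look something like this:

import json
from urllib.parse import unquote_plus

import boto3

sqs = boto3.client("sqs")

# Placeholder queue URL -- replace with your own.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

def handler(event, context):
    # Invoked by an S3 event notification; forward only keys shaped like
    # MyFolder/<username>/INPUT/<file> to SQS.
    for record in event.get("Records", []):
        key = unquote_plus(record["s3"]["object"]["key"])
        parts = key.split("/")
        if len(parts) >= 4 and parts[0] == "MyFolder" and parts[2] == "INPUT":
            sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(record))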

AWS Cloudformation: How do I check if a bucket exists from within the Cloudformation template?

In my CloudFormation template, I'm trying to create an S3 bucket only if S3 doesn't already have a bucket that includes a certain keyword in its name. For example, if my keyword is 'picture', I only want this S3 bucket to be created if no bucket in S3 contains the word 'picture' in its name.
"Resources": {
"myBucket": {
"Condition" : "<NO_S3_BUCKET_WITH_'picture'_IN_ITS_NAME_IN_THIS_ACCOUNT>",
"Type": "AWS::S3::Bucket",
"Properties": {
<SOME_PROPERTIES>
}
},
<OTHER_RESOURCES>
}
Is this possible? If so, can it be done with other AWS resources (CloudFront distributions, etc.)?
Is this possible?
Not with plain CloudFormation. You would have to develop a custom resource to do this.
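For illustration only, a Lambda-backed custom resource handler could look roughly like the sketch below; the Keyword property and BucketExists output are made-up names, and cfnresponse is the helper module available to inline (ZipFile) Lambda code.

import boto3
import cfnresponse  # helper module available to inline (ZipFile) Lambda code

s3 = boto3.client("s3")

def handler(event, context):
    try:
        if event["RequestType"] == "Delete":
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
            return
        # "Keyword" is a made-up property passed from the template, e.g. "picture".
        keyword = event["ResourceProperties"].get("Keyword", "picture")
        names = [b["Name"] for b in s3.list_buckets()["Buckets"]]
        exists = any(keyword in name for name in names)
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {"BucketExists": str(exists).lower()})
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})

Keep in mind that CloudFormation Conditions cannot reference resource attributes (Fn::GetAtt), so in practice the custom resource would either perform the conditional creation itself or the check would run before deployment and feed the template as a parameter.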

Can I trigger an ECS/Fargate task from a specific file upload in S3?

I know that I can trigger a task when a file is uploaded (per https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatch-Events-tutorial-ECS.html); however, how can I trigger a task when a specific file is uploaded?
Amazon seems not to have anticipated people having multiple jobs watching the same bucket for different files :(
You can accomplish this with CloudWatch Events from CloudTrail Data Events.
Head over to CloudTrail and create a trail for your account.
For Apply trail to all regions, choose No.
Under Management events, for Read/Write events, select None.
Under Data events, select S3. Enter the S3 bucket name and folder name (prefix) to log data events for, and select Write (don't select Read).
Under Storage location, create a new bucket or provide an existing bucket to store the log files.
Select Create.
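If you prefer to script the trail, here is a rough boto3 equivalent under the same assumptions; the watched bucket, prefix, trail name, and log bucket are placeholders:

import boto3

cloudtrail = boto3.client("cloudtrail")

# Placeholders: the bucket/prefix being watched and a separate bucket for the trail's logs.
WATCHED_BUCKET = "your-bucket-name"
WATCHED_PREFIX = "your-folder/"
LOG_BUCKET = "my-cloudtrail-logs"  # needs a bucket policy that lets CloudTrail write to it

cloudtrail.create_trail(Name="s3-data-events", S3BucketName=LOG_BUCKET)

# Log only write data events for objects under the prefix; skip management events.
cloudtrail.put_event_selectors(
    TrailName="s3-data-events",
    EventSelectors=[{
        "ReadWriteType": "WriteOnly",
        "IncludeManagementEvents": False,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            "Values": ["arn:aws:s3:::" + WATCHED_BUCKET + "/" + WATCHED_PREFIX],
        }],
    }],
)

cloudtrail.start_logging(Name="s3-data-events")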
Next, create a CloudWatch Event rule that targets your ECS Task when the CloudTrail Data Event happens.
Head over to CloudWatch and Create a new Event rule.
For the Event Source, select Event Pattern.
Change the dropdown that says "Build event pattern to match events by service" and select "Custom Event Pattern".
Enter the event pattern below:
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutObject"],
    "requestParameters": {
      "bucketName": ["your-bucket-name"],  // the bucket where your events are happening
      "key": ["your-object-key"]           // the object key that should trigger your ECS task; note that it's an array
    }
  }
}
Customize the bucketName and key above as appropriate for your use.
For your target, select ECS Task, configure the task as appropriate.
Select Configure details, give the rule a name and set the State to Enabled, and click Create rule.
Now that your rule is enabled, when you upload an object with the specified key to the specified bucket, CloudWatch Events will trigger the ECS Task you specified.
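The same rule and ECS target can also be created with boto3; the sketch below assumes a Fargate task, and every ARN, subnet, and name in it is a placeholder:

import json
import boto3

events = boto3.client("events")

# Placeholder ARNs and names -- replace with your own.
CLUSTER_ARN = "arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster"
TASK_DEF_ARN = "arn:aws:ecs:us-east-1:123456789012:task-definition/my-task:1"
EVENTS_ROLE_ARN = "arn:aws:iam::123456789012:role/eventbridge-run-task"  # must allow ecs:RunTask

pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject"],
        "requestParameters": {
            "bucketName": ["your-bucket-name"],
            "key": ["your-object-key"],
        },
    },
}

events.put_rule(Name="run-task-on-upload", EventPattern=json.dumps(pattern), State="ENABLED")

events.put_targets(
    Rule="run-task-on-upload",
    Targets=[{
        "Id": "ecs-task",
        "Arn": CLUSTER_ARN,  # for an ECS target, the Arn is the cluster ARN
        "RoleArn": EVENTS_ROLE_ARN,
        "EcsParameters": {
            "TaskDefinitionArn": TASK_DEF_ARN,
            "TaskCount": 1,
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {"Subnets": ["subnet-0123456789abcdef0"]},
            },
        },
    }],
)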
Looks like you have a wildcard in your comment. To add to the event pattern from hephalump, you can indicate a prefix for the key as well, which will match any key with that prefix, not just a specific key:
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutObject"],
    "requestParameters": {
      "bucketName": ["your-bucket-name"],  // the bucket where your events are happening
      "key": [
        {"prefix": "path/to/key"}
      ]
    }
  }
}
S3 does not generate events for a particular object; rather, it generates events for actions that occur within a bucket. The S3 console lists the event types a bucket can send.
Note that you can use a prefix and/or suffix filter to get close to what you want, but individual objects cannot generate their own events. A good option, though, would be a Lambda function that filters on the object name and, if it matches what you want, runs the ECS task.
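A rough sketch of such a filtering Lambda, assuming a Fargate task and with the cluster, task definition, subnets, and key test all as placeholders:

import boto3

ecs = boto3.client("ecs")

# Placeholders -- replace with your cluster, task definition, subnets, and key test.
CLUSTER = "my-cluster"
TASK_DEFINITION = "my-task:1"
SUBNETS = ["subnet-0123456789abcdef0"]

def matches(key):
    # Whatever test identifies the specific object(s) you care about.
    return key.startswith("uploads/") and key.endswith(".csv")

def handler(event, context):
    # Invoked by an S3 event notification on the bucket; run the task only for matching keys.
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        if matches(key):
            ecs.run_task(
                cluster=CLUSTER,
                taskDefinition=TASK_DEFINITION,
                launchType="FARGATE",
                networkConfiguration={"awsvpcConfiguration": {"subnets": SUBNETS}},
            )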

How to create a custom event trigger to invoke a lambda whenever a new bucket is created?

I have a Lambda function in Python that I want to invoke whenever a new S3 bucket is created. I want to create a custom event trigger to invoke it. What would be the best way to implement this?
You can create a CloudWatch Events rule (see below) that triggers when a bucket is created or deleted and launches a Lambda function as its target.
In CloudWatch, create a rule and choose:
Service Name: Simple Storage Service (S3)
Event Type: Bucket Level Operations
and select Specific operation(s), specifying CreateBucket (and DeleteBucket, if you need it).
This will produce a "custom" event pattern similar to the one below.
{
  "detail-type": ["AWS API Call via CloudTrail"],
  "source": ["aws.s3"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": [
      "CreateBucket",
      "DeleteBucket"
    ]
  }
}
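The Lambda target then receives the CloudTrail record under detail; a minimal Python handler that pulls out the event name and bucket name might look like this (assuming, as is the case for CreateBucket calls, that requestParameters carries bucketName):

def handler(event, context):
    # The rule delivers the CloudTrail record under "detail".
    detail = event.get("detail", {})
    event_name = detail.get("eventName")  # "CreateBucket" or "DeleteBucket"
    bucket = detail.get("requestParameters", {}).get("bucketName")
    print(f"{event_name} for bucket {bucket}")
    # ... your logic here ...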
I could answer here, but have a look at this: How to Execute Lambda Functions on S3 Event Triggers
You can monitor new bucket creation with AWS Config or AWS CloudTrail and call a Lambda function for such an event.

CloudWatch event rule listening to an S3 bucket with a specific key path

Is there a way to listen to an S3 bucket but only inside a specific "folder"? For instance, if I had a folder named input, I would listen on "s3://bucket-name/folder1/*".
Right now it seems that you can only listen to the entire bucket. My issue is that I want to use the same bucket to trigger CloudWatch based on a specific key path, and of course all CloudWatch rules get triggered.
This is my flow:
CloudTrail (monitors s3://bucket/path) -> CloudWatch (event rule for any PUT in that s3://bucket/path) -> triggers Step Function -> Lambda functions
I also tried restricting the CloudWatch role to grant permissions only on that specific S3 bucket path, without luck.
This is my event rule:
{ "source": [ "aws.s3" ], "detail-type": [ "AWS API Call via CloudTrail" ], "detail": { "eventSource": [ "s3.amazonaws.com" ], "eventName": [ "PutObject" ], "requestParameters": { "bucketName": [ " bucketname" ] } } }
Is there any workaround?
As of this writing, I do not know of a way to accomplish this ON THE RULE. There could be a workaround on the rule, but I have not found it...
...HOWEVER:
This can be accomplished by using CloudTrail.
Remove the key from the event rule object you have, and keep the bucket name.
Go to CloudTrail. If all data events are turned on, disable them and create your own trail.
In CloudTrail, create a new trail and specify object- or bucket-level operations.
Enable S3 data events, assuming you want to listen for PutObject or similar.
Specify your bucket, and where it says Individual bucket selection, type in the bucket name AND the path you want to monitor: bucketname/folder1/folder2.
Specify whether you want read and write actions to be logged to the trail.
Now you have a trail that logs only that path. The CloudWatch rule, or EventBridge rule, can now specify the bucket and whatever operations you want to monitor.
Try adding
"requestParameters": { "bucketName": ["bucketname"], "key": ["folder1"] }
to your pattern; it could work.
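One caveat with that suggestion: an exact key match like "folder1" only fires for an object literally named folder1. EventBridge content filtering also supports prefix matching (as used in an earlier answer above), so a rule along these lines, sketched with boto3 using the bucket and folder names from the question, may be closer to the goal:

import json
import boto3

events = boto3.client("events")

# Prefix matching on the key: fires for any PutObject under folder1/ in the bucket.
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject"],
        "requestParameters": {
            "bucketName": ["bucketname"],
            "key": [{"prefix": "folder1/"}],
        },
    },
}

events.put_rule(Name="put-under-folder1", EventPattern=json.dumps(pattern), State="ENABLED")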