AWS EventBridge - Customize S3 event pattern

I have an S3 bucket with EventBridge events enabled, and the bucket uses the following structure:
s3://bucket/db/table/LOAD00001.csv
I need to create an event pattern that matches all new files whose filename contains "LOAD".
My event pattern looks like:
{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "bucket": {
      "name": [{
        "suffix": "-landing"
      }]
    },
    "object": {
      "key": ["*LOAD*"]
    }
  }
}
I have tried some combinations using prefix and suffix, but no success yet.

Event content filtering does not support wildcard pattern matching.
If prefix or suffix matching does not work, consider overfetching and using the event target to screen out unwanted events.
"key": [{"prefix": "LOAD"}]
Note that prefix matching runs against the full object key, so "LOAD" only matches objects at the root of the bucket; for keys like db/table/LOAD00001.csv the prefix would have to be "db/table/LOAD".

Related

EventBridge pattern invalid when I add a path prefix. "Event pattern is not valid. Reason: "name" must be an object or an array at..."

I am trying to create an EventBridge rule that triggers when objects are created under a path prefix in my bucket. When I write the event pattern without the path prefix, it works; when I add the path prefix, I get a failure. I am following the official documentation for the syntax, and another SO question seems to confirm what I'm doing, but the solution doesn't work.
I am creating the rule in the EventBridge console: Step 2 Build event pattern > Event pattern.
Error message:
Event pattern is not valid. Reason: "name" must be an object or an array at [Source: (String)"{"source":["aws.s3"],"detail-type":["Object Created"],"detail":{"bucket":{"name":"test-test-20230118"},"object":{"key":[{"prefix":"raw"}]}}}"; line: 1, column: 83]
Unsuccessful pattern:
{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "bucket": {
      "name": ["test-test-20230118"]
    },
    "object": {
      "key": [{
        "prefix": "raw"
      }]
    }
  }
}
Successful pattern without prefix:
{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "bucket": {
      "name": ["test-test-20230118"]
    }
  }
}
Your pattern will work if you modify the sample event to match the name and prefix you're filtering on. I've not seen that error, so I'm not sure what's going on, but I think it's related to the sample event you're testing your pattern against. Start again with the sample event (I copied the sample event from Event type -> AWS events, Sample events -> Object Created, and pasted it into "Enter my own") and update resources, bucket -> name, and detail -> object -> key so your pattern will match it.
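Patterns can also be checked outside the console with the TestEventPattern API; a quick sketch using boto3 (the bucket name and key come from the question, while the envelope fields are placeholders the API requires):

import json
import boto3

events = boto3.client("events")

pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {
        "bucket": {"name": ["test-test-20230118"]},
        "object": {"key": [{"prefix": "raw/"}]},
    },
}

# TestEventPattern requires the standard envelope fields on the event.
sample_event = {
    "id": "7bf73129-1428-4cd3-a780-95db273d1602",
    "detail-type": "Object Created",
    "source": "aws.s3",
    "account": "123456789012",
    "time": "2023-01-18T00:00:00Z",
    "region": "us-east-1",
    "resources": ["arn:aws:s3:::test-test-20230118"],
    "detail": {
        "bucket": {"name": "test-test-20230118"},
        "object": {"key": "raw/file.csv"},
    },
}

resp = events.test_event_pattern(
    EventPattern=json.dumps(pattern),
    Event=json.dumps(sample_event),
)
print(resp["Result"])  # True if the event matches the pattern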
I assume "raw" is a directory in your "test-test-20230118" bucket. If that is the case, include the trailing slash and use "raw/" as the prefix:
{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "bucket": {
      "name": ["test-test-20230118"]
    },
    "object": {
      "key": [{
        "prefix": "raw/"
      }]
    }
  }
}

Can special characters be escaped in the AWS EventBridge Target Input Transformer?

Short version: S3 events can contain non-ASCII characters, which cannot be used as-is to build URLs to those S3 objects. Can the input transformer escape these characters to build valid URLs?
I have an EventBridge rule set up that sends notifications to SNS topics when files are uploaded to S3, so that related parties who do not have regular developer access to S3 can access files created by batch jobs.
The filenames are somewhat dynamic and can contain characters that must be escaped in order to form a valid URL.
Example (truncated) event:
{
  "version": "0",
  "detail-type": "Object Created",
  "source": "aws.s3",
  "time": "2021-11-12T00:00:00Z",
  "region": "ap-northeast-1",
  "resources": ["arn:aws:s3:::example-bucket"],
  "detail": {
    "version": "0",
    "bucket": {
      "name": "example-bucket"
    },
    "object": {
      "key": "exampleテスト.zip",
      "size": 5,
      "etag": "b1946ac92492d2347c6235b4d2611184",
      "version-id": "IYV3p45BT0ac8hjHg1houSdS1a.Mro8e",
      "sequencer": "00617F08299329D189"
    },
    "request-id": "N4N7GDK58NMKJ12R",
    "requester": "123456789012",
    "source-ip-address": "1.2.3.4",
    "reason": "PutObject"
  }
}
And my event transformer looks like this:
{
  "message": "Batch job report has finished",
  "URL": "https://<bucket>.s3.ap-northeast-1.amazonaws.com/<objectKey>"
}
This just sends the raw JSON to an SNS topic, which forwards it as-is to subscribers.
The problem is that the URL ends up being:
https://example-bucket.s3.ap-northeast-1.amazonaws.com/exampleテスト.zip
While Stack Overflow displays this as a valid URL, email clients do not: the URL ends at "example", because the Japanese characters are not escaped. The same applies to all characters that need escaping, including spaces and other scripts.
I need a way to escape these characters in EventBridge. Is there a way, given the event above, to produce the following transformer output?
{
  "message": "Batch job report has finished",
  "URL": "https://example-bucket.s3.ap-northeast-1.amazonaws.com/example%E3%83%86%E3%82%B9%E3%83%88.zip"
}
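The desired URL above is ordinary UTF-8 percent-encoding. The input transformer only substitutes JSON paths and, as far as I know, has no escaping function, so one workaround (my sketch, not confirmed by the question) is a small Lambda between the rule and SNS that builds the URL itself; the bucket, region, and key below are from the sample event:

from urllib.parse import quote

bucket = "example-bucket"
region = "ap-northeast-1"
key = "exampleテスト.zip"

# quote() percent-encodes the UTF-8 bytes; "/" stays literal so
# keys with prefixes remain readable in the resulting URL.
url = f"https://{bucket}.s3.{region}.amazonaws.com/{quote(key, safe='/')}"
print(url)
# https://example-bucket.s3.ap-northeast-1.amazonaws.com/example%E3%83%86%E3%82%B9%E3%83%88.zip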

How to get an Amazon SNS notification when a large file is uploaded to an S3 folder

I was able to set up an SNS notification for a specific file type in a folder on Amazon S3, but I want the notification emails to be sent only when the file size is larger than 90 MB. How can I do that?
I was able to do it with Amazon EventBridge by creating a new rule with the following event pattern and linking it to my SNS topic:
{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "bucket": {
      "name": ["BUCKETNAME"]
    },
    "object": {
      "size": [{
        "numeric": [">=", 90000000]
      }],
      "key": [{
        "prefix": "folderPath"
      }]
    }
  }
}
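For completeness, the same rule can be created and wired to the topic programmatically; a sketch with placeholder names and ARNs (the SNS topic's resource policy must also allow events.amazonaws.com to publish to it):

import json
import boto3

events = boto3.client("events")

pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {
        "bucket": {"name": ["BUCKETNAME"]},
        "object": {
            "size": [{"numeric": [">=", 90000000]}],  # ~90 MB in bytes
            "key": [{"prefix": "folderPath"}],
        },
    },
}

events.put_rule(Name="large-file-uploads", EventPattern=json.dumps(pattern))
events.put_targets(
    Rule="large-file-uploads",
    Targets=[{"Id": "sns", "Arn": "arn:aws:sns:us-east-1:123456789012:MyTopic"}],
)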

AWS EventBridge: How to send only 1 notification when multiple objects are deleted

I use AWS EventBridge with the following settings to trigger a Lambda function. If there are three files under s3://testBucket/test/ and these files are deleted (I delete all of them at the same time), EventBridge sends three notifications and invokes the Lambda three times.
In this situation I want to send only one notification, to avoid duplicate executions of the Lambda. Does anyone know how to configure EventBridge to do so?
{
  "source": ["aws.s3"],
  "detail-type": ["Object Deleted"],
  "detail": {
    "bucket": {
      "name": ["testBucket"]
    },
    "object": {
      "key": [{
        "prefix": "test/"
      }]
    }
  }
}
It is not possible.
An event will be generated for each object deleted.
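One common mitigation (my sketch, not part of the original answer) is to point the rule at an SQS queue and have Lambda consume the queue with a batch size greater than 1 and a small batching window, so several deletion events arrive in a single invocation. This reduces the number of invocations but does not guarantee exactly one:

import json

def handler(event, context):
    # Lambda is subscribed to the SQS queue; each SQS record body
    # is one EventBridge "Object Deleted" event.
    keys = [
        json.loads(record["body"])["detail"]["object"]["key"]
        for record in event["Records"]
    ]
    print(f"Handling {len(keys)} deletions in one invocation: {keys}")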

How to trigger an AWS Event Rule when an S3 key with a specific suffix gets uploaded

I'm trying to create an AWS Event Rule that is only triggered when a file with a specific suffix is uploaded to an S3 bucket.
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutObject", "CompleteMultipartUpload"],
    "requestParameters": {
      "bucketName": ["bucket-name"],
      "key": [{
        "suffix": ".csv"
      }]
    }
  }
}
As I understand it, AWS has content-based filtering that could be used here, but the docs only show prefix matching among the supported patterns, not suffix: https://docs.aws.amazon.com/eventbridge/latest/userguide/content-filtering-with-event-patterns.html
Ideally I would be able to do this without an intermediary Lambda, as my event target is an ECS Fargate task.
At this time (July 2020), CloudWatch Events does not appear to have suffix filtering built in.
You could instead configure an S3 Event Notification, which does support specifying prefixes and suffixes.
By using an S3 event notification you can still have your target as a Lambda.
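A sketch of that notification configuration via boto3 (the bucket name and function ARN are placeholders, and S3 must also be granted lambda:InvokeFunction on the target function, e.g. via lambda add_permission):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="bucket-name",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:csv-handler",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "suffix", "Value": ".csv"},
                        ]
                    }
                },
            }
        ]
    },
)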