AWS CloudWatch Events: how to distinguish multiple domains in a source

{
  "source": [
    "aws.mediaconvert"
  ],
  "detail-type": [
    "MediaConvert Job State Change"
  ],
  "detail": {
    "status": [
      "COMPLETE",
      "ERROR"
    ]
  }
}
My flow:
Domain A: upload video to S3 bucket A -> Lambda creates a MediaConvert job -> CloudWatch Event rule (checks for COMPLETE) -> Lambda calls the API of domain A
Domain B: upload video to S3 bucket B -> Lambda creates a MediaConvert job -> CloudWatch Event rule (checks for COMPLETE) -> Lambda calls the API of domain B
At the CloudWatch Event rule: how can I distinguish domain A from domain B?
I tried to use "userMetadata" but it didn't work.

Event patterns have a stricter format than plain JSON. For each key in the pattern, the matcher checks whether the corresponding event value is inside the given list of values. So you can't set a value as a bare string inside a pattern; use a list of values instead.
Example:
{
  "source": [
    "aws.mediaconvert"
  ],
  "detail-type": [
    "MediaConvert Job State Change"
  ],
  "detail": {
    "status": [
      "COMPLETE",
      "ERROR"
    ],
    "userMetadata": {
      "domain": [
        "A"
      ]
    }
  }
}
That is exactly what the error says: you can only use arrays as the leaves of an event pattern.
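For the pattern above to match, the job has to carry that metadata in the first place. Here is a minimal sketch of the relevant part of the MediaConvert CreateJob request the upload Lambda would send; the role ARN and Settings contents are placeholders, and the "domain" key is just this example's choice of name:
{
  "Role": "arn:aws:iam::111122223333:role/MediaConvertRole",
  "Settings": {
    "Inputs": [],
    "OutputGroups": []
  },
  "UserMetadata": {
    "domain": "A"
  }
}
MediaConvert copies UserMetadata into the userMetadata field of the job state change event, which is what the rule's pattern matches against.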

Related

AWS EventBridge: How to send only 1 notification when multiple objects deleted

I use AWS EventBridge with the following settings to trigger a Lambda function. If there are three files under s3://testBucket/test/ and I delete all of them at the same time, EventBridge sends a notification and invokes the Lambda three times.
In this situation, I want only a single notification, to avoid duplicate executions of the Lambda. Does anyone know how to configure EventBridge to do so?
{
  "source": [
    "aws.s3"
  ],
  "detail-type": [
    "Object Deleted"
  ],
  "detail": {
    "bucket": {
      "name": [
        "testBucket"
      ]
    },
    "object": {
      "key": [{
        "prefix": "test/"
      }]
    }
  }
}
It is not possible.
An event will be generated for each object deleted.

How to filter an s3 data event by object key suffix on AWS EventBridge

I've created a rule on AWS EventBridge that triggers a SageMaker Pipeline execution. To do so, I have the following event pattern:
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutObject", "CopyObject", "CompleteMultipartUpload"],
    "requestParameters": {
      "bucketName": ["my-bucket-name"],
      "key": [{
        "prefix": "folder/inside/my/bucket/"
      }]
    }
  }
}
I have enabled CloudTrail to log my S3 data events, and the rule is triggering my SageMaker Pipeline execution correctly.
The problem here is:
A pipeline execution is triggered for every put/copy of any object under my prefix. I would like to trigger the pipeline execution only when one specific object is uploaded to the bucket, but I don't know its entire name.
For instance, a possible object name is the following, where the date is built dynamically:
my-bucket-name/folder/inside/my/bucket/2021-07-28/_SUCCESS
I would like to write an event pattern with something like this:
"prefix": "folder/inside/my/bucket/{current_date}/_SUCCESS"
or
"key": [{
"prefix": "folder/inside/my/bucket/"
}, {
"suffix": "_SUCCESS"
}]
I think event patterns on AWS do not support suffix filtering; the documentation isn't clear about the behavior.
I have configured an S3 Event Notification using a suffix and sent the filtered notifications to an SQS queue, but now I don't know what to do with this queue in order to invoke my EventBridge rule and trigger the SageMaker Pipeline execution.
I was looking for similar functionality.
Unfortunately, based on the AWS docs, it looks like event patterns only support the following comparisons:
| Comparison | Example | Rule syntax |
| --- | --- | --- |
| Null | UserID is null | "UserID": [ null ] |
| Empty | LastName is empty | "LastName": [""] |
| Equals | Name is "Alice" | "Name": [ "Alice" ] |
| And | Location is "New York" and Day is "Monday" | "Location": [ "New York" ], "Day": ["Monday"] |
| Or | PaymentType is "Credit" or "Debit" | "PaymentType": [ "Credit", "Debit"] |
| Not | Weather is anything but "Raining" | "Weather": [ { "anything-but": [ "Raining" ] } ] |
| Numeric (equals) | Price is 100 | "Price": [ { "numeric": [ "=", 100 ] } ] |
| Numeric (range) | Price is more than 10, and less than or equal to 20 | "Price": [ { "numeric": [ ">", 10, "<=", 20 ] } ] |
| Exists | ProductName exists | "ProductName": [ { "exists": true } ] |
| Does not exist | ProductName does not exist | "ProductName": [ { "exists": false } ] |
| Begins with | Region is in the US | "Region": [ {"prefix": "us-" } ] |

Cloudwatch: event type syntax for monitoring S3 files

I need to create a CloudWatch event that runs a Lambda function every time my file in S3 gets updated/re-uploaded. What "eventName" should I use? I tried using "ObjectCreated" but it doesn't seem to work. Perhaps the syntax is incorrect.
https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html
{
  "source": [
    "aws.s3"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "s3.amazonaws.com"
    ],
    "eventName": ["ObjectCreated:*"],
    "requestParameters": {
      "bucketName": [
        "mynewbucket"
      ],
      "key": [
        "file.csv"
      ]
    }
  }
}
CloudWatch Events (or EventBridge) does not automatically track data events for S3 objects. You can either use CloudTrail, which tracks data events on a particular S3 bucket and emits CloudWatch Events (or EventBridge) events for them: https://aws.amazon.com/blogs/compute/using-dynamic-amazon-s3-event-handling-with-amazon-eventbridge/
Or you can use S3 Event Notifications with an SNS topic and a Lambda subscription on that topic.
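Assuming CloudTrail data events are enabled for the bucket, a corrected pattern would use the CloudTrail API call names instead, since "ObjectCreated:*" is an S3 Event Notification type rather than a CloudTrail eventName. A sketch:
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutObject", "CopyObject", "CompleteMultipartUpload"],
    "requestParameters": {
      "bucketName": ["mynewbucket"],
      "key": ["file.csv"]
    }
  }
}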

How to trigger an AWS Event Rule when a S3 key with a specific suffix gets uploaded

I'm trying to create an AWS Event Rule that is only triggered when a file with a specific suffix is uploaded to an S3 bucket.
{
  "source": [
    "aws.s3"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "s3.amazonaws.com"
    ],
    "eventName": [
      "PutObject",
      "CompleteMultipartUpload"
    ],
    "requestParameters": {
      "bucketName": [
        "bucket-name"
      ],
      "key": [
        { "suffix": ".csv" }
      ]
    }
  }
}
As I understand it, AWS has content-based filtering which could be used, but the docs don't show the ability to match on a suffix, only a prefix among other patterns: https://docs.aws.amazon.com/eventbridge/latest/userguide/content-filtering-with-event-patterns.html
Ideally I would be able to do this without the need for an intermediary Lambda, as my event target is an ECS Fargate task.
At this time (July 2020), CloudWatch Events does not appear to have suffix filtering built in.
You could instead configure an S3 Event Notification, which does support specifying prefixes and suffixes.
By using an S3 Event Notification you can still have a Lambda as your target.
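For reference, a sketch of what that S3 Event Notification configuration could look like (the shape that PutBucketNotificationConfiguration expects); the Id and function ARN are placeholders, and the Lambda would then be responsible for starting the Fargate task:
{
  "LambdaFunctionConfigurations": [
    {
      "Id": "csv-uploaded",
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:run-fargate-task",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [
            { "Name": "suffix", "Value": ".csv" }
          ]
        }
      }
    }
  ]
}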

Including an exit code 1 event in CloudWatch using Terraform for ECS

I've been running containers on ECS and using AWS CloudWatch Events to notify me when my tasks complete. All of the infrastructure has been created using Terraform. However, I'm unable to get the syntax in my event pattern right so that I am only notified for non-zero exit codes.
The following resource works great, and sends notifications to SNS every time one of my containers exits:
resource "aws_cloudwatch_event_rule" "container-stopped-rule" {
name = "container-stopped"
description = "Notification for containers that exit for any reason. (error)."
event_pattern = <<PATTERN
{
"source": [
"aws.ecs"
],
"detail-type": [
"ECS Task State Change"
],
"detail": {
"lastStatus": [
"STOPPED"
],
"stoppedReason" : [
"Essential container in task exited"
]
}
}
PATTERN
}
However, I'm trying to modify the pattern slightly so that I'm only notified when a container exits with an error code. We get so many notifications that we've started to tune out the emails and sometimes miss the ones where containers are exiting with errors:
resource "aws_cloudwatch_event_rule" "container-stopped-rule" {
name = "container-stopped"
description = "Notification for containers with exit code of 1 (error)."
event_pattern = <<PATTERN
{
"source": [
"aws.ecs"
],
"detail-type": [
"ECS Task State Change"
],
"detail": {
"containers": [
{
"exitCode": 1
}
],
"lastStatus": [
"STOPPED"
],
"stoppedReason" : [
"Essential container in task exited"
]
}
}
PATTERN
}
This triggers the following error when I run terraform apply:
aws_cloudwatch_event_rule.container-stopped-rule: Updating CloudWatch Event Rule failed: InvalidEventPatternException: Event pattern is not valid. Reason: Match value must be String, number, true, false, or null at [Source: (String)"{"detail":{"containers":[{"exitCode":1}],"lastStatus":["STOPPED"],"stoppedReason":["Essential container in task exited"]},"detail-type":["ECS Task State Change"],"source":["aws.ecs"]}"; line: 1, column: 27] status code: 400
This is perplexing to me, since I'm following the exact structure laid out in the AWS CloudWatch documentation for containers. I've even attempted to put double quotes around 1 in case Terraform wants a string instead of a number.
I also tried to use AWS Console to manually edit the event pattern JSON, but received this error:
Validation error. Details: Event pattern contains invalid value (can only be a nonempty array or nonempty object)
I'm honestly a bit stumped at this point and would appreciate any tips on where my syntax is incorrect.
The event pattern syntax is pretty weird; I ran into the same issue. The following will work:
{
  "source": [
    "aws.ecs"
  ],
  "detail-type": [
    "ECS Task State Change"
  ],
  "detail": {
    "lastStatus": [
      "STOPPED"
    ],
    "stoppedReason": [
      "Essential container in task exited"
    ],
    "containers": {
      "exitCode": [
        1
      ]
    }
  }
}
I used $.detail.group in the Input Transformer to get the task family name in the notification message.
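For reference, a sketch of that Input Transformer on the rule's target; the InputPathsMap/InputTemplate shape is EventBridge's, but the message text is illustrative:
{
  "InputPathsMap": {
    "group": "$.detail.group"
  },
  "InputTemplate": "\"Task from family <group> stopped with a non-zero exit code.\""
}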
As per https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatchEventsandEventPatterns.html,
For a pattern to match an event, the event must contain all the field names listed in the pattern. The field names must appear in the event with the same nesting structure.
Can you try adding more of the fields listed here https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_cwe_events.html, like clusterArn, containerInstanceArn, etc.?
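It may also help to see why the original pattern failed: in an ECS Task State Change event, detail.containers is an array of objects (values below are illustrative), which the working pattern above matches with an object whose leaves are arrays rather than with an array of objects:
{
  "detail": {
    "lastStatus": "STOPPED",
    "stoppedReason": "Essential container in task exited",
    "containers": [
      {
        "name": "my-container",
        "lastStatus": "STOPPED",
        "exitCode": 1
      }
    ]
  }
}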