How to send scheduled custom message using Amazon EventBridge - amazon-web-services

I'm trying to build an Amazon EventBridge rule that runs on a schedule (weekly) to put an event on an SQS queue.
There are multiple options to choose from as to what message is to be sent as an event.
I understand that it's essentially a JSON object, which can be set to a custom JSON or to the default event (or some selective fields from it). Something like:
{
  "version": "0",
  "id": "6a7e8feb-b491-4cf7-a9f1-bf3703467718",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "111122223333",
  "time": "2017-12-22T18:43:48Z",
  "region": "us-west-1",
  "resources": [
    "arn:aws:ec2:us-west-1:123456789012:instance/i-1234567890abcdef0"
  ],
  "detail": {
    "instance-id": "i-1234567890abcdef0",
    "state": "terminated"
  }
}
AWS EventBridge: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule-schedule.html
EB Events: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-events.html
My question is: how can I send a JSON object that has different parameters every time?
Say I want to publish this object with a different date range each run, with
activeFrom: today minus 7 days
activeTill: today's date
{
  "dummyId": "xyz",
  "activeFrom": "2021-07-09T18:43:48Z",
  "activeTill": "2021-07-15T18:43:48Z"
}

You can have EventBridge trigger a Lambda function on a schedule. In that Lambda, you can build your JSON and send the message to SQS.
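A minimal sketch of such a handler, assuming the payload fields from the question; the queue URL is a placeholder, and the actual send is shown commented since it needs boto3 and a real queue:

```python
import json
from datetime import datetime, timedelta, timezone

def build_message(now=None):
    """Build the JSON payload with a rolling 7-day date range."""
    now = now or datetime.now(timezone.utc)
    return json.dumps({
        "dummyId": "xyz",
        "activeFrom": (now - timedelta(days=7)).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "activeTill": now.strftime("%Y-%m-%dT%H:%M:%SZ"),
    })

def lambda_handler(event, context):
    body = build_message()
    # Sending requires boto3 (bundled in the Lambda runtime):
    # import boto3
    # boto3.client("sqs").send_message(
    #     QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/my-queue",
    #     MessageBody=body,
    # )
    return body
```

Because the Lambda computes the dates at invocation time, each weekly run produces a fresh date range without any change to the EventBridge rule itself.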

Related

AWS EventBridge: How to send only 1 notification when multiple objects deleted

I use AWS EventBridge with the following settings to activate Lambda functions. If there are three files under s3://testBucket/test/ and these files are deleted (I delete all of them at the same time), EventBridge will send a notification and activate the Lambda three times.
In this situation, I want to send only one notification to avoid duplicate execution of the Lambda. Does anyone know how to set up EventBridge to do so?
{
  "source": [
    "aws.s3"
  ],
  "detail-type": [
    "Object Deleted"
  ],
  "detail": {
    "bucket": {
      "name": [
        "testBucket"
      ]
    },
    "object": {
      "key": [{
        "prefix": "test/"
      }]
    }
  }
}
It is not possible.
An event will be generated for each object deleted.

Passing variable values through S3 and SQS event trigger message

I have set up the AWS pipeline as S3 -> SQS -> Lambda. An S3 PutObject event will generate an event message and pass it to SQS, and SQS will trigger the Lambda. I have a requirement to pass a variable value from S3 to SQS and finally to Lambda as part of the event message. The value could be the file name or some string value.
Can we customize the JSON event message generated by the S3 event to pass some more information along with the message?
Does SQS just pass the event message received from S3 to Lambda, or does it alter the message or generate its own?
How can I display or see the message generated by S3 in SQS or Lambda?
You can't manipulate the S3 event data. The schema looks like this. That will be passed on to the SQS queue, which will add some of its own metadata and pass it along to Lambda. This tutorial has a sample SQS record.
When Amazon S3 triggers an event, a message is sent to the desired destination (AWS Lambda, Amazon SNS, Amazon SQS). The message includes the bucket name and key (filename) of the object that triggered the event.
Here is a sample event (from Using AWS Lambda with Amazon S3 - AWS Lambda):
{
  "Records": [
    {
      "eventVersion": "2.1",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-2",
      "eventTime": "2019-09-03T19:37:27.192Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "AWS:AIDAINPONIXQXHT3IKHL2"
      },
      "requestParameters": {
        "sourceIPAddress": "205.255.255.255"
      },
      "responseElements": {
        "x-amz-request-id": "D82B88E5F771F645",
        "x-amz-id-2": "vlR7PnpV2Ce81l0PRw6jlUpck7Jo5ZsQjryTjKlc5aLWGVHPZLj5NeC6qMa0emYBDXOo6QBU0Wo="
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "828aa6fc-f7b5-4305-8584-487c791949c1",
        "bucket": {
          "name": "lambda-artifacts-deafc19498e3f2df",
          "ownerIdentity": {
            "principalId": "A3I5XTEXAMAI3E"
          },
          "arn": "arn:aws:s3:::lambda-artifacts-deafc19498e3f2df"
        },
        "object": {
          "key": "b21b84d653bb07b05b1e6b33684dc11b",
          "size": 1305107,
          "eTag": "b21b84d653bb07b05b1e6b33684dc11b",
          "sequencer": "0C0F6F405D6ED209E1"
        }
      }
    }
  ]
}
The bucket can be obtained from Records[].s3.bucket.name and the key can be obtained from Records[].s3.object.key.
There is no capability to send a particular value, since S3 triggers the event. However, you could possibly derive a value. For example, if events from several different buckets trigger the Lambda function, the function could look at the bucket name to determine why it was triggered and then substitute the desired value.
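A sketch of that derivation inside the Lambda handler; the bucket-to-value mapping is hypothetical, but the `Records` structure matches the sample event above:

```python
def derive_value(event):
    """Pull bucket/key from the S3 event and map the bucket to a value."""
    bucket_to_value = {  # hypothetical mapping, adjust to your buckets
        "uploads-raw": "raw",
        "uploads-processed": "processed",
    }
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    return bucket, key, bucket_to_value.get(bucket, "unknown")
```

Note that when the event arrives via SQS, the S3 message is a JSON string inside the SQS record's `body` field, so it must be parsed with `json.loads` first.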

Cloudwatch: event type syntax for monitoring S3 files

I need to create a CloudWatch event that runs a Lambda function every time my file in S3 gets updated/re-uploaded. What "eventName" should I use? I tried using "ObjectCreated" but it doesn't seem to work. Perhaps the syntax is incorrect.
https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html
{
  "source": [
    "aws.s3"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "s3.amazonaws.com"
    ],
    "eventName": ["ObjectCreated:*"],
    "requestParameters": {
      "bucketName": [
        "mynewbucket"
      ],
      "key": [
        "file.csv"
      ]
    }
  }
}
CloudWatch Events (or EventBridge) does not automatically track data events for S3 objects. You need to either use CloudTrail for this, which tracks data events on a particular S3 bucket and emits CloudWatch Events (or EventBridge) events for that: https://aws.amazon.com/blogs/compute/using-dynamic-amazon-s3-event-handling-with-amazon-eventbridge/
Or you can use S3 Event Notifications with an SNS topic and use a Lambda subscription on the SNS topic.
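As a side note: S3 can also deliver events to EventBridge directly once EventBridge delivery is enabled on the bucket, in which case no CloudTrail trail is needed. A pattern along these lines (reusing the bucket name and key from the question as placeholders) would then match object creation:

```json
{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "bucket": { "name": ["mynewbucket"] },
    "object": { "key": ["file.csv"] }
  }
}
```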

How to create automatic Cloudwatch alarm on New EC2 instance creation

I want to create a lambda function that gets triggered whenever a new EC2 instance is created, this Lambda function should configure StatusCheck alarm on this new instance automatically. So that I don't have to manually configure cloudwatch alarm each time a new instance is created. Can someone help with code for lambda function that accomplishes this?
I have something like this:
import boto3

client = boto3.client('cloudwatch')

response = client.put_metric_alarm(
    AlarmName='StatusCheckFailed-Alarm-for-i-1234567890abcdef0',
    AlarmActions=[
        'arn:aws:sns:us-west-2:111122223333:my-sns-topic',
    ],
    MetricName='StatusCheckFailed',
    Namespace='AWS/EC2',
    Statistic='Maximum',
    Dimensions=[
        {
            'Name': 'InstanceId',
            'Value': 'i-1234567890abcdef0'
        },
    ],
    Period=300,
    Unit='Count',
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator='GreaterThanOrEqualToThreshold')
But I have to map instance ID from cloudwatch rule as an input to Lambda. Because the function would trigger automatically so there is no way to put instance ID manually each time.
You will need two CloudWatch rules to handle this:
One for instances launched by an Auto Scaling group
One for instances launched directly with EC2
I am also going to handle both launch and termination:
On launch: add the alarm
On termination: delete the alarm (to avoid reaching the maximum alarm limit)
Autoscaling group CW rule:
{
  "source": [
    "aws.autoscaling"
  ],
  "detail-type": [
    "EC2 Instance Launch Successful",
    "EC2 Instance Terminate Successful"
  ]
}
Autoscaling Event:
{
  "version": "0",
  "id": "3e3c153a-8339-4e30-8c35-687ebef853fe",
  "detail-type": "EC2 Instance Launch Successful",
  "source": "aws.autoscaling",
  "account": "123456789012",
  "time": "2015-11-11T21:31:47Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:eb56d16b-bbf0-401d-b893-d5978ed4a025:autoScalingGroupName/sampleLuanchSucASG",
    "arn:aws:ec2:us-east-1:123456789012:instance/i-b188560f"
  ],
  "detail": {
    "StatusCode": "InProgress",
    "AutoScalingGroupName": "sampleLuanchSucASG",
    "ActivityId": "9cabb81f-42de-417d-8aa7-ce16bf026590",
    "Details": {
      "Availability Zone": "us-east-1b",
      "Subnet ID": "subnet-95bfcebe"
    },
    "RequestId": "9cabb81f-42de-417d-8aa7-ce16bf026590",
    "EndTime": "2015-11-11T21:31:47.208Z",
    "EC2InstanceId": "i-b188560f",
    "StartTime": "2015-11-11T21:31:13.671Z",
    "Cause": "At 2015-11-11T21:31:10Z a user request created an AutoScalingGroup changing the desired capacity from 0 to 1. At 2015-11-11T21:31:11Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 0 to 1."
  }
}
EC2 CW Rule:
{
  "source": [
    "aws.ec2"
  ],
  "detail-type": [
    "EC2 Instance State-change Notification"
  ],
  "detail": {
    "state": [
      "running",
      "terminated"
    ]
  }
}
EC2 Event:
{
  "version": "0",
  "id": "ee376907-2647-4179-9203-343cfb3017a4",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "123456789012",
  "time": "2015-11-11T21:30:34Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:ec2:us-east-1:123456789012:instance/i-abcd1111"
  ],
  "detail": {
    "instance-id": "i-abcd1111",
    "state": "running"
  }
}
So you can do the rest of the logic based on the event; the examples below are JavaScript.
If the event is from the Auto Scaling group:
if (event["source"] == "aws.autoscaling") {
  if (event["detail-type"] === "EC2 Instance Launch Successful") {
    let EC2_ID = event.detail.EC2InstanceId
    // Add alarm here
    // use EC2 instance ID
  }
}
The same logic can be applied to EC2 events, where you can check the state:
if (event["source"] == "aws.ec2") {
  if (event.detail.state === "running") {
    let EC2_ID = event.detail["instance-id"]
    // Add alarm here
    // use EC2 instance ID
  }
  // the same check can be done for termination
  if (event.detail.state === "terminated") {
    let EC2_ID = event.detail["instance-id"]
    // remove alarm for this instance
    // use EC2 instance ID here to remove/delete the alarm
  }
}
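The same branching can be sketched in Python; note that Auto Scaling events carry the instance ID as `EC2InstanceId` while EC2 state-change events use `instance-id`. The CloudWatch calls are shown commented since they need boto3 and real resources:

```python
def extract_instance_id(event):
    """Return the instance ID for either event shape, or None."""
    if event.get("source") == "aws.autoscaling":
        return event["detail"].get("EC2InstanceId")
    if event.get("source") == "aws.ec2":
        return event["detail"].get("instance-id")
    return None

def lambda_handler(event, context):
    instance_id = extract_instance_id(event)
    state = event.get("detail", {}).get("state")
    # With boto3 (alarm parameters as in the question's snippet):
    # import boto3
    # cw = boto3.client("cloudwatch")
    # if state == "running":
    #     cw.put_metric_alarm(
    #         AlarmName=f"StatusCheckFailed-Alarm-for-{instance_id}", ...)
    # elif state == "terminated":
    #     cw.delete_alarms(
    #         AlarmNames=[f"StatusCheckFailed-Alarm-for-{instance_id}"])
    return instance_id
```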
What you are looking for is AWS CloudTrail. It's a service that is used to monitor any and every API calls made to AWS for a given account.
Pro Tip: AWS is API driven, everything you do, even on the console (UI) is translated into an API call to get the desired result.
The scenario described is a very common one, and AWS has addressed it in Automating Amazon EC2 with CloudWatch Events - Amazon Elastic Compute Cloud. You can create a CloudTrail event trail for EC2 and configure it to trigger a Lambda function. As you have described, this function can then do the necessary configuration.
I use this setup for a similar use case wherein monitoring for disk utilization and memory is configured for any new instance that any user or system spins up. This is just an additional check that makes sure if the correct/recommended AMI is not used, there is some process that goes in and makes sure the monitoring tools are in place.
A note from my experience:
I prefer using S3 in between CloudTrail and Lambda i.e. CloudTrail would write the events to S3 and then the lambda function would be triggered via S3 events. This has the added benefit of persisting the events for later reference. If the data is not sensitive, you can choose to use S3 Lifecycle hooks to delete the data in some time or even use a cheaper storage option to keep the cost down.
Not sure if you were able to get an answer to your question about getting the instance ID of the instance. Here is how I did it:
import boto3

def lambda_handler(event, context):
    cloudwatchclient = boto3.client('cloudwatch')
    # the event argument is already a dict, so no JSON parsing is needed
    thisInstanceID = event['detail']['instance-id']

Create email notification when an EC2 instance is terminated. The email should contain the instance details, e.g. instance name

I have created CloudWatch alarms for CloudTrail events. I am getting the email notification whenever there is a state change, but it is tough for me to search for the instance which was deleted among hundreds of instances. It will be easier if I get the instance name in the notification email. Has anyone tried this?
The best method is:
Create an Amazon SNS topic to receive the notification
Subscribe to the topic to receive notifications (eg via Email)
Create a rule in Amazon CloudWatch Events to trigger when an instance is terminated:
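The rule's event pattern, if written as JSON rather than in the console, would be along these lines:

```json
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Instance State-change Notification"],
  "detail": {
    "state": ["terminated"]
  }
}
```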
The result will be a message like this sent via email (or however you subscribed to the topic):
{
  "version": "0",
  "id": "0c921724-d932-9cc2-b620-4053a0ad3f73",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "123456789012",
  "time": "2018-01-09T07:04:42Z",
  "region": "ap-southeast-2",
  "resources": [
    "arn:aws:ec2:ap-southeast-2:123456789012:instance/i-0a32beef35b8da342"
  ],
  "detail": {
    "instance-id": "i-0a32beef35b8da342",
    "state": "terminated"
  }
}