AWS Avoiding Duplicates in East and West SNS triggered lambda - amazon-web-services

We basically want the behavior of an active-passive setup without actually being active-passive, so all AWS services must be replicated in both regions, but the routing can differ. Current flow: SNS -> Lambda.
We currently have a Lambda triggered by SNS after another process we can't control completes. It runs active-active across east and west: each region has its own SNS trigger, east and west each do their processing separately, and each prints a file.
The current issue is that east and west are both processing and outputting copies of the same file. Any suggestions on how I can resolve this, ideally within AWS, so that only one Lambda is triggered?
We need to keep the Lambdas active-active.
I've thought of SNS -> SQS -> Lambda, but haven't figured out how to handle the routing while still having a copy in each region.
It seems filtering can be done on SQS, but can the SQS policies in each region coordinate with each other during this filtering?
I found an image that's a perfect depiction of what we have but what I DON'T want. In the end I want only File 1 to be sent.
Diagram

Related

AWS CloudWatch logs in multiple regions

I created a Lambda function in us-east-1 and an SNS topic to send notifications to a Slack channel.
Now I also want to use logs from a service in us-west-2 to trigger the notifications, but I can't because they are in different regions.
What's the best way to handle this? I could just copy the Lambda function/SNS topic into us-west-2, but that seems redundant...
Thanks
I decided to go with separate Lambda functions in each region, since Network Manager is only available in us-west-2 and the messages being processed will be specific to that region.

MediaLive (AWS) how to view channel alerts from php SDK

Question
I have set up a Laravel project that connects to AWS MediaLive for streaming.
Everything is working fine, and I am able to stream, but I couldn't find a way to see if a channel that was running had anyone connected to it.
What I need
I want to be able to see if a running channel has anyone connected to it via the php SDK.
Why
I want to show a stream on the user's side only if there is someone connected to it.
I want to stop a channel that has no one connected to it for too long (like an hour?)
Other
I tried looking at the docs but the closest thing I could find was the DescribeChannel command.
This, however, does not return any information about the alerts. I also tried comparing the output of DescribeChannel when someone was connected and when no one was connected, but there was no difference.
On the AWS site I can see the alerts on the channel page, but I cannot find how to view them from my Laravel application.
Update
I tried running these from the SDK:
$cloudWatch->describeAlarms();
$cloudWatchLogs->getLogEvents(['logGroupName' => 'ElementalMediaLive', 'logStreamName' => 'channel-log-stream-name']);
But it seems to me that their output didn't change after a channel started running without anyone connected to it.
I went on the console's CloudWatch and it was the same.
Do I need to first set up Egress Points for alerts to show here?
I looked into SNS topics and Lambda functions, but it seems they are for sending messages and notifications? Can I also use these to stop/delete a channel that has been disconnected for over an hour? Are there any docs that could help me?
I'm using AWS MediaStore, but I'm guessing I can do the same as with AWS MediaPackage? How can the threshold tell me if, and for how long, no one has been connected to a MediaLive channel?
Overall
After looking here and there in the docs I am assuming I have to:
1. set up a metric alarm that detects when a channel had no input for over an hour
2. Send the alarm message to the CloudWatchLogs
3. retrieve the alarm message from the SDK and/or the SNS Topic
4. stop/delete the channel that sent the alarm message
Did I understand this correctly?
Thanks for your post.
Channel alerts will go to your AWS CloudWatch logs. You can poll these alarms from the SDK or CLI using a command of the form 'aws cloudwatch describe-alarms'. Related log events may be retrieved with a command of the form 'aws logs get-log-events'.
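As a rough boto3 equivalent of those two commands (the PHP SDK exposes the same operations), where the log group and stream names are taken from the question and may differ in your account:

import boto3

cloudwatch = boto3.client("cloudwatch")
logs = boto3.client("logs")

# List current metric alarms and their states.
for alarm in cloudwatch.describe_alarms()["MetricAlarms"]:
    print(alarm["AlarmName"], alarm["StateValue"])

# Pull recent log events for a channel; the stream name is a placeholder.
events = logs.get_log_events(
    logGroupName="ElementalMediaLive",
    logStreamName="channel-log-stream-name",
    limit=20,
)
for e in events["events"]:
    print(e["message"])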
You can also configure a CloudWatch rule to propagate selected service alerts to an SNS Topic which can be polled by various clients including a Lambda function, which can then take various actions on your behalf. This approach works well to aggregate the alerts from multiple channels or services.
Measuring the connected sessions is possible for MediaPackage endpoints, using the 2xx Egress Request Count metric. You can set a metric alarm on this metric such that when its value drops below a given threshold, an alarm message will be sent to the CloudWatch logs mentioned above.
With regard to your list:
set up a metric alarm that detects when a channel had no input for over an hour
----->CORRECT.
Send the alarm message to the CloudWatchLogs
----->The alarm message goes directly to an SNS Topic, and will be echoed to your CloudWatch logs. See: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html
For steps 3 and 4: a Lambda Fn will need to be created to process new entries arriving in the SNS topic (queue) mentioned above, and take a desired action. This Lambda Fn can send API or CLI calls to stop/delete the channel that sent the alarm message. You can also have email alerts or other actions triggered from the SNS Topic (queue); refer to https://docs.aws.amazon.com/sns/latest/dg/sns-common-scenarios.html
Alternatively, you could do everything in one lambda function that queries the same MediaPackage metric (EgressRequestCount), evaluates the response and takes a yes/no action WRT shutting down a specified channel. This lambda function could be scheduled to run in a recurring fashion every 5 minutes to achieve the desired result. This approach would be simpler to implement, but is limited in scope to the metrics and actions coded into the Lambda Function. The Channel Alert->SNS->LAMBDA approach would allow you to take multiple actions based on any one Alert hitting the SNS Topic (queue).
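A minimal sketch of that single-Lambda alternative, assuming the MediaPackage channel name and the MediaLive channel ID are supplied via environment variables (all names and dimension values below are placeholders, not from the original post, and may need adjusting to your setup):

import os
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")
medialive = boto3.client("medialive")

# Placeholders: the MediaPackage channel to watch and the MediaLive channel to stop.
MP_CHANNEL = os.environ.get("MEDIAPACKAGE_CHANNEL", "my-mediapackage-channel")
ML_CHANNEL_ID = os.environ.get("MEDIALIVE_CHANNEL_ID", "1234567")

def lambda_handler(event, context):
    now = datetime.datetime.utcnow()
    # Sum the egress requests over the last hour; zero means nobody pulled content.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/MediaPackage",
        MetricName="EgressRequestCount",
        Dimensions=[{"Name": "Channel", "Value": MP_CHANNEL}],
        StartTime=now - datetime.timedelta(hours=1),
        EndTime=now,
        Period=3600,
        Statistics=["Sum"],
    )
    total = sum(dp["Sum"] for dp in stats["Datapoints"])
    if total == 0:
        medialive.stop_channel(ChannelId=ML_CHANNEL_ID)
    return {"egress_requests_last_hour": total}

Scheduling this to run every 5 minutes (for example with an EventBridge rate rule) gives the recurring check described above.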

Resilience with SQS queue

We are trying to work on a Live - DR model for SQS queue.
We have two different accounts in AWS:
eu-west (Account no 1234)
us-east (Account no 4567)
Our application resides in both accounts (ACTIVE - PASSIVE).
In a normal scenario, EU-WEST is active and US-EAST inactive.
When we failover for DR, US-EAST will be active and EU-WEST inactive.
We want to have two SQS queues, one in each account (eu-west, us-east).
When EU-WEST is active, we want only SQS queue in EU-WEST working and processing events.
When we switch to DR we want to make EU-WEST SQS inactive and make SQS in US-EAST active.
There is a Lambda trigger on each SQS.
The problem we might face here is: both SQS queues will receive events, since they subscribe to the same SNS topic, and since each is connected to a Lambda function, both will process the events.
I don't want this to happen. I want only one pair of SQS queue and Lambda function working at a time - either EU-WEST or US-EAST. I know this can be achieved by removing the Lambda trigger in the inactive region.
Just looking for a better approach.
I found a solution.
We need to check Route 53 to find the currently active region for the application.
In the DR region, when a message arrives on SQS, it triggers the Lambda. The Lambda checks the active region based on Route 53 or the ALB DNS. If it finds that its region is not active/live, it skips processing of the message, and hence the SQS queue clears up in DR.
So the live-region Lambda actively processes SQS messages, whereas the DR one skips all processing.
This idea should work for the scenario I mentioned above.
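A minimal sketch of that check, assuming a Route 53 failover record that always resolves to the active region's ALB and an environment variable holding this region's own ALB DNS name (both are assumptions, not part of the original answer):

import os
import socket

# Hypothetical configuration: the failover record that resolves to the active
# region's ALB, and this region's own ALB DNS name.
ACTIVE_RECORD = os.environ.get("ACTIVE_DNS_RECORD", "app.example.com")
MY_ALB_DNS = os.environ.get("MY_ALB_DNS", "my-alb.us-east-1.elb.amazonaws.com")

def region_is_active():
    # If the failover record currently resolves to this region's ALB IPs,
    # this region is the live one.
    active_ips = {ai[4][0] for ai in socket.getaddrinfo(ACTIVE_RECORD, 443)}
    my_ips = {ai[4][0] for ai in socket.getaddrinfo(MY_ALB_DNS, 443)}
    return bool(active_ips & my_ips)

def lambda_handler(event, context):
    records = event.get("Records", [])
    if not region_is_active():
        # Passive/DR region: acknowledge the messages without doing any work,
        # so the queue drains.
        return {"skipped": len(records)}
    for record in records:
        process(record["body"])  # real business logic goes here
    return {"processed": len(records)}

def process(body):
    print("processing:", body)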

AWS EventBridge pricing for cron events

I would like to schedule triggering my Lambda function.
AWS EventBridge allows me to do this (e.g. by creating a cron-based rule), but I don't understand its pricing model.
It states "All state change events published by AWS services are free", but I'm not sure whether an event fired by EventBridge (generated by an EventBridge rule) falls under this free category. Probably not.
If it's not free, then how will 30 events per month be billed if pricing is per million events?
Googling didn't help. Do you have any ideas how 30 events will be billed?
Update:
While experimenting I found that we are not charged for 30 events per month, but it's not clear why. Probably fewer than a million events are free.
When you configure an AWS EventBridge rule to trigger on a fixed rate or a cron expression, the event bus selection gets disabled. This means the events from this rule will be delivered to the default bus, which is where all the events from AWS services end up. Also, if you look at one of the events delivered by a fixed-rate or cron rule, the event source is "aws.events" (i.e. one of the AWS services - in this case EventBridge itself). All events generated by AWS services fall under AWS service events, and they are free.
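For completeness, a schedule rule of that kind could be wired up with boto3 roughly as follows (the function name, ARNs and the daily cron are placeholders; a daily rule produces about 30 events per month, as in the question):

import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Fire once a day at 12:00 UTC (~30 events per month).
rule = events.put_rule(
    Name="invoke-my-function-daily",
    ScheduleExpression="cron(0 12 * * ? *)",
)

# Point the rule at the Lambda function (placeholder ARN).
events.put_targets(
    Rule="invoke-my-function-daily",
    Targets=[{
        "Id": "my-function",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
    }],
)

# Allow EventBridge to invoke the function.
lambda_client.add_permission(
    FunctionName="my-function",
    StatementId="allow-eventbridge-schedule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)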

Is it possible to automatically delete objects older than 10 minutes in AWS S3?

We want to delete objects from S3, 10 minutes after they are created. Is it possible currently?
I have a working solution that was built serverless with the help of AWS's Simple Queue Service and AWS Lambda. This works for all objects created in an s3 bucket.
Overview
When any object is created in your s3 bucket, the bucket will send an event with object details to an SQS queue configured with a 10 minute delivery delay. The SQS queue is also configured to trigger a Lambda function. The Lambda function reads the object details from the event sent and deletes the object from the s3 bucket. All three components involved (s3, SQS and Lambda) are low cost, loosely coupled, serverless and scale automatically to very large workloads.
Steps Involved
Set up your Lambda function first. In my solution, I used Python 3.7. The code for the function is:
import json
import urllib.parse
import boto3

def lambda_handler(event, context):
    sss = boto3.resource("s3")
    for record in event['Records']:
        # Each SQS record body carries the original S3 event notification.
        v = json.loads(record['body'])
        for rec in v["Records"]:
            bucketName = rec["s3"]["bucket"]["name"]
            # Object keys in S3 event notifications are URL-encoded.
            objectKey = urllib.parse.unquote_plus(rec["s3"]["object"]["key"])
            # print("bucket is " + bucketName + " and object is " + objectKey)
            obj = sss.Object(bucketName, objectKey)
            obj.delete()
    return {
        'statusCode': 200,
        'body': json.dumps('Delete Completed.')
    }
This code and a sample message file were uploaded to a github repo.
Create a vanilla SQS queue. Then configure the SQS queue to have a 10-minute delivery delay. This setting can be found under Queue Actions -> Configure Queue (the fourth setting down).
Configure the SQS Queue to trigger the Lambda Function you created in Step 1. To do this use Queue Actions -> Configure Trigger for Lambda Function. The setup screen is self explanatory. If you don't see your Lambda function from step 1, redo it correctly and make sure you are using the same Region.
Setup your S3 Bucket so that it fires an event to the SQS Queue you created in step 2. This is found on the main bucket screen, click Properties tab and select Events. Click the plus sign to add an event and fill out the following form:
The important points are to select All object create events and to select the queue you created in Step 2 in the last pull-down on this screen.
Last step - add a policy to your Lambda function's execution role that allows it to delete objects only from the specific S3 bucket. You can do this via the Lambda console: scroll down the Lambda function screen and configure it under Execution role.
This works for files I've copied into a single s3 bucket. This solution could support many S3 buckets to 1 queue and 1 lambda.
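For reference, the queue delay from step 2, the Lambda trigger from step 3 and the bucket notification from step 4 could also be applied programmatically. A minimal boto3 sketch, assuming a queue named delete-after-10-min, a function named delete-s3-object and a bucket named my-bucket (all placeholders), and assuming the queue policy already allows S3 to send messages:

import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

queue_url = sqs.get_queue_url(QueueName="delete-after-10-min")["QueueUrl"]

# Step 2: a 10-minute (600 second) delivery delay on the queue.
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"DelaySeconds": "600"})

queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Step 3: trigger the Lambda function from the queue.
lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="delete-s3-object",
)

# Step 4: have the bucket send "object created" events to the queue.
s3.put_bucket_notification_configuration(
    Bucket="my-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [
            {"QueueArn": queue_arn, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)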
In addition to the detailed solution proposed by @taterhead involving an SQS queue, one might also consider the following serverless solution using AWS Step Functions:
Create a State Machine in AWS Step Functions with a Wait state of 10 minutes followed by a Task state executing a Lambda function that will delete the object.
Configure CloudTrail and CloudWatch Events to start an execution of your state machine when an object is uploaded to S3.
It has the advantage of (1) not being constrained by the 15-minute maximum delivery delay of SQS and (2) avoiding the continuous queue-polling cost generated by the Lambda function.
Inspiration: Schedule emails without polling a database using Step Functions
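As a rough illustration of that state machine (the ARNs and names below are placeholders, not from the original post), the Wait-then-Task definition could be created with boto3 along these lines:

import json
import boto3

# Wait 10 minutes, then invoke a Lambda that deletes the uploaded object.
definition = {
    "StartAt": "WaitTenMinutes",
    "States": {
        "WaitTenMinutes": {"Type": "Wait", "Seconds": 600, "Next": "DeleteObject"},
        "DeleteObject": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:delete-s3-object",
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="delete-object-after-10-minutes",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/step-functions-delete-role",
)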
If anyone is still interested in this, S3 now offers lifecycle rules, which I've just been looking into, and they seem simple enough to configure in the AWS S3 console.
The "Management" tab of an S3 bucket will reveal a button labeled "Add lifecycle rule", where users can select specific prefixes for objects and also set expiration times for the lifetimes of the objects in the bucket being modified.
For a more detailed explanation, AWS has published an article on the matter here.
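A minimal boto3 sketch of such a rule (the bucket name and prefix are placeholders); note that lifecycle expiration is specified in whole days, so it cannot match the 10-minute window from the original question:

import boto3

s3 = boto3.client("s3")

# Expire objects under the "tmp/" prefix one day after creation.
# Lifecycle granularity is whole days, so this is the shortest possible window.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-tmp-after-1-day",
                "Filter": {"Prefix": "tmp/"},
                "Status": "Enabled",
                "Expiration": {"Days": 1},
            }
        ]
    },
)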