Can an AWS target be triggered by multiple CloudWatch event rules - amazon-web-services

I'm trying to set up a scheduler system for our infrastructure that is supposed to take care of all scheduled housekeeping tasks. Our proposal is to keep it simple and scalable with one Docker image. The script for each task and its CloudWatch event rule will be passed in as parameters. The scripts will be uploaded to an S3 bucket and downloaded when the job gets triggered. This way we can avoid redeploying every time a task gets added.
The only tricky part is passing in the CloudWatch event rule as a parameter.
Can an event target be triggered by multiple rules? Am I being too ambitious with this project? I use Terraform to provision it.

Turn CloudWatch Logs on
Create a metric filter
Assign a metric
Create an alarm.
Here is a tutorial which you can modify to suit your needs.
https://aws.amazon.com/blogs/security/how-to-receive-notifications-when-your-aws-accounts-root-access-keys-are-used/
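For illustration, here is a minimal boto3 sketch of the metric filter and alarm steps; the log group name, filter pattern, and metric/alarm names below are hypothetical and would need to match your setup:

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Create a metric filter that emits a custom metric whenever the log
# pattern matches (hypothetical log group, pattern, and metric names).
logs.put_metric_filter(
    logGroupName="/aws/scheduler/housekeeping",
    filterName="task-error-filter",
    filterPattern='"ERROR"',
    metricTransformations=[{
        "metricName": "HousekeepingTaskErrors",
        "metricNamespace": "Custom/Scheduler",
        "metricValue": "1",
    }],
)

# Alarm on the metric produced by the filter.
cloudwatch.put_metric_alarm(
    AlarmName="housekeeping-task-errors",
    Namespace="Custom/Scheduler",
    MetricName="HousekeepingTaskErrors",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
)
```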

Related

AWS EventBridge: how do I run a scheduled rule manually (in order to test it)?

In Amazon Web Services (AWS) EventBridge, I can create cron-style scheduled rules to fire an event regularly.
When I'm creating or editing these, I often want to test that they work immediately (rather than waiting until the next scheduled execution). For testing purposes, triggering the rule's target manually is not always equivalent to the rule running (perhaps because a template is used to customise the event JSON).
Is there an easy way of triggering an AWS EventBridge scheduled job to run immediately, via the user interface or via the command line?
I generally do this by modifying the cron schedule to two minutes in the future and then reverting it, but this is tedious and error-prone. Perhaps there's an obvious button I've failed to see, or else a CLI command that I haven't found (e.g. at https://awscli.amazonaws.com/v2/documentation/api/latest/reference/events/index.html#cli-aws-events).
I think you are looking for a one-time schedule. For that, AWS recently (10 November 2022) launched a new service called EventBridge Scheduler. You can also create a recurring schedule, but in your case I think you need a one-time schedule; with it you can trigger any target at whatever time you choose, including right away.
Hope this fulfills your need.
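As a rough sketch of what that looks like with boto3 (the schedule name, timestamp, target ARN, and IAM role below are placeholders), a one-time schedule can be created like this:

```python
import boto3

scheduler = boto3.client("scheduler")  # EventBridge Scheduler, not the "events" client

# One-time schedule: fires once at the given UTC time and invokes the target.
scheduler.create_schedule(
    Name="run-my-job-once",
    ScheduleExpression="at(2023-01-01T12:00:00)",  # hypothetical timestamp
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:my-job",
        "RoleArn": "arn:aws:iam::123456789012:role/scheduler-invoke-role",
    },
)
```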

AWS lambda function for copying data into Redshift

I am new to the AWS world and I am trying to implement a process where data written into S3 by AWS EMR can be loaded into AWS Redshift. I am using Terraform to create S3, Redshift, and the other supporting functionality. For loading the data I am using a Lambda function which gets triggered when the Redshift cluster is up. The Lambda function has the code to copy the data from S3 to Redshift. Currently the process seems to work fine, and the amount of data is currently low.
My questions are:
This approach seems to work right now, but I don't know how it will hold up once the volume of data increases, and what happens if the Lambda function times out?
Can someone please suggest an alternate way of handling this scenario, even one that works without Lambda? One alternative I came across while searching on this topic is AWS Data Pipeline.
Thank you
A serverless approach I've recommended clients move to in this case is the Redshift Data API (plus Step Functions if needed). With the Redshift Data API you can launch a SQL command (COPY) and close your Lambda function. The COPY command will run to completion, and if this is all you need to do then you're done.
If you need to take additional actions after the COPY, then you need a polling Lambda that checks to see when the COPY completes; this is enabled by the Redshift Data API. Once the COPY completes you can start another Lambda to run the additional actions. All these Lambdas and their interactions are orchestrated by a Step Function that:
launches the first Lambda (initiates the COPY)
has a wait loop that calls the "status checker" Lambda every 30 sec (or whatever interval you want) and keeps looping until the checker says that the COPY completed successfully
Once the status checker Lambda says the COPY is complete, the Step Function launches the additional-actions Lambda
The Step function is an action sequencer and the Lambdas are the actions. There are a number of frameworks that can set up the Lambdas and Step Function as one unit.
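Here is a minimal sketch of that pattern using the Redshift Data API from boto3; all identifiers, the SQL, and the polling interval are placeholders, and in practice the wait loop would live in the Step Function rather than in a single long-running Lambda:

```python
import time
import boto3

rsd = boto3.client("redshift-data")

# Launch the COPY and return immediately (hypothetical cluster, secret, and table).
resp = rsd.execute_statement(
    ClusterIdentifier="my-cluster",
    Database="analytics",
    SecretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:redshift-creds",
    Sql=(
        "COPY staging.events FROM 's3://my-bucket/prefix/' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy' FORMAT AS PARQUET;"
    ),
)

# "Status checker" step: poll until the statement reaches a terminal state.
while True:
    status = rsd.describe_statement(Id=resp["Id"])["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(30)
```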
With bigger datasets, as you already know, Lambda may time out. But 15 minutes is still a lot of time, so you can implement an alternative solution in the meantime.
I wouldn't recommend Data Pipeline, as it adds overhead (it will start an EC2 instance to run your commands). Your problem is simply the timeout, so you may use either ECS Fargate or a Glue Python Shell job. Either of them can be triggered by a CloudWatch Event rule fired on an S3 event.
a. Using ECS Fargate, you'll have to take care of the Docker image and set up the ECS infrastructure, i.e. a task definition and a cluster (simple for Fargate).
b. Using a Glue Python Shell job, you'll simply have to deploy your Python script to S3 (along with the required packages as wheel files) and link those files in the job configuration.
Both of these options are serverless, and you may choose one based on ease of deployment and your comfort level with Docker.
ECS doesn't have any timeout limit, while the timeout limit for Glue is 2 days.
Note: To trigger an AWS Glue job from a CloudWatch Event rule, you'll have to use a Lambda function, as CloudWatch Events doesn't support starting Glue jobs as a target yet (see the sketch after the reference below).
Reference: https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_PutTargets.html
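A hedged sketch of such a job-starting Lambda might look like the following; the Glue job name, the argument keys, and the exact shape of the incoming S3 event are assumptions to adapt to your setup:

```python
import boto3

glue = boto3.client("glue")

def handler(event, context):
    """Invoked by the CloudWatch Event rule; starts the Glue Python Shell job."""
    detail = event.get("detail", {})  # event shape depends on how the S3 event is delivered
    glue.start_job_run(
        JobName="copy-to-redshift",  # hypothetical job name
        Arguments={
            "--bucket": detail.get("bucket", {}).get("name", ""),
            "--key": detail.get("object", {}).get("key", ""),
        },
    )
```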

AWS question - How can I get Cloudwatch event data in a Fargate task with Python

I'm new to CloudWatch Events and to Fargate. I want to trigger a Fargate task (Python) to run whenever a file is uploaded to a specific S3 bucket. I can get the task to run whenever I upload a file, and can see the name in the event log; however, I can't figure out a simple way to read the event data in Fargate. I've been researching this for the past couple of days and haven't found a solution other than reading the event log or using a Lambda to invoke the task and put the event data in a message queue.
Is there a simple way to obtain the event data in Fargate with boto3? It's likely that I'm not looking in the right places or asking the right question.
Thanks
One of the easiest options you can configure is two targets for the same S3 upload event:
Push the same event to SQS
Launch the Fargate task at the same time
Read the event message from SQS once the Fargate task is up (no Lambda in between); the same task definition that serves your normal use case will work, just make sure you exit the process after reading the message from SQS.
So in this case, whenever the Fargate task comes up, it reads the message from SQS.
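A rough sketch of what the task's entry point could look like (the queue URL and the processing logic are placeholders):

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/s3-upload-events"  # hypothetical

def process(body):
    print(body)  # your normal use case goes here

def main():
    # Long-poll once for the S3 event that triggered this task.
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
    # Exit afterwards so the task does not keep running once the message is handled.

if __name__ == "__main__":
    main()
```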
To do this you would need to use an input transformer.
Each time an event rule is triggered, a JSON object is made available for use in the transformation.
As the event itself is not accessible within the container (unlike with Lambda functions), the idea is that you forward the key information as environment variables and work with it in your container.
At this time it does not look like every service supports this in the console, so you have the following options:
CloudFormation
Terraform
CLI
You can view a tutorial for this exact scenario from this link.
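To give a feel for the approach, here is a hedged boto3 sketch of attaching a Fargate target with an input transformer; every ARN, name, subnet, and event path below is a placeholder, and it assumes the transformed input is applied to the task as containerOverrides, which is the pattern the tutorial describes:

```python
import boto3

events = boto3.client("events")

events.put_targets(
    Rule="s3-upload-rule",  # hypothetical rule name
    Targets=[{
        "Id": "fargate-task",
        "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster",
        "RoleArn": "arn:aws:iam::123456789012:role/events-run-task",
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/my-task",
            "TaskCount": 1,
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {"Subnets": ["subnet-0123456789abcdef0"]},
            },
        },
        # Pull fields out of the event and inject them as container environment
        # variables; placeholders are left unquoted so EventBridge substitutes
        # the extracted string values.
        "InputTransformer": {
            "InputPathsMap": {
                "bucket": "$.detail.bucket.name",
                "key": "$.detail.object.key",
            },
            "InputTemplate": (
                '{"containerOverrides": [{"name": "app", "environment": ['
                '{"name": "S3_BUCKET", "value": <bucket>}, '
                '{"name": "S3_KEY", "value": <key>}]}]}'
            ),
        },
    }],
)
```

The container then reads S3_BUCKET and S3_KEY from its environment like any other variable.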

How to extract event relayed from AWS EventBridge to ECS Fargate

I articulate the question as follows:
Is the EventBridge event relayed to the ECS Task? (I can't see how useful it could be if the event is not relayed.)
If the event is relayed, how can it be extracted from within, say, a Node app running as the Task?
Some context is due: it is possible to set an EventBridge rule to trigger ECS Fargate Tasks as the result of events sourced from, say, CodeCommit. Mind you, the issue here is the sink/target, not the source. I was able to trigger a Fargate Task as I updated my repo; I could have used other events. My challenge resides in extracting the relayed event (in this case, repository name, commit ID, etc.) from Fargate.
The EventBridge documentation is clear on how to set the rules to trigger events but is mum on how events can be extracted - which makes sense, as the sink/target documentation should carry the necessary reference. But the ECS documentation is not clear on how to extract relayed events.
I was able to inspect the metadata and process.env. I could not find the event in either of the stores.
I have added a CloudWatch Log Group as a target for the same rule and was able to extract the event there. So it is certainly relayed to some targets, but I am not sure whether events are relayed to ECS Tasks.
Therefore, the questions arise: is the event relayed to the ECS Task? If so, how would you access it?

Idea and guidelines on end to end AWS solution

I want to build an end to end automated system which consists of the following steps:
Getting data from source to landing bucket AWS S3 using AWS Lambda
Running some transformation job using AWS Lambda and storing in processed bucket of AWS S3
Running Redshift copy command using AWS Lambda to push the transformed/processed data from AWS S3 to AWS Redshift
From the above points, I've completed pulling the data, transforming it, and running a manual COPY command on Redshift using a SQL query tool.
Doubts:
I've heard AWS CloudWatch can be used to schedule/automate things, but I've never worked with it. So, if I want to achieve the steps above in a streamlined fashion, how should I go about it?
Should I use Lambda to trigger the COPY and INSERT statements? Or are there better AWS services for doing this?
Any other suggestion on other AWS Services and of the likes are most welcome.
Constraint: I want as many tasks as possible to be serverless (except for the semantic layer, Redshift).
CloudWatch:
Your options here are either to use CloudWatch Alarms or Events.
With alarms, you can respond to any metric of your system (eg CPU utilization, Disk IOPS, count of Lambda invocations etc) when it crosses some threshold, and when this alarm is triggered, invoke a lambda function (or send SNS notification etc) to perform a task.
With events you can use either a cron expression or some AWS service event (eg EC2 instance state change, SNS notification etc) to then trigger another service (eg Lambda), so you could for example run some kind of clean-up operation via lambda on a regular schedule, or create a snapshot of an EBS volume when its instance is shut down.
Lambda itself is a very powerful tool and should allow you to program a decent copy/insert function in a language you are familiar with. AWS also maintains several GitHub repos with lots of examples, such as its serverless example applications and code samples. There may be other services that would work in your specific case, but part of Lambda's power is its flexibility.
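For the scheduled-events route, a minimal boto3 sketch might look like this; the rule name, cron expression, and Lambda name/ARN are placeholders:

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Nightly rule (02:00 UTC) that will invoke the copy/insert Lambda.
rule_arn = events.put_rule(
    Name="nightly-redshift-copy",
    ScheduleExpression="cron(0 2 * * ? *)",
    State="ENABLED",
)["RuleArn"]

# Allow the rule to invoke the function, then attach the function as a target.
lambda_client.add_permission(
    FunctionName="redshift-copy",
    StatementId="allow-nightly-rule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)
events.put_targets(
    Rule="nightly-redshift-copy",
    Targets=[{
        "Id": "copy-lambda",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:redshift-copy",
    }],
)
```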