I have a serverless infrastructure that has a front-end web app. In this front-end, users can select specific times of day for taskX to occur.
I know how to (and have) set up events to occur on a recurring schedule, with a cron-based trigger created manually via the Serverless Framework. It's my understanding that I could use a cron expression to trigger at specific times, as explained here: How to trigger a Lambda function at specific time in AWS? and here: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-run-lambda-schedule.html ...
...however, I don't know how I would programmatically create (and also remove) these events using the AWS SDK. (Also note that I might have thousands of such events; perhaps EventBridge isn't the right tool?)
Suggestions on how to do this?
If I understood your question correctly, you need to set up one-time schedules. AWS recently (10 Nov 2022) launched a new service called EventBridge Scheduler, and it supports one-time schedules. I already added an answer here, please have a look. If it doesn't help, comment below and I'll see how I can help.
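For the programmatic part, here is a minimal boto3 sketch, assuming a Lambda function carries out taskX and an IAM role exists that EventBridge Scheduler can assume to invoke it; the function names, ARNs, and payload are placeholders, not your real resources. It creates a one-time schedule for a user-selected time and removes it again:

```python
import boto3

scheduler = boto3.client("scheduler")

def create_one_time_schedule(name, run_at_iso, target_lambda_arn, role_arn):
    """Create a one-time schedule, e.g. run_at_iso = '2024-06-01T09:30:00' (UTC)."""
    scheduler.create_schedule(
        Name=name,
        ScheduleExpression=f"at({run_at_iso})",
        ScheduleExpressionTimezone="UTC",
        FlexibleTimeWindow={"Mode": "OFF"},
        Target={
            "Arn": target_lambda_arn,      # hypothetical Lambda that performs taskX
            "RoleArn": role_arn,           # role Scheduler assumes to invoke the function
            "Input": '{"task": "taskX"}',  # payload delivered to the function
        },
    )

def remove_schedule(name):
    scheduler.delete_schedule(Name=name)
```

Regarding the "thousands of events" concern: the per-account schedule quota for EventBridge Scheduler is much higher than the per-bus rule quota for classic EventBridge rules, so a large number of user-specific schedules should be manageable this way.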
In Amazon Web Services (AWS) EventBridge, I can create cron-style scheduled rules to fire an event regularly.
When I'm creating or editing these, I often want to test that they work immediately (rather than waiting until the next scheduled execution). For testing purposes, triggering the rule's target manually is not always equivalent to the rule running (perhaps because a template is used to customise the event JSON).
Is there an easy way of triggering an AWS EventBridge scheduled job to run immediately, via the user interface or via the command line?
I generally do this by modifying the cron schedule to two minutes in the future, then reverting it, but this is tedious and error-prone. Perhaps there's an obvious button I've failed to see, or else a CLI command that I haven't found (e.g. at https://awscli.amazonaws.com/v2/documentation/api/latest/reference/events/index.html#cli-aws-events).
I think you are looking for a one-time schedule. For that, AWS recently (10 Nov 2022) launched a new service called EventBridge Scheduler. You can also set up a recurring schedule, but in your case I think you need a one-time schedule; with it you can trigger any target at whatever time you choose.
Hope this fulfills your need.
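If you want to script that workaround instead of clicking through the console, a minimal boto3 sketch with EventBridge Scheduler might look like the following; the schedule name, target ARN, role ARN, and input payload are placeholders you would replace with the rule's real target and event template:

```python
import json
from datetime import datetime, timedelta, timezone

import boto3

scheduler = boto3.client("scheduler")

# One-off run two minutes from now, against the same target the scheduled rule uses.
run_at = (datetime.now(timezone.utc) + timedelta(minutes=2)).strftime("%Y-%m-%dT%H:%M:%S")

scheduler.create_schedule(
    Name="manual-test-run",
    ScheduleExpression=f"at({run_at})",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:lambda:eu-west-1:123456789012:function:my-target",  # placeholder target
        "RoleArn": "arn:aws:iam::123456789012:role/scheduler-invoke-role",  # placeholder role
        "Input": json.dumps({"source": "manual-test"}),  # mirror the rule's event template here
    },
)
```

Deleting the schedule afterwards (delete_schedule) keeps things tidy, and nothing about the real rule has to be edited and reverted.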
I am looking for a way to monitor any changes that occur to my production environment, such as security group changes, EC2 create/stop/delete actions, database changes, S3 bucket changes, route table changes, subnet changes, etc. I was looking at using CloudTrail for this and monitoring all API calls. However, when testing, my subscribed SNS topic was not receiving any notifications when I made some changes as a test. Curious if anyone else has a workaround for this or if I am missing something? Maybe Lambda? Just looking for the easiest way to receive email notifications when any changes are made within my prod environment. Thank you.
If you're looking to audit the entire event history of AWS API calls, then you would use CloudTrail; remember to create a trail and enable the relevant options if you want to audit S3 or Lambda API calls.
By itself CloudTrail will provide auditing, but it can be combined with CloudWatch/EventBridge to automate actions based on specific API calls such as triggering a Lambda or triggering an SNS topic.
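To illustrate that combination, here is a rough boto3 sketch; the rule name, SNS topic ARN, and the particular EC2 API calls matched are examples only, and the topic's access policy must allow EventBridge to publish to it:

```python
import json

import boto3

events = boto3.client("events")

# Example event pattern: match CloudTrail-recorded EC2 security group changes.
pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": ["AuthorizeSecurityGroupIngress", "RevokeSecurityGroupIngress"],
    },
}

events.put_rule(Name="notify-sg-changes", EventPattern=json.dumps(pattern))

# Send matching events to an SNS topic (placeholder ARN) for email notifications.
events.put_targets(
    Rule="notify-sg-changes",
    Targets=[{"Id": "sns", "Arn": "arn:aws:sns:us-east-1:123456789012:prod-change-alerts"}],
)
```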
Regarding your own implementation so far: when using SNS, always ensure the subscription has been confirmed by the subscriber(s) first.
In addition, you can use AWS Config with many AWS resource types, which provides two benefits: you can maintain a history of changes to your resources, and you can configure compliance and remediation rules for them.
For example, say I build a workflow that uses 10 Lambda functions that trigger each other and are triggered by a DynamoDB table and an S3 bucket.
Is there any AWS tool that tracks how these triggers are tying together so I can easily visualize the whole workflow I’ve created?
Bang on. A few months ago, I too was in a similar situation with my distributed architecture running on AWS.
So far, I have found the following options as possibilities. I'm still figuring out which is more suitable, but I hope this information helps you.
1. AWS-native option :: Engineer your Lambda code to emit CloudWatch custom metrics for any important events from within the code (see the sketch after this list). Later, you can use a CloudWatch dashboard to visualize them.
2. Non-AWS options :: There are several of them, but all require you to instrument your code with their respective libraries/packages to transmit the needed information. Some of them support async invocations, so the tracing shouldn't keep your main Lambdas waiting.
IOPipe
Epsagon
3. Mix of AWS & non-AWS :: This is a more traditional approach to our problem. You log events to CloudWatch Logs (as Lambda does out of the box), then "ingest" these logs into popular log management and analysis SaaS tooling, which makes sense of them via pattern matching and other proprietary techniques.
Splunk Cloud
Datadog
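For option 1 above, here is a minimal sketch of emitting a custom metric from inside a Lambda handler; the namespace, metric name, and dimension are made-up examples:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

def handler(event, context):
    # ... your actual work here ...

    # Emit a custom metric for the important event so it can be charted on a dashboard.
    cloudwatch.put_metric_data(
        Namespace="MyWorkflow",  # hypothetical namespace
        MetricData=[{
            "MetricName": "OrderProcessed",  # hypothetical metric name
            "Dimensions": [{"Name": "Step", "Value": context.function_name}],
            "Value": 1,
            "Unit": "Count",
        }],
    )
```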
All the best! Keep me posted how it is going.
cheers,
ram
If you use CloudFormation, you can visualize the resource relationships with CloudFormation Designer. If you don't have the resources in a CloudFormation stack yet, you can create one by importing the existing resources.
I have written some cron jobs in my Django app and I want to schedule these jobs using the AWS Lambda service. Can someone please recommend a good approach to get this done?
I will answer this based on the question's title rather than the body, since I am not sure what the OP means by "I want to schedule these jobs using AWS Lambda".
If all you want is to trigger your Lambda function on a cron schedule, you can use CloudWatch Events to achieve this. You can specify regular cron expressions or some built-in rate expressions that AWS makes available; for example, rate(1 minute) will run your function every minute. You can see how to trigger a Lambda function via CloudWatch Events in the docs. See cron/rate to see all the available options.
CloudWatch Events is only one of many options to trigger your Lambda function. Your function can react to a whole bunch of AWS events, including S3, SQS, SNS, API Gateway, etc. You can see the full list of events here. Just pick one that fits your needs and you are good to go.
EDIT AFTER OP'S UPDATE:
Yes, what you're looking for is CloudWatch Events. Once you have the Lambda to poll your database in place, you can just create a rule in CloudWatch Events and have your Lambda be triggered by it. Please see the following steps for guidance.
Go to CloudWatch, click on Events and choose Schedule as the Event Source
(make sure to set up your own cron expression or select one of the pre-defined rate values)
On the right-hand side, choose your Lambda function accordingly.
Click on "Configure Details" when you are done, give it a name, leave the "Enabled" box checked and finally click on Create.
Go back to your Lambda function and you should see it's now triggered by CloudWatch Events (column on the left-hand side)
Your Lambda is now configured properly and will execute once a day.
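If you prefer to script the same setup rather than clicking through the console, a minimal boto3 sketch could look like this; the rule name, function name/ARN, and cron expression are placeholders:

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:poll-database"  # placeholder

# Run every day at 12:00 UTC (placeholder cron expression).
rule = events.put_rule(
    Name="daily-db-poll",
    ScheduleExpression="cron(0 12 * * ? *)",
    State="ENABLED",
)

# Point the rule at the Lambda function.
events.put_targets(Rule="daily-db-poll", Targets=[{"Id": "1", "Arn": FUNCTION_ARN}])

# Allow CloudWatch Events / EventBridge to invoke the function.
lambda_client.add_permission(
    FunctionName="poll-database",
    StatementId="allow-daily-db-poll",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)
```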
I have a batch job that I need to run on AWS. I'm wondering what's the best service to use. The job needs to run once a day, so I think that naturally AWS Lambda with a CloudWatch Rule triggering it would do it. However, I'm starting to think that AWS Lambda is meant to be used as a service to handle requests. This official AWS library for integrating Spring Boot is very much oriented towards handling HTTP requests, and when creating a Lambda via the AWS Console, only test cases that send an input to the Lambda can be written.
Then, is this a use case for AWS Lambda? Also, these functions can run up to 15 minutes. What should I use if my job needs to run longer?
The purpose of Lambda, as compared to AWS EC2, is to simplify building smaller, on-demand applications that are responsive to events and new information.
If your batch job completes within the 15-minute limit, then you can go with a Lambda function.
But if you need longer-running batch processing, you should check out AWS Batch.
Here is a nice article which demonstrates the usage of AWS Batch.
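For reference, once a job queue and job definition exist, submitting a Batch job from code (or from a scheduled Lambda/EventBridge rule) is a small boto3 call; the job, queue, and definition names below are placeholders:

```python
import boto3

batch = boto3.client("batch")

# Submit a job to an existing queue using an existing job definition (placeholder names).
batch.submit_job(
    jobName="nightly-report",
    jobQueue="default-queue",
    jobDefinition="report-job:1",
)
```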
If you are already using a batch framework like Spring Batch, you can also take a look at ECS scheduled tasks with Fargate.
With ECS Fargate you can launch and stop container services that you need to run only at certain times.
Here are some related articles on Fargate events and scheduled tasks, and on Scheduled Tasks in general.
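As a rough sketch of how such a scheduled Fargate task can be wired up with the EventBridge/CloudWatch Events API; the cluster, task definition, role, and subnet below are all placeholders:

```python
import boto3

events = boto3.client("events")

# Run once a day (placeholder schedule).
events.put_rule(Name="daily-batch-task", ScheduleExpression="rate(1 day)")

# Launch a Fargate task on the schedule; all ARNs and subnets below are placeholders.
events.put_targets(
    Rule="daily-batch-task",
    Targets=[{
        "Id": "fargate-task",
        "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/batch-cluster",
        "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",  # role EventBridge uses to call RunTask
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/batch-job:1",
            "TaskCount": 1,
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-0123456789abcdef0"],
                    "AssignPublicIp": "ENABLED",
                },
            },
        },
    }],
)
```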
If you're confident that your function will run for a maximum of 15 minutes, AWS Lambda could be the solution. Here are the AWS Lambda limits that could help you decide on that.
Also note that Lambda has cold starts: a function runs slower when it is first invoked, but eventually picks up the pace. Here are some good reads about it that could help you decide on the Lambda direction, but feel free to check any other articles that explain it better.
This one shows a brief list of things you may want to consider and the factors affecting cold starts.
This one gives a deeper explanation of the cold start with regard to how it works internally.
What should I use if my job needs to run longer?
Depending on your infrastructure, you could maybe explore Scheduled Tasks.