Schedule an AWS Lambda on an interval with an immediate trigger - amazon-web-services

To run a Lambda on an interval, I can use an EventBridge rule: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule-schedule.html
For example, if I set the rule to 7 days, the Lambda will first execute 7 days after the rule is created.
What if I need to run this Lambda immediately after its creation and also keep running it on that interval?
How can I do this programmatically or in CDK?

Note: This solution only works for those who use AWS CodeBuild to deploy their Lambdas.
In this sample project (https://github.com/dashmug/us-covid-stats) I did a while back, I configured the Lambda to also have another trigger based on CodeBuild's "Build Succeeded" event.
In https://github.com/dashmug/us-covid-stats/blob/main/backend/serverless.yml#L71,
RefreshDataFromSources:
  handler: us_covid_stats/etl/handler.refresh_data_from_sources
  events:
    - schedule:
        enabled: false
        rate: rate(1 day)
    - cloudwatchEvent:
        enabled: false
        event:
          source:
            - aws.codebuild
          detail-type:
            - CodeBuild Build State Change
          detail:
            build-status:
              - SUCCEEDED
            project-name:
              - us-covid-stats-deployment-backend
you'll see that the Lambda is normally triggered once daily, but on top of the daily schedule it is also triggered whenever the deployment succeeds.
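The question also asks how to do this in CDK. Under the same assumption (deployments run through CodeBuild), a rough CDK v2 equivalent of those two triggers might look like the sketch below; the construct IDs, the CodeBuild project name, and the function fn are placeholders:

import { Duration } from 'aws-cdk-lib';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as lambda from 'aws-cdk-lib/aws-lambda';

declare const fn: lambda.Function; // the Lambda to trigger

// Inside a Stack or Construct:
// 1) the regular interval trigger, e.g. every 7 days
new events.Rule(this, 'WeeklySchedule', {
  schedule: events.Schedule.rate(Duration.days(7)),
  targets: [new targets.LambdaFunction(fn)],
});

// 2) the extra trigger: fire once the CodeBuild deployment succeeds
new events.Rule(this, 'RunAfterDeploy', {
  eventPattern: {
    source: ['aws.codebuild'],
    detailType: ['CodeBuild Build State Change'],
    detail: {
      'build-status': ['SUCCEEDED'],
      'project-name': ['my-deployment-project'], // placeholder project name
    },
  },
  targets: [new targets.LambdaFunction(fn)],
});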

You can create a CloudWatch Event Rule that fires a single time by using a very specific schedule expression, thanks to the fact that CloudWatch Events supports a Year field in cron expressions. You would need to know, roughly, how long the deployment takes. While this method isn't instant, it can be close enough to serve your purpose without complicated custom resources or post-deployment triggering, as long as a buffer of a few minutes is acceptable.
For example, in TypeScript this might look as follows:
Get the future date you'd like to target and add some minutes (10 in this case):
const date = new Date();
const minutesToAdd = 10;
const future = new Date(date.getTime() + minutesToAdd * 60000);
Convert that future date to a cron expression for exactly that single date, with the Day-of-week field set to ? ("no specific value") since we are specifying the day of the month:
const minutes = future.getUTCMinutes();
const hours = future.getUTCHours();
const days = future.getUTCDate(); // day of the month (getUTCDay() would return the day of the week)
const months = future.getUTCMonth() + 1;
const years = future.getUTCFullYear();
const futureCron = `${minutes} ${hours} ${days} ${months} ? ${years}`;
Create the schedule expression:
const futureEvent = events.Schedule.expression(`cron(${futureCron})`);
Use the expression to schedule an event rule:
new events.Rule(this, 'immediateTrigger', {
  schedule: futureEvent,
  targets: [new targets.LambdaFunction(someHandler)],
});
This results in a scheduled event that occurs at only a single point in UTC time. For example, if it were 2:55 PM UTC on Dec 5th 2021 when you deployed, it would create an expression for 3:05 PM on day 5 of the month, only in December, only in 2021: cron(5 15 5 12 ? 2021).
I had a need for this exact type of setup, so I created a library that simplifies the above process and generates schedule expressions for either one-time OnDeploy or one-time At a given future date. While anyone is free to use it, you should understand why it works and whether it's the right choice for your needs.

Related

How to prevent serverless cron from being changed by Daylight Savings time? [duplicate]

We have set up an EventBridge rule to trigger a Lambda. It should run every day at 9:30 AM local time (US EST/EDT in my case). The problem is that the schedule seems to be interpreted by EventBridge in UTC. Is there a way to always base it on a specific timezone, i.e. 9:30 AM regardless of the season?
Updated: November 2022
Use EventBridge Scheduler, as it allows you to schedule events at a specific date-time along with a timezone. It also supports one-time schedules.
Introducing Amazon EventBridge Scheduler - AWS Blog
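For this question, a minimal CDK sketch of such a schedule could look like the following; the ARNs and construct ID are placeholders, and the role must be assumable by scheduler.amazonaws.com with permission to invoke the function:

import { CfnSchedule } from 'aws-cdk-lib/aws-scheduler';

// Inside a Stack or Construct:
new CfnSchedule(this, 'nine-thirty-local', {
  flexibleTimeWindow: { mode: 'OFF' },
  // Every day at 09:30 in the given timezone, regardless of DST.
  // A one-time schedule would instead use e.g. 'at(2022-11-30T09:30:00)'.
  scheduleExpression: 'cron(30 9 * * ? *)',
  scheduleExpressionTimezone: 'America/New_York',
  target: {
    arn: 'arn:aws:lambda:us-east-1:123456789012:function:my-function', // placeholder
    roleArn: 'arn:aws:iam::123456789012:role/scheduler-invoke-role',   // placeholder
  },
});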
Original: July 2021
Either schedule another event to adjust the first event, or execute the lambda at both 9:30 EST and 9:30 EDT and have the lambda figure out which one should run.
Run another Lambda at 2 AM and 3 AM local time that adjusts the schedule of the first Lambda for daylight saving time. You can use your Lambda language's date/time handling to decide whether the event needs to be adjusted.
You could also schedule your original lambda to run at both the daylight-saving-adjusted and the non-adjusted time (so 14:30 UTC for EDT and 13:30 UTC for EST). Then the lambda could decide whether it was the proper time to execute based on a calendar check.
I prefer the first option because it's a clearer separation of duties.
Sadly, you can't do this, as only UTC time zone is used:
All scheduled events use UTC time zone and the minimum precision for schedules is 1 minute.
You would need a custom solution for what you want to do. For example, let AWS EventBridge trigger a Lambda function and have the function evaluate what else should be triggered based on its conversion of UTC to local time.
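Before Scheduler existed, the "run at both UTC times and let the function decide" idea from the answers above could be sketched like this (the timezone and target time are taken from the question; everything else is an assumption):

// Handler sketch: the rule fires at both 13:30 and 14:30 UTC,
// but only the invocation that lands on 09:30 America/New_York does any work.
export const handler = async (): Promise<void> => {
  const localTime = new Intl.DateTimeFormat('en-US', {
    timeZone: 'America/New_York',
    hour: '2-digit',
    minute: '2-digit',
    hourCycle: 'h23',
  }).format(new Date());

  if (localTime !== '09:30') {
    console.log(`Skipping run, local time is ${localTime}`);
    return;
  }

  // ... actual work goes here ...
};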
As noted in another answer here, this is possible using EventBridge Scheduler. In the example below I'm using CDK to schedule a Lambda at 6 AM and 8 AM, Monday to Friday.
// Lambda
const func = new NodejsFunction(this, 'func-id', {
  runtime: Runtime.NODEJS_18_X
});

// Role
const role = new Role(this, 'role-id', {
  managedPolicies: [{ managedPolicyArn: 'arn:aws:iam::aws:policy/service-role/AWSLambdaRole' }],
  assumedBy: new ServicePrincipal('scheduler.amazonaws.com')
});

// EventBridge Schedule
new CfnSchedule(this, 'schedule-id', {
  flexibleTimeWindow: {
    mode: 'OFF',
  },
  scheduleExpression: 'cron(0 6,8 ? * MON-FRI *)',
  scheduleExpressionTimezone: 'Europe/Amsterdam',
  target: {
    arn: func.functionArn,
    roleArn: role.roleArn,
  },
});

Trigger another lambda after a week of first lambda execution

I am working on code where Lambda Function 1 (call it L1) executes on messages from an SQS queue. I want to execute another Lambda (call it L2) exactly a week after L1 completes, and I want to pass L1's output to L2.
Execution Environment: Java
For my application, we are expecting around 10k requests on L1 per day. And same number of requests for L2.
If it runs for a week, we can have around 70k active executions at peak.
Things that I have tried:
Cloudwatch events with cron: I can schedule a cron with a specified time or date which will trigger L2. But I couldn't find a way to pass input with a scheduled CloudWatch event.
Cloudwatch events with new rules: At the end of the first Lambda I can create a new CloudWatch rule with a specified time and specified input. But that will create as many rules as there are requests (in my case, around 10k new CloudWatch rules every day). Not sure if that is a good practice or even supported.
Step Functions: There are two types of Step Functions available today.
Standard: Supports waiting for up to a year, but only supports 25k active executions at any time. Won't scale, since my application will already have around 70k active executions at the end of the first week.
https://docs.aws.amazon.com/step-functions/latest/dg/limits.html
Express: Doesn't have a limit on the number of active executions but supports a maximum execution time of 5 minutes. It will time out after that.
https://docs.aws.amazon.com/step-functions/latest/dg/express-limits.html
It would be easy to create a new CloudWatch Rule with the "week later" Lambda as a target as the last step in the first Lambda. You would set a rule with a cron that runs one time, one week out, and the target has an input field you can use to pass data.
You can do something similar to the following (pseudocode, based on the Java SDK v2):
String lambdaArn = "the one week from today lambda arn";
String ruleArn = client.putRule(PutRuleRequest.builder()
        .scheduleExpression("cron(17 20 23 7 ? *)") // minutes hours day-of-month month day-of-week year; use a specific year to run only once
        .name("myRule")
        .build()).ruleArn();
Target target = Target.builder().id("weekLaterTarget").arn(lambdaArn)
        .input("{\"message\": \"blah\"}").build();
client.putTargets(PutTargetsRequest.builder().rule("myRule").targets(target).build());
This will create a Cloudwatch Event Rule that runs one time, 1 week from today with the input as shown.
Major Edit
With your new requirements (at least 1 week later, tens of thousands of events) I would not use the method I described above, as there would just be too many things happening. Instead I would have a database of events that acts as a queue. Either a DynamoDB or an RDS database will suffice. At the end of each "primary" Lambda run, insert an event with the date and time of the next run. For example, today, July 18, I would insert July 25. The table would be something like (PostgreSQL syntax):
create table event_queue (
  run_time timestamp not null,
  lambda_input varchar(8192)
);
create index on event_queue( run_time );
Where the lambda_input column has whatever data you want to pass to the "week later" Lambda. In PostgreSQL you would do something like:
insert into event_queue (run_time, lambda_input)
values ((current_timestamp + interval '1 week'), '{"value":"hello"}');
Every database has something similar to the date/time arithmetic shown, or computing the date in your application code isn't terrible either.
Now, in CloudWatch create a rule that runs once an hour (the resolution can be tuned). It will trigger a Lambda that "feeds" an SQS queue. The Lambda will query the database:
select * from event_queue where run_time < current_timestamp
and, for each row, put a message into an SQS queue. The last thing it does is delete these "old" rows using the same where clause.
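A rough sketch of that feeder Lambda in TypeScript, assuming the pg client, AWS SDK v3, and DATABASE_URL / QUEUE_URL environment variables (all placeholders):

import { Client } from 'pg';
import { SQSClient, SendMessageCommand } from '@aws-sdk/client-sqs';

const sqs = new SQSClient({});

export const handler = async (): Promise<void> => {
  const db = new Client({ connectionString: process.env.DATABASE_URL });
  await db.connect();
  try {
    // Everything that has come due since the last run.
    const { rows } = await db.query(
      'select lambda_input from event_queue where run_time < current_timestamp'
    );

    // Forward each due row to SQS; the "week later" Lambda consumes the queue.
    for (const row of rows) {
      await sqs.send(new SendMessageCommand({
        QueueUrl: process.env.QUEUE_URL,
        MessageBody: row.lambda_input,
      }));
    }

    // Same where clause: clear out what was just forwarded. (A stricter version
    // would delete by key, or use `delete ... returning`, to avoid losing rows
    // inserted between the select and the delete.)
    await db.query('delete from event_queue where run_time < current_timestamp');
  } finally {
    await db.end();
  }
};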
On the other side you have your "week later" Lambdas that are getting events from the SQS queue. These Lambdas are idle until a set of messages are put into the queue. At that time they fire up and empty the queue, doing whatever the "week later" Lambda is supposed to do.
By running the "feeder" Lambda hourly you basically capture everything that is 1 week plus up to 1 hour old. The less often you run it, the more work your "week later" Lambdas have to do; conversely, running it every minute adds load to the database but removes it from the "week later" Lambda.
This should scale well, assuming the "feeder" Lambda can keep up. 10k transactions / 24 hours is only about 416 transactions per hour, and reading the DB and creating the messages should be very quick. Even scaling that by 10 to 100k/day is still only ~4,000 rows and messages per hour which, again, should be very doable.
CloudWatch is more for cron jobs. To trigger something at a specific timestamp or after X amount of time, I would recommend using Step Functions instead.
You can achieve your use case with a state machine that uses a Wait state (you can tell it how long to wait based on your input) followed by your Lambda Task state, similar to the sketch below.
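A minimal CDK v2 sketch of that pattern (the construct IDs and the weekLaterFn function are placeholders; L1 would start an execution of this state machine, passing its output as the execution input):

import { Duration } from 'aws-cdk-lib';
import * as sfn from 'aws-cdk-lib/aws-stepfunctions';
import * as tasks from 'aws-cdk-lib/aws-stepfunctions-tasks';
import * as lambda from 'aws-cdk-lib/aws-lambda';

declare const weekLaterFn: lambda.Function; // the "L2" Lambda

// Inside a Stack or Construct: wait a fixed week, then invoke L2
// with the input the execution was started with.
const definition = new sfn.Wait(this, 'WaitOneWeek', {
  time: sfn.WaitTime.duration(Duration.days(7)),
}).next(new tasks.LambdaInvoke(this, 'InvokeWeekLater', {
  lambdaFunction: weekLaterFn,
}));

new sfn.StateMachine(this, 'WeekLaterStateMachine', {
  definitionBody: sfn.DefinitionBody.fromChainable(definition),
});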

Cron Job to trigger AWS Lambda not working as expected

I want to trigger my AWS Lambda function on the 15th of every month, but my function is triggering every 30 minutes. My function in serverless.yml is:
monthlyTbAlert:
  warmup: true
  handler: handlers/monthly-tbalert/index.monthlyTbAlert
  timeout: 60
  events:
    - schedule: cron(0 0 10 15 1/1 ? *)
      enabled: true
If you want to debug your cron expressions before deploying them, you can go to CloudWatch -> Rules and test them there. It's a very useful playground if you're unsure about what may be going on.
If we grab the expression provided in @Stargazer's answer (which, by the way, is very accurate) and paste it into CloudWatch Rules, we can see when the next triggers will happen.
Using yours, however, no upcoming trigger dates are shown. If it really is running every 30 minutes, then there is potentially a bug in CloudWatch Rules that triggers invalid expressions every 30 minutes.
According to the AWS docs, the format is cron(Minutes Hours Day-of-month Month Day-of-week Year).
So you should use this:
0 - minute 0 of the hour
10 - hour of the day, so 10:00
15 - 15th day of the month
* - every month
? - regardless of the day of the week
* - every year
So your cron expression should be cron(0 10 15 * ? *) to execute your Lambda on the 15th day of every month at 10:00 AM (UTC).
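If you define the schedule in code instead of serverless.yml, the corrected expression plugs in unchanged; for example, as a CDK schedule (shown only to illustrate the expression):

import * as events from 'aws-cdk-lib/aws-events';

// 10:00 UTC on the 15th of every month.
const monthlySchedule = events.Schedule.expression('cron(0 10 15 * ? *)');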

Can I schedule a lambda function execution with a lambda function?

I'm looking for the ability to programmatically schedule a Lambda function to run a single time, using another Lambda function. For example, I make a request to myFirstFunction with date and time parameters, and then at that date and time have mySecondFunction execute. Is that possible with only stateless AWS services? I'm trying to avoid an always-on EC2 instance.
Most of the results I'm finding for scheduling Lambda functions have to do with CloudWatch and regularly scheduled events, not ad hoc events.
This is a perfect use case for AWS Step Functions.
Use a Wait state with SecondsPath or TimestampPath to add the required delay before executing the next state.
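For example, myFirstFunction could start an execution and carry the target time in the input, which a Wait state with "TimestampPath": "$.runAt" would then sleep until (sketch with AWS SDK v3; the state machine ARN and the runAt field name are assumptions):

import { SFNClient, StartExecutionCommand } from '@aws-sdk/client-sfn';

const sfnClient = new SFNClient({});

// runAt must be an ISO 8601 timestamp, e.g. '2021-12-05T15:05:00Z'.
export const scheduleSecondFunction = async (runAt: string, payload: unknown): Promise<void> => {
  await sfnClient.send(new StartExecutionCommand({
    stateMachineArn: process.env.STATE_MACHINE_ARN, // placeholder
    input: JSON.stringify({ runAt, payload }),
  }));
};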
What you're trying to do (schedule a Lambda from a Lambda) is not possible with the current AWS services.
So, in order to avoid an always-on EC2 instance, there are other options:
1) Use AWS default or custom metrics. You can use, for example, ApproximateNumberOfMessagesVisible or CPUUtilization (if your app shows high CPU utilization when processing a request). You can also create a custom metric and fire it when your instance is idle (depending on the app that's running on your instance).
The problem with this option is that you'll waste already-paid minutes (AWS charges a full hour, even if you only used your instance for 15 minutes).
2) A better option, in my opinion, would be to run a Lambda function once per minute to check if your instances are idle and shut them down only if they are close to the full hour.
import boto3
from datetime import datetime


def lambda_handler(event, context):
    print('ManageInstances function executed.')
    environments = [['instance-id-1', 'SQS-queue-url-1'], ['instance-id-2', 'SQS-queue-url-2'], ...]
    ec2_client = boto3.client('ec2')
    for environment in environments:
        instance_id = environment[0]
        queue_url = environment[1]
        print('Instance:', instance_id)
        print('Queue:', queue_url)
        rsp = ec2_client.describe_instances(InstanceIds=[instance_id])
        if rsp:
            status = rsp['Reservations'][0]['Instances'][0]
            if status['State']['Name'] == 'running':
                current_time = datetime.now()
                diff = current_time - status['LaunchTime'].replace(tzinfo=None)
                total_minutes = divmod(diff.total_seconds(), 60)[0]
                minutes_to_complete_hour = 60 - divmod(total_minutes, 60)[1]
                print('Started time:', status['LaunchTime'])
                print('Current time:', str(current_time))
                print('Minutes passed:', total_minutes)
                print('Minutes to reach a full hour:', minutes_to_complete_hour)
                if minutes_to_complete_hour <= 2:
                    sqs_client = boto3.client('sqs')
                    response = sqs_client.get_queue_attributes(QueueUrl=queue_url, AttributeNames=['All'])
                    messages_in_flight = int(response['Attributes']['ApproximateNumberOfMessagesNotVisible'])
                    messages_available = int(response['Attributes']['ApproximateNumberOfMessages'])
                    print('Messages in flight:', messages_in_flight)
                    print('Messages available:', messages_available)
                    if messages_in_flight + messages_available == 0:
                        ec2_resource = boto3.resource('ec2')
                        instance = ec2_resource.Instance(instance_id)
                        instance.stop()
                        print('Stopping instance.')
            else:
                print('Status was not running. Nothing is done.')
        else:
            print('Problem while describing instance.')
UPDATE - I wouldn't recommend using this approach any more. Things have changed with when TTL deletions happen, and they are not necessarily close to the TTL time. The only guarantee is that the item will be deleted some time after the TTL. Thanks @Mentor for highlighting this.
2 months ago AWS announced DynamoDB item TTL, which allows you to insert an item and mark when you wish for it to be deleted. It will be deleted automatically when the time comes.
You can use this feature in conjunction with DynamoDB Streams to achieve your goal: your first function inserts an item into a DynamoDB table, with the record's TTL set to the time you want the second Lambda triggered. Set up a stream that triggers your second Lambda. In that Lambda you identify deletion events, and if the event is a delete, you run your logic.
Bonus point - you can use the table item as a mechanism for the first lambda to pass parameters to the second lambda.
About DynamoDB TTL:
https://aws.amazon.com/blogs/aws/new-manage-dynamodb-items-using-time-to-live-ttl/
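A sketch of both halves, assuming TypeScript Lambdas with AWS SDK v3, a table whose TTL attribute is expiresAt, and a stream configured to include old images (all names are placeholders):

import { randomUUID } from 'node:crypto';
import { DynamoDBClient, PutItemCommand } from '@aws-sdk/client-dynamodb';
import type { DynamoDBStreamEvent } from 'aws-lambda';

const ddb = new DynamoDBClient({});

// First Lambda: store the payload with a TTL at the desired execution time.
export const scheduleIt = async (runAtEpochSeconds: number, payload: string): Promise<void> => {
  await ddb.send(new PutItemCommand({
    TableName: process.env.TABLE_NAME, // placeholder
    Item: {
      id: { S: randomUUID() },
      payload: { S: payload },
      expiresAt: { N: String(runAtEpochSeconds) }, // the table's TTL attribute
    },
  }));
};

// Second Lambda: attached to the table's stream; react only to TTL deletions.
export const onStream = async (event: DynamoDBStreamEvent): Promise<void> => {
  for (const record of event.Records) {
    const isTtlDelete =
      record.eventName === 'REMOVE' &&
      record.userIdentity?.principalId === 'dynamodb.amazonaws.com';
    if (!isTtlDelete) continue;

    const payload = record.dynamodb?.OldImage?.payload?.S;
    // ... run the deferred work with `payload` ...
  }
};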
It does depend on your use case, but wanting to trigger something at a later date is a common pattern. The way I do it serverlessly: I have a React application that triggers an action to store a future date. I take a date format like 24-12-2020 and convert it using Date(), after checking that the format is parsed correctly (I might try 12-24-2020 and see what I get). When I'm happy, I convert it to a Unix timestamp in JavaScript/React using this code:
new Date(action.data).getTime() / 1000
where action.data is the date and maybe the time for the action.
I run React in Amplify (serverless) and store the value in DynamoDB (serverless). I then run a Lambda function (serverless) that checks DynamoDB for any stored dates (I actually use the Unix time) and compares the stored Unix timestamp with the current one; both are numbers, so the comparison is easy. This seems to me to be super easy and very reliable.
I just set the cron schedule on the Lambda to whatever frequency is needed; in most cases running it every five minutes is pretty good, although if I were only operating in a certain time zone for a business-weekday application I would constrain the schedule a little more. Lambda is free for the first 1M invocations per month, so running it every few minutes will cost nothing. Obviously things change, so check current pricing in your region.
You will never get perfect timing in this scenario, but for the vast majority of use cases it will be close enough, given the polling frequency of the Lambda: you could set it up to check every minute or just once per day; it all depends on your application.
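The polling Lambda described above might look roughly like this (TypeScript with the SDK v3 Document client; the table name and the dueAt attribute are assumptions):

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, ScanCommand } from '@aws-sdk/lib-dynamodb';

const doc = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler = async (): Promise<void> => {
  const now = Math.floor(Date.now() / 1000);
  const { Items = [] } = await doc.send(new ScanCommand({
    TableName: process.env.TABLE_NAME,          // placeholder
    FilterExpression: 'dueAt <= :now',          // dueAt: the stored Unix timestamp
    ExpressionAttributeValues: { ':now': now },
  }));

  for (const item of Items) {
    // ... perform the due action, then delete or mark the item ...
  }
};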
Alternatively, if I wanted an instant reaction to an event, I might use SMS, SQS, or Kinesis to stream a message immediately; it all depends on your use case.
I'd opt for enqueuing deferred work to SQS using message timers in myFirstFunction.
Currently, you can't use SQS as a Lambda event source, but you can either periodically schedule mySecondFunction to check the queue via scheduled CloudWatch Events (somewhat of a variant of the other options you've found), or use a CloudWatch alarm on ApproximateNumberOfMessagesVisible to fire an SNS message to a Lambda, avoiding constant polling for queues that are frequently inactive for long periods.
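For completeness, enqueuing with a message timer looks roughly like this (AWS SDK v3; the queue URL is a placeholder). Note that DelaySeconds is capped at 900 seconds (15 minutes), so on its own it can't cover an arbitrary future date:

import { SQSClient, SendMessageCommand } from '@aws-sdk/client-sqs';

const sqs = new SQSClient({});

export const deferWork = async (payload: unknown, delaySeconds: number): Promise<void> => {
  await sqs.send(new SendMessageCommand({
    QueueUrl: process.env.QUEUE_URL,   // placeholder
    MessageBody: JSON.stringify(payload),
    DelaySeconds: delaySeconds,        // message timer: hidden from consumers until it elapses
  }));
};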