Can I schedule a lambda function execution with a lambda function? - amazon-web-services

I'm looking for the ability to programmatically schedule a lambda function to run a single time with another lambda function. For example, I made a request to myFirstFunction with date and time parameters, and then at that date and time, have mySecondFunction execute. Is that possible only with stateless AWS services? I'm trying to avoid an always-on ec2 instance.
Most of the results I'm finding for scheduling Lambda functions have to do with CloudWatch and regularly scheduled events, not ad-hoc events.

This is a perfect use case for AWS Step Functions.
Use a Wait state with SecondsPath or TimestampPath to add the required delay before executing the Next state.
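As a rough sketch of that idea (the state machine name, role ARN, Lambda ARN, and timestamp below are placeholders, not values from the question), myFirstFunction could start an execution of a state machine whose Wait state reads the target time from the execution input:
import json
import boto3

sfn = boto3.client('stepfunctions')

# Amazon States Language definition: wait until the timestamp supplied in the
# execution input, then invoke the second Lambda.
definition = {
    "StartAt": "WaitUntil",
    "States": {
        "WaitUntil": {
            "Type": "Wait",
            "TimestampPath": "$.executeAt",
            "Next": "RunSecondFunction"
        },
        "RunSecondFunction": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:mySecondFunction",
            "End": True
        }
    }
}

# One-time setup (could also be done in the console or via CloudFormation).
state_machine = sfn.create_state_machine(
    name='delayed-invoke',
    definition=json.dumps(definition),
    roleArn='arn:aws:iam::123456789012:role/StepFunctionsExecutionRole'
)

# Inside myFirstFunction: schedule mySecondFunction for the requested time.
sfn.start_execution(
    stateMachineArn=state_machine['stateMachineArn'],
    input=json.dumps({"executeAt": "2024-01-01T12:00:00Z", "payload": {"foo": "bar"}})
)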

What you're trying to do (scheduling a Lambda from another Lambda for a single ad-hoc run) isn't possible with the current AWS services.
So, in order to avoid an always-on EC2 instance, there are other options:
1) Use AWS default or custom metrics. You can use, for example, ApproximateNumberOfMessagesVisible or CPUUtilization (if your app drives CPU utilization up while processing a request). You can also create a custom metric and fire it when your instance is idle (depending on the app that's running on your instance).
The problem with this option is that you'll waste minutes you've already paid for (AWS charges a full hour, no matter whether you used your instance for only 15 minutes).
2) A better option, in my opinion, would be to run a Lambda function once per minute to check if your instances are idle and shut them down only if they are close to the full hour.
import boto3
from datetime import datetime

def lambda_handler(event, context):
    print('ManageInstances function executed.')
    environments = [['instance-id-1', 'SQS-queue-url-1'],
                    ['instance-id-2', 'SQS-queue-url-2']]  # ... add more (instance, queue) pairs
    ec2_client = boto3.client('ec2')
    for environment in environments:
        instance_id = environment[0]
        queue_url = environment[1]
        print('Instance:', instance_id)
        print('Queue:', queue_url)
        rsp = ec2_client.describe_instances(InstanceIds=[instance_id])
        if rsp:
            status = rsp['Reservations'][0]['Instances'][0]
            if status['State']['Name'] == 'running':
                current_time = datetime.now()
                diff = current_time - status['LaunchTime'].replace(tzinfo=None)
                total_minutes = divmod(diff.total_seconds(), 60)[0]
                minutes_to_complete_hour = 60 - divmod(total_minutes, 60)[1]
                print('Started time:', status['LaunchTime'])
                print('Current time:', str(current_time))
                print('Minutes passed:', total_minutes)
                print('Minutes to reach a full hour:', minutes_to_complete_hour)
                if minutes_to_complete_hour <= 2:
                    sqs_client = boto3.client('sqs')
                    response = sqs_client.get_queue_attributes(QueueUrl=queue_url, AttributeNames=['All'])
                    messages_in_flight = int(response['Attributes']['ApproximateNumberOfMessagesNotVisible'])
                    messages_available = int(response['Attributes']['ApproximateNumberOfMessages'])
                    print('Messages in flight:', messages_in_flight)
                    print('Messages available:', messages_available)
                    if messages_in_flight + messages_available == 0:
                        ec2_resource = boto3.resource('ec2')
                        instance = ec2_resource.Instance(instance_id)
                        instance.stop()
                        print('Stopping instance.')
            else:
                print('Status was not running. Nothing is done.')
        else:
            print('Problem while describing instance.')

UPDATE - I wouldn't recommend using this approach anymore. Things have changed regarding when TTL deletions happen, and they are not guaranteed to be close to the TTL time. The only guarantee is that the item will be deleted some time after the TTL has passed. Thanks @Mentor for highlighting this.
2 months ago AWS announced DynamoDB item TTL, which allows you to insert an item and mark when you wish for it to be deleted. It will be deleted automatically when the time comes.
You can use this feature in conjunction with DynamoDB Streams to achieve your goal - your first function inserts an item into a DynamoDB table, with the record's TTL set to when you want the second Lambda triggered. Set up a stream on the table that triggers your second Lambda; in that Lambda, check for deletion (REMOVE) events, and when one arrives, run your logic.
Bonus point - you can use the table item as a mechanism for the first lambda to pass parameters to the second lambda.
About DynamoDB TTL:
https://aws.amazon.com/blogs/aws/new-manage-dynamodb-items-using-time-to-live-ttl/
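A minimal sketch of the two ends of this pattern, with made-up table, attribute, and field names; TTL expirations appear in the stream as REMOVE records attributed to the dynamodb.amazonaws.com service principal:
import boto3

table = boto3.resource('dynamodb').Table('scheduled-invocations')

def first_function(event, context):
    # Schedule the "second" Lambda by writing an item whose TTL is the desired run time.
    table.put_item(Item={
        'id': event['requestId'],
        'ttl': int(event['runAtEpochSeconds']),   # TTL attribute configured on the table
        'payload': event['payload']               # parameters for the second Lambda
    })

def second_function(event, context):
    # Triggered by the table's stream (use a view type that includes the old image).
    for record in event['Records']:
        user = record.get('userIdentity', {})
        if record['eventName'] == 'REMOVE' and user.get('principalId') == 'dynamodb.amazonaws.com':
            # Attribute values arrive in DynamoDB JSON form, e.g. {'S': '...'}.
            payload = record['dynamodb']['OldImage'].get('payload')
            print('TTL expiry detected, running scheduled work with', payload)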

It does depend on your use case, but the idea of triggering something at a later date is a common pattern. The way I do it serverless: I have a React application that triggers an action to store a date in the future. I take a date string like 24-12-2020, check that the format is the one Date() actually expects (if not, I might try 12-24-2020 and see what I get), and then convert it to a Unix timestamp. In JavaScript/React I use this code:
new Date(action.data).getTime() / 1000
where action.data is the date and maybe the time for the action.
I run React in Amplify (serverless) and store that value in DynamoDB (serverless). I then run a Lambda function (serverless) that checks DynamoDB for any due dates (I actually store Unix timestamps) and compares the stored timestamp with the current one; both are plain numbers, so the comparison is easy. This seems to me to be super easy and very reliable.
I just set the cron schedule on the Lambda to whatever suits the approximate frequency required; in most cases running a Lambda every five minutes is pretty good, although if I were only operating this in a certain time zone for a business-weekday application I would control the Lambda a little more. Lambda is free for the first 1M requests per month, and running it every few minutes will cost nothing. Obviously things change, so you will need to check the pricing for your region.
You will never get perfect timing in this scenario. It will, however, for the vast majority of use cases be close enough, depending on the schedule of the Lambda function - you could set it up to check every minute or just once per day; it all depends on your application.
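Roughly, the scheduled poller could look like this (a sketch in Python; the table name 'scheduled-actions' and attributes 'runAt' and 'id' are made up for illustration):
import time
import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource('dynamodb').Table('scheduled-actions')

def lambda_handler(event, context):
    # Runs on a fixed schedule (e.g. every 5 minutes via a CloudWatch Events rule)
    # and fires anything whose stored Unix timestamp has passed.
    now = int(time.time())
    due = table.scan(FilterExpression=Attr('runAt').lte(now))['Items']
    for item in due:
        print('Due item:', item)   # do the actual work here, then remove the item
        table.delete_item(Key={'id': item['id']})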
Alternatively, if I wanted an instant reaction to an event, I might use SMS, SQS, or Kinesis to stream a message instantly; it all depends on your use case.

I'd opt for enqueuing deferred work to SQS using message timers in myFirstFunction.
Currently, you can't use SQS as a Lambda event source, but you can either periodically schedule mySecondFunction to check the queue via scheduled CloudWatch Events (somewhat of a variant of the other options you've found) or use a CloudWatch alarm on the ApproximateNumberOfMessagesVisible to fire an SNS message to a Lambda and avoid constant polling for queues that are frequently inactive for long periods.
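A message timer is just the DelaySeconds parameter on send_message, capped at 15 minutes, so it only covers short deferrals on its own; a minimal sketch with a placeholder queue URL:
import json
import boto3

sqs = boto3.client('sqs')

def my_first_function(event, context):
    # Defer the work by up to 900 seconds (the SQS maximum for a message timer).
    sqs.send_message(
        QueueUrl='https://sqs.us-east-1.amazonaws.com/123456789012/deferred-work',
        MessageBody=json.dumps({'runAt': event['runAt'], 'payload': event['payload']}),
        DelaySeconds=900
    )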

Related

Boto3 invocations of long-running Lambda runs break with TooManyRequestsException

Experience with "long-running" Lambdas
In my company, we recently ran into this behaviour when triggering Lambdas that run for > 60 seconds (boto3's default timeout for connection establishment and reads).
The beauty of the Lambda invocation with boto3 (using the 'InvocationType' 'RequestResponse') is that the API returns the result state of the respective Lambda run, so we wanted to stick with that.
The issue seems to be that the client fires too many requests per minute on the standing connection to the API. We therefore experimented with the boto3 client configuration, but increasing the read timeout resulted in new (unwanted) invocations after each timeout period, and increasing the connection timeout triggered a new invocation after the Lambda had finished.
Workaround
As various investigations and experiments with boto3's Lambda client did not result in a working setup using 'RequestResponse' invocations,
we now circumvent the problem by making use of CloudWatch Logs. For this, the Lambda has to be set up to write to an accessible log group; those logs can then be queried for the state. You would invoke the Lambda and monitor it like this:
import time
import boto3

lambda_client = boto3.client('lambda')
logs_client = boto3.client('logs')

invocation = lambda_client.invoke(
    FunctionName='your_lambda',
    InvocationType='Event'
)
# Identifier of the invoked Lambda run
request_id = invocation['ResponseMetadata']['RequestId']

while True:
    # filter the logs for the Lambda end event
    events = logs_client.filter_log_events(
        logGroupName='your_lambda_loggroup',
        filterPattern=f'"END RequestId: {request_id}"'
    ).get('events', [])
    if len(events) > 0:
        # the Lambda invocation finished
        break
    time.sleep(5)  # avoid hammering the CloudWatch Logs API
This approach works for us now, but it's honestly ugly. To make it slightly better, I recommend setting a time range filter (the startTime parameter) in the filter_log_events call.
One thing that has not been tested (yet): the approach above only tells you whether the Lambda terminated, not whether it failed or succeeded, and the default logs don't hold anything useful in that regard. Therefore, I will investigate whether a Lambda run can know its own request id at runtime. Then the Lambda code could be prepared to also write error messages with the request id, which could be filtered for as well.
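For what it's worth, in the Python runtime the handler's context object exposes aws_request_id, so the invoked Lambda could tag its own error output; a sketch (do_work is a placeholder for the real workload):
def do_work(event):
    # placeholder for the actual workload
    ...

def lambda_handler(event, context):
    try:
        do_work(event)
    except Exception as exc:
        # Emit a line the monitoring loop could also filter for,
        # e.g. filterPattern=f'"ERROR RequestId: {request_id}"'
        print(f'ERROR RequestId: {context.aws_request_id} {exc}')
        raise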

Recursive AWS lambda with updating state

I have an AWS Lambda that polls an external server for new events every 6 hours. On every call, if there are any new events, it publishes the updated total number of events polled to an SNS topic. So I essentially need to call the Lambda at fixed intervals but also pass a counter state across calls.
I'm currently considering the following options:
- Store the counter somewhere on EFS/S3, but that seems overkill for a simple number
- EventBridge, which would be OK to schedule the execution, but doesn't store state across calls
- A Step Function with a loop + wait around the Lambda would do it, but it doesn't seem to be the most efficient/cost-effective way to do it
- Use SQS with a delay so that the Lambda essentially triggers itself, passing the updated state. Again, I don't think this is the most effective, and to actually reach the 6-hour delay I would have to implement some checks/delays within the Lambda, as the max delay for SQS is 15 minutes
What would be the best way to do it?
For scheduling Lambda at intervals, you can use CloudWatch Events. Scheduling Lambda with the Serverless framework is a breeze: a cron-type statement can schedule your Lambda call. Here's a guide on scheduling: https://www.serverless.com/framework/docs/providers/aws/events/schedule
As for saving data, you can use AWS Systems Manager Parameter Store. It's a simple key-value store suited to such a small amount of data.
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html
Or you can save it in DynamoDB. Since the data is small and the frequency is low, you won't be charged much and there's no hassle of reading or parsing files.
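A minimal sketch of the Parameter Store variant (the parameter name and the two helper functions are placeholders):
import boto3

ssm = boto3.client('ssm')
PARAM = '/my-poller/total-event-count'   # hypothetical parameter name

def lambda_handler(event, context):
    # Read the previous counter (default to 0 on the very first run).
    try:
        total = int(ssm.get_parameter(Name=PARAM)['Parameter']['Value'])
    except ssm.exceptions.ParameterNotFound:
        total = 0

    new_events = poll_external_server()       # placeholder for the 6-hourly poll
    if new_events:
        total += len(new_events)
        ssm.put_parameter(Name=PARAM, Value=str(total), Type='String', Overwrite=True)
        publish_to_sns(total)                  # placeholder for the SNS publish

def poll_external_server():
    return []   # placeholder

def publish_to_sns(total):
    print('publishing total:', total)   # placeholder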

Trigger another lambda after a week of first lambda execution

I am working on code where Lambda Function 1 (call it L1) executes on messages from an SQS queue. I want to execute another Lambda (call it L2) exactly a week after L1 completes, and I want to pass L1's output to L2.
Execution Environment: Java
For my application, we are expecting around 10k requests on L1 per day. And same number of requests for L2.
If it runs for a week, we can have around 70k active executions at peak.
Things that I have tried:
CloudWatch Events with cron: I can schedule a cron with a specified time or date that will trigger L2, but I couldn't find a way to pass input with a scheduled CloudWatch event.
CloudWatch Events with new rules: At the end of the first Lambda I can create a new CloudWatch rule with a specified time and specified input. But that would create a new rule per request (in my case, around 10k new CloudWatch rules every day). Not sure if that is good practice or even supported.
Step Functions: There are two types of Step Functions available today.
Standard: Supports waits of up to a year, but only supports 25k active executions at any time. Won't scale, since my application will already have around 70k active executions at the end of the first week.
https://docs.aws.amazon.com/step-functions/latest/dg/limits.html
Express: Doesn't have a limit on the number of active executions but supports a maximum execution time of 5 minutes; it will time out after that.
https://docs.aws.amazon.com/step-functions/latest/dg/express-limits.html
It would be easy to create a new CloudWatch Events rule with the "week later" Lambda as a target as the last step in the first Lambda. You would set a rule with a cron expression that runs one time, one week out, and the target has an input field for the payload.
You didn't indicate your programming environment but you can do something similar to this (pseudo code, based on Java SDK v2):
String lambdaArn = "the one week from today lambda arn";
// One-shot cron expression for the desired date/time one week out
String ruleArn = client.putRule(PutRuleRequest.builder()
        .scheduleExpression("cron(17 20 23 7 ? *)")
        .name("myRule")
        .build()).ruleArn();
Target target = Target.builder().id("weekLaterLambda").arn(lambdaArn)
        .input("{\"message\": \"blah\"}").build();
client.putTargets(PutTargetsRequest.builder().rule("myRule").targets(target).build());
This will create a Cloudwatch Event Rule that runs one time, 1 week from today with the input as shown.
Major Edit
With your new requirements (at least 1 week later, tens of thousands of events) I would not use the method I described above, as there are just too many things happening. Instead I would have a database of events that acts as a queue. Either a DynamoDB or an RDS database will suffice. At the end of each "primary" Lambda run, insert an event with the date and time of the next run. For example, today, July 18, I would insert July 25. The table would be something like (PostgreSQL syntax):
create table event_queue (
    run_time timestamp not null,
    lambda_input varchar(8192)
);
create index on event_queue (run_time);
Where the lambda_input column has whatever data you want to pass to the "week later" Lambda. In PostgreSQL you would do something like:
insert into event_queue (run_time, lambda_input)
values ((current_timestamp + interval '1 week'), '{"value":"hello"}');
Every database has something similar to the date/time functions shown or the code to do this isn't terrible.
Now, in CloudWatch create a rule that runs once an hour (the resolution can be tuned). It will trigger a Lambda that "feeds" an SQS queue. The Lambda will query the database:
select * from event_queue where run_time < current_timestamp
and, for each row, put a message into an SQS queue. The last thing it does is delete those "old" rows using the same where clause.
On the other side you have your "week later" Lambdas that are getting events from the SQS queue. These Lambdas are idle until a set of messages are put into the queue. At that time they fire up and empty the queue, doing whatever the "week later" Lambda is supposed to do.
By running the "feeder" Lambda hourly you basically capture everything that is 1 week plus up to 1 hour old. The less often you run it, the more work your "week later" Lambdas have to do; conversely, running it every minute adds load to the database but removes it from the "week later" Lambda.
This should scale well, assuming that the "feeder" Lambda can keep up. 10k transactions per 24 hours is only ~416 transactions per hour, and reading the DB and creating the messages should be very quick. Even scaling that by 10 to 100k/day is still only ~4,000 rows and messages per hour, which, again, should be very doable.
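A rough sketch of the hourly "feeder" Lambda under these assumptions (PostgreSQL with the psycopg2 driver bundled into the function, and made-up connection settings and queue URL):
import boto3
import psycopg2   # must be packaged with the function or supplied as a layer

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/week-later-work'

def lambda_handler(event, context):
    conn = psycopg2.connect(host='db-host', dbname='events', user='feeder', password='...')
    with conn, conn.cursor() as cur:
        # Grab everything that is due and hand it to SQS.
        cur.execute("select run_time, lambda_input from event_queue where run_time < current_timestamp")
        for run_time, lambda_input in cur.fetchall():
            sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=lambda_input)
        # Remove the rows we just dispatched (same where clause as the select).
        cur.execute("delete from event_queue where run_time < current_timestamp")
    conn.close()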
CloudWatch is more for cron jobs. To trigger something at a specific timestamp or after X amount of time, I would recommend using Step Functions instead.
You can achieve your use case with a state machine that has a Wait state (you can tell it how long to wait based on your input) followed by your Lambda Task state. It will be similar to this example.

Configuring AWS Lambda to a parallel computations for Dynamodb Streams

I have a Flask app on EC2 and a Python 3.6 AWS Lambda architecture. When a response comes to Flask, a new item is added to DynamoDB, which triggers a Lambda that starts some process on the newly added item. For some strange reason the triggers aren't processed in parallel (a new Lambda invocation for each trigger) but one by one.
I tried setting concurrency limit to maximum value, but that didn't work.
I need to get the results as fast as possible and don't want to manage any of the scaling myself, so the triggers need to be processed in parallel, not one by one as they are now.
If you develop a Lambda function with Python, parallelism doesn't come by default. Lambda supports Python 2.7 and Python 3.6, both of which have multiprocessing and threading modules.
However, you should use multiprocessing.Pipe instead of multiprocessing.Queue to accomplish what you need without getting errors during execution of the Lambda function - the Lambda execution environment has no /dev/shm (shared memory), so multiprocessing.Queue and multiprocessing.Pool don't work there.
Please refer below link for more details about Source Code for Parallel Execution:
https://aws.amazon.com/blogs/compute/parallel-processing-in-python-with-aws-lambda/
Also, you can refer to the code below:
import time
import multiprocessing

region_maps = {
    "eu-west-1": {
        "dynamodb": "dynamodb.eu-west-1.amazonaws.com"
    },
    "us-east-1": {
        "dynamodb": "dynamodb.us-east-1.amazonaws.com"
    },
    "us-east-2": {
        "dynamodb": "dynamodb.us-east-2.amazonaws.com"
    }
}

def multiprocessing_func(region):
    time.sleep(1)
    endpoint = region_maps[region]['dynamodb']
    print('endpoint for {} is {}'.format(region, endpoint))

def lambda_handler(event, context):
    starttime = time.time()
    processes = []
    regions = ['us-east-1', 'us-east-2', 'eu-west-1']
    for region in regions:
        p = multiprocessing.Process(target=multiprocessing_func, args=(region,))
        processes.append(p)
        p.start()
    for process in processes:
        process.join()
    output = 'That took {} seconds'.format(time.time() - starttime)
    print(output)
    return output
Hope this helps.
The number of parallel Lambdas is controlled by the number of shards of the DynamoDB stream you are writing to.
With Amazon DynamoDB streams, AWS Lambda polls your stream and invokes your Lambda function. When your Lambda function is throttled, Lambda attempts to process the throttled batch of records until the time the data expires (this time period can be up to seven days for Amazon Kinesis). The throttled request is treated as blocking per shard, and Lambda doesn't read any new records from the shard until the throttled batch of records either expires or succeeds. If there is more than one shard in the stream, Lambda continues invoking on the non-throttled shards until one gets through. (source)
This is done to ensure that events are processed in the order in which they occurred in DynamoDB. The number of shards, however, is not directly controlled by you.
Given that, the best thing you can do is set a higher batch size on the Lambda trigger. That way you receive multiple events in the same invocation and can process them in parallel inside the Lambda function. The obvious drawback is that you may not be able to process all of them before the Lambda times out, and you will have to make sure the code is thread/process safe.
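A sketch of what that in-invocation parallelism could look like, using multiprocessing.Pipe as the blog post linked above suggests (process_record and the 'id' attribute are placeholders for the real per-item work):
import multiprocessing

def process_record(record, conn):
    # placeholder for the real per-item work
    new_image = record['dynamodb'].get('NewImage', {})
    conn.send(new_image.get('id', {}).get('S'))
    conn.close()

def lambda_handler(event, context):
    jobs, pipes = [], []
    for record in event['Records']:       # size governed by the trigger's batch size
        parent_conn, child_conn = multiprocessing.Pipe()
        p = multiprocessing.Process(target=process_record, args=(record, child_conn))
        jobs.append(p)
        pipes.append(parent_conn)
        p.start()
    for p in jobs:
        p.join()
    return [conn.recv() for conn in pipes]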
Probably writing to DynamoDB is blocking parallelism in this case.
An alternative architecture for fast and very scalable processing of items: add the items to an S3 bucket as files. A trigger on the S3 bucket then starts a Lambda for each new file - new file, new Lambda - so only your Lambda concurrency limit caps how many Lambdas run in parallel.

Lambda : Is any Batch processing scheduler available?

Problem: Fetch 2000 items from DynamoDB and process them batch by batch (batch size = 100), creating a POST request from each batch of 100 items.
Question: Is there any way I can achieve this through configuration in AWS?
PS: I've configured a cron schedule to run my Lambda function. I'm using Java. I've built a multi-threaded application that does this synchronously, but that eventually increases my computation time drastically.
I have the same problem and am thinking of solving it in the following way. Please let me know if you try it.
1) Schedule a job to fetch N items from DynamoDB using a Lambda function.
2) The Lambda function in #1 submits M messages to SQS to process each item and trigger further Lambda functions; in this case it should invoke the Lambda function M times.
3) Each Lambda function processes the request given in its message.
In order to achieve this you need to schedule an event via CloudWatch, set up SQS, and create a Lambda function triggered by SQS events.
Honestly, I am not sure if this is cost-effective, but it should work. Given that your fetch size is quite low, it should be reasonable.
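A rough sketch of the dispatcher step under these assumptions (table and queue names are made up, and one SQS message carries one 100-item batch):
import json
import boto3

dynamodb = boto3.resource('dynamodb')
sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/batch-requests'
BATCH_SIZE = 100

def lambda_handler(event, context):
    # Fetch the items to process (a single scan page here; paginate if needed).
    items = dynamodb.Table('items-to-process').scan(Limit=2000)['Items']
    # Send one SQS message per batch of 100; each message triggers one processing Lambda.
    for i in range(0, len(items), BATCH_SIZE):
        sqs.send_message(QueueUrl=QUEUE_URL,
                         MessageBody=json.dumps(items[i:i + BATCH_SIZE], default=str))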
Also, you can try using SNS; in that case you don't need to worry about SQS message polling.