How to run a Lambda created in CDK on a regular basis?

As the title says - I've created a Lambda in the Python CDK and I'd like to know how to trigger it on a regular basis (e.g. once per day).
I'm sure it's possible, but I'm new to the CDK and I'm struggling to find my way around the documentation. From what I can tell it will use some sort of event trigger - but I'm not sure how to use it.
Can anyone help?

Sure - it's fairly simple once you get the hang of it.
First, make sure you're importing the right libraries:
from aws_cdk import core, aws_events, aws_events_targets
Then you'll need to make an instance of the Schedule class, using core.Duration to set the interval - let's say 1 day, for example:
lambda_schedule = aws_events.Schedule.rate(core.Duration.days(1))
Then you want to create the event target - this is the actual reference to the Lambda you created in your CDK earlier:
event_lambda_target = aws_events_targets.LambdaFunction(handler=lambda_defined_in_cdk_here)
Lastly you bind it all together in an aws_events.Rule like so:
lambda_cw_event = aws_events.Rule(
    self,
    "Rule_ID_Here",
    description="The once per day CloudWatch event trigger for the Lambda",
    enabled=True,
    schedule=lambda_schedule,
    targets=[event_lambda_target],
)
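If you're on CDK v2 (aws-cdk-lib) rather than v1, the same pattern should work with slightly different imports - roughly this (an untested sketch, reusing the same names as above):
from aws_cdk import Duration, aws_events, aws_events_targets

# In v2, Duration lives at the top level of aws_cdk instead of core.
lambda_schedule = aws_events.Schedule.rate(Duration.days(1))
event_lambda_target = aws_events_targets.LambdaFunction(handler=lambda_defined_in_cdk_here)

lambda_cw_event = aws_events.Rule(
    self,
    "Rule_ID_Here",
    description="The once per day CloudWatch event trigger for the Lambda",
    enabled=True,
    schedule=lambda_schedule,
    targets=[event_lambda_target],
)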
Hope that helps!

The question is for Python, but I thought it might be useful to post a JavaScript equivalent:
const aws_cdk_lib = require("aws-cdk-lib");
const aws_events = require("aws-cdk-lib/aws-events");
const aws_events_targets = require("aws-cdk-lib/aws-events-targets");

const MyLambdaFunction = <...SDK code for Lambda function here...>

new aws_events.Rule(this, "my-rule-identifier", {
  schedule: aws_events.Schedule.rate(aws_cdk_lib.Duration.days(1)),
  targets: [new aws_events_targets.LambdaFunction(MyLambdaFunction)],
});
Note: The above is for version 2 of the CDK (aws-cdk-lib) - it might need a few tweaks for other versions.

Related

Multiple actions - AwsCustomResource.on.. Is it possible?

I've created a custom resource for creating a ThingType, which is not yet implemented by AWS as a simple Cfn construct. My code looks like this:
String physicalResIdThingType = "ThisISMyThing";

AwsCustomResource.Builder.create(this, "myThingType")
    .onCreate(AwsSdkCall.builder()
        .service("Iot")
        .action("createThingType")
        .physicalResourceId(PhysicalResourceId.of(physicalResIdThingType))
        .parameters(new HashMap() {{
            put("thingTypeName", "myThingType");
        }})
        .build())
    .onDelete(AwsSdkCall.builder()
        .service("Iot")
        .action("deleteThingType")
        .physicalResourceId(PhysicalResourceId.of(physicalResIdThingType))
        .parameters(new HashMap() {{
            put("thingTypeName", "myThingType");
        }})
        .build())
    .policy(AwsCustomResourcePolicy.fromSdkCalls(SdkCallsPolicyOptions.builder()
        .resources(AwsCustomResourcePolicy.ANY_RESOURCE)
        .build()))
    .installLatestAwsSdk(false)
    .resourceType(Consts.CUSTOM_RESOURCE_THING_TYPE)
    .build();
It creates fine, but it won't let me delete the thing type, because a thing type must first be deprecated and only then deleted. In the console you have to wait up to 5 minutes after deprecation before the deletion completes.
My questions are:
Is it possible to skip this deprecation step?
If not, is it possible to do multiple AwsSdkCalls without writing my own Lambda function?
If neither of the above works, does anyone have an idea how I can use this simple AwsCustomResource approach to delete my thing type?
I can only see one way to do it if you want the thing type to be deleted via CloudFormation.
You can configure the timeout-in-minutes when calling CloudFormation create-stack. This value should be more than 5 minutes, plus an extra buffer for the other resources to be deleted.
When you receive a DELETE event in your custom resource, you can deprecate the thing type, wait 5 minutes, and then call delete-thing-type.
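A minimal Python sketch of such a Lambda-backed custom resource handler (the thing type name is illustrative, the CloudFormation success/failure response plumbing is omitted, and the function timeout would need to be raised above the sleep):
import time
import boto3

iot = boto3.client("iot")
THING_TYPE_NAME = "myThingType"  # illustrative

def handler(event, context):
    if event["RequestType"] == "Delete":
        # A thing type must be deprecated before it can be deleted.
        iot.deprecate_thing_type(thingTypeName=THING_TYPE_NAME)
        # AWS enforces a waiting period (roughly 5 minutes) between deprecation and deletion.
        time.sleep(6 * 60)
        iot.delete_thing_type(thingTypeName=THING_TYPE_NAME)
    # ... send the usual SUCCESS/FAILED response back to CloudFormation here.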

Using the CfnOutput created inside a LambdaRestApi in AWS CDK

I'm creating a LambdaRestApi as follows
this.gateway = new apigw.LambdaRestApi(this, "Endpoint", {
  handler: hello,
  endpointExportName: "MainURL"
})
and I'd like to get to the CfnOutput it generates - is that possible? I want to pass it to other functions and avoid creating a new one.
Specifically, the situation I'm tackling is this: I have a post stage that verifies things are working, and it uses the CfnOutput:
deployStage.addPost(
  new CodeBuildStep("VerifyAPIGatewayEndpoint", {
    envFromCfnOutputs: {
      ENDPOINT_URL: deploy.hcEndpoint
    },
    commands: [
      "curl -Ssf $ENDPOINT_URL",
      "curl -Ssf $ENDPOINT_URL/hello",
      "curl -Ssf $ENDPOINT_URL/test"
    ]
  })
)
That deploy.hcEndpoint is a CfnOutput that I'm manually creating after the LambdaRestApi is created:
const gateway = new LambdaRestApi(this, "Endpoint", {handler: hello})
this.hcEndpoint = new CfnOutput(this, "GatewayUrl", {value: gateway.url})
and then making sure that every construct makes it available to its parent.
Using CfnOutputs in the post-deployment step makes sense. I am trying to learn the proper way of doing things, and also to keep my stacks clean. With only one Lambda function it's no big deal, but with tens or hundreds it might be. And since LambdaRestApi already creates the output, it feels like I'm repeating myself by creating an identical one.
Assuming you are using the following code for your LambdaRestApi:
this.gateway = new apigw.LambdaRestApi(this, "Endpoint", {
  handler: hello,
  endpointExportName: "MainURL"
});
Referencing in same stack as LambdaRestApi
const outputValue = this.gateway.urlForPath("/");
Looking at the source code, the output value is just a call to urlForPath. The method is public, so you can use it directly.
Referencing from another stack
You can use cross stack references to get a reference to the output value of the stack.
import { Fn } from 'aws-cdk-lib';
const outputValue = Fn.importValue("MainURL");
If you try to use the first method in another stack, CDK will just generate a cross stack reference dynamically by adding extra outputs, so it is better to import the value directly.
I'd like to get to the CfnOutput it generates, is it possible?
Yes. Use the escape hatch syntax to get a reference to the CfnOutput that RestApi creates for the endpointExportName:
const urlCfnOutput = this.gateway.node.findChild('Endpoint') as cdk.CfnOutput;
console.log(urlCfnOutput.exportName);
// MainURL
console.log(urlCfnOutput.value);
// https://${Token[TOKEN.258]}.execute-api.us-east-1.${Token[AWS.URLSuffix.3]}/${Token[TOKEN.277]}/
Prefer standard CDK
As their name suggests, "escape hatches" are for "emergencies" when the CDK's standard solutions fail. Your use case may be one such instance, I don't know. But as @Kaustubh Khavnekar points out, you don't need the CfnOutput to get the URL token value.
console.log(this.gateway.url)
// https://${Token[TOKEN.258]}.execute-api.us-east-1.${Token[AWS.URLSuffix.3]}/${Token[TOKEN.277]}/

The create_profile_job() method is not accepting the new parameters

I am trying to use a Lambda function (written in Python) to create a series of profile jobs in DataBrew. AWS recently added a new parameter ("Configuration") to this operation, which I have added in my code. However, when I call the function, I get the following error message: "Unknown parameter in input: "Configuration", must be one of: DatasetName, EncryptionKeyArn, EncryptionMode, Name, LogSubscription, MaxCapacity, MaxRetries, OutputLocation, RoleArn, Tags, Timeout, JobSample." This does not match the parameter list in the boto3 documentation, which was recently updated to align with the new features added to DataBrew on 07/23/21. Has anyone else had this issue? If so, is there a timeline for this bug to be fixed?
It turns out that the version of boto3 that is available in Lambda by default is not the most up-to-date version. Hence, in order to use all the parameters for this method, you have to add the latest version of boto3 (and all its dependencies) as a Lambda layer.
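A quick way to confirm which version your function is actually running (and therefore whether the layer is being picked up) is to log it from the handler:
import boto3
import botocore

def handler(event, context):
    # If these are older than the release that documents the Configuration
    # parameter, create_profile_job will reject it as an unknown parameter.
    print("boto3:", boto3.__version__, "botocore:", botocore.__version__)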

AWS Lambda retry after certain time

I have a Lambda, triggered by a CloudWatch Events rule every day at 5am, that performs (among other things) a GET request against some service.
This service may or may not have the data I need by the time it's queried.
Therefore, if the data is not there, I need to re-invoke the Lambda at, say, 6am. If it's still not there, again at 7am, and so on.
How can I accomplish that using AWS infrastructure?
This seems like a very good use case for Step Functions.
Step Functions lets you create a workflow across AWS services, including Lambda, with decision branches and wait loops.
For example, you could create a workflow that is invoked daily at 5am: it invokes the Lambda, and the Lambda returns whether it could process the data or needs to wait longer. The state machine inspects the result and either ends the workflow (the data was processed) or goes into a wait state for an hour and then retries the function.
Check out this article that includes code samples for a workflow that is similar to yours.
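As a rough CDK sketch (Python, v1-style imports) of that wait-and-retry loop - assuming a check_lambda function whose payload includes a dataFound boolean (both names are illustrative, not from the article), and omitting the daily 5am rule that starts the state machine:
from aws_cdk import core, aws_stepfunctions as sfn, aws_stepfunctions_tasks as tasks

# Invoke the Lambda and keep only its payload for the Choice state below.
check_data = tasks.LambdaInvoke(
    self, "CheckData",
    lambda_function=check_lambda,
    output_path="$.Payload",
)
wait_an_hour = sfn.Wait(
    self, "WaitAnHour",
    time=sfn.WaitTime.duration(core.Duration.hours(1)),
)

# If the data was there, finish; otherwise wait an hour and check again.
definition = check_data.next(
    sfn.Choice(self, "DataAvailable")
    .when(sfn.Condition.boolean_equals("$.dataFound", True), sfn.Succeed(self, "Done"))
    .otherwise(wait_an_hour.next(check_data))
)

sfn.StateMachine(self, "RetryUntilDataIsReady", definition=definition)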
I had a similar situation: my Lambda needed to change its schedule on weekends, and this is how I solved it.
import boto3
from datetime import datetime

REGULAR_SCHEDULE = 'rate(20 minutes)'
WEEKEND_SCHEDULE = 'rate(1 hour)'
RULE_NAME = 'My Rule'

def lambda_handler(event, context):
    reschedule_event()
    keep_working()  # the Lambda's actual work, defined elsewhere

def is_weekend():
    # Saturday and Sunday are weekdays 5 and 6
    return datetime.utcnow().weekday() >= 5

def reschedule_event():
    """
    Changes the Lambda's schedule so it can take it easy on weekends :D
    """
    sched = boto3.client('events')
    current = sched.describe_rule(Name=RULE_NAME)
    if is_weekend() and 'minutes' in current['ScheduleExpression']:
        sched.put_rule(
            Name=RULE_NAME,
            ScheduleExpression=WEEKEND_SCHEDULE,
        )
    if not is_weekend() and 'hour' in current['ScheduleExpression']:
        sched.put_rule(
            Name=RULE_NAME,
            ScheduleExpression=REGULAR_SCHEDULE,
        )
I agree there must be a more proper way to do this, but time was short at the moment and that Lambda needed to go into production. You could do something similar to reschedule yours when there's no data to be retrieved, and then go back to the original schedule.

Is it possible to rename an AWS Lambda function?

I have created some AWS Lambda functions for testing purposes (named test_function something), and after testing I found that those functions can be used in the prod environment.
Is it possible to rename an AWS Lambda function, and how?
Or should I create a new one and copy-paste the source code?
The closest you can get to renaming the AWS Lambda function is using an alias, which is a way to name a specific version of an AWS Lambda function. The actual name of the function though, is set once you create it. If you want to rename it, just create a new function and copy the exact same code into it. It won't cost you any extra to do this (since you are only charged for execution time) so you lose nothing.
For a reference on how to name versions of the AWS Lambda function, check out the documentation here: Lambda function versions.
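For completeness, pointing a "prod" alias at a published version with boto3 might look something like this (the function name and version number are illustrative):
import boto3

lambda_client = boto3.client("lambda")

# Point a "prod" alias at a specific published version of the existing function.
lambda_client.create_alias(
    FunctionName="test_function_something",
    Name="prod",
    FunctionVersion="1",
)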
You cannot rename the function; your only option is to follow the suggestions already provided here or create a new one and copy-paste the code.
It's actually a good thing that you cannot rename it: if you could, the function would cease to work, because the policies attached to it would still point to the old name, unless you edited every single one of them manually or made them generic (which is ill-advised).
However, as a software development best practice, I suggest you always keep production and testing (staging) separate, effectively duplicating your environment.
This lets you test things in a safe environment where a mistake doesn't cost you anything important, and once you confirm that your new features work, you replicate them in production.
So in your case you would have two Lambdas, one called 'my-lambda-staging' and the other 'my-lambda-prod'. Use the Lambdas' environment variables to adapt to the current environment, so you don't need to refactor!
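A minimal sketch of that environment-variable approach, assuming a hypothetical STAGE variable set to "staging" on one function and "prod" on the other:
import os

# The same code runs in both functions; only the environment variable differs.
STAGE = os.environ.get("STAGE", "staging")
TABLE_NAME = f"my-table-{STAGE}"  # e.g. pick stage-specific resource names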
My solution is to export the function, create a new Lambda, then upload the .zip file to the new Lambda.
My solution for renaming a Lambda: use boto3 to describe the previous Lambda's configuration and download its function code, then create a new Lambda from them. The triggers won't be copied, so you need to add them back manually.
from boto3.session import Session
import pprint
import urllib3

pp = pprint.PrettyPrinter(indent=4)

session = Session(aws_access_key_id={YOUR_ACCESS_KEY},
                  aws_secret_access_key={YOUR_SECRET_KEY},
                  region_name='your_region')

PREV_FUNC_NAME = 'your_prev_function_name'
NEW_FUNC_NAME = 'your_new_function_name'

def prev_lambda_code(code_temp_path):
    '''
    Download the previous function's code from the pre-signed URL
    returned by get_function.
    '''
    http = urllib3.PoolManager()
    response = http.request("GET", code_temp_path)
    if not 200 <= response.status < 300:
        raise Exception(f'Failed to download function code: {response}')
    return response.data

def rename_lambda_function(prev_func_name, new_func_name):
    '''
    Copy the previous Lambda function under a new name.
    '''
    lambda_client = session.client('lambda')
    prev_func_info = lambda_client.get_function(FunctionName=prev_func_name)

    if 'VpcConfig' in prev_func_info['Configuration']:
        VpcConfig = {
            'SubnetIds': prev_func_info['Configuration']['VpcConfig']['SubnetIds'],
            'SecurityGroupIds': prev_func_info['Configuration']['VpcConfig']['SecurityGroupIds']
        }
    else:
        VpcConfig = {}

    if 'Environment' in prev_func_info['Configuration']:
        Environment = prev_func_info['Configuration']['Environment']
    else:
        Environment = {}

    response = lambda_client.create_function(
        FunctionName=new_func_name,
        Runtime=prev_func_info['Configuration']['Runtime'],
        Role=prev_func_info['Configuration']['Role'],
        Handler=prev_func_info['Configuration']['Handler'],
        Code={
            'ZipFile': prev_lambda_code(prev_func_info['Code']['Location'])
        },
        Description=prev_func_info['Configuration']['Description'],
        Timeout=prev_func_info['Configuration']['Timeout'],
        MemorySize=prev_func_info['Configuration']['MemorySize'],
        VpcConfig=VpcConfig,
        Environment=Environment,
        PackageType=prev_func_info['Configuration']['PackageType'],
        TracingConfig=prev_func_info['Configuration']['TracingConfig'],
        Layers=[layer['Arn'] for layer in prev_func_info['Configuration'].get('Layers', [])],
    )
    pp.pprint(response)

rename_lambda_function(PREV_FUNC_NAME, NEW_FUNC_NAME)