Dynamically scheduling events in AWS EventBridge from a Lambda

I have the following two Lambda functions:
setNotification(date, text):
// aws-sdk v2 client used below
const AWS = require("aws-sdk");
const eventbridge = new AWS.EventBridge();

exports.lambdaHandler = async (event, context) => {
  const params = {
    Entries: [
      {
        Source: "com.aws.message-event-lambda",
        EventBusName: "",
        DetailType: "message",
        Detail: JSON.stringify({
          title: event.detail.title,
          text: event.detail.text,
        }),
      },
    ],
  };
  await eventbridge.putEvents(params).promise();
};
sendNotification(text)
Currently I am using EventBridge to trigger the sendNotification function from the setNotification function, but it triggers the function immediately.
How can I trigger the sendNotification function at a specific date defined by the setNotification function?
Currently I see the following two options:
Create code inside the setNotification function that creates a scheduled rule on EventBridge.
Stop using EventBridge and use Step Functions.
I would like to know which of these two is the correct approach, or whether there is a better approach that I haven't found.

I figured it out: you need a different architecture, with a Lambda function invoked by a cron expression on EventBridge that checks a database for due entries and then sends the notifications.
More information on scheduling systems on AWS can be found at the following link:
https://aws.amazon.com/blogs/architecture/serverless-scheduling-with-amazon-eventbridge-aws-lambda-and-amazon-dynamodb/
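A minimal sketch of that polling Lambda, assuming a DynamoDB table named Notifications with a numeric dueAt timestamp attribute (the table name, attribute names, and event fields here are illustrative, not from the original question):

const AWS = require("aws-sdk");
const dynamodb = new AWS.DynamoDB.DocumentClient();
const eventbridge = new AWS.EventBridge();

// Invoked every minute by an EventBridge scheduled (cron) rule
exports.lambdaHandler = async () => {
  const now = Date.now();

  // Find notifications that are due; a production table would use a
  // query against an index on dueAt instead of a full scan
  const due = await dynamodb.scan({
    TableName: "Notifications", // assumed table name
    FilterExpression: "dueAt <= :now",
    ExpressionAttributeValues: { ":now": now },
  }).promise();

  if (due.Items.length === 0) return;

  // putEvents accepts at most 10 entries per call; batching is omitted here
  await eventbridge.putEvents({
    Entries: due.Items.map((item) => ({
      Source: "com.aws.message-event-lambda",
      DetailType: "message",
      Detail: JSON.stringify({ title: item.title, text: item.text }),
    })),
  }).promise();

  // Delete or mark the sent entries so they are not delivered twice (omitted)
};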

Related

Lambda event filtering for DynamoDB trigger

Here is a modified version of the Event I am receiving in my handler for a Lambda function with a DynamoDB someTableName table trigger, logged using cargo lambda.
Event {
    records: [
        EventRecord {
            change: StreamRecord {
                approximate_creation_date_time: ___,
                keys: {"id": String("___")},
                new_image: {
                    ...
                    "valid": Boolean(true),
                },
                ...
            },
            ...
            event_name: "INSERT",
            event_source: Some("aws:dynamodb"),
            table_name: None,
        }
    ]
}
Goal: Correctly filter with event_name=INSERT && valid=false
I have tried a number of options, for example:
{"eventName": ["INSERT"]}
While the filter is added correctly, it does not trigger the Lambda when an item is inserted.
Q1) What am I doing incorrectly here?
Q2) Why is table_name returning None? The Lambda function is created with a specific table name as the trigger. The returned fields are Options (Some(_)), so I'm assuming it returns None because the table name is already specified at Lambda creation, but that seems odd to me?
Q3) From AWS Management Console > Lambda > ... > Trigger Detail, I see the following (which is slightly different from my code mentioned above), where does "key" come from and what does it represent in the original Event?
Filters must follow the documented syntax for filtering in the Event Source Mapping between Lambda and DynamoDB Streams.
If you are entering the filter in the Lambda console:
{ "eventName": ["INSERT"], "dynamodb": { "NewImage": {"valid": { "BOOL" : [false]}} } }
The attribute name is actually eventName, so your filter should look like this:
{"eventName": ["INSERT"]}

How to trigger IoT Events with Lambda

I have an AWS IoT Events model. I have to trigger that model using a Lambda Python function, as in "if this happens, trigger the model".
So, how can I trigger an IoT Events model using Lambda?
Assuming your model is listening for data from an input, you can invoke the BatchPutMessage API from your Lambda code.
Refer to the BatchPutMessage documentation for details.
import boto3

# data-plane client for AWS IoT Events
client = boto3.client('iotevents-data')

response = client.batch_put_message(
    messages=[
        {
            'messageId': 'string',   # unique id for deduplication
            'inputName': 'string',   # the input your model listens on
            'payload': b'bytes',     # the message payload as bytes
            'timestamp': {
                'timeInMillis': 123
            }
        },
    ]
)

Handling nested JSON messages with AWS IoT Core rules and AWS Lambda

We are using an AWS IoT Rule to forward all messages from things to a Lambda function and appending some properties on the way, using this query:
SELECT *, topic(2) AS DeviceId, timestamp() AS RoutedAt FROM 'devices/+/message'
The message sent to the topic is a nested JSON:
{
  version: 1,
  type: "string",
  payload: {
    property1: "foo",
    nestedPayload: {
      nestedProperty: "bar"
    }
  }
}
When we use the same query for another rule and route the messages into an S3 bucket instead of a Lambda, the resulting JSON files in the bucket are as expected:
{
  DeviceId: "test",
  RoutedAt: 1618311374770,
  version: 1,
  type: "string",
  payload: {
    property1: "foo",
    nestedPayload: {
      nestedProperty: "bar"
    }
  }
}
But when routing into a lambda function, the properties of the "nestedPayload" are pulled up one level:
{
  DeviceId: "test",
  RoutedAt: 1618311374770,
  version: 1,
  type: "string",
  payload: {
    property1: "foo",
    nestedProperty: "bar"
  }
}
However, when debugging the Lambda locally using VS Code and providing the event as a JSON file (in other words, not connecting to AWS IoT Core), the JSON structure is as expected. That is why I assume the problem is not with the JSON serializer/deserializer, but with the rule.
Did anyone experience the same issue?
It turns out the issue was with the SQL version of the rule.
We created the rule routing to the Lambda using CDK, which by default set the version to "2015-10-08". The rule routing to S3, which didn't show the error, was created manually and used version "2016-03-23". Updating the rule routing to the Lambda to "2016-03-23" as well fixed the issue.
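For reference, a sketch of pinning the SQL version when creating the rule with CDK (JavaScript, aws-cdk-lib v2; the construct id and the handlerFn Lambda reference are assumptions):

const iot = require("aws-cdk-lib/aws-iot");

new iot.CfnTopicRule(this, "MessageToLambdaRule", {
  topicRulePayload: {
    // when omitted, the rules engine defaults to "2015-10-08"
    awsIotSqlVersion: "2016-03-23",
    sql: "SELECT *, topic(2) AS DeviceId, timestamp() AS RoutedAt FROM 'devices/+/message'",
    actions: [{ lambda: { functionArn: handlerFn.functionArn } }], // handlerFn is assumed
  },
});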

Which functions should I use to read AWS Lambda logs

Once my lambda run is finished, I am getting this payload as a result:
{
  "version": "1.0",
  "timestamp": "2020-09-30T19:20:03.360Z",
  "requestContext": {
    "requestId": "2de65baf-f630-48a7-881a-ce3145f1127d",
    "functionArn": "arn:aws:lambda:us-east-2:044739556748:function:puppeteer:$LATEST",
    "condition": "Success",
    "approximateInvokeCount": 1
  },
  "responseContext": {
    "statusCode": 200,
    "executedVersion": "$LATEST"
  }
}
I would like to read the logs of my run from CloudWatch, as well as the memory usage that I can see in the Lambda monitoring tab.
How can I do this via the SDK? Which functions should I use?
I am using Node.js.
You need to discover the log stream name that has been assigned to the Lambda function invocation. This is available inside the Lambda function's context.
exports.handler = async (event, context) => {
  console.log('context', context);
};
Results in the following log:
context { callbackWaitsForEmptyEventLoop: [Getter/Setter],
  succeed: [Function],
  fail: [Function],
  done: [Function],
  functionVersion: '$LATEST',
  functionName: 'test-log',
  memoryLimitInMB: '128',
  logGroupName: '/aws/lambda/test-log',
  logStreamName: '2020/10/03/[$LATEST]f123a3c1bca123df8c12e7c12c8fe13e',
  clientContext: undefined,
  identity: undefined,
  invokedFunctionArn: 'arn:aws:lambda:us-east-1:123456781234:function:test-log',
  awsRequestId: 'e1234567-6b7c-4477-ac3d-74bc62b97bb2',
  getRemainingTimeInMillis: [Function: getRemainingTimeInMillis] }
So, the CloudWatch Logs stream name is available in context.logStreamName. I'm not aware of an API to map a Lambda request ID to a log stream name after the fact, so you may need to return this or somehow persist the mapping.
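Once you have the stream name, reading it back is straightforward. A sketch with the Node.js SDK, reusing the log group and stream names from the output above:

const AWS = require("aws-sdk");
const cwlogs = new AWS.CloudWatchLogs();

(async () => {
  const res = await cwlogs.getLogEvents({
    logGroupName: "/aws/lambda/test-log",
    logStreamName: "2020/10/03/[$LATEST]f123a3c1bca123df8c12e7c12c8fe13e",
    startFromHead: true, // read from the beginning of the stream
  }).promise();

  res.events.forEach((e) => console.log(e.timestamp, e.message));
})();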
Finding the logs of a specific request id can be done via the AWS CloudWatch Logs API.
You can use the filterLogEvents API to extract the relevant START and REPORT log lines and gather the memory usage information (the response also includes the log stream name for future use).
If you want to gather all the logs of a specific invocation, you will need to pair up the START and REPORT lines and then query for all the logs in the timeframe between them on that specific log stream.
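A sketch of that lookup with filterLogEvents, using the request id from the result payload above (the log group name is an assumption based on the function name):

const AWS = require("aws-sdk");
const cwlogs = new AWS.CloudWatchLogs();

(async () => {
  const res = await cwlogs.filterLogEvents({
    logGroupName: "/aws/lambda/puppeteer", // assumed log group
    // REPORT lines carry duration, billed duration, and max memory used
    filterPattern: '"REPORT RequestId: 2de65baf-f630-48a7-881a-ce3145f1127d"',
  }).promise();

  for (const e of res.events) {
    console.log(e.logStreamName, e.message);
  }
})();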

Regex filtering of messages in SNS

Is there a way to filter messages based on Regex or substring in AWS SNS?
AWS Documentation for filtering messages mentions three types of filtering for strings:
Exact matching (whitelisting)
Anything-but matching (blacklisting)
Prefix matching
I want to filter messages based on substrings in the message body. For example:
I have an S3 event that sends a message to SNS when a new object is added to S3; the contents of the message are as below:
{
  "Records": [
    {
      "s3": {
        "bucket": {
          "name": "images-bucket"
        },
        "object": {
          "key": "some-key/more-key/filteringText/additionaldata.png"
        }
      }
    }
  ]
}
I want to keep a message only if filteringText is present in the key field.
Note: the entire message is sent as text by the S3 notification service, so Records is not a JSON object but a string.
From what I've seen in the documentation, you can't do regex matches or substrings, but you can match prefixes and create your own attributes in the MessageAttributes field.
To do this, I send the S3 event to a simple Lambda that adds MessageAttributes and then sends to SNS.
In effect, S3 -> Lambda -> SNS -> other consumers (with filtering).
The Lambda can do something like this (where you'll have to programmatically decide when to add the attribute):
const AWS = require("aws-sdk");
const sns = new AWS.SNS();
const SNS_ARN = process.env.SNS_ARN; // the topic ARN

exports.handler = async (event) => {
  const payload = event; // forward the original S3 event
  // add the attribute only when the object key contains "filteringText"
  const messageAttributes = {
    myfilterkey: { DataType: "String", StringValue: "filteringText" },
  };
  const params = {
    // MessageStructure: 'json' is omitted: it would require a {"default": ...} wrapper
    Message: JSON.stringify(payload),
    MessageAttributes: messageAttributes,
    TargetArn: SNS_ARN,
  };
  await sns.publish(params).promise();
};
Then in SNS you can filter:
{"myfilterkey": ["filtertext"]}
It seems a little convoluted to put the Lambda in there, but I like the idea of being able to plug and unplug consumers from SNS on the fly and use filtering to determine who gets what.
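To attach that filter policy to a subscription programmatically, a sketch where the subscription ARN is a placeholder:

const AWS = require("aws-sdk");
const sns = new AWS.SNS();

(async () => {
  await sns.setSubscriptionAttributes({
    SubscriptionArn: "arn:aws:sns:us-east-1:123456789012:my-topic:subscription-uuid", // placeholder
    AttributeName: "FilterPolicy",
    AttributeValue: JSON.stringify({ myfilterkey: ["filteringText"] }),
  }).promise();
})();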