Lambda function receiving null S3 event object via SNS

I'm trying to invoke a Lambda function from an SNS event that carries an S3 event payload (S3 Put -> event published to an SNS topic -> delivered to a subscribed Lambda function). So far, the only way I have been able to get at the actual S3 event information is to walk it as a JsonNode, and I know there has to be a better way (i.e. proper deserialization).
I really thought I could have my Lambda function accept an S3EventNotification, due to the comments I found here:
https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/event/S3EventNotification.java
A helper class that represents a strongly typed S3 EventNotification item sent
to SQS, SNS, or Lambda.
So, how do I receive the S3EventNotification as a POJO?
Below are the various ways I have tried:
public class LambdaFunction implements RequestHandler<S3EventNotification, Object> {
    @Override
    public Object handleRequest(S3EventNotification input, Context context) {
        try {
            System.out.println(JsonUtil.MAPPER.writeValueAsString(input));
        } catch (JsonProcessingException e) {
            // writeValueAsString throws a checked exception; RequestHandler's method cannot.
            throw new RuntimeException(e);
        }
        return null;
    }
}
Resulting in:
{
    "Records": [
        {
            "awsRegion": null,
            "eventName": null,
            "eventSource": null,
            "eventTime": null,
            "eventVersion": null,
            "requestParameters": null,
            "responseElements": null,
            "s3": null,
            "userIdentity": null
        }
    ]
}
I have also tried the following (note: JsonUtil.MAPPER is just a Jackson ObjectMapper):
public class LambdaFunction {
    public Object handleRequest(S3EventNotification records, Context context) throws IOException {
        System.out.println(JsonUtil.MAPPER.writeValueAsString(records));
        return null;
    }
}
This returns the same as before:
{
    "Records": [
        {
            "awsRegion": null,
            "eventName": null,
            "eventSource": null,
            "eventTime": null,
            "eventVersion": null,
            "requestParameters": null,
            "responseElements": null,
            "s3": null,
            "userIdentity": null
        }
    ]
}
I can access the S3 event payload by simply receiving the SNSEvent; however, when I try to deserialize the message payload into an S3EventRecord or S3EventNotification, the fields don't line up. I really don't want to have to walk down the JsonNode manually...
public class LambdaFunction {
    public Object handleRequest(SNSEvent input, Context context) throws IOException {
        System.out.println("Records: " + JsonUtil.MAPPER.writeValueAsString(input));
        for (SNSEvent.SNSRecord record : input.getRecords()) {
            System.out.println("Record Direct: " + record.getSNS().getMessage());
            JsonNode node = JsonUtil.MAPPER.readTree(record.getSNS().getMessage());
            JsonNode recordNode = ((ArrayNode) node.get("Records")).get(0);
            System.out.println(recordNode.toString());
            S3EventNotification s3events = JsonUtil.MAPPER.readValue(record.getSNS().getMessage(), new TypeReference<S3EventNotification>() {});
            System.out.println(s3events == null);
        }
        return null;
    }
}
This returns the following:
{
    "eventVersion": "2.0",
    "eventSource": "aws:s3",
    "awsRegion": "us-east-1",
    "eventTime": "2017-03-04T05:34:25.149Z",
    "eventName": "ObjectCreated:Put",
    "userIdentity": {
        "principalId": "AWS:XXXXXXXXXXXXX"
    },
    "requestParameters": {
        "sourceIPAddress": "<<IP ADDRESS>>"
    },
    "responseElements": {
        "x-amz-request-id": "XXXXXXXX",
        "x-amz-id-2": "XXXXXXXXXXXXX="
    },
    "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "NotifyNewRawArticle",
        "bucket": {
            "name": "MYBUCKET",
            "ownerIdentity": {
                "principalId": "XXXXXXXXXXXXXXX"
            },
            "arn": "arn:aws:s3:::MYBUCKET"
        },
        "object": {
            "key": "news\/test",
            "size": 0,
            "eTag": "d41d8cd98f00b204e9800998ecf8427e",
            "sequencer": "0058BA51E113A948C3"
        }
    }
}
Unrecognized field "sequencer" (class com.amazonaws.services.s3.event.S3EventNotification$S3ObjectEntity), not marked as ignorable (4 known properties: "size", "versionId", "eTag", "key"])
I am depending on aws-java-sdk-s3-1.11.77 and aws-java-sdk-sns-1.11.77.
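One way to get past the "Unrecognized field" failure shown above is to make the mapper lenient. A minimal sketch, assuming you control the JsonUtil.MAPPER instance (JsonUtil is the asker's own helper, not an AWS class):

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public final class JsonUtil {
    // Lenient mapper: unknown fields in the event JSON (such as "sequencer",
    // added to the S3 event schema after this SDK version) are skipped
    // instead of throwing UnrecognizedPropertyException.
    public static final ObjectMapper MAPPER = new ObjectMapper()
            .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
}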

You should handle the SNSEvent instead of the S3Event, since the Lambda consumes your SNS events. The code below works for me.
public Object handleRequest(SNSEvent request, Context context) {
    request.getRecords().forEach(snsRecord -> {
        System.out.println("Record Direct: " + snsRecord.getSNS().getMessage());
        S3EventNotification s3EventNotification = S3Event.parseJson(snsRecord.getSNS().getMessage());
        System.out.println(s3EventNotification.toJson());
    });
    return null;
}
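Putting it together, here is a minimal sketch of a complete handler (the class name is illustrative; it assumes aws-lambda-java-events and aws-java-sdk-s3 on the classpath):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SNSEvent;
import com.amazonaws.services.s3.event.S3EventNotification;

public class SnsS3EventHandler implements RequestHandler<SNSEvent, Void> {
    @Override
    public Void handleRequest(SNSEvent input, Context context) {
        for (SNSEvent.SNSRecord record : input.getRecords()) {
            // The SNS message body is the raw S3 event notification JSON.
            S3EventNotification notification =
                    S3EventNotification.parseJson(record.getSNS().getMessage());
            for (S3EventNotification.S3EventNotificationRecord s3Record : notification.getRecords()) {
                String bucket = s3Record.getS3().getBucket().getName();
                String key = s3Record.getS3().getObject().getKey();
                context.getLogger().log("Received s3://" + bucket + "/" + key);
            }
        }
        return null;
    }
}

parseJson uses a mapper that ignores unknown properties, which is why it tolerates newer fields such as sequencer that trip up a strictly configured ObjectMapper.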

Related

AWS Lambda Test S3 Event Notification is Null

I have the following Kotlin code:
override fun execute(input: APIGatewayProxyRequestEvent): APIGatewayProxyResponseEvent {
val response = APIGatewayProxyResponseEvent()
val body = input.body
if(body != null) {
val json = JSONObject(body)
val s3 = json.optJSONArray("Records").getJSONObject(0).getJSONObject("s3")
val bucketName = s3.getJSONObject("bucket").getString("name")
try {
val jsonResponse = objectMapper.writeValueAsString(mapOf("message" to bucketName))
response.statusCode = 200
response.body = jsonResponse
} catch (e: JsonProcessingException) {
response.statusCode = 500
}
}
return response
}
I basically want the function to be triggered on a new S3 put and to return just the bucket name. When I test locally and pass an APIGatewayProxyRequestEvent with the following body:
{
    "Records": [
        {
            "eventVersion": "2.0",
            "eventSource": "aws:s3",
            "awsRegion": "us-east-1",
            "eventTime": "1970-01-01T00:00:00.000Z",
            "eventName": "ObjectCreated:Put",
            "userIdentity": {
                "principalId": "EXAMPLE"
            },
            "requestParameters": {
                "sourceIPAddress": "127.0.0.1"
            },
            "responseElements": {
                "x-amz-request-id": "EXAMPLE123456789",
                "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
            },
            "s3": {
                "s3SchemaVersion": "1.0",
                "configurationId": "testConfigRule",
                "bucket": {
                    "name": "example-bucket",
                    "ownerIdentity": {
                        "principalId": "EXAMPLE"
                    },
                    "arn": "arn:aws:s3:::example-bucket"
                },
                "object": {
                    "key": "test%2Fkey",
                    "size": 1024,
                    "eTag": "0123456789abcdef0123456789abcdef",
                    "sequencer": "0A1B2C3D4E5F678901"
                }
            }
        }
    ]
}
The Kotlin code works as expected. When I deploy this on AWS Lambda and either provide the exact same body in a test event or actually upload an object to S3 to trigger the function, input.body is null. I don't understand why.
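A likely cause, for what it's worth: an S3 trigger invokes the function with the S3 notification JSON itself as the payload. There is no HTTP layer involved, so nothing populates APIGatewayProxyRequestEvent.body; the local test only works because the whole notification is handed in as the body by hand. A minimal sketch of accepting the notification directly (in Java, using the aws-lambda-java-events S3Event type; the class name is illustrative):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;

public class BucketNameHandler implements RequestHandler<S3Event, String> {
    @Override
    public String handleRequest(S3Event event, Context context) {
        // Records[0].s3.bucket.name from the notification payload.
        return event.getRecords().get(0).getS3().getBucket().getName();
    }
}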

Micronaut + GraalVM + Kotlin for Lambda Function with S3 Notification fails to run with error "Unconvertible input: null"

I am using Micronaut to create a Kotlin project that builds to a native GraalVM image I can upload as a Lambda function on AWS. I want this Lambda function to be triggered by an S3 notification event.
I went to micronaut.io, clicked Launch, and chose:
Function Application for Serverless
Features: aws-lambda, aws-lambda-s3-event-notification
Kotlin and Gradle Kotlin.
It generated some files that I have not changed, except for adding a simple hello-world print.
The files are like so:
FunctionLambdaRuntime.kt

package com.example

import com.amazonaws.services.lambda.runtime.RequestHandler
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent
import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification
import io.micronaut.function.aws.runtime.AbstractMicronautLambdaRuntime
import java.net.MalformedURLException

class FunctionLambdaRuntime : AbstractMicronautLambdaRuntime<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent, S3EventNotification, Void?>() {

    override fun createRequestHandler(vararg args: String?): RequestHandler<S3EventNotification, Void?> {
        return FunctionRequestHandler()
    }

    companion object {
        @JvmStatic
        fun main(vararg args: String) {
            try {
                FunctionLambdaRuntime().run(*args)
            } catch (e: MalformedURLException) {
                e.printStackTrace()
            }
        }
    }
}
FunctionRequestHandler.kt

package com.example

import io.micronaut.function.aws.MicronautRequestHandler
import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification

class FunctionRequestHandler : MicronautRequestHandler<S3EventNotification, Void?>() {
    override fun execute(input: S3EventNotification): Void? {
        println("Hello, world!")
        return null
    }
}
Then I use ./gradlew buildNativeLambda to build the image into a zip file. I upload this file to the AWS Lambda function, which uses a custom runtime on Amazon Linux 2 with the handler set to com.example.FunctionRequestHandler. I then try to run an S3 notification test from the Lambda console like this one:
{
    "Records": [
        {
            "eventVersion": "2.0",
            "eventSource": "aws:s3",
            "awsRegion": "us-east-1",
            "eventTime": "1970-01-01T00:00:00.000Z",
            "eventName": "ObjectCreated:Put",
            "userIdentity": {
                "principalId": "EXAMPLE"
            },
            "requestParameters": {
                "sourceIPAddress": "127.0.0.1"
            },
            "responseElements": {
                "x-amz-request-id": "EXAMPLE123456789",
                "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
            },
            "s3": {
                "s3SchemaVersion": "1.0",
                "configurationId": "testConfigRule",
                "bucket": {
                    "name": "example-bucket",
                    "ownerIdentity": {
                        "principalId": "EXAMPLE"
                    },
                    "arn": "arn:aws:s3:::example-bucket"
                },
                "object": {
                    "key": "test%2Fkey",
                    "size": 1024,
                    "eTag": "0123456789abcdef0123456789abcdef",
                    "sequencer": "0A1B2C3D4E5F678901"
                }
            }
        }
    ]
}
But when I run it, I get the following error:
{
    "errorMessage": "Unconvertible input: null"
}
I have also tried creating a real S3 notification event and then watching the CloudWatch logs for the "Hello, world!" message, but I still see "Unconvertible input: null" in the logs. I am not sure what's wrong; I have not changed anything except adding that print statement.
Any help will be appreciated!
The generated runtime class declares API Gateway request/response types for the runtime itself while the handler expects an S3EventNotification, so the runtime cannot convert the incoming S3 event (hence "Unconvertible input: null"). Align the runtime's type parameters with the handler's:

class FunctionLambdaRuntime : AbstractMicronautLambdaRuntime<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent, S3EventNotification, Void?>()
↓
class FunctionLambdaRuntime : AbstractMicronautLambdaRuntime<S3EventNotification, Void?, S3EventNotification, Void?>()

How to format Filter Criteria on Lambda Functions that are triggered via Kinesis Streams for DynamoDB?

I have a dynamodb table that has a Kinesis stream attached to it. See the relevant cloudformation configuration here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-kinesisstreamspecification
Recently, AWS announced Filtering Event Sources for AWS Lambda
My goal is to filter for only the events whose sort key (sk) begins with a specific string.
For example, say the original table has a document like:
"dynamodb": {
"ApproximateCreationDateTime": 1640276115300,
"Keys": {
"pk": "foo:random",
"sk": "bar:something"
},
.....
I want to filter for the events whose sk starts with bar:. The data arrives in this format in the Lambda function logs:
{
    "Records": [
        {
            "kinesis": {
                "kinesisSchemaVersion": "1.0",
                "partitionKey": "E7DF48140C98F2557BDAF0126B8443AC",
                "sequenceNumber": "49624912313474127477164618281231039365742153203189809218",
                "data": "eyJhd3NSZWdpb24iOiJ1cy1lYXN0LTEiLCJldmVudElEIjoiYTJlYTlmNGEtNWU5Zi00MzAwLWE0ZjItOWFlY2Y3ZTM2ZTA0IiwiZXZlbnROYW1lIjoiTU9ESUZZIiwidXNlcklkZW50aXR5IjpudWxsLCJyZWNvcmRGb3JtYXQiOiJhcHBsaWNhdGlvbi9qc29uIiwidGFibGVOYW1lIjoicnBwLXJlY29uLXdvcmstb3JkZXIiLCJkeW5hbW9kYiI6eyJBcHByb3hpbWF0ZUNyZWF0aW9uRGF0ZVRpbWUiOjE2NDAyNzYxMTUzMDAsIktleXMiOnsicGsiOnsiUyI6IndvcmtvcmRlcjo0MzA4NDY5I1FJTTEifSwic2siOnsiUyI6Im9mZmVyaW5nIn19LCJOZXdJbWFnZSI6eyJlbnRpdHlfdHlwZSI6eyJTIjoib2ZmZXJpbmcifSwid29ya19vcmRlcl9rZXkiOnsiUyI6IjQzMDg0NjkjUUlNMSJ9LCJidXllcl9yZXBfaWQiOnsiUyI6IjAifSwic2VsbGVyX2dyb3VwX2NvZGUiOnsiUyI6IkRMUiJ9LCJzZWxsZXJfZGVhbGVyaWQiOnsiUyI6IjU0Mzc3NzcifSwidXBkYXRlZCI6eyJOIjoiMTYzNzY1OTE5NS4zMzEwMjM2OTMwODQ3MTY3OTY4NzYifSwiYnV5ZXJfbmV0Ijp7IlMiOiIwLjAwIn0sInZpbiI6eyJTIjoiMkMzQ0NBQ0cwQ0gzNDE0ODUifSwiYnV5ZXJfbnVtYmVyIjp7IlMiOiIwIn0sInNrIjp7IlMiOiJvZmZlcmluZyJ9LCJzYmx1Ijp7IlMiOiI0MzA4NDY5In0sInNlbGxlcl9uYW1lIjp7IlMiOiJFUEVBTCBBVVRPIFNBTEVTIERCQTIifSwicGsiOnsiUyI6IndvcmtvcmRlcjo0MzA4NDY5I1FJTTEifSwiYnV5ZXJfZmVlIjp7IlMiOiIwLjAwIn0sImJ1eWVyX2FkaiI6eyJTIjoiMC4wMCJ9LCJzaXRlX2lkIjp7IlMiOiJRSU0xIn0sImJ1eWVyX3VuaXZlcnNhbCI6eyJTIjoiMCJ9fSwiT2xkSW1hZ2UiOnsiZW50aXR5X3R5cGUiOnsiUyI6Im9mZmVyaW5nIn0sIndvcmtfb3JkZXJfa2V5Ijp7IlMiOiI0MzA4NDY5I1FJTTEifSwiYnV5ZXJfcmVwX2lkIjp7IlMiOiIwIn0sInNlbGxlcl9ncm91cF9jb2RlIjp7IlMiOiJETFIifSwic2VsbGVyX2RlYWxlcmlkIjp7IlMiOiI1NDM3Nzc3In0sInVwZGF0ZWQiOnsiTiI6IjE2Mzc2NTkxOTUuMzMxMDIzNjkzMDg0NzE2Nzk2ODc1In0sImJ1eWVyX25ldCI6eyJTIjoiMC4wMCJ9LCJ2aW4iOnsiUyI6IjJDM0NDQUNHMENIMzQxNDg1In0sImJ1eWVyX251bWJlciI6eyJTIjoiMCJ9LCJzayI6eyJTIjoib2ZmZXJpbmcifSwic2JsdSI6eyJTIjoiNDMwODQ2OSJ9LCJzZWxsZXJfbmFtZSI6eyJTIjoiRVBFQUwgQVVUTyBTQUxFUyBEQkEyIn0sInBrIjp7IlMiOiJ3b3Jrb3JkZXI6NDMwODQ2OSNRSU0xIn0sImJ1eWVyX2ZlZSI6eyJTIjoiMC4wMCJ9LCJidXllcl9hZGoiOnsiUyI6IjAuMDAifSwic2l0ZV9pZCI6eyJTIjoiUUlNMSJ9LCJidXllcl91bml2ZXJzYWwiOnsiUyI6IjAifX0sIlNpemVCeXRlcyI6NjM0fSwiZXZlbnRTb3VyY2UiOiJhd3M6ZHluYW1vZGIifQ==",
                "approximateArrivalTimestamp": 1640276115.796
            },
            "eventSource": "aws:kinesis",
            "eventVersion": "1.0",
            "eventID": "shardId-000000000004:49624912313474127477164618281231039365742153203189809218",
            "eventName": "aws:kinesis:record",
            "invokeIdentityArn": "arn:aws:iam::111111111111:role/acct-managed/foo-bar-role",
            "awsRegion": "us-east-1",
            "eventSourceARN": "arn:aws:kinesis:us-east-1:111111111111:stream/foo-bar-stream-role/consumer/foo-bar-consumer:1638560626"
        }
    ]
}
Once the data field is base64-decoded, it looks like:
{
    "awsRegion": "us-east-1",
    "eventID": "a2ea9f4a-5e9f-4300-a4f2-9aecf7e36e04",
    "eventName": "MODIFY",
    "userIdentity": null,
    "recordFormat": "application/json",
    "tableName": "foo-bar",
    "dynamodb": {
        "ApproximateCreationDateTime": 1640276115300,
        "Keys": {
            "pk": "foo:random",
            "sk": "bar:something"
        },
        "NewImage": {...},
        "OldImage": {...},
        "SizeBytes": 634
    },
    "eventSource": "aws:dynamodb"
}
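For reference, a sketch of how that decoding can be done in a consumer, assuming Java with Jackson (the class name is illustrative): the data field is standard Base64 wrapping a UTF-8 JSON document.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public final class KinesisDataDecoder {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Decode the base64 "data" field of a Kinesis record into the
    // DynamoDB change document shown above.
    public static JsonNode decode(String base64Data) throws IOException {
        byte[] bytes = Base64.getDecoder().decode(base64Data);
        return MAPPER.readTree(new String(bytes, StandardCharsets.UTF_8));
    }
}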
What I have tried so far:
FilterCriteria:
  Filters:
    - Pattern: "{\"data\": { \"sk\": [ { \"prefix\": \"bar:\"} ] }}"

FilterCriteria:
  Filters:
    - Pattern: "{\"data\": { \"dynamodb\": { \"sk\": [ { \"prefix\": \"bar:\"} ] }} }"
I was able to get it working using the following. The pattern has to mirror the decoded data document exactly, including the DynamoDB attribute-type descriptors, so the key is addressed through NewImage and its "S" member:

FilterCriteria:
  Filters:
    - Pattern: "{\"data\": { \"dynamodb\": { \"NewImage\": { \"sk\": { \"S\": [{ \"prefix\": \"rims:\" }] }}}}}"

S3 - Getting metadata from Post S3 Upload lambda function

I'm generating a presigned URL using s3.getSignedUrl('putObject', params) with the following params:
var params = {
    Bucket: bucketName,
    Key: photoId + "-" + photoNumber + "-of-" + numberImages + ".jpeg",
    Expires: signedUrlExpireSeconds,
    ContentType: contentType,
    Metadata: { testkey1: "hello" }
};
I'm trying to read that metadata in the Lambda function that runs after a successful S3 upload, but it isn't appearing. Does anyone know why? The upload succeeds, and the printed event contains everything except a metadata tag:
console.log(event);
"Records": [
{
"eventVersion": "2.1",
"eventSource": "aws:s3",
"awsRegion": "us-east-1",
"eventTime": "2020-01-15T06:51:57.171Z",
"eventName": "ObjectCreated:Put",
"userIdentity": {
"principalId":
},
"requestParameters": {
"sourceIPAddress":
},
"responseElements": {
"x-amz-request-id": "4C32689CE5B70A82",
"x-amz-id-2": "AS0f97RHlLW2DF6tVfRwbTeoEpk2bEne/0LrWqHpLJRHY5GMBjy/NQOHqYAMhd2JjiiUcuw0nUTMJS8pDAch1Abc5xzzWVMv"
},
"s3": {
"s3SchemaVersion": "1.0",
"configurationId": "9a9a755e-e809-4dbf-abf8-3450aaa208ed",
"bucket": {
"name": ,
"ownerIdentity": {
"principalId": "A3SZPXLS03IWBG"
},
"arn":
},
"object": {
"key": "BcGMYe-1-of-1.jpeg",
"size": 19371,
"eTag": "45c719f2f6b5349cc360db9a13d0cee4",
"sequencer": "005E1EB6921E08F7E4"
}
}
}
]
This is the S3 event message structure; it does not contain object metadata.
You need to fetch the metadata yourself in the Lambda function: call s3 head-object with the bucket name and object key taken from the received event.
{
    "Records": [
        {
            "eventVersion": "2.2",
            "eventSource": "aws:s3",
            "awsRegion": "us-west-2",
            "eventTime": The time, in ISO-8601 format, for example, 1970-01-01T00:00:00.000Z, when Amazon S3 finished processing the request,
            "eventName": "event-type",
            "userIdentity": {
                "principalId": "Amazon-customer-ID-of-the-user-who-caused-the-event"
            },
            "requestParameters": {
                "sourceIPAddress": "ip-address-where-request-came-from"
            },
            "responseElements": {
                "x-amz-request-id": "Amazon S3 generated request ID",
                "x-amz-id-2": "Amazon S3 host that processed the request"
            },
            "s3": {
                "s3SchemaVersion": "1.0",
                "configurationId": "ID found in the bucket notification configuration",
                "bucket": {
                    "name": "bucket-name",
                    "ownerIdentity": {
                        "principalId": "Amazon-customer-ID-of-the-bucket-owner"
                    },
                    "arn": "bucket-ARN"
                },
                "object": {
                    "key": "object-key",
                    "size": object-size,
                    "eTag": "object eTag",
                    "versionId": "object version if bucket is versioning-enabled, otherwise null",
                    "sequencer": "a string representation of a hexadecimal value used to determine event sequence, only used with PUTs and DELETEs"
                }
            },
            "glacierEventData": {
                "restoreEventData": {
                    "lifecycleRestorationExpiryTime": "The time, in ISO-8601 format, for example, 1970-01-01T00:00:00.000Z, of Restore Expiry",
                    "lifecycleRestoreStorageClass": "Source storage class for restore"
                }
            }
        }
    ]
}
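As a sketch of that head-object lookup with the AWS SDK for Java v1 (the question uses the Node SDK, where the equivalent call is s3.headObject; the class name here is illustrative):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import java.util.Map;

public class UploadMetadataHandler implements RequestHandler<S3Event, Void> {
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    @Override
    public Void handleRequest(S3Event event, Context context) {
        String bucket = event.getRecords().get(0).getS3().getBucket().getName();
        // Note: keys in the event are URL-encoded and may need decoding first.
        String key = event.getRecords().get(0).getS3().getObject().getKey();
        // HEAD the object; user metadata ("x-amz-meta-*") comes back with the
        // prefix stripped, e.g. {"testkey1": "hello"}.
        ObjectMetadata meta = s3.getObjectMetadata(bucket, key);
        Map<String, String> userMetadata = meta.getUserMetadata();
        context.getLogger().log("User metadata: " + userMetadata);
        return null;
    }
}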

AWS Lambda: can one Lambda function have Event Sources of both Kinesis and DynamoDB Streams?

Can one AWS Lambda function have two Event Sources both of one Kinesis stream and one DynamoDB Stream?
I've looked but I haven't found any documentation which says that I can or cannot have different types of event sources for the same AWS Lambda function.
Yes, a Lambda function can have event sources of different types, but you must use com.amazonaws.services.lambda.runtime.RequestStreamHandler to deserialize the input data correctly yourself. The internal Lambda code that invokes com.amazonaws.services.lambda.runtime.RequestHandler does not inspect the incoming data and dispatch to the overloaded method with the matching type; it appears to reflectively pick one method and always invoke it.
Sample inputs:
Kinesis Event input:
{
    "Records": [
        {
            "kinesis": {
                "kinesisSchemaVersion": "1.0",
                "partitionKey": "1",
                "sequenceNumber": "11111111111111111111111111111111111111111111111111111111",
                "data": "e30=",
                "approximateArrivalTimestamp": 1518397399.55
            },
            "eventSource": "aws:kinesis",
            "eventVersion": "1.0",
            "eventID": "shardId-000000000000:11111111111111111111111111111111111111111111111111111111",
            "eventName": "aws:kinesis:record",
            "invokeIdentityArn": "arn:aws:iam::111111111111:role/lambda_test-lambda-multipe-sources",
            "awsRegion": "us-east-1",
            "eventSourceARN": "arn:aws:kinesis:us-east-1:111111111111:stream/test-lambda-multipe-sources"
        }
    ]
}
DynamoDb Stream Record:
{
    "Records": [
        {
            "eventID": "11111111111111111111111111111111",
            "eventName": "INSERT",
            "eventVersion": "1.1",
            "eventSource": "aws:dynamodb",
            "awsRegion": "us-east-1",
            "dynamodb": {
                "ApproximateCreationDateTime": 1518397440,
                "Keys": {
                    "key": {
                        "S": "asdf"
                    }
                },
                "NewImage": {
                    "key": {
                        "S": "asdf"
                    }
                },
                "SequenceNumber": "111111111111111111111111",
                "SizeBytes": 14,
                "StreamViewType": "NEW_AND_OLD_IMAGES"
            },
            "eventSourceARN": "arn:aws:dynamodb:us-east-1:111111111111:table/test-lambda-multipe-sources/stream/2018-02-11T18:57:44.017"
        }
    ]
}
Sample code:
public final class MultipleEventSourcesRequestHandler
        implements RequestHandler<KinesisEvent, Void>
        // implements RequestStreamHandler
{
    private static final Logger LOG = LoggerFactory.getLogger(MultipleEventSourcesRequestHandler.class);

    public Void handleRequest(final DynamodbEvent input, final Context context)
    {
        LOG.info("In DynamodbEvent handler with event of source: " + input.getRecords().get(0).getEventSource());
        return null;
    }

    // public final void handleRequest(final InputStream input, final OutputStream output, final Context context)
    //     throws IOException
    // {
    //     final byte[] serializedSpeechletRequest = IOUtils.toByteArray(input);
    //     LOG.info("In Stream handler. Request bytes: " + new String(serializedSpeechletRequest, StandardCharsets.UTF_8));
    //     output.close();
    // }

    public Void handleRequest(final KinesisEvent input, final Context context)
    {
        LOG.info("In KinesisEvent handler with event of source: " + input.getRecords().get(0).getEventSource());
        return null;
    }
}
Sample logs:
2018-02-12 01:32:57 INFO (main) com.example.lambda.eventsourcetest.MultipleEventSourcesRequestHandler - In KinesisEvent handler with event of source: aws:dynamodb
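For completeness, a minimal sketch of the stream-handler approach described above: read the raw JSON once, inspect Records[0].eventSource, and branch (the class name and logging are illustrative):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class MultiSourceStreamHandler implements RequestStreamHandler {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public void handleRequest(InputStream input, OutputStream output, Context context)
            throws IOException {
        JsonNode event = MAPPER.readTree(input);
        // Each record carries an eventSource of "aws:kinesis" or "aws:dynamodb".
        String source = event.path("Records").path(0).path("eventSource").asText();
        if ("aws:kinesis".equals(source)) {
            // Convert 'event' to a KinesisEvent here and handle it.
            context.getLogger().log("Kinesis batch received");
        } else if ("aws:dynamodb".equals(source)) {
            // Convert 'event' to a DynamodbEvent here and handle it.
            context.getLogger().log("DynamoDB Stream batch received");
        }
        output.close();
    }
}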