I'm generating a presigned URL using s3.getSignedUrl('putObject', params) and for my params
var params = {
    Bucket: bucketName,
    Key: photoId + "-" + photoNumber + "-of-" + numberImages + ".jpeg",
    Expires: signedUrlExpireSeconds,
    ContentType: contentType,
    Metadata: { testkey1: "hello" }
};
I'm trying to read that Metadata in the Lambda function that runs on a successful S3 upload, but it isn't there. Does anyone know why? The upload succeeds, and my logs show everything except the metadata in the event:
console.log(event);
"Records": [
{
"eventVersion": "2.1",
"eventSource": "aws:s3",
"awsRegion": "us-east-1",
"eventTime": "2020-01-15T06:51:57.171Z",
"eventName": "ObjectCreated:Put",
"userIdentity": {
"principalId":
},
"requestParameters": {
"sourceIPAddress":
},
"responseElements": {
"x-amz-request-id": "4C32689CE5B70A82",
"x-amz-id-2": "AS0f97RHlLW2DF6tVfRwbTeoEpk2bEne/0LrWqHpLJRHY5GMBjy/NQOHqYAMhd2JjiiUcuw0nUTMJS8pDAch1Abc5xzzWVMv"
},
"s3": {
"s3SchemaVersion": "1.0",
"configurationId": "9a9a755e-e809-4dbf-abf8-3450aaa208ed",
"bucket": {
"name": ,
"ownerIdentity": {
"principalId": "A3SZPXLS03IWBG"
},
"arn":
},
"object": {
"key": "BcGMYe-1-of-1.jpeg",
"size": 19371,
"eTag": "45c719f2f6b5349cc360db9a13d0cee4",
"sequencer": "005E1EB6921E08F7E4"
}
}
}
]
This is the S3 event message structure, and it simply doesn't include object metadata.
You have to fetch the metadata yourself inside the Lambda function, by calling S3's HeadObject with the bucket name and object key taken from the event you received (see the sketch after the structure below).
{
    "Records": [
        {
            "eventVersion": "2.2",
            "eventSource": "aws:s3",
            "awsRegion": "us-west-2",
            "eventTime": "The time, in ISO-8601 format, for example, 1970-01-01T00:00:00.000Z, when Amazon S3 finished processing the request",
            "eventName": "event-type",
            "userIdentity": {
                "principalId": "Amazon-customer-ID-of-the-user-who-caused-the-event"
            },
            "requestParameters": {
                "sourceIPAddress": "ip-address-where-request-came-from"
            },
            "responseElements": {
                "x-amz-request-id": "Amazon S3 generated request ID",
                "x-amz-id-2": "Amazon S3 host that processed the request"
            },
            "s3": {
                "s3SchemaVersion": "1.0",
                "configurationId": "ID found in the bucket notification configuration",
                "bucket": {
                    "name": "bucket-name",
                    "ownerIdentity": {
                        "principalId": "Amazon-customer-ID-of-the-bucket-owner"
                    },
                    "arn": "bucket-ARN"
                },
                "object": {
                    "key": "object-key",
                    "size": "object-size",
                    "eTag": "object eTag",
                    "versionId": "object version if bucket is versioning-enabled, otherwise null",
                    "sequencer": "a string representation of a hexadecimal value used to determine event sequence, only used with PUTs and DELETEs"
                }
            },
            "glacierEventData": {
                "restoreEventData": {
                    "lifecycleRestorationExpiryTime": "The time, in ISO-8601 format, for example, 1970-01-01T00:00:00.000Z, of Restore Expiry",
                    "lifecycleRestoreStorageClass": "Source storage class for restore"
                }
            }
        }
    ]
}
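For example, here is a minimal sketch of that lookup in Python with boto3 (the equivalent of the Node SDK's s3.headObject; the handler name is illustrative):

import boto3
from urllib.parse import unquote_plus

s3 = boto3.client("s3")

def lambda_handler(event, context):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    # Object keys in S3 event notifications arrive URL-encoded
    key = unquote_plus(record["s3"]["object"]["key"])

    # head_object returns the user-defined metadata that was attached at
    # upload time (the x-amz-meta-* headers), e.g. {"testkey1": "hello"}
    response = s3.head_object(Bucket=bucket, Key=key)
    print(response["Metadata"])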
I have the following Kotlin code:
override fun execute(input: APIGatewayProxyRequestEvent): APIGatewayProxyResponseEvent {
    val response = APIGatewayProxyResponseEvent()
    val body = input.body
    if (body != null) {
        val json = JSONObject(body)
        val s3 = json.optJSONArray("Records").getJSONObject(0).getJSONObject("s3")
        val bucketName = s3.getJSONObject("bucket").getString("name")
        try {
            val jsonResponse = objectMapper.writeValueAsString(mapOf("message" to bucketName))
            response.statusCode = 200
            response.body = jsonResponse
        } catch (e: JsonProcessingException) {
            response.statusCode = 500
        }
    }
    return response
}
Basically, I want the function to be triggered on a new S3 put and just get the bucket name. When I test locally and pass an APIGatewayProxyRequestEvent with the following body:
{
    "Records": [
        {
            "eventVersion": "2.0",
            "eventSource": "aws:s3",
            "awsRegion": "us-east-1",
            "eventTime": "1970-01-01T00:00:00.000Z",
            "eventName": "ObjectCreated:Put",
            "userIdentity": {
                "principalId": "EXAMPLE"
            },
            "requestParameters": {
                "sourceIPAddress": "127.0.0.1"
            },
            "responseElements": {
                "x-amz-request-id": "EXAMPLE123456789",
                "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
            },
            "s3": {
                "s3SchemaVersion": "1.0",
                "configurationId": "testConfigRule",
                "bucket": {
                    "name": "example-bucket",
                    "ownerIdentity": {
                        "principalId": "EXAMPLE"
                    },
                    "arn": "arn:aws:s3:::example-bucket"
                },
                "object": {
                    "key": "test%2Fkey",
                    "size": 1024,
                    "eTag": "0123456789abcdef0123456789abcdef",
                    "sequencer": "0A1B2C3D4E5F678901"
                }
            }
        }
    ]
}
the Kotlin code works as expected. But when I deploy it on AWS Lambda, and I either provide the exact same body in a test event or actually upload an object to S3 to trigger the function, input.body is null. I don't understand why.
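A note on what an S3-triggered handler actually receives, as a minimal sketch (Python here for brevity; the payload shape is the same whatever the runtime): an S3 notification invokes the function with the S3 event document itself, not wrapped inside an API Gateway request, which is consistent with body coming back null.

def lambda_handler(event, context):
    # An S3 trigger delivers the notification directly:
    # event == {"Records": [{..., "s3": {"bucket": {"name": ...}, ...}}]}
    bucket_name = event["Records"][0]["s3"]["bucket"]["name"]
    return {"message": bucket_name}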
I have a DynamoDB table with a Kinesis stream attached to it. See the relevant CloudFormation configuration here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-kinesisstreamspecification
Recently, AWS announced event source filtering for AWS Lambda.
My goal is to keep only the events whose sort key begins with a specific string.
For example, say the original table produces a stream record like:

"dynamodb": {
    "ApproximateCreationDateTime": 1640276115300,
    "Keys": {
        "pk": "foo:random",
        "sk": "bar:something"
    },
    .....
I want to match all events whose sk starts with bar:. The records arrive in the Lambda function logs in this format:
{
    "Records": [
        {
            "kinesis": {
                "kinesisSchemaVersion": "1.0",
                "partitionKey": "E7DF48140C98F2557BDAF0126B8443AC",
                "sequenceNumber": "49624912313474127477164618281231039365742153203189809218",
                "data": "eyJhd3NSZWdpb24iOiJ1cy1lYXN0LTEiLCJldmVudElEIjoiYTJlYTlmNGEtNWU5Zi00MzAwLWE0ZjItOWFlY2Y3ZTM2ZTA0IiwiZXZlbnROYW1lIjoiTU9ESUZZIiwidXNlcklkZW50aXR5IjpudWxsLCJyZWNvcmRGb3JtYXQiOiJhcHBsaWNhdGlvbi9qc29uIiwidGFibGVOYW1lIjoicnBwLXJlY29uLXdvcmstb3JkZXIiLCJkeW5hbW9kYiI6eyJBcHByb3hpbWF0ZUNyZWF0aW9uRGF0ZVRpbWUiOjE2NDAyNzYxMTUzMDAsIktleXMiOnsicGsiOnsiUyI6IndvcmtvcmRlcjo0MzA4NDY5I1FJTTEifSwic2siOnsiUyI6Im9mZmVyaW5nIn19LCJOZXdJbWFnZSI6eyJlbnRpdHlfdHlwZSI6eyJTIjoib2ZmZXJpbmcifSwid29ya19vcmRlcl9rZXkiOnsiUyI6IjQzMDg0NjkjUUlNMSJ9LCJidXllcl9yZXBfaWQiOnsiUyI6IjAifSwic2VsbGVyX2dyb3VwX2NvZGUiOnsiUyI6IkRMUiJ9LCJzZWxsZXJfZGVhbGVyaWQiOnsiUyI6IjU0Mzc3NzcifSwidXBkYXRlZCI6eyJOIjoiMTYzNzY1OTE5NS4zMzEwMjM2OTMwODQ3MTY3OTY4NzYifSwiYnV5ZXJfbmV0Ijp7IlMiOiIwLjAwIn0sInZpbiI6eyJTIjoiMkMzQ0NBQ0cwQ0gzNDE0ODUifSwiYnV5ZXJfbnVtYmVyIjp7IlMiOiIwIn0sInNrIjp7IlMiOiJvZmZlcmluZyJ9LCJzYmx1Ijp7IlMiOiI0MzA4NDY5In0sInNlbGxlcl9uYW1lIjp7IlMiOiJFUEVBTCBBVVRPIFNBTEVTIERCQTIifSwicGsiOnsiUyI6IndvcmtvcmRlcjo0MzA4NDY5I1FJTTEifSwiYnV5ZXJfZmVlIjp7IlMiOiIwLjAwIn0sImJ1eWVyX2FkaiI6eyJTIjoiMC4wMCJ9LCJzaXRlX2lkIjp7IlMiOiJRSU0xIn0sImJ1eWVyX3VuaXZlcnNhbCI6eyJTIjoiMCJ9fSwiT2xkSW1hZ2UiOnsiZW50aXR5X3R5cGUiOnsiUyI6Im9mZmVyaW5nIn0sIndvcmtfb3JkZXJfa2V5Ijp7IlMiOiI0MzA4NDY5I1FJTTEifSwiYnV5ZXJfcmVwX2lkIjp7IlMiOiIwIn0sInNlbGxlcl9ncm91cF9jb2RlIjp7IlMiOiJETFIifSwic2VsbGVyX2RlYWxlcmlkIjp7IlMiOiI1NDM3Nzc3In0sInVwZGF0ZWQiOnsiTiI6IjE2Mzc2NTkxOTUuMzMxMDIzNjkzMDg0NzE2Nzk2ODc1In0sImJ1eWVyX25ldCI6eyJTIjoiMC4wMCJ9LCJ2aW4iOnsiUyI6IjJDM0NDQUNHMENIMzQxNDg1In0sImJ1eWVyX251bWJlciI6eyJTIjoiMCJ9LCJzayI6eyJTIjoib2ZmZXJpbmcifSwic2JsdSI6eyJTIjoiNDMwODQ2OSJ9LCJzZWxsZXJfbmFtZSI6eyJTIjoiRVBFQUwgQVVUTyBTQUxFUyBEQkEyIn0sInBrIjp7IlMiOiJ3b3Jrb3JkZXI6NDMwODQ2OSNRSU0xIn0sImJ1eWVyX2ZlZSI6eyJTIjoiMC4wMCJ9LCJidXllcl9hZGoiOnsiUyI6IjAuMDAifSwic2l0ZV9pZCI6eyJTIjoiUUlNMSJ9LCJidXllcl91bml2ZXJzYWwiOnsiUyI6IjAifX0sIlNpemVCeXRlcyI6NjM0fSwiZXZlbnRTb3VyY2UiOiJhd3M6ZHluYW1vZGIifQ==",
                "approximateArrivalTimestamp": 1640276115.796
            },
            "eventSource": "aws:kinesis",
            "eventVersion": "1.0",
            "eventID": "shardId-000000000004:49624912313474127477164618281231039365742153203189809218",
            "eventName": "aws:kinesis:record",
            "invokeIdentityArn": "arn:aws:iam::111111111111:role/acct-managed/foo-bar-role",
            "awsRegion": "us-east-1",
            "eventSourceARN": "arn:aws:kinesis:us-east-1:111111111111:stream/foo-bar-stream-role/consumer/foo-bar-consumer:1638560626"
        }
    ]
}
Once the data field is decoded, it looks like:
{
    "awsRegion": "us-east-1",
    "eventID": "a2ea9f4a-5e9f-4300-a4f2-9aecf7e36e04",
    "eventName": "MODIFY",
    "userIdentity": null,
    "recordFormat": "application/json",
    "tableName": "foo-bar",
    "dynamodb": {
        "ApproximateCreationDateTime": 1640276115300,
        "Keys": {
            "pk": "foo:random",
            "sk": "bar:something"
        },
        "NewImage": {...},
        "OldImage": {...},
        "SizeBytes": 634
    },
    "eventSource": "aws:dynamodb"
}
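As a side note, a minimal sketch of the decode step, assuming Python: in the raw stream record the key values are wrapped in DynamoDB attribute-value objects (e.g. {"sk": {"S": "bar:something"}}), which matters for the filter patterns below.

import base64
import json

def lambda_handler(event, context):
    for record in event["Records"]:
        # The Kinesis payload is base64-encoded JSON
        change = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Key values come wrapped in attribute-value objects
        sk = change["dynamodb"]["Keys"]["sk"]["S"]
        print(sk)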
What I have tried so far:

FilterCriteria:
  Filters:
    - Pattern: "{\"data\": { \"sk\": [ { \"prefix\": \"bar:\"} ] }}"

and

FilterCriteria:
  Filters:
    - Pattern: "{\"data\": { \"dynamodb\": { \"sk\": [ { \"prefix\": \"bar:\"} ] }} }"
I was able to get it working with the following; the pattern has to target the full path inside the decoded record (dynamodb.NewImage) and match the DynamoDB attribute-value shape, i.e. the "S" wrapper:

FilterCriteria:
  Filters:
    - Pattern: "{\"data\": { \"dynamodb\": { \"NewImage\": { \"sk\": { \"S\": [{ \"prefix\": \"rims:\" }] }}}}}"
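If it helps, the same prefix check can also be applied inside the function as a guard for anything the event-source filter lets through; is_relevant below is a hypothetical helper operating on the decoded record from the sketch above.

def is_relevant(change, prefix="bar:"):
    # Mirror the event-source filter: keep only records whose sort key
    # (in its DynamoDB attribute-value form) starts with the prefix
    sk = change.get("dynamodb", {}).get("NewImage", {}).get("sk", {})
    return sk.get("S", "").startswith(prefix)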
I get the SNS event as a JSON object in my email, but I want specific parts sent to a Slack notification using Lambda: the time, event name, group ID, event ID, etc. should be parsed out and sent to Slack. I have a CloudWatch event that monitors whether someone used 0.0.0.0/0 on a given security group; if that happens, it triggers a CloudWatch event wired to an SNS alert. I have the email alert integrated, but I want the same on Slack. I've tried various examples online but keep getting errors, and I need some guidance on this.
{
    "version": "0",
    "id": "5391448e-1276-49f1-d5a2-5b4898b1f863",
    "detail-type": "AWS API Call via CloudTrail",
    "source": "aws.ec2",
    "account": "982239453305",
    "time": "2019-10-02T10:07:07Z",
    "region": "eu-west-1",
    "resources": [],
    "detail": {
        "eventVersion": "1.05",
        "userIdentity": {
            "type": "AssumedRole",
            "principalId": "AROAIZE22Q5MDGTLWB2FW:jahmed",
            "arn": "arn:aws:sts::988339453305:assumed-role/dp-admins/arahman",
            "accountId": "988339453305",
            "accessKeyId": "*******",
            "sessionContext": {
                "sessionIssuer": {
                    "type": "Role",
                    "principalId": "********",
                    "arn": "arn:aws:iam::988569453305:role/dp-admins",
                    "accountId": "988569453305",
                    "userName": "dp-admins"
                },
                "webIdFederationData": {},
                "attributes": {
                    "mfaAuthenticated": "true",
                    "creationDate": "2019-10-02T10:05:55Z"
                }
            }
        },
        "eventTime": "2019-10-02T10:07:07Z",
        "eventSource": "ec2.amazonaws.com",
        "eventName": "RevokeSecurityGroupIngress",
        "awsRegion": "eu-west-1",
        "sourceIPAddress": "195.89.75.182",
        "userAgent": "console.ec2.amazonaws.com",
        "requestParameters": {
            "groupId": "sg-00d088d28c60e6bd0",
            "ipPermissions": {
                "items": [
                    {
                        "ipProtocol": "tcp",
                        "fromPort": 0,
                        "toPort": 0,
                        "groups": {},
                        "ipRanges": {
                            "items": [
                                {
                                    "cidrIp": "0.0.0.0/0",
                                    "description": "test-MUST-REMOVE!"
                                }
                            ]
                        },
                        "ipv6Ranges": {},
                        "prefixListIds": {}
                    },
                    {
                        "ipProtocol": "tcp",
                        "fromPort": 0,
                        "toPort": 0,
                        "groups": {},
                        "ipRanges": {},
                        "ipv6Ranges": {
                            "items": [
                                {
                                    "cidrIpv6": "::/0",
                                    "description": "test-MUST-REMOVE!"
                                }
                            ]
                        },
                        "prefixListIds": {}
                    }
                ]
            }
        },
        "responseElements": {
            "requestId": "93fc850f-65e7-464f-b2e0-3db1753a0c94",
            "_return": true
        },
        "requestID": "93fc850f-65e7-464f-b2e0-3db1753a0c94",
        "eventID": "2aa40c8d-cc28-45af-89c8-e8885d98dc00",
        "eventType": "AwsApiCall"
    }
}
This is the code I have used to integrate with Slack: it reads the SNS message, then posts the message to the Slack webhook URL.
import json
import logging
import os
from urllib2 import Request, urlopen, URLError, HTTPError

# Read all the environment variables
SLACK_WEBHOOK_URL = os.environ['SLACK_WEBHOOK_URL']
SLACK_USER = os.environ['SLACK_USER']
SLACK_CHANNEL = os.environ['SLACK_CHANNEL']

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info("Event: " + str(event))
    # Read message posted on SNS Topic
    message = json.loads(event['Records'][0]['Sns']['Message'])
    logger.info("Message: " + str(message))
    # Construct a new slack message
    slack_message = {
        'channel': SLACK_CHANNEL,
        'username': SLACK_USER,
        'text': "%s" % (message)
    }
    # Post message on SLACK_WEBHOOK_URL
    req = Request(SLACK_WEBHOOK_URL, json.dumps(slack_message))
    try:
        response = urlopen(req)
        response.read()
        logger.info("Message posted to %s", slack_message['channel'])
    except HTTPError as e:
        logger.error("Request failed: %d %s", e.code, e.reason)
    except URLError as e:
        logger.error("Server connection failed: %s", e.reason)
I've gotten really stuck trying to get AWS Rekognition to label images I upload to S3. I'm still learning how to get the roles and access right (I have added 'all' Rekognition services as inline policies to all the roles I have in IAM for this app, which I'm building to get some hands-on experience with AWS).
Below is all the code (apologies for the messy code, I'm still learning).
Further below that is the output from the tests I'm running in Lambda.
Could someone please suggest what I am doing wrong, and how I could adjust things so Rekognition can scan the image and list out what is in it (e.g. person, tree, car, etc.)?
Thanks in advance!
'use strict';

let aws = require('aws-sdk');
let s3 = new aws.S3({ apiVersion: '2006-03-01' });
let rekognition = new aws.Rekognition();

s3.bucket = 'arn:aws:s3:::XXXXXXX/uploads';

exports.handler = function(event, context) {
    // Get the object from the event and show its content type
    const eventn = event.Records[0].eventName;
    const filesize = event.Records[0].s3.object.size;
    const bucket = event.Records[0].s3.bucket.name;
    const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
    var eventText = JSON.stringify(event, null, 2);
    console.log('print this out -->' + eventText);
    console.log('bucket name --> ' + s3.bucket);

    var filesizemod = "-";
    if (typeof filesize == "number") {
        if (filesize >= 1000000000) { filesizemod = (filesize / 1000000000).toFixed(2) + ' GB'; }
        else if (filesize >= 1000000) { filesizemod = (filesize / 1000000).toFixed(2) + ' MB'; }
        else if (filesize >= 1000) { filesizemod = (filesize / 1000).toFixed(2) + ' KB'; }
        else { filesizemod = filesize + ' bytes'; }
    } else if (typeof filesize !== 'undefined' && filesize) {
        filesizemod = filesize;
    }

    var Rekparams = {
        Image: {
            S3Object: { Bucket: s3.bucket, Name: key }
        },
        MaxLabels: 10,
        MinConfidence: 0.0
    };
    console.log("s3object is = " + JSON.stringify(Rekparams));

    var request = rekognition.detectLabels(Rekparams, function(err, data) {
        if (err) {
            var errorMessage = 'Error in [rekognition-image-assessment].\r' +
                ' Function input [' + JSON.stringify(event, null, 2) + '].\r' +
                ' Error [' + err + '].';
            // Log error
            console.log(errorMessage, err.stack);
            return (errorMessage, null);
        }
        else {
            console.log("i get to here!!!! ****");
            console.log('Retrieved Labels [' + JSON.stringify(data) + ']');
            console.log("i have got all the labels i need!!");
            // Return labels as a JavaScript object that can be passed into the
            // subsequent lambda function.
            return (null, Object.assign(data, event));
        }
    });
    console.log("not in label getting function!!");

    // Call detectLabels
    //var request = rekognition.detectLabels(Rekparams);
    //var request1 = rekognition.detectLabels(bucket, key);
    //var labels = JSON.stringify(request1);
    //console.log('Retrieved Labels ['+JSON.stringify(data)+']');
    //DetectLabelsRequest request = new DetectLabelsRequest()
    //    .withImage(new Image().withS3Object(new S3Object().withName(key).withBucket(s3.bucket))).withMaxLabels(10).withMinConfidence(75F);

    var subjecttext = "Myfirstapp -> New image uploaded";
    var eventText2 = "\n\nFile: " + key + "\nSize: "
        + filesizemod
        + "\n\nPlease see my S3 bucket for images."
        + "\nThis is what is in the image:"
        + request;

    var sns = new aws.SNS();
    var params = {
        Message: eventText2,
        Subject: subjecttext,
        TopicArn: "arn:aws:sns:XXXXXX"
    };
    sns.publish(params, context.done);
};
Test output from Lambda. Also note my S3 bucket is in the same region as my Lambda function:
Response:
{
    "ResponseMetadata": {
        "RequestId": "a08afc8a-d2a4-5a8a-a435-af4503295913"
    },
    "MessageId": "5f1c516b-c52f-5aa1-8af3-02a414a2c938"
}
Request ID:
"1b17d85f-8e77-11e8-a89d-e723ca75e0cf"
Function Logs:
"1970-01-01T00:00:00.000Z",
"requestParameters": {
"sourceIPAddress": "127.0.0.1"
},
"s3": {
"configurationId": "testConfigRule",
"object": {
"eTag": "0123456789abcdef0123456789abcdef",
"key": "HappyFace.jpg",
"sequencer": "0A1B2C3D4E5F678901",
"size": 1024
},
"bucket": {
"ownerIdentity": {
"principalId": "EXAMPLE"
},
"name": "sourcebucket",
"arn": "arn:aws:s3:::mybucket"
},
"s3SchemaVersion": "1.0"
},
"responseElements": {
"x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH",
"x-amz-request-id": "EXAMPLE123456789"
},
"awsRegion": "us-east-1",
"eventName": "ObjectCreated:Put",
"userIdentity": {
"principalId": "EXAMPLE"
},
"eventSource": "aws:s3"
}
]
}
2018-07-23T12:51:24.864Z 1b17d85f-8e77-11e8-a89d-e723ca75e0cf bucket name --> arn:aws:s3:::XXXXXXXX/uploads
2018-07-23T12:51:24.865Z 1b17d85f-8e77-11e8-a89d-e723ca75e0cf s3object is = {"Image":{"S3Object":{"Bucket":"arn:aws:s3:::XXXXXXX/uploads","Name":"HappyFace.jpg"}},"MaxLabels":10,"MinConfidence":0}
2018-07-23T12:51:25.427Z 1b17d85f-8e77-11e8-a89d-e723ca75e0cf not in label getting function!!
2018-07-23T12:51:25.925Z 1b17d85f-8e77-11e8-a89d-e723ca75e0cf Error in [rekognition-image-assessment].
Function input [{
"Records": [
{
"eventVersion": "2.0",
"eventTime": "1970-01-01T00:00:00.000Z",
"requestParameters": {
"sourceIPAddress": "127.0.0.1"
},
"s3": {
"configurationId": "testConfigRule",
"object": {
"eTag": "0123456789abcdef0123456789abcdef",
"key": "HappyFace.jpg",
"sequencer": "0A1B2C3D4E5F678901",
"size": 1024
},
"bucket": {
"ownerIdentity": {
"principalId": "EXAMPLE"
},
"name": "sourcebucket",
"arn": "arn:aws:s3:::mybucket"
},
"s3SchemaVersion": "1.0"
},
"responseElements": {
"x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH",
"x-amz-request-id": "EXAMPLE123456789"
},
"awsRegion": "us-east-1",
"eventName": "ObjectCreated:Put",
"userIdentity": {
"principalId": "EXAMPLE"
},
"eventSource": "aws:s3"
}
]
}].
Error [ValidationException: 1 validation error detected: Value 'arn:aws:s3:::XXXXXX/uploads' at 'image.s3Object.bucket' failed to satisfy constraint: Member must satisfy regular expression pattern: [0-9A-Za-z\.\-_]*]. ValidationException: 1 validation error detected: Value 'arn:aws:s3:::XXXXXXXX/uploads' at 'image.s3Object.bucket' failed to satisfy constraint: Member must satisfy regular expression pattern: [0-9A-Za-z\.\-_]*
at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/protocol/json.js:48:27)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:683:14)
at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:685:12)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
END RequestId: 1b17d85f-8e77-11e8-a89d-e723ca75e0cf
REPORT RequestId: 1b17d85f-8e77-11e8-a89d-e723ca75e0cf Duration: 1309.41 ms Billed Duration: 1400 ms Memory Size: 128 MB Max Memory Used: 36 MB
Bucket is not supposed to be an ARN, but the name of the bucket.
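A minimal sketch of the corrected call, in Python for brevity (boto3's detect_labels mirrors the Node SDK's detectLabels; being synchronous, the labels are also available before any follow-up SNS publish):

import boto3
from urllib.parse import unquote_plus

rekognition = boto3.client("rekognition")

def lambda_handler(event, context):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]  # plain bucket name, not an ARN
    key = unquote_plus(record["s3"]["object"]["key"])

    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxLabels=10,
        MinConfidence=75,
    )
    return [label["Name"] for label in response["Labels"]]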
I'm trying to invoke a Lambda function from an SNS event carrying an S3 event payload (i.e. an S3 put triggers an event published to an SNS topic, which delivers to a subscribed Lambda function). The only way I have been able to get at the actual S3 event information is to access it as a JsonNode, and I know there has to be a better way (e.g. deserialization).
I really thought I could have my Lambda function accept an S3EventNotification, due to the comments I found here:
https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/event/S3EventNotification.java
A helper class that represents a strongly typed S3 EventNotification item sent
to SQS, SNS, or Lambda.
So, how do I receive the S3EventNotification as a POJO?
Below are the various ways I have tried:
public class LambdaFunction implements RequestHandler<S3EventNotification, Object> {
    @Override
    public Object handleRequest(S3EventNotification input, Context context) {
        System.out.println(JsonUtil.MAPPER.writeValueAsString(input));
        return null;
    }
}
Resulting in:
{
    "Records": [
        {
            "awsRegion": null,
            "eventName": null,
            "eventSource": null,
            "eventTime": null,
            "eventVersion": null,
            "requestParameters": null,
            "responseElements": null,
            "s3": null,
            "userIdentity": null
        }
    ]
}
I have also tried the following (note: JsonUtil.MAPPER just returns a Jackson ObjectMapper):
public class LambdaFunction {
    public Object handleRequest(S3EventNotification records, Context context) throws IOException {
        System.out.println(JsonUtil.MAPPER.writeValueAsString(records));
        return null;
    }
}
This returns the same as before:
{
    "Records": [
        {
            "awsRegion": null,
            "eventName": null,
            "eventSource": null,
            "eventTime": null,
            "eventVersion": null,
            "requestParameters": null,
            "responseElements": null,
            "s3": null,
            "userIdentity": null
        }
    ]
}
I can access the S3 event payload by simply receiving the SNSEvent; however, when I try to deserialize the message payload into S3EventRecord or S3EventNotification, there are differences in fields. I really don't want to have to walk down the JsonNode manually...
public class LambdaFunction {
    public Object handleRequest(SNSEvent input, Context context) throws IOException {
        System.out.println("Records: " + JsonUtil.MAPPER.writeValueAsString(input));
        for (SNSEvent.SNSRecord record : input.getRecords()) {
            System.out.println("Record Direct: " + record.getSNS().getMessage());
            JsonNode node = JsonUtil.MAPPER.readTree(record.getSNS().getMessage());
            JsonNode recordNode = ((ArrayNode) node.get("Records")).get(0);
            System.out.println(recordNode.toString());
            S3EventNotification s3events = JsonUtil.MAPPER.readValue(record.getSNS().getMessage(), new TypeReference<S3EventNotification>() {});
            System.out.println(s3events == null);
        }
        return null;
    }
}
This returns the following:
{
    "eventVersion": "2.0",
    "eventSource": "aws:s3",
    "awsRegion": "us-east-1",
    "eventTime": "2017-03-04T05:34:25.149Z",
    "eventName": "ObjectCreated:Put",
    "userIdentity": {
        "principalId": "AWS:XXXXXXXXXXXXX"
    },
    "requestParameters": {
        "sourceIPAddress": "<<IP ADDRESS>>"
    },
    "responseElements": {
        "x-amz-request-id": "XXXXXXXX",
        "x-amz-id-2": "XXXXXXXXXXXXX="
    },
    "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "NotifyNewRawArticle",
        "bucket": {
            "name": "MYBUCKET",
            "ownerIdentity": {
                "principalId": "XXXXXXXXXXXXXXX"
            },
            "arn": "arn:aws:s3:::MYBUCKET"
        },
        "object": {
            "key": "news\/test",
            "size": 0,
            "eTag": "d41d8cd98f00b204e9800998ecf8427e",
            "sequencer": "0058BA51E113A948C3"
        }
    }
}
Unrecognized field "sequencer" (class com.amazonaws.services.s3.event.S3EventNotification$S3ObjectEntity), not marked as ignorable (4 known properties: "size", "versionId", "eTag", "key"])
I am depending on aws-java-sdk-s3-1.11.77 and aws-java-sdk-sns-1.11.77.
You should handle SNSEvent instead of S3Event, since the Lambda consumes your SNS events. The code below works for me.
public Object handleRequest(SNSEvent request, Context context) {
    request.getRecords().forEach(snsRecord -> {
        System.out.println("Record Direct: " + snsRecord.getSNS().getMessage());
        S3EventNotification s3EventNotification = S3Event.parseJson(snsRecord.getSNS().getMessage());
        System.out.println(s3EventNotification.toJson());
    });
    return null;
}
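For comparison, the same unwrapping in Python is just a json.loads on each SNS record's message; a sketch, assuming the SNS-wrapped S3 notification shown earlier:

import json

def lambda_handler(event, context):
    for record in event["Records"]:
        # The SNS envelope carries the S3 notification as a JSON string
        s3_event = json.loads(record["Sns"]["Message"])
        for s3_record in s3_event["Records"]:
            print(s3_record["s3"]["bucket"]["name"],
                  s3_record["s3"]["object"]["key"])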