Get input in Lambda from S3 event trigger - amazon-web-services

I am trying to trigger a Lambda function whenever any file lands in an S3 bucket. For that I have configured the event on the S3 bucket. But I need to give an input to the Lambda function (the event) that will be triggered. How do I do that?

Add a new Lambda function handler to the project. Give the new function handler a name, say 'S3FunctionHandler'; use the default input type that is already selected, S3 Event; and leave the output type as object:
This will create some boilerplate code with the Lambda function handler that takes an S3 event as input:
Select S3FunctionHandler and select an IAM Role:
Switch over to the AWS Management Console to test the Lambda function with a dummy S3 event. To do this, let's configure a test event: select the S3 Put event, which you will find under the Actions tab. This simulates somebody uploading a new object to an S3 bucket.

If an Amazon S3 Event is configured to trigger an AWS Lambda function, then S3 will provide information to the Lambda function about the S3 object that triggered the function.
From Using AWS Lambda with Amazon S3 - AWS Lambda:
{
  "Records": [
    {
      "eventVersion": "2.1",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-2",
      "eventTime": "2019-09-03T19:37:27.192Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "AWS:AIDAINPONIXQXHT3IKHL2"
      },
      "requestParameters": {
        "sourceIPAddress": "205.255.255.255"
      },
      "responseElements": {
        "x-amz-request-id": "D82B88E5F771F645",
        "x-amz-id-2": "vlR7PnpV2Ce81l0PRw6jlUpck7Jo5ZsQjryTjKlc5aLWGVHPZLj5NeC6qMa0emYBDXOo6QBU0Wo="
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "828aa6fc-f7b5-4305-8584-487c791949c1",
        "bucket": {
          "name": "my-bucket",
          "ownerIdentity": {
            "principalId": "A3I5XTEXAMAI3E"
          },
          "arn": "arn:aws:s3:::my-bucket"
        },
        "object": {
          "key": "foo.jpg",
          "size": 1305107,
          "eTag": "b21b84d653bb07b05b1e6b33684dc11b",
          "sequencer": "0C0F6F405D6ED209E1"
        }
      }
    }
  ]
}
This information includes the bucket name, the key (filename) of the object, the event that triggered the function, and various other details. The Lambda function can then use this information to process the object appropriately.

Related

Passing variable values through S3 and SQS event trigger message

I have set up the AWS pipeline as S3 -> SQS -> Lambda. An S3 PutObject event generates an event message and passes it to SQS, and SQS triggers the Lambda. I have a requirement to pass a variable value from S3 to SQS and finally to Lambda as part of the event message. The variable value could be the file name or some string value.
Can we customize the event message JSON generated by the S3 event to pass more information along with the message?
Does SQS just pass the event message received from S3 to Lambda, or does it alter the message or generate its own?
How can we display or see the message generated by S3 in SQS or Lambda?
You can't manipulate the S3 event data. The schema looks like this. That event will be passed on to the SQS queue, which will add some of its own metadata and pass it along to Lambda. This tutorial has a sample SQS record.
When Amazon S3 triggers an event, a message is sent to the desired destination (AWS Lambda, Amazon SNS, Amazon SQS). The message includes the bucket name and key (filename) of the object that triggered the event.
Here is a sample event (from Using AWS Lambda with Amazon S3 - AWS Lambda):
{
  "Records": [
    {
      "eventVersion": "2.1",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-2",
      "eventTime": "2019-09-03T19:37:27.192Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "AWS:AIDAINPONIXQXHT3IKHL2"
      },
      "requestParameters": {
        "sourceIPAddress": "205.255.255.255"
      },
      "responseElements": {
        "x-amz-request-id": "D82B88E5F771F645",
        "x-amz-id-2": "vlR7PnpV2Ce81l0PRw6jlUpck7Jo5ZsQjryTjKlc5aLWGVHPZLj5NeC6qMa0emYBDXOo6QBU0Wo="
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "828aa6fc-f7b5-4305-8584-487c791949c1",
        "bucket": {
          "name": "lambda-artifacts-deafc19498e3f2df",
          "ownerIdentity": {
            "principalId": "A3I5XTEXAMAI3E"
          },
          "arn": "arn:aws:s3:::lambda-artifacts-deafc19498e3f2df"
        },
        "object": {
          "key": "b21b84d653bb07b05b1e6b33684dc11b",
          "size": 1305107,
          "eTag": "b21b84d653bb07b05b1e6b33684dc11b",
          "sequencer": "0C0F6F405D6ED209E1"
        }
      }
    }
  ]
}
The bucket can be obtained from Records[].s3.bucket.name and the key can be obtained from Records[].s3.object.key.
However, there is no capability to send a particular value of your own, since S3 generates the event. You could possibly derive a value instead. For example, if events from several different buckets trigger the Lambda function, the function could look at the bucket name to determine why it was triggered and then substitute the desired value.

AWS Lambda S3 trigger on multiple uploads

I'm looking to implement an AWS Lambda function that is triggered by the upload of an audio file to my S3 bucket, concatenates this file with the previously uploaded file (already stored in the bucket), and outputs the concatenated file back to the bucket. I'm quite new to Lambda and I'm wondering: is it possible to pass a list of file names to process into a Lambda function to transcode? Or does Lambda only accept one read at a time?
When Lambda gets invoked by S3 directly, it will get an event that looks similar to this one from the docs:
{
  "Records": [
    {
      "eventVersion": "2.1",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-2",
      "eventTime": "2019-09-03T19:37:27.192Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "AWS:AIDAINPONIXQXHT3IKHL2"
      },
      "requestParameters": {
        "sourceIPAddress": "205.255.255.255"
      },
      "responseElements": {
        "x-amz-request-id": "D82B88E5F771F645",
        "x-amz-id-2": "vlR7PnpV2Ce81l0PRw6jlUpck7Jo5ZsQ"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "828aa6fc-f7b5-4305-8584-487c791949c1",
        "bucket": {
          "name": "lambda-artifacts-deafc19498e3f2df",
          "ownerIdentity": {
            "principalId": "A3I5XTEXAMAI3E"
          },
          "arn": "arn:aws:s3:::lambda-artifacts-deafc19498e3f2df"
        },
        "object": {
          "key": "b21b84d653bb07b05b1e6b33684dc11b",
          "size": 1305107,
          "eTag": "b21b84d653bb07b05b1e6b33684dc11b",
          "sequencer": "0C0F6F405D6ED209E1"
        }
      }
    }
  ]
}
It gives you basic metadata about the object that was uploaded.
There's no way to customize this event, but the Lambda function can query external resources (for example, list the other objects in the bucket) for more information.
You could also look at S3 Batch Operations, but that's probably not designed for your use case.

AWS account Id in SNS topic event

Is there any way to get the AWS account ID in an SNS topic event delivered to the subscriber? In my case, I want multiple customer accounts to be able to send their S3 putObject events to a given SNS topic ARN in my account, and I have a Lambda function subscribed to that topic. I now receive an event payload in my Lambda handler whenever a customer puts an object into an S3 bucket. But since there will be many customers, my Lambda needs to know which customer the incoming event is from. So I need the customer account ID in the SNS event payload; is that possible?
The schema received by the subscriber already contains the ARNs of both the subscription and the topic. Here is the schema. The account ID can be parsed from either:
`"TopicArn": "arn:aws:sns:us-east-2:123456789012:sns-lambda"`
`"EventSubscriptionArn": "arn:aws:sns:us-east-2:123456789012:sns-lambda:21be56ed-a058-49f5-8c98-aedd2564c486"`
It appears that your situation is:
Multiple AWS accounts have Amazon S3 buckets with an Amazon S3 event configured to trigger your AWS Lambda function
You want the Lambda function to be able to detect which account triggered the event
I don't think that this information is available. Here is a sample S3 Put event from the AWS Lambda "Test" console:
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "ap-southeast-2",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "EXAMPLE123456789",
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "example-bucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          },
          "arn": "arn:aws:s3:::example-bucket"
        },
        "object": {
          "key": "test/key",
          "size": 1024,
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901"
        }
      }
    }
  ]
}
There does not appear to be a field containing the Account ID of the source bucket.
To confirm this, I triggered an event on an S3 bucket and logged the event. I could not find any reference to an AWS Account ID.

How do I monitor multiple lambda function which are a part of micro service application?

We are trying to follow a micro-service architecture in our approach.
We have our front end in an S3 bucket and APIs in API Gateway connected to Lambda functions.
So the request flow would be something similar to this:
S3 -> API -> Lambda -> DB
The concern that I have is: how do I know whether my API has triggered a Lambda function?
There are monitoring options available for Lambda, but those apply after the Lambda function has been invoked.
Is there a way I can know whether my Lambda function was triggered from the API, and send a notification about it?
I checked the CloudTrail trail event for a Lambda invocation on my own API Gateway with Lambda. It has this form:
{
  "eventVersion": "1.07",
  "userIdentity": {
    "type": "AWSService",
    "invokedBy": "apigateway.amazonaws.com"
  },
  "eventTime": "2020-10-30T12:03:17Z",
  "eventSource": "lambda.amazonaws.com",
  "eventName": "Invoke",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "apigateway.amazonaws.com",
  "userAgent": "apigateway.amazonaws.com",
  "requestParameters": {
    "xxxx": "arn:aws:lambda:us-east-1:xxxx:function:fff",
    "sourceArn": "arn:aws:execute-api:us-east-1:xxx:84j28c7zga/test/ANY/test"
  },
  "responseElements": null,
  "additionalEventData": {
    "functionVersion": "arn:aws:lambda:us-east-1:xxxx:function:fff:$LATEST"
  },
  "requestID": "bc5f574e-58d8-4a2b-978b-5ec32aba447e",
  "eventID": "2345b878-4998-4317-a0c4-1005df40d873",
  "readOnly": false,
  "resources": [
    {
      "accountId": "xxxx",
      "type": "AWS::Lambda::Function",
      "ARN": "arn:aws:lambda:us-east-1:xxx:function:fff"
    }
  ],
  "eventType": "AwsApiCall",
  "managementEvent": false,
  "recipientAccountId": "xxxx",
  "sharedEventID": "1906ed81-6835-4046-943d-f2ca9e5b9d40",
  "eventCategory": "Data"
}
As you can see above, when the Lambda is invoked, you get information that it was API Gateway that invoked it:
"userIdentity": {
  "type": "AWSService",
  "invokedBy": "apigateway.amazonaws.com"
},
Another approach, easier than CloudTrail: when your Lambda is invoked by API Gateway, the Lambda event itself contains details you can use to match your use case.
Event schema: https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway.html
For example, a Node.js Lambda:
exports.handler = async (event, context) => {
  if (event["requestContext"] && event["httpMethod"]) {
    console.log("This is probably an API-GW event");
  } else {
    console.log("This is definitely not an API-GW event");
  }
};
Another option is to use a monitoring tool that gives you these abilities out of the box, such as Lumigo.
Full disclosure: Lumigo is made by the company I work for. I find it a great tool and use it for my personal projects as well.

AWS Lambda S3 Bucket Notification via CloudFormation

I'm trying to create a Lambda notification via CloudFormation but am getting an error about the ARN format being incorrect.
Either my CloudFormation is wrong, or CloudFormation doesn't support the Lambda preview yet.
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "LambdaArn": {
      "Type": "String",
      "Default": "arn:aws:lambda:{some-region}:{some-account-id}:function:{some-fn-name}"
    }
  },
  "Resources": {
    "EventArchive": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "NotificationConfiguration": {
          "TopicConfigurations": [
            {
              "Event": "s3:ObjectCreated:Put",
              "Topic": {
                "Ref": "LambdaArn"
              }
            }
          ]
        }
      }
    }
  }
}
But when I push up this CloudFormation I get the message:
The ARN is not well formed
Does anyone have any idea what this means? I know the example above has been modified so as not to use my actual ARN, but in my actual code I copied the ARN directly from the GUI.
Also, interestingly, I was able to create the notification via the AWS console, so I assume that AWS CloudFormation doesn't yet support this feature (even though the documentation doesn't make that entirely clear).
It looks like AWS has now released support for notifying lambda functions directly in CloudFormation.
The S3 NotificationConfiguration definition used to only include TopicConfigurations but has been updated to include LambdaConfigurations as well.
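A minimal sketch of what the updated notification block looks like (the resource name EventArchive and parameter LambdaArn are carried over from the question's template; note that LambdaConfigurations uses a Function key rather than Topic):

```json
{
  "EventArchive": {
    "Type": "AWS::S3::Bucket",
    "Properties": {
      "NotificationConfiguration": {
        "LambdaConfigurations": [
          {
            "Event": "s3:ObjectCreated:Put",
            "Function": { "Ref": "LambdaArn" }
          }
        ]
      }
    }
  }
}
```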
After adding the NotificationConfiguration, make sure you include a Lambda::Permission resource so that S3 is allowed to execute your Lambda function. Here is an example permission that can be used as a template:
"PhotoBucketExecuteProcessorPermission": {
"Type" : "AWS::Lambda::Permission",
"Properties" : {
"Action":"lambda:invokeFunction",
"FunctionName": { "Fn::GetAtt": [ "PhotoProcessor", "Arn" ]},
"Principal": "s3.amazonaws.com",
"SourceAccount": {"Ref" : "AWS::AccountId" },
"SourceArn": {
"Fn::Join": [":", [
"arn","aws","s3","", ""
,{"Ref" : "PhotoBucketName"}]]
}
}
}
From the docs:
The Amazon SNS topic to which Amazon S3 reports the specified events.
It appears that although S3 supports sending events to Lambda, CloudFormation has not yet caught up. It expects an SNS ARN where you are providing a Lambda function ARN.
For now, it looks like you will have to hook up the event notification manually.