I have created a Lambda function and I want to trigger it from Amazon SQS. For the event value in handler(event, context), I want to pass a value from SQS, specifically a large JSON payload. How can I do that?
From Sample Events Published by Event Sources - AWS Lambda, Amazon SQS will send this event information to the AWS Lambda function:
{
    "Records": [
        {
            "messageId": "c80e8021-a70a-42c7-a470-796e1186f753",
            "receiptHandle": "...",
            "body": "{\"foo\":\"bar\"}",
            "attributes": {
                "ApproximateReceiveCount": "3",
                "SentTimestamp": "1529104986221",
                "SenderId": "594035263019",
                "ApproximateFirstReceiveTimestamp": "1529104986230"
            },
            "messageAttributes": {},
            "md5OfBody": "9bb58f26192e4ba00f01e2e7b136bbd8",
            "eventSource": "aws:sqs",
            "eventSourceARN": "arn:aws:sqs:us-west-2:594035263019:NOTFIFOQUEUE",
            "awsRegion": "us-west-2"
        }
    ]
}
The body of the SQS message is provided in the body field of each record.
The maximum size of an SQS message is 256 KB, but I'm not sure you'd be able to pass something that big to Lambda. I recommend you try it and see!
Worst case, store the content in Amazon S3 and pass a reference to the S3 object in the message.
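A minimal handler sketch of that S3-pointer pattern in Python, assuming a message body shaped like {"bucket": ..., "key": ...} (those field names are illustrative, not an AWS convention):

import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        # The SQS message body arrives as a string; parse it as JSON.
        pointer = json.loads(record["body"])
        # Assumed message shape: {"bucket": "...", "key": "..."}
        obj = s3.get_object(Bucket=pointer["bucket"], Key=pointer["key"])
        payload = json.loads(obj["Body"].read())
        # ... business logic on payload goes here ...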
Create an SQS queue. This SQS queue should take S3 bucket names as input. Maybe it should also take the region of the S3 bucket? You might want it to take a JSON object:
{"bucketname": "this-is-my-bucket", "region": "us-west-2"}
Related
I want to send an SQS notification from an S3 bucket for a specific folder, but it seems wildcards are not supported in EventBridge or in S3 Event Notifications. Is there any way I can trigger my SQS queue?
BucketName: my-s3-bucket
I want to send a notification when a file is added to a specific folder inside this bucket:
MyFolder/<username>/INPUT/abc.txt
and not for any other folder in the bucket.
I also tried EventBridge with the event pattern below, but had no luck there either:
{
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {
        "bucket": {
            "name": ["my-s3-bucket"]
        },
        "object": {
            "key": [{"prefix": "*/INPUT/*"}]
        }
    }
}
Please suggest a way to achieve this.
If it's not directly supported, then you have to filter your events through a Lambda function, as sketched below:
S3 ---> Lambda to filter the events ---> SQS
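A minimal sketch of that filtering function in Python; the queue URL is a placeholder, and fnmatch handles the wildcard match (note that * in fnmatch also matches across / separators):

import fnmatch
import json
from urllib.parse import unquote_plus

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

def handler(event, context):
    for record in event["Records"]:
        # Keys in S3 events are URL-encoded; decode before matching.
        key = unquote_plus(record["s3"]["object"]["key"])
        # Forward only objects under MyFolder/<username>/INPUT/.
        if fnmatch.fnmatch(key, "MyFolder/*/INPUT/*"):
            sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(record))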
In my pipeline I have an event notification on an S3 bucket which triggers an SNS topic. That SNS topic in turn has a Lambda function subscribed to it. I need the SNS topic to send a hard-coded message body to the Lambda, because it gets used in that function.
Since the SNS topic publishes the message automatically when the S3 event notification fires, I am wondering if and how I can edit the message that gets sent to Lambda.
To be clear: I want the same message sent every time. The goal is for the Lambda to get a variable that depends only on which topic triggered it.
Currently I am building this through the console, but will eventually code it in Terraform for production.
When Amazon SNS triggers an AWS Lambda function, the information it sends includes the SNS TopicArn.
You could use that ARN to determine which SNS topic triggered the Lambda function, and therefore which action it should process:
{
    "Records": [
        {
            "EventSource": "aws:sns",
            "EventVersion": "1.0",
            "EventSubscriptionArn": "arn:aws:sns:us-east-1:{{{accountId}}}:ExampleTopic",
            "Sns": {
                "Type": "Notification",
                "MessageId": "95df01b4-ee98-5cb9-9903-4c221d41eb5e",
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:ExampleTopic",
                "Subject": "example subject",
                "Message": "example message",
                "Timestamp": "1970-01-01T00:00:00.000Z",
                "SignatureVersion": "1",
                "Signature": "EXAMPLE",
                "SigningCertUrl": "EXAMPLE",
                "UnsubscribeUrl": "EXAMPLE",
                "MessageAttributes": {
                    "Test": {
                        "Type": "String",
                        "Value": "TestString"
                    },
                    "TestBinary": {
                        "Type": "Binary",
                        "Value": "TestBinary"
                    }
                }
            }
        }
    ]
}
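For example, a minimal handler sketch in Python that branches on the topic (the ARN suffixes are hypothetical):

def handler(event, context):
    for record in event["Records"]:
        topic_arn = record["Sns"]["TopicArn"]
        # Hypothetical topic names; substitute your own.
        if topic_arn.endswith(":uploads-topic"):
            action = "process-upload"
        elif topic_arn.endswith(":deletes-topic"):
            action = "process-delete"
        else:
            action = "ignore"
        print(f"{topic_arn} -> {action}")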
Rather than having Amazon S3 send a message to Amazon SNS directly, you might be able to configure an Amazon CloudWatch Events rule that triggers on object creation and sends a Constant (a fixed JSON string) to Amazon SNS as the target input.
If large files are being uploaded, you might also need to trigger it on CompleteMultipartUpload.
You could also have the rule trigger the AWS Lambda function directly (without going via Amazon SNS), depending upon your use-case. A Constant can also be specified for this.
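For example, with boto3 you could attach the target with a constant Input, which replaces the matched event entirely (the rule name, target ARN, and payload below are placeholders):

import json
import boto3

events = boto3.client("events")

events.put_targets(
    Rule="s3-object-created",  # placeholder rule name
    Targets=[{
        "Id": "notify-sns",
        "Arn": "arn:aws:sns:us-east-1:123456789012:ExampleTopic",
        # Constant JSON delivered to the target instead of the matched event.
        "Input": json.dumps({"pipeline": "ingest", "source": "s3-event"}),
    }],
)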
I have a Step Function and I want to send a notification to an SNS topic once the state machine completes its execution. I know the state machine can trigger a Lambda and I can implement the logic there, but I wanted to check whether the state machine can notify SNS directly.
Step Functions does support SNS. You could have a Lambda function notify your SNS topic, but you can also use SNS directly from the state machine without any Lambda. Below you can see an example state:
{
    "Publish to SNS to notify DataDog": {
        "Type": "Task",
        "Resource": "arn:aws:states:::sns:publish",
        "Parameters": {
            "TopicArn": "<SNS topic ARN>",
            "Message.$": "$",
            "MessageAttributes": {
                "notify": {
                    "DataType": "String",
                    "StringValue": "Send to SNS"
                }
            }
        },
        "Next": "<Next state>"
    }
}
You can read more about it in the Step Functions documentation on AWS service integrations.
Question
How can I find out the format of the event that an AWS service sends to Lambda?
The AWS Lambda console includes a Test feature, which can provide a sample event for most of the events that are generated by AWS services.
You can modify these sample events to include your specific data.
For example, the Amazon S3 Put sample event simulates a new object being added to an Amazon S3 bucket. You can modify the event to include your own Bucket and Object names, then use it to test the function without actually using Amazon S3.
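The same idea works outside the console: save a sample event to a file (for example, generated with sam local generate-event, shown in the next answer), modify the bucket and key, and feed it to your handler locally. A minimal sketch, where the module name my_function and the file name are assumptions:

import json
from my_function import handler  # hypothetical module containing your handler

with open("s3-put-sample.json") as f:
    event = json.load(f)

# Point the sample at your own bucket and object.
event["Records"][0]["s3"]["bucket"]["name"] = "my-test-bucket"
event["Records"][0]["s3"]["object"]["key"] = "uploads/test.txt"

print(handler(event, context=None))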
Generating Sample Event Payloads
$ sam local generate-event --help
Usage: sam local generate-event [OPTIONS] COMMAND [ARGS]...
You can use this command to generate sample payloads from different event
sources such as S3, API Gateway, and SNS. These payloads contain the
information that the event sources send to your Lambda functions.
Commands:
alexa-skills-kit
alexa-smart-home
apigateway
batch
cloudformation
cloudfront
cloudwatch
codecommit
codepipeline
cognito
config
connect
dynamodb
kinesis
lex
rekognition
s3
sagemaker
ses
sns
sqs
stepfunctions
S3 put
$ sam local generate-event s3 put
{
    "Records": [
        {
            "eventVersion": "2.0",
            "eventSource": "aws:s3",
            "awsRegion": "us-east-1",
            "eventTime": "1970-01-01T00:00:00.000Z",
            "eventName": "ObjectCreated:Put",
            "userIdentity": {
                "principalId": "EXAMPLE"
            },
            "requestParameters": {
                "sourceIPAddress": "127.0.0.1"
            },
            "responseElements": {
                "x-amz-request-id": "EXAMPLE123456789",
                "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
            },
            "s3": {
                "s3SchemaVersion": "1.0",
                "configurationId": "testConfigRule",
                "bucket": {
                    "name": "example-bucket",
                    "ownerIdentity": {
                        "principalId": "EXAMPLE"
                    },
                    "arn": "arn:aws:s3:::example-bucket"
                },
                "object": {
                    "key": "test/key",
                    "size": 1024,
                    "eTag": "0123456789abcdef0123456789abcdef",
                    "sequencer": "0A1B2C3D4E5F678901"
                }
            }
        }
    ]
}
Firehose
$ sam local generate-event kinesis kinesis-firehose
{
    "invocationId": "invocationIdExample",
    "deliveryStreamArn": "arn:aws:kinesis:EXAMPLE",
    "region": "us-east-1",
    "records": [
        {
            "recordId": "49546986683135544286507457936321625675700192471156785154",
            "approximateArrivalTimestamp": 1495072949453,
            "data": "SGVsbG8sIHRoaXMgaXMgYSB0ZXN0IDEyMy4="
        }
    ]
}
Update
As per the comment by @John Rotenstein, the Lambda console can generate many more sample events.
For Go, the aws-lambda-go/events package provides sample code showing how to handle events from different sources, along with sample test data. It is a good resource to have a look at.
Just print the event out the first time you start your development. For Python, the command is:
print(json.dumps(event))
The output will be available in the CloudWatch log group for your Lambda. This is most useful for debugging and testing with real, live events.
Sometimes you can also find the format in the documentation, but I found that just printing it out is the fastest and most reliable way to get to know it.
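As a complete, runnable version of that one-liner, a minimal handler that does nothing but log the incoming event:

import json

def handler(event, context):
    # Logs the full event to CloudWatch so you can inspect its shape.
    print(json.dumps(event))
    return event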
I'm new to AWS and here is the task I'm trying to solve.
An SQS queue is set up, and from time to time new messages arrive in it. I want to set up a Lambda to retrieve those messages and perform some business logic on their content.
Searching the AWS site and the Internet in general, I understood that SQS itself can't be a trigger for Lambda, hence I need to set up CloudWatch to trigger the Lambda on a schedule (every minute, for example). There is a code example on the aws GitHub showing how to consume a message.
So far so good. Now, when creating the Lambda itself, I need to specify the input type to implement the RequestHandler interface:
public interface RequestHandler<I, O> {
    O handleRequest(I var1, Context var2);
}
But my Lambda is not expecting any input; it will go to SQS on its own and pull the messages. Does it make any sense to have an input at all?
Can I leave it void, or even use some other method signature entirely (of course, not implementing that interface in that case)?
Here your Lambda will receive a reference to the CloudWatch trigger event.
You might not be interested in that, but there can be instances where the Lambda needs to know the trigger details, even when the trigger is a CloudWatch alarm.
The following is an example event:
{ "version": "0", "id": "53dc4d37-cffa-4f76-80c9-8b7d4a4d2eaa",
"detail-type": "Scheduled Event", "source": "aws.events", "account":
"123456789012", "time": "2015-10-08T16:53:06Z", "region": "us-east-1",
"resources": [
"arn:aws:events:us-east-1:123456789012:rule/my-scheduled-rule" ],
"detail": {} }