Lambda to process SQS messages not triggered - amazon-web-services

I have a Lambda function that's supposed to process my SQS messages. However, it doesn't seem to be triggered automatically, even though I have my SQS queue configured as its event trigger. Below is my code.
import json
import boto3

def lambda_handler(event, context):
    # Number of messages to process at a time
    if "max" in event:
        max_messages = int(event["max"])
    else:
        max_messages = 1

    sqs = boto3.resource('sqs')  # Get access to the SQS resource
    queue = sqs.get_queue_by_name(QueueName='mysqs.fifo')  # Get the queue by name

    # Process messages
    count = 0
    for message in queue.receive_messages(MaxNumberOfMessages=max_messages):
        payload = json.loads(message.body)
        print(str(payload))
        count += 1
        message.delete()

    return {
        'statusCode': 200,
        'body': str(count)
    }

When an AWS Lambda function is configured with Amazon SQS as a trigger, the messages are passed in via the event variable. The code inside the function should not call Amazon SQS directly.
Here is an example of an event being passed to the function:
{
    "Records": [
        {
            "messageId": "11d6ee51-4cc7-4302-9e22-7cd8afdaadf5",
            "receiptHandle": "AQEBBX8nesZEXmkhsmZeyIE8iQAMig7qw...",
            "body": "Test message.",
            "attributes": {
                "ApproximateReceiveCount": "1",
                "SentTimestamp": "1573251510774",
                "SequenceNumber": "18849496460467696128",
                "MessageGroupId": "1",
                "SenderId": "AIDAIO23YVJENQZJOL4VO",
                "MessageDeduplicationId": "1",
                "ApproximateFirstReceiveTimestamp": "1573251510774"
            },
            "messageAttributes": {},
            "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3",
            "eventSource": "aws:sqs",
            "eventSourceARN": "arn:aws:sqs:us-east-2:123456789012:fifo.fifo",
            "awsRegion": "us-east-2"
        }
    ]
}
Your code could then access the messages with:
for record in event['Records']:
    payload = record['body']
    print(payload)
    count += 1
There is no need to delete the message -- it will be automatically deleted once the Lambda function successfully executes.
See: Using AWS Lambda with Amazon SQS - AWS Lambda
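Putting the pieces together, a complete handler might look like this (a minimal sketch, assuming message bodies are plain strings as in the sample event above):
import json

def lambda_handler(event, context):
    count = 0
    for record in event['Records']:
        payload = record['body']  # the raw message body as a string
        # If producers send JSON, parse it here, e.g. data = json.loads(payload)
        print(payload)
        count += 1
    # Returning normally tells Lambda the batch succeeded,
    # so the messages are removed from the queue automatically.
    return {
        'statusCode': 200,
        'body': str(count)
    }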

Related

AWS Chatbot and EventBridge for Glue Job State Changes Error - Event received is not supported

I am trying to set up AWS Chatbot with Slack integration to display error messages for state changes (errors) in AWS Glue. I have set up an AWS EventBridge event pattern to catch Glue Job State Changes as follows:
{
    "source": ["aws.glue"],
    "detail-type": ["Glue Job State Change"],
    "detail": {
        "state": ["FAILED"]
    }
}
This successfully catches all failed Glue Jobs and I have set up an AWS SNS topic as the target using the input transformer.
Input Transformer Input Path
{"jobname":"$.detail.jobName","jobrunid":"$.detail.jobRunId","jobstate":"$.detail.state"}
Input Transformer Input Template
"{\"detail-type\": \"Glue Job <job-name> has entered the state <job-state> with the message <message>.\"}"
AWS SNS has a subscription endpoint to the AWS Chatbot, which fails to send the notification to Slack.
AWS Chatbot CloudWatch logs after an event using Input Transformer
Event received is not supported (see https://docs.aws.amazon.com/chatbot/latest/adminguide/related-services.html ):
{
    "subscribeUrl": null,
    "type": "Notification",
    "signatureVersion": "1",
    "signature": <signature>,
    "topicArn": <topic-arn>,
    "signingCertUrl": <signing-cert-url>,
    "messageId": <message-id>,
    "message": "{\"detail-type\": \"Glue Job MyJob has entered the state FAILED with the message SystemExit: None.\"}",
    "subject": null,
    "unsubscribeUrl": <unsubscribe-url>,
    "timestamp": "2022-03-02T12:17:16.879Z",
    "token": null
}
When the input is set to 'Matched Events' in the AWS EventBridge Select Target, the Slack notification is sent, but it lacks any details.
Slack Notification
Glue Job State Change | eu-west-1 | Account: <account>
Glue Job State Change
AWS EventBridge Matched Events JSON Output
{
    "Type" : "Notification",
    "MessageId" : <message-id>,
    "TopicArn" : <topic-arn>,
    "Message" : "{\"detail-type\": [\"Glue Job State Change\"]}",
    "Timestamp" : "2022-03-02T11:17:52.443Z",
    "SignatureVersion" : "1",
    "Signature" : <signature>,
    "SigningCertURL" : <signing-cert-url>,
    "UnsubscribeURL" : <unsubscribe-url>
}
There are very few differences between the two JSON outputs, yet the input transformer output is considered an unsupported event. Is it possible to generate a custom message when using the AWS Chatbot for errors?
The best solution was to create a Lambda function as the target of the AWS EventBridge rule, which performs a POST to a Slack webhook.
# Import modules
import logging
import json
import urllib3

# Set up logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Define Lambda function
def lambda_handler(event, context):
    http = urllib3.PoolManager()
    url = <url>                           # Slack webhook URL
    link = <glue-studio-monitoring-link>  # Glue Studio monitoring link
    message = (
        f"A Glue Job {event['detail']['jobName']} with Job Run ID {event['detail']['jobRunId']} "
        f"has entered the state {event['detail']['state']} with error message: "
        f"{event['detail']['message']}. Visit the link for job monitoring {link}"
    )
    logger.info(message)
    headers = {"Content-type": "application/json"}
    data = {'text': message}
    response = http.request('POST',
                            url,
                            body=json.dumps(data),
                            headers=headers,
                            retries=False)
    logger.info(response.status)
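For local testing, the handler can be invoked with a hand-built event shaped like the Glue Job State Change events EventBridge delivers (a sketch; the field values here are illustrative only):
# Hypothetical Glue Job State Change event for local testing
test_event = {
    "detail": {
        "jobName": "MyJob",
        "jobRunId": "jr_abc123",
        "state": "FAILED",
        "message": "SystemExit: None",
    }
}

lambda_handler(test_event, None)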

Execute Lambda Function on CloudFormation Stack delete

Is there a way to trigger a Lambda function declared in the very same CFN template that was used to create a stack, when the given stack is being deleted?
Preferably I'd like to implement something roughly opposite to THIS snippet (that is: a relatively simple solution that omits e.g. the need to create SNS topics etc.).
Thanks in advance for any advice. Best regards! //M
You can achieve the desired behaviour by creating a CustomResource and hooking it up with your lambda function.
From the documentation page:
With AWS Lambda functions and custom resources, you can run custom code in response to stack events (create, update, and delete)
The lambda function needs to react to a Delete event, so it has to be written to allow for that.
Example lambda function source code
Here's an example for the AWS lambda function configuration, which needs to react to a delete event:
import cfnresponse

def lambda_handler(event, context):
    response = {}
    if event['RequestType'] == 'Delete':
        ...  # do stuff
        response['output'] = 'Delete event.'
    cfnresponse.send(event, context, cfnresponse.SUCCESS, response)
Example lambda function
Here's a CloudFormation snippet for the lambda function:
"MyDeletionLambda": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Code": {
"ZipFile": {
"Fn::Join": [
"\n",
[
"import cfnresponse",
"",
"def lambda_handler(event, context):",
" response = {}",
" if event['RequestType'] == 'Delete':",
" response['output'] = 'Delete event.'",
" cfnresponse.send(event, context, cfnresponse.SUCCESS, response)"
]
]
}
},
"Handler": "index.lambda_handler",
"Runtime": "python3.8"
}
}
}
Example custom resource
Here's a CloudFormation snippet for a custom resource:
"OnStackDeletion": {
"Type": "Custom::LambdaDependency",
"Properties": {
"ServiceToken": {
"Fn::GetAtt": [
"MyDeletionLambda",
"Arn"
]
}
}
}
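One detail worth noting: CloudFormation waits for a response for every stack event, not just Delete, so the function should call cfnresponse.send on Create and Update as well, or the stack operation will hang until it times out. A minimal sketch of a handler covering all three request types:
import cfnresponse

def lambda_handler(event, context):
    response = {}
    try:
        if event['RequestType'] == 'Delete':
            # Clean-up logic goes here
            response['output'] = 'Delete event.'
        # Create and Update events fall through and simply report success
        cfnresponse.send(event, context, cfnresponse.SUCCESS, response)
    except Exception as e:
        # Report failure so CloudFormation doesn't wait for the timeout
        cfnresponse.send(event, context, cfnresponse.FAILED, {'error': str(e)})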

Extract JSON-like information from payloads produced by AWS SNS

I'm currently implementing SNS notifications as the intermediary between S3 bucket uploads and an upload-handler Lambda function.
The information flow should look like this:
1. Upload of a file to an S3 bucket
2. This should trigger the SNS topic "upload-notification"
3. A Lambda function ("upload-handler") is subscribed to this SNS topic: depending on which bucket received a file upload, a certain Lambda function should be triggered by the SNS notification
--> How can I obtain information like the S3 bucket name etc. from the event that triggered the SNS notification?
I'm hoping for something like what's possible inside Lambda functions, where you can extract information from the JSON object produced by SNS.
If that doesn't exist, I'd be delighted to learn about other approaches, but somehow I need to extract this information programmatically/automatically from SNS and hand it over to the upload-handler Lambda function in step 3.
Details on the terraform definition blocks:
1. aws_sns_topic_subscription:
resource "aws_sns_topic_subscription" "start_from_upload_topic" {
topic_arn = var.upload_notification_topic_arn
protocol = "lambda"
endpoint = module.start_from_upload_handler.arn
}
2. aws_s3_bucket_notification:
resource "aws_s3_bucket_notification" "start_from_upload_handler" {
for_each = local.input_bucket_id_map
bucket = each.value
topic {
topic_arn = module.upload_notification.topic_setup.topic_arn
events = ["s3:ObjectCreated:*"]
}
}
3. SNS-module "upload_notification"
module "upload_notification" {
source = "../../modules/sns_topic"
name = "${var.platform_settings.prefix}-upload-notification"
key_arn = var.platform_settings.logging_settings.logging_key_arn
allowed_producers = [
"s3.amazonaws.com",
"lambda.amazonaws.com",
"edgelambda.amazonaws.com",
"events.amazonaws.com",
"states.amazonaws.com",
]
allowed_consumers = ["lambda.amazonaws.com",
"edgelambda.amazonaws.com",
"events.amazonaws.com",
"states.amazonaws.com",
]
tags = local.tags
}
As per the documentation, the S3 bucket name (amongst other data like region, event time, bucket ARN, source IP etc.) will be inside the event message that is passed through to the Lambda from S3 via SNS in your case. It is available at:
Records[0].s3.bucket.name
{
    "Records": [
        {
            "s3": {
                "bucket": {
                    "name": "bucket-name",
                    ...
                },
                ...
            },
            ...
        }
    ]
}
SNS needs to be set up in conjunction with the Lambda function and S3 uploads like so (in Terraform, excluding KMS for this example):
resource "aws_lambda_permission" "start_from_upload_sns_topic" {
statement_id = "AllowExecutionFromSNStopic"
action = "lambda:InvokeFunction"
function_name = module.start_from_upload_handler.arn
principal = "sns.amazonaws.com"
source_arn = var.upload_notification_topic_arn
}
resource "aws_s3_bucket_notification" "start_from_upload_handler" {
for_each = var.input_bucket_name_map
bucket = each.value
topic {
topic_arn = var.upload_notification_topic_arn
events = ["s3:ObjectCreated:*"]
}
}
resource "aws_sns_topic_subscription" "start_from_upload_sns_topic" {
topic_arn = var.upload_notification_topic_arn
protocol = "lambda"
endpoint = module.start_from_upload_handler.arn
}
The JSON object the Lambda function receives from the SNS topic looks like this:
{
    "Records": [
        {
            "EventSource": "aws:sns",
            "EventVersion": "1.0",
            "EventSubscriptionArn": "arn:aws:sns:eu-central-1:...",
            "Sns": {
                "Type": "Notification",
                "MessageId": "...",
                "TopicArn": "arn:aws:sns:eu-central-1:....",
                "Subject": "Amazon S3 Notification",
                "Message": "{\"Records\":[{\"eventVersion\":\"2.1\",\"eventSource\":\"aws:s3\",\"awsRegion\":\"eu-central-1\",\"eventTime\":\"2021-10-27T15:29:38.959Z\",\"eventName\":\"ObjectCreated:Put\",\"userIdentity\":{\"principalId\":\"AWS:...\"},\"requestParameters\":{\"sourceIPAddress\":\"....\"},\"responseElements\":{\"x-amz-request-id\":\"..\",\"x-amz-id-2\":\"...\"},\"s3\":{\"s3SchemaVersion\":\"1.0\",\"configurationId\":\"tf-s3-topic-...\",\"bucket\":{\"name\":\"test-bucket-name\",\"ownerIdentity\":{\"principalId\":\"....\"},\"arn\":\"arn:aws:s3:::test-bucket-name\"},\"object\":{\"key\":\"test_file.json\",\"size\":189,\"eTag\":\"....\",\"versionId\":\"...\",\"sequencer\":\"...\"}}}]}",
                "Timestamp": "2021-10-27T15:29:40.086Z",
                "SignatureVersion": "1",
                "Signature": "...",
                "SigningCertUrl": "https://sns.eu-central-1.amazonaws.com/SimpleNotificationService...",
                "UnsubscribeUrl": "https://sns.eu-central-1.amazonaws.com....",
                "MessageAttributes": {}
            }
        }
    ]
}
We're interested in the "Message" body of the incoming JSON object, which indeed looks like what @Ermiya Eskandary mentioned in his answer pointing to the S3 notification JSON event structure:
{
    'Records': [
        {
            's3': {
                'bucket': {
                    'arn': 'arn:aws:s3:...',
                    'name': 'bucket-name',
                },
                'object': {
                    'key': 'upload_file_name.json',
                },
            },
        }
    ]
}
The take-away here is that the incoming JSON emitted by SNS has several top-level dictionary keys that need to be dug through in order to reach the actual S3 upload event: the SNS "Message" body arrives as a JSON-formatted string, which needs to be loaded into a proper dictionary object.
Moreover, it is paramount to subscribe the Lambda function to the SNS topic and to allow the SNS topic in turn to invoke said Lambda function.
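For reference, a minimal sketch of that unwrapping in Python (assuming the handler only needs the bucket name and object key):
import json

def lambda_handler(event, context):
    for record in event['Records']:
        # The S3 event is embedded in the SNS record as a JSON string
        s3_event = json.loads(record['Sns']['Message'])
        for s3_record in s3_event['Records']:
            bucket = s3_record['s3']['bucket']['name']
            key = s3_record['s3']['object']['key']
            print(f"Upload received: s3://{bucket}/{key}")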

Read and Copy S3 inventory data from SNS topic trigger with AWS lambda function

I am a data analyst and new to AWS Lambda functions. I have an S3 bucket where I store the inventory data from our data lake, which is generated using the Inventory feature under the S3 Management tab.
So let's say the inventory data (reports) looks like this:
s3://my-bucket/allobjects/data/report-1.csv.gz
s3://my-bucket/allobjects/data/report-2.csv.gz
s3://my-bucket/allobjects/data/report-3.csv.gz
Regardless of the file contents, I have an event set up for s3://my-bucket/allobjects/data/ which notifies an SNS topic during any event like GET or PUT. (I can't change this workflow due to strict governance.)
Now, I am trying to create a Lambda function with this SNS topic as a trigger that simply moves the inventory-report files generated by the S3 Inventory feature under
s3://my-bucket/allobjects/data/
and repartitions them as follows:
s3://my-object/allobjects/partitiondata/year=2019/month=01/day=29/report-1.csv.gz
s3://my-object/allobjects/partitiondata/year=2019/month=01/day=29/report-2.csv.gz
s3://my-object/allobjects/partitiondata/year=2019/month=01/day=29/report-3.csv.gz
How can I achieve this using a Lambda function (Node.js or Python is fine) reading from the SNS topic? Any help is appreciated.
I tried something like this based on some sample code I found online, but it didn't help.
console.log('Loading function');
var AWS = require('aws-sdk');
AWS.config.region = 'us-east-1';

exports.handler = function(event, context) {
    console.log("\n\nLoading handler\n\n");
    var sns = new AWS.SNS();
    sns.publish({
        Message: 'File(s) uploaded successfully',
        TopicArn: 'arn:aws:sns:_my_ARN'
    }, function(err, data) {
        if (err) {
            console.log(err.stack);
            return;
        }
        console.log('push sent');
        console.log(data);
        context.done(null, 'Function Finished!');
    });
};
The preferred method would be for the Amazon S3 Event to trigger the AWS Lambda function directly. But since you cannot alter this part of the workflow, the flow would be:
The Amazon S3 Event will send a message to an Amazon SNS topic.
The AWS Lambda function is subscribed to the SNS topic, so it is triggered and receives the message from S3.
The Lambda function extracts the Bucket and Key, then calls S3 copy_object() to copy the object to another location. (There is no move command. You will need to copy the object to a new bucket/key.)
The content of the event field is something like:
{
    "Records": [
        {
            "EventSource": "aws:sns",
            "EventVersion": "1.0",
            "EventSubscriptionArn": "...",
            "Sns": {
                "Type": "Notification",
                "MessageId": "1c3189f0-ffd3-53fb-b60b-dd3beeecf151",
                "TopicArn": "...",
                "Subject": "Amazon S3 Notification",
                "Message": "{\"Records\":[{\"eventVersion\":\"2.1\",\"eventSource\":\"aws:s3\",\"awsRegion\":\"ap-southeast-2\",\"eventTime\":\"2019-01-30T02:42:07.129Z\",\"eventName\":\"ObjectCreated:Put\",\"userIdentity\":{\"principalId\":\"AWS:AIDAIZCFQCOMZZZDASS6Q\"},\"requestParameters\":{\"sourceIPAddress\":\"54.1.1.1\"},\"responseElements\":{\"x-amz-request-id\":\"...\",\"x-amz-id-2\":\"...\"},\"s3\":{\"s3SchemaVersion\":\"1.0\",\"configurationId\":\"...\",\"bucket\":{\"name\":\"stack-lake\",\"ownerIdentity\":{\"principalId\":\"...\"},\"arn\":\"arn:aws:s3:::stack-lake\"},\"object\":{\"key\":\"index.html\",\"size\":4378,\"eTag\":\"...\",\"sequencer\":\"...\"}}}]}",
                "Timestamp": "2019-01-30T02:42:07.212Z",
                "SignatureVersion": "1",
                "Signature": "...",
                "SigningCertUrl": "...",
                "UnsubscribeUrl": "...",
                "MessageAttributes": {}
            }
        }
    ]
}
Thus, the name of the uploaded Object needs to be extracted from the Message.
You could use code like this:
import json

def lambda_handler(event, context):
    for record1 in event['Records']:
        message = json.loads(record1['Sns']['Message'])
        for record2 in message['Records']:
            bucket = record2['s3']['bucket']['name']
            key = record2['s3']['object']['key']
            # Do something here with bucket and key
    return {
        'statusCode': 200,
        'body': json.dumps(event)
    }
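To produce the repartitioned layout from the question, the "Do something here" part could copy each report into a date-partitioned key. A sketch under stated assumptions: it derives the partition from the event time and assumes the reports live under allobjects/data/ in the same bucket:
import json
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    for record1 in event['Records']:
        message = json.loads(record1['Sns']['Message'])
        for record2 in message['Records']:
            bucket = record2['s3']['bucket']['name']
            key = record2['s3']['object']['key']  # e.g. allobjects/data/report-1.csv.gz
            event_time = record2['eventTime']     # e.g. 2019-01-29T02:42:07.129Z
            year, month, day = event_time[0:4], event_time[5:7], event_time[8:10]
            filename = key.rsplit('/', 1)[-1]
            new_key = f"allobjects/partitiondata/year={year}/month={month}/day={day}/{filename}"
            # There is no move operation; copy to the new key
            # (and optionally delete the original afterwards)
            s3.copy_object(
                Bucket=bucket,
                Key=new_key,
                CopySource={'Bucket': bucket, 'Key': key}
            )
    return {'statusCode': 200}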

AWS Serverless Application Model: Create S3 Event to Lambda

I would like to use the Serverless Application Model (SAM) and CloudFormation to create a simple Lambda function which gets triggered when an object is created in an S3 bucket (e.g. thescore-cloudfront-trial). How do I enable the trigger from the S3 bucket to the Lambda function? Below is my Python 3 boto3 code.
import json
import boto3

region = 'us-east-1'

test_lambda_template = {
    'AWSTemplateFormatVersion': '2010-09-09',
    'Transform': 'AWS::Serverless-2016-10-31',
    'Resources': {
        'CopyS3RajivCloudF': {
            'Type': 'AWS::Serverless::Function',
            'Properties': {
                "CodeUri": 's3://my-tmp/CopyS3Lambda',
                "Handler": 'lambda.handler',
                "Runtime": 'python3.6',
                "Timeout": 300,
                "Role": 'my_existing_role_arn'
            },
            'Events': {
                'Type': 'S3',
                'Properties': {
                    "Bucket": "thescore-cloudfront-trial",
                    "Events": 's3:ObjectCreated:*'
                }
            }
        },
        'SrcBucket': {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "BucketName": 'thescore-cloudfront-trial',
            }
        }
    }
}

# 'config' and 'aws' are project-specific helper modules (not shown)
conf = config.get_aws_config('development')
client = aws.client(conf, 'cloudformation', region_name=region)

response = client.create_change_set(
    StackName='RajivTestStack',
    TemplateBody=json.dumps(test_lambda_template),
    Capabilities=['CAPABILITY_IAM'],
    ChangeSetName='a',
    Description='Rajiv ChangeSet Description',
    ChangeSetType='CREATE'
)
response = client.execute_change_set(
    ChangeSetName='a',
    StackName='RajivTestStack',
)
I figured it out, with caveats:
Caveat 1: The trigger notification will show up in the S3 console but not in the Lambda console. My existing Python deploy scripts using the boto3 S3 and Lambda clients (which I want to replace) show the notification in both consoles.
Caveat 2: For monitoring, I see my Lambda trigger only when I switch to the Lambda alias view. But I haven't specified an alias for my Lambda, so I don't know why I don't see it in the non-alias view (just seeing the $LATEST version).
I had to modify the Events key like this:
'Events': {
    'RajivCopyEvent': {
        'Type': 'S3',
        'Properties': {
            "Bucket": {"Ref": "SrcBucket"},
            "Events": "s3:ObjectCreated:*"
        }
    }
}
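Note that in SAM, Events is a property of the function, so it belongs inside Properties rather than alongside it (as it was in the question's template). A sketch of the corrected resource, reusing the question's values:
'CopyS3RajivCloudF': {
    'Type': 'AWS::Serverless::Function',
    'Properties': {
        'CodeUri': 's3://my-tmp/CopyS3Lambda',
        'Handler': 'lambda.handler',
        'Runtime': 'python3.6',
        'Timeout': 300,
        'Role': 'my_existing_role_arn',
        # Events must live under Properties for AWS::Serverless::Function
        'Events': {
            'RajivCopyEvent': {
                'Type': 'S3',
                'Properties': {
                    'Bucket': {'Ref': 'SrcBucket'},
                    'Events': 's3:ObjectCreated:*'
                }
            }
        }
    }
}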