Multiple Email Endpoints of an SNS Topic - amazon-web-services

I have a Lambda function that gets triggered by a PUT event in an S3 bucket. The function needs to send an email to all subscribers of an SNS topic.
The Lambda function code is:
import json
import boto3

print('Loading function')
s3 = boto3.client('s3')
sns = boto3.client('sns')

def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    eventname = event['Records'][0]['eventName']
    sns_message = str("A new file has been uploaded in our S3 bucket. Please find the details of the file uploaded below.\n\nBUCKET NAME: " + bucket + "\nFILE NAME: " + key)
    try:
        if eventname == "ObjectCreated:Put":
            subject = "New data available in S3 Bucket [" + bucket + "]"
            sns_response = sns.publish(TargetArn='<SNS ARN ID>', Message=str(sns_message), Subject=str(subject))
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e
However, the email is only being sent to one of the email endpoints subscribed to the SNS topic. Any idea what I am missing?

There was an error in my code: I was using TargetArn instead of TopicArn. Changing it to TopicArn solved the issue.
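For reference, a minimal sketch of the corrected publish call (the topic ARN, message, and subject below are placeholders, not from the question):

import boto3

sns = boto3.client('sns')

# Addressing the topic itself with TopicArn lets SNS fan the message out
# to every confirmed subscription on the topic (all email endpoints).
sns.publish(
    TopicArn='arn:aws:sns:us-east-1:123456789012:my-topic',  # placeholder ARN
    Message='A new file has been uploaded to the S3 bucket.',
    Subject='New data available in S3 Bucket'
)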

Related

Lambda task timeout but no application log

I have a Python Lambda that is triggered by S3 uploads to a specific folder. The Lambda function processes the uploaded file and writes its output to another folder in the same S3 bucket.
The issue is that when I do a bulk upload using the AWS console, some files do not get processed. I ended up setting up a dead-letter queue to catch these invocations. While inspecting a message in the queue, I found a request ID, which I then tried to find in the Lambda logs.
These are the logs for the request ID:
Now the odd part is that in the Python code, the first line after the imports is print('Loading function'), yet it does not show up in the Lambda log.
I have added the Python code below. Shouldn't it still print "Processing file name: " + key, which is inside the handler?
import urllib.parse
from datetime import datetime
import boto3
from constants import CONTENT_TYPE, XML_EXTENSION, VALIDATING
from xml_process import *
from s3Integration import download_file

print('Loading function')
s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    print("Processing file name: " + key)
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        xml_content = response["Body"].read()
        content_type = response["ContentType"]
        tree = ET.fromstring(xml_content)
        key_file_name = key.split("/")[1]
        # Creating a temporary copy by downloading file to get the namespaces
        temp_file_name = "/tmp/" + key_file_name
        download_file(key, temp_file_name)
        namespaces = {node[0]: node[1] for _, node in ET.iterparse(temp_file_name, events=['start-ns'])}
        for name, value in namespaces.items():
            ET.register_namespace(name, value)
        # Preparing path for file processing
        processed_file = key_file_name.split(".")[0] + "_processed." + key_file_name.split(".")[1]
        print(processed_file, "processed")
        db_record = XMLMapping(file_path=key,
                               processed_file_path=processed_file,
                               uploaded_by="lambda",
                               status=VALIDATING, uploaded_date=datetime.now(), is_active=True)
        session.add(db_record)
        session.commit()
        if key_file_name.split(".")[1] == XML_EXTENSION:
            if content_type in CONTENT_TYPE:
                xml_parse(tree, db_record, processed_file, True)
            else:
                print("Content Type is not valid. Provided value: ", content_type)
        else:
            print("File extension is not valid. Provided extension: ", key_file_name.split(".")[1])
        return "success"
    except Exception as e:
        print(e)
        raise e
I don't think it's a permissions issue, as other files uploaded in the same batch were processed successfully.

Copying files between S3 buckets -- only 2 files get copied

I am trying to copy multiple files from one S3 bucket to another S3 bucket using a Lambda function, but it only copies 2 files to the destination S3 bucket.
Here is my code:
# using python and boto3
import json
import boto3

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    source_bucket_name = event['Records'][0]['s3']['bucket']['name']
    file_name = event['Records'][0]['s3']['object']['key']
    destination_bucket_name = 'nishantnkd'
    copy_object = {'Bucket': source_bucket_name, 'Key': file_name}
    s3_client.copy_object(CopySource=copy_object,
                          Bucket=destination_bucket_name, Key=file_name)
    return {'statusCode': 3000,
            'body': json.dumps('File has been Successfully Copied')}
I presume that the Amazon S3 bucket is configured to trigger the AWS Lambda function when a new object is created.
When the Lambda function is triggered, it is possible that multiple event records are sent to the function. Therefore, it should loop through the event records like this:
# using python and boto3
import json
import urllib.parse  # needed for unquote_plus below
import boto3

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    for record in event['Records']:  # This loop added
        source_bucket_name = record['s3']['bucket']['name']
        file_name = urllib.parse.unquote_plus(record['s3']['object']['key'])  # Note this change too
        destination_bucket_name = 'nishantnkd'
        copy_object = {'Bucket': source_bucket_name, 'Key': file_name}
        s3_client.copy_object(CopySource=copy_object, Bucket=destination_bucket_name, Key=file_name)
    return {'statusCode': 3000,
            'body': json.dumps('File has been Successfully Copied')}
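The unquote_plus() call matters because S3 URL-encodes object keys in event notifications (for example, a space in a file name arrives as a +), so copying with the raw key value can fail for such objects.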

Get the content of a .txt file from S3 and display its content in HTML with Lambda

I have a question. I have connected my S3 bucket to Lambda via AWS. I need to get the content of a .txt file hosted on S3 and display that content in HTML. Each time a file is created in S3, the content of the file must be displayed via HTML by the Lambda.
That is, if the .txt has "Hello" written in it, I need to display "Hello" in HTML.
How can I achieve this? So far, this is what I have:
import json
import urllib.parse
import boto3

print('Loading function')
s3 = boto3.client('s3')

def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))
    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        str_response = ""
        for key in response:
            str_response = str_response + "\t" + key + ":" + str(response[key]) + "\n"
        print("Full response from S3: \n" + str_response + "\n CONTENT TYPE: " + response['ContentType'])
        return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e
I have searched and found this:
bucket.put_object(Key='index.html', Body=data, ContentType='text/html')
I don't know where it goes or how to use it.
Thanks
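A minimal sketch of one way to use that put_object call, assuming the goal is to read the uploaded .txt and write an index.html back to the same bucket (the index.html key and the HTML wrapping below are assumptions, not from the question):

import urllib.parse
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    # Read the uploaded .txt file and decode its bytes to text
    response = s3.get_object(Bucket=bucket, Key=key)
    text = response['Body'].read().decode('utf-8')
    # Wrap the text in a minimal HTML page
    html = "<html><body><p>" + text + "</p></body></html>"
    # Write the page back to the bucket; ContentType tells browsers to render it as HTML
    s3.put_object(Bucket=bucket, Key='index.html', Body=html, ContentType='text/html')
    return {'statusCode': 200, 'body': 'index.html written'}

Actually serving that index.html to users would additionally require S3 static website hosting, CloudFront, or a presigned URL, which is outside this snippet.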

Multiple Emails getting sent from SNS triggered by Lambda

I have a Lambda function that gets triggered by a new PUT request in a particular S3 bucket and sends an email through an SNS topic.
The Lambda function code is listed below:
import json
import boto3

print('Loading function')
s3 = boto3.client('s3')
sns = boto3.client('sns')

def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    eventname = event['Records'][0]['eventName']
    sns_message = str("A new file has been uploaded in our S3 bucket. Please find the details of the file uploaded below.\n\nBUCKET NAME: " + bucket + "\nFILE NAME: " + key)
    try:
        if eventname == "ObjectCreated:Put":
            subject = "New data available in S3 Bucket [" + bucket + "]"
            sns_response = sns.publish(TargetArn='<SNS ARN ID>', Message=str(sns_message), Subject=str(subject))
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e
However, 2 emails are sent when a new object is uploaded to the S3 bucket: one with the subject set in the Lambda function, and a second "AWS Notification" email containing the JSON listed below.
{"version":"1.0","timestamp":"2021-05-28T13:38:24.548Z","requestContext":{"requestId":"<Request ID>","functionArn":"<Lambda Function ARN>:$LATEST","condition":"Success","approximateInvokeCount":2},"requestPayload":{"Records":[{"eventVersion":"2.1","eventSource":"aws:s3","awsRegion":"us-east-1","eventTime":"2021-05-28T13:35:24.743Z","eventName":"ObjectCreated:Put","userIdentity":{"principalId":"<my ID>"},"requestParameters":{"sourceIPAddress":"<IP Address>"},"responseElements":{"x-amz-request-id":"ABCD","x-amz-id-2":"EFGH"},"s3":{"s3SchemaVersion":"1.0","configurationId":"<Config ID>","bucket":{"name":"<Bucket Name>","ownerIdentity":{"principalId":"<ID>"},"arn":"<Bucket ARN>"},"object":{"key":"<File Key>","size":632088,"eTag":"<Etag>","versionId":"<Version ID>","sequencer":"<Seq>"}}}]},"responseContext":{"statusCode":200,"executedVersion":"$LATEST"},"responsePayload":null}
I want only 1 email to be sent. How can I implement that?
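The JSON above has the shape of a Lambda asynchronous-invocation record, which is what an on-success destination delivers; if such a destination on the function points at the same SNS topic (an assumption, since the trigger configuration isn't shown), that would explain the second email. A hedged sketch for inspecting and removing such a destination with boto3 (the function name is a placeholder):

import boto3

lambda_client = boto3.client('lambda')

try:
    # Look up any asynchronous-invocation destinations configured on the function
    config = lambda_client.get_function_event_invoke_config(FunctionName='my-function')  # placeholder name
    print(config.get('DestinationConfig'))
    if config.get('DestinationConfig'):
        # Removing the config stops the extra "AWS Notification" delivery;
        # only do this if the destination is not needed for anything else.
        lambda_client.delete_function_event_invoke_config(FunctionName='my-function')
except lambda_client.exceptions.ResourceNotFoundException:
    print('No asynchronous-invocation destinations are configured on this function')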

Dump Lambda output to CSV and have it emailed as an attachment

I have a Lambda function that generates a list of untagged buckets in an AWS environment. Currently I send the output directly to a Slack channel. Instead, I would like the Lambda to dump the output to a CSV file and send it by email as a report. Here is the code for it; let me know if you need any other details.
import boto3
from botocore.exceptions import ClientError
import urllib3
import json

http = urllib3.PoolManager()

def lambda_handler(event, context):
    # Printing the S3 buckets with no tags
    s3 = boto3.client('s3')
    s3_re = boto3.resource('s3')
    buckets = []
    print('Printing buckets with no tags..')
    for bucket in s3_re.buckets.all():
        s3_bucket = bucket
        s3_bucket_name = s3_bucket.name
        try:
            response = s3.get_bucket_tagging(Bucket=s3_bucket_name)
        except ClientError:
            buckets.append(bucket)
            print(bucket)
    for bucket in buckets:
        data = {"text": "%s bucket has no tags" % (bucket)}
        r = http.request("POST", "https://hooks.slack.com/services/~/~/~",
                         body=json.dumps(data),
                         headers={"Content-Type": "application/json"})