Error sending MIME Multipart message through AWS Lambda via Amazon SES

I am trying to send an email using Amazon SES, AWS S3, and AWS Lambda together. I have been hitting the error below for a while now and I am not completely sure what to do here. The stack trace from the error follows.
Edit: I have a fully verified Amazon SES domain and I am receiving emails to trigger the Lambda function. I am also able to successfully send emails using the built-in testing features, just not using this function.
{
  "errorMessage": "'list' object has no attribute 'encode'",
  "errorType": "AttributeError",
  "stackTrace": [
    " File \"/var/task/lambda_function.py\", line 222, in lambda_handler\n message = create_message(file_dict, header_from, header_to)\n",
    " File \"/var/task/lambda_function.py\", line 175, in create_message\n \"Data\": msg.as_string()\n",
    " File \"/var/lang/lib/python3.7/email/message.py\", line 158, in as_string\n g.flatten(self, unixfrom=unixfrom)\n",
    " File \"/var/lang/lib/python3.7/email/generator.py\", line 116, in flatten\n self._write(msg)\n",
    " File \"/var/lang/lib/python3.7/email/generator.py\", line 195, in _write\n self._write_headers(msg)\n",
    " File \"/var/lang/lib/python3.7/email/generator.py\", line 222, in _write_headers\n self.write(self.policy.fold(h, v))\n",
    " File \"/var/lang/lib/python3.7/email/_policybase.py\", line 326, in fold\n return self._fold(name, value, sanitize=True)\n",
    " File \"/var/lang/lib/python3.7/email/_policybase.py\", line 369, in _fold\n parts.append(h.encode(linesep=self.linesep, maxlinelen=maxlinelen))\n"
  ]
}
Additionally, here is the relevant code. The start of the code is within a create_message() method:
import os
import boto3
import email
import re
from botocore.exceptions import ClientError
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication
...
# Create a MIME container.
msg = MIMEMultipart()
# Create a MIME text part.
text_part = MIMEText(body_text, _subtype="html")
# Attach the text part to the MIME message.
msg.attach(text_part)
# Add subject, from and to lines.
msg['Subject'] = subject
msg['From'] = sender
msg['To'] = recipient
# Create a new MIME object.
att = MIMEApplication(file_dict["file"], filename)
att.add_header("Content-Disposition", 'attachment', filename=filename)
# Attach the file object to the message.
msg.attach(att)
message = {
    "Source": sender,
    "Destinations": recipient,
    "Data": msg.as_string()  # The error occurs here
}
return message
def send_email(message):
    aws_region = os.environ['Region']
    # Create a new SES client.
    client_ses = boto3.client('ses', region_name=aws_region)
    # Send the email.
    try:
        # Provide the contents of the email.
        response = client_ses.send_raw_email(
            Source=message['Source'],
            Destinations=[
                message['Destinations']
            ],
            RawMessage={
                'Data': message['Data']
            }
        )
If you have any insight as to what should be done, that would be greatly appreciated. I've looked at similar questions but they did not resolve my issue. Thanks for your help!
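Judging from the trace, _fold() calls h.encode(...) on the raw header value, and that fails in exactly this way when a header was assigned a list instead of a string; the likely culprit is header_to arriving as a list and being stored via msg['To'] = recipient. A minimal sketch of a guard inside create_message(), assuming recipient can be a list (my assumption, not shown in the question):
# Hedged sketch: email header values must be strings, so join any
# list of addresses before assigning it (assumes recipient may be a list).
if isinstance(recipient, list):
    msg['To'] = ", ".join(recipient)  # e.g. "a@example.com, b@example.com"
else:
    msg['To'] = recipient
Note that send_raw_email expects Destinations to be a flat list of address strings, so if recipient is already a list, wrapping it again with Destinations=[message['Destinations']] would also need to be flattened.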

Related

Validating and unzipping a zip file in AWS lambda

The requirement states that the Lambda function must check the ZIP file for any excluded file extensions, which have been defined in the function.
I have outlined the steps needed for a successful run:
1. I need to validate the ZIP file and make sure it doesn't contain the bad extensions. This step seems to be running and the validation is being performed.
2. The file needs to be unzipped.
3. The file should be unzipped into an 'unzipped' folder in the same directory.
All the above steps are occurring, but I seem to be getting an attribute error in my code, which is outlined below. Any ideas/solutions are greatly appreciated.
import json
import zipfile
import os
import boto3
from urllib.parse import unquote_plus
import io
import re
import gzip
exclude_list = [".exe", ".scr", ".vbs", ".js", ".xml", "docm", ".xps"]
sns = boto3.client('sns')

def read_nested_zip(tf, bucket, key, s3_client):
    print(key)
    print("search for.zip:", re.search(r'\.zip', key, re.IGNORECASE))
    ## need to add exception handling
    ##if re.search(r'\.gzip$', key, re.IGNORECASE):
    ##    print('gzip file found')
    ##    fil = gzip.GzipFile(tf, mode='rb')
    if re.search(r'\.zip$', key, re.IGNORECASE):
        print('zip file found')
        fil = zipfile.ZipFile(tf, "r").namelist()
    else:
        fil = ()
        print('no file found')
    print(fil)
    ##with fil as zipf:
    ## try to narrow scope - run loop else exit
    for file in fil:
        print(file)
        if re.search(r'(\.zip|)$', file, re.IGNORECASE):
            childzip = io.BytesIO(fil.read(file))
            read_nested_zip(childzip, bucket, key, s3_client)
        else:
            if any(x in file.lower() for x in exclude_list):
                print("Binary, dont load")
                print(file)
                print(bucket)
                print(key)
                env = bucket.split('-')[2].upper()
                # Copy the parent zip to a separate folder and remove it from the path
                copy_source = {'Bucket': bucket, 'Key': key}
                s3_client.copy_object(Bucket=bucket, CopySource=copy_source, Key='do_not_load_' + key)
                s3_client.delete_object(Bucket=bucket, Key=key)
                sns.publish(
                    TopicArn='ARN',
                    Subject=env + ': S3 upload warning: Non standard File encountered ',
                    Message='Non standard File encountered' + key + ' uploaded to bucket ' + bucket + ' The file has been moved to ' + 'do_not_load_' + key
                )
            else:
                print("File in supported formats, can be loaded " + file)
                #folder = re.sub(r"\/[^/]+$", "", key)
                folder = "/".join(key.split("/", 2)[:2]) + "/unzipped"
                print(folder)
                print("Bucket is " + bucket)
                print("file to copy is " + file)
                buffer = io.BytesIO(fil.read(file))
                s3_resource = boto3.resource('s3')
                s3_resource.meta.client.upload_fileobj(buffer, Bucket=bucket, Key=folder + '/' + file)
                s3_resource.Object(bucket, folder + '/' + file).wait_until_exists()

def lambda_handler(event, context):
    print(event)
    for record in event['Records']:
        s3_client = boto3.client('s3')
        key = unquote_plus(record['s3']['object']['key'])
        print(key)
        print(type(key))
        size = record['s3']['object']['size']
        bucket = record['s3']['bucket']['name']
        obj = s3_client.get_object(Bucket=bucket, Key=key)
        print(obj)
        putObjects = []
        with io.BytesIO(obj["Body"].read()) as tf:
            # rewind the file
            #tf.seek(0)
            read_nested_zip(tf, bucket, key, s3_client)
Error:
[ERROR] AttributeError: 'list' object has no attribute 'read'
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 85, in lambda_handler
    read_nested_zip(tf, bucket, key, s3_client)
  File "/var/task/lambda_function.py", line 35, in read_nested_zip
    childzip = io.BytesIO(fil.read())
Things I tried:
1. childzip = io.BytesIO(fil.read(file)), and also childzip = io.BytesIO(fil.read()); still failed.
2. Changed it to childzip = io.BytesIO(fil), together with fil = zipfile.read(tf, "r").namelist(), which gave:
[ERROR] AttributeError: module 'zipfile' has no attribute 'read'
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 85, in lambda_handler
    read_nested_zip(tf, bucket, key, s3_client)
  File "/var/task/lambda_function.py", line 25, in read_nested_zip
    fil = zipfile.read(tf, "r").namelist()
Any ideas are appreciated. Best
As long as the ZIP file is not too big, I'd suggest downloading the ZIP file to the Lambda function's /tmp folder and then using the zipfile context manager to simplify accessing the ZIP file. Alternatively, you can stream the ZIP file, but probably still use the context manager.
Your original error comes from keeping only the result of namelist(), which is a plain list of file names, and then calling .read(...) on that list; keep a reference to the ZipFile object itself and call read(name) on it.
Note that I've included code that specifically reads the byte content of a file from within the ZIP file. See bytes = myzip.read(name) below.
For example:
import json
import os
import zipfile
import boto3
from urllib.parse import unquote_plus

ZIP_NAME = "/tmp/local.zip"
EXCLUDE_LIST = [".exe", ".scr", ".vbs", ".js", ".xml", "docm", ".xps"]

s3 = boto3.client("s3")

def process_zip(bucket, key):
    s3.download_file(bucket, key, ZIP_NAME)
    with zipfile.ZipFile(ZIP_NAME, "r") as myzip:
        namelist = myzip.namelist()
        for name in namelist:
            print("Zip contains:", name)
        extensions = [os.path.splitext(name)[1] for name in namelist]
        print("Extensions:", extensions)
        if any(extension in EXCLUDE_LIST for extension in extensions):
            print("Banned extensions present in:", extensions)
            os.remove(ZIP_NAME)
            return
        for name in namelist:
            print("Zip read:", name)
            bytes = myzip.read(name)
            # your code here ...
    os.remove(ZIP_NAME)

def lambda_handler(event, context):
    for record in event.get("Records", []):
        key = unquote_plus(record["s3"]["object"]["key"])
        bucket = record["s3"]["bucket"]["name"]
        if os.path.splitext(key)[1] == ".zip":
            process_zip(bucket, key)
    return {"statusCode": 200, "body": json.dumps("OK")}
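One detail to watch if you reuse this sketch: os.path.splitext returns extensions with a leading dot, so an entry like "docm" (carried over, dot-less, from the question's list) will never match. A small hedged normalization, assuming every entry is meant to be an extension:
# Normalize entries so "docm" and ".docm" are treated the same
EXCLUDE_LIST = [ext if ext.startswith(".") else "." + ext for ext in EXCLUDE_LIST]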

KeyError: 'Records' in AWS Lambda triggered by s3 PUT event

I'm trying to create a simple event-driven AWS Lambda Python function to extract a ZIP or GZIP attachment from an email stored in S3 by another service (such as Amazon SES).
from __future__ import print_function
import email
import zipfile
import os
import gzip
import string
import boto3
import urllib

print('Loading function')
s3 = boto3.client('s3')
s3r = boto3.resource('s3')
xmlDir = "/tmp/output/"
outputBucket = ""  # Set here for a separate bucket, otherwise it is set to the event's bucket
outputPrefix = "xml/"  # Should end with /

def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key']).decode('utf8')
    try:
        # Set outputBucket if required
        if not outputBucket:
            global outputBucket
            outputBucket = bucket
        # Use waiter to ensure the file is persisted
        waiter = s3.get_waiter('object_exists')
        waiter.wait(Bucket=bucket, Key=key)
        response = s3r.Bucket(bucket).Object(key)
        # Read the raw text file into an Email Object
        msg = email.message_from_string(response.get()["Body"].read())
        if len(msg.get_payload()) == 2:
            # Create directory for XML files (makes debugging easier)
            if os.path.isdir(xmlDir) == False:
                os.mkdir(xmlDir)
            # The first attachment
            attachment = msg.get_payload()[1]
            # Extract the attachment into /tmp/output
            extract_attachment(attachment)
            # Upload the XML files to S3
            upload_resulting_files_to_s3()
        else:
            print("Could not see file/attachment.")
        return 0
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist '
              'and your bucket is in the same region as this '
              'function.'.format(key, bucket))
        raise e

def extract_attachment(attachment):
    # Process filename.zip attachments
    if "gzip" in attachment.get_content_type():
        contentdisp = string.split(attachment.get('Content-Disposition'), '=')
        fname = contentdisp[1].replace('\"', '')
        open('/tmp/' + contentdisp[1], 'wb').write(attachment.get_payload(decode=True))
        # This assumes we have filename.xml.gz; if we get this wrong, we will just
        # ignore the report
        xmlname = fname[:-3]
        open(xmlDir + xmlname, 'wb').write(gzip.open('/tmp/' + contentdisp[1], 'rb').read())
    # Process filename.xml.gz attachments (providers not complying with standards)
    elif "zip" in attachment.get_content_type():
        open('/tmp/attachment.zip', 'wb').write(attachment.get_payload(decode=True))
        with zipfile.ZipFile('/tmp/attachment.zip', "r") as z:
            z.extractall(xmlDir)
    else:
        print('Skipping ' + attachment.get_content_type())

def upload_resulting_files_to_s3():
    # Put all XML back into S3 (covers non-compliant cases if a ZIP contains multiple results)
    for fileName in os.listdir(xmlDir):
        if fileName.endswith(".xml"):
            print("Uploading: " + fileName)  # File name to upload
            s3r.meta.client.upload_file(xmlDir + '/' + fileName, outputBucket, outputPrefix + fileName)
On running the function I'm getting this error:
'Records': KeyError
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 25, in lambda_handler
for record in event["Records"]:
KeyError: 'Records'
I tried googling and found a few answers telling me to add a Mapping Template (https://intellipaat.com/community/18329/keyerror-records-in-aws-s3-lambda-trigger, "KeyError: 'Records'" in AWS S3 - Lambda trigger), but following this I'm getting some other error:
'query': KeyError
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 24, in lambda_handler
for record in event['query']['Records']:
KeyError: 'query'
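For what it's worth, this KeyError usually means the function was invoked with an event that is not an S3 notification at all, for example the Lambda console's default test event, so event['Records'] simply does not exist. A minimal defensive sketch (an assumption about the invocation, not code from the question):
def lambda_handler(event, context):
    # S3 PUT notifications carry a 'Records' list; console test events may not.
    records = event.get('Records', [])
    if not records:
        print('No Records key in event; this was probably not an S3 trigger.')
        return 0
    for record in records:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        # ... process as before ...
Testing with the console's 'S3 Put' sample event, or by uploading an object to the configured bucket, exercises the handler with the expected event shape.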

AWS Lambda bootstrap.py file is throwing error while trying to upload data to elastic search

I'm trying to index PDF documents that are uploaded to an S3 bucket. My Lambda function works fine up to the PDF extraction part: it establishes a connection with the Elasticsearch endpoint, but while uploading the data to Elasticsearch for indexing it throws an error. Please find the Lambda function code below. Please help me with this. Thanks in advance.
from __future__ import print_function
import json
import urllib
import boto3
import slate
import elasticsearch
import datetime

es_endpoint = 'search-sdjsf-zrtisx]sdaswasfsjmtsyuih3awvu.us-east-1.es.amazonaws.com'
es_index = 'pdf_text_extracts'
es_type = 'document'

print('Loading function')
s3 = boto3.client('s3')

# prepare a dict to hold our document data
doc_data = {}
doc_data['insert_time'] = str(datetime.datetime.isoformat(datetime.datetime.now()))

def lambda_handler(event, context):
    #print("Received event: " + json.dumps(event, indent=2))
    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    object_key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key']).decode('utf8')
    try:
        # get the file data from s3
        temp_pdf_file = open('/tmp/tempfile.pdf', 'w')
        response = s3.get_object(Bucket=bucket, Key=object_key)
        print("CONTENT TYPE: " + response['ContentType'])
        # return response['ContentType']
        temp_pdf_file.write(response['Body'].read())  # write the object data to a local file; will be passed to slate
        temp_pdf_file.close()  # close the temporary file for now
        # pull the text from the temporary PDF file using slate
        print("Extracting data from: " + object_key)
        with open('/tmp/tempfile.pdf') as temp_pdf_file:
            doc = slate.PDF(temp_pdf_file)
        # store document data to dict
        doc_data['source_pdf_name'] = object_key
        doc_data['document_text'] = doc[0]  # we're only worried about page 1 at this point
        #datj = json.dumps(doc_data)
        #z = json.loads(datj)
        #print(z)
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist '
              'and your bucket is in the same region as this '
              'function.'.format(object_key, bucket))
        raise e
    # put the data in ES
    #try:
    es = elasticsearch.Elasticsearch([{'host': es_endpoint, 'port': 443, 'use_ssl': True}])  # hold off on validating certs
    es_response = es.index(index=es_index, doc_type=es_type, body=doc_data)
    print('Data posted to ES: ' + str(es_response))
    #except Exception as e:
    #    print('Data post to ES failed: ' + str(e))
    #    raise e
    return "Done"
I have removed the try/except in the last block to find the actual error; it throws the error below while trying to upload data to Elasticsearch.
Traceback (most recent call last):
  File "/var/runtime/awslambda/bootstrap.py", line 576, in <module>
    main()
  File "/var/runtime/awslambda/bootstrap.py", line 571, in main
    handle_event_request(request_handler, invokeid, event_body, context_objs, invoked_function_arn)
  File "/var/runtime/awslambda/bootstrap.py", line 264, in handle_event_request
    result = report_fault_helper(invokeid, sys.exc_info(), None)
  File "/var/runtime/awslambda/bootstrap.py", line 315, in report_fault_helper
    msgs = [str(value), etype.__name__]
Remove the return "Done" at the end, that's not allowed in a Lambda environment.

Requesting help triggering a lambda function from DynamoDB

I was following the guide posted on YouTube at https://www.youtube.com/watch?v=jgiZ9QUYqyM and it is definitely what I want. I have posted my code below along with images of what everything looks like in my AWS console.
I have a DynamoDB table and linked it to my S3 bucket with a trigger. That trigger is giving me the error message quoted below: "Decimal('1') is not JSON serializable". I was testing it with the Hello World template.
This is the code:
import boto3
import json
import os

s3 = boto3.client('s3')
ddb = boto3.resource('dynamodb')
table = ddb.Table('test_table')

def lambda_handler(event, context):
    response = table.scan()
    body = json.dumps(response['Items'])
    response = s3.put_object(Bucket='s3-testing',
                             Key='s3-testing.json',
                             Body=body,
                             ContentType='application/json')
Can someone point me in the right direction? These are the screenshots I have:
https://i.stack.imgur.com/I0jAn.png
https://i.stack.imgur.com/2hMc9.png
This is the execution log:
Response:
{
  "stackTrace": [
    [
      "/var/task/lambda_function.py",
      20,
      "lambda_handler",
      "body = json.dumps(response['Items'])"
    ],
    [
      "/usr/lib64/python2.7/json/__init__.py",
      244,
      "dumps",
      "return _default_encoder.encode(obj)"
    ],
    [
      "/usr/lib64/python2.7/json/encoder.py",
      207,
      "encode",
      "chunks = self.iterencode(o, _one_shot=True)"
    ],
    [
      "/usr/lib64/python2.7/json/encoder.py",
      270,
      "iterencode",
      "return _iterencode(o, 0)"
    ],
    [
      "/usr/lib64/python2.7/json/encoder.py",
      184,
      "default",
      "raise TypeError(repr(o) + \" is not JSON serializable\")"
    ]
  ],
  "errorType": "TypeError",
  "errorMessage": "Decimal('1') is not JSON serializable"
}
Function log:
START RequestId: 31719509-94c7-11e8-a0d4-a9b76b7b212c Version: $LATEST
Decimal('1') is not JSON serializable: TypeError
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 20, in lambda_handler
body = json.dumps(response['Items'])
File "/usr/lib64/python2.7/json/__init__.py", line 244, in dumps
return _default_encoder.encode(obj)
File "/usr/lib64/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib64/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
File "/usr/lib64/python2.7/json/encoder.py", line 184, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: Decimal('1') is not JSON serializable
The Decimal object is not JSON serializable. Consider casting the Decimal into a float using a helper function (json.dumps() takes a default function):
import boto3
import json
import os
from decimal import Decimal

s3 = boto3.client('s3')
ddb = boto3.resource('dynamodb')
table = ddb.Table('test_table')

def lambda_handler(event, context):
    response = table.scan()
    body = json.dumps(response['Items'], default=handle_decimal_type)
    response = s3.put_object(Bucket='s3-testing',
                             Key='s3-testing.json',
                             Body=body,
                             ContentType='application/json')

def handle_decimal_type(obj):
    if isinstance(obj, Decimal):
        if float(obj).is_integer():
            return int(obj)
        else:
            return float(obj)
    raise TypeError
The problem is that the Dynamo Python library is converting numeric values to Decimal objects, but those aren't JSON serializable by default, so json.dumps blows up. You will need to provide json.dumps with a converter for Decimal objects.
See Python JSON serialize a Decimal object
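If losing the numeric type is acceptable, a quick alternative is to let json.dumps stringify the Decimals (a sketch of the shortcut, not from the answers above):
# default=str is called for any non-serializable object, so Decimal('1') becomes "1"
body = json.dumps(response['Items'], default=str)
The helper-function approach above is preferable when consumers of the JSON expect real numbers.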

Getting "LazyImporter object is not callable" error when trying to send email using python smtplib

Getting a "'LazyImporter' object is not callable" error when trying to send an email with attachments using Python smtplib from Gmail.
I have enabled the 'allow less secure apps' setting in the sender's Gmail account.
Code:
import smtplib
from email import MIMEBase
from email import MIMEText
from email.mime.multipart import MIMEMultipart
from email import Encoders
import os

def send_email(to, subject, text, filenames):
    try:
        gmail_user = 'xx#gmail.com'
        gmail_pwd = 'xxxx'
        msg = MIMEMultipart()
        msg['From'] = gmail_user
        msg['To'] = ", ".join(to)
        msg['Subject'] = subject
        msg.attach(MIMEText(text))
        for file in filenames:
            part = MIMEBase('application', 'octet-stream')
            part.set_payload(open(file, 'rb').read())
            Encoders.encode_base64(part)
            part.add_header('Content-Disposition', 'attachment; filename="%s"' % os.path.basename(file))
            msg.attach(part)
        mailServer = smtplib.SMTP("smtp.gmail.com:587")  # 465, 587
        mailServer.ehlo()
        mailServer.starttls()
        mailServer.ehlo()
        mailServer.login(gmail_user, gmail_pwd)
        mailServer.sendmail(gmail_user, to, msg.as_string())
        mailServer.close()
        print('successfully sent the mail')
    except smtplib.SMTPException, error:
        print str(error)

if __name__ == '__main__':
    attachment_file = ['t1.txt', 't2.csv']
    to = "xxxxxx#gmail.com"
    TEXT = "Hello everyone"
    SUBJECT = "Testing sending using gmail"
    send_email(to, SUBJECT, TEXT, attachment_file)
Error:
Traceback (most recent call last):
  File "test_mail.py", line 64, in <module>
    send_email(to, SUBJECT, TEXT, attachment_file)
  File "test_mail.py", line 24, in send_email
    msg.attach(MIMEText(text))
TypeError: 'LazyImporter' object is not callable
As @How about nope said, with your import statement you are importing the MIMEText module and not the class. I can reproduce the error from your code; when I import from email.mime.text instead, the error disappears.
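For reference, a sketch of the corrected imports (Python 2, matching the question's code). The classes live in the email.mime subpackage, while top-level names like email.MIMEText are lazy module aliases, which is why MIMEText(text) was not callable:
from email.mime.base import MIMEBase            # the MIMEBase class
from email.mime.text import MIMEText            # the MIMEText class
from email.mime.multipart import MIMEMultipart
from email import Encoders                      # used as a module (Encoders.encode_base64), so this import is fine
The rest of send_email() can stay as it is once these imports are fixed.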