Stopping multiple AWS RDS instances using a Lambda function

I would like to stop multiple RDS instances using a Lambda function. The code I am using is:
import sys
import botocore
import boto3
from botocore.exceptions import ClientError

def lambda_handler(event, context):
    rds = boto3.client('rds')
    lambdaFunc = boto3.client('lambda')
    print('Trying to get Environment variable')
    try:
        funcResponse = lambdaFunc.get_function_configuration(
            FunctionName='lambdaStopRDS'
        )
        DBinstance = funcResponse['Environment']['Variables']['DBInstanceName']
        print('Stopping RDS service for DBInstance : ' + DBinstance)
    except ClientError as e:
        print(e)
    try:
        response = rds.stop_db_instance(
            DBInstanceIdentifier=DBinstance
        )
        print('Success :: ')
        return response
    except ClientError as e:
        print(e)
    return {
        'message': "Script execution completed. See Cloudwatch logs for complete output"
    }
I am adding the following environment variable:
Key: DBInstanceName
Value: database-1, database-2
and getting the following error:
Trying to get Environment variable
Stopping RDS service for DBInstance : database-1, database-2
An error occurred (InvalidParameterValue) when calling the StopDBInstance operation: Invalid database identifier: database-1, database-2
Here, keys must be unique, so I cannot add another key with the same name for a second RDS instance.
Is there any way to stop multiple RDS instances within same VPC/region without tags?

stop_db_instance takes only one DB instance identifier, not multiple, yet you are trying to pass two of them: database-1, database-2. You have to call it in a loop instead. For example:
try:
    db_ids = [v.strip() for v in DBinstance.split(',')]
    for db_id in db_ids:
        response = rds.stop_db_instance(
            DBInstanceIdentifier=db_id
        )
        print('Success :: ' + db_id)
    # Return only after every identifier has been processed
    return response
except ClientError as e:
    print(e)
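If you would rather not maintain the list in an environment variable at all, another option is to enumerate the instances with describe_db_instances and stop whichever ones are currently available. A minimal sketch, assuming the function's role has rds:DescribeDBInstances and rds:StopDBInstance permissions (the helper name is mine):

import boto3
from botocore.exceptions import ClientError

rds = boto3.client('rds')

def stop_all_available_instances():
    # Stop every RDS instance in the region that is currently 'available'
    stopped = []
    paginator = rds.get_paginator('describe_db_instances')
    for page in paginator.paginate():
        for db in page['DBInstances']:
            if db['DBInstanceStatus'] != 'available':
                continue  # skip instances that are already stopped, stopping, etc.
            try:
                rds.stop_db_instance(DBInstanceIdentifier=db['DBInstanceIdentifier'])
                stopped.append(db['DBInstanceIdentifier'])
            except ClientError as e:
                print(e)  # e.g. Aurora members must be stopped with stop_db_cluster instead
    return stopped

Note that this stops everything in the region; if you only want a subset, filtering by a naming convention (or tags) is still the usual approach.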

Related

Cloudformation "update your Lambda function code so that CloudFormation can attach the updated version"

I am deploying the CloudFormation template from this blog post. I had to update the Lambda functions from Python 3.6 to 3.9 to get it to work. Now, however, I get the following error message:
> CloudFormation did not receive a response from your Custom Resource.
> Please check your logs for requestId
> [029f4ea5-cd25-4593-b1ee-d805dd30463f]. If you are using the Python
> cfn-response module, you may need to update your Lambda function code
> so that CloudFormation can attach the updated version.
Below is the lambda code in question - what does it mean to update the Lambda function "so that CloudFormation can attach the updated version"?
import util.cfnresponse
import boto3
import uuid

client = boto3.client('s3')
cfnresponse = util.cfnresponse

def lambda_handler(event, context):
    response_data = {}
    try:
        if event["RequestType"] == "Create":
            bucket_name = uuid.uuid4().hex + '-connect'
            # response = client.create_bucket(
            #     Bucket=bucket_name,
            # )
            response_data["BucketName"] = bucket_name
            cfnresponse.send(event, context, cfnresponse.SUCCESS, response_data)
        cfnresponse.send(event, context, cfnresponse.SUCCESS, response_data)
    except Exception as e:
        print(e)
        cfnresponse.send(event, context, cfnresponse.FAILED, response_data)
From what I can tell the response format follows the current version of the response module API?
The cfnresponse lib has changed and been updated. Old versions of the lib used the requests lib. This CloudFormation template is over 4 years old, so it probably doesn't work because of this.
You can read about the update in the last few lines of the README here:
https://github.com/gene1wood/cfnresponse
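For reference, the custom-resource callback is just an HTTP PUT of a JSON document to the presigned event['ResponseURL']; the field names below follow the CloudFormation custom resource response format. A minimal, standard-library-only sketch (the helper name send_cfn_response is mine, not part of any library):

import json
import urllib.request

def send_cfn_response(event, context, status, response_data, physical_resource_id=None):
    # Send a custom-resource response to CloudFormation via the presigned ResponseURL
    body = json.dumps({
        'Status': status,  # 'SUCCESS' or 'FAILED'
        'Reason': 'See CloudWatch log stream: ' + context.log_stream_name,
        'PhysicalResourceId': physical_resource_id or context.log_stream_name,
        'StackId': event['StackId'],
        'RequestId': event['RequestId'],
        'LogicalResourceId': event['LogicalResourceId'],
        'Data': response_data,
    }).encode('utf-8')
    req = urllib.request.Request(
        event['ResponseURL'],
        data=body,
        method='PUT',
        headers={'Content-Type': '', 'Content-Length': str(len(body))},
    )
    with urllib.request.urlopen(req) as resp:
        print('CloudFormation response status:', resp.status)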

Lambda task timeout but no application log

I have a Python Lambda that is triggered by S3 uploads to a specific folder. The Lambda function processes the uploaded file and outputs it to another folder in the same S3 bucket.
The issue is that when I do a bulk upload using the AWS console, some files do not get processed. I ended up setting up a dead-letter queue to catch these invocations. While inspecting a message in the queue, I found a request ID, which I tried to find in the Lambda logs.
These are the logs for that request ID:
Now the odd part is that in the Python code, the first line after the imports is print('Loading function'), which does not show up in the Lambda log.
I've added the Python code here. It should still print "Processing file name: " + key, which is inside the handler, right?
import urllib.parse
from datetime import datetime
import boto3
from constants import CONTENT_TYPE, XML_EXTENSION, VALIDATING
from xml_process import *
from s3Integration import download_file

print('Loading function')

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    print("Processing file name: " + key)
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        xml_content = response["Body"].read()
        content_type = response["ContentType"]
        tree = ET.fromstring(xml_content)
        key_file_name = key.split("/")[1]
        # Creating a temporary copy by downloading file to get the namespaces
        temp_file_name = "/tmp/" + key_file_name
        download_file(key, temp_file_name)
        namespaces = {node[0]: node[1] for _, node in ET.iterparse(temp_file_name, events=['start-ns'])}
        for name, value in namespaces.items():
            ET.register_namespace(name, value)
        # Preparing path for file processing
        processed_file = key_file_name.split(".")[0] + "_processed." + key_file_name.split(".")[1]
        print(processed_file, "processed")
        db_record = XMLMapping(file_path=key,
                               processed_file_path=processed_file,
                               uploaded_by="lambda",
                               status=VALIDATING, uploaded_date=datetime.now(), is_active=True)
        session.add(db_record)
        session.commit()
        if key_file_name.split(".")[1] == XML_EXTENSION:
            if content_type in CONTENT_TYPE:
                xml_parse(tree, db_record, processed_file, True)
            else:
                print("Content Type is not valid. Provided value: ", content_type)
        else:
            print("File extension is not valid. Provided extension: ", key_file_name.split(".")[1])
        return "success"
    except Exception as e:
        print(e)
        raise e
I don't think it's a permission issue, as other files uploaded in the same batch were processed successfully.
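One practical complication is that each concurrent Lambda container writes to its own log stream, so a given request ID can easily live in a stream you have not opened. A small sketch that searches the whole log group with the CloudWatch Logs filter_log_events API (the log group name below is a placeholder):

import boto3

logs = boto3.client('logs')

def find_request_in_logs(log_group, request_id, start_ms, end_ms):
    # Search every log stream in a Lambda log group for a given request ID
    paginator = logs.get_paginator('filter_log_events')
    pages = paginator.paginate(
        logGroupName=log_group,             # e.g. '/aws/lambda/my-xml-processor' (placeholder)
        filterPattern='"%s"' % request_id,  # quoted so the ID is matched as a literal term
        startTime=start_ms,
        endTime=end_ms,
    )
    for page in pages:
        for event in page['events']:
            print(event['logStreamName'], event['message'].rstrip())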

AWS Lambda create EC2 and associate EIP

I am trying to deploy an EC2 instance and associate an EIP with it, but I am getting an error when trying to associate the EIP because the instance is not running. This is my code:
import boto3
from botocore.exceptions import ClientError

AMI = 'ami-0bf84....'
INSTANCE_TYPE = 't2.micro'
KEY_NAME = 'EC2company'
SUBNET_ID = 'subnet-065....'

ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    instance = ec2.run_instances(
        ImageId=AMI,
        InstanceType=INSTANCE_TYPE,
        KeyName=KEY_NAME,
        SubnetId=SUBNET_ID,
        MaxCount=1,
        MinCount=1
    )
    waiter = ec2.get_waiter('instance_running')
    try:
        response = ec2.associate_address(
            AllocationId='eipalloc-0bc.....',
            InstanceId=instance['Instances'][0]['InstanceId'],
        )
        print(response)
    except ClientError as e:
        print(e)
I suppose the issue is that I am applying the waiter in the wrong way, but I am not sure how I should do it.
As per EC2 waiters, you can create a waiter with:
waiter = client.get_waiter('instance_running')
You then activate the waiter with:
waiter.wait(InstanceIds=['i-xxx'])
It polls EC2.Client.describe_instances() every 15 seconds until a successful state is reached. An error is returned after 40 failed checks.
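Putting that together with the code in the question, a sketch of the handler might look like this (the AMI, subnet and allocation IDs are the truncated placeholders from the question). Note that the function's timeout has to be long enough to cover the wait, since the waiter can poll for up to 40 × 15 seconds:

def lambda_handler(event, context):
    instance = ec2.run_instances(
        ImageId=AMI,
        InstanceType=INSTANCE_TYPE,
        KeyName=KEY_NAME,
        SubnetId=SUBNET_ID,
        MaxCount=1,
        MinCount=1
    )
    instance_id = instance['Instances'][0]['InstanceId']

    # Block until the instance is running before trying to attach the EIP
    waiter = ec2.get_waiter('instance_running')
    waiter.wait(InstanceIds=[instance_id])

    try:
        response = ec2.associate_address(
            AllocationId='eipalloc-0bc.....',  # placeholder from the question
            InstanceId=instance_id,
        )
        print(response)
    except ClientError as e:
        print(e)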

AWS lambda change instance type with boto3

When I try this code to change the EC2 instance type, it gives me this error:
----------------------------------error Response---------------------------------------------------------------
"errorMessage": "Syntax error in module 'lambda_function': expected an indented block
"Runtime.UserCodeSyntaxError",
--------------------------Lambda code--------------------------
import boto3

def lambda_handler(event, context):
    client = boto3.client('ec2')
    # Insert your Instance ID here
    my_instance = 'i-0cd1cecsdcdodid'
    # Stop the instance
    client.stop_instances(InstanceIds=[my_instance])
    waiter=client.get_waiter('instance_stopped')
    waiter.wait(InstanceIds=[my_instance])
    # Change the instance type
    client.modify_instance_attribute(InstanceId=my_instance, Attribute='instanceType', Value='t2.medium')
    # Start the instance
    client.start_instances(InstanceIds=[my_instance])
Looks like you have some indentation error in your code. Here is a reformatted version:
import boto3

def lambda_handler(event, context):
    client = boto3.client('ec2')

    # Insert your Instance ID here
    my_instance = 'i-0cd1cecsdcdodid'

    # Stop the instance
    client.stop_instances(InstanceIds=[my_instance])
    waiter = client.get_waiter('instance_stopped')
    waiter.wait(InstanceIds=[my_instance])

    # Change the instance type
    client.modify_instance_attribute(InstanceId=my_instance,
                                     Attribute='instanceType',
                                     Value='t2.medium')

    # Start the instance
    client.start_instances(InstanceIds=[my_instance])
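As a side note, modify_instance_attribute also accepts the typed keyword form of the same change, which avoids spelling the attribute name as a string. A small sketch (the instance ID and target type are placeholders):

import boto3

client = boto3.client('ec2')

def resize_instance(instance_id, new_type):
    # Stop the instance and wait until it is fully stopped
    client.stop_instances(InstanceIds=[instance_id])
    client.get_waiter('instance_stopped').wait(InstanceIds=[instance_id])

    # Typed-parameter form of the same attribute change
    client.modify_instance_attribute(InstanceId=instance_id,
                                     InstanceType={'Value': new_type})

    # Start the instance again with the new type
    client.start_instances(InstanceIds=[instance_id])

resize_instance('i-0cd1cecsdcdodid', 't2.medium')  # placeholder values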

AWS Lambda error: No module named 'StringIO'

I am trying to use AWS Lambda for mass email sending. The code we use is from the link below:
https://aws.amazon.com/cn/premiumsupport/knowledge-center/mass-email-ses-lambda/
from __future__ import print_function
import StringIO
import csv
import json
import os
import urllib
import zlib
from time import strftime, gmtime
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
import boto3
import botocore
import concurrent.futures

__author__ = 'Said Ali Samed'
__date__ = '10/04/2016'
__version__ = '1.0'

# Get Lambda environment variables
region = os.environ['us-east-1']
max_threads = os.environ['10']
text_message_file = os.environ['email_body.txt']
html_message_file = os.environ['email_body.html']

# Initialize clients
s3 = boto3.client('s3', region_name=region)
ses = boto3.client('ses', region_name=region)
send_errors = []
mime_message_text = ''
mime_message_html = ''


def current_time():
    return strftime("%Y-%m-%d %H:%M:%S UTC", gmtime())


def mime_email(subject, from_address, to_address, text_message=None, html_message=None):
    msg = MIMEMultipart('alternative')
    msg['Subject'] = subject
    msg['From'] = from_address
    msg['To'] = to_address
    if text_message:
        msg.attach(MIMEText(text_message, 'plain'))
    if html_message:
        msg.attach(MIMEText(html_message, 'html'))
    return msg.as_string()


def send_mail(from_address, to_address, message):
    global send_errors
    try:
        response = ses.send_raw_email(
            Source=from_address,
            Destinations=[
                to_address,
            ],
            RawMessage={
                'Data': message
            }
        )
        if not isinstance(response, dict):  # log failed requests only
            send_errors.append('%s, %s, %s' % (current_time(), to_address, response))
    except botocore.exceptions.ClientError as e:
        send_errors.append('%s, %s, %s, %s' %
                           (current_time(),
                            to_address,
                            ', '.join("%s=%r" % (k, v) for (k, v) in e.response['ResponseMetadata'].iteritems()),
                            e.message))


def lambda_handler(event, context):
    global send_errors
    global mime_message_text
    global mime_message_html
    try:
        # Read the uploaded csv file from the bucket into python dictionary list
        bucket = event['Records'][0]['s3']['bucket']['name']
        key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key']).decode('utf8')
        response = s3.get_object(Bucket=bucket, Key=key)
        body = zlib.decompress(response['Body'].read(), 16+zlib.MAX_WBITS)
        reader = csv.DictReader(StringIO.StringIO(body),
                                fieldnames=['from_address', 'to_address', 'subject', 'message'])
        # Read the message files
        try:
            response = s3.get_object(Bucket=bucket, Key=text_message_file)
            mime_message_text = response['Body'].read()
        except:
            mime_message_text = None
            print('Failed to read text message file. Did you upload %s?' % text_message_file)
        try:
            response = s3.get_object(Bucket=bucket, Key=html_message_file)
            mime_message_html = response['Body'].read()
        except:
            mime_message_html = None
            print('Failed to read html message file. Did you upload %s?' % html_message_file)
        if not mime_message_text and not mime_message_html:
            raise ValueError('Cannot continue without a text or html message file.')
        # Send in parallel using several threads
        e = concurrent.futures.ThreadPoolExecutor(max_workers=max_threads)
        for row in reader:
            from_address = row['from_address'].strip()
            to_address = row['to_address'].strip()
            subject = row['subject'].strip()
            message = mime_email(subject, from_address, to_address, mime_message_text, mime_message_html)
            e.submit(send_mail, from_address, to_address, message)
        e.shutdown()
    except Exception as e:
        print(e.message + ' Aborting...')
        raise e
    print('Send email complete.')
    # Remove the uploaded csv file
    try:
        response = s3.delete_object(Bucket=bucket, Key=key)
        if 'ResponseMetadata' in response.keys() and response['ResponseMetadata']['HTTPStatusCode'] == 204:
            print('Removed s3://%s/%s' % (bucket, key))
    except Exception as e:
        print(e)
    # Upload errors if any to S3
    if len(send_errors) > 0:
        try:
            result_data = '\n'.join(send_errors)
            logfile_key = key.replace('.csv.gz', '') + '_error.log'
            response = s3.put_object(Bucket=bucket, Key=logfile_key, Body=result_data)
            if 'ResponseMetadata' in response.keys() and response['ResponseMetadata']['HTTPStatusCode'] == 200:
                print('Send email errors saved in s3://%s/%s' % (bucket, logfile_key))
        except Exception as e:
            print(e)
            raise e
    # Reset publish error log
    send_errors = []


if __name__ == "__main__":
    json_content = json.loads(open('event.json', 'r').read())
    lambda_handler(json_content, None)
But there is a problem. When I choose Python 2.7, the error is:
module initialization error 'us-east-1'
When I choose Python 3.6, the error is:
Unable to import module 'lambda_function': No module named 'StringIO'
Can anyone tell me what the problem is?
In Python 3 the StringIO module is gone; import the io module and use io.StringIO instead.
The problem with the Python 2.7 version is presumably that the following statement is failing:
region = os.environ['us-east-1']
This will result in a KeyError if us-east-1 is not an available environment variable. Instead use AWS_REGION or AWS_DEFAULT_REGION. See the full list of Lambda environment variables.
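Taken together, a minimal sketch of the two Python 3 fixes might look like this (the fallback region value is just an example default):

import io
import os
import csv

# Python 3 replacement for the removed StringIO module
sample_csv = "from_address,to_address,subject,message\n"
reader = csv.DictReader(io.StringIO(sample_csv))

# Read the region from a variable that actually exists in the Lambda environment,
# with an explicit fallback instead of raising KeyError
region = os.environ.get('AWS_REGION', 'us-east-1')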
Please set the environment variables as described in step 4 of the article:
"Configure Lambda environment variables appropriate to your usage scenario. For example, the following variables would be valid for a given use case:
REGION=us-east-1, MAX_THREADS=10, TEXT_MESSAGE_FILE=email_body.txt, HTML_MESSAGE_FILE=email_body.html."
What was done (as per the code provided in the question) is replacing the names of the environment variables with their values, which means that Python is looking for, e.g., an environment variable literally named 'us-east-1', which isn't there...
This is the original code:
# Get Lambda environment variables
region = os.environ['REGION']
max_threads = os.environ['MAX_THREADS']
text_message_file = os.environ['TEXT_MESSAGE_FILE']
html_message_file = os.environ['HTML_MESSAGE_FILE']
You can also hard-code the values, like below:
# Get Lambda environment variables
region = 'us-east-1'
max_threads = '10'
text_message_file = 'email_body.txt'
html_message_file = 'email_body.html'
but I'd suggest setting the environment variables instead (and using the version of the script provided by the article's author). When it comes to setting environment variables in Lambda, see this article :)
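One more detail: os.environ values are always strings, so MAX_THREADS needs to be converted to an int before being passed to ThreadPoolExecutor(max_workers=...). A small sketch using the variable names from the article:

import os

# Variable names from step 4 of the article; MAX_THREADS converted to int
region = os.environ['REGION']
max_threads = int(os.environ['MAX_THREADS'])
text_message_file = os.environ['TEXT_MESSAGE_FILE']
html_message_file = os.environ['HTML_MESSAGE_FILE']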