AWS Lambda function not giving correct output

I am new to Python and am using boto3 for AWS. I am creating a Lambda function that will return a list of orphaned snapshots. The code is:
import boto3

def lambda_handler(event, context):
    ec2_resource = boto3.resource('ec2')
    # Make a list of existing volumes
    all_volumes = ec2_resource.volumes.all()
    volumes = [volume.volume_id for volume in all_volumes]
    # Find snapshots without an existing volume
    snapshots = ec2_resource.snapshots.filter(OwnerIds=['self'])
    # Create a list of orphaned snapshots
    osl = []
    for snapshot in snapshots:
        if snapshot.volume_id not in volumes:
            osl.append(snapshot)
            print('\n Snapshot ID is :- ' + str(snapshot))
            #snapshot.delete()
            continue
        for tag in snapshot.tags:
            if tag['Key'] == 'Name':
                value = tag['Value']
                print('\n Snapshot Tags are:- ' + str(tag))
                break
    print('Total orphaned snapshots are:- ' + str(len(osl)))
This returns the list of snapshots, and the tags too, but in an incorrect format.
Surprisingly, when I run the same code in another account, it gives a Lambda function error.
I have created an IAM role with the same permissions, but why the results differ between accounts is something I am not getting.

You are printing the snapshot object, but you want to print its ID.
The second problem is that not all snapshots have tags: a snapshot with no tags has tags set to None, so you need to check for tags before iterating over them. That is likely why the same code errors in your other account.
import boto3

def lambda_handler(event, context):
    ec2_resource = boto3.resource('ec2')
    # Make a list of existing volumes
    all_volumes = ec2_resource.volumes.all()
    volumes = [volume.volume_id for volume in all_volumes]
    # Find snapshots without an existing volume
    snapshots = ec2_resource.snapshots.filter(OwnerIds=['self'])
    # Create a list of orphaned snapshot IDs
    osl = []
    for snapshot in snapshots:
        if snapshot.volume_id not in volumes:
            osl.append(snapshot.id)
            print('\n Snapshot ID is :- ' + snapshot.id)
            # snapshot.tags can be None, so check before iterating
            if snapshot.tags:
                for tag in snapshot.tags:
                    if tag['Key'] == 'Name':
                        value = tag['Value']
                        print('\n Snapshot Tags are ' + tag['Key'] + ", value = " + value)
                        break
    print('Total orphaned snapshots are:- ' + str(len(osl)))
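If you eventually want to act on the orphaned snapshots (the commented-out snapshot.delete() suggests that intent), a minimal follow-up sketch, assuming deletion is the goal, could look like this. The helper name is hypothetical:

import boto3

def delete_orphaned_snapshots(snapshot_ids):
    # Hypothetical helper: permanently delete each orphaned snapshot by ID.
    ec2_resource = boto3.resource('ec2')
    for snapshot_id in snapshot_ids:
        ec2_resource.Snapshot(snapshot_id).delete()  # irreversible - consider a dry-run guard
        print('Deleted ' + snapshot_id)

# e.g. call delete_orphaned_snapshots(osl) at the end of lambda_handler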

Related

AWS Lambda failing to fetch EC2 AZ details

I am trying to create a Lambda script using Python 3.9 which will return the total EC2 servers in an AWS account, their status, and details. A snippet of my code is:
import boto3

def lambda_handler(event, context):
    client = boto3.client("ec2")
    #s3 = boto3.client("s3")
    # fetch information about all the instances
    status = client.describe_instances()
    for i in status["Reservations"]:
        instance_details = i["Instances"][0]
        if instance_details["State"]["Name"].lower() in ["shutting-down", "stopped", "stopping", "terminated"]:
            print("AvailabilityZone: ", instance_details['AvailabilityZone'])
            print("\nInstanceId: ", instance_details["InstanceId"])
            print("\nInstanceType: ", instance_details['InstanceType'])
On running this code I get an error.
If I comment out the AZ details, the code works fine. If I create a new function with only the AZ parameter in it, all AZs are returned. I am not getting why it fails in the code above.
In Python, it is always best practice to use the get method to fetch a value from a list or dict so a missing key does not raise an exception.
AvailabilityZone is actually present in the Placement dict, not at the top level of the instance details. You can check the entire response structure in the boto3 documentation below.
Reference
import boto3

def lambda_handler(event, context):
    client = boto3.client("ec2")
    #s3 = boto3.client("s3")
    # fetch information about all the instances
    status = client.describe_instances()
    for i in status["Reservations"]:
        instance_details = i["Instances"][0]
        if instance_details["State"]["Name"].lower() in ["shutting-down", "stopped", "stopping", "terminated"]:
            print(f"AvailabilityZone: {instance_details.get('Placement', dict()).get('AvailabilityZone')}")
            print(f"\nInstanceId: {instance_details.get('InstanceId')}")
            print(f"\nInstanceType: {instance_details.get('InstanceType')}")
The problem is that in the response of describe_instances, the availability zone is not at the first level of the instance dictionary (in your case instance_details). The availability zone is under the Placement dictionary, so what you need is:
print(f"AvailabilityZone: {instance_details.get('Placement', dict()).get('AvailabilityZone')}")

AWS CLI Query to find Shared Security Groups

I am trying to write a query to return results of any referenced security group not owned by the current account.
This means I am trying to show security groups that are being used as part of a peering connection from another VPC.
There are a couple of restrictions.
Show the entire security group details (security group id, description)
Only show security groups where IpPermissions.UserIdGroupPairs has a Value and where that value is not equal to the owner of the security group
I am trying to write this using a single AWS CLI cmd vs a bash script or python script.
Any thoughts?
Here's what I have so far:
aws ec2 describe-security-groups --query "SecurityGroups[?IpPermissions.UserIdGroupPairs[*].UserId != '`aws sts get-caller-identity --query 'Account' --output text`']"
The following is a Python 3.8 based AWS Lambda, but you can change it a bit to use it as a Python script file to execute on any supported host machine.
import boto3
import ast

config_service = boto3.client('config')

# Refactor to extract out duplicate code as a separate def
def lambda_handler(event, context):
    results = get_resource_details()
    for resource in results:
        if "configuration" in resource:
            config = ast.literal_eval(resource)["configuration"]
            if "ipPermissionsEgress" in config:
                ipPermissionsEgress = config["ipPermissionsEgress"]
                for data in ipPermissionsEgress:
                    for userIdGroupPair in data["userIdGroupPairs"]:
                        if userIdGroupPair["userId"] != "123456789111":
                            print(userIdGroupPair["groupId"])
            elif "ipPermissions" in config:
                ipPermissions = config["ipPermissions"]
                for data in ipPermissions:
                    for userIdGroupPair in data["userIdGroupPairs"]:
                        if userIdGroupPair["userId"] != "123456789111":
                            print(userIdGroupPair["groupId"])

def get_resource_details():
    query = "SELECT configuration.ipPermissions.userIdGroupPairs.groupId,configuration.ipPermissionsEgress.userIdGroupPairs.groupId,configuration.ipPermissionsEgress.userIdGroupPairs.userId,configuration.ipPermissions.userIdGroupPairs.userId WHERE resourceType = 'AWS::EC2::SecurityGroup' AND configuration <> 'null'"
    results = config_service.select_resource_config(
        Expression=query,
        Limit=100
    )  # you might need to refactor to support a huge list of records using NextToken
    return results["Results"]
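If you would rather stay on the plain EC2 APIs than AWS Config, a sketch of the same check in boto3 might look like this; the current account ID comes from STS instead of being hardcoded, and the filtering logic mirrors the answer above:

import boto3

# Hypothetical alternative: find security groups referenced by another account
# using describe_security_groups instead of AWS Config.
sts = boto3.client('sts')
account_id = sts.get_caller_identity()['Account']

ec2 = boto3.client('ec2')
paginator = ec2.get_paginator('describe_security_groups')
for page in paginator.paginate():
    for sg in page['SecurityGroups']:
        pairs = []
        for perm in sg.get('IpPermissions', []) + sg.get('IpPermissionsEgress', []):
            pairs.extend(perm.get('UserIdGroupPairs', []))
        # keep only groups referenced by a different account
        if any(p.get('UserId') and p['UserId'] != account_id for p in pairs):
            print(sg['GroupId'], sg.get('Description', ''))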

How to list AWS tagged Hosted Zones using ResourceGroupsTaggingAPI boto3

I am trying to retrieve all the AWS resources tagged using the boto3 ResourceGroupsTaggingAPI, but I can't seem to retrieve the Hosted Zones which have been tagged.
import boto3

tagFilters = [{'Key': 'tagA', 'Values': ['a']}, {'Key': 'tagB', 'Values': ['b']}]
client = boto3.client('resourcegroupstaggingapi', region_name='us-east-2')
paginator = client.get_paginator('get_resources')
page_list = paginator.paginate(TagFilters=tagFilters)
# filter and get the iterable object arn
# See filtering with JMESPath => http://jmespath.org/
arns = page_list.search("ResourceTagMappingList[*].ResourceARN")
for arn in arns:
    print(arn)
I noticed through the Tag Editor in the AWS Console (which I guess is using the ResourceGroupsTaggingAPI) that when the region is set to All, the tagged Hosted Zones can be retrieved (since they are global), while when a specific region is set, the tagged Hosted Zones are not shown in the results. Is there a way to set the boto3 client region to all, or is there another way to do this?
I have already tried
client = boto3.client('resourcegroupstaggingapi')
which returns an empty result
(https://console.aws.amazon.com/resource-groups/tag-editor/find-resources?region=us-east-1)
You need to iterate over all regions:
import boto3
from botocore.config import Config

ec2 = boto3.client('ec2')
region_response = ec2.describe_regions()
#print('Regions:', region_response['Regions'])
for this_region_info in region_response['Regions']:
    region = this_region_info["RegionName"]
    my_config = Config(
        region_name=region
    )
    client = boto3.client('resourcegroupstaggingapi', config=my_config)
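A sketch that completes the loop, assuming the tagFilters from the question, and collects the matching ARNs from every region; hosted zones are global, so duplicates are collapsed at the end:

import boto3
from botocore.config import Config

tagFilters = [{'Key': 'tagA', 'Values': ['a']}, {'Key': 'tagB', 'Values': ['b']}]

ec2 = boto3.client('ec2')
all_arns = set()
for this_region_info in ec2.describe_regions()['Regions']:
    my_config = Config(region_name=this_region_info["RegionName"])
    client = boto3.client('resourcegroupstaggingapi', config=my_config)
    paginator = client.get_paginator('get_resources')
    for arn in paginator.paginate(TagFilters=tagFilters).search("ResourceTagMappingList[*].ResourceARN"):
        all_arns.add(arn)

print(sorted(all_arns))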

list automated RDS snapshots created today and copy to other region using boto3

We are building an automated DR cold site in another region. Currently we are working on retrieving a list of RDS automated snapshots created today, to pass them to another function that copies them to another AWS region.
The issue is with the RDS boto3 client, which returns dates in its own format, making filtering on creation date more difficult.
import boto3
from datetime import datetime

today = (datetime.today()).date()
rds_client = boto3.client('rds')
snapshots = rds_client.describe_db_snapshots(SnapshotType='automated')
harini = "datetime(" + today.strftime('%Y,%m,%d') + ")"
print(harini)
print(snapshots)
for i in snapshots['DBSnapshots']:
    if i['SnapshotCreateTime'].date() == harini:
        print(i['DBSnapshotIdentifier'])
        print(today)
Despite having already converted the date "harini" to match the format 'SnapshotCreateTime': datetime(2015, 1, 1), the Lambda function is still unable to list the snapshots.
A better method is to copy each snapshot as it is created, by invoking a Lambda function from a CloudWatch event.
See the step-by-step instructions:
https://geektopia.tech/post.php?blogpost=Automating_The_Cross_Region_Copy_Of_RDS_Snapshots
Alternatively, you can issue a copy for each snapshot regardless of the date. The client will raise an exception when the copy already exists, and you can trap it like this:
# Written By GeekTopia
#
# Copy All Snapshots for an RDS Instance To a new region
# --Free to use under all conditions
# --Script is provided as is. No Warranty, Express or Implied

import json
import boto3
from botocore.exceptions import ClientError
import time

destinationRegion = "us-east-1"
sourceRegion = 'us-west-2'
rdsInstanceName = 'needbackups'

def lambda_handler(event, context):
    # We need two clients
    #   rdsDestinationClient -- Used to start the copy processes. All cross-region
    #                           copies must be started from the destination and reference the source
    #   rdsSourceClient      -- Used to list the snapshots that need to be copied.
    rdsDestinationClient = boto3.client('rds', region_name=destinationRegion)
    rdsSourceClient = boto3.client('rds', region_name=sourceRegion)

    # List all automated snapshots for a single instance
    snapshots = rdsSourceClient.describe_db_snapshots(DBInstanceIdentifier=rdsInstanceName, SnapshotType='automated')

    for snapshot in snapshots['DBSnapshots']:
        # Check that the snapshot is NOT in the process of being created
        if snapshot['Status'] == 'available':
            # Get the source snapshot ARN - always use the ARN when copying snapshots across regions
            sourceSnapshotARN = snapshot['DBSnapshotArn']

            # Build a new snapshot name
            sourceSnapshotIdentifer = snapshot['DBSnapshotIdentifier']
            targetSnapshotIdentifer = "{0}-ManualCopy".format(sourceSnapshotIdentifer)
            targetSnapshotIdentifer = targetSnapshotIdentifer.replace(":", "-")

            # Adding a delay to stop from reaching the API rate limit when there are large amounts of snapshots -
            # this should never occur in this use-case, but may if the script is modified to copy more than one instance.
            time.sleep(.2)

            # Execute the copy
            try:
                copy = rdsDestinationClient.copy_db_snapshot(SourceDBSnapshotIdentifier=sourceSnapshotARN, TargetDBSnapshotIdentifier=targetSnapshotIdentifer, SourceRegion=sourceRegion)
                print("Started Copy of Snapshot {0} in {2} to {1} in {3} ".format(sourceSnapshotIdentifer, targetSnapshotIdentifer, sourceRegion, destinationRegion))
            except ClientError as ex:
                if ex.response['Error']['Code'] == 'DBSnapshotAlreadyExists':
                    print("Snapshot {0} already exists".format(targetSnapshotIdentifer))
                else:
                    print("ERROR: {0}".format(ex.response['Error']['Code']))

    return {
        'statusCode': 200,
        'body': json.dumps('Operation Complete')
    }
The code below will list the automated snapshots created today.
import boto3
from datetime import date, datetime

region_src = 'us-east-1'
client_src = boto3.client('rds', region_name=region_src)
date_today = datetime.today().strftime('%Y-%m-%d')

def get_db_snapshots_src():
    response = client_src.describe_db_snapshots(
        SnapshotType='automated',
        IncludeShared=False,
        IncludePublic=False
    )
    snapshotsInDay = []
    for i in response["DBSnapshots"]:
        if i["SnapshotCreateTime"].strftime('%Y-%m-%d') == date.isoformat(date.today()):
            snapshotsInDay.append(i)
    return snapshotsInDay
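As a hypothetical next step, the returned list can be fed into a cross-region copy, following the same copy_db_snapshot pattern as the first answer; region_dst and the target naming below are assumptions:

import boto3

region_dst = 'us-west-2'  # assumed destination region
client_dst = boto3.client('rds', region_name=region_dst)

for snap in get_db_snapshots_src():
    client_dst.copy_db_snapshot(
        SourceDBSnapshotIdentifier=snap['DBSnapshotArn'],  # use the ARN for cross-region copies
        TargetDBSnapshotIdentifier=snap['DBSnapshotIdentifier'].replace(':', '-') + '-dr-copy',
        SourceRegion=region_src
    )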

Is there any way to fetch tags of an RDS instance using boto3?

import boto3

rds_client = boto3.client('rds', 'us-east-1')
instance_info = rds_client.describe_db_instances(DBInstanceIdentifier='**myinstancename**')
But instance_info doesn't contain any of the tags I set on the RDS instance. I want to fetch the instances that have env='production' and exclude those with env='test'. Is there any method in boto3 that fetches the tags as well?
Only through boto3.client("rds").list_tags_for_resource
Lists all tags on an Amazon RDS resource.
ResourceName (string) --
The Amazon RDS resource with tags to be listed. This value is an Amazon Resource Name (ARN). For information about creating an ARN, see Constructing an RDS Amazon Resource Name (ARN) .
import boto3

rds_client = boto3.client('rds', 'us-east-1')
db_instance_info = rds_client.describe_db_instances(
    DBInstanceIdentifier='**myinstancename**')
for each_db in db_instance_info['DBInstances']:
    response = rds_client.list_tags_for_resource(
        ResourceName=each_db['DBInstanceArn'],
        Filters=[{
            'Name': 'env',
            'Values': [
                'production',
            ]
        }])
Either use a simple exclusion on top of that simple filter, or you can dig through the documentation to build a complicated JMESPath filter using paginators.
Note: AWS resource tagging is not a universal implementation, so you must always refer to the boto3 documentation.
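A minimal sketch of that simple-exclusion approach, assuming a tag key env whose value must be 'production':

import boto3

# List every instance, fetch its tags, and keep only those tagged env=production.
rds_client = boto3.client('rds', 'us-east-1')

production_instances = []
for db in rds_client.describe_db_instances()['DBInstances']:
    tags = rds_client.list_tags_for_resource(ResourceName=db['DBInstanceArn'])['TagList']
    tag_map = {t['Key']: t['Value'] for t in tags}
    if tag_map.get('env') == 'production':
        production_instances.append(db['DBInstanceIdentifier'])

print(production_instances)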
This Python program will show you how to list all RDS instances, their type, and their status.
list_rds_instances.py
import boto3

# connect to RDS
client = boto3.client('rds')

# rds_instance will have all RDS information in a dictionary
rds_instance = client.describe_db_instances()
all_list = rds_instance['DBInstances']

print('RDS Instance Name \t| Instance Type \t| Status')
for i in rds_instance['DBInstances']:
    dbInstanceName = i['DBInstanceIdentifier']
    dbInstanceEngine = i['DBInstanceClass']
    dbInstanceStatus = i['DBInstanceStatus']
    print('%s \t| %s \t| %s' % (dbInstanceName, dbInstanceEngine, dbInstanceStatus))
Important note: while working with boto3 you need to set up your credentials in two files, ~/.aws/credentials and ~/.aws/config:
~/.aws/credentials
[default]
aws_access_key_id=<ACCESS_KEY>
aws_secret_access_key=<SECRET_KEY>
~/.aws/config
[default]
region=ap-south-1