I am trying to create a Lambda function using Python 3.9 that will return the total number of EC2 servers in an AWS account, along with their status and details. Here is a snippet of my code:
import boto3

def lambda_handler(event, context):
    client = boto3.client("ec2")
    #s3 = boto3.client("s3")
    # fetch information about all the instances
    status = client.describe_instances()
    for i in status["Reservations"]:
        instance_details = i["Instances"][0]
        if instance_details["State"]["Name"].lower() in ["shutting-down", "stopped", "stopping", "terminated"]:
            print("AvailabilityZone: ", instance_details['AvailabilityZone'])
            print("\nInstanceId: ", instance_details["InstanceId"])
            print("\nInstanceType: ", instance_details['InstanceType'])
On running this code I get an error -
If I comment out the AZ details, the code works fine. If I create a new function with only the AZ parameter in it, all AZs are returned. I am not sure why it fails in the code above.
In Python, it is always best practice to use the get method to fetch a value from a dict, so that a missing key does not raise an exception.
AvailabilityZone is actually present in the Placement dict and not directly under the instance details. You can check the entire response structure in the boto3 documentation below.
Reference
import boto3

def lambda_handler(event, context):
    client = boto3.client("ec2")
    #s3 = boto3.client("s3")
    # fetch information about all the instances
    status = client.describe_instances()
    for i in status["Reservations"]:
        instance_details = i["Instances"][0]
        if instance_details["State"]["Name"].lower() in ["shutting-down", "stopped", "stopping", "terminated"]:
            print(f"AvailabilityZone: {instance_details.get('Placement', dict()).get('AvailabilityZone')}")
            print(f"\nInstanceId: {instance_details.get('InstanceId')}")
            print(f"\nInstanceType: {instance_details.get('InstanceType')}")
The problem is that in the response of describe_instances, the availability zone is not at the first level of the instance dictionary (in your case instance_details). The availability zone is under the Placement dictionary, so what you need is:
print(f"AvailabilityZone: {instance_details.get('Placement', dict()).get('AvailabilityZone')}")
So, I have this code, and I would love to grab the public IP address of the new Windows instance that is created when I adjust the desired capacity.
The launch template assigns an automatic tag name when I adjust the desired_capacity. I want to be able to grab the public IP address of the instance with that tag name.
import boto3

session = boto3.session.Session()
client = session.client('autoscaling')

def set_desired_capacity(asg_name, desired_capacity):
    response = client.set_desired_capacity(
        AutoScalingGroupName=asg_name,
        DesiredCapacity=desired_capacity,
    )
    return response

def lambda_handler(event, context):
    asg_name = "test"
    desired_capacity = 1
    return set_desired_capacity(asg_name, desired_capacity)

if __name__ == '__main__':
    print(lambda_handler("", ""))
I took a look at the EC2 client documentation, and I wasn't sure what to use. I just need help modifying my code.
If you know the tag that you are assigning in the autoscaling group, then you can just use the describe_instances method. The Boto3 docs have an example with filtering. Something like this should work, replacing TAG, VALUE, and TOPICARN with the appropriate values:
import boto3

ec2_client = boto3.client('ec2', 'us-west-2')
sns_client = boto3.client('sns', 'us-west-2')

response = ec2_client.describe_instances(
    Filters=[
        {
            'Name': 'tag:TAG',
            'Values': [
                'VALUE'
            ]
        }
    ]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        ip = instance["PublicIpAddress"]
        sns_publish = sns_client.publish(
            TopicArn='TOPICARN',
            Message=ip,
        )
        print(sns_publish)
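One caveat: an instance that has no public IP (for example, one that is stopped or still starting) will not have the PublicIpAddress key at all, so the lookup above can raise a KeyError. A hedged variant that skips such instances:

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        ip = instance.get("PublicIpAddress")  # key is absent until an address is assigned
        if ip is None:
            continue
        sns_client.publish(TopicArn='TOPICARN', Message=ip)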
Objective:
After an EC2 instance starts
Obtain the IP address
Send a message via Amazon SNS
It can take some time for a public IP address to be assigned to an Amazon EC2 instance. Rather than continually calling DescribeInstances(), it would be easier to run a script on the instance at launch via User Data (see Run commands on your Linux instance at launch - Amazon Elastic Compute Cloud).
The script could:
Obtain its Public IP address via Instance metadata and user data - Amazon Elastic Compute Cloud:
IP=$(curl 169.254.169.254/latest/meta-data/public-ipv4)
Send a message to an Amazon SNS topic with:
aws sns publish --topic-arn xxx --message $IP
If you also want the message to include a name from a tag associated with the instance, the script will need to call aws ec2 describe-instances with its own Instance ID (which can be obtained via the Instance Metadata) and then extract the name from the tags returned.
I am having some trouble writing this function that will automatically start a stopped instance. Relatively new to this and just playing around.
I am able to start only a single instance by hardcoding the instance ID.
I am trying to filter for any instance with a state that is stopped.
This is my code:
import json
import boto3

region = 'us-east-1'
ec2 = boto3.resource('ec2')
instances = ec2.instances.filter(
    Filters=[{'Name': 'instance-state-name', 'Values': ['stopped']}])

def lambda_handler(event, context):
    ec2 = boto3.client('ec2', region_name=region)
    ec2.start_instances(InstanceIds=instances)
This Lambda function is triggered by an event.
I am getting an "invalid type for parameter InstanceIds" error; it must be a list or tuple. I tried to loop through it but no luck. Wondering if there is a simpler way to do this or if you have a suggestion?
You should iterate over the instances collection to get the IDs (instance.id):
all_instance_ids = []
for instance in instances:
    all_instance_ids.append(instance.id)
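Putting it together, a minimal sketch of the handler (assuming the same region and filter as the question) that collects the IDs and guards against the case where nothing is stopped:

import boto3

region = 'us-east-1'

def lambda_handler(event, context):
    ec2 = boto3.resource('ec2', region_name=region)
    stopped = ec2.instances.filter(
        Filters=[{'Name': 'instance-state-name', 'Values': ['stopped']}])
    instance_ids = [instance.id for instance in stopped]
    if instance_ids:
        # the resource collection also exposes a start() helper,
        # but passing the IDs to the client call mirrors the question
        boto3.client('ec2', region_name=region).start_instances(InstanceIds=instance_ids)
    return instance_ids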
I have a simple Lambda function whose job is to make an HTTP GET to a certain server.
I need to run many copies (hundreds) of the function at the same time and I want to have a distinct source IP address for each HTTP GET coming from each Lambda.
My questions:
How do I make sure that each 'copy' of the Lambda function will have its own IP address?
How do I use the boto API invoke call in order to tell AWS that I need N concurrent copies of my Lambda? I am looking here but I cannot find the argument that sets the number of concurrent copies.
Thanks
Avishay
As for question #2, I am using the following code in order to invoke N concurrent copies of the Lambda function.
import boto3, json
from concurrent.futures import ThreadPoolExecutor

N = 5
unique_ips = set()
lambda_client = boto3.client('lambda', region_name='us-west-2')

def _lambda_caller(idx):
    test_event = dict(idx=idx)
    res = lambda_client.invoke(
        FunctionName='SimpleHTTPGetter',
        InvocationType='RequestResponse',
        Payload=json.dumps(test_event),
    )
    # read() is the public StreamingBody API for the response payload
    data = json.loads(res['Payload'].read())
    print('Thread {} is done'.format(idx))
    unique_ips.add(data['body'])

# the with block calls shutdown() on exit, waiting for all submitted calls
with ThreadPoolExecutor(max_workers=N) as executor:
    for i in range(0, N):
        future = executor.submit(_lambda_caller, i)

print('Done')
My Lambda code (short version)
import json
import socket

def lambda_handler(event, context):
    print('-- HTTP Client started')
    hostname = socket.gethostname()
    ip = socket.gethostbyname(hostname)
    print('My IP address is {}'.format(ip))
    return {
        "statusCode": 200,
        "body": ip
    }
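Note that socket.gethostbyname(socket.gethostname()) reports the address of the Lambda container, which is not necessarily the source IP the server sees. A hedged sketch that asks an external echo service instead (it assumes the function has outbound internet access to reach checkip.amazonaws.com):

import urllib.request

def lambda_handler(event, context):
    # ask an external service which source IP our outbound traffic uses
    with urllib.request.urlopen('https://checkip.amazonaws.com') as resp:
        ip = resp.read().decode().strip()
    return {"statusCode": 200, "body": ip}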
You need to create a VPC and make sure you attach a subnet that allows internet access, then attach a security group to your Lambda.
Step by Step here:
https://medium.com/@philippholly/aws-lambda-enable-outgoing-internet-access-within-vpc-8dd250e11e12
We are building an automated DR cold site in another region. We are currently working on retrieving a list of RDS automated snapshots created today and passing them to another function that copies them to another AWS region.
The issue is with the RDS boto3 client, where it returns the creation date in a unique format, making filtering on creation date more difficult.
from datetime import datetime

import boto3

today = (datetime.today()).date()
rds_client = boto3.client('rds')
snapshots = rds_client.describe_db_snapshots(SnapshotType='automated')
harini = "datetime(" + today.strftime('%Y,%m,%d') + ")"
print(harini)
print(snapshots)

for i in snapshots['DBSnapshots']:
    if i['SnapshotCreateTime'].date() == harini:
        print(i['DBSnapshotIdentifier'])
        print(today)
Despite having already converted the date "harini" to the format 'SnapshotCreateTime': datetime(2015, 1, 1), the Lambda function is still unable to list the snapshots.
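For reference, SnapshotCreateTime in the response is a datetime object, so the comparison should be between dates rather than against a string. A minimal sketch of that fix:

from datetime import date

for i in snapshots['DBSnapshots']:
    # .date() reduces the snapshot's datetime to a plain date for comparison
    if i['SnapshotCreateTime'].date() == date.today():
        print(i['DBSnapshotIdentifier'])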
The better method is to copy the snapshots as they are created, by invoking a Lambda function from a CloudWatch event.
See the step-by-step instructions:
https://geektopia.tech/post.php?blogpost=Automating_The_Cross_Region_Copy_Of_RDS_Snapshots
Alternatively, you can issue a copy for each snapshot regardless of the date. The client will raise an exception and you can trap it like this:
# Written By GeekTopia
#
# Copy All Snapshots for an RDS Instance To a new region
# --Free to use under all conditions
# --Script is provided as is. No Warranty, Express or Implied

import json
import time

import boto3
from botocore.exceptions import ClientError

destinationRegion = "us-east-1"
sourceRegion = 'us-west-2'
rdsInstanceName = 'needbackups'

def lambda_handler(event, context):
    # We need two clients
    #   rdsDestinationClient -- Used to start the copy processes. All cross region
    #                           copies must be started from the destination and reference the source
    #   rdsSourceClient      -- Used to list the snapshots that need to be copied.
    rdsDestinationClient = boto3.client('rds', region_name=destinationRegion)
    rdsSourceClient = boto3.client('rds', region_name=sourceRegion)

    # List all automated snapshots for a single instance
    snapshots = rdsSourceClient.describe_db_snapshots(DBInstanceIdentifier=rdsInstanceName, SnapshotType='automated')

    for snapshot in snapshots['DBSnapshots']:
        # Check that the snapshot is NOT in the process of being created
        if snapshot['Status'] == 'available':
            # Get the source snapshot ARN - always use the ARN when copying snapshots across regions
            sourceSnapshotARN = snapshot['DBSnapshotArn']

            # Build a new snapshot name
            sourceSnapshotIdentifer = snapshot['DBSnapshotIdentifier']
            targetSnapshotIdentifer = "{0}-ManualCopy".format(sourceSnapshotIdentifer)
            targetSnapshotIdentifer = targetSnapshotIdentifer.replace(":", "-")

            # Adding a delay to stop from reaching the api rate limit when there are large amounts of snapshots -
            # This should never occur in this use-case, but may if the script is modified to copy more than one instance.
            time.sleep(.2)

            # Execute copy
            try:
                copy = rdsDestinationClient.copy_db_snapshot(
                    SourceDBSnapshotIdentifier=sourceSnapshotARN,
                    TargetDBSnapshotIdentifier=targetSnapshotIdentifer,
                    SourceRegion=sourceRegion)
                print("Started Copy of Snapshot {0} in {2} to {1} in {3} ".format(sourceSnapshotIdentifer, targetSnapshotIdentifer, sourceRegion, destinationRegion))
            except ClientError as ex:
                if ex.response['Error']['Code'] == 'DBSnapshotAlreadyExists':
                    print("Snapshot {0} already exists".format(targetSnapshotIdentifer))
                else:
                    print("ERROR: {0}".format(ex.response['Error']['Code']))

    return {
        'statusCode': 200,
        'body': json.dumps('Operation Complete')
    }
The code below will list the automated snapshots created today.
from datetime import date, datetime

import boto3

region_src = 'us-east-1'
client_src = boto3.client('rds', region_name=region_src)
date_today = datetime.today().strftime('%Y-%m-%d')

def get_db_snapshots_src():
    response = client_src.describe_db_snapshots(
        SnapshotType='automated',
        IncludeShared=False,
        IncludePublic=False
    )
    snapshotsInDay = []
    for i in response["DBSnapshots"]:
        if i["SnapshotCreateTime"].strftime('%Y-%m-%d') == date_today:
            snapshotsInDay.append(i)
    return snapshotsInDay
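A quick usage sketch; the identifiers printed here could then be fed to a cross-region copy function like the one above:

for snapshot in get_db_snapshots_src():
    print(snapshot['DBSnapshotIdentifier'], snapshot['SnapshotCreateTime'])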
I am trying to execute the following code
import boto3

def createSecurityGroup(self, securitygroupname):
    conn = boto3.resource('ec2')
    response = conn.create_security_group(GroupName=securitygroupname, Description='test')

VPC_NAT_SecurityObject = createSecurityGroup("mysecurity_group")

response_egress_all = VPC_NAT_SecurityObject.authorize_egress(
    IpPermissions=[{'IpProtocol': '-1'}])
and getting the below exception
EXCEPTION:
An error occurred (InvalidParameterValue) when calling the AuthorizeSecurityGroupEgress operation: Only Amazon VPC security groups may be used with this operation.
I tried several different combinations but was not able to set the protocol to all. I used '-1' as explained in the boto3 documentation. Can somebody please suggest how to get this done?
(UPDATE)
1. boto3.resource("ec2") is actually a high-level class that wraps the client class. You must instantiate the Vpc sub-resource, boto3.resource("ec2").Vpc, in order to attach to a specific VPC ID, e.g.
import boto3

ec2_resource = boto3.resource("ec2")
myvpc = ec2_resource.Vpc("vpc-xxxxxxxx")
response = myvpc.create_security_group(
    GroupName=securitygroupname,
    Description='test')
2. Sometimes it is more straightforward to use boto3.client("ec2"). If you check the boto3 EC2 client create_security_group request syntax, you will see this:
response = client.create_security_group(
    DryRun=True|False,
    GroupName='string',
    Description='string',
    VpcId='string'
)
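Tying this back to the original error: the egress rule can only be added to a VPC security group, so create the group with a VpcId and then authorize egress on it. A minimal sketch (vpc-xxxxxxxx and the group name are placeholders):

import boto3

client = boto3.client('ec2')

# creating the group inside a VPC is what makes authorize_egress valid
group = client.create_security_group(
    GroupName='mysecurity_group',
    Description='test',
    VpcId='vpc-xxxxxxxx')

client.authorize_security_group_egress(
    GroupId=group['GroupId'],
    IpPermissions=[{
        'IpProtocol': '-1',  # -1 means all protocols
        # note: new VPC groups already have a default allow-all egress rule,
        # so this exact rule may be rejected as a duplicate unless it was revoked
        'IpRanges': [{'CidrIp': '0.0.0.0/0'}],
    }])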
If you use an automation script/template to rebuild the VPC, e.g. salt-cloud, you need to give the VPC a tag name in order to acquire it automatically from a boto3 script, as shown below. This will save a lot of hassle when AWS migrates resource IDs from 8 alphanumeric characters to 12 or 15 characters.
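For example, a hedged sketch of looking up a VPC by its Name tag (the tag value my-vpc is an assumption):

import boto3

ec2_client = boto3.client('ec2')
vpcs = ec2_client.describe_vpcs(
    Filters=[{'Name': 'tag:Name', 'Values': ['my-vpc']}])

# take the first match; a production script should handle zero or many matches
vpc_id = vpcs['Vpcs'][0]['VpcId']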
Another option is using CloudFormation, which lets you put everything in a template, specify variables, and recreate the whole VPC stack.