Grab Public IP Of a New Running Instance and send it via SNS - amazon-web-services

So, I have this code, and I would love to grab the public IP address of the new Windows instance that will be created when I adjust the desired capacity.
The launch template assigns an automatic tag name when I adjust the desired_capacity. I want to be able to grab the public IP address of the instance with that tag name.
import boto3

session = boto3.session.Session()
client = session.client('autoscaling')

def set_desired_capacity(asg_name, desired_capacity):
    response = client.set_desired_capacity(
        AutoScalingGroupName=asg_name,
        DesiredCapacity=desired_capacity,
    )
    return response

def lambda_handler(event, context):
    asg_name = "test"
    desired_capacity = 1
    return set_desired_capacity(asg_name, desired_capacity)

if __name__ == '__main__':
    print(lambda_handler("", ""))
I took a look at the EC2 client documentation, and I wasn't sure what to use. I just need help modifying my code

If you know the tag that you are assigning in the autoscaling group, then you can just use a describe_instances method. The Boto3 docs have an example with filtering. Something like this should work, replacing TAG, VALUE, and TOPICARN with the appropriate values.
import boto3

ec2_client = boto3.client('ec2', 'us-west-2')
sns_client = boto3.client('sns', 'us-west-2')

response = ec2_client.describe_instances(
    Filters=[
        {
            'Name': 'tag:TAG',
            'Values': [
                'VALUE'
            ]
        }
    ]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        ip = instance["PublicIpAddress"]
        sns_publish = sns_client.publish(
            TopicArn='TOPICARN',
            Message=ip,
        )
        print(sns_publish)

Objective:
After an EC2 instance starts
Obtain the IP address
Send a message via Amazon SNS
It can take some time for a Public IP address to be assigned to an Amazon EC2 instance. Rather than continually calling DescribeInstances(), it would be easier to have the instance report itself via a User Data script (see Run commands on your Linux instance at launch - Amazon Elastic Compute Cloud).
The script could:
Obtain its Public IP address via Instance metadata and user data - Amazon Elastic Compute Cloud:
IP=$(curl 169.254.169.254/latest/meta-data/public-ipv4)
Send a message to an Amazon SNS topic with:
aws sns publish --topic-arn xxx --message $IP
If you also want the message to include a name from a tag associated with the instance, the script will need to call aws ec2 describe-instances with its own Instance ID (which can be obtained via the Instance Metadata) and then extract the name from the tags returned. A Python sketch of the same flow follows.
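If you prefer to keep it all in Python rather than shell, here is a hedged boto3 sketch of that flow, meant to run on the instance itself. It assumes boto3 is installed, the instance role allows ec2:DescribeInstances and sns:Publish, IMDSv1 is reachable, and the topic ARN below is only a placeholder:

import urllib.request
import boto3

TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:my-topic'  # placeholder, replace with your topic

def metadata(path):
    # Query the instance metadata service (if IMDSv2 is enforced, a session token
    # must be requested first; this sketch assumes IMDSv1 is available)
    url = 'http://169.254.169.254/latest/meta-data/' + path
    return urllib.request.urlopen(url, timeout=2).read().decode()

instance_id = metadata('instance-id')
public_ip = metadata('public-ipv4')

# Look up the Name tag (or any other tag) on this instance
ec2 = boto3.client('ec2')
reservations = ec2.describe_instances(InstanceIds=[instance_id])['Reservations']
tags = reservations[0]['Instances'][0].get('Tags', [])
name = next((t['Value'] for t in tags if t['Key'] == 'Name'), instance_id)

# Publish the name and public IP to SNS
boto3.client('sns').publish(TopicArn=TOPIC_ARN, Message=f'{name}: {public_ip}')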

Related

AWS Lambda failing to fetch EC2 AZ details

I am trying to create a Lambda script using Python 3.9 which will return the total EC2 servers in an AWS account, their status, and details. Part of my code snippet is:
def lambda_handler(event, context):
    client = boto3.client("ec2")
    #s3 = boto3.client("s3")
    # fetch information about all the instances
    status = client.describe_instances()
    for i in status["Reservations"]:
        instance_details = i["Instances"][0]
        if instance_details["State"]["Name"].lower() in ["shutting-down", "stopped", "stopping", "terminated",]:
            print("AvailabilityZone: ", instance_details['AvailabilityZone'])
            print("\nInstanceId: ", instance_details["InstanceId"])
            print("\nInstanceType: ", instance_details['InstanceType'])
On running this code I get an error.
If I comment out the AZ details, the code works fine. If I create a new function with only the AZ parameter in it, all AZs are returned. I am not getting why it fails in the code above.
In Python, it's always best practice to use the get method to fetch a value from a dict so that a missing key doesn't raise an exception.
AvailabilityZone is actually present in the Placement dict and not directly under the instance details. You can check the entire response structure in the boto3 documentation below.
Reference
def lambda_handler(event, context):
    client = boto3.client("ec2")
    #s3 = boto3.client("s3")
    # fetch information about all the instances
    status = client.describe_instances()
    for i in status["Reservations"]:
        instance_details = i["Instances"][0]
        if instance_details["State"]["Name"].lower() in ["shutting-down", "stopped", "stopping", "terminated",]:
            print(f"AvailabilityZone: {instance_details.get('Placement', dict()).get('AvailabilityZone')}")
            print(f"\nInstanceId: {instance_details.get('InstanceId')}")
            print(f"\nInstanceType: {instance_details.get('InstanceType')}")
The problem is that in the response of describe_instances the availability zone is not at the first level of the instance dictionary (in your case instance_details). The availability zone is under the Placement dictionary, so what you need is:
print(f"AvailabilityZone: {instance_details.get('Placement', dict()).get('AvailabilityZone')}")

AWS CLI Query to find Shared Security Groups

I am trying to write a query to return results of any referenced security group not owned by the current account.
This means I am trying to show security groups that are being used as part of a peering connection from another VPC.
There are a couple of restrictions.
Show the entire security group details (security group id, description)
Only show security groups where IpPermissions.UserIdGroupPairs has a Value and where that value is not equal to the owner of the security group
I am trying to write this using a single AWS CLI cmd vs a bash script or python script.
Any thoughts?
Here's what I have so far.
aws ec2 describe-security-groups --query "SecurityGroups[?IpPermissions.UserIdGroupPairs[*].UserId != '`aws sts get-caller-identity --query 'Account' --output text`']"
The following is a Python 3.8 based AWS Lambda, but you can change it a bit to use as a Python script file to execute on any supported host machine.
import boto3
import ast

config_service = boto3.client('config')

# Refactor to extract out duplicate code as a separate def
def lambda_handler(event, context):
    results = get_resource_details()
    for resource in results:
        if "configuration" in resource:
            config = ast.literal_eval(resource)["configuration"]
            if "ipPermissionsEgress" in config:
                ipPermissionsEgress = config["ipPermissionsEgress"]
                for data in ipPermissionsEgress:
                    for userIdGroupPair in data["userIdGroupPairs"]:
                        if userIdGroupPair["userId"] != "123456789111":
                            print(userIdGroupPair["groupId"])
            elif "ipPermissions" in config:
                ipPermissions = config["ipPermissions"]
                for data in ipPermissions:
                    for userIdGroupPair in data["userIdGroupPairs"]:
                        if userIdGroupPair["userId"] != "123456789111":
                            print(userIdGroupPair["groupId"])

def get_resource_details():
    query = "SELECT configuration.ipPermissions.userIdGroupPairs.groupId,configuration.ipPermissionsEgress.userIdGroupPairs.groupId,configuration.ipPermissionsEgress.userIdGroupPairs.userId,configuration.ipPermissions.userIdGroupPairs.userId WHERE resourceType = 'AWS::EC2::SecurityGroup' AND configuration <> 'null'"
    results = config_service.select_resource_config(
        Expression=query,
        Limit=100
    )  # you might need to refactor to support a huge list of records using NextToken
    return results["Results"]
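If you would rather not rely on AWS Config, a different approach (not the answer above, just a hedged sketch) is to call describe_security_groups with boto3 and filter the UserIdGroupPairs client-side; the account ID comes from STS, and the helper name below is only illustrative:

import boto3

def find_externally_referenced_groups(region='us-east-1'):
    # Return security groups whose rules reference a group owned by another account
    account_id = boto3.client('sts', region).get_caller_identity()['Account']
    ec2 = boto3.client('ec2', region)
    shared = []
    paginator = ec2.get_paginator('describe_security_groups')
    for page in paginator.paginate():
        for sg in page['SecurityGroups']:
            pairs = []
            for perm in sg.get('IpPermissions', []) + sg.get('IpPermissionsEgress', []):
                pairs.extend(perm.get('UserIdGroupPairs', []))
            # keep the group if any referenced pair belongs to a different account
            if any(p.get('UserId') and p['UserId'] != account_id for p in pairs):
                shared.append({'GroupId': sg['GroupId'], 'Description': sg.get('Description')})
    return shared

if __name__ == '__main__':
    for group in find_externally_referenced_groups():
        print(group)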

AWS Fargate - How to get the public ip address of task by using python boto3

I am creating a new Fargate task by using the following Python script.
import boto3
import json

def handler():
    client = boto3.client('ecs')
    response = client.run_task(
        cluster='fargate-learning',  # name of the cluster
        launchType='FARGATE',
        taskDefinition='fargate-learning:1',  # replace with your task definition name and revision
        count=1,
        platformVersion='LATEST',
        networkConfiguration={
            'awsvpcConfiguration': {
                'subnets': [
                    'subnet-0a024d8ac87668b64',  # replace with your public subnet or a private one with NAT
                ],
                'assignPublicIp': 'ENABLED'
            }
        })
    print(response)
    return str(response)

if __name__ == '__main__':
    handler()
And here is the response I am getting from boto3.
https://jsonblob.com/5faf3ae6-bc31-11ea-8cae-53bd90c38587
I cannot see the public IP address in the response, although the script does assign one and I can see it in the console.
So, how can I get this public IP address by using boto3?
Thanks
This can be done in two steps:
Use describe_tasks to get ENI id associated with your fargate awsvpc interface. The value of eni, e.g. eni-0c866df3faf8408d0, will be given in attachments and details from the result of the call.
Once you have the eni, then you can use EC2.NetworkInterface. For example:
eni_id = 'eni-0c866df3faf8408d0' # from step 1
eni = boto3.resource('ec2').NetworkInterface(eni_id)
print(eni.association_attribute['PublicIp'])
I tried implementing @Marcin's answer as a function. Hope this can be helpful:
import boto3

ecs = boto3.client("ecs")  # the ECS client is assumed; it was not shown in the original snippet

def get_service_ips(cluster, tasks):
    tasks_detail = ecs.describe_tasks(
        cluster=cluster,
        tasks=tasks
    )

    # first get the ENIs
    enis = []
    for task in tasks_detail.get("tasks", []):
        for attachment in task.get("attachments", []):
            for detail in attachment.get("details", []):
                if detail.get("name") == "networkInterfaceId":
                    enis.append(detail.get("value"))

    # now the ips
    ips = []
    for eni in enis:
        eni_resource = boto3.resource("ec2").NetworkInterface(eni)
        ips.append(eni_resource.association_attribute.get("PublicIp"))

    return ips
as a gist here.

Is there anyway to fetch tags of a RDS instance using boto3?

rds_client = boto3.client('rds', 'us-east-1')
instance_info = rds_client.describe_db_instances( DBInstanceIdentifier='**myinstancename**')
But the instance_info doesn't contain any tags I set in the RDS instance. I want to fetch the instances that has env='production' in them and want to exclude env='test'. Is there any method in boto3 that fetched the tags as well?
Only through boto3.client("rds").list_tags_for_resource
Lists all tags on an Amazon RDS resource.
ResourceName (string) --
The Amazon RDS resource with tags to be listed. This value is an Amazon Resource Name (ARN). For information about creating an ARN, see Constructing an RDS Amazon Resource Name (ARN) .
import boto3

rds_client = boto3.client('rds', 'us-east-1')
db_instance_info = rds_client.describe_db_instances(
    DBInstanceIdentifier='**myinstancename**')
for each_db in db_instance_info['DBInstances']:
    response = rds_client.list_tags_for_resource(
        ResourceName=each_db['DBInstanceArn'],
        Filters=[{
            'Name': 'env',
            'Values': [
                'production',
            ]
        }])
Either use a simple exclusion on top of the simple filter above, or you can dig through the documentation to build a more complicated JMESPath filter using
paginators.
Note: AWS resource tagging is not a universal implementation, so you must always refer to the boto3 documentation.
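As a hedged illustration of the paginator route, the sketch below pages through all DB instances and keeps only those whose env tag is production, doing the tag comparison client-side (the tag key and values are just examples from the question):

import boto3

rds_client = boto3.client('rds', 'us-east-1')

def production_instances():
    # Yield DB instance identifiers tagged env=production, skipping env=test and untagged instances
    paginator = rds_client.get_paginator('describe_db_instances')
    for page in paginator.paginate():
        for db in page['DBInstances']:
            tags = rds_client.list_tags_for_resource(
                ResourceName=db['DBInstanceArn'])['TagList']
            env = next((t['Value'] for t in tags if t['Key'] == 'env'), None)
            if env == 'production':
                yield db['DBInstanceIdentifier']

for name in production_instances():
    print(name)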
This Python program will show you how to list all RDS instances, their type, and status.
list_rds_instances.py
import boto3

# connect to the RDS service
client = boto3.client('rds')

# rds_instance will hold all RDS information in a dictionary
rds_instance = client.describe_db_instances()
all_list = rds_instance['DBInstances']

print('RDS Instance Name \t| Instance Type \t| Status')
for i in rds_instance['DBInstances']:
    dbInstanceName = i['DBInstanceIdentifier']
    dbInstanceEngine = i['DBInstanceClass']
    dbInstanceStatus = i['DBInstanceStatus']
    print('%s \t| %s \t| %s' % (dbInstanceName, dbInstanceEngine, dbInstanceStatus))
Important Note: While working with boto3 you need to set up your credentials in two files, ~/.aws/credentials and ~/.aws/config:
~/.aws/credentials
[default]
aws_access_key_id=<ACCESS_KEY>
aws_secret_access_key=<SECRET_KEY>
~/.aws/config
[default]
region=ap-south-1

Configuring munin server for use with AWS autoscaling?

I am planning to use AWS autoscaling groups for my webservers. As a monitoring solution I am using munin at the moment. In the configuration file on the munin master server, you have to give IP addresses or host names for every host you want to monitor.
Now with autoscaling the number of instances will change frequently, and writing static information in the munin config does not seem to fit well in this environment. I could probably query all server addresses I want to monitor and then write the munin master configuration file, but this does not seem like a good approach to me.
What is the preferred way of using munin in such an environment? Does someone use munin with autoscaling?
In general I would like to keep using munin and not switch to another monitoring solution because I wrote quite a lot of specific plugins that I rely on. However if you have another monitoring solution that will probably let me keep my plugins I am also open for that.
One year ago we used munin as an alternative monitoring system, and I will tell you one thing: I don't like it at all.
We had some automation for the auto scaling system in Nagios too, but that is also an ugly way to monitor a large number of AWS instances, because Nagios starts to lag/crash after a certain number of monitored instances.
If you have more than 150-200 instances to monitor, I suggest you use a commercial service like StackDriver or other alternatives.
I stumbled across this old topic because I was looking for a solution to the same problem. Finally, I found a way that works for me which I would like to share with you. The tl;dr summary:
use AWS Python API to get all instances in the same VPC the munin master is in
test if munin port 4949 is open on the instances found to detect munin nodes
create munin.conf from a munin.base.conf (without nodes) and append entries for all the nodes found
run the script on the munin master every 5 minutes via cron
Finally, here is my Python script which does all the magic:
#!/usr/bin/python
import boto3
import requests
import argparse
import shutil
import socket

socketTimeout = 2

ec2 = boto3.client('ec2')

def getVpcId():
    response = requests.get('http://169.254.169.254/latest/meta-data/instance-id')
    instance_id = response.text
    response = ec2.describe_instances(
        Filters=[
            {
                'Name': 'instance-id',
                'Values': [instance_id]
            }
        ]
    )
    return response['Reservations'][0]['Instances'][0]['VpcId']

def findNodes(tag):
    result = []
    vpcId = getVpcId()
    response = ec2.describe_instances(
        Filters=[
            {
                'Name': 'tag-key',
                'Values': [tag]
            },
            {
                'Name': 'vpc-id',
                'Values': [vpcId]
            }
        ]
    )
    for reservation in response['Reservations']:
        for instance in reservation['Instances']:
            result.append(instance)
    return result

def getInstanceTag(instance, tagName):
    for tag in instance['Tags']:
        if tag['Key'] == tagName:
            return tag['Value']
    return None

def isMuninNode(host):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(socketTimeout)
    try:
        s.connect((host, 4949))
        s.shutdown(socket.SHUT_RDWR)
        return True
    except Exception:
        return False
    finally:
        s.close()

def appendNodesToConfig(nodes, target, tag):
    with open(target, "a") as file:
        for node in nodes:
            hostname = getInstanceTag(node, tag)
            if hostname is None:
                continue
            if hostname.endswith('.'):
                hostname = hostname[:-1]
            if isMuninNode(hostname):
                file.write('[' + hostname + ']\n')
                file.write('\taddress ' + hostname + '\n')
                file.write('\tuse_node_name yes\n\n')

parser = argparse.ArgumentParser("muninconf.py")
parser.add_argument("baseconfig", help="base munin config to append nodes to")
parser.add_argument("target", help="target munin config")
args = parser.parse_args()

base = args.baseconfig
target = args.target

shutil.copyfile(base, target)
nodes = findNodes('CNAME')
appendNodesToConfig(nodes, target, 'CNAME')
For the API calls to work you have to set up AWS API credentials or assign an IAM role with the required permissions (ec2:DescribeInstances as a bare minimum) to your munin master instance (which is my preferred method).
Some final implementation notes:
I have a tag named CNAME assigned to all my AWS instances which holds the internal DNS host name. Therefore I filter for this tag and use the value as the node name and address for the munin configuration. You probably have to change this for your setup.
Another option would be to assign a specific tag to all the instances you want to monitor with munin. You could then filter for this tag and probably also skip the check for the open munin port.
Hope this is of some help.
Cheers,
Oliver