How to be alerted of AWS volumes with State=Available? - amazon-web-services

How can I get an email alert when there are 1 or more EBS volumes in AWS with a state of 'Available'?
In AWS we have a team of people who manage EC2 instances. Sometimes instances are deleted and redundant volumes are left, showing as State = Available (seen here https://eu-west-1.console.aws.amazon.com/ec2/v2/home?region=eu-west-1#Volumes:sort=state).
I would like to be notified by e-mail when this happens, so I can manually review and delete them as required. A scheduled check and alert (e-mail) once per day would be fine.
I think this should be possible via AWS CloudWatch, but I can't see how to do it.

This is what I am using in an AWS Lambda function:
import boto3

ec2 = boto3.resource('ec2')
sns = boto3.client('sns')

def chk_vols(event, context):
    # Collect the IDs of all EBS volumes whose state is 'available'
    vol_array = ec2.volumes.all()
    vol_avail = []
    for v in vol_array:
        if v.state == 'available':
            vol_avail.append(v.id)

    # If any were found, publish their IDs to an SNS topic
    if vol_avail:
        sns.publish(
            TopicArn='arn:aws:sns:<your region>:<your account>:<your topic>',
            Message=str(vol_avail),
            Subject='AWS Volumes Available'
        )
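To run this on the daily schedule mentioned above, the function can be triggered by a scheduled CloudWatch Events (EventBridge) rule. Below is a minimal sketch using boto3; the rule name, statement ID and function ARN are placeholders for illustration, and the same rule can just as easily be created in the console.
import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# Placeholder names/ARNs - replace with your own
RULE_NAME = 'daily-available-volume-check'
FUNCTION_ARN = 'arn:aws:lambda:<your region>:<your account>:function:chk_vols'

# A rule that fires once per day
rule = events.put_rule(
    Name=RULE_NAME,
    ScheduleExpression='rate(1 day)',
    State='ENABLED'
)

# Allow CloudWatch Events to invoke the function, then attach it as a target
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId='allow-' + RULE_NAME,
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn']
)
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{'Id': 'chk-vols-target', 'Arn': FUNCTION_ARN}]
)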

Related

Grab Public IP Of a New Running Instance and send it via SNS

So, I have this code, and I would love to grab the public IP address of the new Windows instance that will be created when I adjust the desired capacity.
The launch template assigns an automatic tag name when I adjust the desired_capacity. I want to be able to grab the public IP address of the instance carrying that tag name.
import boto3

session = boto3.session.Session()
client = session.client('autoscaling')

def set_desired_capacity(asg_name, desired_capacity):
    response = client.set_desired_capacity(
        AutoScalingGroupName=asg_name,
        DesiredCapacity=desired_capacity,
    )
    return response

def lambda_handler(event, context):
    asg_name = "test"
    desired_capacity = 1
    return set_desired_capacity(asg_name, desired_capacity)

if __name__ == '__main__':
    print(lambda_handler("", ""))
I took a look at the EC2 client documentation, and I wasn't sure what to use. I just need help modifying my code.
If you know the tag that you are assigning in the autoscaling group, then you can just use a describe_instances method. The Boto3 docs have an example with filtering. Something like this should work, replacing TAG, VALUE, and TOPICARN with the appropriate values.
import boto3

ec2_client = boto3.client('ec2', 'us-west-2')
sns_client = boto3.client('sns', 'us-west-2')

response = ec2_client.describe_instances(
    Filters=[
        {
            'Name': 'tag:TAG',
            'Values': [
                'VALUE'
            ]
        }
    ]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        ip = instance["PublicIpAddress"]
        sns_publish = sns_client.publish(
            TopicArn='TOPICARN',
            Message=ip,
        )
        print(sns_publish)
Objective:
After an EC2 instance starts
Obtain the IP address
Send a message via Amazon SNS
It can take some time for a public IP address to be assigned to an Amazon EC2 instance. Rather than continually calling DescribeInstances(), it would be easier to run commands on the instance at launch via a User Data script (see "Run commands on your Linux instance at launch" in the Amazon EC2 documentation).
The script could:
Obtain its public IP address from the instance metadata (see "Instance metadata and user data" in the Amazon EC2 documentation):
IP=$(curl 169.254.169.254/latest/meta-data/public-ipv4)
Send a message to an Amazon SNS topic with:
aws sns publish --topic-arn xxx --message $IP
If you also want the message to include a name from a tag associated with the instance, the script will need to call aws ec2 describe-instances with its own Instance ID (which can be obtained via the instance metadata) and then extract the name from the tags returned.
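A rough sketch of such a script, written in Python rather than shell, assuming boto3 is installed on the instance, the instance role allows ec2:DescribeInstances and sns:Publish, IMDSv1 is reachable (IMDSv2-only instances would need a session token), and the topic ARN is a placeholder:
import urllib.request
import boto3

METADATA = 'http://169.254.169.254/latest/meta-data/'
TOPIC_ARN = 'arn:aws:sns:<your region>:<your account>:<your topic>'  # placeholder

def md(path):
    # Read a value from the instance metadata service (IMDSv1 style)
    return urllib.request.urlopen(METADATA + path).read().decode()

instance_id = md('instance-id')
public_ip = md('public-ipv4')
region = md('placement/region')

# Look up this instance's tags to find its Name tag
ec2 = boto3.client('ec2', region_name=region)
reservation = ec2.describe_instances(InstanceIds=[instance_id])['Reservations'][0]
tags = reservation['Instances'][0].get('Tags', [])
name = next((t['Value'] for t in tags if t['Key'] == 'Name'), 'unnamed')

# Publish the name and public IP to the SNS topic
boto3.client('sns', region_name=region).publish(
    TopicArn=TOPIC_ARN,
    Message='{} ({}) has public IP {}'.format(name, instance_id, public_ip)
)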

AWS Lambda - Copy monthly snapshots to another region

I am trying to run a Lambda function that kicks off on a schedule and copies all snapshots taken the day prior to another region, for DR purposes. I have a bit of code, but it does not seem to work as intended.
Symptoms:
It's grabbing the same snapshots multiple times and copying them
It always errors out on 2 particular snapshots; I don't know enough coding to write a log to figure out why. These snapshots work if I copy them manually, though.
import boto3
from datetime import date, timedelta

SOURCE_REGION = 'us-east-1'
DEST_REGION = 'us-west-2'

ec2_source = boto3.client('ec2', region_name=SOURCE_REGION)
ec2_destination = boto3.client('ec2', region_name=DEST_REGION)

snaps = ec2_source.describe_snapshots(OwnerIds=['self'])['Snapshots']

yesterday = date.today() - timedelta(days=1)
yesterday_snaps = [s for s in snaps if s['StartTime'].date() == yesterday]

for yester_snap in yesterday_snaps:
    DestinationSnapshot = ec2_destination.copy_snapshot(
        SourceSnapshotId=yester_snap['SnapshotId'],
        SourceRegion=SOURCE_REGION,
        Encrypted=True,
        KmsKeyId='REMOVED FOR SECURITY',
        DryRun=False
    )
    DestinationSnapshotID = DestinationSnapshot['SnapshotId']
    ec2_destination.create_tags(
        Resources=[DestinationSnapshotID],
        Tags=yester_snap['Tags']
    )
    waiter = ec2_destination.get_waiter('snapshot_completed')
    waiter.wait(
        SnapshotIds=[DestinationSnapshotID],
        DryRun=False,
        WaiterConfig={'Delay': 10, 'MaxAttempts': 123}
    )
Debugging
You can debug by simply putting print() statements in your code.
For example:
for yester_snap in yesterday_snaps:
    print('Copying:', yester_snap['SnapshotId'])
    DestinationSnapshot = ec2_destination.copy_snapshot(...)
The logs will appear in CloudWatch Logs. You can access the logs via the Monitoring tab in the Lambda function. Make sure the Lambda function has AWSLambdaBasicExecutionRole permissions so that it can write to CloudWatch Logs.
Today/Yesterday
Be careful about your definition of yesterday. Snapshot timestamps and the Lambda environment are in UTC, so your concept of today and yesterday might not match what is happening.
It might be better to add a tag to snapshots after they are copied (eg 'copied') rather than relying on dates to figure out which ones to copy, as sketched below.
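A hedged sketch of that tag-based approach (the 'copied' tag key is just an illustrative choice):
import boto3

SOURCE_REGION = 'us-east-1'
ec2_source = boto3.client('ec2', region_name=SOURCE_REGION)

# Only consider snapshots that have not yet been tagged as copied
snaps = ec2_source.describe_snapshots(OwnerIds=['self'])['Snapshots']
pending = [
    s for s in snaps
    if not any(t['Key'] == 'copied' for t in s.get('Tags', []))
]

for snap in pending:
    # ... copy_snapshot to the destination region as in the question ...
    # then mark the source snapshot so it is skipped on the next run
    ec2_source.create_tags(
        Resources=[snap['SnapshotId']],
        Tags=[{'Key': 'copied', 'Value': 'true'}]
    )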
CloudWatch Events rule
Rather than running this program once per day, an alternative method would be:
Create an Amazon CloudWatch Events rule that triggers on Snapshot creation:
{
  "source": [
    "aws.ec2"
  ],
  "detail-type": [
    "EBS Snapshot Notification"
  ],
  "detail": {
    "event": [
      "createSnapshot"
    ]
  }
}
Configure the rule to trigger an AWS Lambda function
In the Lambda function, copy the Snapshot that was just created
This way, the snapshot copies are created immediately and there is no need to search for them or figure out which Snapshots to copy
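A minimal sketch of such a Lambda function, assuming the event detail carries the new snapshot's ARN in a snapshot_id field (verify the actual event structure in your account before relying on it) and that the regions below are placeholders:
import boto3

SOURCE_REGION = 'us-east-1'
DEST_REGION = 'us-west-2'
ec2_destination = boto3.client('ec2', region_name=DEST_REGION)

def lambda_handler(event, context):
    # e.g. 'arn:aws:ec2::us-east-1:snapshot/snap-0123456789abcdef0'
    snapshot_arn = event['detail']['snapshot_id']
    snapshot_id = snapshot_arn.split('/')[-1]

    # copy_snapshot must be called in the destination region
    copy = ec2_destination.copy_snapshot(
        SourceSnapshotId=snapshot_id,
        SourceRegion=SOURCE_REGION,
        Description='DR copy of {}'.format(snapshot_id)
    )
    return copy['SnapshotId']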

Launching EC2 in multiple regions using boto3

I am using the below code to launch an EC2 instance:
import boto3

client = boto3.client('ec2', region_name='us-east-1')
resp = client.run_instances(ImageId='ami-01e3b8c3a51e88954',
                            InstanceType='t2.micro',
                            MinCount=1, MaxCount=1)
for instance in resp['Instances']:
    print(instance['InstanceId'])
This code is working. But my requirement now is to launch instances in multiple regions at the same time.
Can anyone suggest how to achieve this?
First, you would need to find the AMI ID for each region, because AMIs are region-specific: the same image has a different ID in every region.
Then you would do something like:
import boto3

regions = {
    'us-east-1': 'ami-01e3b8c3a51e88954',
    'eu-west-1': 'ami-XXXXXXXXXXXXXXXXX',
}

for region in regions:
    region_client = boto3.client('ec2', region_name=region)
    resp = region_client.run_instances(ImageId=regions[region],
                                       InstanceType='t2.micro',
                                       MinCount=1, MaxCount=1)
    for instance in resp['Instances']:
        print(instance['InstanceId'])
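If you would rather not hard-code the AMI map, one option is to resolve a public SSM parameter per region. A rough sketch; the Amazon Linux 2 parameter name below is just an example, so adjust it for the image you actually want:
import boto3

# Example public parameter for the latest Amazon Linux 2 AMI
PARAM = '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'
regions = ['us-east-1', 'eu-west-1']

for region in regions:
    # Resolve the region-specific AMI ID from the public SSM parameter
    ssm = boto3.client('ssm', region_name=region)
    ami_id = ssm.get_parameter(Name=PARAM)['Parameter']['Value']

    ec2 = boto3.client('ec2', region_name=region)
    resp = ec2.run_instances(ImageId=ami_id,
                             InstanceType='t2.micro',
                             MinCount=1, MaxCount=1)
    for instance in resp['Instances']:
        print(region, instance['InstanceId'])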

Trying to disable all the CloudWatch alarms in one shot

My organization is planning for a maintenance window for the next 5 hours. During that time, I do not want CloudWatch to trigger alarms and send notifications.
Earlier, when I had to disable 4 alarms, I wrote the following code in AWS Lambda. This worked fine.
import boto3
import collections

client = boto3.client('cloudwatch')

def lambda_handler(event, context):
    response = client.disable_alarm_actions(
        AlarmNames=[
            'CRITICAL - StatusCheckFailed for Instance 456',
            'CRITICAL - StatusCheckFailed for Instance 345',
            'CRITICAL - StatusCheckFailed for Instance 234',
            'CRITICAL - StatusCheckFailed for Instance 123'
        ]
    )
But now I have been asked to disable all the alarms, which are 361 in number, so listing all those names would take a lot of time.
Please let me know what I should do.
Use describe_alarms() to obtain a list of them, then iterate through and disable them:
import boto3

client = boto3.client('cloudwatch')

response = client.describe_alarms()
names = [alarm['AlarmName'] for alarm in response['MetricAlarms']]
disable_response = client.disable_alarm_actions(AlarmNames=names)
You might want some logic around the Alarm Name to only disable particular alarms.
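Note that with 361 alarms a single describe_alarms() call will not return everything (the results are paginated), and disable_alarm_actions() only accepts a limited number of names per call, so a paginator plus batching is safer. A rough sketch, assuming a batch size of 100 (check the current AlarmNames limit in the API docs):
import boto3

client = boto3.client('cloudwatch')

# Collect every alarm name across all pages of results
names = []
paginator = client.get_paginator('describe_alarms')
for page in paginator.paginate():
    names.extend(alarm['AlarmName'] for alarm in page['MetricAlarms'])

# Disable actions in batches to stay under the per-call limit
BATCH = 100
for i in range(0, len(names), BATCH):
    client.disable_alarm_actions(AlarmNames=names[i:i + BATCH])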
If you do not have the specific alarm ARNs, then you can use the logic in the previous answer. If you do have a specific list of ARNs that you want to disable, you can fetch their names like this:
import boto3

client = boto3.client('cloudwatch')

def get_alarm_names(alarm_arns):
    # Return the names of only those alarms whose ARNs were requested
    names = []
    response = client.describe_alarms()
    for i in response['MetricAlarms']:
        if i['AlarmArn'] in alarm_arns:
            names.append(i['AlarmName'])
    return names
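For example, with the placeholder ARNs below replaced by your own:
alarm_arns = [
    'arn:aws:cloudwatch:<your region>:<your account>:alarm:ExampleAlarm1',
    'arn:aws:cloudwatch:<your region>:<your account>:alarm:ExampleAlarm2',
]
client.disable_alarm_actions(AlarmNames=get_alarm_names(alarm_arns))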
Here's a full tutorial: https://medium.com/geekculture/terraform-structure-for-enabling-disabling-alarms-in-batches-5c4f165a8db7

How to find unused VPC in AWS account

Is there any way to find unused VPCs in an AWS account?
I mean the VPCs that don't have any EC2 instances, RDS instances or other services associated with them.
One way is to just search by VPC ID across running instances, RDS and other services to find out whether the VPC is in use or not. Is there any other way, or an AWS CLI command, to find unused VPCs?
There are many resources that can be included in a VPC, such as:
Amazon EC2 instances
Amazon RDS instances
Amazon Redshift instances
Amazon Elasticache instances
Elastic Load Balancers
Elastic Network Interfaces
and so on!
Rather than trying to iterate through each of these services, you could iterate through the Elastic Network Interfaces (ENIs), since everything connects to a VPC via an ENI.
Here's a command you could run using the AWS Command-Line Interface (CLI) that shows ENIs attached to a given VPC:
aws ec2 describe-network-interfaces --filters 'Name=vpc-id,Values=vpc-abcd1234' --query 'NetworkInterfaces[*].NetworkInterfaceId'
If no ENIs are returned, then you'd probably call it an unused VPC.
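The same check can be done across every VPC in a region with boto3; a rough sketch (the region is hard-coded here purely as an example):
import boto3

ec2 = boto3.client('ec2', region_name='eu-west-1')

for vpc in ec2.describe_vpcs()['Vpcs']:
    # An ENI exists for anything that attaches to the VPC
    enis = ec2.describe_network_interfaces(
        Filters=[{'Name': 'vpc-id', 'Values': [vpc['VpcId']]}]
    )['NetworkInterfaces']
    if not enis:
        print('Probably unused:', vpc['VpcId'])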
This might sound crazy, but I am pretty sure you can just attempt to delete the VPC. AWS should refuse to delete any VPC that still has resources running in it. Of course, you should give this a quick try on a VPC you don't care about before relying on it, but it's probably the fastest/cleanest approach.
Please use the following script to identify the unused Subnets for your AWS accounts in all regions:
USAGE:
Please add the account list in the accounts variable, e.g. accounts=["a1","a2","a3"]
It will query and provide the list of subnets in all the regions for the respective accounts
A single CSV file will be created at the end of each run for each account
Logic:
Query all the subnets across all the regions for an AWS account
Get the currently available IP count for each subnet (provided by the AWS API)
Get the subnet CIDR and calculate the total IP count, then subtract 5 (2 are used for the network and broadcast addresses, and the other 3 are reserved by AWS by default)
Then subtract: Total IPs - Available IPs = currently used IPs. If the used IP count is 0, the subnet can be cleaned up
import boto3
import sys
import csv
import ipaddress

def describe_regions(session):
    try:
        aws_regions = []
        ec2_client = session.client('ec2')
        response_regions = ec2_client.describe_regions()['Regions']
        for region in response_regions:
            aws_regions.append(region['RegionName'])
        return aws_regions
    except Exception:
        print("Unexpected error:", sys.exc_info()[0])

def describe_vpc(ec2, aws_region, writer, profile_name):
    try:
        response_vpc = ec2.describe_vpcs()['Vpcs']
        for vpc in response_vpc:
            print('=' * 50)
            count = 0
            filters = [
                {'Name': 'vpc-id',
                 'Values': [vpc['VpcId']]}
            ]
            response_subnets = ec2.describe_subnets(Filters=filters)['Subnets']
            for subnets in response_subnets:
                count += 1
                total_count = (ipaddress.ip_network(subnets['CidrBlock']).num_addresses) - 5
                Used_IP = total_count - subnets['AvailableIpAddressCount']
                writer.writerow({"Account": profile_name, "VpcId": vpc['VpcId'], "VpcCidr": vpc['CidrBlock'], "Region": aws_region,
                                 "Subnet": subnets['CidrBlock'], "SubnetId": subnets['SubnetId'], "AvailableIPv4": subnets['AvailableIpAddressCount'], "Total_Network_IP": str(total_count),
                                 "AvailabilityZone": subnets['AvailabilityZone'], "Used_IP": str(Used_IP)})
                print({"Account": profile_name, "VpcId": vpc['VpcId'], "VpcCidr": vpc['CidrBlock'], "Region": aws_region,
                       "Subnet": subnets['CidrBlock'], "SubnetId": subnets['SubnetId'], "AvailableIPv4": subnets['AvailableIpAddressCount'], "Total_Network_IP": str(total_count),
                       "AvailabilityZone": subnets['AvailabilityZone'], "Used_IP": str(Used_IP)})
            print('=' * 50)
    except Exception:
        print("Unexpected error:", sys.exc_info()[0])

def main():
    try:
        accounts = ["<Account names here as list>"]
        for profile in accounts:
            session = boto3.session.Session(
                profile_name=profile
            )
            file_name = profile
            print("File Name: " + file_name)
            profile_name = profile
            print("Profile_name: " + profile_name)
            with open(file_name + ".csv", "w", newline="") as csvfile:
                fieldnames = [
                    "Account", "VpcId",
                    "VpcCidr", "Region",
                    "Subnet", "SubnetId",
                    "AvailableIPv4", "Total_Network_IP",
                    "AvailabilityZone", "Used_IP"
                ]
                writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
                writer.writeheader()
                aws_regions = describe_regions(session)
                for aws_region in aws_regions:
                    ec2 = session.client('ec2', region_name=aws_region)
                    print("Scanning region: {}".format(aws_region))
                    describe_vpc(ec2, aws_region, writer, profile_name)
    except Exception:
        print("Unexpected error:", sys.exc_info()[0])
        raise

if __name__ == "__main__":
    main()
This AWS Knowledge Center post gives good guidance and contains even better aws-cli commands to use: https://aws.amazon.com/premiumsupport/knowledge-center/troubleshoot-dependency-error-delete-vpc/