I'm trying to find a way to identify orphaned security groups so I can clean up and get rid of them. Does anyone know of a way to discover unused security groups?
Either the console or the command line tools will work (I'm running the command line tools on Linux and OS X machines).
Note: this only considers security group use in EC2, not other services like RDS. You'll need to do more work to include security groups used outside EC2. The good thing is that you can't easily delete (it may not even be possible to delete) an active security group if you miss one that's associated with another service.
Using the newer AWS CLI tool, I found an easy way to get what I need:
First, get a list of all security groups
aws ec2 describe-security-groups --query 'SecurityGroups[*].GroupId' --output text | tr '\t' '\n'
Then get all security groups tied to an instance, piped through sort and uniq:
aws ec2 describe-instances --query 'Reservations[*].Instances[*].SecurityGroups[*].GroupId' --output text | tr '\t' '\n' | sort | uniq
Then put it together: compare the two lists and see what's in the master list but not in use:
comm -23 <(aws ec2 describe-security-groups --query 'SecurityGroups[*].GroupId' --output text | tr '\t' '\n'| sort) <(aws ec2 describe-instances --query 'Reservations[*].Instances[*].SecurityGroups[*].GroupId' --output text | tr '\t' '\n' | sort | uniq)
If you select all of your security groups in the EC2 console, then choose Actions -> Delete Security Groups, a popup will appear telling you that you cannot delete security groups that are attached to instances, other security groups, or network interfaces, and it will list the security groups that you can delete, i.e. the unused security groups.
NOTE: according to @andrewlorien's comment, this does not work for all types of AWS services.
Here is sample code written in boto (the Python SDK for AWS) that lists each security group along with the number of instances it is associated with.
You can use the same logic to get this information from the command line as well (a CLI sketch follows the sample output below).
Boto Code
import boto
ec2 = boto.connect_ec2()
sgs = ec2.get_all_security_groups()
for sg in sgs:
    print sg.name, len(sg.instances())
Output
Security-Group-1 0
Security-Group-2 1
Security-Group-3 0
Security-Group-4 3
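A rough CLI sketch of the same idea (not part of the original answer): it uses the instance.group-id filter of describe-instances to count, for each group, the EC2 instances that reference it. Like the boto code above, this only covers EC2 instances.
# For each security group, count the EC2 instances that reference it
for sg in $(aws ec2 describe-security-groups --query 'SecurityGroups[*].GroupId' --output text); do
  count=$(aws ec2 describe-instances --filters "Name=instance.group-id,Values=$sg" \
    --query 'Reservations[*].Instances[*].InstanceId' --output text | wc -w)
  echo "$sg $count"
done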
After about a year of unaudited use, I found it necessary to audit my AWS EC2 security groups and clean up legacy, unused groups.
This was a daunting task to perform via the web GUI, so I looked to the AWS CLI to make the task easier. I found a start on how to do this on Stack Overflow, but it was far from complete, so I decided to write my own script. I used the AWS CLI, MySQL and some “Bash-foo” to perform the following:
Get a list of all EC2 security groups.
I store the group-id, group-name and description in a table called “groups” in a MySQL database called aws_security_groups on the localhost. The total number of groups found is reported to the user.
Get a list of all security groups associated with each of the following services and exclude them from the table:
EC2 Instances
EC2 Elastic Load Balancers
AWS RDS Instances
AWS OpsWorks (shouldn’t be removed per Amazon)
Default security groups (Can’t be deleted)
ElastiCache
For each service I report a count of the number of groups left in the table after the exclusion is complete.
Finally, I display the group-id, group-name and description for the groups that are left. These are the “unused” groups that need to be audited and/or deleted. I've found that SGs between instances and Elastic Load Balancers (ELBs) often refer to each other. It's best practice to do some manual investigation to ensure they are truly not in use before removing the cross references and deleting the security groups. But my script at least pares this down to something more manageable.
NOTES:
1. You will want to create a file to store your MySQL host, username and password and point the $DBCONFIG variable to it. It should be structured like this:
[mysql]
host=your-mysql-server-host.com
user=your-mysql-user
password=your-mysql-user-password
2. You can change the name of the database if you wish – make sure to change the $DB variable in the script.
Let me know if you find this useful or have any comments, fixes or enhancements.
Here is the script.
#!/bin/bash
# Initialize Variables
DBCONFIG="--defaults-file=mysql-defaults.cnf"
DB="aws_security_groups"
SGLOOP=0
EC2LOOP=0
ELBLOOP=0
RDSLOOP=0
DEFAULTLOOP=0
OPSLOOP=0
CACHELOOP=0
DEL_GROUP=""
# Function to report back # of rows
function Rows {
ROWS=`echo "select count(*) from groups" | mysql $DBCONFIG --skip-column-names $DB`
# echo -e "Excluding $1 Security Groups.\nGroups Left to audit: "$ROWS
echo -e $ROWS" groups left after Excluding $1 Security Groups."
}
# Empty the table
echo -e "delete from groups where groupid is not null" | mysql $DBCONFIG $DB
# Get all Security Groups
aws ec2 describe-security-groups --query "SecurityGroups[*].[GroupId,GroupName,Description]" --output text > /tmp/security_group_audit.txt
while IFS=$'\t' read -r -a myArray
do
if [ $SGLOOP -eq 0 ];
then
VALUES="(\""${myArray[0]}"\",\""${myArray[1]}"\",\""${myArray[2]}"\")"
else
VALUES=$VALUES",(\""${myArray[0]}"\",\""${myArray[1]}"\",\""${myArray[2]}"\")"
fi
let SGLOOP="$SGLOOP + 1"
done < /tmp/security_group_audit.txt
echo -e "insert into groups (groupid, groupname, description) values $VALUES" | mysql $DBCONFIG $DB
echo -e $SGLOOP" security groups total."
# Exclude Security Groups assigned to Instances
for groupId in `aws ec2 describe-instances --output json | jq -r ".Reservations[].Instances[].SecurityGroups[].GroupId" | sort | uniq`
do
if [ $EC2LOOP -eq 0 ];
then
DEL_GROUP="'$groupId'"
else
DEL_GROUP=$DEL_GROUP",'$groupId'"
fi
let EC2LOOP="$EC2LOOP + 1"
done
echo -e "delete from groups where groupid in ($DEL_GROUP)" | mysql $DBCONFIG $DB
Rows "EC2 Instance"
DEL_GROUP=""
# Exclude groups assigned to Elastic Load Balancers
for elbGroupId in `aws elb describe-load-balancers --output json | jq -c -r ".LoadBalancerDescriptions[].SecurityGroups" | tr -d "\"[]\"" | sort | uniq`
do
if [ $ELBLOOP -eq 0 ];
then
DEL_GROUP="'$elbGroupId'"
else
DEL_GROUP=$DEL_GROUP",'$elbGroupId'"
fi
let ELBLOOP="$ELBLOOP + 1"
done
echo -e "delete from groups where groupid in ($DEL_GROUP)" | mysql $DBCONFIG $DB
Rows "Elastic Load Balancer"
DEL_GROUP=""
# Exclude groups assigned to RDS
for RdsGroupId in `aws rds describe-db-instances --output json | jq -c -r ".DBInstances[].VpcSecurityGroups[].VpcSecurityGroupId" | sort | uniq`
do
if [ $RDSLOOP -eq 0 ];
then
DEL_GROUP="'$RdsGroupId'"
else
DEL_GROUP=$DEL_GROUP",'$RdsGroupId'"
fi
let RDSLOOP="$RDSLOOP + 1"
done
echo -e "delete from groups where groupid in ($DEL_GROUP)" | mysql $DBCONFIG $DB
Rows "RDS Instances"
DEL_GROUP=""
# Exclude groups assigned to OpsWorks
for OpsGroupId in `echo -e "select groupid from groups where groupname like \"AWS-OpsWorks%\"" | mysql $DBCONFIG $DB`
do
if [ $OPSLOOP -eq 0 ];
then
DEL_GROUP="'$OpsGroupId'"
else
DEL_GROUP=$DEL_GROUP",'$OpsGroupId'"
fi
let OPSLOOP="$OPSLOOP + 1"
done
echo -e "delete from groups where groupid in ($DEL_GROUP)" | mysql $DBCONFIG $DB
Rows "OpsWorks"
DEL_GROUP=""
# Exclude default groups (can't be deleted)
for DefaultGroupId in `echo -e "select groupid from groups where groupname like \"default%\"" | mysql $DBCONFIG $DB`
do
if [ $DEFAULTLOOP -eq 0 ];
then
DEL_GROUP="'$DefaultGroupId'"
else
DEL_GROUP=$DEL_GROUP",'$DefaultGroupId'"
fi
let DEFAULTLOOP="$DEFAULTLOOP + 1"
done
echo -e "delete from groups where groupid in ($DEL_GROUP)" | mysql $DBCONFIG $DB
Rows "Default"
DEL_GROUP=""
# Exclude Elasticache groups
for CacheGroupId in `aws elasticache describe-cache-clusters --output json | jq -r ".CacheClusters[].SecurityGroups[].SecurityGroupId" | sort | uniq`
do
if [ $CACHELOOP -eq 0 ];
then
DEL_GROUP="'$CacheGroupId'"
else
DEL_GROUP=$DEL_GROUP",'$CacheGroupId'"
fi
let CACHELOOP="$CACHELOOP + 1"
done
echo -e "delete from groups where groupid in ($DEL_GROUP)" | mysql $DBCONFIG $DB
Rows "ElastiCache"
# Display Security Groups left to audit / delete
echo "select * from groups order by groupid" | mysql $DBCONFIG $DB | sed 's/groupid\t/groupid\t\t/'
And here is the SQL to create the database.
-- MySQL dump 10.13 Distrib 5.5.41, for debian-linux-gnu (x86_64)
--
-- Host: localhost Database: aws_security_groups
-- ------------------------------------------------------
-- Server version 5.5.40-log
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
--
-- Table structure for table `groups`
--
DROP TABLE IF EXISTS `groups`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `groups` (
`groupid` varchar(12) DEFAULT NULL,
`groupname` varchar(200) DEFAULT NULL,
`description` varchar(200) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Dumping data for table `groups`
--
LOCK TABLES `groups` WRITE;
/*!40000 ALTER TABLE `groups` DISABLE KEYS */;
/*!40000 ALTER TABLE `groups` ENABLE KEYS */;
UNLOCK TABLES;
/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;
/*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
/*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
/*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
/*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;
-- Dump completed on 2015-01-27 16:07:44
Among other functions, both ScoutSuite and Prowler report unused EC2 Security Groups. Both are open source.
A boto example printing the Group IDs and Names only of the security groups that have no current instances.
It also shows how to specify which region you are concerned with.
import boto
import boto.ec2
EC2_REGION='ap-southeast-2'
ec2region = boto.ec2.get_region(EC2_REGION)
ec2 = boto.connect_ec2(region=ec2region)
sgs = ec2.get_all_security_groups()
for sg in sgs:
    if len(sg.instances()) == 0:
        print ("{0}\t{1}".format(sg.id, sg.name))
To confirm which security groups are still being used you should reverse or remove the if len(sg.instances()) == 0 test and print the len(sg.instances()) value out.
E.g.
print ("{0}\t{1}\t{2} instances".format(sg.id, sg.name, len(sg.instances())))
Using the Node.js AWS SDK, I can confirm that AWS doesn't allow you to delete security groups that are in use. I wrote a script that simply tries to delete all groups and gracefully handles the errors. This works for both EC2-Classic and VPC. The error message can be seen below.
Err { [DependencyViolation: resource sg-12345678 has a dependent object]
message: 'resource sg-12345678 has a dependent object',
code: 'DependencyViolation',
time: Mon Dec 07 2015 12:12:43 GMT-0500 (EST),
statusCode: 400,
retryable: false,
retryDelay: 30 }
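The author's Node.js script isn't shown here; as a rough bash sketch of the same approach (my assumption, not the original script), you can attempt the deletions with the CLI and let DependencyViolation failures identify the groups still in use. Note that this really does delete the unused, non-default groups.
# Sketch: try to delete every non-default group; groups in use fail with DependencyViolation
for sg in $(aws ec2 describe-security-groups --query "SecurityGroups[?GroupName!='default'].GroupId" --output text); do
  if aws ec2 delete-security-group --group-id "$sg" 2>/dev/null; then
    echo "deleted unused group $sg"
  else
    echo "kept $sg (still in use or not deletable)"
  fi
done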
To list the SGs attached to network interfaces:
By name:
aws ec2 describe-network-interfaces --output text --query NetworkInterfaces[*].Groups[*].GroupName | tr -d '\r' | tr "\t" "\n" | sort | uniq
By id:
aws ec2 describe-network-interfaces --output text --query NetworkInterfaces[*].Groups[*].GroupId | tr -d '\r' | tr "\t" "\n" | sort | uniq
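To turn that into a list of groups that are not attached to any network interface, the same comm trick from the earlier answer can be applied to these lists; a sketch:
comm -23 \
  <(aws ec2 describe-security-groups --query 'SecurityGroups[*].GroupId' --output text | tr '\t' '\n' | sort) \
  <(aws ec2 describe-network-interfaces --query 'NetworkInterfaces[*].Groups[*].GroupId' --output text | tr '\t' '\n' | sort | uniq)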
I was searching for the same info: how to find all security groups that are not attached to any resource. I found this:
The AWS Config rule "EC2_SECURITY_GROUP_ATTACHED_TO_ENI" checks that non-default security groups are attached to Amazon Elastic Compute Cloud (EC2) instances or elastic network interfaces (ENIs). The rule returns NON_COMPLIANT if the security group is not associated with an EC2 instance or an ENI.
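If you want to pull the NON_COMPLIANT (i.e. unattached) groups from that rule with the CLI, something like the sketch below should work; the rule name ec2-security-group-attached-to-eni is an assumption and should match whatever you named the rule.
aws configservice get-compliance-details-by-config-rule \
  --config-rule-name ec2-security-group-attached-to-eni \
  --compliance-types NON_COMPLIANT \
  --query 'EvaluationResults[].EvaluationResultIdentifier.EvaluationResultQualifier.ResourceId' \
  --output text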
This is a very old question and I'm sure there are more ways to skin this AWS cat, but here's my solution in bash (you'll need jq for this to work):
REGION="eu-west-1"
SGLIST=$(aws ec2 describe-security-groups --query 'SecurityGroups[*].GroupId' | jq -r .[])
echo $SGLIST | xargs -n1 | while read SG; do [ "$(aws ec2 describe-network-interfaces --filters Name=group-id,Values=$SG --region $REGION | jq .NetworkInterfaces)" != '[]' ] || echo $SG; done
Remember to replace REGION with whatever region you're using.
The 1st step is to get a list of security groups.
Then, for each security group, we check whether there's a network interface associated with it. This is not limited to EC2 instances; it checks anything that has a network interface (LBs, RDS, etc.).
For reference see here.
Unfortunately the chosen answer is not as accurate as I need (I tried to investigate why, but preferred to just implement my own approach).
If I check ALL NetworkInterfaces, looking for attachments to any SecurityGroup, I get partial results. If I check only EC2 Instances, I get partial results as well.
So that's my approach to the problem:
I get ALL EC2 SecurityGroups -> all_secgrp
I get ALL EC2 Instances -> all_instances
For each Instance, I get all SecurityGroups attached to it
I remove from all_secgrp each of these SecurityGroup (because attached)
For each SecurityGroup, I check an association with any NetworkInterfaces (using the filter function and filtering using that security-group-id)
IF no association is found, I remove the security-group from all_secgrp
Attached you can see a snippet of code. Don't complain about efficiency; optimize it yourself if you want.
all_secgrp = list(ec2_connector.security_groups.all())
all_instances = ec2_connector.instances.all()

# Remove every security group that is attached to an instance
for single_instance in all_instances:
    instance_secgrp = ec2_connector.Instance(single_instance.id).security_groups
    for single_sec_grp in instance_secgrp:
        if ec2_connector.SecurityGroup(id=single_sec_grp['GroupId']) in all_secgrp:
            all_secgrp.remove(ec2_connector.SecurityGroup(id=single_sec_grp['GroupId']))

# Remove every security group that is associated with a network interface
all_secgrp_detached_tmp = all_secgrp[:]
for single_secgrp in all_secgrp_detached_tmp:
    try:
        print(single_secgrp.id)
        if len(list(ec2_connector.network_interfaces.filter(
                Filters=[{'Name': 'group-id', 'Values': [single_secgrp.id]}]))) > 0:
            all_secgrp.remove(single_secgrp)
    except Exception:
        all_secgrp.remove(single_secgrp)

return all_secgrp  # what is left are the detached security groups
There's a tool in the AWS marketplace that makes this a lot easier. It shows you which groups are attached/detached for easy deletion, but it also compares your VPC Flow Logs against the security group rules and shows you which SG rules are in use or unused. AWS posted an ELK-stack solution to do this, but it was ridiculously complex.
Here's the tool, and a disclaimer that I worked on it. But I hope you all find it pertinent:
https://www.piasoftware.net/single-post/2018/04/24/VIDEO-Watch-as-we-clean-up-EC2-security-groups-in-just-a-few-minutes
This is a difficult problem if you have security groups that reference other security groups in their rules. If so, you'll have to resolve DependencyViolation errors, which is not trivial.
If you are only using IP addresses, then this solution will work, after you create a boto3 client:
from botocore.exceptions import ClientError

# pull all security groups from all vpcs in the given profile and region and save as a set
all_sgs = {sg['GroupId'] for sg in client.describe_security_groups()['SecurityGroups']}

# create a new set for all of the security groups that are currently in use
in_use = set()

# cycle through the ENIs and add all found security groups to the in_use set
for eni in client.describe_network_interfaces()['NetworkInterfaces']:
    for group in eni['Groups']:
        in_use.add(group['GroupId'])

unused_security_groups = all_sgs - in_use

for security_group in unused_security_groups:
    try:
        response = client.delete_security_group(GroupId=security_group)
    except ClientError as e:
        if e.response['Error']['Code'] == 'DependencyViolation':
            print('EC2/Security Group Dependencies Exist')
        else:
            print('Unexpected error: {}'.format(e))
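For the cross-reference case mentioned at the top of this answer, it can help to first list every group that is referenced by another group's rules, since those references are what cause DependencyViolation. A rough CLI sketch (swap in IpPermissionsEgress to check outbound rules as well):
# Group IDs referenced by other groups' inbound rules; deleting these triggers DependencyViolation
aws ec2 describe-security-groups \
  --query 'SecurityGroups[*].IpPermissions[*].UserIdGroupPairs[*].GroupId' \
  --output text | tr '\t' '\n' | sort | uniq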
Related
I'm new to the AWS CLI and I am trying to build a CSV server inventory of my project's AWS RDS instances that includes their tags.
I have done so successfully with EC2 instances using this:
aws ec2 describe-instances \
  --query 'Reservations[*].Instances[*].[PrivateIpAddress, InstanceType, [Tags[?Key==`Name`].Value][0][0], [Tags[?Key==`ENV`].Value][0][0]]' \
  --output text | sed -E 's/\s+/,/g' >> ec2list.csv
The above command gives me a CSV with the Ip address, instance type, as well as the values of the listed tags.
However, I am currently trying to do so unsuccessfully on RDS instances with this:
aws rds describe-db-instances \
  --query 'DBInstances[*].[DBInstanceIdentifier, DBInstanceArn, [Tags[?Key==`Component`].Value][0][0], [Tags[?Key==`Engine`].Value][0][0]]' \
  --output text | sed -E 's/\s+/,/g' >> rdslist.csv
The RDS command only returns the instance ARN and identifier, but the tag values show up as None even though they definitely do have values.
What modifications need to be made to my RDS query to show the tag values, and is this even possible? Thanks
You will probably need one more command: https://docs.aws.amazon.com/AmazonRDS/latest/APIReference//API_ListTagsForResource.html.
You can wrap the two commands in a shell script like the example below.
#!/bin/bash
ARNS=$(aws rds describe-db-instances --query "DBInstances[].DBInstanceArn" --output text)
for line in $ARNS; do
TAGS=$(aws rds list-tags-for-resource --resource-name "$line" --query "TagList[]")
echo $line $TAGS
done
I realized that tags can be displayed in my original query. RDS does not use Tags like EC2 instances, but TagList. E.g.:
aws rds describe-db-instances \
  --query 'DBInstances[*].[DBInstanceIdentifier, DBInstanceArn, [TagList[?Key==`Component`].Value][0][0], [TagList[?Key==`Engine`].Value][0][0]]' \
  --output text | sed -E 's/\s+/,/g' >> rdslist.csv
I am looking to write PowerShell code that works with AWS to list the EC2 key pairs that are not in use by any instance.
aws ec2 --profile default describe-key-pairs --query KeyPairs[].[KeyName] --output text | \
  xargs -I {} aws ec2 --profile default describe-instances --filters Name=key-name,Values={} \
  --query Reservations[].Instances[].[KeyName,InstanceId] --output text | uniq
This code doesn't work in PowerShell.
You need to work with the AWS PowerShell module.
The trick here is to work with two cmdlets - Get-EC2Instance and Get-EC2KeyPair. First of all, you need to get all the keys that are in use right now.
Then you get all key pairs and filter them with a basic foreach and if statement.
Take a look at the following code snippet :
Import-Module AWSPowerShell
$keysInUse = @()
$keysNotInUse = @()
#Set AWS Credential
Set-AWSCredential -AccessKey "AccessKey" -SecretKey "SecretKey"
#Get ec2 key name from each instance
$allInstancesKeys = (Get-EC2instance -Region "YourRegion").Instances.KeyName
#Get all key based on region and check if there's an instance who use this key
Get-EC2KeyPair -Region "YourRegion" | % {
if($_.KeyName -notin $allInstancesKeys)
{
$keysNotInUse += $_.KeyName
}
else
{
$keysInUse += $_.KeyName
}
}
Write-Output "Keys not in use: $($keysNotInUse -join ',')\n
Keys in use: $($keysInUse -join ',')"
(Screenshots of the instances with their key names, and of the script output, omitted.)
How to create a new AccessKey and SecretKey - Managing Access Keys for Your AWS Account.
AWSPowerShell Module installation.
More about Get-EC2KeyPair Cmdlet.
From the docs:
Describes the specified key pairs or all of your key pairs.
More about Get-EC2instance
Returns information about instances that you own.
I have a Jenkins 2.0 job where I require the user to select the list of servers to execute the job against via a Jenkins "Active Choices Reactive Parameter". These servers which the job will execute against are AWS EC2 instances. Instead of hard-coding the available servers in the "Active Choices Reactive Parameter", I'd like to query the AWS CLI to get a list of servers.
A few notes:
I've assigned the Jenkins 2.0 EC2 an IAM role which has sufficient privileges to query AWS via the CLI.
The AWS CLI is installed on the Jenkins EC2.
The "Active Choices Reactive Parameter" will return a list of checkboxes if I hardcode values in a Groovy script, such as:
return ["10.1.1.1", "10.1.1.2", 10.1.1.3"]
I know my awk commands can be improved, I'm not yet sure how, but my primary goal is to get the list of servers dynamically loaded in Jenkins.
I can run the following command directly on the EC2 instance which is hosting Jenkins:
aws ec2 describe-instances --region us-east-2 \
  --filters "Name=tag:Env,Values=qa" \
  --query "Reservations[*].Instances[*].PrivateIpAddress" \
  | grep -o '\"[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\"' \
  | awk {'printf $0 ", "'} \
  | awk {'print "[" $0'} \
  | awk {'gsub(/^[ \t]+|[ \t]+$/, ""); print'} \
  | awk {'print substr ($0, 1, length($0)-1)'} \
  | awk {'print $0 "]"'}
This will return the following, which is in the format expected by the "Active Choices Reactive Parameter":
["10.1.1.1", "10.1.1.2", 10.1.1.3"]
So, in the "Script" textarea of the "Active Choices Reactive Parameter", I have the following script. The problem is that my server list is never populated. I've tried numerous variations of this script without luck. Can someone please tell me where I've went wrong and what I can do to correct this script so that my list of server IP addresses is dynamically loaded into a Jenkins job?
def standardOut = new StringBuffer(), standardErr = new StringBuffer()
def command = $/
aws ec2 describe-instances --region us-east-2 --filters "Name=tag:Env,Values=qaint" --query "Reservations[*].Instances[*].PrivateIpAddress" |
grep -o '\"[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\"' |
awk {'printf $0 ", "'} |
awk {'print "[" $0'} |
awk {'gsub(/^[ \t]+|[ \t]+$/, ""); print'} |
awk {'print substr ($0, 1, length($0)-1)'} |
awk {'print $0 "]"'}
/$
def proc = command.execute()
proc.consumeProcessOutput(standardOut, standardErr)
proc.waitForOrKill(1000)
return standardOut
I tried to execute your script and standardErr had some errors. It looks like Groovy didn't like the double quotes in the AWS CLI command. Here is a cleaner way to do it without using awk:
def command = 'aws ec2 describe-instances \
--filters Name=tag:Name,Values=Test \
--query Reservations[*].Instances[*].PrivateIpAddress \
--output text'
def proc = command.execute()
proc.waitFor()
def output = proc.in.text
def exitcode= proc.exitValue()
def error = proc.err.text
if (error) {
println "Std Err: ${error}"
println "Process exit code: ${exitcode}"
return exitcode
}
//println output.split()
return output.split()
This script works with Jenkins Active Choices Parameter, and returns the list of IP addresses:
def aws_cmd = 'aws ec2 describe-instances \
--filters Name=instance-state-name,Values=running \
Name=tag:env,Values=dev \
--query Reservations[].Instances[].PrivateIpAddress[] \
--region us-east-2 \
--output text'
def aws_cmd_output = aws_cmd.execute()
// probably is required if execution takes long
//aws_cmd_output.waitFor()
def ip_list = aws_cmd_output.text.tokenize()
return ip_list
Will the command below work to delete AWS EC2 snapshots older than a month?
aws describe-snapshots | grep -v (date +%Y-%m-) | grep snap- | awk '{print $2}' | xargs -n 1 -t aws delete-snapshot
Your command won't work mostly because of a typo: aws describe-snapshots should be aws ec2 describe-snapshots.
Anyway, you can do this without any other tools than aws:
snapshots_to_delete=$(aws ec2 describe-snapshots --owner-ids xxxxxxxxxxxx --query 'Snapshots[?StartTime<=`2017-02-15`].SnapshotId' --output text)
echo "List of snapshots to delete: $snapshots_to_delete"
# actual deletion
for snap in $snapshots_to_delete; do
aws ec2 delete-snapshot --snapshot-id $snap
done
Make sure you always know what you are deleting - by echoing $snap, for example.
Also, adding --dry-run to aws ec2 delete-snapshot can show you that there are no errors in the request.
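For example (snap-0123456789abcdef0 is just a placeholder snapshot ID):
aws ec2 delete-snapshot --snapshot-id snap-0123456789abcdef0 --dry-run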
Edit:
There are two things to pay attention to in the first command:
--owner-ids - your account's unique ID. It can easily be found manually in the top right corner of the AWS Console: Support -> Support Center -> Account Number xxxxxxxxxxxx
--query - a JMESPath query which selects only snapshots created on or before the specified date (e.g. 2017-02-15): Snapshots[?StartTime<=`2017-02-15`].SnapshotId
+1 to @roman-zhuzha for getting me close. I did have trouble when $snapshots_to_delete didn't parse into a long string of snapshots separated by whitespace.
The script below does parse them into a long string of snapshot IDs separated by whitespace on my Ubuntu (trusty) 14.04 in bash with awscli 1.16:
#!/usr/bin/env bash
dry_run=1
echo_progress=1
d=$(date +'%Y-%m-%d' -d '1 month ago')
if [ $echo_progress -eq 1 ]
then
echo "Date of snapshots to delete (if older than): $d"
fi
snapshots_to_delete=$(aws ec2 describe-snapshots \
--owner-ids xxxxxxxxxxxxx \
--output text \
--query "Snapshots[?StartTime<'$d'].SnapshotId" \
)
if [ $echo_progress -eq 1 ]
then
echo "List of snapshots to delete: $snapshots_to_delete"
fi
for oldsnap in $snapshots_to_delete; do
# some $oldsnaps will be in use, so you can't delete them
# for "snap-a1234xyz" currently in use by "ami-zyx4321ab"
# (and others it can't delete) add conditionals like this
if [ "$oldsnap" = "snap-a1234xyz" ] ||
[ "$oldsnap" = "snap-c1234abc" ]
then
if [ $echo_progress -eq 1 ]
then
echo "skipping $oldsnap known to be in use by an ami"
fi
continue
fi
if [ $echo_progress -eq 1 ]
then
echo "deleting $oldsnap"
fi
if [ $dry_run -eq 1 ]
then
# dryrun will not actually delete the snapshots
aws ec2 delete-snapshot --snapshot-id $oldsnap --dry-run
else
aws ec2 delete-snapshot --snapshot-id $oldsnap
fi
done
Switch these variables as necessary:
dry_run=1 # set this to 0 to actually delete
echo_progress=1 # set this to 0 to not echo stmnts
Change the date -d string to a human readable version of the number of days, months, or years back you want to delete "older than":
d=$(date +'%Y-%m-%d' -d '15 days ago') # half a month
Find your account id and update these XXXX's to that number:
--owner-ids xxxxxxxxxxxxx \
You can find that number in the top right corner of the AWS Console (Support -> Support Center).
If running this in a cron, you only want to see errors and warnings. A frequent warning will be that there are snapshots in use. The two example snapshot id's (snap-a1234xyz, snap-c1234abc) are ignored since they would otherwise print something like:
An error occurred (InvalidSnapshot.InUse) when calling the DeleteSnapshot operation: The snapshot snap-a1234xyz is currently in use by ami-zyx4321ab
See the comments near "snap-a1234xyx" example snapshot id for how to handle this output.
And don't forget to check on the handy examples and references in the 1.16 aws cli describe-snapshots manual.
You can use 'self' for '--owner-ids' and delete the snapshots created before a specific date (e.g. 2018-01-01) with this one-liner command:
for i in $(aws ec2 describe-snapshots --owner-ids self --query 'Snapshots[?StartTime<=`2018-01-01`].SnapshotId' --output text); do echo Deleting $i; aws ec2 delete-snapshot --snapshot-id $i; sleep 1; done;
The date condition must be within parentheses ():
aws ec2 describe-snapshots \
--owner-ids 012345678910 \
--query "Snapshots[?(StartTime<='2020-03-31')].[SnapshotId]"
I'm trying to remove all my AWS EC2 snapshots except the last 6 with this script:
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
# Backup script
Volume="{VOL-DATA}"
Owner="{OWNER}"
Description="{DESCRIPTION}"
Local_numbackups=6
Local_region="us-west-1"
# Remove old snapshots associated to a description, keep the last $Local_numbackups
aws ec2 describe-snapshots --filters Name=description,Values=$Description | grep "SnapshotId" | head -n -$Local_numbackups | awk '{print $2}' | sed -e 's/,//g' | xargs -n 1 -t aws ec2 delete-snapshot --snapshot-id
However, it doesn't work. It deletes snapshots, but not the oldest ones. Why?
You're trying to do something too complex to be handled (gracefully) in one line, so we'll need to break it down a bit. First, let's get the snapshots sorted by age, oldest to newest:
aws ec2 describe-snapshots --filters Name=description,Values=$Description --query 'Snapshots[*].[StartTime,SnapshotId]' --output text | sort -n
Then we can drop the StartTime field to get the snapshot ID alone:
aws ec2 describe-snapshots --filters Name=description,Values=$Description --query 'Snapshots[*].[StartTime,SnapshotId]' --output text | sort -n | sed -e 's/^.*\t//'
head (or tail) aren't really suitable for discarding the fixed number of snapshots we want to keep. We need to filter those out another way. So, putting it altogether:
# Get array of snapshot IDs sorted by age (oldest to newest)
snapshots=($(aws ec2 describe-snapshots --filters Name=description,Values=$Description --query 'Snapshots[*].[StartTime,SnapshotId]' --output text | sort -n | sed -e 's/^.*\t//'))
# Get number of snapshots
count=${#snapshots[@]}
if [ "$count" -lt "$Local_numbackups" ]; then
echo "We already have less than $Local_numbackups snapshots"
exit 0
else
# Drop the last (newest) $Local_numbackups IDs from the array
snapshots=(${snapshots[@]:0:$((count - Local_numbackups))})
# Loop through the remaining snapshots and delete
for snapshot in ${snapshots[@]}; do
aws ec2 delete-snapshot --snapshot-id $snapshot
done
fi
(While it's obviously possible to do this in bash with the AWS CLI, it's complex enough that I'd personally rather use a more robust language and the AWS SDK.)
Here is a sample.
days2keep="30"
region="us-west-2"
name="jdoe"
# date -j and -v flags are for OS X (BSD date)
cutoffdate=`date -j -v-${days2keep}d '+%Y-%m-%d'`
echo "Finding list of snapshots before $cutoffdate "
oldsnapids=$(aws ec2 describe-snapshots --region $region --filters Name=tag:Name,Values=$name --query Snapshots[?StartTime\<=\`$cutoffdate\`].SnapshotId --output text)
for snapid in $oldsnapids
do
echo Deleting snapshot $snapid
aws ec2 delete-snapshot --snapshot-id $snapid --region $region
done
We can delete all old snapshots using the steps below:
List all the snapshot IDs that are old and put them in one file, e.g. /opt/snapshot.txt
Then use the "aws configure" command to set up command-line access to your AWS account; at this point you need to provide credentials:
Such as:
AWS Access Key ID [None]: XXXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXXXXXXXXXXXXX
Default region name [None]: XXXXXXXXXXXXXXXX
After that we can use the shell script below; we just need to give it the file of snapshot IDs.
Code:
#!/bin/bash
list=$(cat /opt/snapshot.txt)
for i in $list
do
aws ec2 delete-snapshot --snapshot-id $i
if [ $? -eq 0 ]; then
echo Going Good
else
echo FAIL
fi
done
Thanks