Automating network interface related configuration on Red Hat AMI 7.5 - amazon-web-services

I have an ENI created, and I need to attach it as a secondary ENI to my EC2 instance dynamically using CloudFormation. As I am using a Red Hat AMI, I have to manually configure RHEL, which involves the steps described in the post below.
Manually Configuring secondary Elastic network interface on Red hat ami- 7.5
Can someone please tell me how to automate all of this using CloudFormation? Is there a way to do all of it using user data in a CloudFormation template? I also need the configuration to persist across reboots of my EC2 instance (currently the configuration is lost after a reboot).

This is not complete automation, but you can do the following to make sure the ENI comes up after every reboot of your EC2 instance (RHEL instances only). If anyone has a better suggestion, please share.
vi /etc/systemd/system/create.service
Add the following content:
[Unit]
Description=XYZ
After=network.target
[Service]
ExecStart=/usr/local/bin/my.sh
[Install]
WantedBy=multi-user.target
Change permissions and enable the service:
chmod a+x /etc/systemd/system/create.service
systemctl enable /etc/systemd/system/create.service
The shell script below does the ENI configuration on RHEL:
vi /usr/local/bin/my.sh
Add the following content:
#!/bin/bash
# replace the MAC address below with the MAC of your secondary ENI
my_eth1=`curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/0e:3f:96:77:bb:f8/local-ipv4s/`
echo "this is the value--" $my_eth1 "hoo"
GATEWAY=`ip route | awk '/default/ { print $3 }'`
printf "NETWORKING=yes\nNOZEROCONF=yes\nGATEWAYDEV=eth0\n" >/etc/sysconfig/network
printf "\nBOOTPROTO=dhcp\nDEVICE=eth1\nONBOOT=yes\nTYPE=Ethernet\nUSERCTL=no\n" >/etc/sysconfig/network-scripts/ifcfg-eth1
ifup eth1
ip route add default via $GATEWAY dev eth1 tab 2
ip rule add from $my_eth1/32 tab 2 priority 600
Start the service
systemctl start create.service
You can check whether the script ran fine with:
journalctl -u create.service -b

I still need to figure out attaching the secondary ENI from within Linux, but this is the Python script I wrote to have the instance find the corresponding ENI and attach it to itself. The script works by taking a predefined Name tag for both the ENI and the instance, then joining the two together.
Pre-reqs for setting this up are:
IAM role on the instance to allow access to the S3 bucket where the script is stored
Install pip and the AWS CLI in the user data section:
curl -O https://bootstrap.pypa.io/get-pip.py
python get-pip.py
pip install awscli --upgrade
aws configure set default.region YOUR_REGION_HERE
pip install boto3
sleep 180
Note on sleep 180 command: I have my ENI swap out on instance in an autoscaling group. This allows an extra 3 min for the other instance to shut down and drop the ENI, so the new one can pick it up. May or may not be necessary for your use case.
AWS CLI command in user data to download the file onto the instance (example below):
aws s3api get-object --bucket YOURBUCKETNAME --key NAMEOFOBJECT.py /home/ec2-user/NAMEOFOBJECT.py
# coding: utf-8
import boto3
import sys
import time

client = boto3.client('ec2')

# Get the ENI ID
eni = client.describe_network_interfaces(
    Filters=[
        {
            'Name': 'tag:Name',
            'Values': ['Put the name of your ENI tag here']
        },
    ]
)
eni_id = eni['NetworkInterfaces'][0]['NetworkInterfaceId']

# Get ENI status
eni_status = eni['NetworkInterfaces'][0]['Status']
print('Current Status: {}\n'.format(eni_status))

# Detach if in use
if eni_status == 'in-use':
    eni_attach_id = eni['NetworkInterfaces'][0]['Attachment']['AttachmentId']
    eni_detach = client.detach_network_interface(
        AttachmentId=eni_attach_id,
        DryRun=False,
        Force=False
    )
    print(eni_detach)

# Wait until the ENI is available
print('start\n-----')
while eni_status != 'available':
    print('checking...')
    eni_state = client.describe_network_interfaces(
        Filters=[
            {
                'Name': 'tag:Name',
                'Values': ['Put the name of your ENI tag here']
            },
        ]
    )
    eni_status = eni_state['NetworkInterfaces'][0]['Status']
    print('ENI is currently: ' + eni_status + '\n')
    if eni_status != 'available':
        time.sleep(10)
print('end')

# Get the instance ID
instance = client.describe_instances(
    Filters=[
        {
            'Name': 'tag:Name',
            'Values': ['Put the tag name of your instance here']
        },
        {
            'Name': 'instance-state-name',
            'Values': ['running']
        }
    ]
)
instance_id = instance['Reservations'][0]['Instances'][0]['InstanceId']

# Attach the ENI
response = client.attach_network_interface(
    DeviceIndex=1,
    DryRun=False,
    InstanceId=instance_id,
    NetworkInterfaceId=eni_id
)
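As an aside, the manual polling loop in the script above can be replaced with boto3's built-in network_interface_available waiter, which polls describe_network_interfaces for you. A minimal sketch, where the helper names and the 'my-eni' tag value are illustrative placeholders, not part of the original script:

```python
# Sketch: wait for an ENI to become available using boto3's built-in
# waiter instead of a manual sleep loop. Helper names and the tag value
# are illustrative placeholders.

def find_eni_id(client, tag_name):
    """Return the ID of the first ENI whose Name tag matches tag_name."""
    resp = client.describe_network_interfaces(
        Filters=[{'Name': 'tag:Name', 'Values': [tag_name]}]
    )
    return resp['NetworkInterfaces'][0]['NetworkInterfaceId']

def wait_until_available(client, eni_id):
    """Block until the ENI's status is 'available'."""
    waiter = client.get_waiter('network_interface_available')
    waiter.wait(NetworkInterfaceIds=[eni_id])

# usage (against real AWS):
#   import boto3
#   ec2 = boto3.client('ec2')
#   eni_id = find_eni_id(ec2, 'my-eni')
#   wait_until_available(ec2, eni_id)
```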

Related

Why don't I create Amazon lightsailclient and set up UserData?

var shuju = new CreateInstancesRequest()
{
    BlueprintId = "centos_7_1901_01",
    BundleId = "micro_2_0",
    AvailabilityZone = "ap-northeast-1d",
    InstanceNames = new System.Collections.Generic.List<string>() { "test" },
    UserData = "echo root:test123456- |sudo chpasswd root\r\nsudo sed -i 's/^#\\?PermitRootLogin.*/PermitRootLogin yes/g' /etc/ssh/sshd_config;\r\nsudo sed -i 's/^#\\?PasswordAuthentication.*/PasswordAuthentication yes/g' /etc/ssh/sshd_config;\r\nsudo reboot\r\n"
};
If you wish to run a User Data script on a Linux instance, then the first line must begin with #!.
It uses the same technique as an Amazon EC2 instance, so see: Running Commands on Your Linux Instance at Launch - Amazon Elastic Compute Cloud
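For comparison, a rough boto3 sketch of the same Lightsail call with the user data corrected to start with #!. The parameter values are copied from the C# example above; treat the script body as illustrative:

```python
# Sketch: boto3 equivalent of the C# CreateInstancesRequest above.
# The user data now begins with "#!" so Lightsail will execute it.

USER_DATA = """#!/bin/bash
echo 'root:test123456-' | chpasswd
sed -i 's/^#\\?PermitRootLogin.*/PermitRootLogin yes/g' /etc/ssh/sshd_config
sed -i 's/^#\\?PasswordAuthentication.*/PasswordAuthentication yes/g' /etc/ssh/sshd_config
reboot
"""

def create_test_instance(client):
    """Create the 'test' instance; client is a boto3 Lightsail client."""
    return client.create_instances(
        instanceNames=['test'],
        availabilityZone='ap-northeast-1d',
        blueprintId='centos_7_1901_01',
        bundleId='micro_2_0',
        userData=USER_DATA,
    )

# usage: create_test_instance(boto3.client('lightsail'))
```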

How to delete Rds instance along with Rds cluster using boto3 in AWS

I need to delete an RDS instance and its RDS cluster using boto3.
Note: I created the RDS instance under the cluster in the Amazon console.
I used the code below, however I am getting an error:
Syntax error in module 'lambda_function': expected an indented block (lambda_function.py, line 25)
ec2 = boto3.client('ec2')
# Get list of regions
regions = ec2.describe_regions().get('Regions', [])
# Iterate over regions
for region in regions:
    # Running following for a particular region
    print("*************** Checking region -- %s " % region['RegionName'])
    reg = region['RegionName']
    ####### deleting rds cluster ###############
    print("++++++++++++++ Deleting RDS cluster ++++++++++++++")
    client = boto3.client('rds')
    response = client.describe_db_instance(Filters=[{'Name': 'string'}])
for instance in response["DBInstances"]:
print("About to delete %s" % (instance['DBInstanceIdentifier']))
response = client.delete_db_instance(DBInstanceIdentifier=instance['DBInstanceIdentifier'])
SkipFinalSnapshot=True
DeleteAutomatedBackups=True
I need to delete RDS cluster and RDS db instances
NOTE: It would be better if it is possible in all regions in my account
Your for loop at the bottom is not indented properly. It should be something like below:
for cluster in result["rds"]:
    print("About to delete %s" % (cluster['DBInstanceIdentifier']))
    response = client.delete_db_instance(
        DBInstanceIdentifier=cluster['DBInstanceIdentifier'],
        SkipFinalSnapshot=True,
        DeleteAutomatedBackups=True
    )
Effectively, to delete a db cluster from boto3, you need to:
Delete all instances in the cluster
Delete the cluster
Note: both are required. If you do not delete the cluster, you will have a cluster with zero instances, and if you do not delete the instances, you will get the following error:
botocore.errorfactory.InvalidDBClusterStateFault: An error occurred (InvalidDBClusterStateFault) when calling the DeleteDBCluster operation: Cluster cannot be deleted, it still contains DB instances in non-deleting state.
Great - so how do we do it?
instance_name = 'instance-name'
rds.delete_db_instance(
    DBInstanceIdentifier=instance_name,
    # SkipFinalSnapshot must be True unless you also pass
    # FinalDBSnapshotIdentifier
    SkipFinalSnapshot=True,
    DeleteAutomatedBackups=True,
)

# note: this is the cluster identifier, not the cluster endpoint
cluster_name = 'cluster-name'
rds.delete_db_cluster(
    DBClusterIdentifier=cluster_name,
    SkipFinalSnapshot=False,
    FinalDBSnapshotIdentifier=cluster_name,
)
A more proper solution would be to describe your clusters (rds.describe_db_clusters), then describe the instances on those clusters (rds.describe_db_instances). As you iterate through the instances on each cluster, you delete each instance. After you have deleted all instances on the cluster, you can delete the cluster.
The below is untested but is definitely close; community, feel free to update where needed.
import boto3

rds = boto3.client('rds')

# pull our rds clusters
db_clusters = rds.describe_db_clusters()

# for each cluster
for cluster in db_clusters['DBClusters']:
    # grab the cluster identifier
    cluster_name = cluster['DBClusterIdentifier']
    db_instances = rds.describe_db_instances(Filters=[
        {
            'Name': 'db-cluster-id',
            'Values': [
                cluster_name,
            ]
        }
    ])
    # for each instance on this cluster
    for instance in db_instances['DBInstances']:
        instance_name = instance['DBInstanceIdentifier']
        print(f'Now deleting {instance_name}.')
        # delete the instance (skip the instance-level snapshot; a final
        # snapshot is taken at the cluster level below)
        rds.delete_db_instance(
            DBInstanceIdentifier=instance_name,
            SkipFinalSnapshot=True,
            DeleteAutomatedBackups=True,
        )
    # now that we have finished deleting the instances, we can delete the cluster
    print(f'Now deleting {cluster_name}.')
    rds.delete_db_cluster(
        DBClusterIdentifier=cluster_name,
        SkipFinalSnapshot=False,
        FinalDBSnapshotIdentifier=cluster_name,
    )
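One caveat: delete_db_instance only starts the deletion, so calling delete_db_cluster straight away can still raise InvalidDBClusterStateFault. A hedged sketch of the same flow using boto3's db_instance_deleted waiter to block until the instances are gone; the function names and snapshot handling are my own choices, untested against a live account:

```python
# Sketch: delete all instances in a cluster, wait for the deletions to
# finish, then delete the cluster. Skips final instance snapshots and
# takes one final snapshot at the cluster level.

def instance_ids_for_cluster(rds, cluster_name):
    """Return the instance identifiers belonging to cluster_name."""
    resp = rds.describe_db_instances(
        Filters=[{'Name': 'db-cluster-id', 'Values': [cluster_name]}]
    )
    return [i['DBInstanceIdentifier'] for i in resp['DBInstances']]

def delete_cluster(rds, cluster_name):
    ids = instance_ids_for_cluster(rds, cluster_name)
    for instance_id in ids:
        rds.delete_db_instance(
            DBInstanceIdentifier=instance_id,
            SkipFinalSnapshot=True,
            DeleteAutomatedBackups=True,
        )
    # block until every instance is actually gone
    waiter = rds.get_waiter('db_instance_deleted')
    for instance_id in ids:
        waiter.wait(DBInstanceIdentifier=instance_id)
    rds.delete_db_cluster(
        DBClusterIdentifier=cluster_name,
        SkipFinalSnapshot=False,
        FinalDBSnapshotIdentifier=cluster_name + '-final',
    )

# usage: delete_cluster(boto3.client('rds'), 'my-cluster')
```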

Using Terraform how to get EC2 to reference a Cloudformation Datomic instance

Given the Datomic Cloudformation template (described here and here), I can deploy a Datomic instance in AWS. I can also use Terraform to automate this.
Using Terraform, how do we put a load balancer in front of the instance defined in the CloudFormation template?
Using Terraform, how do we put a Route53 domain name in front of the Datomic instance (or load balancer) in the Cloudformation template?
The Datomic Cloudformation template looks like this:
cf.json
{"Resources":
{"LaunchGroup":
{"Type":"AWS::AutoScaling::AutoScalingGroup",
"Properties":
{"MinSize":{"Ref":"GroupSize"},
"Tags":
[{"Key":"Name",
"Value":{"Ref":"AWS::StackName"},
"PropagateAtLaunch":"true"}],
"MaxSize":{"Ref":"GroupSize"},
"AvailabilityZones":{"Fn::GetAZs":""},
"LaunchConfigurationName":{"Ref":"LaunchConfig"}}},
"LaunchConfig":
{"Type":"AWS::AutoScaling::LaunchConfiguration",
"Properties":
{"ImageId":
{"Fn::FindInMap":
["AWSRegionArch2AMI", {"Ref":"AWS::Region"},
{"Fn::FindInMap":
["AWSInstanceType2Arch", {"Ref":"InstanceType"}, "Arch"]}]},
"UserData":
{"Fn::Base64":
{"Fn::Join":
["\n",
["exec > >(tee \/var\/log\/user-data.log|logger -t user-data -s 2>\/dev\/console) 2>&1",
{"Fn::Join":["=", ["export XMX", {"Ref":"Xmx"}]]},
{"Fn::Join":["=", ["export JAVA_OPTS", {"Ref":"JavaOpts"}]]},
{"Fn::Join":
["=",
["export DATOMIC_DEPLOY_BUCKET",
{"Ref":"DatomicDeployBucket"}]]},
{"Fn::Join":
["=", ["export DATOMIC_VERSION", {"Ref":"DatomicVersion"}]]},
"cd \/datomic", "cat <<EOF >aws.properties",
"host=`curl http:\/\/169.254.169.254\/latest\/meta-data\/local-ipv4`",
"alt-host=`curl http:\/\/169.254.169.254\/latest\/meta-data\/public-ipv4`",
"aws-dynamodb-region=us-east-1\naws-transactor-role=datomic-aws-transactor-10\naws-peer-role=datomic-aws-peer-10\nprotocol=ddb\nmemory-index-max=256m\nport=4334\nmemory-index-threshold=32m\nobject-cache-max=128m\nlicense-key=\naws-dynamodb-table=your-system-name",
"EOF", "chmod 744 aws.properties",
"AWS_ACCESS_KEY_ID=\"${DATOMIC_READ_DEPLOY_ACCESS_KEY_ID}\" AWS_SECRET_ACCESS_KEY=\"${DATOMIC_READ_DEPLOY_AWS_SECRET_KEY}\" aws s3 cp \"s3:\/\/${DATOMIC_DEPLOY_BUCKET}\/${DATOMIC_VERSION}\/startup.sh\" startup.sh",
"chmod 500 startup.sh", ".\/startup.sh"]]}},
"InstanceType":{"Ref":"InstanceType"},
"InstanceMonitoring":{"Ref":"InstanceMonitoring"},
"SecurityGroups":{"Ref":"SecurityGroups"},
"IamInstanceProfile":{"Ref":"InstanceProfile"},
"BlockDeviceMappings":
[{"DeviceName":"\/dev\/sdb", "VirtualName":"ephemeral0"}]}}},
"Mappings":
{"AWSInstanceType2Arch":
{"m3.large":{"Arch":"64h"},
"c4.8xlarge":{"Arch":"64h"},
"t2.2xlarge":{"Arch":"64h"},
"c3.large":{"Arch":"64h"},
"hs1.8xlarge":{"Arch":"64h"},
"i2.xlarge":{"Arch":"64h"},
"r4.4xlarge":{"Arch":"64h"},
"m1.small":{"Arch":"64p"},
"m4.large":{"Arch":"64h"},
"m4.xlarge":{"Arch":"64h"},
"c3.8xlarge":{"Arch":"64h"},
"m1.xlarge":{"Arch":"64p"},
"cr1.8xlarge":{"Arch":"64h"},
"m4.10xlarge":{"Arch":"64h"},
"i3.8xlarge":{"Arch":"64h"},
"m3.2xlarge":{"Arch":"64h"},
"r4.large":{"Arch":"64h"},
"c4.xlarge":{"Arch":"64h"},
"t2.medium":{"Arch":"64h"},
"t2.xlarge":{"Arch":"64h"},
"c4.large":{"Arch":"64h"},
"c3.2xlarge":{"Arch":"64h"},
"m4.2xlarge":{"Arch":"64h"},
"i3.2xlarge":{"Arch":"64h"},
"m2.2xlarge":{"Arch":"64p"},
"c4.2xlarge":{"Arch":"64h"},
"cc2.8xlarge":{"Arch":"64h"},
"hi1.4xlarge":{"Arch":"64p"},
"m4.4xlarge":{"Arch":"64h"},
"i3.16xlarge":{"Arch":"64h"},
"r3.4xlarge":{"Arch":"64h"},
"m1.large":{"Arch":"64p"},
"m2.4xlarge":{"Arch":"64p"},
"c3.4xlarge":{"Arch":"64h"},
"r3.large":{"Arch":"64h"},
"c4.4xlarge":{"Arch":"64h"},
"r3.xlarge":{"Arch":"64h"},
"m2.xlarge":{"Arch":"64p"},
"r4.16xlarge":{"Arch":"64h"},
"t2.large":{"Arch":"64h"},
"m3.xlarge":{"Arch":"64h"},
"i2.4xlarge":{"Arch":"64h"},
"r4.8xlarge":{"Arch":"64h"},
"i3.large":{"Arch":"64h"},
"r3.8xlarge":{"Arch":"64h"},
"c1.medium":{"Arch":"64p"},
"r4.2xlarge":{"Arch":"64h"},
"i2.8xlarge":{"Arch":"64h"},
"m3.medium":{"Arch":"64h"},
"r3.2xlarge":{"Arch":"64h"},
"m1.medium":{"Arch":"64p"},
"i3.4xlarge":{"Arch":"64h"},
"m4.16xlarge":{"Arch":"64h"},
"i3.xlarge":{"Arch":"64h"},
"r4.xlarge":{"Arch":"64h"},
"c1.xlarge":{"Arch":"64p"},
"t1.micro":{"Arch":"64p"},
"c3.xlarge":{"Arch":"64h"},
"i2.2xlarge":{"Arch":"64h"},
"t2.small":{"Arch":"64h"}},
"AWSRegionArch2AMI":
{"ap-northeast-1":{"64p":"ami-eb494d8c", "64h":"ami-81f7cde6"},
"ap-northeast-2":{"64p":"ami-6eb66a00", "64h":"ami-f594489b"},
"ca-central-1":{"64p":"ami-204bf744", "64h":"ami-5e5be73a"},
"us-east-2":{"64p":"ami-5b42643e", "64h":"ami-896c4aec"},
"eu-west-2":{"64p":"ami-e52d3a81", "64h":"ami-55091e31"},
"us-west-1":{"64p":"ami-97cbebf7", "64h":"ami-442a0a24"},
"ap-southeast-1":{"64p":"ami-db1492b8", "64h":"ami-3e90165d"},
"us-west-2":{"64p":"ami-daa5c6ba", "64h":"ami-cb5030ab"},
"eu-central-1":{"64p":"ami-f3f02b9c", "64h":"ami-d564bcba"},
"us-east-1":{"64p":"ami-7f5f1e69", "64h":"ami-da5110cc"},
"eu-west-1":{"64p":"ami-66001700", "64h":"ami-77465211"},
"ap-southeast-2":{"64p":"ami-32cbdf51", "64h":"ami-66647005"},
"ap-south-1":{"64p":"ami-82126eed", "64h":"ami-723c401d"},
"sa-east-1":{"64p":"ami-afd7b9c3", "64h":"ami-ab9af4c7"}}},
"Parameters":
{"InstanceType":
{"Description":"Type of EC2 instance to launch",
"Type":"String",
"Default":"c3.large"},
"InstanceProfile":
{"Description":"Preexisting IAM role \/ instance profile",
"Type":"String",
"Default":"datomic-aws-transactor-10"},
"Xmx":
{"Description":"Xmx setting for the JVM",
"Type":"String",
"AllowedPattern":"\\d+[GgMm]",
"Default":"2625m"},
"GroupSize":
{"Description":"Size of machine group",
"Type":"String",
"Default":"1"},
"InstanceMonitoring":
{"Description":"Detailed monitoring for store instances?",
"Type":"String",
"Default":"true"},
"JavaOpts":
{"Description":"Options passed to Java launcher",
"Type":"String",
"Default":""},
"SecurityGroups":
{"Description":"Preexisting security groups.",
"Type":"CommaDelimitedList",
"Default":"datomic"},
"DatomicDeployBucket":
{"Type":"String",
"Default":"deploy-a0dbc565-faf2-4760-9b7e-29a8e45f428e"},
"DatomicVersion":{"Type":"String", "Default":"0.9.5561.50"}},
"Description":"Datomic Transactor Template"}
samples/cf-template.properties
#################################################################
# AWS instance and group settings
#################################################################
# required
# AWS instance type. See http://aws.amazon.com/ec2/instance-types/ for
# a list of legal instance types.
aws-instance-type=c3.large
# required, see http://docs.amazonwebservices.com/general/latest/gr/rande.html#ddb_region
aws-region=us-east-1
# required
# Enable detailed monitoring of AWS instances.
aws-instance-monitoring=true
# required
# Set group size >1 to create a standby pool for High Availability.
aws-autoscaling-group-size=1
# required, default = 70% of AWS instance RAM
# Passed to java launcher via -Xmx
java-xmx=
#################################################################
# Java VM options
#
# If you set the java-opts property, it will entirely replace the
# value used by bin/transactor, which you should consult as a
# starting point if you are configuring GC.
#
# Note that the single-quoting is necessary due to the whitespace
# between options.
#################################################################
# java-opts='-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly'
#################################################################
# security settings
#
# You must specify at least one of aws-ingress-groups or
# aws-ingress-cidrs to allow peers to connect!
#################################################################
# required
# The transactor needs to run in a security group that opens the
# transactor port to legal peers. If you specify a security group,
# `bin/transactor ensure-cf ...` will ensure that security group
# allows ingress on the transactor port.
aws-security-group=datomic
# Comma-delimited list of security groups. Security group syntax:
# group-name or aws-account-id:group-name
aws-ingress-groups=datomic
# Comma-delimited list of CIDRS.
# aws-ingress-cidrs=0.0.0.0/0
#################################################################
# datomic deployment settings
#################################################################
# required, default = VERSION number of Datomic you deploy from
# Which Datomic version to run.
datomic-version=
# required
# download Datomic from this bucket on startup. You typically will not change this.
datomic-deploy-s3-bucket=some-value
Unless you really can't avoid it, I wouldn't recommend mixing CloudFormation with Terraform, because it makes a lot of things painful. Normally I'd only recommend it for the rare occasions where CloudFormation covers a resource that Terraform doesn't.
If you do need to do this you should be in luck because your Cloudformation template adds a tag to the autoscaling group with your instance(s) in that you can use to then link a load balancer to the autoscaling group and have the instances attach themselves to the load balancer as they are created (and detach when they are being deleted).
Unfortunately the Cloudformation template doesn't simply output the autoscaling group name so you'll probably need to do this in two separate terraform apply actions (probably keeping the configuration in separate folders).
Assuming something like this for your Cloudformation stack:
resource "aws_cloudformation_stack" "datomic" {
  name = "datomic-stack"
  ...
}
Then a minimal example looks something like this:
data "aws_autoscaling_groups" "datomic" {
  filter {
    name   = "key"
    values = ["AWS::StackName"]
  }

  filter {
    name   = "value"
    values = ["datomic-stack"]
  }
}

resource "aws_lb_target_group" "datomic" {
  name     = "datomic-lb-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = "${var.vpc_id}"
}

resource "aws_lb" "datomic" {
  name            = "datomic-lb"
  internal        = false
  security_groups = ["${var.security_group_id}"]
  subnets         = ["${var.subnet_id}"]
}

resource "aws_autoscaling_attachment" "asg_attachment" {
  autoscaling_group_name = "${data.aws_autoscaling_groups.datomic.names[0]}"
  alb_target_group_arn   = "${aws_lb_target_group.datomic.arn}"
}

resource "aws_lb_listener" "datomic" {
  load_balancer_arn = "${aws_lb.datomic.arn}"
  port              = "80"
  protocol          = "HTTP"

  default_action {
    target_group_arn = "${aws_lb_target_group.datomic.arn}"
    type             = "forward"
  }
}
The above config will find the autoscaling group created by the Cloudformation template and then attach it to an application load balancer that listens for HTTP traffic and forwards HTTP traffic to the Datomic instances.
It's trivial from here to add a Route53 record to the load balancer but because your instances are in an autoscaling group you can't easily add Route53 records for these instances (and probably shouldn't need to).

Cloning existing EMR cluster into a new one using boto3

When creating a new cluster using boto3, I want to use the configuration of an existing (terminated) cluster and thus clone it.
As far as I know, emr_client.run_job_flow requires all the configuration (Instances, InstanceFleets, etc.) to be provided as parameters.
Is there any way I can clone an existing cluster, like I can from the AWS console for EMR?
What I can recommend is using the AWS CLI to launch your cluster.
It lets you version your cluster configuration, and you can easily load the step configuration from a JSON file.
aws emr create-cluster --name "Cluster's name" --ec2-attributes KeyName=SSH_KEY --instance-type m3.xlarge --release-label emr-5.2.1 --log-uri s3://mybucket/logs/ --enable-debugging --instance-count 1 --use-default-roles --applications Name=Spark --steps file://step.json
Where step.json looks like :
[
  {
    "Name": "Step #1",
    "Type": "SPARK",
    "Jar": "command-runner.jar",
    "Args": [
      "--deploy-mode", "cluster",
      "--class", "com.your.data.set.class",
      "s3://path/to/your/spark-job.jar",
      "-c", "s3://path/to/your/config/or/not",
      "--aws-access-key", "ACCESS_KEY",
      "--aws-secret-key", "SECRET_KEY"
    ],
    "ActionOnFailure": "CANCEL_AND_WAIT"
  }
]
(Multiple steps are fine too.)
After that you can always start up the same configured cluster, and, for example, schedule the whole cluster and its steps from one Airflow job.
But if you really want to use boto3, I suppose the describe_cluster() method can help you get all the information and use the returned object to fire up a new one.
There is no "emr export cli" available through the command line.
You should parse the parameters you want to clone from "describe-cluster".
See the sample below:
https://github.com/awslabs/aws-support-tools/tree/master/EMR/Get_EMR_CLI_Export
import boto3
import json
import sys
cluster_id = sys.argv[1]
client = boto3.client('emr')
clst = client.describe_cluster(ClusterId=cluster_id)
...
awscli += ' --steps ' + '\'' + json.dumps(cli_steps) + '\''
...
awscli += ' --instance-groups ' + '\'' + json.dumps(cli_igroups) + '\''
print(awscli)
It works by parsing the parameters from "describe-cluster" first, then building strings that fit "create-cluster" in the AWS CLI.
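If you would rather stay in boto3 end to end, the same idea works against run_job_flow: pull the fields you care about out of the describe_cluster() response and feed them back in. A partial, untested sketch; only a handful of fields are mapped, and Instances, Steps, and the rest still have to be filled in by hand:

```python
# Sketch: build a partial run_job_flow() argument dict from a
# describe_cluster() response. Field names follow the EMR API; the
# '-clone' suffix is just an illustrative naming choice.

def clone_params(described):
    """Map a subset of describe_cluster() output to run_job_flow kwargs."""
    c = described['Cluster']
    return {
        'Name': c['Name'] + '-clone',
        'ReleaseLabel': c['ReleaseLabel'],
        'Applications': [{'Name': a['Name']} for a in c['Applications']],
        'ServiceRole': c['ServiceRole'],
        'JobFlowRole': c['Ec2InstanceAttributes']['IamInstanceProfile'],
        'LogUri': c.get('LogUri', ''),
    }

# usage:
#   emr = boto3.client('emr')
#   params = clone_params(emr.describe_cluster(ClusterId='j-XXXXXXXXXXXXX'))
#   params.update(Instances={...}, Steps=[...])  # fill in the rest by hand
#   emr.run_job_flow(**params)
```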

Getting a list of instances in an EC2 auto scale group?

Is there a utility or script available to retrieve a list of all instances from AWS EC2 auto scale group?
I need a dynamically generated list of production instances to hook into our deploy process. Is there an existing tool, or is this something I am going to have to script?
Here is a bash command that will give you the list of IP addresses of your instances in an AutoScaling group.
for ID in $(aws autoscaling describe-auto-scaling-instances --region us-east-1 --query AutoScalingInstances[].InstanceId --output text);
do
aws ec2 describe-instances --instance-ids $ID --region us-east-1 --query Reservations[].Instances[].PublicIpAddress --output text
done
(You might want to adjust the region and filter by AutoScaling group if you have several of them.)
From a higher-level point of view, I would question the need to connect to individual instances in an AutoScaling group. The dynamic nature of AutoScaling should encourage you to fully automate your deployment and admin processes. To quote an AWS customer: "If you need to ssh to your instance, change your deployment process."
--Seb
The describe-auto-scaling-groups command from the AWS Command Line Interface looks like what you're looking for.
Edit: Once you have the instance IDs, you can use the describe-instances command to fetch additional details, including the public DNS names and IP addresses.
You can use the describe-auto-scaling-instances CLI command and query for your AutoScaling group name.
Example:
aws autoscaling describe-auto-scaling-instances --region us-east-1
--query 'AutoScalingInstances[?AutoScalingGroupName==`YOUR_ASG`]' --output text
Hope that helps
You can also use the command below to fetch private IP addresses without any jq/awk/sed/cut:
$ aws autoscaling describe-auto-scaling-instances --region us-east-1 --output text \
  --query "AutoScalingInstances[?AutoScalingGroupName=='ASG-GROUP-NAME'].InstanceId" \
  | xargs -n1 -I{} aws ec2 describe-instances --instance-ids {} --region us-east-1 \
  --query "Reservations[].Instances[].PrivateIpAddress" --output text
courtesy this
I actually ended up writing a script in Python because I feel more comfortable in Python than Bash:
#!/usr/bin/env python
"""
ec2-autoscale-instance.py

Read Autoscale DNS from AWS

Sample config file,
{
    "access_key": "key",
    "secret_key": "key",
    "group_name": "groupName"
}
"""
from __future__ import print_function

import argparse
import boto.ec2.autoscale

try:
    import simplejson as json
except ImportError:
    import json

CONFIG_ACCESS_KEY = 'access_key'
CONFIG_SECRET_KEY = 'secret_key'
CONFIG_GROUP_NAME = 'group_name'


def main():
    arg_parser = argparse.ArgumentParser(
        description='Read Autoscale DNS names from AWS')
    arg_parser.add_argument('-c', dest='config_file',
                            help='JSON configuration file containing ' +
                                 'access_key, secret_key, and group_name')
    args = arg_parser.parse_args()
    config = json.loads(open(args.config_file).read())
    access_key = config[CONFIG_ACCESS_KEY]
    secret_key = config[CONFIG_SECRET_KEY]
    group_name = config[CONFIG_GROUP_NAME]
    ec2_conn = boto.connect_ec2(access_key, secret_key)
    as_conn = boto.connect_autoscale(access_key, secret_key)
    try:
        group = as_conn.get_all_groups([group_name])[0]
        instances_ids = [i.instance_id for i in group.instances]
        reservations = ec2_conn.get_all_reservations(instances_ids)
        instances = [i for r in reservations for i in r.instances]
        dns_names = [i.public_dns_name for i in instances]
        print('\n'.join(dns_names))
    finally:
        ec2_conn.close()
        as_conn.close()


if __name__ == '__main__':
    main()
Gist
The answer at https://stackoverflow.com/a/12592543/20774 was helpful in developing this script.
Use the snippet below to filter ASGs with specific tags and list their instance details.
#!/usr/bin/python
import boto3

ec2 = boto3.resource('ec2', region_name='us-west-2')


def get_instances():
    client = boto3.client('autoscaling', region_name='us-west-2')
    paginator = client.get_paginator('describe_auto_scaling_groups')
    groups = paginator.paginate(PaginationConfig={'PageSize': 100})
    # keep only ASGs carrying the tag Application=CCP
    filtered_asgs = groups.search(
        'AutoScalingGroups[] | [?contains(Tags[?Key==`{}`].Value, `{}`)]'.format(
            'Application', 'CCP'))
    for asg in filtered_asgs:
        print(asg['AutoScalingGroupName'])
        instance_ids = [i['InstanceId'] for i in asg['Instances']]
        running_instances = ec2.instances.filter(InstanceIds=instance_ids)
        for instance in running_instances:
            print(instance.private_ip_address)


if __name__ == '__main__':
    get_instances()
For Ruby, using the aws-sdk gem v2:
First create an EC2 resource object like this:
ec2 = Aws::EC2::Resource.new(
  region: 'region',
  credentials: Aws::Credentials.new('IAM_KEY', 'IAM_SECRET')
)

instances = []
ec2.instances.each do |i|
  p "instance id---", i.id
  instances << i.id
end
This will fetch all instance IDs in the given region; you can add more filters, such as ip_address.