How to automate EC2 instance startup and ssh connect - amazon-web-services

At the moment I connect manually with the following steps:
Open the EC2 Instances web console
Under Actions -> Instance State, click Start
Look at the Connect tab
Manually copy the ssh command, e.g.:
ssh -i "mykey.pem" ubuntu@ec2-13-112-241-333.ap-northeast-1.compute.amazonaws.com
What's the best practice to streamline these steps from the command line on my local computer, so that I can use just one command?

An approach with awscli would be:
# Start the instance
aws ec2 start-instances --instance-ids i-xxxxxxxxxxx
status=0
# Wait until the 2/2 status checks are passed
while [ "$status" -lt 2 ]
do
    sleep 5
    status=`aws ec2 describe-instance-status --instance-ids i-xxxxxxxxxxx --filters Name="instance-status.reachability,Values=passed" | grep '"Status": "passed"' | wc -l`
done
# Associate an Elastic IP if you already have one allocated (skip if not required)
aws ec2 associate-address --instance-id i-xxxxxxxxxxx --public-ip elastic_ip
# Get the public DNS name (if the instance has only a private IP, grep "PrivateIpAddress" instead)
public_dns=`aws ec2 describe-instances --instance-ids i-xxxxxxxxxxx | grep "PublicDnsName" | head -1 | awk -F: '{print $2}' | sed 's/\ "//g;s/",//g'`
ssh -i key.pem username@$public_dns
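The polling loop above can also be simplified: the CLI ships a built-in waiter and a --query option that remove the grep/sleep plumbing. A minimal sketch as a reusable function; the instance ID, key path and user name are placeholders for your own values:

```shell
# Minimal sketch of a one-command start-and-connect helper.
# The instance ID, key path and user below are placeholders.
ec2_start_and_connect() {
    local instance_id="i-xxxxxxxxxxx"   # placeholder
    local key="mykey.pem"               # placeholder
    local user="ubuntu"                 # placeholder

    aws ec2 start-instances --instance-ids "$instance_id"

    # Built-in waiter: blocks until both 2/2 status checks pass,
    # replacing the manual grep/sleep polling loop
    aws ec2 wait instance-status-ok --instance-ids "$instance_id"

    # --query extracts the DNS name directly; no grep/awk/sed needed
    local public_dns
    public_dns=$(aws ec2 describe-instances --instance-ids "$instance_id" \
        --query 'Reservations[0].Instances[0].PublicDnsName' --output text)

    ssh -i "$key" "${user}@${public_dns}"
}
```

Drop the function into ~/.bashrc (or save it as a script) and connecting becomes a single ec2_start_and_connect call.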

Related

How to force-remove network interfaces? AWS - Error detaching network interface

I created a stack with nested stacks, containing some network interfaces, a VPC, etc.
I am trying to remove a network interface, but I can't because I get this error:
Error detaching network interface
eni-0d3be6d4c7869686a: You are not allowed to manage 'ela-attach' attachments.
Any ideas on how to force-remove it?
I had the same issue with multiple CloudFormation stacks.
Stacks fail to delete when AWS resources still in use are attached to the VPC. One approach that worked for me was to use the following script to find the dependencies, then delete them manually before deleting the VPC (delete all dependencies the script reports, leaving the network interfaces for last). Once done, deleting the CF stacks from the management console worked without any issue.
Let us know if this worked.
#!/bin/bash
vpc="vpc-xxxxxxxxxxxxx"
aws ec2 describe-internet-gateways --filters 'Name=attachment.vpc-id,Values='$vpc | grep InternetGatewayId
aws ec2 describe-subnets --filters 'Name=vpc-id,Values='$vpc | grep SubnetId
aws ec2 describe-route-tables --filters 'Name=vpc-id,Values='$vpc | grep RouteTableId
aws ec2 describe-network-acls --filters 'Name=vpc-id,Values='$vpc | grep NetworkAclId
aws ec2 describe-vpc-peering-connections --filters 'Name=requester-vpc-info.vpc-id,Values='$vpc | grep VpcPeeringConnectionId
aws ec2 describe-vpc-endpoints --filters 'Name=vpc-id,Values='$vpc | grep VpcEndpointId
aws ec2 describe-nat-gateways --filter 'Name=vpc-id,Values='$vpc | grep NatGatewayId
aws ec2 describe-security-groups --filters 'Name=vpc-id,Values='$vpc | grep GroupId
aws ec2 describe-instances --filters 'Name=vpc-id,Values='$vpc | grep InstanceId
aws ec2 describe-vpn-connections --filters 'Name=vpc-id,Values='$vpc | grep VpnConnectionId
aws ec2 describe-vpn-gateways --filters 'Name=attachment.vpc-id,Values='$vpc | grep VpnGatewayId
aws ec2 describe-network-interfaces --filters 'Name=vpc-id,Values='$vpc | grep NetworkInterfaceId
Reference : https://aws.amazon.com/premiumsupport/knowledge-center/troubleshoot-dependency-error-delete-vpc/
Find the resource that the ENI is attached to. It could be a Lambda function or ELB, for example. Was that resource created outside of your CloudFormation stack? If so, you'll need to delete that resource. If it was created within the CloudFormation stack, then you might just need to wait and retry (e.g. if a warm Lambda function was holding on to the ENI).
Steps are described in more detail here. Other ideas here.
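To find out what is holding the ENI, as suggested above, you can describe the interface itself; its Description and RequesterId fields usually name the owning service (a Lambda function, an ELB, etc.). A hedged sketch, using the ENI ID from the error message:

```shell
# Sketch: describe the stuck ENI. The Description field usually names
# the owning service (e.g. "AWS Lambda VPC ENI ..." or an ELB name);
# the ENI ID is the one from the error message above.
describe_eni_owner() {
    aws ec2 describe-network-interfaces \
        --network-interface-ids eni-0d3be6d4c7869686a \
        --query 'NetworkInterfaces[0].{Description:Description,Status:Status,RequesterId:RequesterId}' \
        --output table
}
```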

Launch ECS container instance to cluster and run task definition using userdata

I am trying to launch an ECS container instance, passing user data to register it to a cluster and also run a task definition.
When the task is complete, the instance is terminated.
I am following the guide in the AWS docs on starting a task at container instance launch.
User data below (cluster and task definition parameters omitted):
Content-Type: multipart/mixed; boundary="==BOUNDARY=="
MIME-Version: 1.0
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
# Specify the cluster that the container instance should register into
cluster=my_cluster
# Write the cluster configuration variable to the ecs.config file
# (add any other configuration variables here also)
echo ECS_CLUSTER=$cluster >> /etc/ecs/ecs.config
# Install the AWS CLI and the jq JSON parser
yum install -y aws-cli jq
--==BOUNDARY==
Content-Type: text/upstart-job; charset="us-ascii"
#upstart-job
description "Amazon EC2 Container Service (start task on instance boot)"
author "Amazon Web Services"
start on started ecs
script
exec 2>>/var/log/ecs/ecs-start-task.log
set -x
until curl -s http://localhost:51678/v1/metadata
do
sleep 1
done
# Grab the container instance ARN and AWS region from instance metadata
instance_arn=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F/ '{print $NF}' )
cluster=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .Cluster' | awk -F/ '{print $NF}' )
region=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F: '{print $4}')
# Specify the task definition to run at launch
task_definition=my_task_def
# Run the AWS CLI start-task command to start your task on this container instance
aws ecs start-task --cluster $cluster --task-definition $task_definition --container-instances $instance_arn --started-by $instance_arn --region $region
end script
--==BOUNDARY==--
When the instance is created, it is launched into the default cluster, not the one I specify in the user data, and no tasks are started.
I have deconstructed the script to work out where it is failing, but I've had no luck.
Any help would be appreciated.
From the AWS documentation:
Configure your Amazon ECS container instance with user data, such as
the agent environment variables from Amazon ECS Container Agent
Configuration. Amazon EC2 user data scripts are executed only one
time, when the instance is first launched.
By default, your container instance launches into your default
cluster. To launch into a non-default cluster, choose the Advanced
Details list. Then, paste the following script into the User data
field, replacing your_cluster_name with the name of your cluster.
So, in order to add that EC2 instance to your ECS cluster, you should change this variable to the name of your cluster:
# Specify the cluster that the container instance should register into
cluster=your_cluster_name
Replace your_cluster_name with the actual name of your cluster.
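Two quick checks on the instance itself can confirm whether the user data took effect, sketched below; the config path and the agent introspection port are the standard ECS agent ones that the script above already uses:

```shell
# Sketch: run these on the container instance to see why it joined
# the default cluster.
check_ecs_registration() {
    # 1) Did the user-data shell script actually write the cluster name?
    cat /etc/ecs/ecs.config

    # 2) Which cluster does the running agent think it registered with?
    curl -s http://localhost:51678/v1/metadata
}
```

If ecs.config turns out to be empty, the shell-script part of the multipart user data never ran at all, which points at the MIME boundaries rather than the script body.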

AWS EC2 user data docker system prune before start ecs task

I have followed the code below from AWS to start an ECS task when the EC2 instance launches. This works great.
However, my containers only run for a few minutes (ten at most); once the task is finished, the EC2 instance is shut down by a CloudWatch rule.
The problem I am finding is that, because the instances shut down straight after the task finishes, the automatic clean-up of the Docker containers doesn't happen, so the EC2 instance fills up and other tasks fail. I have tried lowering the time between clean-ups, but it can still be a bit flaky.
My next idea was to add docker system prune -a -f to the user data of the EC2 instance, but it doesn't seem to get run. I think it's because I am putting it in the wrong part of the user data; I have searched the docs but can't find anything to help.
Question: where can I put the docker prune command in the user data to ensure that the prune command runs at each launch?
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
# Specify the cluster that the container instance should register into
cluster=your_cluster_name
# Write the cluster configuration variable to the ecs.config file
# (add any other configuration variables here also)
echo ECS_CLUSTER=$cluster >> /etc/ecs/ecs.config
# Install the AWS CLI and the jq JSON parser
yum install -y aws-cli jq
--==BOUNDARY==
Content-Type: text/upstart-job; charset="us-ascii"
#upstart-job
description "Amazon EC2 Container Service (start task on instance boot)"
author "Amazon Web Services"
start on started ecs
script
exec 2>>/var/log/ecs/ecs-start-task.log
set -x
until curl -s http://localhost:51678/v1/metadata
do
sleep 1
done
# Grab the container instance ARN and AWS region from instance metadata
instance_arn=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F/ '{print $NF}' )
cluster=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .Cluster' | awk -F/ '{print $NF}' )
region=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F: '{print $4}')
# Specify the task definition to run at launch
task_definition=my_task_def
# Run the AWS CLI start-task command to start your task on this container instance
aws ecs start-task --cluster $cluster --task-definition $task_definition --container-instances $instance_arn --started-by $instance_arn --region $region
end script
--==BOUNDARY==--
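One possibility, hedged: per the AWS documentation quoted earlier, the shell-script part of the user data runs only on the first launch, which would explain why a prune placed there never seems to run on later starts. The upstart job installed by the same user data, by contrast, is re-triggered on every boot by "start on started ecs", so placing the prune at the top of its script block should run it at each launch. A sketch of the modified fragment:

```
script
  exec 2>>/var/log/ecs/ecs-start-task.log
  set -x
  # The user-data shell script runs only on the first launch, but this
  # upstart job is re-triggered on every boot, so the prune happens at
  # each start. Note that -a also removes cached images, which the next
  # task will have to re-pull.
  docker system prune -a -f
  # ... rest of the script block unchanged ...
end script
```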
I hadn't considered terminating and then creating a new instance.
I currently use CloudFormation to create the EC2 instance.
What's the best workflow for terminating an EC2 instance after the task definition has completed, and then creating a new one on a schedule and registering it to the ECS cluster?
A CloudWatch scheduled rule that starts a Lambda function, which creates the EC2 instance and registers it to the cluster?

How to catch AWS EC2 Instance IPs dynamically?

How can I fetch the IPs of a few AWS EC2 instances and put them in a script variable, given that they are assigned randomly on each launch?
I was trying to make it work with
echo "$(curl http://169.254.169.254/latest/meta-data/public-ipv4/) master" >> /etc/hosts
but that is just the IP of one of them.
I also tried
aws ec2 describe-instances, but I don't know how to separate the plain IP from the other information. Any suggestions with awk / sed?
Use the AWS Command-Line Interface (CLI) with a --query parameter:
aws ec2 describe-instances --query 'Reservations[*].Instances[*].{ID:InstanceId,Public:PublicIpAddress,Private:PrivateIpAddress}' --output text
i-2da518a2 172.31.15.3 None
i-6d261640 172.31.27.232 56.64.218.82
i-b3aa3476 172.31.5.0 None
i-6c57c951 172.31.20.243 56.79.129.118
i-192b95c1 172.31.28.76 56.253.207.57
i-af413c91 172.31.27.17 None
You can also output as JSON, which is easier to parse.
The final command is
echo "$(aws ec2 describe-instances --filters Name="tag-value",Values="nagios" | grep PrivateIpAddress | awk '{gsub(",","",$2); gsub("\"","",$2); print $2}' | head -n 1) master" >> /file
This catches the dynamically assigned IP address of a tagged AWS instance and appends it to a file.
For example, if you want to get all the private IPs of the instances behind a load balancer and write them to a file:
/usr/bin/aws ec2 describe-instances --output text --query "Reservations[].Instances[].PrivateIpAddress" --instance-ids $(aws elb describe-load-balancers --load-balancer-name <loadbalancer name> --output text --query "LoadBalancerDescriptions[0].Instances[*].InstanceId") > hosts.txt
Hope it helps.
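A --query based alternative avoids the grep/awk cleanup entirely. A sketch, where the tag value nagios is taken from the answer above and the sample output rows are hypothetical, for illustration only:

```shell
# Sketch: fetch all private IPs of instances with a given tag value.
tagged_private_ips() {
    aws ec2 describe-instances \
        --filters "Name=tag-value,Values=nagios" \
        --query "Reservations[].Instances[].PrivateIpAddress" \
        --output text
}

# Text output can still be post-processed with awk. The rows below are
# a hypothetical sample of the ID / private IP / public IP listing
# shown earlier; keep only instances that actually have a public IP.
sample="i-2da518a2 172.31.15.3 None
i-6d261640 172.31.27.232 56.64.218.82
i-b3aa3476 172.31.5.0 None"

public_ips=$(printf '%s\n' "$sample" | awk '$3 != "None" {print $3}')
echo "$public_ips"   # → 56.64.218.82
```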

Why does the same ".s3cfg" file not work on different machines?

I have a server (an EC2 instance) where I get this error:
$ sh ami-backup.sh
----------------------------------
Thu Sep 24 10:37:47 UTC 2015
----------------------------------
Unable to locate credentials. You can configure credentials by running "aws configure".
The same script worked on my local machine, so I copied ".s3cfg" to that server, but it still gives the same error: "Unable to locate credentials"
On my local machine :
ashish@ashishk:~$ aws ec2 describe-instances --filters Name=vpc-id,Values=vpc-xxx | awk '{ print $8 }' | sort -n | grep "i-"
i-127fb8df
i-1effb6d3
i-29efe0e4
i-29fd04e4
i-d5888618
On my server (ec2 instance) with same ".s3cfg" :
$ aws ec2 describe-instances --filters Name=vpc-id,Values=vpc-xxx | awk '{ print $8 }' | sort -n | grep "i-" > /tmp/instanceid.txt
Unable to locate credentials. You can configure credentials by running "aws configure".
Why does the same ".s3cfg" file not work on a different machine? Please let me know what is wrong here. If I copy ".s3cfg" from one machine to another, will it work, or do I have to run "aws configure" on the new machine as well?
On your server, run aws configure to set your AWS access key ID, secret access key and other settings before you run your command. Note that ".s3cfg" is the configuration file for s3cmd; the AWS CLI does not read it. The AWS CLI looks for credentials in ~/.aws/credentials (or in environment variables), which is what "aws configure" writes, so copying ".s3cfg" between machines has no effect on it.
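For reference, a minimal sketch of the two files that "aws configure" writes; the key values and region below are placeholders, not real credentials:

```
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[default]
region = us-east-1
```

Copying these two files to another machine has the same effect as running aws configure there.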