Terminate AWS EC2 instance when SSM Run Command status changes

I would like to (1) launch an AWS EC2 instance, (2) run a shell script (that sends output to an S3 bucket) and (3) terminate the instance automatically when the script terminates, all remotely without logging into the instance. I have managed to get parts (1) and (2) working using the AWS CLI commands aws ec2 run-instances and aws ssm send-command. I am struggling with part (3) - getting the instance to terminate automatically when the script completes.
I have seen in the AWS docs that you can use CloudWatch to monitor the SSM Run Command status, and I thought that this might be a solution - when the status changes, terminate the instance. Is this a feasible option? If so, how do you implement it using AWS CLI?

Within the SSM script, you can issue a command to the operating system to shut down the computer. If you launched the instance with a Shutdown behavior of Terminate, then this will terminate the instance.
Alternatively, the script can retrieve the Instance ID of the instance it is running on, and issue the aws ec2 terminate-instances command, specifying its own Instance ID.
See: Self-Terminating AWS EC2 Instance?
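As a minimal sketch of both options (the AMI ID and instance type are placeholders, not values from the question; the metadata URL is the standard IMDSv1 endpoint):
# Option 1: launch with shutdown behavior set to Terminate, then let the SSM script shut down the OS
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3.micro --instance-initiated-shutdown-behavior terminate
# ...and at the end of the script you send via 'aws ssm send-command':
sudo shutdown -h now
# Option 2: the script looks up its own Instance ID from instance metadata and terminates itself
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"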

Related

How to know EC2 instance stopped time?

I really need to know the stop time of AWS EC2 instances. I have checked with AWS CloudTrail, but it's not easy to find when a specific EC2 instance was stopped. Is it possible to see the exact stop time of EC2 instances with aws-cli commands or a boto3 script?
You can get this info from StateTransitionReason in the describe-instances AWS CLI output when you search for stopped instances:
aws ec2 describe-instances --filter Name=instance-state-name,Values=stopped --query 'Reservations[].Instances[*].StateTransitionReason' --output text
Example output:
User initiated (2020-12-03 07:16:35 GMT)
AWS Config keeps track of the state of resources as they change over time.
From What Is AWS Config? - AWS Config:
AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time.
Thus, you could look back through the configuration history of the Amazon EC2 instance and extract times for when the instance changed to a Stopped state.
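As a sketch, you could pull that history with the AWS CLI (the instance ID is a placeholder, and AWS Config must already be recording EC2 instances):
# List the recorded configuration items for one instance; each item includes its capture time and the recorded instance state
aws configservice get-resource-config-history --resource-type AWS::EC2::Instance --resource-id i-1234567890abcdef0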
Sometimes the time is missing from StateTransitionReason; in that case you can use CloudTrail and search for Resource Name = instance ID to find the StopInstances API calls.
By default you can look back 90 days, or indefinitely if you create your own trail.
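A sketch of that CloudTrail lookup from the CLI (the instance ID is a placeholder):
# Find StopInstances events recorded for a given instance ID
aws cloudtrail lookup-events --lookup-attributes AttributeKey=ResourceName,AttributeValue=i-1234567890abcdef0 --query 'Events[?EventName==`StopInstances`].[EventTime,EventName]' --output text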

Automating the installation of CloudWatch agent

I just want to know if there are other ways to approach this problem:
I have an AWS multi-account setup. The EC2 instances are going to be monitored across all the accounts, and when alerts are triggered via SNS an email will be sent. For all EC2 instances with Windows Server 2016 and later, Amazon Linux, and Ubuntu 16.04 and 18.04, the SSM agent comes pre-installed. That way I can push the CloudWatch agent via Systems Manager Run Command to the EC2 instances in each AWS account.
I was wondering: is there a simpler way to ensure that the CloudWatch agent is installed on every new EC2 instance deployed in an AWS account, without installing the agent manually on the instance or via Run Command?
I was thinking of working with tags, something like "IsMonitored" with the value true or false. For example, every day at 17:00 a Lambda function would go over all the instances in that account, search for IsMonitored = false, get those instance IDs, and with a (boto3?) script push the agent onto those instances. This seemed too complicated, so I wanted to check whether there are simpler solutions that would do the same.
Thanks in advance,
Iman
To install the CloudWatch agent on each instance in a particular region, you can do it with a shell script.
The approach is:
The only manual work is to create default configuration files in Parameter Store, one for each type of instance: a. for Windows b. for Linux-based
In the shell script, for the particular region:
Get the list of EC2 instance IDs
Check which platform each instance is using, Windows or Linux-based
If the platform is Windows, apply the Windows configuration file from Parameter Store; otherwise apply the Linux configuration file
To get the platform name:
platform=$(aws ec2 describe-instances --instance-ids <instance id> --query 'Reservations[*].Instances[*].[Platform]' --output text)
To install the package:
aws ssm send-command --instance-ids <instance id> --document-name "AWS-ConfigureAWSPackage" --parameters "name=AmazonCloudWatchAgent,action=Install,installationType=Uninstall and reinstall" --comment "Install CloudWatch Agent on EC2 Windows/Linux machine"
To start the CloudWatch agent:
aws ssm send-command --instance-ids $one_instance --document-name "AmazonCloudWatch-ManageAgent" --parameters "mode=ec2,optionalRestart=yes,optionalConfigurationSource=ssm,action=configure,optionalConfigurationLocation=AmazonLinuxCloudWatchAgentConfig" --comment "Configure CloudWatch Agent on EC2 Linux machine"
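Putting those steps together, a minimal sketch for one region (the region and the Parameter Store configuration names are assumptions, not fixed values):
# Install and configure the CloudWatch agent on every instance in the region
region=us-east-1   # assumed region
for id in $(aws ec2 describe-instances --region "$region" --query 'Reservations[].Instances[].InstanceId' --output text); do
  platform=$(aws ec2 describe-instances --region "$region" --instance-ids "$id" --query 'Reservations[*].Instances[*].[Platform]' --output text)
  # describe-instances reports "windows" for Windows instances and no value otherwise
  if [ "$platform" = "windows" ]; then config="WindowsCloudWatchAgentConfig"; else config="AmazonLinuxCloudWatchAgentConfig"; fi
  aws ssm send-command --region "$region" --instance-ids "$id" --document-name "AWS-ConfigureAWSPackage" --parameters "name=AmazonCloudWatchAgent,action=Install"
  aws ssm send-command --region "$region" --instance-ids "$id" --document-name "AmazonCloudWatch-ManageAgent" --parameters "mode=ec2,optionalRestart=yes,optionalConfigurationSource=ssm,action=configure,optionalConfigurationLocation=$config"
done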
One simpler approach could be using a prebaked AMI. First, spin up an EC2 instance with the normal AMI you use. Next, install the CloudWatch agent and create an image. From then on, you can spin up EC2 instances using the new AMI, which has the CloudWatch agent preinstalled.
If a prebaked AMI doesn't work for you, I recommend using an infrastructure-as-code (IaC) tool like Ansible to automate the installation process.

How to automatically start, execute and stop EC2?

I want to test my Python library in GPU machine once a day.
I decided to use AWS EC2 for testing.
However, the fee for a GPU machine is very high, so I want to stop the instance after the test ends.
Thus, I want to do the following once a day, automatically:
Start the EC2 instance (which is set up manually)
Execute command (test -> push logs to S3)
Stop EC2 (not remove)
How to do this?
It is very simple...
Run script on startup
To run a script automatically when the instance starts (every time it starts, not just the first time), put your script in this directory:
/var/lib/cloud/scripts/per-boot/
Stop instance when test has finished
Simply issue a shutdown command to the operating system at the end of your script:
sudo shutdown -h now
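For example, a minimal per-boot script along these lines (the test command, bucket name, and file names are assumptions; remember to make the file executable):
#!/bin/bash
# /var/lib/cloud/scripts/per-boot/run-tests.sh - cloud-init runs this as root on every boot
python3 -m pytest /home/ubuntu/mylib/tests > /tmp/test.log 2>&1
aws s3 cp /tmp/test.log s3://my-test-results/logs/test-$(date +%F).log
shutdown -h now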
You can push script logs to custom CloudWatch namespaces; for example, when the process ends, publish a state metric to CloudWatch. In CloudWatch, create alarms based on the state of the process, so that a completed state triggers an AWS Lambda function that stops the instance after your job finishes.
Also, if you want to start and stop at specific times, you can use the EC2 instance scheduler to start/stop instances. It works like a cron job at specific intervals.
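For the custom-metric idea above, a sketch of publishing the completion signal from the instance (the namespace and metric name are assumptions):
# Publish a data point that a CloudWatch alarm can watch and use to trigger a Lambda function
aws cloudwatch put-metric-data --namespace "MyJobs" --metric-name "JobCompleted" --dimensions InstanceId=$(curl -s http://169.254.169.254/latest/meta-data/instance-id) --value 1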
You can use the aws cli
To start an instance you would do the following
aws ec2 start-instances --instance-ids i-1234567890abcdef0
and to stop the instance you would do the following
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
To execute commands inside the machine, you will need to SSH into it and run the commands that you need; then you can use the AWS CLI to upload files to S3:
aws s3 cp test.txt s3://mybucket/test2.txt
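For instance, a one-shot remote run might look like this (the key path, user, host, and script names are assumptions):
# Run the test over SSH and copy the log to S3 from within the instance
ssh -i ~/.ssh/my-key.pem ubuntu@ec2-203-0-113-10.compute-1.amazonaws.com 'cd ~/mylib && ./run_tests.sh && aws s3 cp test.log s3://mybucket/test.log'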
I suggest reading the AWS CLI documentation; you will find most, if not all, of what you need to automate AWS commands there.
I created a shell script to start an EC2 instance (if not already running), connect via SSH and, if you want, run a command.
https://gist.github.com/jotaelesalinas/396812f821785f76e5e36cf928777a12
You can use it in three different ways:
./ec2-start-and-ssh.sh -i <instance id> -s
will show status information about your instance: running state and private and public IP addresses.
./ec2-start-and-ssh.sh -i <instance id>
will connect and leave you inside the default shell.
./ec2-start-and-ssh.sh -i <instance id> <command>
will run whatever command you specify, e.g.:
./ec2-start-and-ssh.sh -i <instance id> ./run.sh
./ec2-start-and-ssh.sh -i <instance id> sudo poweroff
I use the last two commands to run periodic jobs minimizing billing costs.
I hope this helps!

ec2 instance with role AmazonEC2RoleforSSM is unable to do EC2 operations in ansible

I have an instance with the AmazonEC2RoleforSSM role. I want to run an Ansible task on this machine that commissions EC2 instances, without setting AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
This doesn't work as expected; it always requires AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be set. Is there a way to do this?
Jaks, could you explain a little bit more about what you're trying to do?
Having an instance profile with the AmazonEC2RoleforSSM policy will allow the instance to call the Systems Manager APIs and be treated as a managed instance, allowing you to use features like Run Command, Inventory, Patch Manager and the like. It will not, however, grant the instance permission to call EC2 APIs (e.g. run-instances).
What is the specific operation you're performing that's failing and what error message are you getting?
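As a quick check, a sketch of confirming which credentials the instance profile actually provides and what policies the role carries (the role name is an assumption, and the second call needs IAM read permissions):
# Show which role/credentials the instance is currently using
aws sts get-caller-identity
# List the managed policies attached to that role
aws iam list-attached-role-policies --role-name MyEC2InstanceRole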
AWS Systems Manager requires the SSM role to be attached in order to run the SSM Agent on the EC2 instance. Once the SSM agent is installed on a particular EC2 instance, you can freely execute commands from AWS Systems Manager.
I guess that after the installation of the SSM agent you can execute the Ansible script freely (it's not related to the access key issue). Is that OK?
Documents to execute commands with SSM:
Executing Commands Using Systems Manager Run Command
Executing Commands from the Console

How to run a script on an EC2 instance remotely?

I have an EC2 instance and I need to download a file from its D drive through my program. Currently, it's a very annoying process because I can't access the instance directly from my local machine. What I am doing now is running a script on the instance; the instance uploads the file I need to S3, and my program reads the file from S3.
I just wonder whether there is any simpler way to access the drive on the instance instead of going through S3?
I have used AWS Data Pipeline and its Task Runner to execute scripts on a remote instance. The Task Runner waits for a pipeline event published to its worker group.
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-using-task-runner.html
I use it to execute shell scripts and commands on a schedule. The script to run should be uploaded to S3, and the Data Pipeline template specifies the script's path. It works great for periodic tasks. You can do anything you want on the remote box via the script.
You cannot download the file directly from EC2; you have to go via S3 (or perhaps use the scp command) from your remote EC2 instance.
But to simplify this annoying process you can use AWS Systems Manager.
AWS Systems Manager Run Command allows you to remotely and securely run a set of commands on EC2 instances as well as on-premises servers. Below are the high-level steps to achieve this.
Attach Instance IAM role:
The EC2 instance must have an IAM role with the AmazonSSMFullAccess policy. This role enables the instance to communicate with the Systems Manager API.
Install SSM Agent:
The EC2 instance must have the SSM agent installed on it. The SSM Agent processes the Run Command requests and configures the instance as per the command.
Execute the command:
Example usage via the AWS CLI:
Execute the following command to retrieve the services running on the instance. Replace Instance-ID with the EC2 instance ID.
aws ssm send-command --document-name "AWS-RunShellScript" --comment "listing services" --instance-ids "Instance-ID" --parameters commands="service --status-all" --region us-west-2 --output text
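Once the command has been sent, you can fetch its output with get-command-invocation; the command ID comes from the send-command response (placeholders shown):
# Retrieve the standard output of a previously sent Run Command invocation
aws ssm get-command-invocation --command-id "<command id>" --instance-id "<instance id>" --query 'StandardOutputContent' --output text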
More detailed information: https://www.justdocloud.com/2018/04/01/run-commands-remotely-ec2-instances/