I have started to use Amazon EC2 extensively and, in order to contain costs, I manually shut down the instances before I leave work and bring them up when I come in. Sometimes I forget to shut them down. Is there a mechanism within the Amazon dashboard (or any other way) to automatically shut down the instances at, say, 6 PM and bring them up at 6 AM? I am happy to write scripts or programs if there are any APIs available. If you have some code written already, it would be great if you could share it.
There are two solutions.
AWS Data Pipeline - You can schedule instance start/stop just like cron. It will cost you one hour of a t1.micro instance for every start/stop.
AWS Lambda - Define a Lambda function that gets triggered at a predefined time. Your Lambda function can start/stop instances, and the cost will be minimal or even $0.
I used Data Pipeline for a long time before moving to Lambda. Data Pipeline is trivial to set up: just paste in the AWS CLI commands to stop and start the instances. Lambda is more involved.
Using AWS Lambda with a scheduled event is definitely simple. You can find the complete code script here.
For example, to start EC2 instances:
From the AWS Console, configure a Lambda function.
Copy the following code into "Edit code inline":
import boto3

def lambda_handler(event, context):
    client = boto3.client('ec2')
    response = client.start_instances(
        InstanceIds=[
            'i-xxxxxx',
            'i-xxxxxx',
        ]
    )
    print(response)
Give it a basic execution role with the following policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "ec2:StartInstances"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*",
                "arn:aws:ec2:*:*:*"
            ]
        }
    ]
}
Click "Create Lambda function" and add an event source (CloudWatch Events - Schedule).
Finally, test your function to see if it works correctly.
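For the 6 PM shutdown, the stop function is the mirror image. A minimal sketch, using the same placeholder instance IDs; note the role policy also needs ec2:StopInstances:

import boto3

def lambda_handler(event, context):
    client = boto3.client('ec2')
    # Stop the same instances; the IAM role must allow ec2:StopInstances
    response = client.stop_instances(
        InstanceIds=[
            'i-xxxxxx',
            'i-xxxxxx',
        ]
    )
    print(response)

Schedule one rule at cron(0 6 ? * MON-FRI *) for the start function and another at cron(0 18 ? * MON-FRI *) for the stop function (CloudWatch cron expressions are evaluated in UTC).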
I actually happen to have a cron job running for this. I'll put up my 'start instance' code here for reference, to get you started.
#!/usr/bin/python
import time

import boto.ec2
import boto.ec2.connection

ec2_region = boto.ec2.get_region(
    region_name='ap-southeast-2',
    aws_access_key_id='QWERTACCESSKEY',
    aws_secret_access_key='QWERTSECRETKEY')
conn = boto.ec2.connection.EC2Connection(
    aws_access_key_id='QWERTACCESSKEY',
    aws_secret_access_key='QWERTSECRETKEY',
    region=ec2_region)

instance1 = conn.get_only_instances(instance_ids=['i-xxxxxxx1'])
instance2 = conn.get_only_instances(instance_ids=['i-xxxxxxx2'])
instance3 = conn.get_only_instances(instance_ids=['i-xxxxxxx3'])
instance4 = conn.get_only_instances(instance_ids=['i-xxxxxxx4'])

def inst():  # refreshes the cached state of each instance
    instance1[0].update()
    instance2[0].update()
    instance3[0].update()
    instance4[0].update()

def startservers():
    inst()
    if instance1[0].state == 'stopped':
        instance1[0].start()
    if instance2[0].state == 'stopped':
        instance2[0].start()
    if instance3[0].state == 'stopped':
        instance3[0].start()
    if instance4[0].state == 'stopped':
        instance4[0].start()

def check():  # a double check to make sure the servers are no longer stopped
    inst()
    while (instance1[0].state == 'stopped' or instance2[0].state == 'stopped'
            or instance3[0].state == 'stopped' or instance4[0].state == 'stopped'):
        startservers()
        time.sleep(30)

startservers()
time.sleep(60)
check()
This is my cron job:
31 8 * * 1-5 python /home/user/startaws
This runs at 8:31 AM, Monday to Friday.
Please note
This script works fine for me, BUT it is definitely possible to make it much cleaner and simpler, and there are better approaches than this too. (I wrote it in a hurry, so I'm sure there are some unnecessary lines of code in there.) I hope it gives you an idea of how to start off :)
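For what it's worth, boto 2 has long since been superseded by boto3; a rough sketch of the same start-and-wait logic in boto3 (assuming the same four placeholder instance IDs) is much shorter:

import boto3

ec2 = boto3.client('ec2', region_name='ap-southeast-2')
ids = ['i-xxxxxxx1', 'i-xxxxxxx2', 'i-xxxxxxx3', 'i-xxxxxxx4']

ec2.start_instances(InstanceIds=ids)
# Poll until every instance reports the running state
ec2.get_waiter('instance_running').wait(InstanceIds=ids)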
If your use case permits terminating and re-launching, you might consider Scheduled Scaling in an Auto Scaling group. AWS recently added tools to manage scheduling rules in the Management Console UI.
But I don't believe Auto Scaling will cover starting and stopping the same instance while preserving its state.
You don't need to write any scripts to achieve this; just use the AWS console or API. This is a perfect use case for Auto Scaling with Scheduled Scaling. Steps:
Create a new Auto Scaling group. Example group: MY_AUTOSCALING_GROUP1
In your case you need to create two recurring schedules. Example:
Scale up every morning
aws autoscaling put-scheduled-update-group-action --scheduled-action-name scaleup-schedule --auto-scaling-group-name MY_AUTOSCALING_GROUP1 --recurrence "0 6 * * *" --desired-capacity 5
Scale down every evening
aws autoscaling put-scheduled-update-group-action --scheduled-action-name scaledown-schedule --auto-scaling-group-name MY_AUTOSCALING_GROUP1 --recurrence "0 18 * * *" --desired-capacity 0
With the scheduled actions above applied to your Auto Scaling group, 5 EC2 instances will start every morning at 6 AM and terminate every evening at 6 PM.
Could you just start with a simple solution like a cron job and the AWS CLI?
start.sh
#!/bin/bash
# some preparations
# ...
aws ec2 start-instances --instance-ids $LIST_OF_IDS
aws ec2 wait instance-running --instance-ids $LIST_OF_IDS
stop.sh
#!/bin/bash
# some preparations
# ...
aws ec2 stop-instances --instance-ids $LIST_OF_IDS
aws ec2 wait instance-stopped --instance-ids $LIST_OF_IDS
Then crontab -e and add something like
0 6 * * 1-5 start.sh
0 18 * * 1-5 stop.sh
Related
I need help with one AWS step. I know we can send an SNS notification when an instance is in the stopped, terminated, starting, or pending stages. But how do we send a notification when an EC2 instance is rebooted?
Thanks!
If a reboot is issued within the instance, it will not be detected by AWS; it is just the operating system doing its own thing.
If a reboot is issued through the AWS management console or an API call, the instance does not actually change state. From Instance Lifecycle - Amazon Elastic Compute Cloud:
Rebooting an instance is equivalent to rebooting an operating system. The instance remains on the same host computer and maintains its public DNS name, private IP address, and any data on its instance store volumes. It typically takes a few minutes for the reboot to complete, but the time it takes to reboot depends on the instance configuration.
Therefore, the only way to react to the reboot command being issued via the AWS console or API is to create an AWS CloudWatch Events rule that receives all Amazon EC2 events and then checks whether it is specifically for a RebootInstances command.
The rule would look like:
{
    "source": [
        "aws.ec2"
    ],
    "detail-type": [
        "AWS API Call via CloudTrail"
    ],
    "detail": {
        "eventSource": [
            "ec2.amazonaws.com"
        ],
        "eventName": [
            "RebootInstances"
        ]
    }
}
It can then trigger an Amazon SNS notification, which will include the instanceId.
However, the notification is not very pretty — it consists of a blob of JSON. If you want to send a nicer-looking message, see: amazon web services - Email notification through SNS and Lambda - Stack Overflow
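If you do route it through a Lambda function first, here is a rough sketch (the topic ARN is a placeholder, and the event shape assumes a CloudTrail-based CloudWatch event as produced by the rule above):

import boto3

sns = boto3.client('sns')

def lambda_handler(event, context):
    # For "AWS API Call via CloudTrail" events, the original request
    # parameters are nested under event['detail']
    items = (event['detail']
             .get('requestParameters', {})
             .get('instancesSet', {})
             .get('items', []))
    instance_ids = [item['instanceId'] for item in items]
    sns.publish(
        TopicArn='arn:aws:sns:us-east-1:123456789012:reboot-alerts',  # placeholder
        Subject='EC2 reboot issued',
        Message='RebootInstances was called for: ' + ', '.join(instance_ids))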
Instead of monitoring CloudTrail, you could create a cron entry on the instance that executes at @reboot (example here) and sends an SNS notification using the AWS CLI sns publish command.
You can use crontab with @reboot to run a script:
$ crontab -e
@reboot python /home/ec2-user/sms.py
for the Python script sms.py at /home/ec2-user (change the AWS region if required):
import boto3

client = boto3.client('sns', region_name='us-west-2')
msg = 'Instance reboot!'
response = client.publish(
    PhoneNumber='+1XXXXXXXXXX',
    Message=msg)
print(response)
Make sure boto3 is installed on the instance - $ pip install boto3 --user
I am scheduling an EC2 instance shutdown every day at 8 PM using Chalice and a Lambda function.
I have configured Chalice, but I am not able to trigger or integrate the Python script below using Chalice.
import boto3

# instances to be stopped
myins = ['i-043ae2fbfc26d423f', 'i-0df3f5ead69c6428c', 'i-0bac8502574c0cf1d',
         'i-02e866c4c922f1e27', 'i-0f8a5591a7704f98e', 'i-08319c36611d11fa1',
         'i-047fc5fc780935635']

# stop the listed EC2 instances if they are running
ec2 = boto3.resource('ec2')
ec2client = boto3.client('ec2')
for instance in ec2.instances.all():
    for i in myins:
        if i == instance.id and instance.state['Name'] == "running":
            ec2client.stop_instances(InstanceIds=[i])
I want to stop the instances using Chalice.
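For reference, a minimal sketch of what the Chalice scheduling piece might look like (the app name is a placeholder, and the cron expression runs in UTC, so shift the hour for your timezone):

from chalice import Chalice, Cron
import boto3

app = Chalice(app_name='ec2-shutdown')  # placeholder app name

# Cron(minutes, hours, day_of_month, month, day_of_week, year)
@app.schedule(Cron(0, 20, '*', '*', '?', '*'))  # every day at 20:00 UTC
def stop_instances(event):
    ec2 = boto3.client('ec2')
    ec2.stop_instances(InstanceIds=['i-043ae2fbfc26d423f'])  # your IDs here

Running chalice deploy then creates both the Lambda function and the scheduled CloudWatch Events rule for you.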
AWS Instance Scheduler does the job you are looking for. We have used it for several months and it works as expected. You may check this reference: https://aws.amazon.com/premiumsupport/knowledge-center/stop-start-instance-scheduler/
I am trying to find all EC2 instances across 10 different accounts that are running non-Amazon AMI images. The following CLI command gives me the list of all AMIs:
aws ec2 describe-instances --output text --query 'Reservations[*].Instances[*].[ImageId]' | sort | uniq -c
I think I can modify this further to get all non-Amazon AMIs, but is there a way to run this across 10 different accounts in one call?
is there a way to run this across 10 different accounts in one call?
No, that's not possible. You need to write a loop that iterates over each account, calling ec2 describe-instances once for each account.
Here's a script that can find instances using AMIs where the Owner is not amazon:
import boto3
ec2_client = boto3.client('ec2', region_name='ap-southeast-2')
instances = ec2_client.describe_instances()
# Get a set of AMIs used on all the instances
images = set(i['ImageId'] for r in instances['Reservations'] for i in r['Instances'])
# Find which of these are owned by Amazon
amis = ec2_client.describe_images(ImageIds=list(images), Owners=['amazon'])
amazon_amis = [i['ImageId'] for i in amis['Images']]
# Which instances are not using Amazon images?
non_amazon_instances = [(i['InstanceId'], i['ImageId']) for r in instances['Reservations'] for i in r['Instances'] if i['ImageId'] not in amazon_amis]
for i in non_amazon_instances:
    print(f"{i[0]} uses {i[1]}")
A few things to note:
Deprecated AMIs might not have accessible information, so they might be marked as non-Amazon.
This script, as written, only works in one region. You could change it to loop through regions.
This script, as written, only works on one account. You would need a way to loop through credentials for the other accounts, for example:
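A sketch of that loop, assuming one named profile per account in ~/.aws/credentials (the profile names here are hypothetical):

import boto3

profiles = ['account-1', 'account-2', 'account-3']  # hypothetical profile names

for profile in profiles:
    session = boto3.Session(profile_name=profile)
    ec2_client = session.client('ec2', region_name='ap-southeast-2')
    instances = ec2_client.describe_instances()
    # ... then run the same AMI checks as above for this account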
Use AWS Config:
Create an aggregator in the root or a delegated account (wait for the aggregator to load).
Create a query:
SELECT
    accountId,
    resourceId,
    configuration.keyName,
    availabilityZone
WHERE
    resourceType = 'AWS::EC2::Instance'
    AND configuration.state.name = 'running'
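To run the same query programmatically, boto3's Config client exposes select_aggregate_resource_config (the aggregator name below is a placeholder):

import boto3

config = boto3.client('config')
response = config.select_aggregate_resource_config(
    Expression=("SELECT accountId, resourceId, configuration.keyName, availabilityZone "
                "WHERE resourceType = 'AWS::EC2::Instance' "
                "AND configuration.state.name = 'running'"),
    ConfigurationAggregatorName='my-org-aggregator')  # placeholder name
for result in response['Results']:
    print(result)  # each result is a JSON string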
More details:
https://aws.amazon.com/blogs/mt/org-aggregator-delegated-admin/
If I have a bash script sitting on an EC2 instance, is there a way that Lambda could trigger it?
The trigger for Lambda would come from RDS: a table in MySQL gets updated, and a specific column in that table gets set to "Ready". Lambda would have to pull the ID of the row with the "Ready" status and send that ID to the bash script.
Let's assume some things. First, you know how to set up a "trigger" using SNS (see here) and how to hang a Lambda script off of said trigger. Secondly, you know a little about Python (Lambda's language offerings are Node.js, Java, and Python), because this example will be in Python. Additionally, I will not cover how to query a database with MySQL; you did not mention whether your RDS instance was MySQL, Postgres, or otherwise. Lastly, you need to understand how to grant permissions across AWS resources with IAM roles and policies.
The following script at least outlines the method of firing a script at your instance (you'll have to figure out how to query for the relevant information, or pass it in via the SNS topic), and then runs the shell command on an instance you specify.
import boto3

def lambda_handler(event, context):
    # query RDS to get the ID, or read it from the SNS topic
    id = ...  # placeholder: the *queryresult* from your RDS query
    command = 'sh /path/to/scriptoninstance ' + id
    ssm = boto3.client('ssm')
    ssmresponse = ssm.send_command(
        InstanceIds=['i-instanceid'],
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': [command]})
I would probably have two flags on the RDS row: one that says 'ready' and one that says 'identified'. So the SNS topic triggers the Lambda script; the Lambda script looks for rows with 'ready' = true and 'identified' = false, changes 'identified' to true (to make sure other Lambda scripts that could be running at the same time aren't going to pick it up), then fires the script. If the script doesn't run successfully, change 'identified' back to false to make sure your data stays valid.
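A rough sketch of that claim step, assuming a MySQL table tasks(id, ready, identified) and the pymysql driver (the endpoint, credentials, and schema are all placeholders):

import pymysql

conn = pymysql.connect(host='mydb.xxxxxx.rds.amazonaws.com',  # placeholder endpoint
                       user='app', password='...', db='mydb')
try:
    with conn.cursor() as cur:
        # Lock one ready, unclaimed row so concurrent Lambda runs can't also claim it
        cur.execute("SELECT id FROM tasks "
                    "WHERE ready = 1 AND identified = 0 LIMIT 1 FOR UPDATE")
        row = cur.fetchone()
        if row:
            cur.execute("UPDATE tasks SET identified = 1 WHERE id = %s", (row[0],))
    conn.commit()
except Exception:
    # On failure, roll back so the row's flags stay valid
    conn.rollback()
    raise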
Using Amazon EC2 Simple Systems Manager (SSM), you can configure an SSM document to run a script on an instance and pass that script a parameter. The Lambda function would need to run the SSM send-command, targeting the instance by its instance ID.
Sample SSM document:
run_my_example.json:
{
    "schemaVersion": "1.2",
    "description": "Run shell script to launch.",
    "parameters": {
        "taskId": {
            "type": "String",
            "default": "",
            "description": "(Required) the Id of the task to run",
            "maxChars": 16
        }
    },
    "runtimeConfig": {
        "aws:runShellScript": {
            "properties": [
                {
                    "id": "0.aws:runShellScript",
                    "runCommand": ["run_my_example.sh {{taskId}}"]
                }
            ]
        }
    }
}
The above SSM document accepts taskId as a parameter and passes it to the script.
Save this document as a JSON file, and call create-document using the AWS CLI:
aws ssm create-document --content file:///tmp/run_my_example.json --name "run_my_example"
You can review the description of the SSM document by calling describe-document:
aws ssm describe-document --name "run_my_example"
You can specify the taskId parameter and run the command by using the document name with send-command:
aws ssm send-command --instance-ids i-12345678 --document-name "run_my_example" --parameters taskId=123456
NOTES
Instances must be running the latest version of the SSM agent.
You will need some logic in the Lambda script to identify the instance IDs of the server, e.g. look up the instance ID of a specifically tagged instance, as sketched below.
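For example, a sketch that finds running instances by a hypothetical Role=worker tag:

import boto3

ec2 = boto3.client('ec2')
response = ec2.describe_instances(Filters=[
    {'Name': 'tag:Role', 'Values': ['worker']},  # hypothetical tag
    {'Name': 'instance-state-name', 'Values': ['running']},
])
instance_ids = [i['InstanceId']
                for r in response['Reservations']
                for i in r['Instances']]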
I think you could use the new EC2 Run Command feature to accomplish this.
There are a few things to consider, one of them being:
Security. As of today, Lambda can't run in a VPC, which means your EC2 instance would need a wide-open inbound security group.
I would suggest taking a look at a messaging queue (say, SQS). This would solve a lot of headaches.
That's how it might work:
Lambda: get the message; send it to SQS.
EC2: a cron job that runs every N minutes; pulls messages from SQS; processes each message (see the sketch below).
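A sketch of the EC2 side, assuming the queue already exists (the queue URL is a placeholder) and cron invokes this script every N minutes:

import boto3

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'  # placeholder

def process(body):
    print(body)  # stand-in for the real processing logic

# Long-poll for up to 20 seconds, then handle and delete each message
response = sqs.receive_message(QueueUrl=queue_url,
                               MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)
for msg in response.get('Messages', []):
    process(msg['Body'])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])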
Is there a way to automatically publish a message on SQS when an EC2 instance starts?
For example, could you use CloudWatch to set an alarm that fires whenever an instance starts up? All the alarms in CloudWatch appear to relate to a specific EC2 instance, rather than the EC2 service in general.
To better understand this question and offer a more accurate answer, further information is needed.
Are we talking about:
A new instance created and started from any AMI?
A new instance created and started from a specific AMI?
Starting an existing instance that is just in the stopped state?
Or creating a new instance inside a scaling group?
These all affect the way you would create your CloudWatch alarm.
For example, if it were an existing EC2 instance, you would use status checks as per:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-system-instance-status-check.html
Though if it were a brand new EC2 instance being created, you would need the more advanced CloudTrail log alarms as per:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cw_create_alarms.html
However, after that point it would follow the same basic logic:
Create an alarm that triggers an SNS notification as per:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/ConsoleAlarms.html
Have that SNS notifier publish to an SQS queue as per:
https://docs.aws.amazon.com/sns/latest/dg/SendMessageToSQS.html
As always, though, there are many ways to skin a cat.
If it were a critical event and I wanted the same response to every start event, I would personally consider bootstrap scripts pushed out from Puppet or Chef; that way a fast change for all events is centralised in a single script.
Step 1: Create a CloudWatch rule to notify on instance creation. Per the EC2 instance lifecycle, when the launch button is pressed the instance goes from the pending state to the running state, and the same transition happens when an instance is moved from the stopped state to the running state. So create the rule for the pending state.
Create a CloudWatch rule as specified in the screenshot.
Step 2: Create a Step Function, because CloudTrail logs events in the account with a delay of at least 20 minutes. This step function is useful if you want the name of the user who created the instance.
{
    "StartAt": "Wait",
    "States": {
        "Wait": {
            "Type": "Wait",
            "Seconds": 1800,
            "Next": "Ec2-Alert"
        },
        "Ec2-Alert": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:ap-south-1:321039853697:function:EC2-Creation-Alert",
            "End": true
        }
    }
}
Step 3: Create an SNS topic for the notification.
Step 4: Write a Lambda function to fetch the log from CloudTrail and get the name of the user who created the instance.
import boto3

def lambda_handler(event, context):
    cloudtrail = boto3.client('cloudtrail')
    sns = boto3.client('sns')
    instance = event["detail"]["instance-id"]
    response = cloudtrail.lookup_events(
        LookupAttributes=[
            {
                'AttributeKey': 'ResourceName',
                'AttributeValue': instance
            },
        ],
        MaxResults=1)
    # crude string parsing to pull the username out of the event record
    st = "".join(str(x) for x in response['Events'])
    print(st)
    creator = st.split("Username")[1].split(",")[0]
    email = ("Hi all,\n\n\nThe user %s has created a new EC2 instance in the QA "
             "account and the instance id is %s\n\n\nThank you\n\n\n"
             "Regards, Lambda" % (creator, instance))
    sns.publish(
        TopicArn='arn:aws:sns:ap-south-1:321039853697:Ec2-Creation-Alert',
        Message=email)
    return {
        'statusCode': 200,
    }
Note: this code triggers a notification when an instance changes from the stopped state to the running state, or when a new instance is launched.