I need help with one AWS step. I know we can send an SNS notification when an instance enters the stopped, terminated, starting, or pending states. But how do we send a notification when an EC2 instance is rebooted?
Thanks!
If a reboot is issued from within the instance, AWS will not detect it; it is just the operating system doing its own thing.
If a reboot is issued through the AWS Management Console or an API call, the instance does not actually change state. From Instance Lifecycle - Amazon Elastic Compute Cloud:
Rebooting an instance is equivalent to rebooting an operating system. The instance remains on the same host computer and maintains its public DNS name, private IP address, and any data on its instance store volumes. It typically takes a few minutes for the reboot to complete, but the time it takes to reboot depends on the instance configuration.
Therefore, the only way to react to the reboot command being issued via the AWS console or API is to create an AWS CloudWatch Events rule that receives all Amazon EC2 events and then checks whether it is specifically for a RebootInstances command.
The rule would look like:
{
  "source": [
    "aws.ec2"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "ec2.amazonaws.com"
    ],
    "eventName": [
      "RebootInstances"
    ]
  }
}
It can then trigger an Amazon SNS notification, which will include the instanceId.
However, the notification is not very pretty: it consists of a blob of JSON. If you want to send a nicer-looking message, see: amazon web services - Email notification through SNS and Lambda - Stack Overflow
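If you do want a friendlier message without leaving Lambda, a small function placed between the CloudWatch Events rule and SNS can extract the instance IDs from the CloudTrail event and publish plain text instead of raw JSON. A minimal sketch, assuming the standard "AWS API Call via CloudTrail" event shape; the topic ARN is a placeholder:

```python
def format_reboot_message(event):
    # Pull the instance IDs and caller out of a CloudTrail RebootInstances record
    detail = event["detail"]
    items = detail["requestParameters"]["instancesSet"]["items"]
    instance_ids = [item["instanceId"] for item in items]
    user = detail.get("userIdentity", {}).get("arn", "unknown")
    return "RebootInstances issued by %s for: %s" % (user, ", ".join(instance_ids))

def lambda_handler(event, context):
    # boto3 is available in the Lambda runtime; imported here so the
    # formatter above can be tested without it installed
    import boto3
    sns = boto3.client("sns")
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:reboot-alerts",  # placeholder ARN
        Message=format_reboot_message(event))
```

The rule from above would then target this Lambda function rather than SNS directly.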
Instead of monitoring CloudTrail, you could create a cron entry on the instance that runs at @reboot (example here) and sends an SNS notification using the AWS CLI's sns publish.
You can use crontab with @reboot to run a script:
$ crontab -e
@reboot python /home/ec2-user/sms.py
For the Python script sms.py at /home/ec2-user (change the AWS region if required):
import boto3

client = boto3.client('sns', region_name='us-west-2')

msg = 'Instance reboot!'
response = client.publish(
    PhoneNumber='+1XXXXXXXXXX',
    Message=msg)
print(response)
Make sure boto3 is installed on the instance - $ pip install boto3 --user
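The AWS CLI route mentioned above needs no Python at all. A sketch of that crontab entry, assuming the AWS CLI is installed and the instance has credentials (e.g. an instance profile) allowing sns:Publish; the region and phone number are placeholders:

```
@reboot aws sns publish --region us-west-2 --phone-number "+1XXXXXXXXXX" --message "Instance reboot!"
```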
I have an EC2 instance with an instance profile attached to it. This instance profile has permissions to publish messages to an SNS topic. When I connect to the EC2 instance and issue a command like
aws sns publish --topic-arn topic_arn --message hello
This works.
Now I'm trying to do the same with a simple curl command, and this is what I use after connecting to the EC2 instance:
curl "https://sns.us-west-2.amazonaws.com/?Message=hello&Action=Publish&TargetArn=topic_arn"
I get the following error:
<Code>MissingAuthenticationToken</Code>
<Message>Request is missing Authentication Token</Message>
I was hoping that curl would attach the instance profile credentials when sending the request (like the AWS CLI does), but it does not seem to do so. Does anyone know how I can overcome this?
When you do:
curl "https://sns.us-west-2.amazonaws.com/?Message=hello&Action=Publish&TargetArn=topic_arn"
you are making a request directly to the SNS endpoint, which means that you have to sign your request with AWS credentials from your instance profile. If you don't want to use the AWS CLI or an AWS SDK to access SNS, you have to implement the entire signature procedure yourself, as described in the docs.
That's why, when you use the AWS CLI,
aws sns publish --topic-arn topic_arn --message hello
everything works, because the AWS CLI makes a signed request to the SNS endpoint on your behalf.
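To give a sense of what that signature procedure involves, here is the Signature Version 4 signing-key derivation step sketched with only the Python standard library (the secret key and date below are placeholder values):

```python
import hashlib
import hmac

def _sign(key: bytes, msg: str) -> bytes:
    # One HMAC-SHA256 step in the key-derivation chain
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret_key: str, date_stamp: str, region: str, service: str) -> bytes:
    # SigV4 derives a per-day, per-region, per-service key from the secret key
    k_date = _sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _sign(k_date, region)
    k_service = _sign(k_region, service)
    return _sign(k_service, "aws4_request")

# The request signature is then HMAC-SHA256(signing_key, string_to_sign),
# carried in the Authorization header. A bare curl call never produces any
# of this, hence the MissingAuthenticationToken error.
signing_key = derive_signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                                 "20230905", "us-west-2", "sns")
```

This is only the key-derivation portion; canonicalizing the request and building the string to sign add considerably more work, which is why using the SDK or CLI is the practical choice.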
I have a Python script on an EC2 instance which needs to run daily without anyone manually kicking it off. My current setup uses a scheduled Lambda function to send an SSM document as a command to the EC2 instance. The SSM document includes a short "runShellScript" command to run the Python script (see below for the SSM document and abbreviated Lambda function). This process works fine.
The issue is that I need the logs to stream to CloudWatch. I'm aware that CloudWatch can retrieve log files sitting on the EC2 instance; however, I want CloudWatch to capture the logs directly from stdout rather than reading the log files.
When I manually run the SSM document via the "Run Command" section of the AWS console, it sends the output to CloudWatch beautifully, since I configure CloudWatch as part of the Run Command kickoff. However, I don't see anywhere to configure CloudWatch as part of the document.
How can I adjust my SSM Document (or any piece of this process) to stream the logs to CloudWatch?
I'm open to changing schemaVersions in the document if that would help. I already looked through the SSM Parameter documentation for this but could not find an answer there.
Here is the relevant section of the Lambda function:
def lambda_handler(event, context):
    # Execute the script
    ssm = boto3.client('ssm', region_name=region)
    ssm_response = ssm.send_command(InstanceIds=instances, DocumentName='CustomRunScript', Comment='Starting init script from lambda prod')
    print('SSM response is: ', ssm_response)
Here is my SSM Document:
{
  "schemaVersion": "1.2",
  "description": "Custom Run Script",
  "parameters": {},
  "runtimeConfig": {
    "aws:runShellScript": {
      "properties": [
        {
          "id": "0.aws:runShellScript",
          "runCommand": [
            "/usr/bin/python3 /home/app/init.py"
          ]
        }
      ]
    }
  }
}
I think you're looking for CloudWatchOutputConfig.
def lambda_handler(event, context):
    # Execute the script
    ssm = boto3.client('ssm', region_name=region)
    ssm_response = ssm.send_command(
        InstanceIds=instances,
        DocumentName='CustomRunScript',
        Comment='Starting init script from lambda prod',
        CloudWatchOutputConfig={
            'CloudWatchLogGroupName': 'some-group-name',
            'CloudWatchOutputEnabled': True,
        },
    )
    print('SSM response is: ', ssm_response)
When you send a command by using Run Command, you can specify where you want to send the command output. By default, Systems Manager returns only the first 2,500 characters of the command output. If you want to view the full details of the command output, you can specify an Amazon Simple Storage Service (Amazon S3) bucket. Or you can specify Amazon CloudWatch Logs. If you specify CloudWatch Logs, Run Command periodically sends all command output and error logs to CloudWatch Logs. You can monitor output logs in near real-time, search for specific phrases, values, or patterns, and create alarms based on the search.
Note that you will need to provide proper IAM permissions to write to the log group. With CloudWatchOutputConfig it is the SSM agent on the instance that delivers the output, so the permissions belong on the instance profile rather than on the Lambda. The required permissions are listed in the reference below.
See Configuring Amazon CloudWatch Logs for Run Command
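For reference, a policy along these lines attached to the instance profile covers the agent's log delivery; this is a permissive sketch, so scope the Resource down as appropriate:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ],
      "Resource": "*"
    }
  ]
}
```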
I have a quick question about usage of AWS SNS.
I have deployed an EC2 (t2.micro, Linux) instance in us-west-1 (N. California). I have written a Python script using boto3 to send a simple text message to my phone. Later I discovered there is no SNS SMS service for instances deployed outside us-east-1 (N. Virginia). Up to this point it made sense, because I see the error below when I execute my Python script, since the region is defined as "us-west-1" in aws configure (AWS CLI) and also in my Python script.
botocore.errorfactory.InvalidParameterException: An error occurred (InvalidParameter) when calling the Publish operation: Invalid parameter: PhoneNumber Reason:
But to test, when I changed the region in aws configure and in my Python script to "us-east-1", my script pushed a text message to my phone. Isn't that weird? Can anyone please explain why this works just by changing the region in the AWS CLI and in my Python script, even though my instance is still in us-west-1 and I don't see a "Publish text message" option on the SNS dashboard in the N. California region?
Is redefining the AWS CLI with us-east-1 similar to deploying a new instance altogether in us-east-1? I don't think so. Correct me if I am wrong. Or is it like having an instance in us-west-1 but just using the SNS service from us-east-1? Please shed some light.
Here is my Python script, if anyone needs to look at it (it's a simple snippet).
import boto3

def send_message():
    # Create an SNS client
    client = boto3.client("sns", aws_access_key_id="XXXX", aws_secret_access_key="XXXX", region_name="us-east-1")
    # Send your sms message.
    client.publish(PhoneNumber="XXXX", Message="Hello World!")

if __name__ == '__main__':
    send_message()
Is redefining the aws cli with us-east-1 similar to deploying a new instance altogether in us-east-1?
No, it isn't like that at all.
Or is it like having an instance in us-west-1, but just using SNS service from us-east-1?
Yes, that's all you are doing. You can connect to any AWS region's API from anywhere on the Internet. It doesn't matter that the code is running on an EC2 instance in a specific region; all that matters is which region you tell the SDK/CLI to use.
You could run the same code on your local computer. Your local computer is obviously not running in an AWS region, so you would have to tell the code which AWS region to send the API calls to. What you are doing is exactly the same thing.
Code running on an EC2 server is not limited to using the AWS API in the same region that the EC2 server is in.
Did you try creating a topic before publishing to it? You should try creating a topic and then publishing to that topic.
If I have a bash script sitting in an EC2 instance, is there a way that lambda could trigger it?
The trigger for Lambda would come from RDS. So a table in MySQL gets updated, and a specific column in that table gets updated to "Ready"; Lambda would have to pull the ID of that row with the "Ready" status and send that ID to the bash script.
Let's assume some things. First, you know how to set up a "trigger" using SNS (see here) and how to hang a Lambda function off said trigger. Second, you know a little Python (Lambda's language offerings are Node.js, Java, and Python), because this example will be in Python. Additionally, I will not cover how to query a database with MySQL; you did not mention whether your RDS instance was MySQL, Postgres, or otherwise. Lastly, you need to understand how to allow permissions across AWS resources with IAM roles and policies.
The following script at least outlines the method of firing a script on your instance (you'll have to figure out how to query for the relevant information or pass that information into the SNS topic) and then running the shell command on an instance you specify.
import boto3

def lambda_handler(event, context):
    # Query RDS for the ID, or read it from the SNS topic
    id = *queryresult*
    command = 'sh /path/to/scriptoninstance ' + id
    ssm = boto3.client('ssm')
    ssmresponse = ssm.send_command(
        InstanceIds=['i-instanceid'],
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': [command]})
I would probably have two flags for the RDS row: one that says 'ready' and one that says 'identified'. So the SNS topic triggers the Lambda script; the Lambda script looks for rows with 'ready' = true and 'identified' = false, changes 'identified' to true (to make sure other Lambda scripts that could be running at the same time don't pick up the same row), and then fires the script. If the script doesn't run successfully, change 'identified' back to false to make sure your data stays valid.
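The two-flag claim-and-release idea above can be sketched against any SQL engine; here is a self-contained illustration using Python's built-in sqlite3, with a made-up tasks table standing in for the RDS row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, ready INTEGER, identified INTEGER)")
conn.executemany("INSERT INTO tasks (id, ready, identified) VALUES (?, ?, ?)",
                 [(1, 1, 0), (2, 0, 0), (3, 1, 1)])

def claim_next_task(conn):
    # Find one row that is ready but not yet identified...
    row = conn.execute(
        "SELECT id FROM tasks WHERE ready = 1 AND identified = 0 LIMIT 1").fetchone()
    if row is None:
        return None
    # ...and claim it; rowcount 0 means a concurrent worker got there first
    cur = conn.execute(
        "UPDATE tasks SET identified = 1 WHERE id = ? AND identified = 0", (row[0],))
    return row[0] if cur.rowcount == 1 else None

def release_task(conn, task_id):
    # On failure, flip 'identified' back so the row can be retried later
    conn.execute("UPDATE tasks SET identified = 0 WHERE id = ?", (task_id,))
```

Against MySQL on RDS, the same shape works; the conditional UPDATE is what keeps two concurrent Lambda invocations from running the script for the same row.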
Using Amazon EC2 Simple Systems Manager, you can configure an SSM document to run a script on an instance and pass that script a parameter. The Lambda function would need to run the SSM send-command, targeting the instance by its instance ID.
Sample SSM document:
run_my_example.json:
{
  "schemaVersion": "1.2",
  "description": "Run shell script to launch.",
  "parameters": {
    "taskId": {
      "type": "String",
      "default": "",
      "description": "(Required) the Id of the task to run",
      "maxChars": 16
    }
  },
  "runtimeConfig": {
    "aws:runShellScript": {
      "properties": [
        {
          "id": "0.aws:runShellScript",
          "runCommand": ["run_my_example.sh {{taskId}}"]
        }
      ]
    }
  }
}
The above SSM document accepts taskId as a parameter.
Save this document as a JSON file, and call create-document using the AWS CLI:
aws ssm create-document --content file:///tmp/run_my_example.json --name "run_my_example"
You can review the description of the SSM document by calling describe-document:
aws ssm describe-document --name "run_my_example"
You can specify the taskId parameter and run the command by using the document name with send-command:
aws ssm send-command --instance-ids i-12345678 --document-name "run_my_example" --parameters taskId=123456
NOTES
Instances must be running the latest version of the SSM agent.
You will need some logic in the Lambda script to identify the instance IDs of the target server, e.g. by looking up the instance ID of a specifically tagged instance.
I think you could use the new EC2 Run Command feature to accomplish this.
There are a few things to consider, one of them being security: as of today, Lambda can't run in a VPC, which means your EC2 instance would have to have a wide-open inbound security group.
I would suggest taking a look at a messaging queue (say, SQS). This would solve a lot of headaches.
That's how it might work:
Lambda: get the message; send it to SQS.
EC2: a cron job that runs every N minutes pulls a message from SQS and processes it.
Is there a way to automatically publish a message on SQS when an EC2 instance starts?
For example, could you use Cloudwatch to set an alarm that fires whenever an instance starts up? All the alarms on Cloudwatch appear to be related to a specific EC2 instance, rather than the EC2 service in general.
To better understand this question and offer a more accurate answer, further information is needed.
Are we talking about:
New instance created and started from any AMI ?
New instance created and started from a specific AMI?
Starting an existing instance that is just in the stopped state?
Or creating a new instance inside a scale group?
These all affect the way you would create your CloudWatch alarm.
For example, if it were an existing EC2 instance, you would use status checks as per:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-system-instance-status-check.html
Though if it were a brand-new EC2 instance being created, you would need to use more advanced CloudTrail log alarms as per:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cw_create_alarms.html
However, after that point it would follow the same basic logic, and that is:
Create an alarm that triggers an SNS topic as per:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/ConsoleAlarms.html
Have that SNS notification publish to an SQS queue as per:
https://docs.aws.amazon.com/sns/latest/dg/SendMessageToSQS.html
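A more direct route nowadays is a CloudWatch Events rule on the EC2 instance state-change event, which can target an SNS topic or SQS queue without any alarm at all. An event pattern along these lines matches any instance entering the running state:

```json
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Instance State-change Notification"],
  "detail": {
    "state": ["running"]
  }
}
```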
As always, though, there are many ways to skin a cat.
If it were a critical event and I wanted the same response to every start event, I would personally consider bootstrap scripts pushed out from Puppet or Chef; that way, a fast change for all events is centralised in a single script.
Step 1: Create a CloudWatch rule to notify on instance creation. Per the EC2 instance lifecycle, when the launch button is pressed the instance goes from the pending state to the running state, and the same transition happens when an instance is moved from the stopped state to the running state. So create the rule for the pending state.
Create a CloudWatch rule as specified in the screenshot.
Step 2: Create a Step Function. CloudTrail logs every event in the account with a delay of at least 20 minutes, so this Step Function adds a wait; it is useful if you want the name of the user who created the instance.
{
  "StartAt": "Wait",
  "States": {
    "Wait": {
      "Type": "Wait",
      "Seconds": 1800,
      "Next": "Ec2-Alert"
    },
    "Ec2-Alert": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:ap-south-1:321039853697:function:EC2-Creation-Alert",
      "End": true
    }
  }
}
Step 3: Create an SNS topic for the notification.
Step 4: Write a Lambda function to fetch the log from CloudTrail and get the user name of whoever created the instance.
import boto3

def lambda_handler(event, context):
    cloudtrail = boto3.client('cloudtrail')
    sns = boto3.client('sns')
    instance = event["detail"]["instance-id"]
    response = cloudtrail.lookup_events(
        LookupAttributes=[
            {
                'AttributeKey': 'ResourceName',
                'AttributeValue': instance
            },
        ],
        MaxResults=1)
    events = response['Events']
    # lookup_events returns the caller's user name directly, so no string parsing is needed
    creator = events[0].get('Username', 'unknown') if events else 'unknown'
    email = ("Hi All,\n\n\n The user %s has created a new EC2 instance in the QA account "
             "and the instance id is %s \n\n\n Thank you \n\n\n Regards, Lambda" % (creator, instance))
    sns.publish(
        TopicArn='arn:aws:sns:ap-south-1:321039853697:Ec2-Creation-Alert',
        Message=email)
    return {
        'statusCode': 200,
    }
Note: This code triggers a notification when an instance changes from the stopped state to the running state or when a new instance is launched.