I have created a workflow like this:
The user requests an instance creation through an API Gateway endpoint.
The gateway invokes a Lambda function that executes the following code.
An RDP file is generated with the public DNS and given to the user so that they can connect.
import time
import boto3

ec2 = boto3.resource('ec2', region_name='us-east-1')
instances = ec2.create_instances(...)
instance = instances[0]
time.sleep(3)
instance.load()  # refresh attributes so public_dns_name is populated
return instance.public_dns_name
The problem with this approach is that the user has to wait almost 2 minutes before they are able to log in successfully. I'm totally okay with letting the Lambda run for that long by adding the following code:
instance.wait_until_running()
But unfortunately API Gateway has a 29-second timeout for Lambda integration. So even if I'm willing to wait, it wouldn't work. What's the easiest way to overcome this?
My approach for your scenario would be a CloudWatch Event Rule.
After creating the instance, the Lambda function must store a relation between the instance and the user, something like this:
Proposed table:
The table structure is up to you, but these are the most important columns.
-------------------------
| Instance_id | User_Id |
-------------------------
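In the first Lambda (the one behind API Gateway), the mapping can be written right after create_instances. A minimal sketch, assuming a DynamoDB table (the name instance_users is made up here) with Instance_id as its partition key:

```python
def build_owner_item(instance_id, user_id):
    # Keys match the proposed table columns
    return {"Instance_id": instance_id, "User_Id": user_id}

def save_instance_owner(instance_id, user_id, table_name="instance_users"):
    # boto3 is imported lazily so the helper above can be tested without AWS
    import boto3
    table = boto3.resource("dynamodb").Table(table_name)
    table.put_item(Item=build_owner_item(instance_id, user_id))
```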
Create a CloudWatch Event Rule that executes a Lambda function.
First, pick Event Type: EC2 Instance State-change Notification, then select Specific state(s): Running:
Second, pick the target: a Lambda function:
Lambda Function to send email to the user.
That Lambda function will receive the InstanceId. With that information, you can find the related User_Id and send the necessary information to the user. You can use the SDK to get information about your EC2 instance, for example its public_dns_name.
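A sketch of that second Lambda might look like this (the DynamoDB table name instance_users and its key are assumptions; the notification step is left as a stub):

```python
def extract_instance_id(event):
    # The CloudWatch event payload carries the instance id under "detail"
    return event["detail"]["instance-id"]

def lambda_handler(event, context):
    # boto3 is imported lazily so extract_instance_id can be tested without AWS
    import boto3
    instance_id = extract_instance_id(event)

    # public_dns_name is available once the instance is running
    dns_name = boto3.resource("ec2").Instance(instance_id).public_dns_name

    # Look up the owner stored at creation time (assumed table/key names)
    table = boto3.resource("dynamodb").Table("instance_users")
    user_id = table.get_item(Key={"Instance_id": instance_id})["Item"]["User_Id"]

    # Send dns_name to user_id via SES/SNS here, depending on your setup
```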
This is an example of the payload that the CloudWatch Event Rule notification will send:
{
  "version": "0",
  "id": "6a7e8feb-b491-4cf7-a9f1-bf3703467718",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "111122223333",
  "time": "2015-12-22T18:43:48Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:ec2:us-east-1:123456789012:instance/i-12345678"
  ],
  "detail": {
    "instance-id": "i-12345678",
    "state": "running"
  }
}
That way, you can send the public_dns_name once your instance is fully in service.
Hope it helps!
So I wanted to create an EventBridge event pattern that would trigger my Lambda to update the ECS cluster launch template AMIs to the latest one when this parameter updates. I tried using something like this:
{
  "source": ["aws.ssm"],
  "detail-type": ["Parameter Store Change"],
  "detail": {
    "operation": ["Update"],
    "name": ["/aws/service/ecs/optimized-ami/amazon-linux-2/recommended"]
  }
}
The problem is this never triggers.
Does anyone know what the event looks like when this parameter gets updated?
I'd also like to know, for events like this, where I can find the source events.
I had a hunch I could get it from the AWS CLI by describing past changes, but that doesn't work for public SSM parameters.
I expected the above event pattern to trigger my Lambda when the /aws/service/ecs/optimized-ami/amazon-linux-2/recommended parameter gets changed.
I have a lambda function deployed in us-east-1 which runs every time an EC2 instance is started.
The lambda function is triggered with the following EventBridge configuration:
{
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "source": [
    "aws.ec2"
  ],
  "detail": {
    "eventName": [
      "RunInstances"
    ]
  }
}
The lambda function is working great. Now, I'm looking to extend this so that my lambda function is triggered even when an EC2 instance is launched in a different region (e.g. us-east-2).
How can I achieve this?
One option is to put SNS as an event target and subscribe the Lambda to the SNS topic. SNS supports cross-region subscriptions.
Another option is to use cross-region event buses. You create a rule that forwards the event to another region and create another event rule in that region that invokes a Lambda. More info here: https://aws.amazon.com/blogs/compute/introducing-cross-region-event-routing-with-amazon-eventbridge/
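A rough sketch of the cross-region bus option with boto3 (the rule name, role ARN, and bus ARN are placeholders; the forwarding role must allow events:PutEvents on the target bus):

```python
import json

# Pattern matching the RunInstances call from the question
RUN_INSTANCES_PATTERN = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"eventName": ["RunInstances"]},
}

def forward_to_other_region(source_region, target_bus_arn, role_arn):
    # boto3 is imported lazily so the pattern above can be tested without AWS
    import boto3
    events = boto3.client("events", region_name=source_region)
    events.put_rule(
        Name="forward-run-instances",  # placeholder rule name
        EventPattern=json.dumps(RUN_INSTANCES_PATTERN),
    )
    # The target of the rule is the event bus in the other region
    events.put_targets(
        Rule="forward-run-instances",
        Targets=[{"Id": "cross-region-bus",
                  "Arn": target_bus_arn,
                  "RoleArn": role_arn}],
    )
```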
There is recently announced functionality that can help with cross-region use cases for AWS Lambda: https://aws.amazon.com/blogs/compute/introducing-cross-region-event-routing-with-amazon-eventbridge/
Amazon EventBridge is a great way to do cross-region (and cross-account) event processing.
I have a Lambda which is triggered by a CloudWatch event when VPN tunnels go down or up. I searched online but can't find a way to trigger this CloudWatch event.
I see an option for a test event, but what can I enter here to trigger an event that the tunnel is up or down?
You can look into CloudWatch Events and Event Patterns.
Events in Amazon CloudWatch Events are represented as JSON objects.
For more information about JSON objects, see RFC 7159. The following
is an example event:
{
  "version": "0",
  "id": "6a7e8feb-b491-4cf7-a9f1-bf3703467718",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "111122223333",
  "time": "2017-12-22T18:43:48Z",
  "region": "us-west-1",
  "resources": [
    "arn:aws:ec2:us-west-1:123456789012:instance/i-1234567890abcdef0"
  ],
  "detail": {
    "instance-id": "i-1234567890abcdef0",
    "state": "terminated"
  }
}
Also, you can pick the event you need from the AWS CloudWatch supported event types.
I believe in your scenario you don't need to pass any input data, as you must have built the logic to test the VPN tunnels' connectivity within the Lambda. You can remove that JSON from the test event and then run the test.
If you need to pass in some information as part of the input event, then follow the approach mentioned by @Adiii.
EDIT
The question is more clear through the comment which says
But question is how will I trigger the lambda? Lets say I want to
trigger it when tunnel is down? How will let lambda know tunnel is in
down state? – NoviceMe
This can be achieved by setting up a rule in CloudWatch to schedule the Lambda trigger at a periodic interval. More details here:
Tutorial: Schedule AWS Lambda Functions Using CloudWatch Events
Lambda does not currently have an invocation trigger that can monitor a VPN tunnel, so the only workaround is to poll the status from the Lambda.
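A sketch of such a polling Lambda, run on a schedule (the alerting step is stubbed out; VgwTelemetry is where describe_vpn_connections reports per-tunnel status):

```python
def down_tunnels(vpn_connections):
    # Collect (vpn_id, tunnel_ip) pairs for every tunnel that is not UP
    down = []
    for conn in vpn_connections:
        for telemetry in conn.get("VgwTelemetry", []):
            if telemetry["Status"] != "UP":
                down.append((conn["VpnConnectionId"],
                             telemetry["OutsideIpAddress"]))
    return down

def lambda_handler(event, context):
    # boto3 is imported lazily so down_tunnels can be tested without AWS
    import boto3
    connections = boto3.client("ec2").describe_vpn_connections()["VpnConnections"]
    for vpn_id, tunnel_ip in down_tunnels(connections):
        # Replace the print with an SNS publish or similar alert
        print(f"Tunnel {tunnel_ip} on {vpn_id} is DOWN")
```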
We have developed an AWS serverless Lambda application using .NET Core to perform operations on EC2 instances, say start or stop an EC2 instance, and integrated it with AWS API Gateway.
serverless.template in the .NET Core application:
"StartInstanceById": {
  "Type": "AWS::Serverless::Function",
  "Properties": {
    "Handler": "EC2_Monitoring_Serverless::EC2_Monitoring_Serverless.Functions::StartInstanceById",
    "Runtime": "dotnetcore2.1",
    "CodeUri": "",
    "MemorySize": 256,
    "Timeout": 30,
    "Role": "arn:aws:iam::2808xxxx1013:role/lamda_start_stop",
    "Policies": [ "AWSLambdaBasicExecutionRole" ],
    "Events": {
      "PutResource": {
        "Type": "Api",
        "Properties": {
          "Path": "/instances",
          "Method": "Get"
        }
      }
    }
  }
}
The above Lambda function works fine for starting an EC2 instance when I invoke the API Gateway URL.
For calling these APIs, we created an Angular 6 application and provided authentication using AWS Cognito user pools.
So the Cognito user logs into the website and gets all the EC2 information.
If the user wants to stop or start an EC2 instance, they click the relevant button, which invokes the corresponding API Gateway URL of the Lambda function, and it works fine.
Now the question is who performed that action. After much research on Stack Overflow and the AWS community forums about how to find out who started or stopped the EC2 instances, I found that AWS CloudTrail logs the information when a user starts or stops an instance.
So I created a trail, and I can see the logs in S3 buckets. But in every log I opened, I saw that the role "arn:aws:iam::2808xxxx1013:role/lamda_start_stop" is captured. I know this is because of the Lambda function. But I want to know who really stopped the instance.
Please advise how to capture the user details!
The reason the Lambda execution role shows up in CloudTrail is that it initiated the process of stopping the EC2 instance. Here the role is assumed (instead of the actual user).
To record your actual user, you need to add logging in your Lambda, which writes to CloudWatch Logs. You can get the actual user, or any other custom information, from those logs.
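For example, with a Cognito user pool authorizer on API Gateway, the caller's token claims are forwarded in the request context, so the Lambda can log who asked for the stop. A Python sketch of the idea (the .NET handler would read the same fields from the request):

```python
def caller_username(event):
    # With a Cognito user pool authorizer, API Gateway passes the
    # token claims through in the request context
    claims = event["requestContext"]["authorizer"]["claims"]
    return claims.get("cognito:username") or claims.get("username")

def lambda_handler(event, context):
    user = caller_username(event)
    # This print ends up in CloudWatch Logs alongside the Lambda's output
    print(f"StopInstance requested by {user}")
    # ... then stop the instance with boto3 as before ...
```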
Is there a way to automatically publish a message on SQS when an EC2 instance starts?
For example, could you use CloudWatch to set an alarm that fires whenever an instance starts up? All the alarms in CloudWatch appear to be related to a specific EC2 instance, rather than the EC2 service in general.
To better understand this question and offer a more accurate answer, further information is needed.
Are we talking about:
New instance created and started from any AMI ?
New instance created and started from a specific AMI?
Starting an existing instance that is just in the stopped state?
Or creating a new instance inside a scale group?
These all affect the way you would create your CloudWatch alarm.
For example, if it were an existing EC2 instance, you would use status checks as per:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-system-instance-status-check.html
Though if it were a brand new EC2 instance being created, you would need to use the more advanced CloudTrail log alarms as per:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cw_create_alarms.html
However, after that point it would follow the same basic logic:
Create an alarm that triggers an SNS topic as per:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/ConsoleAlarms.html
Have that SNS notifier publish to an SQS queue as per:
https://docs.aws.amazon.com/sns/latest/dg/SendMessageToSQS.html
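The SNS-to-SQS wiring above can be sketched with boto3; besides the subscription, the queue needs a policy allowing the topic to send to it (the ARNs here are placeholders):

```python
import json

def allow_sns_policy(queue_arn, topic_arn):
    # Queue policy letting the SNS topic deliver messages to the queue
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
        }],
    })

def wire_sns_to_sqs(topic_arn, queue_url, queue_arn):
    # boto3 is imported lazily so allow_sns_policy can be tested without AWS
    import boto3
    boto3.client("sqs").set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={"Policy": allow_sns_policy(queue_arn, topic_arn)},
    )
    boto3.client("sns").subscribe(
        TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
```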
As always though there are many ways to skin a cat.
If it were a critical event and I wanted the same typical response from every start event, I would personally consider bootstrap scripts pushed out from Puppet or Chef; that way a fast change for all events is centralised in a single script.
Step 1: Create a CloudWatch rule to notify on creation. Per the EC2 instance lifecycle, when the launch button is pressed the instance goes from the pending state to the running state, and the same happens when an instance is moved from the stopped state to the running state. So create the rule for the pending state.
Create a CloudWatch rule as specified in the image screenshot.
Step 2: Create a Step Function, because CloudTrail logs events in the account with a delay of at least 20 minutes. This step function is useful if you want the name of the user who created the instance.
{
  "StartAt": "Wait",
  "States": {
    "Wait": {
      "Type": "Wait",
      "Seconds": 1800,
      "Next": "Ec2-Alert"
    },
    "Ec2-Alert": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:ap-south-1:321039853697:function:EC2-Creation-Alert",
      "End": true
    }
  }
}
Step 3: Create an SNS topic for notification.
Step 4: Write a Lambda function to fetch the log from CloudTrail and get the name of the user who created the instance.
import boto3

def lambda_handler(event, context):
    cloudtrail = boto3.client('cloudtrail')
    sns = boto3.client('sns')
    instance_id = event["detail"]["instance-id"]

    # Find the most recent CloudTrail event for this instance
    response = cloudtrail.lookup_events(
        LookupAttributes=[
            {
                'AttributeKey': 'ResourceName',
                'AttributeValue': instance_id
            },
        ],
        MaxResults=1)

    # lookup_events returns the caller's name directly in the Username field
    creator = response['Events'][0]['Username']

    email = ("Hi All,\n\n\nThe user %s has created a new EC2 instance in the QA "
             "account and the instance id is %s\n\n\nThank you\n\n\nRegards,\nLambda"
             % (creator, instance_id))
    sns.publish(
        TopicArn='arn:aws:sns:ap-south-1:321039853697:Ec2-Creation-Alert',
        Message=email
    )
    return {
        'statusCode': 200
    }
Note: This code triggers a notification when an instance is changed from the stopped state to the running state or a new instance is launched.