How to use AWS EventBridge to reboot an instance when the environment is degraded - amazon-web-services

I'm trying to use EventBridge to reboot my degraded EC2 instance whenever the Elastic Beanstalk environment changes to a warning status. At the target there is the option to call a Lambda function or use the RebootInstances API. My question is how to get the ID of only the degraded instance, taking into account that my environment has 2 instances.

When Elastic Beanstalk dispatches an event for a health status change, it looks like this:
{
  "version": "0",
  "id": "1234a678-1b23-c123-12fd3f456e78",
  "detail-type": "Health status change",
  "source": "aws.elasticbeanstalk",
  "account": "111122223333",
  "time": "2020-11-03T00:34:48Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:elasticbeanstalk:us-east-1:111122223333:environment/myApplication/myEnvironment"
  ],
  "detail": {
    "Status": "Environment health changed",
    "EventDate": 1604363687870,
    "ApplicationName": "myApplication",
    "Message": "Environment health has transitioned from Pending to Ok. Initialization completed 1 second ago and took 2 minutes.",
    "EnvironmentName": "myEnvironment",
    "Severity": "INFO"
  }
}
In your AWS Lambda function you can call any Elastic Beanstalk command, using the AWS SDK for your language of choice.
Using the AWS JavaScript SDK for Elastic Beanstalk, you can restart your environment with restartAppServer:
var AWS = require('aws-sdk');
var elasticbeanstalk = new AWS.ElasticBeanstalk();

var params = {
  EnvironmentName: event.detail.EnvironmentName // based on the sample event above
};
elasticbeanstalk.restartAppServer(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
The example above triggers a restart of all instances in the environment.
To target a specific instance, you can use describe-instances-health, which is named describeInstancesHealth in the AWS JavaScript SDK:
var params = {
  AttributeNames: ["HealthStatus"],
  EnvironmentName: event.detail.EnvironmentName
};
elasticbeanstalk.describeInstancesHealth(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
Based on its response, you can filter out the instances that are not OK and trigger a reboot by passing their instance IDs to the EC2 API rebootInstances:
var ec2 = new AWS.EC2();

var params = {
  InstanceIds: [
    "i-1234567890abcdef5"
  ]
};
ec2.rebootInstances(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
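Putting it together, a minimal Lambda handler could look roughly like the sketch below. It assumes the describeInstancesHealth response lists each instance in InstanceHealthList with InstanceId and HealthStatus fields, and that anything other than "Ok" should be rebooted; adjust the filter to the health states you actually want to act on.

// Minimal sketch of a handler tying the pieces together. The exact filtering
// condition ("Degraded", "Severe", etc.) is an assumption to adapt to your needs.
var AWS = require('aws-sdk');
var elasticbeanstalk = new AWS.ElasticBeanstalk();
var ec2 = new AWS.EC2();

exports.handler = async function(event) {
  var health = await elasticbeanstalk.describeInstancesHealth({
    AttributeNames: ['HealthStatus'],
    EnvironmentName: event.detail.EnvironmentName
  }).promise();

  // Keep only instances whose health is not "Ok"
  var unhealthyIds = (health.InstanceHealthList || [])
    .filter(function(i) { return i.HealthStatus !== 'Ok'; })
    .map(function(i) { return i.InstanceId; });

  if (unhealthyIds.length > 0) {
    await ec2.rebootInstances({ InstanceIds: unhealthyIds }).promise();
    console.log('Rebooted instances:', unhealthyIds);
  } else {
    console.log('No degraded instances found');
  }
};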

Related

AWS Enabling CORS for my API triggering a Lambda

I managed to create an AWS Lambda function that does two things: it writes to a DynamoDB table and sends an SMS to a mobile number. I then call this Lambda through an API Gateway POST call. It works great from the Test section in the AWS console, but it gives an error both from Postman and from my own website. I added a callback to handle CORS in the Lambda, enabled CORS on my API via the console, and deployed everything, but I still get errors:
Error via Postman call:
{
  "message": "Internal server error"
}
Error via my website (jQuery AJAX POST call):
Lambda calling failed: {"readyState":4,"responseText":"{"message": "Internal server error"}","responseJSON":{"message":"Internal server error"},"status":500,"statusText":"error"}
This is my Lambda code:
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB();
const SNS = new AWS.SNS();
const tableName = "#####";
let params = {
  PhoneNumber: 'mynumber####',
  Message: 'Someone wrote!'
};
exports.handler = (event, context, callback) => {
  dynamodb.putItem({
    "TableName": tableName,
    "Item": {
      "Id": {
        N: event.Id
      },
      "Type": {
        S: event.Type
      }
    }
  }, function(err, data) {
    if (err) {
      console.log('Error putting item into dynamodb failed: ' + err);
    }
    else {
      console.log('Success in writing, now starting to send SMS');
      return new Promise((resolve, reject) => {
        SNS.publish(params, function(err, data) {
          if (err) {
            console.log("Error in sending sms alarm");
            reject(err);
          }
          else {
            console.log("SMS alarm sent!");
            resolve(data);
          }
        });
      });
    }
  });
  callback(null, {
    statusCode: 200,
    headers: {
      "Access-Control-Allow-Headers": "Content-Type",
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "OPTIONS,POST,GET"
    },
    body: JSON.stringify('Hello from Lambda!'),
  });
};
What am I doing wrong? I don't think Lambda permissions are the problem here, since testing it from the console works for both writing to DynamoDB and sending the SMS to my phone.
If I click on the API Endpoint from here I get the same error: {"message": "Internal server error"}
SOLUTION: instead of creating an AWS HTTP API, create an AWS REST API. It is more complex, but it offers much more control over CORS, letting you configure the allowed origins, methods, and headers yourself.
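As a side note, independent of the HTTP-vs-REST API choice: in the handler above the callback fires before the DynamoDB write and the SNS publish have finished, and the Promise created inside the putItem callback is never awaited. A sketch of an async version that waits for both calls before returning the CORS response (same table, item shape, and SMS params as the question) might look like this:

// Sketch only: same logic as the original handler, rewritten with async/await
// and the SDK's .promise() so the response is returned after both calls complete.
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB();
const sns = new AWS.SNS();
const tableName = "#####"; // placeholder from the original question

exports.handler = async (event) => {
  await dynamodb.putItem({
    TableName: tableName,
    Item: {
      Id: { N: event.Id },
      Type: { S: event.Type }
    }
  }).promise();

  await sns.publish({
    PhoneNumber: 'mynumber####', // placeholder from the original question
    Message: 'Someone wrote!'
  }).promise();

  return {
    statusCode: 200,
    headers: {
      "Access-Control-Allow-Headers": "Content-Type",
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "OPTIONS,POST,GET"
    },
    body: JSON.stringify('Hello from Lambda!')
  };
};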

Make a cross account call to Redshift Data API

Summary of problem:
We have an AWS Redshift cluster in Account A; it has a database called 'products'.
In Account B we have a Lambda function which needs to execute a SQL statement against 'products' using the Redshift Data API.
We have set up a new secret in AWS Secrets Manager containing the Redshift cluster credentials. This secret has been shared with Account B, and we've confirmed that Account B can access it from AWS Secrets Manager.
When we call the Redshift Data API action 'executeStatement' we get the following error:
ValidationException: Cluster doesn't exist in this region.
at Request.extractError (C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\protocol\json.js:52:27)
at Request.callListeners (C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\sequential_executor.js:106:20)
at Request.emit (C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\sequential_executor.js:78:10)
at Request.emit (C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\request.js:688:14)
at Request.transition (C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\request.js:22:10)
at AcceptorStateMachine.runTo (C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\state_machine.js:14:12)
at C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\state_machine.js:26:10
at Request.<anonymous> (C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\request.js:38:9)
at Request.<anonymous> (C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\request.js:690:12)
at Request.callListeners (C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\sequential_executor.js:116:18)
The error message suggests it's perhaps not going to the correct account; since the secret contains this information, I would have expected it to know.
Code Sample:
Here's my code:
var redshiftdata = new aws.RedshiftData({ region: 'eu-west-2' });
const params: aws.RedshiftData.ExecuteStatementInput = {
  ClusterIdentifier: '<clusteridentifier>',
  Database: 'products',
  SecretArn: 'arn:aws:secretsmanager:<region>:<accountNo>:secret:<secretname>',
  Sql: `select * from product_table where id = xxx`,
  StatementName: 'statement-name',
  WithEvent: true
};
redshiftdata.executeStatement(params,
  async function(err, data) {
    if (err) console.log(err, err.stack);
    else {
      const resultParams: aws.RedshiftData.GetStatementResultRequest = { Id: data.Id! };
      redshiftdata.getStatementResult(resultParams, function(err, data) {
        if (err) console.log(err, err.stack);
        else console.dir(data, { depth: null });
      });
    }
  });
Any suggestions or pointers would be really appreciated.
Thanks for the answer Parsifal. Here's a code snippet of the working solution.
import aws from "aws-sdk";

var roleToAssume = {
  RoleArn: 'arn:aws:iam::<accountid>:role/<rolename>',
  RoleSessionName: 'example',
  DurationSeconds: 900
};
var sts = new aws.STS({ region: '<region>' });
sts.assumeRole(roleToAssume, function(err, data) {
  if (err) {
    console.log(err, err.stack);
  }
  else {
    aws.config.update({
      accessKeyId: data.Credentials?.AccessKeyId,
      secretAccessKey: data.Credentials?.SecretAccessKey,
      sessionToken: data.Credentials?.SessionToken
    });
    // Redshift code here...
  }
});
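Where the "// Redshift code here..." placeholder sits, one option is to pass the temporary credentials to the Data API client explicitly instead of mutating the global config. This is only a sketch, reusing the cluster identifier, database, and secret ARN placeholders from the question:

// Sketch: create the RedshiftData client directly from the assumed-role credentials.
var redshiftdata = new aws.RedshiftData({
  region: 'eu-west-2',
  accessKeyId: data.Credentials?.AccessKeyId,
  secretAccessKey: data.Credentials?.SecretAccessKey,
  sessionToken: data.Credentials?.SessionToken
});

redshiftdata.executeStatement({
  ClusterIdentifier: '<clusteridentifier>',
  Database: 'products',
  SecretArn: 'arn:aws:secretsmanager:<region>:<accountNo>:secret:<secretname>',
  Sql: 'select * from product_table where id = xxx'
}, function(err, result) {
  if (err) console.log(err, err.stack);
  else console.log('Statement id:', result.Id);
});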

Creating API key for Usage Plan from AWS Lambda

I would like to create a new API key from Lambda. I have a usage plan for my API Gateway API, created with CloudFormation like this:
MyApi:
  Type: AWS::Serverless::Api
  Properties:
    Auth:
      UsagePlan:
        UsagePlanName: MyUsagePlan
        CreateUsagePlan: PER_API
        ...
...
Using this as a reference https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/APIGateway.html
I guess the process in the lambda should be like this:
- createApiKey
- getUsagePlan
- createUsagePlanKey
In the Lambda, I have the MyApi ID and I'm trying to fetch the API:
var apiGateway = new AWS.APIGateway({ region: region });
const restApi = await new Promise((resolve, reject) => {
  apiGateway.getRestApi({ restApiId: MYAPI_ID }, function(err, data) {
    if (err) {
      console.log('getRestApi err', err, err.stack);
      reject(err);
    } else {
      console.log('getRestApi', data);
      resolve(data);
    }
  });
});
But this times out in my Lambda.
If I try to input the values manually, it times out as well:
const keyParams = {
  keyId: 'xxxxxxxx',
  keyType: 'API_KEY',
  usagePlanId: 'yyyyyyyy'
};
const apiKey = await new Promise((resolve, reject) => {
  apiGateway.createUsagePlanKey(keyParams, function(err, data) {
    if (err) {
      console.log('createUsagePlanKey err', err, err.stack);
      reject(err);
    } else {
      console.log('createUsagePlanKey', data);
      resolve(data);
    }
  });
});
Why does every call to the API time out, with nothing printed by console.log? Is my approach OK, or how should I create the new API key for a user?
Edit: the timeout for the Lambdas is 10 seconds and they run in a VPC.
It sounds like you probably haven't configured your VPC to allow your Lambda function to access resources (like the AWS API) that exist outside the VPC. First, is it really necessary to run the function inside a VPC? If not then removing it from the VPC should fix the issue.
If it is necessary to run the function in a VPC, then you will need to place your Lambda function inside a private subnet with a route to a NAT Gateway, or configure a VPC endpoint for the AWS services it needs to access.
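Once the function can reach the API Gateway endpoints, the createApiKey / createUsagePlanKey flow sketched in the question could look roughly like the following. This is only a sketch: the usage plan ID and the key naming scheme are assumptions (here taken from environment variables and the event), not values from the original post.

// Sketch: create an API key and attach it to an existing usage plan.
// USAGE_PLAN_ID and the key name are hypothetical; adapt to your setup.
const AWS = require('aws-sdk');
const apiGateway = new AWS.APIGateway({ region: process.env.AWS_REGION });

exports.handler = async (event) => {
  // 1. Create the API key itself
  const apiKey = await apiGateway.createApiKey({
    name: `key-for-${event.userId}`, // hypothetical naming scheme
    enabled: true
  }).promise();

  // 2. Associate the new key with the usage plan
  await apiGateway.createUsagePlanKey({
    keyId: apiKey.id,
    keyType: 'API_KEY',
    usagePlanId: process.env.USAGE_PLAN_ID
  }).promise();

  return { statusCode: 200, body: JSON.stringify({ apiKeyId: apiKey.id }) };
};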

SMS not delivered to India number using AWS SNS

You can also find this on the AWS thread: AWS Developer Forums: No SMS delivered on India phone number ...
var sns = new AWS.SNS({ "region": "ap-south-1" });
var params = {
  Message: 'this is a test message',
  PhoneNumber: '+91xxx'
};
sns.setSMSAttributes({
  attributes: {
    DefaultSMSType: 'Transactional'
  }
});
sns.publish(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log("success " + JSON.stringify(data)); // successful response
});
I got the MessageId in the response, but the SMS never got delivered.
{
  ResponseMetadata: { RequestId: '1d6dd652-fd57-5f7d-ad7f-XXXXX' },
  MessageId: '431efd91-7356-5e8e-9384-XXXX'
}
During my research, I found out that I hadn't set the "Default message type" to Transactional in the console (see the image). Once I set it to Transactional, I started receiving the SMS.
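The same setting can also be applied from code. As a sketch: in SDK v2 a request made without a callback is not dispatched until .promise() or .send() is called, so the setSMSAttributes call in the snippet above never actually runs; alternatively, the type can be set per message via the AWS.SNS.SMS.SMSType message attribute:

// Sketch: dispatch the setSMSAttributes request explicitly, then publish a
// Transactional SMS. The per-message attribute overrides the account default.
var AWS = require('aws-sdk');
var sns = new AWS.SNS({ region: 'ap-south-1' });

sns.setSMSAttributes({
  attributes: { DefaultSMSType: 'Transactional' }
}).promise()
  .then(function() {
    return sns.publish({
      Message: 'this is a test message',
      PhoneNumber: '+91xxx', // placeholder from the question
      MessageAttributes: {
        'AWS.SNS.SMS.SMSType': { DataType: 'String', StringValue: 'Transactional' }
      }
    }).promise();
  })
  .then(function(data) { console.log('success ' + JSON.stringify(data)); })
  .catch(function(err) { console.log(err, err.stack); });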

Access all EC2 cross-region via Lambda

I have a Lambda function for automatic AMI backups. Is it possible to execute the Lambda across regions to take automatic backups of all the EC2 instances in my account?
That is, one Lambda function execution for all EC2 instances across regions.
var aws = require('aws-sdk');
aws.config.region = 'us-east-1','ap-south-1','eu-central-1';
var ec2 = new aws.EC2();
var now = new Date();
date = now.toISOString().substring(0, 10);
hours = now.getHours();
minutes = now.getMinutes();
exports.handler = function(event, context) {
  var instanceparams = {
    Filters: [{
      Name: 'tag:Backup',
      Values: [
        'yes'
      ]
    }]
  };
  ec2.describeInstances(instanceparams, function(err, data) {
    if (err) console.log(err, err.stack);
    else {
      for (var i in data.Reservations) {
        for (var j in data.Reservations[i].Instances) {
          instanceid = data.Reservations[i].Instances[j].InstanceId;
          nametag = data.Reservations[i].Instances[j].Tags;
          for (var k in data.Reservations[i].Instances[j].Tags) {
            if (data.Reservations[i].Instances[j].Tags[k].Key == 'Name') {
              name = data.Reservations[i].Instances[j].Tags[k].Value;
            }
          }
          console.log("Creating AMIs of the Instance: ", name);
          var imageparams = {
            InstanceId: instanceid,
            Name: name + "_" + date + "_" + hours + "-" + minutes,
            NoReboot: true
          };
          ec2.createImage(imageparams, function(err, data) {
            if (err) console.log(err, err.stack);
            else {
              image = data.ImageId;
              console.log(image);
              var tagparams = {
                Resources: [image],
                Tags: [{
                  Key: 'DeleteOn',
                  Value: 'yes'
                }]
              };
              ec2.createTags(tagparams, function(err, data) {
                if (err) console.log(err, err.stack);
                else console.log("Tags added to the created AMIs");
              });
            }
          });
        }
      }
    }
  });
};
where aws.config.region is the region configuration. It only works for the current region (the one in which the Lambda is deployed).
This line:
var ec2 = new aws.EC2();
connects to the Amazon EC2 service in the region where the Lambda function is running.
You can modify it to connect to another region:
var ec2 = new AWS.EC2({apiVersion: '2006-03-01', region: 'us-west-2'});
Thus, your program could loop through a list of regions (from ec2.describeRegions), creating a new EC2 client for the given region, then running the code you already have.
See: Setting the AWS Region - AWS SDK for JavaScript
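A rough sketch of that loop is below. It assumes the backup logic from the question has been wrapped into a function such as backupRegion(ec2) that takes the per-region client; that wrapper name is illustrative, not from the original code.

// Sketch: enumerate regions, then run the existing backup logic once per region
// with a region-specific EC2 client. backupRegion() stands in for the
// describeInstances/createImage/createTags code above.
var aws = require('aws-sdk');

exports.handler = function(event, context) {
  var globalEc2 = new aws.EC2({ region: 'us-east-1' });
  globalEc2.describeRegions({}, function(err, data) {
    if (err) return console.log(err, err.stack);
    data.Regions.forEach(function(region) {
      var regionalEc2 = new aws.EC2({ region: region.RegionName });
      backupRegion(regionalEc2); // hypothetical wrapper around the code above
    });
  });
};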
In your Lambda role, you need to add a policy that gives the Lambda function the necessary permissions to access EC2 in the different accounts. Typically you can add the ARNs of the EC2 instances you want access to, or you can specify "*", which grants permissions to all instances.
Also, in the other accounts where EC2 instances are running, you need to add an IAM policy that gives access to your Lambda role; note that you need to provide the Lambda role ARN.
This way your Lambda role will have a policy to access EC2, and the cross-account EC2 will have a policy that grants access to the Lambda role.
Without this in place you might need to do the heavy lifting of configuring the IPs of each EC2 instance in each account.
Yes, and you also need to point the EC2 object to the region where the instance is running.
Any code (including a Lambda function) can create a client that connects to a different region.