SMS not delivered to India number using AWS SNS - amazon-web-services

You can also find this on the AWS thread: AWS Developer Forums: No SMS delivered on India phone number ...
var sns = new AWS.SNS({ "region": "ap-south-1" });
var params = {
  Message: 'this is a test message',
  PhoneNumber: '+91xxx'
};
sns.setSMSAttributes({
  attributes: {
    DefaultSMSType: 'Transactional'
  }
});
sns.publish(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log("success " + JSON.stringify(data)); // successful response
});
I got a MessageId in the response, but the SMS was never delivered.
{
  ResponseMetadata: { RequestId: '1d6dd652-fd57-5f7d-ad7f-XXXXX' },
  MessageId: '431efd91-7356-5e8e-9384-XXXX'
}

During research, I found out that I had not set the "Default Message Type" to Transactional in the SNS console. Once I set it to Transactional, I started receiving the SMS.
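Note that in the AWS SDK for JavaScript v2, calling sns.setSMSAttributes(params) without a callback only builds the request object; it is not sent until you pass a callback or call .send() or .promise() on it. Independent of the account-level default, the SMS type can also be set per message via MessageAttributes. A minimal sketch (the helper name is an example, not part of the SDK):

```javascript
// Sketch: setting the SMS type per message via MessageAttributes,
// so delivery does not depend on the account-level console default.
// buildTransactionalSmsParams is a hypothetical helper, not an SDK call.
function buildTransactionalSmsParams(phoneNumber, message) {
  return {
    PhoneNumber: phoneNumber,
    Message: message,
    MessageAttributes: {
      'AWS.SNS.SMS.SMSType': {
        DataType: 'String',
        StringValue: 'Transactional'
      }
    }
  };
}

const smsParams = buildTransactionalSmsParams('+91xxx', 'this is a test message');
// sns.publish(smsParams, callback) would then send it as a Transactional SMS.
console.log(smsParams.MessageAttributes['AWS.SNS.SMS.SMSType'].StringValue); // → Transactional
```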

Related

AWS Enabling CORS for my API triggering a Lambda

I managed to create an AWS Lambda function that does two things: writes to a DynamoDB table and sends an SMS to a mobile number. I then call this Lambda through an API Gateway POST call. It works great from the Test section in the AWS console, but it fails from both Postman and my own website. I added a callback to handle CORS in the Lambda and deployed it, then enabled CORS on my API via the console and deployed the API, but I still get errors:
Error via Postman call:
{
  "message": "Internal server error"
}
Error via my website (jQuery AJAX POST call):
Lambda calling failed: {"readyState":4,"responseText":"{\"message\": \"Internal server error\"}","responseJSON":{"message":"Internal server error"},"status":500,"statusText":"error"}
This is my Lambda code:
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB();
const SNS = new AWS.SNS();
const tableName = "#####";
let params = {
  PhoneNumber: 'mynumber####',
  Message: 'Someone wrote!'
};
exports.handler = (event, context, callback) => {
  dynamodb.putItem({
    "TableName": tableName,
    "Item": {
      "Id": {
        N: event.Id
      },
      "Type": {
        S: event.Type
      }
    }
  }, function(err, data) {
    if (err) {
      console.log('Error putting item into dynamodb failed: ' + err);
    }
    else {
      console.log('Success in writing, now starting to send SMS');
      return new Promise((resolve, reject) => {
        SNS.publish(params, function(err, data) {
          if (err) {
            console.log("Error in sending sms alarm");
            reject(err);
          }
          else {
            console.log("SMS alarm sent!");
            resolve(data);
          }
        })
      })
    }
  });
  callback(null, {
    statusCode: 200,
    headers: {
      "Access-Control-Allow-Headers": "Content-Type",
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "OPTIONS,POST,GET"
    },
    body: JSON.stringify('Hello from Lambda!'),
  });
};
What am I doing wrong? I don't think permissions on the Lambda are the problem here, since testing it in the console works for both writing to DynamoDB and sending the SMS to my cellphone.
If I click on the API Endpoint from here, I get the same error: {"message": "Internal server error"}
SOLUTION: instead of creating an AWS HTTP API, create an AWS REST API. It is more complex, but it offers finer-grained CORS configuration, letting you set the allowed origins, methods, and headers yourself.

How to use AWS EventBridge to reboot an instance when the environment is degraded

I'm trying to use EventBridge to reboot my degraded EC2 instance whenever the Beanstalk environment changes to Warn status. At the destination there is the option to call a Lambda function or use the reboot-instance API. My doubt is how to get the ID of only the degraded instance (taking into account that my environment has 2 instances).
When Elastic Beanstalk dispatches an event for a health status change, it looks like this:
{
  "version": "0",
  "id": "1234a678-1b23-c123-12fd3f456e78",
  "detail-type": "Health status change",
  "source": "aws.elasticbeanstalk",
  "account": "111122223333",
  "time": "2020-11-03T00:34:48Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:elasticbeanstalk:us-east-1:111122223333:environment/myApplication/myEnvironment"
  ],
  "detail": {
    "Status": "Environment health changed",
    "EventDate": 1604363687870,
    "ApplicationName": "myApplication",
    "Message": "Environment health has transitioned from Pending to Ok. Initialization completed 1 second ago and took 2 minutes.",
    "EnvironmentName": "myEnvironment",
    "Severity": "INFO"
  }
}
In your AWS Lambda function, you can call any Elastic Beanstalk API using the AWS SDK for your language of choice. With the AWS JavaScript SDK for Elastic Beanstalk, you can restart your environment with restartAppServer:
var params = {
  EnvironmentName: event.detail.EnvironmentName // based on the sample above
};
elasticbeanstalk.restartAppServer(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
The example above triggers a restart of all instances in the environment. To target a specific instance, use describe-instances-health, named describeInstancesHealth in the AWS JavaScript SDK:
var params = {
  AttributeNames: ["HealthStatus"],
  EnvironmentName: event.detail.EnvironmentName
};
elasticbeanstalk.describeInstancesHealth(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
Based on its response, you can filter out the instance that isn't OK and trigger a reboot by passing its instance ID to the EC2 API rebootInstances:
var params = {
  InstanceIds: [
    "i-1234567890abcdef5"
  ]
};
ec2.rebootInstances(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
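Putting the two calls together, the filtering step can be sketched as a small pure function over the describeInstancesHealth response (the InstanceHealthList shape follows the Elastic Beanstalk API; treating anything other than 'Ok' as unhealthy is an assumption you may want to adjust):

```javascript
// Sketch: pick the instance IDs to reboot from a describeInstancesHealth
// response. Any status other than 'Ok' is treated as unhealthy here.
function unhealthyInstanceIds(data) {
  return (data.InstanceHealthList || [])
    .filter((inst) => inst.HealthStatus !== 'Ok')
    .map((inst) => inst.InstanceId);
}

// Example response fragment (illustrative values):
const sample = {
  InstanceHealthList: [
    { InstanceId: 'i-aaa', HealthStatus: 'Ok' },
    { InstanceId: 'i-bbb', HealthStatus: 'Degraded' }
  ]
};
console.log(unhealthyInstanceIds(sample)); // → [ 'i-bbb' ]
```

The returned IDs can then be passed as InstanceIds to ec2.rebootInstances, so only the degraded instance is rebooted.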

AWS SES says email address is not verified, even though it's for ccAddress, not from?

I keep getting the following error in Express:
MessageRejected: Email address is not verified. The following identities failed the check in region EU-WEST-1: email#gmail.com
Here is my code:
// Set the region
AWS.config.update({ region: 'eu-west-1' });
// Create sendEmail params
var params = {
  Destination: { /* required */
    CcAddresses: [
      'email#gmail.com',
    ],
    ToAddresses: [
      'a#stack.overflow',
    ]
  },
  Message: { /* required */
    Body: { /* required */
      Html: {
        Charset: "UTF-8",
        Data: "HTML_FORMAT_BODY"
      },
      Text: {
        Charset: "UTF-8",
        Data: "TEXT_FORMAT_BODY"
      }
    },
    Subject: {
      Charset: 'UTF-8',
      Data: 'Test email'
    }
  },
  Source: 'address#email.com', /* required */
};
// Create the promise and SES service object
var sendPromise = new AWS.SES({ apiVersion: '2010-12-01' }).sendEmail(params).promise();
// Handle promise's fulfilled/rejected states
sendPromise.then(
  function(data) {
    res.send(data.MessageId);
  }).catch(
  function(err) {
    console.error(err, err.stack);
  });
a#stack.overflow and address#email.com are verified, but email#gmail.com is not. How can I send to users if I have to verify each of them? Am I using the wrong AWS service?
When you're using the SES sandbox, the addresses you send emails to must also be verified by SES - this is done to protect your wallet. SES won't require that once your account is moved to production mode.
See: Moving Out of the Amazon SES Sandbox - Amazon Simple Email Service
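While still in the sandbox, you can check programmatically which recipients are verified before sending. The filtering below is a sketch; the map it consumes follows the shape of the VerificationAttributes field returned by ses.getIdentityVerificationAttributes({ Identities: [...] }):

```javascript
// Sketch: given the VerificationAttributes map from
// ses.getIdentityVerificationAttributes, split recipients into
// verified and unverified lists.
function splitByVerification(recipients, verificationAttributes) {
  const verified = [];
  const unverified = [];
  for (const addr of recipients) {
    const attrs = verificationAttributes[addr];
    if (attrs && attrs.VerificationStatus === 'Success') verified.push(addr);
    else unverified.push(addr);
  }
  return { verified, unverified };
}

// Illustrative response fragment: only one identity is verified.
const sampleAttrs = {
  'a#stack.overflow': { VerificationStatus: 'Success' }
};
console.log(splitByVerification(['a#stack.overflow', 'email#gmail.com'], sampleAttrs));
```

In the sandbox you could then send only to the verified list, or surface the unverified addresses as an error instead of letting sendEmail reject.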

Permission Issue at an AWS API using Lambda

I'm testing my newly deployed AWS API using https://www.apitester.com/.
As you can see, I can't access the API. The API is deployed, and the Lambda code looks as follows:
const AWS = require('aws-sdk');
var bucket = new AWS.S3();
exports.handler = (event, context, callback) => {
  let data = JSON.parse(event.body);
  var params = {
    "Body": data,
    "Bucket": "smartmatressbucket",
    // "Key": filePath
  };
  bucket.upload(params, function(err, data) {
    if (err) {
      callback(err, null);
    } else {
      let response = {
        "statusCode": 200,
        "headers": {
          "my_header": "my_value"
        },
        "body": JSON.stringify(data),
        "isBase64Encoded": false
      };
      callback(null, response);
    }
  });
};
Looking at the response log, it seems API Gateway returns a "ForbiddenException". The most likely reason is an incorrect API URL (e.g. https://ogk2hm09j0.execute-api.eu-central-1.amazonaws.com/).
Suppose you attach the Lambda function to a GET method on a resource named "resourceA", and you deploy the API to a stage named "dev". Then the correct URL is https://ogk2hm09j0.execute-api.eu-central-1.amazonaws.com/dev/resourceA
But looking at the API URL in the logs, it seems the stage name or the resource name is not specified.
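The invoke URL for a deployed REST API follows a fixed pattern, so it can be built from the API ID, region, stage, and resource path. A sketch (the helper name is an example; the values reuse the ones from this question):

```javascript
// Sketch: build the API Gateway invoke URL from its parts. The pattern is
// https://{api-id}.execute-api.{region}.amazonaws.com/{stage}/{resource}
function invokeUrl(apiId, region, stage, resourcePath) {
  return `https://${apiId}.execute-api.${region}.amazonaws.com/${stage}/${resourcePath}`;
}

console.log(invokeUrl('ogk2hm09j0', 'eu-central-1', 'dev', 'resourceA'));
// → https://ogk2hm09j0.execute-api.eu-central-1.amazonaws.com/dev/resourceA
```

Calling the bare base URL, without the stage and resource segments, is exactly what produces the ForbiddenException / "Internal server error" responses described above.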

Adding s3 object tags using a lambda function?

The documentation describes how to tag an s3 object via the console. How do we do it programmatically with a lambda function?
If you are using JavaScript in your Lambda function, you can use s3.putObjectTagging:
Documentation Snippet
/* The following example adds tags to an existing object. */
var params = {
  Bucket: "examplebucket",
  Key: "HappyFace.jpg",
  Tagging: {
    TagSet: [
      {
        Key: "Key3",
        Value: "Value3"
      },
      {
        Key: "Key4",
        Value: "Value4"
      }
    ]
  }
};
s3.putObjectTagging(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
  /*
  data = {
    VersionId: "null"
  }
  */
});
You can also use ResourceGroupsTaggingAPI for that:
const resourcegroupstaggingapi = new AWS.ResourceGroupsTaggingAPI();
resourcegroupstaggingapi.tagResources({
  ResourceARNList: [
    'arn:aws:s3<...>',
    'arn:aws:s3<...>'
  ],
  Tags: {
    'SomeTagKey': 'Some tag value',
    'AnotherTagKey': 'Another tag value'
  }
}, (err, result) => {
  // callback
});
If you're using the AWS SDK for Node to interact with S3, it can be done by simply adding the Tagging field to the object you put in the S3 bucket.
// Create an object to be sent to S3
var params = {
  Body: <Binary String>,
  Bucket: "examplebucket",
  Key: "HappyFace.jpg",
  Tagging: "key1=value1&key2=value2"
};
// Put the params object to S3
s3.putObject(params, function(err, data) {
  if (err) {
    console.log(err, err.stack); // an error occurred
  } else {
    console.log(data); // successful response
  }
});
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html
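The Tagging field in putObject takes a URL-encoded "key=value&key=value" query string rather than a TagSet array. A sketch of building that string from a plain object (the helper name is an example):

```javascript
// Sketch: convert a plain key/value object into the URL-encoded
// "key=value&..." Tagging string that s3.putObject expects.
function toTaggingString(tags) {
  return Object.entries(tags)
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
    .join('&');
}

console.log(toTaggingString({ key1: 'value1', key2: 'value2' }));
// → key1=value1&key2=value2
```

Encoding each key and value keeps the string valid when tag values contain spaces or special characters.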
You can create a Lambda function that automatically tags all assets in an Amazon S3 bucket. After you execute the Lambda function, it creates tags and applies them to each image.
For more information, see Creating an Amazon Web Services Lambda function that tags digital assets located in Amazon S3 buckets.