How to call a service in a different region using Lambda and the AWS SDK - amazon-web-services

I'm trying to call the Firehose PutRecord operation using the AWS SDK from a Lambda function (Node.js runtime) located in a different region. It works when the Lambda function and the Firehose delivery stream are in the same region, but fails when the function is in another region. Is there a Firehose setting, or something in the IAM role attached to it, that would allow these cross-region calls?

You just specify the region name when constructing the SDK client. For example, the following should be enough in Python:
import boto3
client = boto3.client('firehose', region_name='us-west-2')
client.put_record(...)
No other special settings should be required.
In Node.js, it would be:
const AWS = require('aws-sdk');
const firehose = new AWS.Firehose({ region: 'us-west-2' });

The problem was that I had set the region incorrectly. Instead of specifying the region just for the Firehose client, I needed to set it globally for the SDK.
Instead of:
const AWS = require('aws-sdk');
const firehose = new AWS.Firehose({ region: 'us-east-1' });
Using:
const AWS = require('aws-sdk');
AWS.config.update({ region: 'us-east-1' });
const firehose = new AWS.Firehose();
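For reference, a minimal sketch of a cross-region handler (the delivery stream name here is hypothetical); note that the Lambda execution role still needs firehose:PutRecord on the target stream's ARN, which includes the stream's region:
const AWS = require('aws-sdk');
// Pin the client to the delivery stream's region, not the Lambda's
const firehose = new AWS.Firehose({ region: 'us-west-2' });
exports.handler = async (event) => {
  return firehose.putRecord({
    DeliveryStreamName: 'my-delivery-stream', // hypothetical stream name
    Record: { Data: JSON.stringify(event) + '\n' },
  }).promise();
};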

Related

Avoid setting AWS credentials for local development with localstack on node

Using the aws-sdk for Node, I am initializing SQS:
sqs = new AWS.SQS({
  region: 'us-east-1',
});
In the prod environment this works great because the role running this has the required permissions to connect with SQS. However, running locally is a problem because those prod permissions are not applied to my local dev environment.
I can fix this problem by adding the keys in explicitly:
sqs = new AWS.SQS({
  region: 'us-east-1',
  accessKeyId: MY_AWS_ACCESS_KEY_ID,
  secretAccessKey: MY_AWS_SECRET_ACCESS_KEY,
});
But for local development, I don't make any requests to AWS since I'm using a localstack queue.
How can I initialize AWS.SQS so that it continues to function in prod without specifying the AWS keys for local development?
The AWS SDK and CLI read credentials from multiple locations in a well-defined order.
For example, you can create a local credentials file, and the SQS client will automatically pick up credentials from it, without any changes to your original production code.
% cat ~/.aws/credentials
[default]
aws_access_key_id=AAA
aws_secret_access_key=BBB
If you have multiple environments, you can specify them in this file by name. The SDK and CLI read the $AWS_PROFILE environment variable and use the named profile from your credentials file (or [default] if the variable is unset).
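For example, a credentials file with a named profile might look like this (the profile name and key values are placeholders):
% cat ~/.aws/credentials
[default]
aws_access_key_id=AAA
aws_secret_access_key=BBB
[localdev]
aws_access_key_id=CCC
aws_secret_access_key=DDD
% AWS_PROFILE=localdev node app.js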
I am not using AWS resources when developing locally, so it should not be necessary to provide AWS key credentials at all.
If you want to work around this problem, you can just set bogus values.
const SQS = new AWS.SQS({
  region: 'us-east-1',
  accessKeyId: 'na',
  secretAccessKey: 'na',
});
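Since the local queue lives in LocalStack rather than AWS, another option is to point the client at the LocalStack endpoint only when a local environment variable is set. A sketch, assuming LocalStack's default edge port (your port may differ):
const AWS = require('aws-sdk');
const sqs = new AWS.SQS({
  region: 'us-east-1',
  // e.g. LOCALSTACK_ENDPOINT=http://localhost:4566 locally; unset in prod,
  // where the SDK falls back to the role's credentials as usual.
  ...(process.env.LOCALSTACK_ENDPOINT && {
    endpoint: process.env.LOCALSTACK_ENDPOINT,
    accessKeyId: 'test', // LocalStack accepts arbitrary credentials
    secretAccessKey: 'test',
  }),
});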

How to configure AWS Amplify's Storage.put to use a transfer accelerated s3 bucket domain?

I've enabled S3 Transfer Acceleration using Cloudformation.
The documentation says that after enabling it, developers need to point their clients to use the new accelerated domain name.
E.g. from mybucket.s3.us-east-1.amazonaws.com to bucketname.s3-accelerate.amazonaws.com.
However, AWS Amplify's Storage.put method is using the bucket name defined during configuration like so:
Amplify.configure({
  Storage: {
    AWSS3: {
      bucket: AWS_BUCKET_NAME,
      region: AWS_REGION
    }
  }
})
Since there is no domain name here, but only a bucket name, how does one set it to access the accelerated endpoint instead?
It seems to me that Amplify Storage doesn't support this configuration out of the box, so if you want to use Transfer Acceleration you will need to use the standard S3 client for JavaScript, like so:
// obtain credentials from cognito to make uploads to s3...
let albumBucketName = "BUCKET_NAME";
let bucketRegion = "REGION";
let IdentityPoolId = "IDENTITY_POOL_ID";
AWS.config.update({
  region: bucketRegion,
  credentials: new AWS.CognitoIdentityCredentials({
    IdentityPoolId: IdentityPoolId
  })
});
// configure the S3 client to use accelerate - note useAccelerateEndpoint flag
const options = {
  signatureVersion: 'v4',
  region: bucketRegion, // same as your bucket
  endpoint: new AWS.Endpoint('your-bucket-name.s3-accelerate.amazonaws.com'),
  useAccelerateEndpoint: true,
};
const s3 = new AWS.S3(options);
// then use the client...
// ...
Reference for the class AWS.S3: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html
I was also struggling with this and stumbled on how to enable it with Storage.put:
Specify your normal bucket name, as you normally would.
In the options object for Storage.put, set useAccelerateEndpoint: true (which I took from the answer above).
If you do a test, and look at the network console for the Chrome Developer Tools, you will see that Amplify specifies the correct path for the accelerated endpoint.
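Putting that together, a minimal sketch (the key and the file variable are hypothetical):
import { Storage } from 'aws-amplify';
// Bucket and region come from the Amplify.configure call shown in the question;
// the only change is the per-call option below.
async function upload(file) {
  return Storage.put('photos/example.jpg', file, {
    useAccelerateEndpoint: true // request goes to <bucket>.s3-accelerate.amazonaws.com
  });
}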

Lambda function calling MediaConvert SDK to describeEndpoints timeout

I am attempting to call describeEndpoints of the MediaConvert SDK, but it seems to time out. Why could that be? I already gave my Lambda function admin access, and I set the timeout to 30 s, which should be more than sufficient, but it still fails.
const AWS = require('aws-sdk');
const util = require('util');
async function test() {
  let mediaconvert = new AWS.MediaConvert();
  const describeEndpoints = util
    .promisify(mediaconvert.describeEndpoints)
    .bind(mediaconvert);
  return await describeEndpoints();
}
Have you launched the Lambda in a VPC? If so, check that its subnet routes through a NAT gateway; a Lambda function in a subnet whose route table points at an internet gateway has no outbound internet access, so calls to public AWS endpoints time out.
Alternatively, skip the describeEndpoints call and construct the client with your account's regional endpoint directly:
const mediaConvert = new AWS.MediaConvert({
  endpoint: 'MEDIACONVERT REGIONAL API ENDPOINT',
});
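A sketch of that approach, assuming the account-specific endpoint (discoverable once with aws mediaconvert describe-endpoints from a machine with internet access) is stored in an environment variable:
const AWS = require('aws-sdk');
// e.g. https://abcd1234.mediaconvert.us-east-1.amazonaws.com (hypothetical)
const mediaConvert = new AWS.MediaConvert({
  endpoint: process.env.MEDIACONVERT_ENDPOINT,
});
exports.handler = async () => {
  // Calls now go straight to the regional endpoint, skipping the
  // describeEndpoints round trip that was timing out.
  return mediaConvert.listJobs({ MaxResults: 5 }).promise();
};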

Why am I getting this error? UnknownEndpoint: Inaccessible host: `devicefarm.us-east-1.amazonaws.com'

I'm trying to ListProjects in AWS Device Farm.
Here's my code:
const AWS = require('aws-sdk');
AWS.config.update({ region:'us-east-1' });
const credentials = new AWS.SharedIniFileCredentials({ profile: '***' });
AWS.config.credentials = credentials;
const devicefarm = new AWS.DeviceFarm();
async function run() {
  let projects = await devicefarm.listProjects().promise();
  console.log(projects);
}
run();
I'm getting this error:
UnknownEndpoint: Inaccessible host: `devicefarm.us-east-1.amazonaws.com'.
This service may not be available in the `us-east-1' region.
According to http://docs.amazonaws.cn/en_us/general/latest/gr/rande.html, Device Farm is only available in us-west-2?
Changing AWS.config.update({ region:'us-east-1' }); to AWS.config.update({ region:'us-west-2' }); worked:
Working code:
const AWS = require('aws-sdk');
AWS.config.update({ region:'us-west-2' });
var credentials = new AWS.SharedIniFileCredentials({ profile: '***' });
AWS.config.credentials = credentials;
var devicefarm = new AWS.DeviceFarm();
async function run() {
  let projects = await devicefarm.listProjects().promise();
  console.log(projects);
}
run();
I faced the same issue and realised that uploads were failing because my internet connection was too slow and DNS resolution of the bucket URL timed out. A slow connection is not the only possible cause of this error, though: a network, server, service, region, or data-center outage could also be the root cause.
AWS provides a Service Health Dashboard with health reports for each of its services, and there is also an API for checking service health programmatically.
I had the same issue and checked all the GitHub issues as well as the SO answers, without much help.
If you are also in a corporate environment, as I am, and get this error locally, it is quite possibly because you did not set up the proxy in the SDK:
import HttpsProxyAgent from 'https-proxy-agent';
import AWS from 'aws-sdk';
const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  signatureVersion: 'v4',
  credentials,
  httpOptions: { agent: new HttpsProxyAgent(process.env.https_proxy) },
});
Normally the code works in EC2/Lambda, as those environments can reach S3 directly (or through a VPC endpoint), but locally you might need the proxy in order to access the bucket URL: YourBucketName.s3.amazonaws.com
I got the same error and resolved it the following way.
Error:
Region: 'Asia Pacific (Mumbai) ap-south-1' (which I had chosen)
Solution:
For the region, instead of writing the entire display name, you should only use the region code:
Region: ap-south-1
Well, for anyone still having this issue, I managed to solve it by changing the endpoint parameter passed to AWS.config.update() from an ARN string, for example arn:aws:dynamodb:ap-southeast-<generated from AWS>:table/<tableName>, to the endpoint URL, for example https://dynamodb.aws-region.amazonaws.com, replacing aws-region with your region (in my case ap-southeast-1).
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.NodeJs.Summary.html
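A sketch of that fix, using the asker's region (swap in your own region and table):
const AWS = require('aws-sdk');
AWS.config.update({
  region: 'ap-southeast-1',
  // The endpoint must be the regional service URL, not the table ARN
  endpoint: 'https://dynamodb.ap-southeast-1.amazonaws.com',
});
const docClient = new AWS.DynamoDB.DocumentClient();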
I had the same problem. I just changed the region value in the .env file.
Previous value:
"Asia Pacific (Mumbai) ap-south-1"
Corrected value:
"ap-south-1"
I just kept trying amplify init until it worked.

Access an SQS queue using "AmazonSQSClient" without an AccessKey and SecretKey in .NET Core 2.0

How can we access an SQS queue using "AmazonSQSClient" without using an AccessKey and SecretKey? Is there any option or code sample that uses a Role instead of an AccessKey and SecretKey? We are trying to access an SQS queue in our AWS environment where the Lambda has a Role assigned that has access to SQS, and we don't allow the use of an AccessKey and SecretKey. How can this be achieved? Any ideas?
I am using a Lambda function and the AWSSDK.SQS NuGet package to work with AWS SQS queues for sending, reading and deleting messages.
If your Lambda function runs under an IAM Role that has permission to access the queue (read, write, delete = full access), then you can access the SQS queue without providing an AccessKey and SecretKey: when the client is constructed without explicit credentials, the SDK falls back to its default credential chain, which picks up the execution role automatically. I have done this in my project recently.
https://ramanisandeep.wordpress.com/2018/03/10/amazon-simple-queue-service-sqs/
Note: You need to set ProxyHost and ProxyPort if you are running your Lambda function in a restricted environment, i.e.
_sqsConfig = new AmazonSQSConfig
{
    ProxyHost = proxyHost,
    ProxyPort = proxyPort,
    ServiceURL = queueUrl,
    RegionEndpoint = RegionEndpoint.USEast1
};