Using the aws-sdk for Node.js, I am initializing SQS:
sqs = new AWS.SQS({
region: 'us-east-1',
});
In the prod environment this works great because the role running this has the required permissions to connect with SQS. However, running locally is a problem because those prod permissions are not applied to my local dev environment.
I can fix this problem by adding the keys explicitly:
sqs = new AWS.SQS({
region: 'us-east-1',
accessKeyId: MY_AWS_ACCESS_KEY_ID,
secretAccessKey: MY_AWS_SECRET_ACCESS_KEY,
});
But for local development, I don't make any requests to AWS since I'm using a localstack queue.
How can I initialize AWS.SQS so that it continues to function in prod without specifying the AWS keys for local development?
The AWS SDK and CLI read credentials from multiple locations in a well-defined order.
For example, you can create a local credentials file, and the SDK will automatically pick up credentials from it, with no changes to your original production code:
% cat ~/.aws/credentials
[default]
aws_access_key_id=AAA
aws_secret_access_key=BBB
If you have multiple environments, you can specify them in this file by name. The SDK and CLI will typically read the $AWS_PROFILE environment variable and use the specified profile from your credentials file (or [default] if the environment variable is missing).
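For example, with a second profile (the profile name, keys, and script name here are made up):
% cat ~/.aws/credentials
[default]
aws_access_key_id=AAA
aws_secret_access_key=BBB

[dev]
aws_access_key_id=CCC
aws_secret_access_key=DDD

% AWS_PROFILE=dev node app.js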
I am not using AWS resources when developing locally, so it should not be necessary to provide AWS key credentials at all.
If you want to work around this problem, you can just set bogus values.
const SQS = new AWS.SQS({
region: 'us-east-1',
accessKeyId: 'na',
secretAccessKey: 'na',
});
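Since the question mentions localstack, here is a minimal sketch that also points the client at it for local runs; the IS_LOCAL variable and localstack's default edge port (4566) are assumptions, so adjust to your setup:
const AWS = require('aws-sdk');

const sqs = new AWS.SQS({
  region: 'us-east-1',
  // Only applied when IS_LOCAL is set; in prod the SDK's default
  // provider chain picks up the role's credentials automatically.
  ...(process.env.IS_LOCAL ? {
    endpoint: 'http://localhost:4566', // localstack edge endpoint (assumed)
    accessKeyId: 'na',                 // dummy values; localstack ignores them
    secretAccessKey: 'na',
  } : {}),
});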
I have a simple Terraform root that provisions some AWS resources. It was initially set up with default local state. I use an AWS Profile to specify the target environment:
$ export AWS_PROFILE="some-aws-profile"
$ aws sts get-caller-identity
{
"UserId": "REDACTED:REDACTED",
"Account": "account_id",
"Arn": "arn:aws:sts::account:assumed-role/somerolename/someusername"
}
And I can run terraform plan or terraform apply; resources get created in the target account. provider "aws" is configured with a region parameter only; all other details and credentials are controlled via the AWS_PROFILE env var, as shown below.
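For reference, the provider block is just this (region value taken from the rest of the question):
provider "aws" {
  region = "eu-west-1"
}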
Now I am looking to move state to remote, with an S3 backend.
terraform {
backend "s3" {
bucket = "my-bucket-name"
key = "some/path/to/terraform.tfstate"
region = "eu-west-1"
}
}
When I run terraform init with this, an error is thrown: Error: error configuring S3 Backend: no valid credential sources for S3 Backend found. I have also tried adding profile = "some-aws-profile" into the s3 backend block, but it still fails the same way.
Does a terraform backend block use a different credential provider chain? Any reason why this backend config is not able to use AWS_PROFILE implicitly from the environment variable, or even when profile is added explicitly?
I don't have any credentials file that I use for auth; in my local environment, I am using aws sso login to automatically manage credentials via /cache/ subdirs in ~/.aws/sso or ~/.aws/cli. Is this the part that is not compatible with the backend?
Edit: adding a snippet from ~/.aws/config
This is what my profile looks like:
[profile some-aws-profile]
sso_start_url = https://myhostname.awsapps.com/start/#/
sso_region = eu-west-1
sso_account_id = <actual_account_id>
sso_role_name = somerolename
region = eu-west-1
output = json
To set up auth, I use aws sso login once AWS_PROFILE is set, and I authorize the request for temporary credentials, which end up wherever the CLI stores them.
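For reference, the full flow is:
$ export AWS_PROFILE="some-aws-profile"
$ aws sso login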
This was not working on Terraform 0.13.6 with the latest version of the Terraform AWS provider (4.15.1).
Upgrading to Terraform 1.2.0 resolved this: the SSO profile is now used for credential loading in the S3 backend.
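For reference, on Terraform 1.2.0+ the backend block with an explicit profile (same placeholder values as above) looks like this:
terraform {
  backend "s3" {
    bucket  = "my-bucket-name"
    key     = "some/path/to/terraform.tfstate"
    region  = "eu-west-1"
    profile = "some-aws-profile"
  }
}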
I enjoy using the AWS SDK without having to specify where to find the credentials; it makes configuration easier across multiple environments where different types of credentials are available.
The AWS SDK for Ruby searches for credentials [...]
Is there some way I can retrieve the code that does this, to configure Faraday with AWS? To configure Faraday I need something like:
faraday.request(:aws_sigv4,
service: 'es',
credentials: credentials,
region: ENV['AWS_REGION'],
)
Now I would love for these credentials to be picked up "automatically", as in the AWS SDK v3. How can I do that?
(i.e. where is the code in the AWS SDK v3 that does something like
credentials = Aws::Credentials.new(ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY'])
unless credentials.set?
credentials = Aws::InstanceProfileCredentials.new
end
...
The class Aws::CredentialProviderChain is the one in charge of resolving the credentials, but it is tagged as @api private, so there is currently no guarantee it will still be there after updates (I've opened a discussion to make it public).
If you're okay with using it, you can resolve credentials like this. I'm going to test it in CI (ENV creds), development (AWS config creds), and staging/production environments (instance profile creds).
Aws::CredentialProviderChain.new.resolve
You can use it in middleware like this (for example, configuring Elasticsearch/Faraday; es_url below stands in for your cluster endpoint):
Faraday.new(url: es_url) do |faraday|
  faraday.request(:aws_sigv4,
    service: 'es',
    credentials: Aws::CredentialProviderChain.new.resolve,
    region: ENV['AWS_REGION'],
  )
end
We are using DynamoDB Local for integration testing. It is launched inside a container, and within that container we need to connect to DynamoDB Local. Here is how the DocumentClient is initialized:
const doc = new AWS.DynamoDB.DocumentClient({
region: 'localhost',
endpoint: 'http://localhost:5000/'
});
However, when I attempt a batch write, like so: doc.batchWrite(buildSetData).promise(), the promise is never fulfilled. For those wondering, the batch write is plain JavaScript, and .promise() just returns a JS promise.
However, when I run my setup locally (outside of the docker container), everything works perfectly.
TL;DR: Why can't I connect to DynamoDB Local inside my container?
The problem was due to the docker environment not having credentials. I assumed that dynamodb-local would not need AWS credentials, and even though nothing actually connects to AWS, the SDK client still requires them (in fact, they can even be nonsensical credentials, as long as the keys are present).
TL;DR: If anyone else has this problem, just define the following keys in your docker env:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
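For example, a minimal way to pass dummy values into the container (the image name and key values are placeholders):
docker run \
  -e AWS_ACCESS_KEY_ID=fake \
  -e AWS_SECRET_ACCESS_KEY=fake \
  my-integration-test-image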
I'm trying to ListProjects in AWS Device Farm.
Here's my code:
const AWS = require('aws-sdk');
AWS.config.update({ region:'us-east-1' });
const credentials = new AWS.SharedIniFileCredentials({ profile: '***' });
AWS.config.credentials = credentials;
const devicefarm = new AWS.DeviceFarm();
async function run() {
let projects = await devicefarm.listProjects().promise();
console.log(projects);
}
run();
I'm getting this error:
UnknownEndpoint: Inaccessible host: `devicefarm.us-east-1.amazonaws.com'.
This service may not be available in the `us-east-1' region.
According to http://docs.amazonaws.cn/en_us/general/latest/gr/rande.html, Device Farm is only available in us-west-2?
Changing AWS.config.update({ region:'us-east-1' }); to AWS.config.update({ region:'us-west-2' }); worked:
Working code:
const AWS = require('aws-sdk');
AWS.config.update({ region:'us-west-2' });
const credentials = new AWS.SharedIniFileCredentials({ profile: '***' });
AWS.config.credentials = credentials;
const devicefarm = new AWS.DeviceFarm();
async function run() {
let projects = await devicefarm.listProjects().promise();
console.log(projects);
}
run();
I faced the same issue and realised that uploads were failing because my internet connection was too slow to resolve the DNS of the bucket URL. However, a slow connection is not the only possible reason for this error: a network, server, service, region, or data-center outage could also be the root cause.
AWS provides a dashboard with health reports for its services (the AWS Health Dashboard), and an API to access service health is also available.
I had the same issue and checked all the GitHub issues as well as the SO answers, without much help.
If you are in a corporate environment, as I am, and get this error locally, it is quite possibly because you did not set up the proxy in the SDK:
import HttpsProxyAgent from 'https-proxy-agent';
import AWS from 'aws-sdk';

// `credentials` comes from however you normally resolve them
// (shared credentials file, environment variables, etc.).
const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  signatureVersion: 'v4',
  credentials,
  httpOptions: { agent: new HttpsProxyAgent(process.env.https_proxy) },
});
Normally the code might work in your EC2/Lambda environment, since those have a VPC endpoint, but locally you might need a proxy in order to access the bucket URL: YourBucketName.s3.amazonaws.com
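For example (the proxy URL is a placeholder for your corporate proxy):
export https_proxy=http://proxy.example.com:8080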
I got the same error and resolved it the following way.
Error:
Region: 'Asia Pacific (Mumbai) ap-south-1' (which is what I had chosen)
Solution:
For the region, instead of writing the entire name, you should only use the region code:
Region: ap-south-1
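In the Node SDK that would be, for example:
const AWS = require('aws-sdk');
// Use the region code, not the human-readable region name.
AWS.config.update({ region: 'ap-south-1' });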
Well, for anyone still having this issue: I managed to solve it by changing the endpoint parameter passed to AWS.config.update() from an ARN string (e.g. arn:aws:dynamodb:ap-southeast-<generated from AWS>:table/<tableName>) to the service URL (e.g. https://dynamodb.aws-region.amazonaws.com, replacing aws-region with your region, in my case ap-southeast-1).
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.NodeJs.Summary.html
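As a sketch (assuming DynamoDB in ap-southeast-1):
const AWS = require('aws-sdk');
AWS.config.update({
  region: 'ap-southeast-1',
  // The endpoint must be the service URL, not a table ARN.
  endpoint: 'https://dynamodb.ap-southeast-1.amazonaws.com',
});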
I had the same problem. I just changed the region value in the .env file.
Previous value:
"Asia Pacific (Mumbai) ap-south-1"
Corrected value:
"ap-south-1"
I just kept trying amplify init until it worked.
I'm having a problem with my AWS credentials. I created the credentials file at ~/.aws/credentials just as described in the AWS docs. However, Apache just can't read it.
First, I was getting this error:
Error retrieving credentials from the instance profile metadata server. When you are not running inside of Amazon EC2, you must provide your AWS access key ID and secret access key in the "key" and "secret" options when creating a client or provide an instantiated Aws\Common\Credentials CredentialsInterface object.
Then I tried some solutions that I found on the internet. For example, I checked my HOME variable; it was /home/ubuntu. I also tried moving my credentials file to the /var/www directory, even though it is not my web server's directory. Nothing worked; I was still getting the same error.
As a second solution, I saw that we could call the CredentialProvider directly and indicate the path on the client:
https://forums.aws.amazon.com/thread.jspa?messageID=583216
The error changed but I couldn't make it work:
Cannot read credentials from /.aws/credentials
I also saw that we could use the default provider of the CredentialProvider instead of indicating a path:
http://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/credentials.html#using-credentials-from-environment-variables
I tried that and kept getting the same error:
Cannot read credentials from /.aws/credentials
Just in case you need this information, I'm using aws/aws-sdk-php (3.2.5). The service I'm trying to use is the AWS Elastic Transcoder. My EC2 instance is an Ubuntu 14.04. It runs a Symfony application deployed using Capifony.
Before trying it on this production server, I tried it on a development server, where it works perfectly with just the ~/.aws/credentials file. This development server is an exact copy of the production server. However, it doesn't use Capifony for deployment; it is just a normal git clone of the project. And it has only one EBS volume, while the production server has one for the OS and one for the application.
Ah! And I also checked whether the permissions/owners of the credentials file were the same on both servers, and they are. I even tried 777 to see if it would change anything, but nothing.
Does anybody have an idea?
It sounds like you're doing it wrong. You do not need to deploy credentials to an EC2 instance in order to have that instance interact with other AWS services, and in fact should never deploy credentials to an EC2 instance.
Instead, when you create your instance, you associate an IAM role with it. That role has policies that control access to the other AWS services.
You can create an empty role, launch the instance, and then modify the role later. At the time this was written, you could not assign a role to an instance after it had been launched.
Update: you can now add a role to an instance after it has been launched.
It is still considered a best practice not to deploy actual credentials to an EC2 instance.
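For example, attaching a role to a running instance with the CLI (the instance ID and profile name are placeholders):
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=my-instance-profile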
In case this helps someone, I managed to make my .ini file work this way:
use Aws\Credentials\CredentialProvider;
use Aws\ElasticTranscoder\ElasticTranscoderClient;

$profile = 'default';
$path = '/mnt/app/www/.aws/credentials/default.ini';
$provider = CredentialProvider::ini($profile, $path);
$provider = CredentialProvider::memoize($provider);

$client = ElasticTranscoderClient::factory(array(
    'region' => 'eu-west-1',
    'version' => '2012-09-25',
    'credentials' => $provider,
));
The CredentialProvider is explained on this doc:
http://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/credentials.html#ini-provider
I still don't understand why my application can't read the file in the home directory (~/.aws/credentials/default.ini) on one server, while on the other it can.
If someone knows something about it, please let me know.
The SDK reads from a file located at ~/.aws/credentials, but it looks like you're saving a file at ~/.aws/credentials/default.ini. If you move the file, the error you were experiencing should be cleared up.
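For example, assuming you want to keep the file's contents (paths as in the answer above):
# The SDK expects ~/.aws/credentials to be a flat file, not a directory
mv ~/.aws/credentials/default.ini /tmp/aws-credentials
rmdir ~/.aws/credentials
mv /tmp/aws-credentials ~/.aws/credentials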
Two ways of solving this problem in Node.js.
The first will get the credentials from /home/{USER}/.aws/credentials using the default profile:
const aws = require('aws-sdk');
aws.config.credentials = new aws.SharedIniFileCredentials({ profile: 'default' });
...
The hardcoded way:
const lambda = new aws.Lambda({
  region: 'us-east-1',
  accessKeyId: <KEY>,
  secretAccessKey: <KEY>,
});