Is there a way to set up multiple AWS configurations in a Rails environment? For example:
AWS account #1:
Aws.config.update({
  region: aws_config[:region_1],
  credentials: Aws::Credentials.new(aws_config[:access_key_id_1], aws_config[:secret_access_key_1])
})
AWS account #2:
Aws.config.update({
  region: aws_config[:region_2],
  credentials: Aws::Credentials.new(aws_config[:access_key_id_2], aws_config[:secret_access_key_2])
})
I tried setting the second configuration manually in the model, but it doesn't authenticate with the new credentials for some reason:
foo = Aws::S3::Client.new(
  region: aws_config[:region_2],
  access_key_id: aws_config[:access_key_id_2],
  secret_access_key: aws_config[:secret_access_key_2]
)
bar = foo.list_objects_v2(bucket: aws_config[:bucket_2])
How can I pass the Cognito user's idToken to Amplify.configure()? The users have to be authenticated, not guest users. So instead of getting an unauthenticated identityId, how do I get an authenticated ID by setting it in the AWS Amplify configuration?
For sign-in and sign-up we are not using Amplify; instead we are using a REST API.
Currently I'm passing the following object to configure:
{
  Auth: {
    identityPoolId: const.identityPoolId,
    IdentityId: 'us-east-1:xxxxxxxxxxxxxxxxxxxyyyyyyzzzzzz',
    region: 'us-east-1'
  },
  Analytics: {
    disabled: false,
    autoSessionRecord: true,
    AWSPinpoint: {
      appId: PinpointId,
      region: 'us-east-1',
      bufferSize: 1000,
      flushInterval: 5000,
      flushSize: 100,
      resendLimit: 3
    }
  }
}
Any help is highly appreciated. Thanks
I need to upload some files to S3 from a Next.js application. Since it runs server-side, I am under the impression that simply setting environment variables should work, but it doesn't. I know there are other alternatives, like assigning a role to EC2, but I want to use the access key ID and secret key.
This is my next.config.js
module.exports = {
  env: {
    //..others
    AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID
  },
  serverRuntimeConfig: {
    //..others
    AWS_SECRET_ACCESS_KEY: process.env.AWS_SECRET_ACCESS_KEY
  }
}
This is my config/index.js
export default {
  //...others
  awsClientID: process.env.AWS_ACCESS_KEY_ID,
  awsClientSecret: process.env.AWS_SECRET_ACCESS_KEY
}
This is how I use it in my code:
import AWS from 'aws-sdk'
import config from '../config'

AWS.config.update({
  accessKeyId: config.awsClientID,
  secretAccessKey: config.awsClientSecret,
});

const S3 = new AWS.S3()
const params = {
  Bucket: "bucketName",
  Key: "some key",
  Body: fileObject,
  ContentType: fileObject.type,
  ACL: 'public-read'
}

await S3.upload(params).promise()
I am getting this error:
Unhandled Rejection (CredentialsError): Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
If I hard code the credentials in code, it works fine.
How can I make it work correctly?
Looks like the Vercel docs are currently outdated (AWS SDK v2 instead of v3). You can pass the credentials object to the AWS service client when you instantiate it. Use environment variable names that are not reserved, for example by prefixing them with the name of your app.
.env.local
YOUR_APP_AWS_ACCESS_KEY_ID=[your key]
YOUR_APP_AWS_SECRET_ACCESS_KEY=[your secret]
Add these env variables to your Vercel deployment settings (or Netlify, etc) and pass them in when you start up your AWS service client.
import { S3Client } from '@aws-sdk/client-s3'
...
const s3 = new S3Client({
  region: 'us-east-1',
  credentials: {
    accessKeyId: process.env.YOUR_APP_AWS_ACCESS_KEY_ID ?? '',
    secretAccessKey: process.env.YOUR_APP_AWS_SECRET_ACCESS_KEY ?? '',
  },
})
(note: the undefined check keeps TypeScript happy)
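To round this out, here is a minimal sketch (not from the original answer) of performing the upload from the question with that v3 client; the bucket name, key, and fileObject are placeholders mirroring the v2 example above:

import { PutObjectCommand } from '@aws-sdk/client-s3'

// Sketch: same upload as the v2 S3.upload(params) call, using the v3 client
await s3.send(new PutObjectCommand({
  Bucket: 'bucketName',        // placeholder bucket name
  Key: 'some key',             // placeholder object key
  Body: fileObject,
  ContentType: fileObject.type,
  ACL: 'public-read',
}))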
Are you possibly hosting this app via Vercel?
As per the Vercel docs, some env variables are reserved by Vercel:
https://vercel.com/docs/concepts/projects/environment-variables#reserved-environment-variables
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Maybe that's the reason why it is not picking up those env vars.
I was able to work around this by adding my own custom env variables to .env.local and then referencing those variables:
AWS.config.update({
  region: 'us-east-1',
  credentials: {
    accessKeyId: process.env.MY_AWS_ACCESS_KEY,
    secretAccessKey: process.env.MY_AWS_SECRET_KEY
  }
});
As a last step, you would need to add these in the Vercel UI.
Obviously this is not an ideal solution and it is not recommended by AWS:
https://vercel.com/support/articles/how-can-i-use-aws-sdk-environment-variables-on-vercel
If I'm not mistaken, you want to make AWS_ACCESS_KEY_ID a runtime variable as well. Currently it is a build-time variable, which won't be accessible in your Node application.
// replace this
env: {
  //..others
  AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID
},

// with this
module.exports = {
  serverRuntimeConfig: {
    //..others
    AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID
  }
}
Reference: https://nextjs.org/docs/api-reference/next.config.js/environment-variables
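One caveat, if I'm reading the Next.js docs right: values placed in serverRuntimeConfig are not exposed on process.env; on the server they are read through next/config. A minimal sketch of what that would look like for the code in the question:

import getConfig from 'next/config'
import AWS from 'aws-sdk'

// serverRuntimeConfig is only available in server-side code
const { serverRuntimeConfig } = getConfig()

AWS.config.update({
  accessKeyId: serverRuntimeConfig.AWS_ACCESS_KEY_ID,
  secretAccessKey: serverRuntimeConfig.AWS_SECRET_ACCESS_KEY,
})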
In the Serverless Framework, I want to set the deployment bucket as
<project_name>-<stage>-<account_id>
I can get the stage using a custom variable, like:
custom:
  stage: ${opt:stage, self:provider.stage}
but how can I get the AWS account ID? I already tried to use serverless-pseudo-parameters, as shown below, without success.
custom:
  account_id: #{AWS::AccountId}

plugins:
  - serverless-pseudo-parameters
Could someone help me set the account ID as a custom variable?
According to the documentation, to get the account ID you can use an external JS file:
// myCustomFile.js
module.exports.getAccountId = async (context) => {
  return context.providers.aws.getAccountId();
};
# serverless.yml
service: new-service
provider: aws

custom:
  accountId: ${file(../myCustomFile.js):getAccountId}
For anyone using Serverless with an "assumed role", where your IAM users are defined in a master AWS account and you're trying to deploy into a child account using a role from that child account: the documented solution (the one in the accepted answer above) does not work.
This setup is described in detail here: https://theithollow.com/2018/04/30/manage-multiple-aws-accounts-with-role-switching/. When using Serverless with an --aws-profile that's configured to assume a role defined in another account, sts.getCallerIdentity() returns the account info of your master account from the default profile, and not the account of the assumed role.
To get the account ID of the assumed role (which is where we're deploying to), I did the following:
const { STS } = require('aws-sdk');

module.exports.getAccountId = async (context) => {
  // This loads the AWS credentials Serverless is currently using
  // They contain the role ARN of the assumed role
  const credentials = context.providers.aws.getCredentials();
  // init STS using the same credentials
  const sts = new STS(credentials);
  const identity = await sts.getCallerIdentity().promise();
  return identity.Account;
};
Edit:
Found an even better way, which is simpler than the one presented in the Serverless docs and also works fine with assumed roles:
module.exports.getAccountId = async (context) => {
  return context.providers.aws.getAccountId();
};
You should be able to access them as per the example below: https://serverless.com/framework/docs/providers/aws/guide/variables/
Resources:
  - 'Fn::Join':
      - ':'
      - - 'arn:aws:logs'
        - Ref: 'AWS::Region'
        - Ref: 'AWS::AccountId'
        - 'log-group:/aws/lambda/*:*:*'
It seems like your syntax is wrong. Try
custom:
  account_id: ${AWS::AccountId}
Because, at least in the example you provided, you are using #{AWS::AccountId}. Notice the hashtag in yours?
I'm using the AWS Elasticsearch service and have written a Node.js wrapper (it runs within an ECS dockerized container). IAM roles are used to create a singleton connection with Elasticsearch.
I'm trying to refresh my session token by checking AWS.config.credentials.needsRefresh() before every request; however, it always returns true even after the session has expired. Obviously AWS is complaining with a 403 error. Any ideas will be greatly appreciated.
var AWS = require('aws-sdk');
var config = require('config');
var connectionClass = require('http-aws-es');
var elasticsearch = require('elasticsearch');

AWS.config.getCredentials(function() {
  AWS.config.update({
    credentials: new AWS.Credentials(
      AWS.config.credentials.accessKeyId,
      AWS.config.credentials.secretAccessKey,
      AWS.config.credentials.sessionToken
    ),
    region: 'us-east-1'
  });
});
var client = new elasticsearch.Client({
  host: `${config.get('elasticSearch.host')}`,
  log: 'debug',
  connectionClass: connectionClass,
  amazonES: {
    credentials: new AWS.EnvironmentCredentials('AWS')
  }
});
module.exports = client;
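As a rough illustration of the refresh API being discussed (a sketch only, not a tested fix, and freshCredentials is a hypothetical helper name): copying the resolved keys into a static AWS.Credentials object freezes the session token, whereas in SDK v2 AWS.config.credentials.getPromise() resolves the credentials and refreshes them when they have expired, so it can be awaited before each request instead.

var AWS = require('aws-sdk');

// Sketch: resolve the credentials from the provider chain and let the SDK
// refresh them if they have expired, rather than relying on a static copy
// whose session token never changes.
async function freshCredentials() {
  await AWS.config.credentials.getPromise();
  return AWS.config.credentials;
}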
While following the AWS docs for setting up DynamoDB in my front-end project, with the settings taken from the docs, the API throws:
Error: Missing region in config
at constructor.<anonymous> (aws-sdk-2.129.0.min.js:42)
at constructor.callListeners (aws-sdk-2.129.0.min.js:44)
at i (aws-sdk-2.129.0.min.js:44)
at aws-sdk-2.129.0.min.js:42
at t (aws-sdk-2.129.0.min.js:41)
at constructor.getCredentials (aws-sdk-2.129.0.min.js:41)
at constructor.<anonymous> (aws-sdk-2.129.0.min.js:42)
at constructor.callListeners (aws-sdk-2.129.0.min.js:44)
at constructor.emit (aws-sdk-2.129.0.min.js:44)
at constructor.emitEvent (aws-sdk-2.129.0.min.js:43)
My settings:
<script src="https://sdk.amazonaws.com/js/aws-sdk-2.129.0.min.js"></script>
<script>
  var myCredentials = new AWS.CognitoIdentityCredentials({IdentityPoolId: 'eu-west-1_XXXXXX'});
  var myConfig = new AWS.Config({
    credentials: myCredentials, region: 'eu-west-1',
  });
  console.log(myConfig.region); // logs 'eu-west-1'

  var dynamodb = new AWS.DynamoDB({apiVersion: '2012-08-10'});
  dynamodb.listTables({Limit: 10}, function(err, data) {
    if (err) {
      console.log(err);
    } else {
      console.log("Table names are ", data.TableNames);
    }
  });
</script>
What am I missing?
Looks like you're newing up a local AWS.Config object instead of updating the global AWS.config, so the DynamoDB client never sees your region.
Change the line
var myConfig = new AWS.Config({
  credentials: myCredentials, region: 'eu-west-1',
});

to

AWS.config.update({
  credentials: myCredentials, region: 'eu-west-1',
});
Reference:
http://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/setting-region.html
Hope it helps.
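Alternatively (not part of the original answer, just a sketch of the same SDK v2 options), the region and credentials can be handed straight to the service constructor, which avoids touching the global config at all:

// Sketch: per-client configuration instead of AWS.config.update
var dynamodb = new AWS.DynamoDB({
  apiVersion: '2012-08-10',
  region: 'eu-west-1',
  credentials: myCredentials  // the AWS.CognitoIdentityCredentials from above
});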
For others hitting the same issue, where the docs mention:
If you have not yet created one, create an identity pool...
and you get forwarded to the Amazon Cognito service, choose the Manage Federated Identities option, not the Manage User Pools option.