I need to upload some files to S3 from a Next.js application. Since it is server-side, I am under the impression that simply setting environment variables should work, but it doesn't. I know there are alternatives like assigning a role to EC2, but I want to use an access key ID and secret access key.
This is my next.config.js
module.exports = {
  env: {
    //..others
    AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID
  },
  serverRuntimeConfig: {
    //..others
    AWS_SECRET_ACCESS_KEY: process.env.AWS_SECRET_ACCESS_KEY
  }
}
This is my config/index.js
export default {
  //...others
  awsClientID: process.env.AWS_ACCESS_KEY_ID,
  awsClientSecret: process.env.AWS_SECRET_ACCESS_KEY
}
This is how I use it in my code:
import AWS from 'aws-sdk'
import config from '../config'

AWS.config.update({
  accessKeyId: config.awsClientID,
  secretAccessKey: config.awsClientSecret,
});

const S3 = new AWS.S3()

const params = {
  Bucket: "bucketName",
  Key: "some key",
  Body: fileObject,
  ContentType: fileObject.type,
  ACL: 'public-read'
}

await S3.upload(params).promise()
I am getting this error:
Unhandled Rejection (CredentialsError): Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
If I hard code the credentials in code, it works fine.
How can I make it work correctly?
It looks like the Vercel docs are currently outdated (they still show AWS SDK v2 instead of v3). You can pass a credentials object to the AWS service client when you instantiate it. Use environment variable names that are not reserved, for example by prefixing them with the name of your app.
.env.local
YOUR_APP_AWS_ACCESS_KEY_ID=[your key]
YOUR_APP_AWS_SECRET_ACCESS_KEY=[your secret]
Add these env variables to your Vercel deployment settings (or Netlify, etc) and pass them in when you start up your AWS service client.
import { S3Client } from '@aws-sdk/client-s3'

...

const s3 = new S3Client({
  region: 'us-east-1',
  credentials: {
    accessKeyId: process.env.YOUR_APP_AWS_ACCESS_KEY_ID ?? '',
    secretAccessKey: process.env.YOUR_APP_AWS_SECRET_ACCESS_KEY ?? '',
  },
})
(note: the ?? '' fallback covers the possibly-undefined values so TypeScript stays happy)
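As a rough sketch (not part of the original answer), the v2 S3.upload() call from the question would then become a PutObjectCommand sent through this client; the bucket name, key, and fileObject are assumed from the question:

import { PutObjectCommand } from '@aws-sdk/client-s3'

// Reuses the s3 client created above; fileObject is the file object from the question.
await s3.send(new PutObjectCommand({
  Bucket: 'bucketName',
  Key: 'some key',
  Body: fileObject,
  ContentType: fileObject.type,
  ACL: 'public-read',
}))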
Are you possibly hosting this app via Vercel?
As per the Vercel docs, some env variables are reserved by Vercel.
https://vercel.com/docs/concepts/projects/environment-variables#reserved-environment-variables
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Maybe that's the reason why it is not picking up those env vars.
I was able to work around this by adding my own custom env variables to .env.local and then reading those variables:
AWS.config.update({
  region: 'us-east-1',
  credentials: {
    accessKeyId: process.env.MY_AWS_ACCESS_KEY,
    secretAccessKey: process.env.MY_AWS_SECRET_KEY
  }
});
As a last step, you need to add these variables in the Vercel UI as well.
Obviously this is not an ideal solution and it is not recommended by AWS.
https://vercel.com/support/articles/how-can-i-use-aws-sdk-environment-variables-on-vercel
If I'm not mistaken, you want to make AWS_ACCESS_KEY_ID a runtime variable as well. Currently it is a build-time variable, which won't be accessible in your Node application.
// replace this
env: {
  //..others
  AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID
},

// with this
module.exports = {
  serverRuntimeConfig: {
    //..others
    AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID
  }
}
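On the consuming side, serverRuntimeConfig values are read through next/config rather than process.env. A minimal sketch, assuming both keys are moved into serverRuntimeConfig:

import getConfig from 'next/config'
import AWS from 'aws-sdk'

// serverRuntimeConfig is only populated on the server, which is where the upload runs.
const { serverRuntimeConfig } = getConfig()

AWS.config.update({
  accessKeyId: serverRuntimeConfig.AWS_ACCESS_KEY_ID,
  secretAccessKey: serverRuntimeConfig.AWS_SECRET_ACCESS_KEY,
})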
Reference: https://nextjs.org/docs/api-reference/next.config.js/environment-variables
Related
I'm learning how to use the AWS CDK. Here is my code. When I run "cdk deploy --profile myProfile" I get "Unable to resolve AWS account to use. It must be either configured when you define your CDK or through the environment",
but I am already specifying my credentials and region as shown below. Can anyone help me with that?
cdk doctor
ℹ️ CDK Version: 1.30.0 (build 4f54ff7)
ℹ️ AWS environment variables:
- AWS_PROFILE = myProfile
- AWS_SDK_LOAD_CONFIG = 1
ℹ️ CDK environment variables:
- CDK_DEPLOY_ACCOUNT = 096938481488
- CDK_DEPLOY_REGION = us-west-2
aws configure --profile myProfile
AWS Access Key ID [****************6LNQ]:
AWS Secret Access Key [****************d9iz]:
Default region name [us-west-2]:
Default output format [None]:
import core = require('@aws-cdk/core');
import dynamodb = require('@aws-cdk/aws-dynamodb');
import { AttributeType } from '@aws-cdk/aws-dynamodb';
import { App, Construct, Stack } from '@aws-cdk/core';
export class HelloCdkStack extends core.Stack {
  constructor(scope: core.App, id: string, props?: core.StackProps) {
    super(scope, id, props);

    new dynamodb.Table(this, 'MyFirstTable', {
      tableName: 'myTable1',
      partitionKey: {
        name: 'MyPartitionkey',
        type: AttributeType.NUMBER
      }
    });
  }
}

const app = new App();
new HelloCdkStack(app, 'first-stack-us', { env: { account: '***', region: 'us-west-2' } });
app.synth();
It looks like the bug described in [master] CDK CLI Authentication Issues #1656.
If you have both ~/.aws/credentials and ~/.aws/config, they can't both contain a default profile section.
cli: cdk deploy issue #3340
Removing [profile default] from ~/.aws/config solved the issue! I had both [default] and [profile default]. Please see #1656.
I resolved the issue by putting the AWS keys in the "config" file inside the ~/.aws folder, and not inside the "credentials" file.
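To illustrate the layout these comments are referring to (placeholder values, not anyone's real keys): the credentials file uses bare profile names, while the config file uses [default] for the default profile and [profile <name>] only for named profiles; an extra [profile default] section alongside [default] in credentials is what triggers the problem.

~/.aws/credentials
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

[myProfile]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

~/.aws/config
[default]
region = us-west-2

[profile myProfile]
region = us-west-2
output = json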
I am new to AWS technology and I need to complete the above assignment, so I want to know the procedure for completing this task. What prerequisite software and tools do I need?
Please suggest a simple approach. Thanks in advance, and happy coding.
You have to configure your AWS credentials before using AWS with the aws-sdk npm package, using the method below:
import AWS from "aws-sdk";

const s3 = new AWS.S3({
  accessKeyId: YOUR_ACCESS_KEY_ID_OF_AMAZON,
  secretAccessKey: YOUR_AMAZON_SECRET_ACCESS_KEY,
  signatureVersion: "v4",
  region: YOUR_AMAZON_REGION // e.g. "us-east-1"
});

export { s3 };
Then call s3 and make the upload request using the code below:
const uploadReq: any = {
  Bucket: "YOUR_BUCKET",
  Key: "FILE_NAME",
  Body: "FILE_STREAM",
  ACL: "public-read", // accessible from remote locations
};
await new Promise((resolve, reject) => {
  s3.upload(uploadReq).send(async (err: any, data: any) => {
    if (err) {
      console.log("err", err);
      reject(err);
    } else {
      // database call
      resolve("STATUS");
    }
  });
});
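If wrapping the callback in a Promise feels verbose, the v2 SDK's upload also exposes .promise() directly (the same approach used in the original question). A sketch using the same uploadReq object:

// Equivalent upload using the built-in promise support of aws-sdk v2.
const data = await s3.upload(uploadReq).promise();
console.log("uploaded to", data.Location); // Location is the URL of the uploaded object
// ...database call with data if needed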
Is there a way to set up multiple AWS configurations in a Rails environment? For example:
Aws account #1
Aws.config.update({
  region: aws_config[:region_1],
  credentials: Aws::Credentials.new(aws_config[:access_key_id_1],
                                    aws_config[:secret_access_key_1])
})
Aws account #2
Aws.config.update({
  region: aws_config[:region_2],
  credentials: Aws::Credentials.new(aws_config[:access_key_id_2],
                                    aws_config[:secret_access_key_2])
})
I tried setting the second configuration manually in the model, but it doesn't log in with the new credentials for some reason:
foo = Aws::S3::Client.new(
  region: aws_config[:region_2],
  access_key_id: aws_config[:access_key_id_2],
  secret_access_key: aws_config[:secret_access_key_2]
)

bar = foo.list_objects_v2(bucket: aws_config[:bucket_2])
I'm using the AWS Elasticsearch service and have written a Node.js wrapper (which runs within a dockerized ECS container). IAM roles are used to create a singleton connection with Elasticsearch.
I'm trying to refresh my session token by checking AWS.config.credentials.needsRefresh() before every request; however, it always returns true, even after the session has expired. Obviously AWS complains with a 403 error. Any ideas will be greatly appreciated.
var AWS = require('aws-sdk');
var config = require('config');
var connectionClass = require('http-aws-es');
var elasticsearch = require('elasticsearch');

AWS.config.getCredentials(function() {
  AWS.config.update({
    credentials: new AWS.Credentials(
      AWS.config.credentials.accessKeyId,
      AWS.config.credentials.secretAccessKey,
      AWS.config.credentials.sessionToken
    ),
    region: 'us-east-1'
  });
});

var client = new elasticsearch.Client({
  host: `${config.get('elasticSearch.host')}`,
  log: 'debug',
  connectionClass: connectionClass,
  amazonES: {
    credentials: new AWS.EnvironmentCredentials('AWS')
  }
});

module.exports = client;
I am attempting to create an EC2 instance and then add it to my Auto Scaling group. I am having a lot of issues trying to authenticate. I am looking for a simple way to authenticate a request using my access key to simply start an instance. What I have tried so far:
//Authenticate AWS:
var myCredentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'us-west-2:IdentityPoolID'
});
var myConfig = new AWS.Config({
  credentials: myCredentials,
  region: 'us-west-2'
});
AWS.config = myConfig;

var minInst = 1;
var maxInst = 3;
var ec2 = new AWS.EC2();

//Set up parameters for EC2 Instances:
var params = {
  ImageId: 'ami-6e1a0117',
  MaxCount: maxInst,
  MinCount: minInst,
  InstanceInitiatedShutdownBehavior: 'terminate',
  InstanceType: 't2.micro',
  Monitoring: {
    Enabled: false
  },
  NetworkInterfaces: [{
    AssociatePublicIpAddress: true,
    DeleteOnTermination: true,
  }],
  Placement: {
    AvailabilityZone: 'us-west-2',
  },
  SecurityGroupIds: [
    'sg-b0307ccd',
  ],
  SecurityGroups: [
    'CAB432Assignment2SG',
  ],
};

ec2.runInstances(params, function(err, data) {
  if (err) {
    console.log(err, err.stack); // An error occurred
  } else {
    console.log(data); // Successful response
  }
});
I know this code is wrong. I just don't know how to fix it. The error I get is:
CredentialsError: Missing credentials in config
Any help would be greatly appreciated.
Delete this section of code entirely:
//Authenticate AWS:
var myCredentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'us-west-2:IdentityPoolID'
});
var myConfig = new AWS.Config({
  credentials: myCredentials,
  region: 'us-west-2'
});
AWS.config = myConfig;
Change this:
var ec2 = new AWS.EC2();
To this:
var ec2 = new AWS.EC2({region: 'us-west-2'});
Then go read this page in the Setting Credentials in Node.js documentation. In particular, you need to do one of the following:
Add an IAM role to your EC2 instance, if this is running on EC2.
Add an IAM execution role to your Lambda function if this is running on Lambda.
Create a ~/.aws/credentials file with your keys. This can be done with the aws configure command if you have the AWS CLI installed.
Set the keys as environment variables (see the sketch after this list).
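A quick sketch of those last two options, with placeholder values rather than real keys:

~/.aws/credentials
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

Or, as environment variables exported before starting the Node process:

export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY

With either of these in place, new AWS.EC2({region: 'us-west-2'}) picks up the credentials automatically through the SDK's default credential provider chain.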