I am working on my AWS cert and I'm trying to figure out how the following bit of js code works:
var AWS = require('aws-sdk');
var uuid = require('node-uuid');

// Create an S3 client
var s3 = new AWS.S3();

// Create a bucket and upload something into it
var bucketName = 'node-sdk-sample-' + uuid.v4();
var keyName = 'hello_world.txt';

s3.createBucket({Bucket: bucketName}, function() {
    var params = {Bucket: bucketName, Key: keyName, Body: 'Hello'};
    s3.putObject(params, function(err, data) {
        if (err)
            console.log(err)
        else
            console.log("Successfully uploaded data to " + bucketName + "/" + keyName);
    });
});
This code successfully uploads a .txt file containing the word "Hello". What I do not understand is how this code can identify MY AWS account. It does! It somehow determines that I want the new bucket created inside MY account, yet the code was taken directly from the AWS docs. How does it figure that out?
As per Class: AWS.CredentialProviderChain, the AWS SDK for JavaScript looks for credentials in the following locations, in this order:
AWS.CredentialProviderChain.defaultProviders = [
    function () { return new AWS.EnvironmentCredentials('AWS'); },
    function () { return new AWS.EnvironmentCredentials('AMAZON'); },
    function () { return new AWS.SharedIniFileCredentials(); },
    function () {
        // Use ECS credentials when AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set,
        // otherwise fall back to the EC2 instance metadata service
        if (process.env.AWS_CONTAINER_CREDENTIALS_RELATIVE_URI) {
            return new AWS.ECSCredentials();
        }
        return new AWS.EC2MetadataCredentials();
    }
]
1. Environment variables (useful for testing, or when running code on a local computer)
2. Local credentials file (useful for running code on a local computer)
3. ECS credentials (useful when running code in Elastic Container Service)
4. Amazon EC2 metadata (useful when running code on an Amazon EC2 instance)
It is highly recommended never to store credentials within an application. If the code is running on an Amazon EC2 instance and a role has been assigned to the instance, the SDK will automatically retrieve credentials from the instance metadata.
The next best method is to store credentials in the ~/.aws/credentials file.
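For reference, a minimal ~/.aws/credentials file looks like this (the keys below are AWS's documented placeholder values, not real credentials; the SDK reads the [default] profile unless AWS_PROFILE selects another):

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY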
Related
I am trying to upload a file to S3 using an AWS assumed role. When I try to access it from the CLI it works fine, but from the .NET SDK it gives me an Access Denied error.
Here are the steps I followed in the CLI:
1. Set up the access key / secret key for the user using aws configure.
2. Assume the role: aws sts assume-role --role-arn "arn:aws:iam::1010101010:role/Test-Account-Role" --role-session-name AWSCLI-Session
3. Take the access key / secret key / session token from the assumed role and set up an AWS profile; the credentials are printed out / returned by the assume-role call (see the snippet after this list).
4. Switch to the assumed-role profile: set AWS_PROFILE=<profile-name>
5. Verify that the user now has the role: aws sts get-caller-identity
6. Access the bucket using an ls, cp, or rm command - works successfully.
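For example, step 3 can be scripted with aws configure set; the profile name assumed-role is just an illustration here, and the three values come from the assume-role output:

aws configure set aws_access_key_id <AccessKeyId> --profile assumed-role
aws configure set aws_secret_access_key <SecretAccessKey> --profile assumed-role
aws configure set aws_session_token <SessionToken> --profile assumed-role
set AWS_PROFILE=assumed-role
aws sts get-caller-identity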
Now I am trying to access it from a .NET Core app.
Here is the code snippet. Note that I am using the same access key and secret key as in the CLI, from my local machine.
try
{
    var region = RegionEndpoint.GetBySystemName(awsRegion);
    SessionAWSCredentials tempCredentials = await GetTemporaryCredentialsAsync(awsAccessKey, awsSecretKey, region, roleARN);
    // Use the temp credentials received to create the new client
    IAmazonS3 client = new AmazonS3Client(tempCredentials, region);
    TransferUtility utility = new TransferUtility(client);
    // making a TransferUtilityUploadRequest instance
    TransferUtilityUploadRequest request = new TransferUtilityUploadRequest
    {
        BucketName = bucketName,
        Key = $"{subFolder}/{fileName}",
        FilePath = localFilePath
    };
    utility.Upload(request); // transfer
    fileUploadedSuccessfully = true;
}
catch (AmazonS3Exception ex)
{
    // HandleException
}
catch (Exception ex)
{
    // HandleException
}
The method to get the temp credentials is as follows - GetTemporaryCredentialsAsync:
private static async Task<SessionAWSCredentials> GetTemporaryCredentialsAsync(string awsAccessKey, string awsSecretKey, RegionEndpoint region, string roleARN)
{
    using (var stsClient = new AmazonSecurityTokenServiceClient(awsAccessKey, awsSecretKey, region))
    {
        var getSessionTokenRequest = new GetSessionTokenRequest
        {
            DurationSeconds = 7200
        };
        await stsClient.AssumeRoleAsync(
            new AssumeRoleRequest()
            {
                RoleArn = roleARN,
                RoleSessionName = "mySession"
            });
        GetSessionTokenResponse sessionTokenResponse =
            await stsClient.GetSessionTokenAsync(getSessionTokenRequest);
        Credentials credentials = sessionTokenResponse.Credentials;
        var sessionCredentials =
            new SessionAWSCredentials(credentials.AccessKeyId,
                credentials.SecretAccessKey,
                credentials.SessionToken);
        return sessionCredentials;
    }
}
I am getting back the temp credentials, but it gives me Access Denied while uploading the file. I am not sure if I am missing anything here.
I also noticed that the token generated via the SDK is shorter than the one from the CLI. I tried pasting these temp credentials into a local profile and then tried to access the bucket, and I get the Access Denied error then too.
There is an AWS .NET v3 example that shows this exact use case. To assume a role, you use an AmazonSecurityTokenServiceClient; in that example, the user assumes a role that allows it to list all S3 buckets. Note also that the snippet above calls AssumeRoleAsync but discards its response and instead returns credentials from GetSessionTokenAsync; those session credentials carry the base IAM user's permissions, not the role's, which would explain the Access Denied. See the .NET scenario here:
https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/dotnetv3/IAM/IAM_Basics_Scenario/IAM_Basics_Scenario/IAM_Basics.cs
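As a rough sketch of that approach (the method name GetAssumedRoleCredentialsAsync is hypothetical, the parameters mirror the question's code, and this is not the linked sample verbatim), the session credentials come from the AssumeRole response itself:

using System.Threading.Tasks;
using Amazon;
using Amazon.Runtime;
using Amazon.SecurityToken;
using Amazon.SecurityToken.Model;

private static async Task<SessionAWSCredentials> GetAssumedRoleCredentialsAsync(
    string awsAccessKey, string awsSecretKey, RegionEndpoint region, string roleARN)
{
    using (var stsClient = new AmazonSecurityTokenServiceClient(awsAccessKey, awsSecretKey, region))
    {
        var response = await stsClient.AssumeRoleAsync(new AssumeRoleRequest
        {
            RoleArn = roleARN,
            RoleSessionName = "mySession"
        });
        // The role's temporary credentials are in response.Credentials
        var c = response.Credentials;
        return new SessionAWSCredentials(c.AccessKeyId, c.SecretAccessKey, c.SessionToken);
    }
}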
I'm learning Cloud Foundry and trying to get my Node.js app to access my AWS S3 service. I've bound my AWS S3 service to the app (in manifest.yml, under applications/services). In code, I can get the credentials using cfenv, but how do I supply them to AWS?
var cfenv = require("cfenv");
var appEnv = cfenv.getAppEnv();
var my_s3_service = appEnv.getService('my-s3-service');
/* my_s3_service.credentials = {
     "api_key": "(redacted)",
     "bucket": "(redacted)",
     "endpoint": "s3-eu-west-1.amazonaws.com",
     "location_constraint": "eu-west-1",
     "secret_key": "(redacted)",
     "uri": "s3://(redacted):(redacted)@s3-eu-west-1.amazonaws.com/(redacted)"
   } */
var AWS = require('aws-sdk');
AWS.config.update(Uhhh... something with my_s3_service.credentials... but what?);
const s3 = new AWS.S3();
s3.getObject({
    Bucket: my_s3_service.credentials.bucket,
    Key: "my-key.json"
}, (...));
Looking in AWS SDK for JavaScript - Setting Credentials in Node.js, I see several methods to provide credentials - but none starts with the credentials object I have...
This worked for me:
var servicekey = my_s3_service.credentials;
var creds = new AWS.Credentials(servicekey.api_key, servicekey.secret_key);
AWS.config.update({credentials: creds});
How I found out: I debugged how it worked locally, and saw that AWS.config.credentials was of type SharedIniFileCredentials, taken locally from %USERPROFILE%\.aws\credentials (ref), which was presumably set up when I configured the AWS CLI. So I guessed I needed to put an AWS.Credentials object in there.
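Putting it all together, a sketch of the full wiring (service name and credential keys as in the question; the region is taken from location_constraint, which may or may not be what your service expects):

var AWS = require('aws-sdk');
var cfenv = require('cfenv');

var creds = cfenv.getAppEnv().getService('my-s3-service').credentials;

AWS.config.update({
    credentials: new AWS.Credentials(creds.api_key, creds.secret_key),
    region: creds.location_constraint // 'eu-west-1' in the example above
});

var s3 = new AWS.S3();
s3.getObject({ Bucket: creds.bucket, Key: 'my-key.json' }, function (err, data) {
    if (err) return console.error(err);
    console.log(data.Body.toString('utf-8'));
});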
I have really tried everything; surprisingly, Google does not have many answers when it comes to this.
When a certain .csv file is uploaded to an S3 bucket, I want to parse it and place the data into an RDS database.
My goal is to learn the Lambda serverless technology; this is essentially an exercise. Thus, I over-engineered the hell out of it.
Here is how it goes:
1. An S3 trigger fires when the .csv is uploaded -> calls the first lambda (this part fully works).
2. AAA_Thomas_DailyOverframeS3CsvToAnalytics_DownloadCsv downloads the csv from S3 and finishes with essentially the plaintext of the file. It is then supposed to pass it to the next lambda; the way I am trying to do this is by setting the second lambda as its destination. The function works, but the second lambda is never called, and I don't know why.
3. AAA_Thomas_DailyOverframeS3CsvToAnalytics_ParseCsv gets the plaintext as input and returns a JavaScript object with the parsed data.
4. AAA_Thomas_DailyOverframeS3CsvToAnalytics_DecryptRDSPass only connects to KMS, gets the encrypted RDS password, and passes it, along with the data it received as input, to the last lambda.
5. AAA_Thomas_DailyOverframeS3CsvToAnalytics_PutDataInRds then finally puts the data in RDS.
I created a custom VPC with custom subnets, route tables, gateways, peering connections, etc. I don't know if this is relevant, but function 2 only has access to the S3 endpoint, 3 does not have any internet access whatsoever, 4 is the only one that has normal internet access (it's the only way to connect to KMS), and 5 only has access to the peered VPC which hosts the RDS.
This is the code of the first lambda:
// dependencies
const AWS = require('aws-sdk');
const util = require('util');

const s3 = new AWS.S3();
let region = process.env.AWS_REGION;

exports.handler = async (event, context, callback) =>
{
    var checkDates = process.env.CheckDates == "false" ? false : true;
    var ret = [];
    var checkFileDate = function(actualFileName)
    {
        if (!checkDates)
            return true;
        var d = new Date();
        // getUTCMonth() is zero-based, hence the + 1
        var expectedFileName = 'Overframe_-_Analytics_by_Day_Device_' + d.getUTCFullYear() + '-' +
            String(d.getUTCMonth() + 1).padStart(2, '0') + '-' +
            String(d.getUTCDate()).padStart(2, '0');
        return expectedFileName == actualFileName.substr(0, expectedFileName.length);
    };
    for (var i = 0; i < event.Records.length; ++i)
    {
        var record = event.Records[i];
        try {
            if (record.s3.bucket.name != process.env.S3BucketName)
            {
                console.error('Unexpected notification, unknown bucket: ' + record.s3.bucket.name);
                continue;
            }
            if (!checkFileDate(record.s3.object.key))
            {
                console.error('Unexpected file, or date is not today\'s: ' + record.s3.object.key);
                continue;
            }
            const params = {
                Bucket: record.s3.bucket.name,
                Key: record.s3.object.key
            };
            var csvFile = await s3.getObject(params).promise();
            var allText = csvFile.Body.toString('utf-8');
            console.log('Loaded data:', {Bucket: params.Bucket, Filename: params.Key, Text: allText});
            ret.push(allText);
        } catch (error) {
            console.log("Couldn't download CSV from S3", error);
            return { statusCode: 500, body: error };
        }
    }
    // I've been randomly trying different ways to return the data; none works.
    // The data itself is correct, I checked with console.log()
    const response = {
        statusCode: 200,
        body: { "Records": ret }
    };
    return ret;
};
This screenshot shows how the lambda was set up, especially its destination: [screenshot not reproduced here]
I haven't posted on Stack Overflow in 7 years. That's how desperate I am. Thanks for the help.
Rather than getting each Lambda to call the next one, take a look at Step Functions, the AWS managed service for state machines, which can handle this workflow for you.
By defining inputs and outputs you can pass each function's output to the next one, with retry logic built in.
If you haven't much experience, AWS has a tutorial on setting up a step function by chaining Lambdas; a minimal state-machine definition is sketched below.
By using this you also will not need to account for configuration issues such as Lambda timeouts. In addition, it makes your code more modular, which improves testing of the individual functionality while also isolating issues.
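For illustration, a minimal Amazon States Language definition chaining the four functions might look like the following (the ARNs are placeholders, and the retry policy is just an example):

{
    "StartAt": "DownloadCsv",
    "States": {
        "DownloadCsv": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:<region>:<account-id>:function:AAA_Thomas_DailyOverframeS3CsvToAnalytics_DownloadCsv",
            "Next": "ParseCsv"
        },
        "ParseCsv": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:<region>:<account-id>:function:AAA_Thomas_DailyOverframeS3CsvToAnalytics_ParseCsv",
            "Next": "DecryptRDSPass"
        },
        "DecryptRDSPass": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:<region>:<account-id>:function:AAA_Thomas_DailyOverframeS3CsvToAnalytics_DecryptRDSPass",
            "Next": "PutDataInRds"
        },
        "PutDataInRds": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:<region>:<account-id>:function:AAA_Thomas_DailyOverframeS3CsvToAnalytics_PutDataInRds",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
            "End": true
        }
    }
}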
The execution role of every Lambda function whose destinations include other Lambda functions must have the lambda:InvokeFunction IAM permission in one of its attached IAM policies.
Here's a snippet from Lambda documentation:
To send events to a destination, your function needs additional permissions. Add a policy with the required permissions to your function's execution role. Each destination service requires a different permission, as follows:
Amazon SQS – sqs:SendMessage
Amazon SNS – sns:Publish
Lambda – lambda:InvokeFunction
EventBridge – events:PutEvents
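For the Lambda destination above, that means the first function's execution role needs a statement along these lines (region and account ID are placeholders):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": "arn:aws:lambda:<region>:<account-id>:function:AAA_Thomas_DailyOverframeS3CsvToAnalytics_ParseCsv"
        }
    ]
}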
I have had a pretty long run of attempts to put a file in an S3 bucket, after which I have to update my model.
I have the following code (note that I have tried the commented-out lines too; it works neither with nor without them).
The problem observed:
Everything in the first .then() block (successCallBack()) gets executed successfully, but I do not see the result of s3.putObject().
The bucket in question is public, with no access restrictions. It used to work with the sls offline option; then, because it was not working in AWS, I had to make a lot of changes, and I managed to make successCallBack() work, which does the database work successfully. However, the file upload still doesn't work.
Some questions:
While solving this, the real questions I am pondering / searching are:
1. Is a lambda supposed to return something? I saw the AWS docs, but they have fragmented code snippets.
2. Putting await in front of s3.putObject(params).promise() does not help. I see samples both with and without await in front of calls that return an AWS promise; I am not sure which ones are correct.
3. What is the correct way to chain async functions within one lambda function?
UPDATE:
const AWS = require('aws-sdk');

var myJSON = {}

const createBook = async (event) => {
    let bucketPath = "https://com.xxxx.yyyy.aa-bb-zzzzzz-1.amazonaws.com"
    let fileKey = ""; // file key (elided)
    let path = bucketPath + "/" + fileKey;
    myJSON = {
        // JSON from headers
    }
    var s3 = new AWS.S3();
    let buffer = Buffer.from(event.body, 'utf8');
    var params = {Bucket: 'com.xxxx.yyyy', Key: fileKey, Body: buffer, ContentEncoding: 'utf8'};
    let putObjPromise = s3.putObject(params).promise();
    putObjPromise
        .then(successCallBack())
        .then(c => {
            console.log('File upload Success!');
            return {
                statusCode: 200,
                headers: { 'Content-Type': 'text/plain' },
                body: "Success!!!"
            }
        })
        .catch(err => {
            let str = "File upload / Creation error: " + err;
            console.log(str);
            return {
                statusCode: err.statusCode || 500,
                headers: { 'Content-Type': 'text/plain' },
                body: str
            }
        });
}

const successCallBack = async () => {
    console.log("Inside success callback - " + JSON.stringify(myJSON));
    const { myModel } = await connectToDatabase()
    console.log("After connectToDatabase")
    const book = await myModel.create(myJSON)
    console.log("Created: " + JSON.stringify(book));
}
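As to question 3 above, one common shape (a sketch only, reusing names from the UPDATE; fileKey and successCallBack are assumed to be defined as above) is to await each step inside the handler and return the response object at the end:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

module.exports.createBook = async (event) => {
    try {
        const params = {Bucket: 'com.xxxx.yyyy', Key: fileKey, Body: Buffer.from(event.body, 'utf8')};
        await s3.putObject(params).promise(); // resolves only once S3 has stored the object
        await successCallBack();              // database work runs strictly after the upload
        return {
            statusCode: 200,
            headers: { 'Content-Type': 'text/plain' },
            body: "Success!!!"
        };
    } catch (err) {
        const str = "File upload / Creation error: " + err;
        console.log(str);
        return {
            statusCode: err.statusCode || 500,
            headers: { 'Content-Type': 'text/plain' },
            body: str
        };
    }
};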
Finally, I got this to work. My code already worked in the sls offline setup.
What was different on the AWS endpoint?
What I observed on the console was that my lambda was set to run inside a VPC.
When I chose No VPC, it worked. I do not know if this is the best practice; there must be some security advantage to running functions inside a VPC.
I came across this huge explanation about VPCs, but I could not find anything related to S3.
The code posted in the question currently runs fine on the AWS endpoint.
If the lambda is running in a VPC, then you need a VPC endpoint to access a service outside the VPC, and S3 is outside the VPC. Perhaps, if security is an issue, creating a VPC endpoint would solve the issue in a better way. Also, if security is an issue, then adding a policy (or using the default AmazonS3FullAccess policy) to the role that the lambda uses means the S3 bucket wouldn't need to be public.
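For reference, a gateway endpoint for S3 can be created with a single CLI call (all IDs and the region here are placeholders):

aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.<region>.s3 \
    --vpc-endpoint-type Gateway \
    --route-table-ids rtb-0123456789abcdef0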
I'm trying to find an example of how to decrypt a password encrypted with KMS. For example, I have an RDS database password encrypted by KMS, and I want to decrypt it in my aws-cpp lambda function to connect to the database.
I see that to call client.Decrypt I need a DecryptRequest,
but I don't know how to initialize it, or where to set my "encriptedPassword" in the DecryptRequest before calling client.Decrypt().
This is base64:
encriptedPassword = "GPK0ujdAAAAZzBlBgkqhkiG"
Aws::SDKOptions options;
InitAPI(options);
{
    Aws::Client::ClientConfiguration awsConfig;
    awsConfig.region = Aws::Environment::GetEnv("AWS_REGION");
    Aws::KMS::KMSClient client(awsConfig);

    // Aws::KMS::Model::DecryptRequest decryptRequest;
    // client.Decrypt(decryptRequest);
}
// shutdown the aws api
std::cout << "shutdown api" << "\n";
ShutdownAPI(options);
All the credentials are managed and stored by the AWS administrator, so I don't have access to that configuration; I only have the encrypted DB password. When I make the lambda, I publish it to a git repository; after that, a Jenkins process builds and deploys the lambda to AWS. Jenkins has the credentials etc., and I suppose they are also saved in the EC2 or AWS config. They only gave me an example of how to do this, but it's in Node.js, and I need to write a C++ version of it. This is the Node.js example:
'use strict';

const AWS = require('aws-sdk');

module.exports.decrypt = (key) => {
    return new Promise((resolve, reject) => {
        const kms = new AWS.KMS();
        console.log('Attempting to decrypt: ' + key);
        const params = {CiphertextBlob: Buffer.from(key, 'base64')};
        console.log(params);
        kms.decrypt(params, function (err, data) {
            if (err) {
                console.log('Error while decrypting key: ' + err);
                reject(err);
            } else {
                console.log('Decrypted key');
                resolve(data.Plaintext.toString('ascii'));
            }
        });
    });
};
What the Node.js version does is decrypt the password using the SDK KMS client; after that, the plaintext is passed as a simple string to the DB connection library, which connects to the database using host, port, DB name, etc.
What I need to do is decrypt the password using the KMSClient for C++, like the Node version does.
Can anyone write a small example, please? Thanks to all!
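For what it's worth, here is a minimal, untested sketch of what the C++ equivalent might look like, assuming the AWS SDK for C++ KMS client (Aws::KMS::KMSClient) and its Base64 helper; the ciphertext is the placeholder value from the question:

#include <aws/core/Aws.h>
#include <aws/core/platform/Environment.h>
#include <aws/core/utils/HashingUtils.h>
#include <aws/kms/KMSClient.h>
#include <aws/kms/model/DecryptRequest.h>
#include <iostream>

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::Client::ClientConfiguration awsConfig;
        awsConfig.region = Aws::Environment::GetEnv("AWS_REGION");
        Aws::KMS::KMSClient client(awsConfig);

        // Base64-decode the ciphertext and hand it to the request,
        // mirroring Buffer.from(key, 'base64') in the Node example
        Aws::String encriptedPassword = "GPK0ujdAAAAZzBlBgkqhkiG"; // placeholder
        Aws::KMS::Model::DecryptRequest decryptRequest;
        decryptRequest.SetCiphertextBlob(
            Aws::Utils::HashingUtils::Base64Decode(encriptedPassword));

        auto outcome = client.Decrypt(decryptRequest);
        if (outcome.IsSuccess()) {
            // KMS returns raw bytes; build a string from them, like
            // data.Plaintext.toString('ascii') in the Node example
            const auto& plaintext = outcome.GetResult().GetPlaintext();
            Aws::String password(reinterpret_cast<const char*>(plaintext.GetUnderlyingData()),
                                 plaintext.GetLength());
            std::cout << "Decrypted key" << "\n";
        } else {
            std::cout << "Error while decrypting: " << outcome.GetError().GetMessage() << "\n";
        }
    }
    Aws::ShutdownAPI(options);
    return 0;
}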