I'm learning Cloud Foundry and trying to get my Node.js app to access my AWS S3 service. I've bound the AWS S3 service to the app (in manifest.yml, under applications/services). In code I can get the credentials using cfenv, but how do I supply them to the AWS SDK?
var cfenv = require("cfenv");
var appEnv = cfenv.getAppEnv();
var my_s3_service = appEnv.getService('my-s3-service');
/* my_s3_service.credentials = {
"api_key": "(redacted)",
"bucket": "(redacted)",
"endpoint": "s3-eu-west-1.amazonaws.com",
"location_constraint": "eu-west-1",
"secret_key": "(redacted)",
"uri": "s3://(redacted):(redacted)#s3-eu-west-1.amazonaws.com/(redacted)"
} */
var AWS = require('aws-sdk');
AWS.config.update(Uhhh... something with my_s3_service.credentials... but what?);
const s3 = new AWS.S3();
s3.getObject({
Bucket: my_s3_service.credentials.bucket,
Key: "my-key.json"
}, (...));
Looking in AWS SDK for JavaScript - Setting Credentials in Node.js, I see several methods to provide credentials - but none starts with the credentials object I have...
This worked for me:
var servicekey = my_s3_service.credentials;
var creds = new AWS.Credentials(servicekey.api_key, servicekey.secret_key);
AWS.config.update({credentials: creds});
How I found out: I debugged how it worked locally and saw that AWS.config.credentials was of type SharedIniFileCredentials, loaded from %USERPROFILE%\.aws\credentials (ref), which was presumably created when I configured the AWS CLI. So I guessed I needed to put an AWS.Credentials object in there.
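For completeness, here is a small sketch of mapping the bound service's credential fields onto the option names that AWS.config.update() accepts (accessKeyId, secretAccessKey, region, endpoint). Treating location_constraint as the region and passing the endpoint through are assumptions based on the credentials object shown above:

```javascript
// Map the Cloud Foundry service credentials onto the option names
// the AWS SDK understands. Field names come from the bound service's
// credentials object above; the region mapping is an assumption.
function toAwsConfig(svc) {
  return {
    accessKeyId: svc.api_key,
    secretAccessKey: svc.secret_key,
    region: svc.location_constraint, // assumption: location_constraint is the region
    endpoint: svc.endpoint,
  };
}

// Usage would then be:
//   AWS.config.update(toAwsConfig(my_s3_service.credentials));
//   const s3 = new AWS.S3();

const cfg = toAwsConfig({
  api_key: 'AKIA_EXAMPLE',
  secret_key: 'example-secret',
  endpoint: 's3-eu-west-1.amazonaws.com',
  location_constraint: 'eu-west-1',
});
console.log(cfg.region); // → eu-west-1
```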
Related
I am trying to upload a file in S3 by AWS Assume Role. When I am trying to access it from CLI it works fine but from .Net SDK it gives me Access Denied error.
Here are the steps I followed in CLI -
Set up the access key / secret key for the user using aws configure
Assume the role - aws sts assume-role --role-arn "arn:aws:iam::1010101010:role/Test-Account-Role" --role-session-name AWSCLI-Session
Take the access key / secret key / session token returned by the assume-role call and set up an AWS profile with them.
Switch to the assumed-role profile: set AWS_PROFILE=
Verify that the caller now has the role: aws sts get-caller-identity
Access the bucket using ls or cp or rm command - Works Successfully.
Now I am trying to access it from .Net core App -
Here is the code snippet - note that I am using the same access key and secret key as the CLI on my local machine.
try
{
var region = RegionEndpoint.GetBySystemName(awsRegion);
SessionAWSCredentials tempCredentials = await GetTemporaryCredentialsAsync(awsAccessKey, awsSecretKey, region, roleARN);
//Use the temp credentials received to create the new client
IAmazonS3 client = new AmazonS3Client(tempCredentials, region);
TransferUtility utility = new TransferUtility(client);
// making a TransferUtilityUploadRequest instance
TransferUtilityUploadRequest request = new TransferUtilityUploadRequest
{
BucketName = bucketName,
Key = $"{subFolder}/{fileName}",
FilePath = localFilePath
};
utility.Upload(request); //transfer
fileUploadedSuccessfully = true;
}
catch (AmazonS3Exception ex)
{
// HandleException
}
catch (Exception ex)
{
// HandleException
}
The method to get temp credentials is as follows - GetTemporaryCredentialsAsync:
private static async Task<SessionAWSCredentials> GetTemporaryCredentialsAsync(string awsAccessKey, string awsSecretKey, RegionEndpoint region, string roleARN)
{
using (var stsClient = new AmazonSecurityTokenServiceClient(awsAccessKey, awsSecretKey, region))
{
var getSessionTokenRequest = new GetSessionTokenRequest
{
DurationSeconds = 7200
};
await stsClient.AssumeRoleAsync(
new AssumeRoleRequest()
{
RoleArn = roleARN,
RoleSessionName = "mySession"
});
GetSessionTokenResponse sessionTokenResponse =
await stsClient.GetSessionTokenAsync(getSessionTokenRequest);
Credentials credentials = sessionTokenResponse.Credentials;
var sessionCredentials =
new SessionAWSCredentials(credentials.AccessKeyId,
credentials.SecretAccessKey,
credentials.SessionToken);
return sessionCredentials;
}
}
I am getting back the temp credentials but it gives me Access Denied while uploading the file. Not sure if I am missing anything here.
Also, I noted that the token generated via the SDK is shorter than the one from the CLI. I tried pasting these temp credentials into my local profile and then accessing the bucket, and I get the Access Denied error then too.
There is an AWS .NET V3 example that shows this exact use case. To assume a role, you use an AmazonSecurityTokenServiceClient and read the credentials out of the AssumeRole response. In this example, the user assumes a role that allows listing all S3 buckets. Note that in your snippet the AssumeRoleAsync response is discarded and the returned credentials come from GetSessionToken instead, so they carry only the user's own permissions, not the role's. See this .NET scenario here.
https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/dotnetv3/IAM/IAM_Basics_Scenario/IAM_Basics_Scenario/IAM_Basics.cs
I'm using Google Cloud Functions to listen to a topic in Pub/Sub and send data to a collection in Firestore. The problem is: whenever I test the function (using the test tab that is provided in GCP) and check the logs from that function, it always throws this error:
Error: Could not load the default credentials.
Browse to https://cloud.google.com/docs/authentication/getting-started for more information.
That link didn't help, by the way, as they say the Application Default Credentials are found automatically, but it's not the case here.
This is how I'm using Firestore, in index.js:
const admin = require('firebase-admin')
admin.initializeApp()
var db = admin.firestore()
// ...
db.collection('...').add(doc)
In my package.json, these are the dependencies (I'm using BigQuery too, which raises the same error):
{
"name": "[function name]",
"version": "0.0.1",
"dependencies": {
"@google-cloud/pubsub": "^0.18.0",
"@google-cloud/bigquery": "^4.3.0",
"firebase-admin": "^8.6.1"
}
}
I've already tried:
Creating a new service account and using it in the function setting;
Using the command gcloud auth application-default login in Cloud Shell;
Setting the environment variable GOOGLE_APPLICATION_CREDENTIALS via Cloud Shell to a json file (I don't even know if that makes sense);
But nothing seems to work :( How can I configure this default credential so that I don't have to ever configure it again? Like, a permanent setting for the entire project so all my functions can have access to Firestore, BigQuery, IoT Core, etc. with no problems.
This is the code that I am using:
const firebase = require('firebase');
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const serviceAccount = require("./key.json");
const config = {
credential: admin.credential.cert(serviceAccount),
apiKey: "",
authDomain: "project.firebaseapp.com",
databaseURL: "https://project.firebaseio.com",
projectId: "project",
storageBucket: "project.appspot.com",
messagingSenderId: "",
appId: "",
measurementId: ""
};
admin.initializeApp(config);
const db = admin.firestore();
I'm still learning to use AWS Lambda functions. I generated mine using the Amplify framework. The Lambda function I use needs to access an AppSync API. Therefore it has the following middleware:
const tapRoute = f => R.tap(route => route.use(f));
const hydrateClient = tapRoute(async function(req, res, next) {
try {
const url = process.env.API_SAYMAPPSYNCAPI_GRAPHQLAPIENDPOINTOUTPUT;
const region = process.env.REGION;
AWS.config.update({
region,
credentials: new AWS.Credentials(
process.env.AWS_ACCESS_KEY_ID,
process.env.AWS_SECRET_ACCESS_KEY,
process.env.AWS_SESSION_TOKEN
),
});
const credentials = AWS.config.credentials;
const appsyncClient = new AWSAppSyncClient(
{
url,
region,
auth: {
type: 'AWS_IAM',
credentials,
},
disableOffline: true,
},
{
defaultOptions: {
query: {
fetchPolicy: 'network-only',
errorPolicy: 'all',
},
},
}
);
const client = await appsyncClient.hydrated();
req.client = client;
next();
} catch (error) {
console.log(error);
next(error);
}
});
As you can see I need to access the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN. When I run the function in the cloud, it automatically gets the values for these variables. How can I get them locally? Which access key do I need to use and how do I get its secret access key and the session token?
You don't need to explicitly set them if you have configured the ~/.aws/credentials file.
If you want to configure this file, the easiest way is to simply install the aws-cli and run aws configure. You will be prompted to enter a few values, including AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
If you don't have this file configured, then you can set these values in environment variables yourself.
You can get these values by going to AWS Console -> IAM -> Users -> Select your User -> Security Credentials -> Access Keys. Then you click on "Create Access Key" and either download or write those values down, as AWS_SECRET_ACCESS_KEY is only visible during creation time. AWS_ACCESS_KEY_ID on the other hand is always visible, but it's quite useless if you don't have the secret.
AWS_SESSION_TOKEN is only required if the user in question is using MFA. If not, this value can be ignored.
If you are using MFA though, you will need to use the aws-cli to fetch this value, like so:
aws sts get-session-token --serial-number arn:aws:iam::account-id-number:mfa/your-user --token-code MFAToken
Then, to set the temporary credentials, run aws configure again and replace the values of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with the new temporary values.
Finally, to set AWS_SESSION_TOKEN, run aws configure set aws_session_token VALUE_RETURNED_FROM_GET_SESSION_TOKEN_COMMAND
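For reference, the resulting ~/.aws/credentials profile would look something like this (placeholder values; the aws_session_token line is only present when using temporary credentials):

```ini
[default]
aws_access_key_id     = ASIAEXAMPLE
aws_secret_access_key = wJalrEXAMPLEKEY
aws_session_token     = FwoGZXIvYXdzEXAMPLE...
```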
Keep in mind that when running in the Cloud, these credentials are not loaded as you stated. IAM roles are used instead.
const AWS = require('aws-sdk');
export function main (event, context, callback) {
const s3 = new AWS.S3();
const data = JSON.parse(event.body);
const s3Params = {
Bucket: process.env.mediaFilesBucket,
Key: data.name,
ContentType: data.type,
ACL: 'public-read',
};
const uploadURL = s3.getSignedUrl('putObject', s3Params);
callback(null, {
statusCode: 200,
headers: {
'Access-Control-Allow-Origin': '*'
},
body: JSON.stringify({ uploadURL: uploadURL }),
})
}
When I test it locally it works fine, but after deployment the signed URL includes an x-amz-security-token query parameter, and then I get an access denied response. How can I get rid of this x-amz-security-token?
I was having the same issue. Everything was working flawlessly using serverless-offline but when I deployed to Lambda I started receiving AccessDenied issues on the URL. When comparing the URLs returned between the serverless-offline and AWS deployments I noticed the only difference was the inclusion of the X-Amz-Security-Token in the URL as a query string parameter. After some digging I discovered the token being assigned was based upon the assumed role the lambda function had. All I had to do was grant the appropriate S3 policies to the role and it worked.
I just solved a very similar, probably the same, issue as you have. I say probably because you don't say what deployment entails for you. I am assuming you are deploying to Lambda, but you may not be, so this may or may not apply; if you are using temporary credentials, it will.
I initially used the method you use above but then was using the npm module aws-signature-v4 to see if it was different and was getting the same error you are.
You will need the token; it is needed whenever you have signed a request with temporary credentials. In Lambda's case the credentials, including the session token, are in the runtime environment, and you need to pass the token along. The same is most likely true elsewhere as well, but I'm not sure; I haven't used EC2 in a few years.
Buried in the docs (and sorry I cannot find the place this is stated) it is pointed out that some services require that the session_token be processed with the other canonical query params. The module I'm using was tacking it on at the end, as the sig v4 instructions seem to imply, so I modified it so the token is canonical and it works.
We've updated the live version of the aws-signature-v4 module to reflect this change and now it works nicely for signing your s3 requests.
Signing is discussed here.
I would use the module I did as I have a feeling the sdk is doing the wrong thing for some reason.
usage example (this is wrapped in a multiPart upload thus the part number and upload Id):
function createBaseUrl( bucketName, uploadId, partNumber, objectKey ) {
let url = sig4.createPresignedS3URL( objectKey, {
method: "PUT",
bucket: bucketName,
expires: 21600,
query: `partNumber=${partNumber}&uploadId=${uploadId}`
});
return url;
}
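To illustrate the fix, here is a minimal plain-JavaScript sketch (no SDK; all parameter values are made up) of building a SigV4-style canonical query string, where X-Amz-Security-Token is URI-encoded and sorted together with the other parameters instead of being tacked on at the end:

```javascript
// SigV4 canonical query string: every parameter, including
// X-Amz-Security-Token, is URI-encoded and sorted by name.
function canonicalQueryString(params) {
  return Object.keys(params)
    .map(k => [encodeURIComponent(k), encodeURIComponent(params[k])])
    .sort((a, b) => (a[0] < b[0] ? -1 : 1))
    .map(([k, v]) => `${k}=${v}`)
    .join('&');
}

const qs = canonicalQueryString({
  uploadId: 'abc123',                          // hypothetical multipart upload id
  partNumber: '1',
  'X-Amz-Security-Token': 'IQoJ-example-token', // hypothetical session token
  'X-Amz-Expires': '21600',
});
console.log(qs);
// X-Amz-Expires=21600&X-Amz-Security-Token=IQoJ-example-token&partNumber=1&uploadId=abc123
```

Note that the sort is over the encoded names, which is why the token lands between X-Amz-Expires and partNumber rather than at the end of the URL.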
I was facing the same issue while creating a signed URL using the Boto3 library in Python 3.7.
Although this is not a recommended way to solve it, it worked for me.
The request method should be POST, with content-type=['multipart/form-data'].
Create a client like this:
import boto3

# Do not hard code credentials
s3_client = boto3.client(
    's3',
    # Hard-coded strings as credentials, not recommended.
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_ACCESS_KEY'
)
Then generate and return the presigned POST response:
bucket_name = BUCKET
acl = {'acl': 'public-read-write'}
file_path = str(file_name)  # file you want to upload
response = s3_client.generate_presigned_post(bucket_name,
                                             file_path,
                                             Fields={"Content-Type": ""},
                                             Conditions=[acl,
                                                         {"Content-Type": ""},
                                                         ["starts-with", "$success_action_status", ""],
                                                         ],
                                             ExpiresIn=3600)
I am working on my AWS cert and I'm trying to figure out how the following bit of js code works:
var AWS = require('aws-sdk');
var uuid = require('node-uuid');
// Create an S3 client
var s3 = new AWS.S3();
// Create a bucket and upload something into it
var bucketName = 'node-sdk-sample-' + uuid.v4();
var keyName = 'hello_world.txt';
s3.createBucket({Bucket: bucketName}, function() {
var params = {Bucket: bucketName, Key: keyName, Body: 'Hello'};
s3.putObject(params, function(err, data) {
if (err)
console.log(err)
else
console.log("Successfully uploaded data to " + bucketName + "/" + keyName);
});
});
This code successfully uploads a text file containing the word "Hello". I do not understand how this can identify MY AWS account. It does! But how? It is somehow able to determine that I want a new bucket inside MY account, yet this code was taken directly from the AWS docs. I don't know how it could figure that out...
As per Class: AWS.CredentialProviderChain, the AWS SDK for JavaScript looks for credentials in the following locations:
AWS.CredentialProviderChain.defaultProviders = [
function () { return new AWS.EnvironmentCredentials('AWS'); },
function () { return new AWS.EnvironmentCredentials('AMAZON'); },
function () { return new AWS.SharedIniFileCredentials(); },
function () {
// ECS credentials if AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set,
// otherwise EC2 instance metadata credentials
if (process.env.AWS_CONTAINER_CREDENTIALS_RELATIVE_URI) {
return new AWS.ECSCredentials();
}
return new AWS.EC2MetadataCredentials();
}
]
Environment Variables (useful for testing, or when running code on a local computer)
Local credentials file (useful for running code on a local computer)
ECS credentials (useful when running code in Elastic Container Service)
Amazon EC2 Metadata (useful when running code on an Amazon EC2 instance)
It is highly recommended to never store credentials within an application. If the code is running on an Amazon EC2 instance and a role has been assigned to the instance, the SDK will automatically retrieve credentials from the instance metadata.
The next best method is to store credentials in the ~/.aws/credentials file.
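As a rough illustration only (not the SDK's actual implementation), the chain's precedence could be sketched like this:

```javascript
// Rough sketch of the default provider chain's precedence: the first
// location that yields credentials wins. Illustration only; the real
// SDK constructs provider objects rather than inspecting env directly.
function pickCredentialSource(env, hasSharedIniFile) {
  if (env.AWS_ACCESS_KEY_ID && env.AWS_SECRET_ACCESS_KEY) return 'environment (AWS_*)';
  if (env.AMAZON_ACCESS_KEY_ID && env.AMAZON_SECRET_ACCESS_KEY) return 'environment (AMAZON_*)';
  if (hasSharedIniFile) return 'shared credentials file';
  if (env.AWS_CONTAINER_CREDENTIALS_RELATIVE_URI) return 'ECS container credentials';
  return 'EC2 instance metadata';
}

console.log(pickCredentialSource({}, false));
// → EC2 instance metadata
console.log(pickCredentialSource({ AWS_ACCESS_KEY_ID: 'AKIA...', AWS_SECRET_ACCESS_KEY: 's' }, true));
// → environment (AWS_*)
```

This is why the sample code "just works" on a developer machine that has run aws configure: the shared credentials file is found before the metadata providers are ever consulted.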