How to format a bucket name in AWS for saving - amazon-web-services

Hi, I have a Lambda function that is trying to save to a bucket:
exports.handler = async (event) => {
    console.log('starting');
    const { Client } = require('pg');
    const client = new Client();
    const AWS = require('aws-sdk');
    const s3 = new AWS.S3();
    var bucketName = 'arn:aws:s3:us-east-1::my_bucket_name';
    var keyName = 'prova.txt';
    var content = 'This is a sample text file';
    var params = { 'Bucket': bucketName, 'Key': keyName, 'Body': content };
    try {
        console.log('saving...');
        const data = await s3.putObject(params).promise();
        console.log("Successfully saved object to " + bucketName + "/" + keyName);
    } catch (err) {
        console.log('err');
        console.log(err);
    }
};
But I get the error below. Any idea what I'm doing wrong?
message: "Access point ARN resource should begin with 'accesspoint/'",
code: 'InvalidAccessPointARN',
time: 2020-03-21T12:38:33.370Z
}
END RequestId: 31aba537-c25a-45bf-877e-0be8e8f98c95
REPORT RequestId: 31aba537-c25a-45bf-877e-0be8e8f98c95 Duration: 4543.02 ms Billed Duration: 4600 ms Memory Size: 128 MB Max Memory Used: 83 MB Init Duration: 107.67 ms

Your bucket is not accessible through that ARN at the moment.
Go to your S3 bucket, then navigate to the "Access points" tab.
Create an access point from there.
I believe you need an Internet access point, and to keep things simple, untick "Block all public access" (not recommended for security).
Once it is created, open the access point details and use the "Access point ARN" from there.
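For what it's worth, both identifier shapes below are accepted as Bucket by putObject; the ARN in the question fails because the SDK parses anything starting with arn: as an access point ARN, and it lacks the accesspoint/ resource prefix the error message mentions. A minimal sketch (bucket name, account ID, and access point name are placeholders):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function save() {
    // Plain bucket name -- the simplest form, and the usual fix here
    await s3.putObject({
        Bucket: 'my-bucket-name',
        Key: 'prova.txt',
        Body: 'This is a sample text file'
    }).promise();

    // Or a well-formed access point ARN: it needs the account ID and
    // the 'accesspoint/' resource prefix the error message asks for
    await s3.putObject({
        Bucket: 'arn:aws:s3:us-east-1:123456789012:accesspoint/my-access-point',
        Key: 'prova.txt',
        Body: 'This is a sample text file'
    }).promise();
}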

Related

How to set credentials in AWS SDK v3 JavaScript?

I am scouring the documentation, and it only provides pseudo-code of the credentials for v3 (e.g. const client = new S3Client(clientParams)).
How do I initialize an S3Client with the bucket and credentials to perform a getSignedUrl request? Any resources pointing me in the right direction would be most helpful. I've even searched YouTube, SO, etc. and I can't find any specific info on v3. Even the documentation and examples don't provide the actual code to use credentials. Thanks!
As an aside, do I have to include the fake folder structure in the filename, or can I just use the actual filename? For example: bucket/folder1/folder2/uniqueFilename.zip or uniqueFilename.zip
Here's the code I have so far. (Keep in mind I was returning the wasabiObjKey to ensure I was getting the correct file name. I am. It's the client, GetObjectCommand, and getSignedUrl that I'm having issues with.)
exports.getPresignedUrl = functions.https.onCall(async (data, ctx) => {
    const wasabiObjKey = `${data.bucket_prefix ? `${data.bucket_prefix}/` : ''}${data.uid.replace(/-/g, '_').toLowerCase()}${data.variation ? `_${data.variation.replace(/\./g, '').toLowerCase()}` : ''}.zip`
    const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3')
    const s3 = new S3Client({
        bucketEndpoint: functions.config().s3_bucket.name,
        region: functions.config().s3_bucket.region,
        credentials: {
            secretAccessKey: functions.config().s3.secret,
            accessKeyId: functions.config().s3.access_key
        }
    })
    const command = new GetObjectCommand({
        Bucket: functions.config().s3_bucket.name,
        Key: wasabiObjKey,
    })
    const { getSignedUrl } = require("@aws-sdk/s3-request-presigner")
    const url = getSignedUrl(s3, command, { expiresIn: 60 })
    return wasabiObjKey
})
There is a credential chain that provides credentials to your API calls from the SDK:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/setting-credentials-node.html
Loaded from AWS Identity and Access Management (IAM) roles for Amazon EC2
Loaded from the shared credentials file (~/.aws/credentials)
Loaded from environment variables
Loaded from a JSON file on disk
Other credential-provider classes provided by the JavaScript SDK
You can embed the credentials inside your source code, but it's not the preferred way:
new S3Client(configuration: S3ClientConfig): S3Client
where S3ClientConfig contains a credentials property:
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/modules/credentials.html
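As a point of comparison with the embedded-credential example below: if you pass no credentials at all, the v3 client falls back to the default chain (environment variables such as AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY, the shared credentials file, or an attached IAM role). A minimal sketch:
const { S3Client } = require('@aws-sdk/client-s3');

// No explicit credentials: the SDK resolves them at call time from the
// environment, ~/.aws/credentials, or an IAM role
const client = new S3Client({ region: 'ap-southeast-1' });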
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
let client = new S3Client({
    region: 'ap-southeast-1',
    credentials: {
        accessKeyId: '',
        secretAccessKey: ''
    }
});
(async () => {
    const response = await client.send(new GetObjectCommand({ Bucket: "BucketNameHere", Key: "ObjectNameHere" }));
    console.log(response);
})();
Sample response:
'$metadata': {
httpStatusCode: 200,
requestId: undefined,
extendedRequestId: '7kwrFkEp3lEnLU+OtxjrgdmS6gQmvPdbnqqR7I8P/rdFrUPBkdKYPYykWivuHPXCF1IHgjCIbe8=',
cfId: undefined,
attempts: 1,
totalRetryDelay: 0
},
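On the presigner part of the question specifically: getSignedUrl from @aws-sdk/s3-request-presigner returns a promise, so the snippet in the question never resolves a URL before returning. A minimal corrected sketch (bucket, key, region, and credentials stand in for whatever your config provides):
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

async function presign(bucket, key, region, credentials) {
    const s3 = new S3Client({ region, credentials });
    const command = new GetObjectCommand({ Bucket: bucket, Key: key });
    // getSignedUrl is async -- it must be awaited (or .then'd)
    return await getSignedUrl(s3, command, { expiresIn: 60 });
}
And to the aside in the question: the Key must be the full object key, prefixes included (e.g. folder1/folder2/uniqueFilename.zip); S3 has no real folders, only key prefixes.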
Here's a simple approach I use (in Deno) for testing, in case you don't want to go the signed-URL route and would rather let the SDK do the heavy lifting for you:
import { config as env } from 'https://deno.land/x/dotenv/mod.ts' // https://github.com/pietvanzoen/deno-dotenv
import { S3Client, ListObjectsV2Command } from 'https://cdn.skypack.dev/@aws-sdk/client-s3' // https://github.com/aws/aws-sdk-js-v3

const { AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY } = env()

// https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/modules/credentials.html
const credentials = {
    accessKeyId: AWS_ACCESS_KEY_ID,
    secretAccessKey: AWS_SECRET_ACCESS_KEY,
}

// https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/interfaces/s3clientconfig.html
const config = {
    region: 'ap-southeast-1',
    credentials,
}

// https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/classes/s3client.html
const client = new S3Client(config)

export async function list() {
    // https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/interfaces/listobjectsv2commandinput.html
    const input = {
        Bucket: 'BucketNameHere'
    }
    // https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/classes/command.html
    const cmd = new ListObjectsV2Command(input)
    // https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/classes/listobjectsv2command.html
    return await client.send(cmd)
}

UpdateFunctionCode in lambda does not update the code

I have followed this blog to update the code of a Lambda function using a jar file stored in an S3 bucket. The execution succeeded, but it is not updating the code of the target Lambda function.
Code snippet
console.log('Loading function');
var AWS = require('aws-sdk');
var lambda = new AWS.Lambda();

exports.handler = function (event, context) {
    var functionName = "runJarFile";
    var bucket = "jarfiletest2";
    var key = "lambda-java-example-0.0.1-SNAPSHOT.jar.zip";
    console.log("uploaded to lambda function: " + functionName);
    var params = {
        FunctionName: functionName,
        S3Key: key,
        S3Bucket: bucket,
        Publish: true
    };
    lambda.updateFunctionCode(params, function (err, data) {
        if (err) {
            console.log(err, err.stack);
            context.fail(err);
        } else {
            console.log(data);
            context.succeed(data);
        }
    });
};
Thanks in advance
It's difficult to comment on this without knowing the details about the destination function. What's the output of the GetFunction API call for that Lambda, before and after the UpdateFunctionCode call?
I am interested in the SHA-256 hash of the code and the last-modified timestamp from that API call before and after calling UpdateFunctionCode:
{
...
"CodeSha256": "5tT2qgzYUHoqwR616pZ2dpkn/0J1FrzJmlKidWaaCgk=",
"LastModified": "2019-09-24T18:20:35.054+0000"
...
}
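A minimal sketch (v2 SDK, using the question's function name) for pulling those two fields:
var AWS = require('aws-sdk');
var lambda = new AWS.Lambda();

// Compare these values before and after the UpdateFunctionCode call
lambda.getFunction({ FunctionName: 'runJarFile' }, function (err, data) {
    if (err) {
        console.log(err, err.stack);
    } else {
        console.log(data.Configuration.CodeSha256);
        console.log(data.Configuration.LastModified);
    }
});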
If the values are exactly the same, can you add this check as per the blog post to see if the bucket and the object exist?
if (bucket == "YOUR_BUCKET_NAME" && key == "YOUR_CODE.zip" && version) {
    // your code
} else {
    context.succeed("skipping zip " + key + " in bucket " + bucket + " with version " + version);
}
Please try removing 'Publish: true' so that the update applies to the latest version ($LATEST) rather than a newly published, fixed version.
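That is, the suggestion amounts to (same names as in the question):
var params = {
    FunctionName: functionName,
    S3Key: key,
    S3Bucket: bucket
    // Publish omitted (defaults to false): the code still lands on
    // $LATEST, but no new numbered version is published
};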

Lambda function triggered continuously by ObjectRemoved event

I created a Lambda function for deleting a given thumbnail, and I set a trigger on the ObjectRemoved event in order to automatically delete a thumbnail image when the original file was deleted from a given aws-S3 bucket.
However, while analyzing the monthly bill I realized that for some reason that Lambda was called hundreds of millions of times and wouldn't stop being triggered. I had to disable the trigger on the Lambda to stop it.
The problem is I have not created or deleted any file on that bucket, so I wonder how it's possible the Lambda function continued to be triggered continuously.
Any help is appreciated.
Thanks.
Edit:
My AWS Lambda code
var aws = require('aws-sdk');
var s3 = new aws.S3();

exports.handler = function (event, context) {
    console.log('Received event:', JSON.stringify(event, null, 2));
    // Get the object from the event and show its content type
    const bucket = event.Records[0].s3.bucket.name;
    const key = event.Records[0].s3.object.key;
    const path = key.split('/');
    const folder = path[0];
    const fileName = path[1];
    const deleteKey = folder + '/thumbnails/' + fileName;
    // Note: this deletes from the same bucket that raised the event
    s3.deleteObject({ Bucket: bucket, Key: deleteKey }, function (err, data) {
        if (err) {
            console.log('Error deleting object ' + deleteKey + ' from bucket ' + bucket + '. Make sure they exist and your bucket is in the same region as this function.');
            context.fail('Error getting file: ' + err);
        } else {
            context.succeed();
        }
    });
};
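One loop worth ruling out (not confirmed in the thread): the handler deletes an object in the same bucket that fires the ObjectRemoved trigger, so deleting a thumbnail can itself raise another event for this function. A minimal guard, assuming thumbnails live under a <folder>/thumbnails/ prefix, would bail out on those keys before calling deleteObject:
exports.handler = function (event, context) {
    const key = event.Records[0].s3.object.key;
    // If this event was raised by our own thumbnail deletion, stop here;
    // otherwise the deleteObject call below re-triggers this function
    if (key.split('/')[1] === 'thumbnails') {
        context.succeed();
        return;
    }
    // ...proceed with the deleteObject call as above...
};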

AWS Lambda scanning a dynamoDB table

I'm trying to scan a DynamoDB table for all entries with prices between 1 and 13:
var AWS = require('aws-sdk');
var db = new AWS.DynamoDB();

exports.handler = function(event, context) {
    var params = {
        TableName: "hexagon2",
        ProjectionExpression: "price",
        FilterExpression: "price between :lower and :higher",
        ExpressionAttributeValues: {
            ":lower": { "N": "1" },
            ":higher": { "N": "13" }
        }
    };
    db.scan(params, function(err, data) {
        if (err) {
            console.log(err); // an error occurred
        } else {
            console.log(data.Item); // successful response
            context.done(null, { "Result": "Operation succeeded." });
        }
    });
};
But I always get the following error when I test it. I definitely have 'price' as a number attribute in my table, and IAM is set up too.
START RequestId: f770c78b-93a1-11e6-b5f6-e5c31cef8b2d Version: $LATEST
2016-10-16T13:10:54.299Z f770c78b-93a1-11e6-b5f6-e5c31cef8b2d undefined
END RequestId: f770c78b-93a1-11e6-b5f6-e5c31cef8b2d
REPORT RequestId: f770c78b-93a1-11e6-b5f6-e5c31cef8b2d Duration: 912.58 ms Billed Duration: 1000 ms Memory Size: 128 MB Max Memory Used: 24 MB
You are trying to reference data.Item, which is undefined. Scan operations return a list, not a single item; that list is referenced via data.Items.
When in doubt, read the documentation. Or you could just print out the entire data response to see the exact format of the response coming back.
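Concretely, a minimal corrected callback (everything else in the question unchanged):
db.scan(params, function(err, data) {
    if (err) {
        console.log(err); // an error occurred
    } else {
        console.log(data.Items); // Scan returns the matches as a list under Items
        context.done(null, { "Result": "Operation succeeded." });
    }
});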

Simple DB policy being ignored?

I'm trying to use AWS IAM to generate temporary tokens for a mobile app. I'm using the AWS C# SDK.
Here's my code...
The token generating service
public string GetIAMKey(string deviceId)
{
    //fetch IAM key...
    var credentials = new BasicAWSCredentials("MyKey", "MyAccessId");
    var sts = new AmazonSecurityTokenServiceClient(credentials);
    var tokenRequest = new GetFederationTokenRequest();
    tokenRequest.Name = deviceId;
    tokenRequest.Policy = File.ReadAllText(HostingEnvironment.MapPath("~/policy.txt"));
    tokenRequest.DurationSeconds = 129600;
    var tokenResult = sts.GetFederationToken(tokenRequest);
    var details = new IAMDetails
    {
        SessionToken = tokenResult.GetFederationTokenResult.Credentials.SessionToken,
        AccessKeyId = tokenResult.GetFederationTokenResult.Credentials.AccessKeyId,
        SecretAccessKey = tokenResult.GetFederationTokenResult.Credentials.SecretAccessKey,
    };
    return JsonConvert.SerializeObject(details);
}
The client
var iamkey = Storage.LoadPersistent<IAMDetails>("iamkey");
var simpleDBClient = new AmazonSimpleDBClient(iamkey.AccessKeyId, iamkey.SecretAccessKey, iamkey.SessionToken);
try
{
    var details = await simpleDBClient.SelectAsync(new SelectRequest { SelectExpression = "select * from mydomain" });
    return null;
}
catch (Exception ex)
{
    Storage.ClearPersistent("iamkey");
}
The policy file contents
{ "Statement":[{ "Effect":"Allow", "Action":"sdb:* ", "Resource":"arn:aws:sdb:eu-west-1:* :domain/mydomain*" } ]}
I keep getting the following error...
User (arn:aws:sts::myaccountid:federated-user/654321) does not have permission to perform (sdb:Select) on resource (arn:aws:sdb:us-east-1:myaccountid:domain/mydomain)
Notice that my policy file clearly specifies two things:
the region should be eu-west-1
the allowed action is a wildcard, i.e. allow everything
But the exception thrown claims that my user doesn't have permission in us-east-1.
Any ideas as to why I'm getting this error?
OK, I figured it out.
You have to set the region endpoint on your call to the service from the client.
So
var simpleDBClient = new AmazonSimpleDBClient(iamkey.AccessKeyId, iamkey.SecretAccessKey, iamkey.SessionToken, Amazon.RegionEndpoint.EUWest1);