I am trying to generate a presigned URL to get an object from S3 inside a container running as a Fargate task. The task role has s3:GetObject and s3:PutObject permissions. When I call s3.getSignedUrl(params), it only returns https://s3.ap-northeast-1.amazonaws.com/ instead of the full signed URL. Here is the code I am using inside the container:
const getFileUrl = async (key, bucketName) => {
  try {
    const s3 = new aws.S3();
    const url = await s3.getSignedUrl('getObject', { Bucket: bucketName, Key: key });
    return url;
  } catch (error) {
    console.error(error);
    return false;
  }
};
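In the v2 SDK, getSignedUrl is synchronous when called without a callback, so it can build the URL before the Fargate task-role credentials have been fetched asynchronously, which is one known cause of a truncated URL like this. A minimal sketch of the usual workaround, using getSignedUrlPromise so the SDK resolves credentials before signing (assuming the same aws require as above):

const aws = require('aws-sdk');

const getFileUrl = async (key, bucketName) => {
  const s3 = new aws.S3();
  // getSignedUrlPromise waits for async credential providers
  // (e.g. the ECS/Fargate task role) before building the URL.
  return s3.getSignedUrlPromise('getObject', {
    Bucket: bucketName,
    Key: key,
    Expires: 3600, // seconds; hypothetical value
  });
};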
const S3 = require('aws-sdk/clients/s3');
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
const dotenv = require('dotenv');
dotenv.config();
const bucketName = process.env.AWS_BUCKET_NAME
const region = process.env.AWS_BUCKET_REGION
const accessKeyId = process.env.AWS_ACCESS_KEY
const secretAccessKey = process.env.AWS_SECRET_KEY
const s3 = new S3({
  region,
  accessKeyId,
  secretAccessKey
})
router.get("/:id", async (req, res) => {
  try {
    const post = await Post.findById(req.params.id);
    const getObjectParams = {
      Bucket: bucketName,
      Key: post.photo,
    }
    const command = new GetObjectCommand(getObjectParams);
    const url = await getSignedUrl(s3, command, { expiresIn: 3600 });
    post.imageUrl = url
    res.status(200).json(post);
  } catch (err) {
    console.error('errorrr', err);
    res.status(500).json(err);
  }
});
Here is my code. I've console.logged post, getObjectParams, and command, and everything is there, but url never gets logged; the catch block logs: Cannot read properties of undefined (reading 'clone').
What is the issue here?
I think the issue is with the getSignedUrl function, but I'm not sure what it is.
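That clone error typically points to the v3 presigner being handed a v2 client: getSignedUrl from @aws-sdk/s3-request-presigner expects an S3Client from @aws-sdk/client-s3, not an S3 instance from aws-sdk/clients/s3. A minimal sketch of the likely fix, keeping the same environment variables (note that v3 nests the keys under a credentials object, unlike v2's flat options):

const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

// v3 client; credentials must be nested, not passed at the top level
const s3 = new S3Client({
  region: process.env.AWS_BUCKET_REGION,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY,
    secretAccessKey: process.env.AWS_SECRET_KEY,
  },
});

async function presign(key) {
  const command = new GetObjectCommand({ Bucket: process.env.AWS_BUCKET_NAME, Key: key });
  // works because the client and presigner are both from the v3 SDK
  return getSignedUrl(s3, command, { expiresIn: 3600 });
}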
My clients currently get access to some objects with getSignedUrlPromise from the aws-sdk package. The requests are made from the backend and the signed URL is returned to the client; everything is fine.
I'm now trying to migrate from aws-sdk to @aws-sdk/client-s3. I'd like to keep the same structure, but I can't find such a command in the documentation.
I'm pretty sure @aws-sdk/client-s3 is capable of returning a signed URL.
Are there any (non-hacky) ways to do it?
EDIT: Based on this, I should use @aws-sdk/s3-request-presigner on top of @aws-sdk/client-s3 to get presigned URLs.
You can use @aws-sdk/s3-request-presigner. For example:
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");

const clientParams = { region: "us-east-1" };
const getObjectParams = { Bucket: "mybucket", Key: "dogs/snoopy.png" };

const client = new S3Client(clientParams);
const command = new GetObjectCommand(getObjectParams);
// inside an async function (or with top-level await in an ES module):
const url = await getSignedUrl(client, command, { expiresIn: 3600 });
console.log(url);
I am scouring the documentation, and it only provides pseudo-code of the credentials for v3 (e.g. const client = new S3Client(clientParams)).
How do I initialize an S3Client with the bucket and credentials to perform a getSignedUrl request? Any resources pointing me in the right direction would be most helpful. I've searched YouTube, SO, etc., and I can't find any specific info on v3; even the documentation and examples don't provide the actual code to use credentials. Thanks!
As an aside, do I have to include the fake folder structure in the file name, or can I just use the actual file name? For example: bucket/folder1/folder2/uniqueFilename.zip or uniqueFilename.zip.
Here's the code I have so far. (Keep in mind I was returning wasabiObjKey to ensure I was getting the correct file name; I am. It's the client, GetObjectCommand, and getSignedUrl that I'm having issues with.)
exports.getPresignedUrl = functions.https.onCall(async (data, ctx) => {
  const wasabiObjKey = `${data.bucket_prefix ? `${data.bucket_prefix}/` : ''}${data.uid.replace(/-/g, '_').toLowerCase()}${data.variation ? `_${data.variation.replace(/\./g, '').toLowerCase()}` : ''}.zip`
  const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3')
  const s3 = new S3Client({
    bucketEndpoint: functions.config().s3_bucket.name,
    region: functions.config().s3_bucket.region,
    credentials: {
      secretAccessKey: functions.config().s3.secret,
      accessKeyId: functions.config().s3.access_key
    }
  })
  const command = new GetObjectCommand({
    Bucket: functions.config().s3_bucket.name,
    Key: wasabiObjKey,
  })
  const { getSignedUrl } = require("@aws-sdk/s3-request-presigner")
  const url = await getSignedUrl(s3, command, { expiresIn: 60 })
  return wasabiObjKey
})
There is a credential chain that provides credentials to your API calls from the SDK:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/setting-credentials-node.html
Loaded from AWS Identity and Access Management (IAM) roles for Amazon EC2
Loaded from the shared credentials file (~/.aws/credentials)
Loaded from environment variables
Loaded from a JSON file on disk
Other credential-provider classes provided by the JavaScript SDK
You can embed the credentials inside your source code, but it's not the preferred way.
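If your code runs somewhere that already has a role attached (EC2 instance profile, ECS task role, Lambda execution role), you can omit credentials entirely and let the chain resolve them; a minimal sketch:

const { S3Client } = require("@aws-sdk/client-s3");

// No credentials property: the SDK walks the default provider chain
// (env vars, shared credentials file, instance or task role, ...)
const client = new S3Client({ region: "ap-southeast-1" });

If you need to pass keys explicitly instead, they go through the client constructor: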
new S3Client(configuration: S3ClientConfig): S3Client
Where S3ClientConfig contains a credentials property:
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/modules/credentials.html
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");

const client = new S3Client({
  region: 'ap-southeast-1',
  credentials: {
    accessKeyId: '',
    secretAccessKey: ''
  }
});

(async () => {
  const response = await client.send(new GetObjectCommand({ Bucket: "BucketNameHere", Key: "ObjectNameHere" }));
  console.log(response);
})();
Sample response (truncated):
{
  '$metadata': {
    httpStatusCode: 200,
    requestId: undefined,
    extendedRequestId: '7kwrFkEp3lEnLU+OtxjrgdmS6gQmvPdbnqqR7I8P/rdFrUPBkdKYPYykWivuHPXCF1IHgjCIbe8=',
    cfId: undefined,
    attempts: 1,
    totalRetryDelay: 0
  },
  ...
}
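Since the original question was about getSignedUrl, note that the same v3 client can be handed to the presigner package; a minimal sketch (bucket and key are placeholders):

const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");

(async () => {
  const client = new S3Client({ region: "ap-southeast-1" }); // credentials resolved via the chain above
  const command = new GetObjectCommand({ Bucket: "BucketNameHere", Key: "ObjectNameHere" });
  // Builds a time-limited URL without sending the request itself.
  const url = await getSignedUrl(client, command, { expiresIn: 3600 });
  console.log(url);
})();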
Here's a simple approach I use (in Deno) for testing, in case you don't want to go the signed-URL route and would rather let the SDK do the heavy lifting for you:
import { config as env } from 'https://deno.land/x/dotenv/mod.ts' // https://github.com/pietvanzoen/deno-dotenv
import { S3Client, ListObjectsV2Command } from 'https://cdn.skypack.dev/@aws-sdk/client-s3' // https://github.com/aws/aws-sdk-js-v3

const { AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY } = env()

// https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/modules/credentials.html
const credentials = {
  accessKeyId: AWS_ACCESS_KEY_ID,
  secretAccessKey: AWS_SECRET_ACCESS_KEY,
}

// https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/interfaces/s3clientconfig.html
const config = {
  region: 'ap-southeast-1',
  credentials,
}

// https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/classes/s3client.html
const client = new S3Client(config)

export async function list() {
  // https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/interfaces/listobjectsv2commandinput.html
  const input = {
    Bucket: 'BucketNameHere'
  }
  // https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/classes/command.html
  const cmd = new ListObjectsV2Command(input)
  // https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/classes/listobjectsv2command.html
  return await client.send(cmd)
}
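A hypothetical caller, to show the shape of the result (the module path is an assumption for this sketch):

import { list } from './s3-client.ts' // hypothetical path to the module above

const res = await list()
// Contents is an array of { Key, Size, LastModified, ... } entries
console.log(res.Contents?.map((obj) => obj.Key))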
In the AWS re:Invent video, the solution uses a Cognito user pool + identity pool.
It also uses a Lambda authorizer at the API Gateway to validate the token and generate the policy. I was reading How to authenticate API Gateway calls with Facebook?
and it says:
To use a federated identity, you set the API Gateway method to use “AWS_IAM” authorization. You use Cognito to create a role and associate it with your Cognito identity pool. You then use the Identity and Access Management (IAM) service to grant this role permission to call your API Gateway method.
-> If that's the case, how are we using a Lambda authorizer instead of the IAM authorizer while we are also using an identity pool?
-> What's the difference between using the IAM authorizer and generating IAM policies in a custom authorizer, as I see happening here:
https://github.com/aws-quickstart/saas-identity-cognito/blob/96531568b5bd30106d115ad7437b2b1886379e57/functions/source/custom-authorizer/index.js
or
const Promise = require('bluebird');
const jws = require('jws');
const jwkToPem = require('jwk-to-pem');
const request = require('request-promise');
const AWS = require('aws-sdk');
AWS.config.setPromisesDependency(Promise);
const s3 = new AWS.S3({ apiVersion: '2006-03-01' });
const { env: { s3bucket }} = process
// cache for certificates of issuers
const certificates = {};
// time tenant data was loaded
let tenantLoadTime = 0;
// promise containing the tenant data
let tenantPromise;

// this function returns the tenant data promise,
// refreshing the data if older than a minute
function tenants() {
  if (new Date().getTime() - tenantLoadTime > 1000 * 60) {
    console.log('Tenant info outdated, reloading');
    tenantPromise = s3.getObject({
      Bucket: s3bucket,
      Key: 'tenants.json'
    }).promise().then((data) => {
      const config = JSON.parse(data.Body.toString());
      console.log('Tenant config: %j', config);
      const tenantMap = {};
      config.forEach((t) => { tenantMap[t.iss] = t.id; });
      return tenantMap;
    });
    tenantLoadTime = new Date().getTime();
  }
  return tenantPromise;
}

// helper function to load the certificate of an issuer
function getCertificate(iss, kid) {
  if (certificates[iss]) {
    // resolve with the cached certificate, if it exists
    return Promise.resolve(certificates[iss][kid]);
  }
  return request({
    url: `${iss}/.well-known/jwks.json`,
    method: 'GET'
  }).then((rawBody) => {
    const { keys } = JSON.parse(rawBody);
    const pems = keys.map(k => ({ kid: k.kid, pem: jwkToPem(k) }));
    const map = {};
    pems.forEach((e) => { map[e.kid] = e.pem; });
    certificates[iss] = map;
    return map[kid];
  });
}

// extract the tenant from a payload
function getTenant(payload) {
  return tenants().then(config => config[payload.iss]);
}

// helper function to generate an IAM policy
function generatePolicy(payload, effect, resource) {
  return getTenant(payload).then((tenant) => {
    if (!tenant) {
      return Promise.reject(new Error('Unknown tenant'));
    }
    const authResponse = {};
    authResponse.principalId = payload.sub;
    if (effect && resource) {
      authResponse.policyDocument = {
        Version: '2012-10-17',
        Statement: [{
          Action: 'execute-api:Invoke',
          Effect: effect,
          Resource: resource
        }]
      };
    }
    // extract tenant id from iss
    payload.tenant = tenant;
    authResponse.context = { payload: JSON.stringify(payload) };
    console.log('%j', authResponse);
    return authResponse;
  });
}

function verifyPayload(payload) {
  if (payload.token_use !== 'id') {
    console.log('Invalid token use');
    return Promise.reject(new Error('Invalid token use'));
  }
  if (parseInt(payload.exp || 0, 10) * 1000 < new Date().getTime()) {
    console.log('Token expired');
    return Promise.reject(new Error('Token expired'));
  }
  // check if iss is a known tenant
  return tenants().then((config) => {
    if (config[payload.iss]) {
      return Promise.resolve();
    }
    console.log('Invalid issuer');
    return Promise.reject(new Error('Invalid issuer'));
  });
}

function verifyToken(token, alg, pem) {
  if (!jws.verify(token, alg, pem)) {
    console.log('Invalid Signature');
    return Promise.reject(new Error('Token invalid'));
  }
  return Promise.resolve();
}

exports.handle = function handle(e, context, callback) {
  console.log('processing event: %j', e);
  const { authorizationToken: token } = e;
  if (!token) {
    console.log('No token found');
    return callback('Unauthorized');
  }
  const { header: { alg, kid }, payload: rawToken } = jws.decode(token);
  const payload = JSON.parse(rawToken);
  return verifyPayload(payload)
    .then(() => getCertificate(payload.iss, kid))
    .then(pem => verifyToken(token, alg, pem))
    .then(() => generatePolicy(payload, 'Allow', e.methodArn))
    .then(policy => callback(null, policy))
    .catch((err) => {
      console.log(err);
      return callback('Unauthorized');
    });
};
I have CloudFront in front of an S3 bucket that serves HLS videos. I'm trying to dynamically modify the manifest files to add an auth token to the segment URLs inside them.
What I would really like to do is modify the body I send back to the client in a viewer response function, but since that isn't possible, I'm attempting to use an origin request function to manually fetch the object from S3, modify it, and return a CloudFront request with the new body. I get a 503 error: "The Lambda function result failed validation: The body is not a string, is not an object, or exceeds the maximum size".
My body is under 8 KB (1 MB is the limit in the docs). As far as I can tell, the CloudFront request object I'm generating looks good and the base64 data decodes to what I want. I've also tried using text instead of base64. I have "Include body" enabled in CloudFront.
const querystring = require('querystring');
const AWS = require('aws-sdk');
const S3 = new AWS.S3();

exports.handler = async (event) => {
  const cfrequest = event.Records[0].cf.request;
  const queryString = querystring.parse(event.Records[0].cf.request.querystring);
  const jwtToken = queryString.token;

  if (cfrequest.uri.match(/\.m3u8?$/mi)) {
    const s3Response = await (new Promise((resolve, reject) => {
      S3.getObject({
        Bucket: 'bucket',
        Key: cfrequest.uri.substring(1)
      }, (err, data) => {
        if (err) {
          reject(err);
        } else {
          resolve(data);
        }
      });
    }));

    const manifestFile = s3Response.Body.toString('utf8');
    const newManifest = manifestFile.replace(/^((\S+)\.(m3u8|ts|vtt))$/gmi, (_, url) => `${url}?token=${jwtToken}`);
    const base64NewManifest = Buffer.from(newManifest, 'utf8').toString('base64');

    const tokenizedCfRequest = {
      ...cfrequest,
      body: {
        action: 'replace',
        data: base64NewManifest,
        encoding: 'base64'
      }
    };
    return tokenizedCfRequest;
  }
  return cfrequest;
}
If you want to generate your own response, you need to use a viewer request or origin request event and return a response like this:
const querystring = require('querystring');

exports.handler = async (event) => {
  const cfRequest = event.Records[0].cf.request;
  const queryString = querystring.parse(cfRequest.querystring);
  const jwtToken = queryString.token;

  if (cfRequest.uri.match(/\.m3u8?$/mi)) {
    // ... your code here ...
    const response = {
      status: 200, // only mandatory field
      body: base64NewManifest,
      bodyEncoding: 'base64',
    };
    return response;
  }

  // Return the original request if the URI doesn't match
  return cfRequest;
}
See also Generating HTTP Responses in Request Triggers.
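Worth noting: Lambda@Edge also caps the size of a response generated by a function (about 40 KB on viewer events and 1 MB on origin events), so for larger manifests the origin request event is the one with headroom.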