Protect Strapi uploads folder via S3 SignedUrl

Uploading files from Strapi to S3 works fine.
I am trying to secure the files by using signed URLs:
var params = {
  Bucket: process.env.AWS_BUCKET,
  Key: `${path}${file.hash}${file.ext}`,
  Expires: 3000
};
var secretUrl = '';
S3.getSignedUrl('getObject', params, function (err, url) {
  console.log('Signed URL: ' + url);
  secretUrl = url;
});
S3.upload(
  {
    Key: `${path}${file.hash}${file.ext}`,
    Body: Buffer.from(file.buffer, 'binary'),
    //ACL: 'public-read',
    ContentType: file.mime,
    ...customParams,
  },
  (err, data) => {
    if (err) {
      return reject(err);
    }
    // set the bucket file url
    //file.url = data.Location;
    file.url = secretUrl;
    console.log('File URL: ' + file.url);
    resolve();
  }
);
file.url (secretUrl) contains the correct URL, which I can use in the browser to retrieve the file.
But when viewing the file from the Strapi admin panel, no file or thumbnail is shown.
I figured out that Strapi adds a parameter to the file URL, e.g. ?2304.4005, which breaks the GET request to AWS. Where and how do I change that behaviour?
Help is appreciated.

Here is my solution to create a signed URL to secure your assets. The URL will be valid for a certain amount of time.
Create a collection type with a media field that you want to secure. In my example the collection type is called invoice and the media field is called document.
Create an S3 bucket.
Install and configure strapi-provider-upload-aws-s3 and the AWS SDK for JavaScript.
Customize the Strapi controller for your invoice endpoint (in this example I use the core controller findOne):
const { sanitizeEntity } = require('strapi-utils');
var S3 = require('aws-sdk/clients/s3');

module.exports = {
  async findOne(ctx) {
    const { id } = ctx.params;
    const entity = await strapi.services.invoice.findOne({ id });

    // key is the hashed name + file extension of your entity
    const key = entity.document.hash + entity.document.ext;

    // create signed url
    const s3 = new S3({
      endpoint: 's3.eu-central-1.amazonaws.com', // s3.region.amazonaws.com
      accessKeyId: '...', // your accessKeyId
      secretAccessKey: '...', // your secretAccessKey
      Bucket: '...', // your bucket name
      signatureVersion: 'v4',
      region: 'eu-central-1' // your region
    });

    var params = {
      Bucket: '', // your bucket name
      Key: key,
      Expires: 20 // expires in 20 seconds
    };

    var url = s3.getSignedUrl('getObject', params);
    entity.document.url = url; // overwrite the url with the signed url

    return sanitizeEntity(entity, { model: strapi.models.invoice });
  },
};

It seems that, even after overriding the controllers, the collection models' lifecycles, and strapi-plugin-content-manager to take the S3 signed URLs into account, one of the Strapi UI components appends a strange query parameter such as ?123.123 to the URL received from the backend. This results in the AWS error "There were headers present in the request which were not signed" when trying to view images from the CMS UI.
[Screenshot with the faulty component]
After digging through the code and node_modules used by Strapi, you will find the following in strapi-plugin-upload/admin/src/components/CardPreview/index.js:
return (
  <Wrapper>
    {isVideo ? (
      <VideoPreview src={url} previewUrl={previewUrl} hasIcon={hasIcon} />
    ) : (
      // Adding performance.now forces the browser not to cache the img
      // https://stackoverflow.com/questions/126772/how-to-force-a-web-browser-not-to-cache-images
      <Image src={`${url}${withFileCaching ? `?${cacheRef.current}` : ''}`} />
    )}
  </Wrapper>
);
};

CardPreview.defaultProps = {
  extension: null,
  hasError: false,
  hasIcon: false,
  previewUrl: null,
  url: null,
  type: '',
  withFileCaching: true,
};
The default for withFileCaching is true, so the value from const cacheRef = useRef(performance.now()); is appended as a query parameter to the URL to defeat browser caching.
Setting it to false, or using just <Image src={url} />, should remove the extra query parameter and allow S3 signed URL previews to work from the Strapi UI as well.
In practice this means following the docs at https://strapi.io/documentation/developer-docs/latest/development/plugin-customization.html to customize the strapi-plugin-upload module in your /extensions/strapi-plugin-upload/...
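For illustration, the override could look roughly like this. It is a minimal sketch, not the full component: the exact extension path is an assumption, and everything except the <Image> line is copied unchanged from the plugin's original CardPreview file.

// Hypothetical location: extensions (plugin customization) copy of
// strapi-plugin-upload/admin/src/components/CardPreview/index.js
// Only the <Image> line changes; the rest of the file stays as shipped.
return (
  <Wrapper>
    {isVideo ? (
      <VideoPreview src={url} previewUrl={previewUrl} hasIcon={hasIcon} />
    ) : (
      // Use the signed URL untouched so its query string still matches the signature
      <Image src={url} />
    )}
  </Wrapper>
);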

Related

Is it possible to upload a file over 5 GB into S3 via curl with a presigned url? [duplicate]

Is there a way to do a multipart upload via the browser using a generated presigned URL?
Angular - Multipart Aws Pre-signed URL
Example
https://multipart-aws-presigned.stackblitz.io/
https://stackblitz.com/edit/multipart-aws-presigned?file=src/app/app.component.html
Download Backend:
https://www.dropbox.com/s/9tm8w3ujaqbo017/serverless-multipart-aws-presigned.tar.gz?dl=0
To upload large files into an S3 bucket using pre-signed URLs you need to use multipart upload: the file is split into many parts, which can then be uploaded in parallel.
Below is a basic example of the backend and frontend.
Backend (Serverless, TypeScript)
import { APIGatewayProxyHandler } from 'aws-lambda';
import * as AWS from 'aws-sdk';

const AWSData = {
  accessKeyId: 'Access Key',
  secretAccessKey: 'Secret Access Key'
};
There are three endpoints:
Endpoint 1: /start-upload
Asks S3 to start the multipart upload; the response is an UploadId associated with each part that will be uploaded.
export const start: APIGatewayProxyHandler = async (event, _context) => {
  const params = {
    Bucket: event.queryStringParameters.bucket, /* Bucket name */
    Key: event.queryStringParameters.fileName /* File name */
  };
  const s3 = new AWS.S3(AWSData);
  const res = await s3.createMultipartUpload(params).promise();
  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
    },
    body: JSON.stringify({
      data: {
        uploadId: res.UploadId
      }
    })
  };
}
Endpoint 2: /get-upload-url
Creates a pre-signed URL for each part into which the file was split.
export const uploadUrl: APIGatewayProxyHandler = async (event, _context) => {
  let params = {
    Bucket: event.queryStringParameters.bucket, /* Bucket name */
    Key: event.queryStringParameters.fileName, /* File name */
    PartNumber: event.queryStringParameters.partNumber, /* Part to create the pre-signed url for */
    UploadId: event.queryStringParameters.uploadId /* UploadId from Endpoint 1 response */
  };
  const s3 = new AWS.S3(AWSData);
  const res = await s3.getSignedUrl('uploadPart', params);
  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
    },
    body: JSON.stringify(res)
  };
}
Endpoint 3: /complete-upload
After uploading all the parts of the file, you need to tell S3 they have all been uploaded so that it assembles the object correctly.
export const completeUpload: APIGatewayProxyHandler = async (event, _context) => {
  // Parse the post body
  const bodyData = JSON.parse(event.body);
  const s3 = new AWS.S3(AWSData);
  const params: any = {
    Bucket: bodyData.bucket, /* Bucket name */
    Key: bodyData.fileName, /* File name */
    MultipartUpload: {
      Parts: bodyData.parts /* Parts uploaded */
    },
    UploadId: bodyData.uploadId /* UploadId from Endpoint 1 response */
  };
  const data = await s3.completeMultipartUpload(params).promise();
  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
      // 'Access-Control-Allow-Methods': 'OPTIONS,POST',
      // 'Access-Control-Allow-Headers': 'Content-Type',
    },
    body: JSON.stringify(data)
  };
}
Frontend (Angular 9)
The file is divided into 10 MB parts.
With the file in hand, the multipart upload is started via Endpoint 1.
With the UploadId you split the file into several 10 MB parts and, for each one, get a pre-signed upload URL using Endpoint 2.
A PUT is made with each part, converted to a blob, to the pre-signed URL obtained from Endpoint 2.
When every part has finished uploading, a final request is made to Endpoint 3.
In the example, all of this happens in the function uploadMultipartFile; a rough sketch of the flow is shown below.
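For orientation, here is a minimal, hedged sketch of that flow in plain JavaScript. It is not the actual uploadMultipartFile from the StackBlitz example; the API_BASE value is a placeholder, and the endpoint paths simply mirror the three serverless handlers above.

// Minimal sketch of a browser-side multipart upload against the three endpoints above.
const API_BASE = 'https://example.com/dev'; // placeholder for the deployed API
const PART_SIZE = 10 * 1024 * 1024;         // 10 MB parts

async function uploadMultipartFile(file, bucket) {
  // 1. Start the multipart upload (Endpoint 1) and read the UploadId
  const startRes = await fetch(
    `${API_BASE}/start-upload?bucket=${bucket}&fileName=${encodeURIComponent(file.name)}`
  );
  const { data: { uploadId } } = await startRes.json();

  // 2. Upload each 10 MB part to its own pre-signed URL (Endpoint 2)
  const parts = [];
  const partCount = Math.ceil(file.size / PART_SIZE);
  for (let partNumber = 1; partNumber <= partCount; partNumber++) {
    const blob = file.slice((partNumber - 1) * PART_SIZE, partNumber * PART_SIZE);
    const urlRes = await fetch(
      `${API_BASE}/get-upload-url?bucket=${bucket}&fileName=${encodeURIComponent(file.name)}` +
      `&partNumber=${partNumber}&uploadId=${encodeURIComponent(uploadId)}`
    );
    const signedUrl = await urlRes.json(); // Endpoint 2 returns the URL string
    const putRes = await fetch(signedUrl, { method: 'PUT', body: blob });
    // S3 returns an ETag per part; it is needed to complete the upload
    parts.push({ ETag: putRes.headers.get('ETag'), PartNumber: partNumber });
  }

  // 3. Tell S3 the upload is complete so it assembles the object (Endpoint 3)
  await fetch(`${API_BASE}/complete-upload`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ bucket, fileName: file.name, uploadId, parts })
  });
}

Note that reading the ETag response header in the browser requires the bucket's CORS configuration to expose it, and the parts could also be uploaded in parallel (e.g. with Promise.all) instead of sequentially as in this sketch.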
I managed to achieve this in a serverless architecture by creating a Canonical Request for each part upload using Signature Version 4. You will find the document here: AWS Multipart Upload Via Presign Url
From the AWS documentation:
"For request signing, multipart upload is just a series of regular requests: you initiate the multipart upload, send one or more requests to upload parts, and finally complete the multipart upload. You sign each request individually; there is nothing special about signing a multipart upload request."
So I think you will have to generate a presigned URL for each part of the multipart upload :(
What is your use case? Can't you execute a script from your server, and give S3 access to that server?

add aws signature to the postman script using pm.sendRequest

I would like to use a Postman pre-request script to refresh my app secret from an API protected by AWS signature. I am able to make a basic authentication request like this; however, I need AWS signature authentication:
var url = "https://some.endpoint";
var auth = {
  type: 'basic',
  basic: [
    { key: "username", value: "postman" },
    { key: "password", value: "secrets" }
  ]
};
var request = {
  url: url,
  method: "GET",
  auth: auth
};
pm.sendRequest(request, function (err, res) {
  const json = res.json(); // Get JSON value from the response body
  console.log(json);
});
Just create a normal Postman request that works properly, then copy that request into a variable by adding the line below in the Tests script:
pm.environment.set("awsrequest", pm.request)
Now you can use the awsrequest variable in pm.sendRequest:
pm.sendRequest(pm.environment.get("awsrequest"))
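Alternatively, pm.sendRequest accepts a full request definition, so an AWS Signature auth block analogous to the basic-auth example above may work. This is a hedged sketch: it assumes pm.sendRequest supports the awsv4 auth type with the same key names used in the Postman collection format (accessKey, secretKey, region, service), and the credential environment variables are placeholders.

// Sketch: same shape as the basic-auth example, but with AWS Signature v4 auth.
// Assumes pm.sendRequest accepts the collection-format 'awsv4' auth type.
var request = {
  url: "https://some.endpoint",
  method: "GET",
  auth: {
    type: 'awsv4',
    awsv4: [
      { key: "accessKey", value: pm.environment.get("AWS_ACCESS_KEY_ID") },
      { key: "secretKey", value: pm.environment.get("AWS_SECRET_ACCESS_KEY") },
      { key: "region", value: "eu-central-1" },   // your region
      { key: "service", value: "execute-api" }    // e.g. API Gateway
    ]
  }
};
pm.sendRequest(request, function (err, res) {
  console.log(err ? err : res.json());
});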

Unsure why video uploaded to S3 is 0 bits

After a few days of trying to upload a video to AWS, I have (almost) been able to do it successfully. The main problem: when I head to my S3 bucket, the file has a size of 0 B. I am hoping to find out what I might be doing wrong that is causing this.
On the backend I get a presignedUrl such as:
const s3 = new AWS.S3({
  accessKeyId: ACCESSKEY_ID,
  secretAccessKey: SECRETKEY
});

const s3Params = {
  Bucket: BUCKET_NAME,
  Key: uuidv4() + '.mov',
  Expires: 60 * 10,
  ContentType: 'mov',
  ACL: 'public-read'
};

let url = await s3.getSignedUrl('putObject', s3Params);
return { url };
Once I have the URL for the upload, on the frontend I send the file like this:
const uploadFileToS3 = async (uri) => {
  const type = video.uri.split('.').pop();
  const respo = await fetch(uri, {
    method: 'PUT',
    body: {
      url: video.uri,
      type,
      name: 'testing'
    },
    headers: {
      'Content-Type': type,
      'x-amz-acl': 'public-read'
    }
  });
  const some = await JSON.stringify(respo);
};
It does seem to save something, since I see it in the bucket, but I am unable to download or view it; it just shows an empty page, and it feels like nothing (the video) was actually uploaded to S3. Any pointers to where I might be going wrong in uploading a video to S3?
Thank you for all the help.
You cannot pass a URL as the body when you upload a file. You need two fetches:
the first one downloads the video from video.uri;
the second one uploads the video to S3, with body: blob.
To download a file as a blob, use response.blob(). Then you can use that blob to upload the file (here is an example; a minimal sketch also follows below).
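As a rough illustration, here is a minimal sketch of the two-fetch approach. It reuses video.uri from the question, signedUrl is assumed to be the URL generated by the backend above, and error handling is omitted.

// Sketch: first download the video as a blob, then PUT the raw bytes to S3.
const uploadFileToS3 = async (signedUrl) => {
  // 1. Download the video from its uri as a blob
  const videoResponse = await fetch(video.uri);
  const blob = await videoResponse.blob();

  // 2. Upload the blob to the pre-signed URL. The headers must match what was
  //    signed on the backend (ContentType: 'mov' and ACL: 'public-read' above).
  const uploadResponse = await fetch(signedUrl, {
    method: 'PUT',
    body: blob,
    headers: {
      'Content-Type': 'mov',
      'x-amz-acl': 'public-read'
    }
  });
  return uploadResponse.ok;
};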

AWS S3 presigned URL with metadata

I am trying to create a presigned URL using boto3, as below:
s3 = boto3.client(
    's3',
    aws_access_key_id=settings.AWS_ACCESS_KEY,
    aws_secret_access_key=settings.AWS_ACCESS_SECRET,
    region_name=settings.AWS_SES_REGION_NAME,
    config=Config(signature_version='s3v4')
)

metadata = {
    'test': 'testing'
}

presigned_url = s3.generate_presigned_url(
    ClientMethod='put_object',
    Params={
        'Bucket': settings.AWS_S3_BUCKET_NAME,
        'Key': str(new_file.uuid),
        'ContentDisposition': 'inline',
        'Metadata': metadata
    })
So, after the URL is generated and I try to upload to it using Ajax, it gives 403 Forbidden. If I remove Metadata and ContentDisposition while creating the URL, the upload succeeds.
Boto3 version: 1.9.33
Below is the doc that I am referring to:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.generate_presigned_url
Yes, I got it working.
Basically, after the signed URL is generated, I need to send all the metadata and the Content-Disposition in the request headers along with the signed URL.
For example: if my metadata dictionary is {'test':'test'}, then I need to send this metadata in the header x-amz-meta-test along with its value, plus the Content-Disposition, to AWS. A short sketch follows below.
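For illustration only, a hedged sketch of that upload request: presignedUrl and file are placeholders, and the header values have to match exactly what was passed to generate_presigned_url above.

// Sketch: PUT to the boto3-generated URL, repeating the signed metadata and
// Content-Disposition as request headers so the signature validates.
await fetch(presignedUrl, {
  method: 'PUT',
  headers: {
    'Content-Disposition': 'inline',   // matches ContentDisposition in Params
    'x-amz-meta-test': 'testing'       // matches Metadata={'test': 'testing'}
  },
  body: file
});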
I was using createPresignedPost, and I got this working by adding the metadata I wanted to the Fields param, like so:
const params = {
  Expires: 60,
  Bucket: process.env.fileStorageName,
  Conditions: [['content-length-range', 1, 1000000000]], // 1GB
  Fields: {
    'Content-Type': 'application/pdf',
    key: strippedName,
    'x-amz-meta-pdf-type': pdfType,
    'x-amz-meta-pdf-id': pdfId,
  },
};
As long as you pass the data you want in the file's metadata to the lambda that you're using to create the presigned POST response, the above will work; a usage sketch is shown below. Hopefully this will help someone else...
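For completeness, a hedged sketch of how a browser might then use the presigned POST data (assuming the { url, fields } shape returned by s3.createPresignedPost; presignedPostData and file are placeholders):

// Sketch: the backend returns the result of s3.createPresignedPost; the browser
// posts the signed fields plus the file as multipart/form-data.
const { url, fields } = presignedPostData; // received from the lambda above

const formData = new FormData();
Object.entries(fields).forEach(([key, value]) => formData.append(key, value));
formData.append('file', file); // the file entry must come after the signed fields

await fetch(url, { method: 'POST', body: formData });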
I found that the metadata object needs to be key/value pairs, with the value as a string (the example is a Node.js lambda):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const { key, type, metadata } = JSON.parse(event.body);
  // example
  /* metadata: {
       foo: 'bar',
       x: '123',
       y: '22.4213213'
     } */
  return await s3.getSignedUrlPromise('putObject', {
    Bucket: 'the-product-uploads',
    Key: key,
    Expires: 300,
    ContentType: type,
    Metadata: metadata
  });
};
Then in your request headers you need to add each k/v explicitly:
await fetch(signedUrl, {
  method: "PUT",
  headers: {
    "Content-Type": fileData.type,
    "x-amz-meta-foo": "bar",
    "x-amz-meta-x": x.toString(),
    "x-amz-meta-y": y.toString()
  },
  body: fileBuffer
});
In boto, you should provide the Metadata parameter, passing the dict of your key/value metadata. You don't need to prefix the keys with x-amz-meta-, as boto apparently does that for you now.
Also, I didn't have to pass the metadata again when uploading to the pre-signed URL:
params = {'Bucket': bucket_name,
          'Key': object_key,
          'Metadata': {'test-key': value}
          }
response = s3_client.generate_presigned_url('put_object',
                                             Params=params,
                                             ExpiresIn=3600)
I'm using similar code in a Lambda function behind an API.

AWS Cognito js: getCurrentUser() returns null

Building a simple application using the examples on their GitHub page. I can log into my application using Cognito. What I cannot do is log out, because no matter what I try I can't get hold of the user object. I've dorked around with various other calls to no avail (found here on their API page). The only other post on SO I found isn't applicable because I'm not using Federated Identity. The code I'm using is pretty much verbatim what's on the GitHub page, but I will post it here for convenience.
Login code:
var userName = $('#user_name_login').val();
var userPassword = $('#user_password_login').val();
var userData = { Username: userName, Pool: userPool };
var cognitoUser = new AWSCognito.CognitoIdentityServiceProvider.CognitoUser(userData);
var authenticationData = { Username: userName, Password: userPassword };
var authenticationDetails = new AWSCognito.CognitoIdentityServiceProvider.AuthenticationDetails(authenticationData);

cognitoUser.authenticateUser(authenticationDetails, {
  onSuccess: function (result) {
    // now that we've gotten our identity credentials, we're going to check in with the federation so we can
    // avail ourselves of other amazon services
    //
    // critical that you do this in this manner -- see https://github.com/aws/amazon-cognito-identity-js/issues/162
    // for details
    var loginProvider = {};
    loginProvider[cognitoCredentialKey] = result.getIdToken().getJwtToken();
    AWS.config.credentials = new AWS.CognitoIdentityCredentials({
      IdentityPoolId: identityPoolId,
      Logins: loginProvider,
    });

    // //AWS.config.credentials = AWSCognito.config.credentials;
    // AWSCognito.config.credentials = AWS.config.credentials;

    // //call refresh method in order to authenticate user and get new temp credentials
    // AWS.config.credentials.refresh((error) => {
    //   if (error) {
    //     alert(error);
    //   } else {
    //     console.log('Successfully logged in!');
    //   }
    // });

    // this is the landing page once a user completes the authentication process. we're getting a
    // temporary URL that is associated with the credentials we've created so we can access the
    // restricted area of the s3 bucket (where the website is, bruah).
    var s3 = new AWS.S3();
    var params = { Bucket: '********.com', Key: 'restricted/pages/user_logged_in_test.html' };
    s3.getSignedUrl('getObject', params, function (err, url) {
      if (err) {
        alert(err);
        console.log(err);
      } else {
        console.log("The URL is", url);
        window.location = url;
      }
    });
  },
  mfaRequired: function (session) {
    new MFAConfirmation(cognitoUser, 'login');
  },
  onFailure: function (err) {
    alert("err: " + err);
  },
});
I'm attempting to logout by executing:
userPool.getCurrentUser().signOut();
Note that the userPool and such are defined in another file and initialized thusly:
var poolData = {
  UserPoolId: '*****',
  ClientId: '*****'
};
var userPool = new AWSCognito.CognitoIdentityServiceProvider.CognitoUserPool(poolData);
So how do I sign my users out of the application?
Closing this, as the issue (as stated here) turned out to be a red herring. If you're doing what I was trying to do above, using Cognito to generate a signed URL to access an HTML file located in a restricted 'folder' in the bucket, and you want to be able to log out from that new window location, make sure the signed URL is on the same domain as your landing page.
For example, if you land at foo.com because you've got an A or CNAME DNS record set up so that your users don't have to hit a doofy CloudFront or S3 generated URL in order to get to your website, you need to make sure you ALSO generate a signed URL that has the same domain name. Otherwise you won't be able to access the bucket. Moreover, you won't be able to access your user object, because the session object is keyed to a different domain name than the one you're currently at. A sketch of how this might look is shown below.
See this for information on how to specify what the domain of the signed URL should be.
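As a rough illustration, a hedged sketch of signing against the custom domain: foo.com is a hypothetical domain already pointing at the bucket, and this relies on the AWS SDK for JavaScript v2 endpoint and s3BucketEndpoint client options.

// Sketch: sign URLs against the custom domain instead of the default S3 hostname,
// so the signed URL stays on the same domain as the landing page.
var s3 = new AWS.S3({
  endpoint: 'https://foo.com',   // hypothetical custom domain aliased to the bucket
  s3BucketEndpoint: true         // treat the endpoint as addressing the bucket itself
});
var params = { Bucket: 'foo.com', Key: 'restricted/pages/user_logged_in_test.html' };
s3.getSignedUrl('getObject', params, function (err, url) {
  if (!err) {
    window.location = url; // e.g. https://foo.com/restricted/pages/...?X-Amz-Signature=...
  }
});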
And also note that there's a lot of trouble you can get into if you are using a third-party domain registrar. I just burned two weeks unborking myself because of that :-/