AWS S3 presigned URL with metadata - django

I am trying to create a presigned URL using boto3, as below:
import boto3
from botocore.client import Config
from django.conf import settings

s3 = boto3.client(
    's3',
    aws_access_key_id=settings.AWS_ACCESS_KEY,
    aws_secret_access_key=settings.AWS_ACCESS_SECRET,
    region_name=settings.AWS_SES_REGION_NAME,
    config=Config(signature_version='s3v4')
)
metadata = {
    'test': 'testing'
}
presigned_url = s3.generate_presigned_url(
    ClientMethod='put_object',
    Params={
        'Bucket': settings.AWS_S3_BUCKET_NAME,
        'Key': str(new_file.uuid),
        'ContentDisposition': 'inline',
        'Metadata': metadata
    })
After the URL is generated, uploading to S3 via Ajax returns 403 Forbidden. If I remove Metadata and ContentDisposition when creating the URL, the upload succeeds.
Boto3 version: 1.9.33
Below is the doc that I am referring to:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.generate_presigned_url

Yes, I got it working.
Basically, after the signed URL is generated, I need to send all the metadata and the Content-Disposition in the request headers along with the signed URL.
For example: if my metadata dictionary is {'test':'test'}, then I need to send the header x-amz-meta-test with its value, plus the Content-Disposition, to AWS.
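A minimal browser-side sketch of that upload (presignedUrl and file are placeholders for whatever your Django endpoint and file input provide):
// PUT the file to the presigned URL, repeating every header that was signed into it:
// each Metadata entry becomes an x-amz-meta-* header, plus the Content-Disposition.
await fetch(presignedUrl, {
  method: 'PUT',
  headers: {
    'Content-Disposition': 'inline',
    'x-amz-meta-test': 'test'
  },
  body: file
});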

I was using createPresignedPost, and I got this working by adding the metadata I wanted to the Fields param, like so:
const params = {
  Expires: 60,
  Bucket: process.env.fileStorageName,
  Conditions: [['content-length-range', 1, 1000000000]], // 1GB
  Fields: {
    'Content-Type': 'application/pdf',
    key: strippedName,
    'x-amz-meta-pdf-type': pdfType,
    'x-amz-meta-pdf-id': pdfId,
  },
};
As long as you pass the data you want in the file's metadata to the lambda that creates the presigned POST response, the above will work, as the upload sketch below shows. Hopefully this helps someone else...
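For completeness, the browser then consumes the createPresignedPost response roughly like this (a sketch; data stands for the { url, fields } object your lambda returns):
const formData = new FormData();
// every field returned by createPresignedPost must be appended before the file
Object.entries(data.fields).forEach(([key, value]) => formData.append(key, value));
formData.append('file', file); // the file must be the last field
// let the browser set the multipart/form-data Content-Type and boundary itself
await fetch(data.url, { method: 'POST', body: formData });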

I found the metadata object needs to be key/value pairs, with the values as strings (the example is a Node.js lambda):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const { key, type, metadata } = JSON.parse(event.body);
  // example
  /* metadata: {
       foo: 'bar',
       x: '123',
       y: '22.4213213'
     } */
  return await s3.getSignedUrlPromise('putObject', {
    Bucket: 'the-product-uploads',
    Key: key,
    Expires: 300,
    ContentType: type,
    Metadata: metadata
  });
};
Then in your request headers you need to add each k/v explicitly:
await fetch(signedUrl, {
  method: "PUT",
  headers: {
    "Content-Type": fileData.type,
    "x-amz-meta-foo": "bar",
    "x-amz-meta-x": x.toString(),
    "x-amz-meta-y": y.toString()
  },
  body: fileBuffer
});

In boto3, you should provide the Metadata parameter, passing a dict of your key/value metadata. You don't need to prefix the keys with x-amz-meta-, as boto3 apparently does that for you now.
Also, I didn't have to pass the metadata again when uploading to the pre-signed URL:
params = {'Bucket': bucket_name,
          'Key': object_key,
          'Metadata': {'test-key': value}
          }
response = s3_client.generate_presigned_url('put_object',
                                            Params=params,
                                            ExpiresIn=3600)
I'm using similar code in a Lambda function behind an API.

Is there a way to do a multipart upload via the browser using a generated presigned URL?
Angular - Multipart AWS Pre-signed URL
Example:
https://multipart-aws-presigned.stackblitz.io/
https://stackblitz.com/edit/multipart-aws-presigned?file=src/app/app.component.html
Download backend:
https://www.dropbox.com/s/9tm8w3ujaqbo017/serverless-multipart-aws-presigned.tar.gz?dl=0
To upload large files into an S3 bucket using pre-signed URLs, it is necessary to use multipart upload: the file is split into many parts, which also allows them to be uploaded in parallel.
Below is a basic example of the backend and frontend.
Backend (Serverless TypeScript)
const AWSData = {
  accessKeyId: 'Access Key',
  secretAccessKey: 'Secret Access Key'
};
There are 3 endpoints:
Endpoint 1: /start-upload
Ask S3 to start the multipart upload; the response is an UploadId associated with each part that will be uploaded.
import { APIGatewayProxyHandler } from 'aws-lambda';
import * as AWS from 'aws-sdk';

export const start: APIGatewayProxyHandler = async (event, _context) => {
  const params = {
    Bucket: event.queryStringParameters.bucket, /* Bucket name */
    Key: event.queryStringParameters.fileName /* File name */
  };
  const s3 = new AWS.S3(AWSData);
  const res = await s3.createMultipartUpload(params).promise();
  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
    },
    body: JSON.stringify({
      data: {
        uploadId: res.UploadId
      }
    })
  };
}
Endpoint 2: /get-upload-url
Create a pre-signed URL for each part into which the file was split.
export const uploadUrl: APIGatewayProxyHandler = async (event, _context) => {
  const params = {
    Bucket: event.queryStringParameters.bucket, /* Bucket name */
    Key: event.queryStringParameters.fileName, /* File name */
    PartNumber: Number(event.queryStringParameters.partNumber), /* Part to create pre-signed url for */
    UploadId: event.queryStringParameters.uploadId /* UploadId from Endpoint 1 response */
  };
  const s3 = new AWS.S3(AWSData);
  const res = s3.getSignedUrl('uploadPart', params); // synchronous when called without a callback
  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
    },
    body: JSON.stringify(res)
  };
}
Endpoint 3: /complete-upload
After uploading all the parts of the file, it is necessary to tell S3 that they have all been uploaded; this makes S3 assemble the object correctly.
export const completeUpload: APIGatewayProxyHandler = async (event, _context) => {
  // Parse the post body
  const bodyData = JSON.parse(event.body);
  const s3 = new AWS.S3(AWSData);
  const params: any = {
    Bucket: bodyData.bucket, /* Bucket name */
    Key: bodyData.fileName, /* File name */
    MultipartUpload: {
      Parts: bodyData.parts /* Parts uploaded */
    },
    UploadId: bodyData.uploadId /* UploadId from Endpoint 1 response */
  };
  const data = await s3.completeMultipartUpload(params).promise();
  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
      // 'Access-Control-Allow-Methods': 'OPTIONS,POST',
      // 'Access-Control-Allow-Headers': 'Content-Type',
    },
    body: JSON.stringify(data)
  };
}
Frontend (Angular 9)
The file is divided into 10MB parts.
Once you have the file, the multipart upload is started via Endpoint 1.
With the UploadId, you divide the file into several 10MB parts and, for each one, get a pre-signed upload URL using Endpoint 2.
A PUT is made with each part, converted to a blob, to the pre-signed URL obtained from Endpoint 2.
When you finish uploading every part, you make a final request to Endpoint 3.
In the example, all of this is done by the uploadMultipartFile function; a simplified sketch follows.
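Here is that flow with plain fetch instead of Angular's HttpClient (a sketch; API_URL, bucket, and the response shapes are assumptions based on the three endpoints above):
const FILE_CHUNK_SIZE = 10 * 1024 * 1024; // 10MB parts

async function uploadMultipartFile(file, bucket) {
  // Endpoint 1: start the multipart upload and get the UploadId
  const startRes = await fetch(`${API_URL}/start-upload?bucket=${bucket}&fileName=${file.name}`);
  const { uploadId } = (await startRes.json()).data;

  const partCount = Math.ceil(file.size / FILE_CHUNK_SIZE);
  const parts = [];
  for (let i = 0; i < partCount; i++) {
    // Endpoint 2: get a pre-signed URL for this part (parts are numbered from 1)
    const urlRes = await fetch(`${API_URL}/get-upload-url?bucket=${bucket}&fileName=${file.name}&partNumber=${i + 1}&uploadId=${uploadId}`);
    const presignedUrl = await urlRes.json();

    // PUT the 10MB blob slice; S3 returns the part's ETag in a response header
    const blob = file.slice(i * FILE_CHUNK_SIZE, (i + 1) * FILE_CHUNK_SIZE);
    const uploadRes = await fetch(presignedUrl, { method: 'PUT', body: blob });
    parts.push({ ETag: uploadRes.headers.get('ETag'), PartNumber: i + 1 });
  }

  // Endpoint 3: ask S3 to assemble the parts into the final object
  await fetch(`${API_URL}/complete-upload`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ bucket, fileName: file.name, parts, uploadId })
  });
}
Note that reading the ETag from the browser requires the bucket's CORS configuration to expose it (ExposeHeaders: ETag).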
I managed to achieve this in a serverless architecture by creating a Canonical Request for each part upload using Signature Version 4. You will find the document here: AWS Multipart Upload Via Presign Url.
From the AWS documentation:
"For request signing, multipart upload is just a series of regular requests: you initiate the multipart upload, send one or more requests to upload parts, and finally complete the multipart upload. You sign each request individually; there is nothing special about signing a multipart upload request."
So I think you have to generate a presigned URL for each part of the multipart upload :(
What is your use case? Can't you execute a script from your server, and give S3 access to that server?

access denied error on `getPresignedPost` method on AWS s3

In my REST API, I call the createPresignedPost method, which gives me back, as usual, a URL and other fields. As the returned URL wasn't working, I rewrote it as http://${url.split('/')[3]}.s3.amazonaws.com. After appending my file, I made a POST request with that URL and everything worked fine. But I opened it again after a month and now it isn't working.
I have searched a lot to resolve this issue: created a new ACCESS KEY ID and ACCESS KEY SECRET, modified policies, and tried new buckets. Also, I have not found any simple way of debugging these errors in Amazon S3. Can anyone elaborate on what exactly the problem is here? This Access Denied message is too vague to understand.
The XML error message is:
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>SYBG5Q84R219PN35</RequestId>
  <HostId>CVNuh2YSb6gYqYRWPjlONc7hi8wLKfhCQC09nsmwWJz9oqUJ3mF1Ji5U39PO4Tm0xqrGxb7ql1w=</HostId>
</Error>
Frontend code:
const res = await Axios.get(`RESTAPI`);
const { url, fields } = res.data;
const newUrl = `http://${url.split('/')[3]}.s3.amazonaws.com`;
const formData = new FormData();
const formArray = Object.entries({ ...fields, file: blob });
formArray.forEach(([key, value]) => formData.append(key, value as any));
// Uploading file to AWS
await Axios({
  method: 'POST',
  url: newUrl,
  headers: {
    'Content-Type': 'multipart/form-data',
  },
  data: formData,
  responseType: 'text',
});
Backend code:
aws.config.update({
  accessKeyId: process.env.AWS_ACCESS_KEY_VERCEL,
  secretAccessKey: process.env.AWS_SECRET_KEY_VERCEL,
  region: process.env.AWS_REGION_VERCEL,
  signatureVersion: 'v4',
});
const s3 = new aws.S3();
const fileName = `fileName`;
const params = {
  Bucket: process.env.AWS_BUCKET_NAME,
  Key: fileName,
  Expires: 10000000, // seconds
  Conditions: [
    ['content-length-range', 0, 1048576], // up to 1 MB
  ],
};
s3.createPresignedPost(params, (err, data) => { // callback style, so no await
  if (data) {
    return res.status(200).json(data);
  }
  if (err) {
    return res.status(500).json({ message: 'Upload Failed!', type: 'subtitle' });
  }
});

Protect Strapi uploads folder via S3 SignedUrl

Uploading files from Strapi to S3 works fine.
I am trying to secure the files by using signed URLs:
var params = { Bucket: process.env.AWS_BUCKET, Key: `${path}${file.hash}${file.ext}`, Expires: 3000 };
var secretUrl = '';
S3.getSignedUrl('getObject', params, function (err, url) {
  console.log('Signed URL: ' + url);
  secretUrl = url;
});
S3.upload(
  {
    Key: `${path}${file.hash}${file.ext}`,
    Body: Buffer.from(file.buffer, 'binary'),
    //ACL: 'public-read',
    ContentType: file.mime,
    ...customParams,
  },
  (err, data) => {
    if (err) {
      return reject(err);
    }
    // set the bucket file url
    //file.url = data.Location;
    file.url = secretUrl;
    console.log('File URL: ' + file.url);
    resolve();
  }
);
file.url (secretUrl) contains the correct URL, which I can use in a browser to retrieve the file.
But when viewing the file from the Strapi admin panel, neither the file nor a thumbnail is shown.
I figured out that Strapi adds a query parameter to the file URL, e.g. ?2304.4005, which breaks the GET request for the file from AWS. Where and how do I change that behaviour?
Help is appreciated.
Here is my solution to create a signed URL to secure your assets. The URL will be valid for a certain amount of time.
Create a collection type with a media field, which you want to secure. In my example the collection type is called invoice and the media field is called document.
Create an S3 bucket.
Install and configure strapi-provider-upload-aws-s3 and the AWS SDK for JavaScript.
Customize the Strapi controller for your invoice endpoint (in this example I use the core controller findOne):
const { sanitizeEntity } = require('strapi-utils');
var S3 = require('aws-sdk/clients/s3');

module.exports = {
  async findOne(ctx) {
    const { id } = ctx.params;
    const entity = await strapi.services.invoice.findOne({ id });
    // key is the hashed name + file extension of your entity
    const key = entity.document.hash + entity.document.ext;
    // create signed url
    const s3 = new S3({
      endpoint: 's3.eu-central-1.amazonaws.com', // s3.region.amazonaws.com
      accessKeyId: '...', // your accessKeyId
      secretAccessKey: '...', // your secretAccessKey
      signatureVersion: 'v4',
      region: 'eu-central-1' // your region
    });
    var params = {
      Bucket: '...', // your bucket name
      Key: key,
      Expires: 20 // expires in 20 seconds
    };
    var url = s3.getSignedUrl('getObject', params);
    entity.document.url = url; // overwrite the url with the signed url
    return sanitizeEntity(entity, { model: strapi.models.invoice });
  },
};
It seems that, even after overriding the controllers and lifecycles of the collection models and strapi-plugin-content-manager to take the S3 signed URLs into account, one of the Strapi UI components adds a strange hook/ref like ?123.123 to the URL received from the backend. This results in the error There were headers present in the request which were not signed from AWS when trying to view images from the CMS UI.
[Screenshot with the faulty component]
After digging into the code and node_modules used by Strapi, you will find the following in strapi-plugin-upload/admin/src/components/CardPreview/index.js:
return (
  <Wrapper>
    {isVideo ? (
      <VideoPreview src={url} previewUrl={previewUrl} hasIcon={hasIcon} />
    ) : (
      // Adding performance.now forces the browser not to cache the img
      // https://stackoverflow.com/questions/126772/how-to-force-a-web-browser-not-to-cache-images
      <Image src={`${url}${withFileCaching ? `?${cacheRef.current}` : ''}`} />
    )}
  </Wrapper>
);
};

CardPreview.defaultProps = {
  extension: null,
  hasError: false,
  hasIcon: false,
  previewUrl: null,
  url: null,
  type: '',
  withFileCaching: true,
};
The default for withFileCaching is true, which appends the cacheRef = useRef(performance.now()) value as a query param to the URL to avoid browser caching.
Setting it to false, or leaving just <Image src={url} />, should fix the extra query param and let you use S3 signed URL previews from the Strapi UI as well.
In practice, this means using the docs at https://strapi.io/documentation/developer-docs/latest/development/plugin-customization.html to customize the strapi-plugin-upload module in your /extensions/strapi-plugin-upload/...
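For example, the patched render line in your extensions copy of CardPreview/index.js would be roughly (a sketch):
// render the image without the cache-busting query param,
// so the signed URL's query string stays exactly as S3 signed it
<Image src={url} />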

Unsure why video uploaded to S3 is 0 bits

After a few days of trying to upload a video to AWS, I have almost succeeded. The main problem is that when I head to my S3 bucket, the file has a size of 0 B. I am hoping to find out what I might be doing wrong.
On the backend I create a presigned URL like this:
const s3 = new AWS.S3({
  accessKeyId: ACCESSKEY_ID,
  secretAccessKey: SECRETKEY
});
const s3Params = {
  Bucket: BUCKET_NAME,
  Key: uuidv4() + '.mov',
  Expires: 60 * 10,
  ContentType: 'mov',
  ACL: 'public-read'
};
let url = await s3.getSignedUrl('putObject', s3Params);
return { url };
Once I have the URL for the upload, on the frontend I send the file like this:
const uploadFileToS3 = async (uri) => {
  const type = video.uri.split('.').pop();
  const respo = await fetch(uri, {
    method: 'PUT',
    body: {
      url: video.uri,
      type,
      name: 'testing'
    },
    headers: {
      'Content-Type': type,
      'x-amz-acl': 'public-read'
    }
  });
  const some = await JSON.stringify(respo);
};
It does seem to be saving something, since I can see it in the bucket, but I am unable to download or view it; it just shows an empty page, and it feels like the video itself was never uploaded to S3. Any pointers to where I might be going wrong in uploading a video to S3?
Thank you for all the help.
You cannot specify a URL in the body when you upload a file. You need two fetches:
the first one downloads the video from video.uri;
the second uploads the video to S3, with body: blob.
To download the file as a blob, use response.blob(). Then you can use that blob to upload the file (see the sketch below).
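A minimal sketch of those two fetches (uri is your local video.uri and uploadUrl is the presigned URL from the backend; both are placeholders):
// 1) download the video and turn the response into a Blob
const fileRes = await fetch(uri);
const blob = await fileRes.blob();

// 2) upload the Blob itself as the request body to the presigned URL
await fetch(uploadUrl, {
  method: 'PUT',
  headers: {
    'Content-Type': 'mov', // must match the ContentType the URL was signed with
    'x-amz-acl': 'public-read' // must match the signed ACL
  },
  body: blob
});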

Sharing AWS S3 files publicly

I am working on a project where I generate shareable links for users which expire after some time; links can also be regenerated.
I have tried all the solutions on the web for generating signed URLs, but none of them is working for me.
I have used the Node aws-sdk's getSignedUrl method and other third-party packages, but none works.
When I try to access a file via its S3 link, it returns an error.
Here is the code I am using:
const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3({
  accessKeyId: ID,
  signatureVersion: 'v4',
  secretAccessKey: SECRET,
  region: region
});

const uploadFile = (filePath) => {
  // Read content from the file
  const fileContent = fs.readFileSync(filePath);
  const params = {
    Bucket: BUCKET_NAME,
    Key: `Share-TEST/${Date.now()}`,
    ACL: "private",
    ContentType: "image/jpeg",
    Body: fileContent
  };
  s3.upload(params, function(err, data) {
    if (err) {
      throw err;
    }
    console.log(data.Location);
    s3.getSignedUrl('getObject', { Bucket: BUCKET_NAME, Key: data.Key, Expires: 60 }, function (err, url) {
      console.log(err, url);
    });
  });
};

uploadFile("/path/to/file");
and the generated URL looks like this:
https://bucket-name.s3.aws-region.amazonaws.com/Share-TEST/15852504924-1?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=111111111111111/22222222/region-name/s3/aws4_request&X-Amz-Date=202033333416Z&X-Amz-Expires=60&X-Amz-Signature=333333333&X-Amz-SignedHeaders=host
Please help me with some code or links.