AWS S3: Generate a presigned URL with an MD5 check - amazon-web-services

I'm looking to generate a pre-signed URL with AWS S3.
It works fine with some conditions (MIME type, for example), but I'm unable to use 'Content-MD5'.
I use the Node.js SDK and put the MD5 in the Fields object.
const options = {
  Bucket: bucket,
  Expires: expires,
  ContentType: 'multipart/form-data',
  Conditions: [{ key }],
  Fields: {
    'Content-MD5': params.md5,
  },
} as PresignedPost.Params;
if (acl) {
  options.Conditions.push({ acl });
}
if (params.mimeType) {
  options.Conditions.push({ contentType: params.mimeType });
}
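For reference, a minimal sketch of how options like these are typically handed to the SDK (this assumes the AWS SDK for JavaScript v2, whose createPresignedPost returns the POST URL plus the fields the form has to echo back):
const s3 = new AWS.S3();
s3.createPresignedPost(options, (err, data) => {
  if (err) throw err;
  // data.url is the POST endpoint; every entry in data.fields,
  // including 'Content-MD5', must be appended to the form before the file.
  console.log(data.url, data.fields);
});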
But when I then upload the file, I would like AWS to check the uploaded file itself against the MD5 given in the presigned request, yet I always get this error:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>Invalid according to Policy: Policy Condition failed: ["eq", "$Content-MD5", "<md5>"]</Message>
<RequestId>497312AFEEF83235</RequestId>
<HostId>KY9RxpGZzRog7hjlDk3whjAbItG/mwhpItYDL7rUNNH4BCXMfmLZsbZIPKivmSZZ3VkWxlgstOk=</HostId>
</Error>
My MD5 is generated like this in the browser (just after recording a video):
const reader = new FileReader();
reader.readAsBinaryString(blob);
reader.onloadend = () => {
  const mdsum = CryptoJS.MD5(reader.result.toString());
  resolve(CryptoJS.enc.Base64.stringify(mdsum));
};
Maybe that's not the way it works?
Edit:
If I add the MD5 to the upload form data (the hash is the same as the one set in the presigned request):
formData.append('Content-MD5', encodeURI(fields['Content-MD5']));
the error becomes
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>BadDigest</Code>
<Message>The Content-MD5 you specified did not match what we received.</Message>
<ExpectedDigest>2b36c76525c8d3a6dada59a6ad2867a7</ExpectedDigest>
<CalculatedDigest>+RifURVLd61O6QCT+SzhBg==</CalculatedDigest>
<RequestId>B4FF38D0FCC2E8F2</RequestId>
<HostId>yS7q200rJpBu48RNcGzsb1oGbDUrN8UK9+gkg6jGMl+EJSGeyQaSCfwfcMRUeNlJYapfmF304Oc=</HostId>
</Error>
Answer:
The binary string from readAsBinaryString has to be parsed as Latin1 before hashing; passing it straight to CryptoJS.MD5 makes CryptoJS interpret it as UTF-8, which mangles every byte above 0x7F and explains the mismatched digest:
const reader = new FileReader();
reader.readAsBinaryString(blob);
reader.onloadend = () => {
  const words = CryptoJS.enc.Latin1.parse(reader.result.toString());
  resolve(CryptoJS.enc.Base64.stringify(CryptoJS.MD5(words)));
};
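An equivalent sketch that avoids binary strings entirely (assuming crypto-js 4.x with its typed-array support, where CryptoJS.lib.WordArray.create accepts an ArrayBuffer):
const reader = new FileReader();
reader.readAsArrayBuffer(blob);
reader.onloadend = () => {
  // Hash the raw bytes directly instead of going through a binary string.
  const words = CryptoJS.lib.WordArray.create(reader.result);
  resolve(CryptoJS.enc.Base64.stringify(CryptoJS.MD5(words)));
};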

Related

Cloudfront Malformed Policy Error with AWS Cloudfront-Signer V3

I'm having an issue with the AWS Cookie-Signer V3 and Custom Policies. I'm currently using @aws-sdk/cloudfront-signer v3.254.0. I have followed the official docs on how to create and handle signed cookies - it works as long as I don't use custom policies.
Setup
I use a custom lambda via an API Gateway to obtain the Set-Cookie header with my signed cookies. These cookies will be attached to a further file-request via my AWS Cloudfront instance. In order to avoid CORS errors, I have set up custom domains for the API Gateway as well as for the Cloudfront instance.
A minified example of the signing and the return value looks as follows:
// Expiration time
const getExpTime = new Date(Date.now() + 5 * (60 * 60 * 1000)).toISOString();

// Cookie-Signer
const signedCookie = getSignedCookies({
  keyPairId: "MY-KEYPAIR-ID",
  privateKey: "MY-PRIVATE-KEY",
  url: "https://cloudfront.example.com/path-to-file/file.m3u8",
  dateLessThan: getExpTime,
});

// Response
const response = {
  statusCode: 200,
  isBase64Encoded: false,
  body: JSON.stringify({ url: url, bucket: bucket, key: key }),
  headers: {
    "Content-Type": "application/json",
    "Access-Control-Allow-Origin": "https://example.com",
    "Access-Control-Allow-Credentials": true,
    "Access-Control-Allow-Methods": "OPTIONS,POST,GET",
  },
  multiValueHeaders: {
    "Set-Cookie": [
      `CloudFront-Expires=${signedCookie["CloudFront-Expires"]}; Domain=example.com; Path=/${path}/`,
      `CloudFront-Signature=${signedCookie["CloudFront-Signature"]}; Domain=example.com; Path=/${path}/`,
      `CloudFront-Key-Pair-Id=${signedCookie["CloudFront-Key-Pair-Id"]}; Domain=example.com; Path=/${path}/`,
    ],
  },
};
This works well if I request a single file from my S3 bucket. However, I want to stream video files from my S3 via Cloudfront, and according to the AWS docs, wildcard characters are only allowed with Custom Policies. I need this wildcard to grant access to the entire video folder containing my video chunks. Again following the official docs, I have updated my lambda with:
// Expiration time
const getExpTime = new Date(Date.now() + 5 * (60 * 60 * 1000)).getTime();

// Custom Policy
const policyString = JSON.stringify({
  Statement: [
    {
      Resource: "https://cloudfront.example.com/path-to-file/*",
      Condition: {
        DateLessThan: { "AWS:EpochTime": getExpTime },
      },
    },
  ],
});

// Cookie signing
const signedCookie = getSignedCookies({
  keyPairId: "MY-KEYPAIR-ID",
  privateKey: "MY-PRIVATE-KEY",
  policy: policyString,
  url: "https://cloudfront.example.com/path-to-file/*",
});
which results in a Malformed Policy error.
What confuses me is that the getSignedCookies() method requires the url property even though I'm using a custom policy with the Resource parameter. Since the Resource parameter is optional, I've also tried without it, which led to the same error.
To rule out that something is wrong with the wildcard character, I've also run a test pointing to the exact file while using the custom policy. Although this works without the custom policy, it fails with the Malformed Policy error when the custom policy is used.
Since there is also no example of how to use the Cloudfront Cookie-Signer V3 with custom policies, I'd be very grateful if someone can tell me how I'm supposed to type this out!
Cheers! 🙌
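One thing worth cross-checking against the CloudFront docs: signed cookies that use a custom policy carry a CloudFront-Policy cookie instead of CloudFront-Expires, so the Set-Cookie block above would change roughly as sketched here (this assumes getSignedCookies exposes a "CloudFront-Policy" value when a policy is passed in):
multiValueHeaders: {
  "Set-Cookie": [
    // With a custom policy, CloudFront-Expires is replaced by CloudFront-Policy.
    `CloudFront-Policy=${signedCookie["CloudFront-Policy"]}; Domain=example.com; Path=/${path}/`,
    `CloudFront-Signature=${signedCookie["CloudFront-Signature"]}; Domain=example.com; Path=/${path}/`,
    `CloudFront-Key-Pair-Id=${signedCookie["CloudFront-Key-Pair-Id"]}; Domain=example.com; Path=/${path}/`,
  ],
},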

AWS SDK generates wrong presigned url for reading an object

I am using S3 to store videos.
And now, I am using presigned urls to restrict access to them.
I am using these functions to generate the presigned url:
var getVideoReadSignedUrl = async function (url) {
  const key = awsS3Helpers.parseVideoKeyFromVideoUrl(url);
  return new Promise((resolve, reject) => {
    s3.getSignedUrl(
      "getObject",
      {
        Bucket: AWS_BUCKET_NAME,
        Key: key,
        Expires: 300,
      },
      (err, url) => {
        console.log(
          "🚀 ~ file: s3-config.js ~ line 77 ~ returnnewPromise ~ err",
          err
        );
        console.log(
          "🚀 ~ file: s3-config.js ~ line 77 ~ returnnewPromise ~ url",
          url
        );
        if (err) {
          reject(err);
        } else {
          resolve(url);
        }
      }
    );
  });
};
And this:
const parseVideoKeyFromVideoUrl = (object_url) => {
  const string_to_remove =
    "https://" + AWS_BUCKET_NAME + ".s3." + REGION + ".amazonaws.com/";
  const object_key = object_url.replace(string_to_remove, "");
  return object_key;
};
This is an example of a video url:
https://BUCKET_NAME.s3.REGION.amazonaws.com/videos%2F60e589xxxxxxxxx463c.mp4
So I call getVideoReadSignedUrl to get the signed url to give access to it:
getVideoReadSignedUrl(
"https://BUCKET_NAME.s3.REGION.amazonaws.com/videos%2F60e589xxxxxxxxx463c.mp4"
);
parseVideoKeyFromVideoUrl correctly parses the key from the url:
videos%2F60e589xxxxxxxxx463c.mp4
BUT, this is what getVideoReadSignedUrl generates:
https://BUCKET_NAME.s3.REGION.amazonaws.com/videos%252F60e589xxxxxxxxx463c.mp4?X-Amz-Algorithm=AWS4-xxx-xxxxx&X-Amz-Credential=AKIARxxxxxMVEWKUW%2F20221120%2Feu-xxxx-3%2Fs3%2Faws4_request&X-Amz-Date=20221120xxxx19Z&X-Amz-Expires=300&X-Amz-Signature=0203efcfaxxxxxxc53815746f75a357ff9d53fe581491d&X-Amz-SignedHeaders=host
When I open that url in the browser it tells me that the key does not exist:
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>videos%2F60e589xxxxxxxxx463c.mp4</Key>
Even though in the error message the key is the same as in the original url.
But, I noticed a slight difference in the key between the original url and the presigned url:
Key in original url:
videos% |2F60| e589xxxxxxxxx463c.mp4
Key in signed url:
videos% |252F60| e589xxxxxxxxx463c.mp4
Not sure if this is causing the issue.
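That difference is plain double URL-encoding: the key parsed out of the stored URL is already percent-encoded, and building the signed URL presumably encodes it once more, so the % itself becomes %25. A quick illustration in plain Node (no SDK involved):
const storedKey = "videos%2F60e589xxxxxxxxx463c.mp4"; // value parsed from the stored URL
console.log(encodeURIComponent(storedKey)); // "videos%252F60e589xxxxxxxxx463c.mp4"
console.log(decodeURIComponent(storedKey)); // "videos/60e589xxxxxxxxx463c.mp4" (the real object key)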
NOTE:
For the upload, I am using multipart upload, and the video URL is structured like that because of file_name here:
const completeMultiPartUpload = async (user_id, parts, upload_id) => {
  let file_name;
  file_name = `videos/${user_id}.mp4`;
  let params = {
    Bucket: AWS_BUCKET_NAME,
    Key: file_name,
    MultipartUpload: {
      Parts: parts,
    },
    UploadId: upload_id,
  };
  // (Call elided in the original snippet; `data` is the CompleteMultipartUpload response.)
  const data = await s3.completeMultipartUpload(params).promise();
  return data;
};
However, it should be stored like this:
/videos/e589xxxxxxxxx463c.mp4
and not like this:
/videos%2F60e589xxxxxxxxx463c.mp4
I am not sure why it replaces / with %2F, and this may be what's causing the whole issue.
After more investigation, I found that completeMultiPartUpload returns this:
{
  Location:
    "https://BUCKET_NAME.s3.REGION.amazonaws.com/videos%2Fxxxxxxxxxxxxxxxxx.mp4",
  Bucket: "lodeep-storage-3",
  Key: "videos/xxxxxxxxxxxxxxxxx.mp4",
  ETag: '"ee6xxxxxxxxxxxxf11-1"',
}
And so the actual object key is saved like this in S3:
"videos/xxxxxxxxxxxxxxxxx.mp4"
And this is the object url if I get it from AWS console:
https://BUCKET_NAME.s3.REGION.amazonaws.com/videos/xxxxxxxxxxxxxxxxx.mp4
But, in the database, this URL gets saved like this, since it's what the completeMultiPartUpload function finally returns:
https://BUCKET_NAME.s3.REGION.amazonaws.com/videos%2Fxxxxxxxxxxxxxxxxx.mp4
Notice how / is replaced with %2F.
So, when I am generating the signed URL to read the video, the function that parses the key from the URL, parseVideoKeyFromVideoUrl, instead of getting the correct URL in S3:
https://BUCKET_NAME.s3.REGION.amazonaws.com/videos/xxxxxxxxxxxxxxxxx.mp4
It gets the one stored in the database:
https://BUCKET_NAME.s3.REGION.amazonaws.com/videos%2Fxxxxxxxxxxxxxxxxx.mp4
And so instead of returning this as a key:
/videos/xxxxxxxxxxxxxxxxx.mp4
It returns this:
/videos%2Fxxxxxxxxxxxxxxxxx.mp4
Which is an incorrect key, so the signed URL to read the video is incorrect and I get the "key does not exist" error.
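Given that, a minimal fix sketch is to decode the key after stripping the bucket prefix, so the percent-encoded value stored in the database maps back to the real object key (the helper name and the AWS_BUCKET_NAME / REGION constants are the ones from the question; decodeURIComponent is the only addition):
const parseVideoKeyFromVideoUrl = (object_url) => {
  const string_to_remove =
    "https://" + AWS_BUCKET_NAME + ".s3." + REGION + ".amazonaws.com/";
  const encoded_key = object_url.replace(string_to_remove, "");
  // "videos%2Fxxxxxxxxxxxxxxxxx.mp4" -> "videos/xxxxxxxxxxxxxxxxx.mp4"
  return decodeURIComponent(encoded_key);
};
Alternatively, storing the Key field that completeMultiPartUpload returns (rather than the Location URL) avoids the round trip through URL encoding altogether.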

Upload image to Amazon S3 using @aws-sdk/client-s3 and get its location

I am trying to upload an image file to S3 but get this error:
ERROR: MethodNotAllowed: The specified method is not allowed against this resource.
My code uses the @aws-sdk/client-s3 package to upload with this code:
const s3 = new S3({
  region: 'us-east-1',
  credentials: {
    accessKeyId: config.accessKeyId,
    secretAccessKey: config.secretAccessKey,
  }
});

exports.uploadFile = async options => {
  options.internalPath = options.internalPath || (`${config.s3.internalPath + options.moduleName}/`);
  options.ACL = options.ACL || 'public-read';
  logger.info(`Uploading [${options.path}]`);
  const params = {
    Bucket: config.s3.bucket,
    Body: fs.createReadStream(options.path),
    Key: options.internalPath + options.fileName,
    ACL: options.ACL
  };
  try {
    const s3Response = await s3.completeMultipartUpload(params);
    if (s3Response) {
      logger.info(`Done uploading, uploaded to: ${s3Response.Location}`);
      return { url: s3Response.Location };
    }
  } catch (err) {
    logger.error(err, 'unable to upload:');
    throw err;
  }
};
I am not sure what this error means, and once the file is uploaded I need to get its location in S3.
Thanks for any help.
For uploading a single image file you need to be calling s3.upload() not s3.completeMultipartUpload().
If you had very large files and wanted to upload them in multiple parts, the workflow would look like:
s3.createMultipartUpload()
s3.uploadPart()
s3.uploadPart()
...
s3.completeMultipartUpload()
Looking at the official documentation, it looks like the new way to do a simple S3 upload in the JavaScript SDK is this:
s3.send(new PutObjectCommand(uploadParams));
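A fuller sketch of that v3 route which also returns a location, since PutObject's response has no Location field (the bucket, key, and region below are placeholders mirroring the question's config):
const { S3, PutObjectCommand } = require('@aws-sdk/client-s3');
const fs = require('fs');

const s3 = new S3({ region: 'us-east-1' });

// Hypothetical helper: upload one file and hand back a URL for it.
async function uploadFile(bucket, key, filePath) {
  await s3.send(new PutObjectCommand({
    Bucket: bucket,
    Key: key,
    Body: fs.createReadStream(filePath),
  }));
  // PutObject returns no Location, so build the usual virtual-hosted-style URL
  // by hand (encode each key segment, but keep the slashes).
  const encodedKey = key.split('/').map(encodeURIComponent).join('/');
  return { url: `https://${bucket}.s3.us-east-1.amazonaws.com/${encodedKey}` };
}
Alternatively, the Upload class from @aws-sdk/lib-storage handles large files in parts, and its done() result typically does include a Location field.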

AWS.S3.upload() 403 Error When Attempting Multipart Upload

TL;DR
When attempting to upload a file directly from the browser using the s3.upload() method provided by the AWS SDK for Javascript in the Browser combined with temporary IAM Credentials generated by a call to AWS.STS.getFederationToken() everything works fine for non-multipart uploads, and for the first part of a multipart upload.
But when s3.upload() attempts to send the second part of a multipart upload S3 responds with a 403 Access Denied error.
Why?
The Context
I'm implementing an uploader in my app that will enable multipart (chunked) uploads directly from the browser to my S3 bucket.
To achieve this, I'm utilizing the s3.upload() method of the AWS SDK for Javascript in the Browser, which I understand to be nothing more than sugar for its underlying utilization of new AWS.S3.ManagedUpload().
A simple illustration of what I'm attempting can be found here: https://aws.amazon.com/blogs/developer/announcing-the-amazon-s3-managed-uploader-in-the-aws-sdk-for-javascript/
Additionally, I'm also using AWS.STS.getFederationToken() as a means to vend temporary IAM User credentials from my API layer to authorize the uploads.
The 1,2,3:
The user initiates an upload by choosing a file via a standard HTML <input type="file">.
This triggers an initial request to my API layer to ensure the user has the necessary privileges on my own system to perform this action, if that's true then my server calls AWS.STS.getFederationToken() with a Policy param that scopes their privileges down to nothing more than uploading the file to the key provided. And then returns the resulting temporary creds to the browser.
Now that the browser has the temp creds it needs, it can go about using them to create a new AWS.S3 client and then execute the AWS.S3.upload() method to perform a (supposedly) automagical multipart upload of the file.
The Code
api.myapp.com/vendUploadCreds.js
This is the API layer method called that generates and vends the temporary upload creds. At this point in the process the account has already been authenticated and authorized to receive the creds and upload the file.
module.exports = function vendUploadCreds(request, response) {
  var account = request.params.account;
  var file = request.params.file;
  var bucket = 'cdn.myapp.com';

  var sts = new AWS.STS({
    AccessKeyId : process.env.MY_AWS_ACCESS_KEY_ID,
    SecretAccessKey : process.env.MY_AWS_SECRET_ACCESS_KEY
  });

  /// The following policy is *exactly* the same as the S3 policy
  /// attached to the IAM user that executes this STS request.
  var policy = {
    Version : '2012-10-17',
    Statement : [
      {
        Effect : 'Allow',
        Action : [
          's3:ListBucket',
          's3:ListBucketMultipartUploads',
          's3:ListBucketVersions',
          's3:ListMultipartUploadParts',
          's3:AbortMultipartUpload',
          's3:GetObject',
          's3:GetObjectVersion',
          's3:PutObject',
          's3:PutObjectAcl',
          's3:PutObjectVersionAcl',
          's3:DeleteObject',
          's3:DeleteObjectVersion'
        ],
        Resource : [
          'arn:aws:s3:::' + bucket + '/' + account._id + '/files/' + file.name
        ],
        Condition : {
          StringEquals : {
            's3:x-amz-acl' : ['private']
          }
        }
      }
    ]
  };

  sts.getFederationToken({
    DurationSeconds : 129600, /// 36 hours
    Name : account._id + '-uptoken',
    Policy : JSON.stringify(policy)
  }, function(err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    response.send(data);
  });
}
console.myapp.com/uploader.js
This is a truncated illustration of the uploader on the browser-side that first calls the vendUploadCreds API method and then uses the resulting temporary creds to execute the multipart upload.
uploader.getUploadCreds(account, file) {
  /// A request is sent to api.myapp.com/vendUploadCreds
  /// Upon successful response, the creds are returned.
  request('https://api.myapp.com/vendUploadCreds', {
    params : {
      account : account,
      file : file
    }
  }, function(error, data) {
    upload.credentials = data.credentials;
    this.uploadFile(upload);
  });
}
uploader.uploadFile : function(upload) {
  var uploadID = upload.id;

  /// The `upload` object coming through via the args has
  /// a `credentials` property containing the creds obtained
  /// via the `vendUploadCreds` method above.
  var credentials = new AWS.Credentials({
    accessKeyId : upload.credentials.AccessKeyId,
    secretAccessKey : upload.credentials.SecretAccessKey,
    sessionToken : upload.credentials.SessionToken
  });

  AWS.config.region = 'us-east-1';

  var s3 = new AWS.S3({
    credentials,
    signatureVersion : 'v2', /// 'v4' also attempted
    params : {
      Bucket : 'cdn.myapp.com'
    }
  });

  var uploader = s3.upload({
    Key : upload.key,
    ACL : 'private',
    ContentType : upload.file.type,
    Body : upload.file
  },{
    queueSize : 3,
    partSize : 1024 * 1024 * 5
  });

  uploader.on('httpUploadProgress', function(event) {
    var total = event.total;
    var loaded = event.loaded;
    var percent = loaded / total;
    percent = Math.ceil(percent * 100);
    console.log('Uploaded ' + percent + '% of ' + upload.key);
  });

  uploader.send(function(error, result) {
    console.log(error, result);
  });
}
cdn.myapp.com S3 Bucket CORS Configuration
So far as I can tell, this is wide open, so CORS shouldn't be the issue?
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>DELETE</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<ExposeHeader>ETag</ExposeHeader>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
The Error
Okay, so when I attempt to upload a file, it gets really confusing:
Any file under 5Mb uploads just fine. Files under 5Mb (the minimum part size for an S3 Multipart Upload) do not require multipart upload so s3.upload() sends them as a standard PUT request. Makes sense, and they succeed just fine.
Any file over 5Mb seems to upload fine, but only for the first part. Then when s3.upload() attempts to send the second part S3 responds with a 403 Access Denied error.
I hope you're a fan of info because here's a dump of the error that I get from Chrome when I attempt to upload Astrud Gilberto's melancholy classic "So Nice (Summer Samba)" (MP3, 6.6Mb):
General
Request URL:https://s3.amazonaws.com/cdn.myapp.com/5a2cbda70b9b741661ad98df/files/Astrud-Gilberto-So-Nice-1512903188573.mp3?partNumber=2&uploadId=ljaviv9n25aRKwc4HKGhBbbXTWI3wSGZwRRi39fPSEvU2dcM9G7gO6iu5w7va._dMTZil4e_b53Iy5ngojJqRr5F6Uo_ZXuF27yaqizeARmUVf5ZVeah8ZjYwkZV8C0i3rhluYoxFHUPxlLMjaKLww--
Request Method:PUT
Status Code:403 Forbidden
Remote Address:52.216.165.77:443
Referrer Policy:no-referrer-when-downgrade
Response Headers
Access-Control-Allow-Methods:GET, PUT, POST, DELETE
Access-Control-Allow-Origin:*
Access-Control-Expose-Headers:ETag
Access-Control-Max-Age:3000
Connection:close
Content-Type:application/xml
Date:Sun, 10 Dec 2017 10:53:12 GMT
Server:AmazonS3
Transfer-Encoding:chunked
Vary:Origin, Access-Control-Request-Headers, Access-Control-Request-Method
x-amz-id-2:0Mzo7b/qj0r5Is7aJIIJ/U2VxTTulWsjl5kJpTnEhy/B0fQDlRuANcursnxI71LA16AdePVSc/s=
x-amz-request-id:DA008A5116E0058F
Request Headers
Accept:*/*
Accept-Encoding:gzip, deflate, br
Accept-Language:en-US,en;q=0.9
Authorization:AWS ASIAJAR5KXKAOPTC64PQ:Wo9lbflZuVVS9+UTTDSjU0iPUbI=
Cache-Control:no-cache
Connection:keep-alive
Content-Length:1314943
Content-Type:application/octet-stream
DNT:1
Host:s3.amazonaws.com
Origin:http://132.12.23.145:8080
Pragma:no-cache
Referer:http://132.12.23.145:8080/
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36
X-Amz-Date:Sun, 10 Dec 2017 10:53:09 GMT
x-amz-security-token:FQoDYXdzENT//////////wEaDK9srK2+5FN91W+T+SLSA/LdEwpOiY7wDkgggOMhuGEiqIXAQrFMk/EqvZFl8Npqx414WsL9E310rj5mU1RGXsxuN+ers1r6NVPpJIlXSDG7bnwlGabejNvDL9vMX5HJHGbZOEVUoaL60/T5NM+0TZtH61vHAEVmRVFKOB0tSez8TEU1jQ2cJME0THn5RuV/6CuIpA9dlEYO7/ajB5UKT3F1rBkt12b0DeWmKG2pvTJRwa8nrsF6Hk6dk1B1Hl1fUwAh9rD17O9Roi7MFLKisPH+96WX08liC8k+n+kPPOox6ZZM/lOMwlNinDjLc2iC+JD/6uxyAGpNbQ7OHAUsF7DOiMvw6Nv6PrImrBvnK439BhLOk1VXCfxxmtTWGim8TD1w1EciZcJhsuCMpDF8fMnhF/JFw3KNOJXHUtpTGRjNbOPcPojVs3FgIt+9MllIA0pGMr2bYmA3HvKewnhD2qeKkG3DPDIbpwuRoY4wIXCP5OclmoHp5nE5O94aRIvkBvS1YmqDQO+jTiI7/O7vlX63q9sGqdIA4nwzh5ASTRJhC2rKgxepFirEB53dCev8i9f1pwXG3/4H3TvPCLVpK94S7/csNJexJP75bPBpo4nDeIbOBKKIMuUDK1pQsyuGwuUolKS00QU=
X-Amz-User-Agent:aws-sdk-js/2.164.0 callback
Query String Params
partNumber:2
uploadId:ljaviv9n25aRKwc4HKGhBbbXTWI3wSGZwRRi39fPSEvU2dcM9G7gO6iu5w7va._dMTZil4e_b53Iy5ngojJqRr5F6Uo_ZXuF27yaqizeARmUVf5ZVeah8ZjYwkZV8C0i3rhluYoxFHUPxlLMjaKLww--
Actual Response Body
And here's the body of the response from S3:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>8277A4969E955274</RequestId><HostId>XtQ2Ezv0Wa81Rm2jymB5ZwTe+OHfwTcnNapYMgceqZCJeb75YwOa1AZZ5/10CAeVgmfeP0BFXnM=</HostId></Error>
The Questions
It's obviously not an issue with the creds created by the sts.getFederationToken() request, because if it were then the smaller (non-multipart) uploads would fail as well, right?
It's obviously not an issue with the CORS configuration on the cdn.myapp.com bucket, because if it were then the smaller (non-multipart) uploads would fail as well, right?
Why would S3 accept partNumber=1 of a multipart upload, and then 403 on the partNumber=2 of the same upload?
A Solution
After many hours of wrestling with this I figured out that the issue was with the Condition block of the IAM Policy that I was sending through as the Policy param of my AWS.STS.getFederationToken() request. Specifically, AWS.S3.upload() only sends an x-amz-acl header with the initial request, the call to S3.initiateMultipartUpload.
The x-amz-acl header is not included in the subsequent PUT requests for the actual parts of the upload.
I had the following condition on my IAM Policy, which I was using to ensure that any uploads must have an ACL of 'private':
Condition : {
  StringEquals : {
    's3:x-amz-acl' : ['private']
  }
}
So the initial S3.initiateMultipartUpload request was fine, but the subsequent PUTs for the parts failed because they didn't have the x-amz-acl header.
The solution was to edit the policy I was attaching to the temporary user and move the s3:PutObject permission into its own statement, and then adjust the condition to apply only if the targeted value exists. The final policy looks like so:
var policy = {
  Version : '2012-10-17',
  Statement : [
    {
      Effect : 'Allow',
      Action : [
        's3:PutObject'
      ],
      Resource : [
        'arn:aws:s3:::' + bucket + '/' + account._id + '/files/' + file.name
      ],
      Condition : {
        StringEqualsIfExists : {
          's3:x-amz-acl' : ['private']
        }
      }
    },
    {
      Effect : 'Allow',
      Action : [
        's3:AbortMultipartUpload'
      ],
      Resource : [
        'arn:aws:s3:::' + bucket + '/' + account._id + '/files/' + file.name
      ]
    }
  ]
};
Hopefully that'll save someone else from wasting three days on this.

How to generate an AWS S3 Pre-Signed URL

I'm trying to get the pre-signed URL for an Amazon S3 object using the Aws\S3\S3Client::createPresignedRequest() method:
$s3 = new S3Client($config);
$command = $s3->getCommand('GetObject', array(
    'Bucket' => $bucket,
    'Key' => $key,
    'ResponseContentDisposition' => 'attachment; filename="' . $fileName . '"',
));
$request = $s3->createPresignedRequest($command, $time);

// Get the actual presigned-url
$this->signedUrl = (string)$request->getUri();
I get a presigned URL like this:
https://s3.amazonaws.com/img/1c9a149e-57bc-11e5-9347-58743fdfa18a?X-Amz-Content-Sha256=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=13JZVPMFV04D8A3AQPG2%2F20150910%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20150910T181455Z&X-Amz-SignedHeaders=Host&X-Amz-Expires=1200&X-Amz-Signature=0d99ae98ea13e2974322575f95f5a19e94e13dc859b2509cecc21cd41c01c65d
and this URL returns an error:
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
....
Generating a pre-signed URL is done entirely on the client side with no interaction with the S3 service APIs. As such, there is no validation that the object actually exists, at the time a pre-signed URL is created. (A pre-signed URL can technically even be created before the object is uploaded).
The NoSuchKey error means exactly that -- there is no such object with the specified key in the bucket, where key, in S3 parlance, refers to the path+filename (the URI) of the object. (It's referred to as a key as in the term key/value store -- which S3 is -- the path to the object is the "key," and the object body/payload is the "value.")
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html
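A small sketch of that point (AWS SDK for JavaScript v2, matching the answer below; the helper name is made up for illustration): signing is purely local, so if existence matters, check it explicitly first, e.g. with headObject:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Hypothetical helper: fail fast if the object is missing, then sign.
async function getVerifiedSignedUrl(Bucket, Key) {
  await s3.headObject({ Bucket, Key }).promise(); // rejects with NotFound if the key doesn't exist
  return s3.getSignedUrl('getObject', { Bucket, Key, Expires: 1200 });
}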
The above error indicates the key doesn't exist, or the file you are generating the presigned URL for doesn't exist. You can generate a presigned URL even if the object or key is not present.
I have faced multiple issues similar to the one above, but I solved it by using await in my code to wait until the S3 key is uploaded.
In my scenario, I was using Lambda to upload the file to S3, generate the presigned URL, and send it to the client.
Lambda Code:
const AWS = require('aws-sdk');

async function uploadAndGeneratePreSignedUrlHandler () {
  const s3 = new AWS.S3();
  const basicS3ParamList = {
    Bucket: "Bucket_NAME",
    Key: "FILE_NAME", // PATH or FileName
  };
  const uploadS3ParamList = {
    ...basicS3ParamList,
    Body: "DATA" // Buffer data or file
  };
  try {
    await s3.upload(uploadS3ParamList).promise();
    const presignedURL = s3.getSignedUrl('getObject', basicS3ParamList);
    return presignedURL;
  } catch (error) {
    console.log('error');
  }
}
Client side:
I created a pop-up for the download:
// React
const popUpEventForDownload = async (testParams) => {
  try {
    const fetchResponse = await axios({ method: 'GET', url: 'GATEWAY_URL', data: testParams });
    const { presignedURL } = fetchResponse.data;
    downloadCSVFile(presignedURL, 'test');
  } catch (error) {
    console.log('error');
  }
};
downloadCSVFile = (sUrl, fileName) => {
  // If in Chrome or Safari or Firefox - download via virtual link click
  // Creating new link node.
  var link = document.createElement('a');
  link.href = sUrl;
  if (link.download !== undefined) {
    // Set HTML5 download attribute. This will prevent file from opening if supported.
    link.download = `${fileName}.CSV`;
  }
  // Dispatching click event.
  if (document.createEvent) {
    var e = document.createEvent('MouseEvents');
    e.initEvent('click', true, true);
    link.dispatchEvent(e);
    return true;
  }
  // Force file download (whether supported by server).
  var query = '?download';
  window.open(sUrl + query);
};