S3 presigned URL fails for large files - amazon-web-services

I have a presigned POST that works fine for any small file.
When I try to upload larger files, I get ACCESS DENIED on the POST, without any other message in the body.
The funny thing is that if I keep trying, it works after a few denied attempts. It is totally random...
When access is not denied, the content-length condition works as expected: it returns the correct error message when the file is larger than 100 MB. The problem is that a good part of the POSTs get denied.
The denial happens on the POST to the Amazon endpoint, so I don't have access to any log for it.
The same POST and script produce both results: sometimes the file goes through ("OK FILE"), sometimes the same request comes back "ACCESS DENIED".
Here is the code:
const S3 = new AWS.S3({
  signatureVersion: 'v4',
  region: region
});

const params = {
  Expires: linkExpiresSecs,
  Bucket: bucketName,
  Conditions: [
    ["content-length-range", 1, 104857600]
  ],
  Fields: {
    key: keyFile
  }
};

const response = await S3.createPresignedPost(params);

I think the validity of the link expires before the upload finishes for larger files.
As for the behavior where the upload sometimes succeeds, that could be due to network conditions, e.g. less congestion, or to part of the file having been cached previously.
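If the expiry really is the culprit, one mitigation is simply to give the POST policy more headroom. A minimal sketch, reusing the params object from the question and assuming linkExpiresSecs is currently a small value:
// Assumption: linkExpiresSecs was short (e.g. 60 s); give slow uploads time to finish.
const linkExpiresSecs = 15 * 60; // 15 minutes until the presigned POST policy expires

const response = await S3.createPresignedPost({ ...params, Expires: linkExpiresSecs });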

Related

How do I make a putObject request to a presigned URL using AWS S3

I am working with an AWS S3 bucket, trying to upload an image from a React Native project managed by Expo, with Express on the backend. I have created an s3 file on the backend that handles getting the presigned URL; this works and returns the URL to the front end inside this thunk function from Redux Toolkit. I used axios to send the request to my server; this works.
I have used both axios and fetch to try the final PUT to the presigned URL, but when it reaches the S3 bucket there is nothing in the file, just an empty file of 200 bytes every time. When I use the same presigned URL from Postman, attach an image in the binary section, and send the request, the image uploads to the bucket with no problems. When I send binary or base64 to the bucket from the RN app, it just uploads those values as text.
I attempted react-native-image-picker but was having problems with that too. Any ideas would be helpful, thanks. I have included a snippet from the Redux slice; if you need more info let me know.
redux slice projects.js
// create a project
// fancy function here ......
export const createProject = createAsyncThunk(
  "projects/createProject",
  async (postData) => {
    // sending image to s3 bucket and getting a url to store in d
    const response = await axios.get("/s3")
    // post image directly to s3 bucket
    const s3Url = await fetch(response.data.data, {
      method: "PUT",
      body: postData.image
    });
    console.log(s3Url)
    console.log(response.data.data)
    // make another request to my server to store extra data
    try {
      const response = await axios.post('/works', postData)
      return response.data.data;
    } catch (err) {
      console.log("Create projects failed: ", err)
    }
  }
)
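For illustration only (this is not part of the asker's code): a common cause of the empty-file symptom in React Native/Expo is sending the file URI or a base64 string as the PUT body instead of the binary data. A hedged sketch of one approach, assuming postData.image is a local file:// URI such as the one returned by the Expo image picker:
// Sketch under the assumption above: read the local file into a Blob before the PUT.
const presigned = await axios.get("/s3");           // same backend route as in the question
const fileResponse = await fetch(postData.image);   // postData.image assumed to be a file:// URI
const blob = await fileResponse.blob();

const s3Response = await fetch(presigned.data.data, {
  method: "PUT",
  body: blob,                                  // binary body, not a URI or base64 string
  headers: { "Content-Type": "image/jpeg" }    // assumption: the upload is a JPEG
});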

How to efficiently allow users to view Amazon S3 content?

I am currently creating a basic app with React Native (frontend) and Flask/MongoDB (backend). I am planning on using AWS S3 as cheap cloud storage for all the images and videos that are going to be uploaded and viewed. My current idea (and this could be totally off) is that when a user uploads content, it will go through my Flask API and then to S3 storage. When a user wants to view content, I am not sure what the plan of attack is. Should I use my Flask API as a proxy, or is there a way to simply send a link to the content directly on S3 (which would avoid the extra traffic through my API)?
I am quite new to using AWS and if there is already a post discussing this topic, please let me know, and I'd be more than happy to take down this duplicate. I just can't seem to find anything.
Should I use my Flask API as a proxy, or is there a way to simply send a link to the content directly on S3 (which would avoid the extra traffic through my API)?
If the content is public, you just provide a URL which points directly to the file in the S3 bucket.
If the content is private, you generate a presigned URL on your backend for the file you want to give access to. This URL should only be valid for a short amount of time (for example, 15-30 minutes). You can regenerate it when it expires.
Moreover, you can generate a presigned URL which can be used to upload directly from the front end to the S3 bucket. This might be an option if you don't want the upload traffic to go through the backend, or if you want faster uploads.
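As a rough illustration of the private-content case (the boto3 examples further down do the same in Python), here is a minimal Node.js sketch; the bucket name and object key are placeholders:
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ signatureVersion: 'v4' });

// Presigned GET URL for a private object, valid for 15 minutes.
const url = s3.getSignedUrl('getObject', {
  Bucket: 'my-bucket',          // placeholder bucket name
  Key: 'uploads/image123.jpg',  // placeholder object key
  Expires: 15 * 60              // seconds until the link expires
});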
There is an API for this, boto3; try to use it.
It is not so difficult. I have done something similar and will post the code here.
I did it the way Ervin said:
1. Frontend asks the backend to generate credentials.
2. Backend sends the credentials to the frontend.
3. Frontend uploads the file to S3.
4. Frontend notifies the backend when it is done.
5. Backend validates that everything is OK.
6. Backend creates a link to download the file; you have a lot of security options here.
Example of item 6: generate a presigned URL to download content.
bucket = app.config.get('BOTO3_BUCKET', None)
client = boto_flask.clients.get('s3')
params = {}
params['Bucket'] = bucket
params['Key'] = attachment_model.s3_filename
params['ResponseContentDisposition'] = 'attachment; filename={0}'.format(attachment_model.filename)
if attachment_model.mimetype is not None:
    params['ResponseContentType'] = attachment_model.mimetype
url = client.generate_presigned_url('get_object', ExpiresIn=3600, Params=params)
Example of item 2: the backend creates presigned credentials to POST your file to S3 and sends s3_credentials to the frontend.
acl_permission = 'private' if private_attachment else 'public-read'
condition = [{'acl': acl_permission},
             ["starts-with", "$key", '{0}/'.format(folder_name)],
             {'Content-Type': mimetype}]
bucket = app.config.get('BOTO3_BUCKET', None)
fields = {"acl": acl_permission, 'Bucket': bucket, 'Content-Type': mimetype}
client = boto_flask.clients.get('s3')
s3_credentials = client.generate_presigned_post(bucket, s3_filename, Fields=fields, Conditions=condition, ExpiresIn=3600)
Example of item 5: here is how the backend can check whether the file on S3 is OK.
bucket = app.config.get('BOTO3_BUCKET', None)
client = boto_flask.clients.get('s3')
response = client.head_object(Bucket=bucket, Key=s3_filename)
if response is None:
    return None, None
md5 = response.get('ETag').replace('"', '')
size = response.get('ContentLength')
Here is an example of how the frontend asks for credentials, uploads the file to S3, and informs the backend when it is done. I have stripped out a lot of application-specific code.
//frontend asking backend to create credentials, frontend will send some file metadata
AttachmentService.createPostUrl(payload).then((responseCredentials) => {
  let form = new FormData();
  Object.keys(responseCredentials.s3.fields).forEach(key => {
    form.append(key, responseCredentials.s3.fields[key]);
  });
  form.append("file", file);
  let payload = {
    data: form,
    url: responseCredentials.s3.url
  }
  //Frontend will send file to S3
  axios.post(payload.url, payload.data).then((res) => {
    return Promise.resolve(true);
  }).then((result) => {
    //when it is done, frontend informs backend
    AttachmentService.uploadSuccess(...).then((refreshCase) => {
      //Success
    });
  });
});

x-googl-acl isn't making uploaded files public

Currently I have been trying to upload objects (videos) to Google Cloud Storage. I have found out that the reason (possibly) I haven't been able to make them public is an ACL or IAM permission issue. The way it currently works is that I get a signed URL from the backend as follows:
const getGoogleSignedUrl = async (root, args, context) => {
  const { filename } = args;
  const googleCloud = new Storage({
    keyFilename: ,
    projectId: 'something'
  });
  const options = {
    version: 'v4',
    action: 'write',
    expires: Date.now() + 15 * 60 * 1000, // 15 minutes
    contentType: 'video/quicktime',
    extensionHeaders: {'x-googl-acl': 'public-read'}
  };
  const bucketName = 'something';
  // Get a v4 signed URL for uploading file
  const [url] = await googleCloud
    .bucket(bucketName)
    .file(filename)
    .getSignedUrl(options);
  return { url };
}
Once I have gotten temporary permission from the backend in the form of a URL, I make a PUT request to upload the file:
const response = await fetch(url, {
  method: 'PUT',
  body: blob,
  headers: {
    'x-googl-acl': 'public-read',
    'content-type': 'video/quicktime'
  }
}).then(res => console.log("thres is ", res)).catch(e => console.log(e));
Even though the file does get uploaded to Google Cloud Storage, it always shows Public access as Not public. Any help would be appreciated, since I am starting to lose track of how making an object public works in Google Cloud.
In AWS (previously) it was easy to make an object public by adding x-amz-acl to a PUT request.
Thank you in advance.
Update
I have changed the code above to reflect what it currently looks like. Also, when I look at the object in Google Storage after it has been uploaded, I see:
Public access: Not authorized
Type: video/quicktime
Size: 369.1 KB
Created: Feb 11, 2021, 5:49:02 PM
Last modified: Feb 11, 2021, 5:49:02 PM
Hold status: None
Retention policy: None
Encryption type: Google-managed key
Custom time: —
Public URL: Not applicable
Update 2
As stated, the issue where I wasn't able to upload the file after adding the recommended header was that I wasn't providing the header correctly. I changed the header from x-googl-acl to x-goog-acl, which has allowed me to upload to the cloud.
The new problem is that Public access is now showing as Not authorized.
Update 3
To try something new, I followed the directions listed here: https://www.jhanley.com/google-cloud-setting-up-gcloud-with-service-account-credentials/. Once I finished everything, these are the steps I took:
1 - Upload a new video to the cloud, using the new JSON key file provided:
const googleCloud = new Storage({
  keyFilename: json_file_given,
  projectId: 'something'
});
Once the file had been uploaded, I noticed no change with regard to it being public. It still shows Public access Not authorized.
2 - After checking the status of the uploaded object, I followed a similar approach to make sure I am using the same account as the JSON key that uploaded the file:
gcloud auth activate-service-account test@development-123456.iam.gserviceaccount.com --key-file=test_google_account.json
3 - Once I confirmed I was using the right account with the right permissions, I performed the next step:
gsutil acl ch -u AllUsers:R gs://example-bucket/example-object
This actually returned a response of No changes to gs://crit_bull_1/google5.mov.

Codename One: upload a picture taken by the camera to an Amazon S3 bucket

Original Question
I have tried what you suggested, but I have run into a few issues.
Here is what I did: filePath = Capture.capturePhoto(1024, -1);
I had an issue passing the S3_BUCKET_URL in the MultipartRequest constructor, so I used rq.setUrl(S3_BUCKET_URL).
I had to add rq.setHttpMethod("PUT"); since I got a 405: Method Not Allowed error otherwise.
Finally I got no errors, and I did see a sub-folder created under the bucket I created. I set the URL to "https://s3.amazonaws.com/bucket123/test/" and saw the test folder created, but the image file wasn't uploaded. The folder size wasn't 0, it showed the size of the file, but no image files were found in that sub-folder.
When I try to access the folder through S3 explorer I get Access Denied.
1. I manually created a sub-folder using S3 explorer and gave it read and write permissions, yet once the uploadFileToS3 method was called from Codename One, the permissions were lost.
2. So I changed the ACL to "public-read-write", with the same effect.
Please advise if I am missing anything.
Thank you.
Modified question
I have modified the old question and am following up on Shai's answer.
// Modified the url as follows - full path including bucket name and key
private static final String S3_BUCKET_URL = "https://s3.amazonaws.com/my-bucket/myfile.jpg";

//String filePath = captured from Camera;
public void uploadFileToS3(String filePath, String uniqueFileName) {
    MultipartRequest rq = new MultipartRequest();
    rq.setUrl(S3_BUCKET_URL);
    //rq.addArgument("acl", "private");
    //rq.addArgument("key", uniqueFileName);
    rq.addData(uniqueFileName, filePath, "image/jpeg");
    rq.setReadResponseForErrors(true);
    NetworkManager.getInstance().addToQueue(rq);
}
Now I do see that the file has been uploaded to the bucket as myfile.jpg. The bucket has all the rights, since it is given rights to read and write, and those permissions are also applied to sub-folders.
But myfile.jpg itself has lost all the permissions, and I am not able to download that file even though I see it in the bucket.
I accessed it by its URL, but the file cannot be opened.
I am not sure what I am missing in the header or request. I am really trying to get this working but not getting where I want to. This is a very important piece of the app I am working on.
Please provide any feedback.
Thank you.
--------------------------------------------------------------------------
Adding code
private static final String S3_BUCKET_URL = "https://s3.amazonaws.com/my-bucket/";
When I had the URL as above, my-bucket was created as an unknown object (no folder icon), but I did see that the file size matched the actual image size.
//String filePath = captured from Camera;
public void uploadFileToS3(String filePath, String uniqueFileName) {
    //uniqueFileName including sub folder - subfolder/my-image.jpg
    MultipartRequest rq = new MultipartRequest();
    rq.setUrl(S3_BUCKET_URL);
    rq.addArgument("acl", "public-read-write");
    rq.addArgument("key", uniqueFileName);
    rq.addData(uniqueFileName, filePath, "image/jpeg");
    rq.setReadResponseForErrors(true);
    NetworkManager.getInstance().addToQueue(rq);
}
It looks like I am not uploading the object properly. I went through the Amazon documentation and wasn't able to find anything related to uploading over plain HTTP. I am really stuck and I hope to get this resolved. Would you have any working code that simply uploads to S3?
More details: I am adding more detail to the question as I am still not able to resolve this.
// Original S3_BUCKET_URL
String S3_BUCKET_URL = "https://s3.amazonaws.com/my-bucket/";
// With this url I am getting a 400 : Bad Request Error

// I added the uniqueFilename (key) to the url as
String uniqueFilename = "imageBucket/image12345.jpg";
String S3_BUCKET_URL = "https://s3.amazonaws.com/my-bucket/" + uniqueFilename;
// Now I do see the file (I believe it's the file). The
// subfolder inherits the rights from the bucket (read, write),
// but the image file does not.
// When I click on the file using the S3 client browser I get
// an Access Denied prompt.

// my function to call the s3 bucket
public void takePicture()
{
    String stringImg = Capture.capturePhoto(1024, -1);
    MultipartRequest rq = new MultipartRequest();
    rq.setUrl(S3_BUCKET_URL);
    rq.addArgument("key", uniqueFilename);
    rq.addArgument("acl", "public-read-write");
    rq.setHttpMethod("PUT"); // If I don't specify this I am getting
                             // a 405 : Method Not Allowed Error.
    rq.addData("file", stringImg, "image/jpeg"); // captured image from camera
    rq.setFilename("file", uniqueFilename);
    NetworkManager.getInstance().addToQueue(rq);
}
At this point I believe I have tried everything suggested and I am still getting the same error. I believe I am not doing anything wrong, and I don't want to believe I am the only one trying to do this :)
I really need your help to get this resolved in order to keep this project going. I am wondering if there is any Amazon S3 SDK or library I can use in Codename One.
Please review this and let me know if this can really be achieved with Codename One.
Thank you.
I've tested this code and it should work; however, it depends on a fix to Codename One to deal with some stupid behavior of the Amazon API:
Form hi = new Form("Upload");
Button upload = new Button("Upload");
hi.add(upload);
upload.addActionListener(e -> {
    String file = Capture.capturePhoto();
    if(file != null) {
        try {
            String uniqueFileName = "Image" + System.currentTimeMillis() + ".jpg";
            MultipartRequest rq = new MultipartRequest();
            rq.setUrl("https://s3.amazonaws.com/my-bucket/");
            rq.addArgument("key", uniqueFileName);
            rq.addArgument("acl", "private");
            rq.addData("file", file, "image/jpeg");
            rq.setReadResponseForErrors(true);
            NetworkManager.getInstance().addToQueueAndWait(rq);
            if(rq.getResponseCode() != 200 && rq.getResponseCode() != 204) {
                System.out.println("Error response: " + new String(rq.getResponseData()));
            }
            ToastBar.showMessage("Upload completed", FontImage.MATERIAL_INFO);
        } catch(IOException err) {
            Log.e(err);
        }
    }
});
hi.show();
When running the code below (with POST) I didn't get the error you described; instead I got an error from Amazon indicating the key wasn't in the right order. Apparently Amazon relies on the order of the submitted fields, which is damn stupid. Unfortunately we ignore that order when you add an argument, and since this is a POST request the code to work around this is non-trivial.
I've made a fix to Codename One that will respect the natural order in which arguments are added, but since we already made this week's release, this will only go in next Friday (December 23rd, 2016). So the code above should work as expected following that update.
Older answer for reference
I haven't tested this but something like this should work:
private static final String S3_BUCKET_URL = "https://s3.amazonaws.com/my-bucket/";

public void uploadFileToS3(String filePath, String uniqueFileName) {
    MultipartRequest rq = new MultipartRequest(S3_BUCKET_URL);
    rq.addArgument("acl", "private");
    rq.addArgument("key", uniqueFileName);
    rq.addData("file", filePath, "image/jpeg");
    rq.setFilename("file", uniqueFileName);
    rq.setReadResponseForErrors(true);
    NetworkManager.getInstance().addToQueue(rq);
}
Notice that this doesn't do authorization so it assumes the bucket is accessible at least for write.
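To make "accessible at least for write" concrete, here is a hedged sketch (Node.js, placeholder bucket name) of a bucket policy that allows anonymous PutObject. Note that account- or bucket-level Block Public Access settings can still override it, and opening a bucket to public writes is rarely a good idea outside of testing:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Policy allowing anonymous uploads to the bucket (testing only).
const policy = {
  Version: '2012-10-17',
  Statement: [{
    Sid: 'AllowAnonymousPut',
    Effect: 'Allow',
    Principal: '*',
    Action: 's3:PutObject',
    Resource: 'arn:aws:s3:::my-bucket/*'   // placeholder bucket
  }]
};

s3.putBucketPolicy({ Bucket: 'my-bucket', Policy: JSON.stringify(policy) }, (err) => {
  if (err) console.error('Failed to apply bucket policy:', err);
});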
Generate a presigned URL on your server, then pass that URL to the mobile client:
IAmazonS3 client;
client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

// Generate a pre-signed URL.
GetPreSignedUrlRequest request = new GetPreSignedUrlRequest
{
    BucketName = bucketName,
    Key = objectKey,
    Verb = HttpVerb.PUT,
    Expires = DateTime.Now.AddMinutes(5)
};
string url = null;
url = client.GetPreSignedURL(request);
The mobile client can then upload a file to that URL using a plain old HTTP PUT request:
// Upload a file using the pre-signed URL.
HttpWebRequest httpRequest = WebRequest.Create(url) as HttpWebRequest;
httpRequest.Method = "PUT";
using (Stream dataStream = httpRequest.GetRequestStream())
{
    // Upload object.
}
HttpWebResponse response = httpRequest.GetResponse() as HttpWebResponse;
You can take a look at the official documentation at http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjectPreSignedURLDotNetSDK.html

Invalid Date When Uploading to AWS S3

I'm trying to upload an image to S3 via putObject and a pre-signed URL.
Here is the URL that was provided when I generated the pre-signed URL:
https://<myS3Bucket>.s3.amazonaws.com/1ffd1c88-5661-48f9-a135-04bd569614dd.jpg?AWSAccessKeyId=<accessKey>&Expires=1458177431311&Signature=<signature>-amz-security-token=<token>
When I attempt to upload the file via a PUT, AWS responds with:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AccessDenied</Code>
  <Message>Invalid date (should be seconds since epoch): 1458177431311</Message>
  <RequestId>...</RequestId>
  <HostId>...</HostId>
</Error>
Here is the curl version of the request I was using:
curl -X PUT -H "Cache-Control: no-cache" -H "Postman-Token: 78e46be3-8ecc-4156-be3d-7e2f4688a127" -H "Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW" -F "file=@[object Object]" "https://<myS3Bucket>.s3.amazonaws.com/1ffd1c88-5661-48f9-a135-04bd569614dd.jpg?AWSAccessKeyId=<accessKey>&Expires=1458177431311&Signature=<signature>-amz-security-token=<mySecurityToken>"
Since the timestamp is generated by AWS, it should be correct. I have tried changing it to include decimals and got the same error.
Could the problem be in the way I'm uploading the file in my request?
Update - Add code for generating the signed URL
The signed URL is being generated via the AWS Javascript SDK:
var AWS = require('aws-sdk')
var uuid = require('node-uuid')
var Promise = require('bluebird')

var s3 = new AWS.S3()
var params = {
  Bucket: bucket, // bucket is stored as .env variable
  Key: uuid.v4() + '.jpg' // file is always a jpg
}

return new Promise(function (resolve, reject) {
  s3.getSignedUrl('putObject', params, function (err, url) {
    if (err) {
      reject(new Error(err))
    }
    var payload = { url: url }
    resolve(payload)
  })
})
My access key and secret key are loaded via environment variables as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Amazon S3 only supports 32-bit timestamps in signed URLs; 2147483647 is the largest timestamp you can have.
I was creating timestamps 20 years in the future and it was breaking my signed URLs. Using a value less than 2,147,483,647 fixes the issue.
Hope this helps someone!
You just got bitten by bad documentation. It seems related to this:
Amazon S3 invalid date when using expires in url_for
The integer is meant to be "the number of seconds after the current time".
Since you entered the time as an epoch value (0 = 1970-01-01), it becomes the current epoch time plus your value, 46 + 46 years = 92 years, which appears to be a crazy expiration period for S3.
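For the JavaScript SDK used in the question, that means Expires should be a duration in seconds from now rather than an epoch timestamp; a minimal sketch based on the question's code:
var params = {
  Bucket: bucket,
  Key: uuid.v4() + '.jpg',
  Expires: 15 * 60 // 15 minutes from now, in seconds - not an epoch timestamp
}

s3.getSignedUrl('putObject', params, function (err, url) {
  if (err) return console.error(err)
  console.log(url) // the Expires query parameter will now be a sane 32-bit epoch value
})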