Get a non-expiring S3 URL to store in DynamoDB (Flutter) - amazon-web-services

I am working on an application that uploads an image to S3 and keeps the URL in a DynamoDB table. However, the getUrl function below generates a URL that expires after a certain time. How do I get a URL with no expiry?
Future<String> getUrl() async {
  try {
    print('In getUrl');
    final String key = _uploadFileResult;
    // Amplify returns a pre-signed URL for the object at this key.
    final GetUrlResult result = await Amplify.Storage.getUrl(key: key);
    print(result.url);
    return result.url;
  } on StorageException catch (e) {
    print(e.message);
    rethrow;
  } catch (e) {
    print('GetUrl Err: $e');
    rethrow;
  }
}

All S3 pre-signed URLs have an expiration time. Pre-signed URLs exist to share private S3 objects for a limited time.
If that's a problem for you, then one option is to make the object public, if appropriate, and simply store its URL of the form https://mybucket.s3.amazonaws.com/images/cat.png.
Alternatively, write a small application that responds to a stable URL (e.g. https://myapi.mydomain.com/images/cat.png) and have that app create a pre-signed URL for the related object and issue a 302 redirect to send the client to the temporary, pre-signed URL.
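As a rough sketch of that second option, assuming an Express backend and the AWS SDK for JavaScript v3 (the region, bucket name, and route are placeholders, not part of the question):

// Minimal sketch of the 302-redirect approach; all names are assumptions.
// npm install express @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
const express = require("express");
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

const app = express();
const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

// A stable URL like /images/cat.png redirects to a fresh pre-signed URL.
app.get("/images/:key", async (req, res) => {
  const url = await getSignedUrl(
    s3,
    new GetObjectCommand({ Bucket: "mybucket", Key: `images/${req.params.key}` }),
    { expiresIn: 300 } // each redirect mints a short-lived URL
  );
  res.redirect(302, url);
});

app.listen(3000);

You would then store the stable https://myapi.mydomain.com/images/... form of the URL in DynamoDB rather than the pre-signed one.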

Related

How do I make a putObject request to a presigned URL using AWS S3

I am working with an AWS S3 bucket, trying to upload an image from a React Native project managed by Expo, with Express on the backend. I have created an s3 file on the backend that handles getting the presigned URL; this works and returns the URL to the front end inside the thunk function below from Redux Toolkit. I used axios to send the request to my server, and that works. I have tried both axios and fetch for the final PUT to the presigned URL, but when it reaches the S3 bucket the file is empty, just 200 bytes every time. When I use the same presigned URL from Postman, add an image in the binary section, and send the request, the image uploads to the bucket with no problems. When I send binary or base64 to the bucket from the RN app, it just uploads those values as text. I attempted react-native-image-picker but was having problems with that too. Any ideas would be helpful, thanks. I have included a snippet from my Redux slice; if you need more info, let me know.
redux slice projects.js
// create a project
// fancy function here ......
export const createProject = createAsyncThunk(
  "projects/createProject",
  async (postData) => {
    // sending image to s3 bucket and getting a url to store in d
    const response = await axios.get("/s3")
    // post image directly to s3 bucket
    const s3Url = await fetch(response.data.data, {
      method: "PUT",
      body: postData.image
    });
    console.log(s3Url)
    console.log(response.data.data)
    // make another request to my server to store extra data
    try {
      const response = await axios.post('/works', postData)
      return response.data.data;
    } catch (err) {
      console.log("Create projects failed: ", err)
    }
  }
)
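A common cause of this symptom in Expo/React Native is that postData.image holds a file URI or base64 string, so fetch uploads that string itself as the body. A hedged sketch of one possible fix, assuming postData.image is a local file URI from the image picker, is to read the file into a Blob before the PUT:

// Sketch: inside the thunk, convert the local file into a Blob first.
// Assumption: postData.image is a "file://..." URI from the image picker.
const presignedUrl = response.data.data;
const fileResponse = await fetch(postData.image); // fetch the local file itself
const blob = await fileResponse.blob();           // raw bytes, not a string

const s3Response = await fetch(presignedUrl, {
  method: "PUT",
  headers: { "Content-Type": "image/jpeg" }, // assumption: a JPEG was signed
  body: blob,
});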

How to efficiently allow for users to view Amazon S3 content?

I am currently creating a basic app with React Native (frontend) and Flask/MongoDB (backend). I am planning on using AWS S3 as cheap cloud storage for all the images and videos that are going to be uploaded and viewed. My current idea (and this could be totally off) is that when a user uploads content, it will go through my Flask API and then to S3 storage. When a user wants to view content, I am not sure what the plan of attack is here. Should I use my Flask API as a proxy, or is there a way to simply send a link to the content directly on S3 (which would avoid the extra traffic through my API)?
I am quite new to using AWS and if there is already a post discussing this topic, please let me know, and I'd be more than happy to take down this duplicate. I just can't seem to find anything.
Should I use my Flask API as a proxy, or is there a way to simply send a link to the content directly on S3 (which would avoid the extra traffic through my API)?
If the content is public, you just provide a URL that points directly to the file in the S3 bucket.
If the content is private, you generate a presigned URL on your backend for the file to which you want to give access. This URL should be valid for a short amount of time (for example, 15 to 30 minutes). You can regenerate it if it expires.
Moreover, you can generate a presigned URL which can be used for uploads directly from the front-end to the S3 bucket. This might be an option if you don't want the upload traffic to go through the backend or you want faster uploads.
There is an API for this in boto3; try to use it. It is not that difficult; I have done something similar and will post the code here.
I have done it like @Ervin said:
1) Frontend asks backend to generate credentials.
2) Backend sends the credentials to the frontend.
3) Frontend uploads the file to S3.
4) Frontend tells the backend it is done.
5) Backend validates that everything is OK.
6) Backend creates a link to download; you have a lot of security options.
Example of item 6): generate a presigned URL to download content.
bucket = app.config.get('BOTO3_BUCKET', None)
client = boto_flask.clients.get('s3')
params = {}
params['Bucket'] = bucket
params['Key'] = attachment_model.s3_filename
params['ResponseContentDisposition'] = 'attachment; filename={0}'.format(attachment_model.filename)
if attachment_model.mimetype is not None:
    params['ResponseContentType'] = attachment_model.mimetype
url = client.generate_presigned_url('get_object', ExpiresIn=3600, Params=params)
Example of item 2): the backend creates presigned credentials to POST your file to S3 and sends s3_credentials to the frontend.
acl_permission = 'private' if private_attachment else 'public-read'
condition = [{'acl': acl_permission},
             ["starts-with", "$key", '{0}/'.format(folder_name)],
             {'Content-Type': mimetype}]
bucket = app.config.get('BOTO3_BUCKET', None)
fields = {"acl": acl_permission, 'Bucket': bucket, 'Content-Type': mimetype}
client = boto_flask.clients.get('s3')
s3_credentials = client.generate_presigned_post(bucket, s3_filename, Fields=fields, Conditions=condition, ExpiresIn=3600)
Example of item 5): here is how the backend can check whether the file on S3 is OK.
from botocore.exceptions import ClientError

bucket = app.config.get('BOTO3_BUCKET', None)
client = boto_flask.clients.get('s3')
try:
    response = client.head_object(Bucket=bucket, Key=s3_filename)
except ClientError:
    # head_object raises for a missing key rather than returning None
    return None, None
md5 = response.get('ETag').replace('"', '')
size = response.get('ContentLength')
Here is an example of how the frontend asks for credentials, uploads the file to S3, and informs the backend when it is done. I have trimmed out a lot of application-specific code.
//frontend asking backend to create credentials, frontend will send some file metadata
AttachmentService.createPostUrl(payload).then((responseCredentials) => {
  let form = new FormData();
  Object.keys(responseCredentials.s3.fields).forEach((key) => {
    form.append(key, responseCredentials.s3.fields[key]);
  });
  form.append("file", file);
  let payload = {
    data: form,
    url: responseCredentials.s3.url
  };
  //Frontend will send file to S3
  axios.post(payload.url, payload.data).then((res) => {
    return Promise.resolve(true);
  }).then((result) => {
    //when it is done, frontend informs backend
    AttachmentService.uploadSuccess(...).then((refreshCase) => {
      //Success
    });
  });
});

S3 - Upload - how to generate a pre-signed url that gives EVERYONE read access to the object?

I'm trying to provide a pre-signed URL that, once the image is uploaded, grants the group Everyone read access to the uploaded image.
So far, I'm generating the pre-signed url with the following steps:
val req = GeneratePresignedUrlRequest(params.s3Bucket,"$uuid.jpg",HttpMethod.PUT)
req.expiration = expiration
req.addRequestParameter("x-amz-acl","public-read")
req.addRequestParameter("ContentType","image/jpeg")
val url: URL = s3Client.generatePresignedUrl(req)
But the image, once I check in S3, does not have the expected read access.
The HTTP client that performs the upload needs to include the x-amz-acl: public-read header.
In your example, you're generating a request that includes that header, but then you're generating a presigned URL from that request.
URLs don't contain HTTP headers, so whatever HTTP client you're using to perform the actual upload is not setting the header when it sends the request to the generated URL.
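For illustration, here is a minimal sketch of an uploading client that sends the header; presignedPutUrl and imageBlob are placeholders, and JavaScript stands in for whatever HTTP client actually performs the upload:

// Sketch: the uploading client must send the x-amz-acl value that was signed.
async function uploadPublic(presignedPutUrl, imageBlob) {
  const res = await fetch(presignedPutUrl, {
    method: "PUT",
    headers: {
      "x-amz-acl": "public-read",   // must match the signed request parameter
      "Content-Type": "image/jpeg", // must match what was signed
    },
    body: imageBlob, // the raw image bytes
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
}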
This simple answer works for me:
val url = getS3Connection()!!.generatePresignedUrl(
    "bucketname", "key",
    Date(Date().time + 1000 * 60 * 300)
)

AWS S3 Presigned URL with other query parameters

I create a pre-signed URL and get back something like
https://s3.amazonaws.com/MyBucket/MyItem/
?X-Amz-Security-Token=TOKEN
&X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Date=20171206T014837Z
&X-Amz-SignedHeaders=host
&X-Amz-Expires=3600
&X-Amz-Credential=CREDENTIAL
&X-Amz-Signature=SIGNATURE
I can curl this with no problem. However, if I add another query parameter, I get back a 403, i.e.
https://s3.amazonaws.com/MyBucket/MyItem/
?X-Amz-Security-Token=TOKEN
&X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Date=20171206T014837Z
&X-Amz-SignedHeaders=host
&X-Amz-Expires=3600
&X-Amz-Credential=CREDENTIAL
&X-Amz-Signature=SIGNATURE
&Foo=123
How come? Is it possible to generate a pre-signed url that supports custom queries?
It seems to be technically feasible to insert custom query parameters into a v4 pre-signed URL, before it is signed, but not all of the AWS SDKs expose a way to do this.
Here's an example of a roundabout way to do this with the AWS JavaScript SDK:
const AWS = require('aws-sdk');
var s3 = new AWS.S3({region: 'us-east-1', signatureVersion: 'v4'});
var req = s3.getObject({Bucket: 'mybucket', Key: 'mykey'});
req.on('build', () => { req.httpRequest.path += '?session=ABC123'; });
console.log(req.presign());
I've tried this with custom query parameters that begin with X- and ones that don't; both appeared to work fine. I've tried multiple query parameters (?a=1&b=2) and that worked too.
The customized pre-signed URLs work correctly (I can use them to get S3 objects) and the query parameters make it into CloudWatch Logs so can be used for correlation purposes.
Note that if you want to supply a custom expiration time, then do it as follows:
const Expires = 120;
const url = req.presign(Expires);
I'm not aware of other (non-JavaScript) SDKs that allow you to insert query parameters into the URL construction process like this so it may be a challenge to do this in other languages. I'd recommend using a small JavaScript Lambda function (or API Gateway plus Lambda function) that would simply create and return the customized pre-signed URL.
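If you go that route, a minimal sketch of such a Lambda handler, reusing the same presign trick as above (the event fields, bucket, and key are assumptions):

// Sketch of a Lambda that returns a customized pre-signed URL; names are assumed.
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1', signatureVersion: 'v4' });

exports.handler = async (event) => {
  const req = s3.getObject({ Bucket: 'mybucket', Key: event.key });
  // Append the caller-supplied query string before signing, as in the example above.
  req.on('build', () => { req.httpRequest.path += `?session=${event.session}`; });
  return { url: req.presign(300) }; // pre-signed URL valid for 300 seconds
};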
The custom query parameters are also tamper-proof. They are included in the signing of the URL so, if you tamper with them, the URL becomes invalid, yielding 403 Forbidden.
I used this code to generate your pre-signed URL. The result was:
https://s3.amazonaws.com/MyBucket/MyItem
?Foo=123
&X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Credential=AKIA...27%2Fus-east-1%2Fs3%2Faws4_request
&X-Amz-Date=20180427T0012345Z
&X-Amz-Expires=3600
&X-Amz-Signature=e3...7b
&X-Amz-SignedHeaders=host
None of this is a guarantee that this technique will continue to work, of course, if AWS changes things under the covers but for right now it seems to work and is certainly useful.
Attribution: the source of this discovery was aws-sdk-js/issues/502.
If you change, add, or remove one of the signed components, then you have to re-sign the URL.
This is part of the AWS signing design and is intended to provide a higher level of security; it is one of the reasons AWS moved from Signature Version 2 to Signature Version 4.
The signing design does not know which headers are important and which are not; tracking that across all of the AWS services would be a nightmare.
I created this solution for the Ruby SDK. It is sort of a hack, but it works as expected:
require 'aws-sdk-s3'
require 'active_support/core_ext/object/to_query.rb'

# Modified S3 pre-signer class that can inject query params into the URL
#
# Usage example:
#
#   bucket_name = "bucket_name"
#   key = "path/to/file.json"
#   filename = "download_file_name.json"
#   duration = 3600
#
#   params = {
#     bucket: bucket_name,
#     key: key,
#     response_content_disposition: "attachment; filename=#{filename}",
#     expires_in: duration
#   }
#
#   signer = S3PreSignerWithQueryParams.new({'x-your-custom-field': "banana", 'x-some-other-field': 1234})
#   url = signer.presigned_url(:get_object, params)
#
#   puts "url = #{url}"
#
class S3PreSignerWithQueryParams < Aws::S3::Presigner
  def initialize(query_params = {}, options = {})
    @query_params = query_params
    super(options)
  end

  def build_signer(cfg)
    signer = super(cfg)
    my_params = @query_params.to_h.to_query()
    signer.define_singleton_method(:presign_url,
      lambda do |options|
        options[:url].query += "&" + my_params
        super(options)
      end)
    signer
  end
end
While not documented, you can add parameters as arguments to the call to presigned_url.
obj.presigned_url(:get,
  expires_in: expires_in_sec,
  response_content_disposition: "attachment"
)
https://bucket.s3.us-east-2.amazonaws.com/file.txt?response-content-disposition=attachment&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=PUBLICKEY%2F20220309%2Fus-east-2%2Fs3%2Faws4_request&X-Amz-Date=20220309T031958Z&X-Amz-Expires=43200&X-Amz-SignedHeaders=host&X-Amz-Signature=SIGNATUREVALUE
If you are looking for the JavaScript SDK v3:
import { HttpRequest } from "@aws-sdk/protocol-http";
import { S3RequestPresigner } from "@aws-sdk/s3-request-presigner";
import { parseUrl } from "@aws-sdk/url-parser";
import { Sha256 } from "@aws-crypto/sha256-browser";
import { Hash } from "@aws-sdk/hash-node";
import { formatUrl } from "@aws-sdk/util-format-url";

// Make custom query in Record<string, string | Array<string> | null> format
const customQuery = {
  hello: "world",
};
const s3ObjectUrl = parseUrl(
  `https://${bucketName}.s3.${region}.amazonaws.com/${key}`
);
s3ObjectUrl.query = customQuery; // Insert custom query here

const presigner = new S3RequestPresigner({
  credentials,
  region,
  sha256: Hash.bind(null, "sha256"), // In Node.js
  // sha256: Sha256, // In browsers
});

// Create a GET request from the S3 url.
const url = await presigner.presign(new HttpRequest(s3ObjectUrl));
console.log("PRESIGNED URL: ", formatUrl(url));
Code template taken from: https://aws.amazon.com/blogs/developer/generate-presigned-url-modular-aws-sdk-javascript/

The best way to send files to GCS without user confirmation

I am developing an application that needs to send files to Google Cloud Storage.
The web app will have an HTML page where the user chooses files to upload.
The users do not have Google Accounts.
The number of files to send is 5 or fewer.
I do not want to send files to GAE and have GAE send them to GCS; I would like my users to upload directly to GCS.
I wrote this code for the upload:
function sentStorage() {
  var file = document.getElementById("myFile").files[0];
  url = 'https://www.googleapis.com/upload/storage/v1/b/XXX/o?uploadType=resumable&name=' + file.name;
  xhr = new XMLHttpRequest();
  var token = 'ya29.XXXXXXXXXXXXXXX';
  xhr.open('POST', url);
  xhr.setRequestHeader('Content-Type', file.type);
  // resumable
  //url = 'https://www.googleapis.com/upload/storage/v1/b/XXXXXX/o?uploadType=resumable&name=' + file.name;
  //xhr.setRequestHeader('Content-Type', 'application/json; charset=UTF-8');
  //xhr.setRequestHeader('Content-Length', file.size);
  xhr.setRequestHeader('x-goog-project-id', 'XXXXXXXXXX');
  xhr.setRequestHeader('Authorization', 'Bearer ' + token);
  xhr.send(file);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      var response = JSON.parse(xhr.responseText);
      if (xhr.status === 200) {
        alert('code 200');
      } else {
        var message = 'Error: ' + response.error.message;
        console.log(message);
        alert(message);
      }
    }
  };
}
I got service account information (from the Google Console) and generated a Bearer token for it. I used a Python script that reads the JSON account information and generates the token.
My requirement is that users do not need to confirm any Google Account information to send files; that obligation belongs to my application (users do not have Google Accounts). The HTML page should send the files directly to GCS without going through GAE or GCE, so I need to use an HTML form or JavaScript. I prefer JavaScript.
Only users of this application can upload (the application authenticates against a database), so anonymous users cannot.
My questions are:
Will this token expire? I used a service account to generate it.
Is there a better JavaScript API to do this?
Is this security solution sound, or should I use a different approach?
Sending either a refresh or an access token to an untrusted end user is very dangerous. The bearer of an access token has complete authority to act as the associated account (within the scope used to generate it) until the access token expires a few minutes later. You don't want to do that.
There are a few good alternatives. The easiest way is to create exactly the upload request you want, then sign the URL for that request using the private key of a service account. That signed URL, which will be valid for a few minutes, could then be used to upload a single object. You'll need to sign the URL on the server side before giving it to the customer. Here's the documentation on signed URLs: https://cloud.google.com/storage/docs/access-control/signed-urls
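As a rough sketch of that server-side step, assuming the Node.js @google-cloud/storage client and placeholder bucket and object names:

// Sketch: generate a V4 signed URL that lets the browser PUT a single object.
// Assumes GOOGLE_APPLICATION_CREDENTIALS points at the service account key file.
const { Storage } = require("@google-cloud/storage");

async function makeUploadUrl() {
  const storage = new Storage();
  const [url] = await storage
    .bucket("my-bucket")                    // placeholder bucket name
    .file("uploads/user-file.jpg")          // placeholder object name
    .getSignedUrl({
      version: "v4",
      action: "write",                      // permits a single PUT
      expires: Date.now() + 10 * 60 * 1000, // valid for about 10 minutes
      contentType: "image/jpeg",            // client must send this Content-Type
    });
  return url; // hand this to the browser, which PUTs the file directly to GCS
}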