How do I make a putObject request to a presigned URL using AWS S3?

I am working with an AWS S3 bucket, trying to upload an image from a React Native project managed by Expo, with Express on the backend. I have created an s3 route on the backend that handles getting the presigned URL; this works and returns the URL to the front end inside the thunk function below (from Redux Toolkit). I used axios to send the request to my server, and that works. I have tried both axios and fetch for the final PUT to the presigned URL, but when it reaches the S3 bucket there is nothing in the file, just an empty file of 200 bytes every time. When I use the same presigned URL from Postman, attach an image in the binary section, and send the request, the image uploads to the bucket with no problems. When I send binary or base64 data to the bucket from the RN app, it just uploads those values as text. I also attempted react-native-image-picker but was having problems with that too. Any ideas would be helpful, thanks. I have included a snippet from my Redux slice; if you need more info let me know.
Redux slice (projects.js):
// create a project
// fancy function here ......
export const createProject = createAsyncThunk(
  "projects/createProject",
  async (postData) => {
    // sending image to s3 bucket and getting a url to store in db
    const response = await axios.get("/s3");
    // post image directly to s3 bucket
    const s3Url = await fetch(response.data.data, {
      method: "PUT",
      body: postData.image
    });
    console.log(s3Url);
    console.log(response.data.data);
    // make another request to my server to store extra data
    try {
      const response = await axios.post("/works", postData);
      return response.data.data;
    } catch (err) {
      console.log("Create projects failed: ", err);
    }
  }
);
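For what it's worth, the usual fix in Expo is to read the picked file into a Blob first and PUT the Blob itself; PUTting the file URI or a base64 string is exactly what leaves text in the bucket. A minimal sketch, assuming postData.image is a local file URI from the image picker and the URL was presigned for a PUT of a JPEG:

const response = await axios.get("/s3");
const presignedUrl = response.data.data;

// Read the local file into a Blob; the Blob carries the raw bytes.
const fileResponse = await fetch(postData.image);
const blob = await fileResponse.blob();

await fetch(presignedUrl, {
  method: "PUT",
  headers: { "Content-Type": "image/jpeg" }, // must match the type the URL was signed for, if any
  body: blob,
});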

Related

How to submit a file to AWS Lambda via API Gateway

I have the following architecture:
(1) A front-end form, which has a file input for the user:
var data = new FormData();
data.append('username', 'USER123');
data.append('file', selectedFile); // selectedFile is a file captured from a user-submitted form

await fetch('myURL/test', {
  method: 'post',
  headers: {
    "Access-Control-Allow-Origin": "*"
  },
  body: data
}).then(res => res.json()).then(json => console.log(json))
console.log('Done')
(2) An AWS API Gateway (which has a POST route of 'myURL/test'),
(3) An AWS Lambda Integration with the following Python Code:
import json

def lambda_handler(event, context):
    # the function works and returns "Hello from Lambda" during a POST request
    # TODO implement
    print(event['body'])  # prints some base64 string
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
What I am trying to achieve: (1) the user submits the form (with a .zip file) from my front end, (2) the Lambda function receives the file, (3) the Lambda function checks that the file is the correct size and type, (4) the Lambda function uploads the file to an S3 bucket. The thing is, I am unable to receive the file in my Lambda function: printing event['body'] outputs some weird string that I'm unsure how to process.
Any tips?
Verify that the incoming request at the API Gateway is actually passing the POST body content (the file) through to the Lambda. You'll likely need to set up a mapping template along these lines:
{ "content": "$input.body" }
Here's a pretty good article showing the end-to-end setup to upload a file:
https://medium.com/swlh/upload-binary-files-to-s3-using-aws-api-gateway-with-aws-lambda-2b4ba8c70b8e
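
To round out the S3 side, here is a Node.js sketch of the Lambda (the question's handler is Python, but the flow is identical). It assumes the mapping template above delivered the file as a base64 string in event.content, and the bucket name in a hypothetical UPLOAD_BUCKET variable; a multipart form body would additionally need parsing before this step.

const AWS = require("aws-sdk");
const s3 = new AWS.S3();

exports.handler = async (event) => {
  // API Gateway hands binary payloads over as base64 text; decode
  // back to raw bytes before storing.
  const bytes = Buffer.from(event.content, "base64");

  await s3.putObject({
    Bucket: process.env.UPLOAD_BUCKET, // hypothetical env var
    Key: "uploads/archive.zip",        // hypothetical fixed key for the sketch
    Body: bytes,
    ContentType: "application/zip",
  }).promise();

  return { statusCode: 200, body: JSON.stringify("Uploaded") };
};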

Uploading Base64 file to S3 signed URL

I need to upload an image to S3 using a signed URL. I have the image in a base64 string. The code below runs without throwing any error, but at the end I see a text file with base64 content in S3, not the binary image.
Can you please point out what I am missing?
Generate Signed URL (Lambda function JavaScript)
const signedUrlExpireSeconds = 60 * 100;
var url = s3.getSignedUrl("putObject", {
  Bucket: process.env.ScreenshotBucket,
  Key: s3Key,
  ContentType: "image/jpeg",
  ContentEncoding: "base64",
  Expires: signedUrlExpireSeconds,
});
Upload to S3 (Java Code)
HttpRequest request = HttpRequest.newBuilder().PUT(HttpRequest.BodyPublishers.ofString(body))
    .header("Content-Encoding", "base64").header("Content-Type", "image/jpeg").uri(URI.create(url)).build();
HttpResponse<String> response = httpClient.send(request, HttpResponse.BodyHandlers.ofString());
if (response.statusCode() != 200) {
    throw new Exception(response.body());
}
I am not familiar with the AWS JavaScript SDK. But it seems that setting the 'Content-Type' metadata of the object (not the Content-Type of the putObject HTTP request) to 'image/jpeg' should do the trick.
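A sketch of that idea end to end in Node.js, reusing the getSignedUrl call from the question (the key name is hypothetical): sign for a plain JPEG without the ContentEncoding: "base64" field, then PUT the decoded bytes with a matching Content-Type header.

const AWS = require("aws-sdk");
const axios = require("axios");
const s3 = new AWS.S3();

async function uploadScreenshot(base64Body) {
  const url = s3.getSignedUrl("putObject", {
    Bucket: process.env.ScreenshotBucket,
    Key: "screenshots/example.jpg", // hypothetical key
    ContentType: "image/jpeg",
    Expires: 60 * 100,
  });

  // Decode the base64 string first; PUTting the string itself is
  // what leaves base64 text in the bucket.
  const bytes = Buffer.from(base64Body, "base64");
  await axios.put(url, bytes, { headers: { "Content-Type": "image/jpeg" } });
}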
Fixed it while just playing around with the combinations.
HttpRequest request = HttpRequest.newBuilder().PUT(HttpRequest.BodyPublishers.ofString(body))
Changed to
HttpRequest request = HttpRequest.newBuilder().PUT(HttpRequest.BodyPublishers.ofByteArray(body))
(Note that ofByteArray takes a byte[], so body here must be the decoded bytes, e.g. from Base64.getDecoder().decode(...), rather than the base64 string itself.)

How to efficiently allow for users to view Amazon S3 content?

I am currently creating a basic app with React Native (frontend) and Flask/MongoDB (backend). I am planning on using AWS S3 as cheap cloud storage for all the images and videos that are going to be uploaded and viewed. My current idea (and this could be totally off) is that when a user uploads content, it will go through my Flask API and then to S3 storage. When a user wants to view content, I am not sure what the plan of attack is. Should I use my Flask API as a proxy, or is there a way to simply send a link to the content directly on S3 (which would avoid the extra traffic through my API)?
I am quite new to AWS, and if there is already a post discussing this topic, please let me know and I'd be more than happy to take down this duplicate. I just can't seem to find anything.
Should I use my Flask API as a proxy, or is there a way to simply send a link to the content directly on S3 (which would avoid the extra traffic through my API)?
If the content is public, you just provide a URL that points directly to the file in the S3 bucket.
If the content is private, you generate a presigned URL on your backend for the file you want to grant access to. This URL should be valid for a short amount of time (for example, 15 or 30 minutes); you can regenerate it once it expires.
Moreover, you can generate a presigned URL that can be used for uploads directly from the front end to the S3 bucket. This might be an option if you don't want the upload traffic to go through the backend, or if you want faster uploads.
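For the private-content case, a minimal Express sketch of the backend endpoint (bucket name and route are hypothetical, aws-sdk v2):

const express = require("express");
const AWS = require("aws-sdk");

const app = express();
const s3 = new AWS.S3();

// Returns a short-lived presigned URL for a private object.
app.get("/content/:key", (req, res) => {
  const url = s3.getSignedUrl("getObject", {
    Bucket: "my-media-bucket", // hypothetical bucket
    Key: req.params.key,
    Expires: 15 * 60,          // 15 minutes, as suggested above
  });
  res.json({ url });
});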
There is an API for this, boto3; try using it.
It is not so difficult. I have done something similar, and will post the code here.
I did it as #Ervin said:
1. Frontend asks backend to generate credentials.
2. Backend sends the credentials to the frontend.
3. Frontend uploads the file to S3.
4. Frontend tells the backend it is done.
5. Backend validates that everything is OK.
6. Backend creates a link to download the file; you have a lot of security options here.
Example of item 6: generating a presigned URL to download content.
bucket = app.config.get('BOTO3_BUCKET', None)
client = boto_flask.clients.get('s3')

params = {
    'Bucket': bucket,
    'Key': attachment_model.s3_filename,
    'ResponseContentDisposition': 'attachment; filename={0}'.format(attachment_model.filename),
}
if attachment_model.mimetype is not None:
    params['ResponseContentType'] = attachment_model.mimetype

url = client.generate_presigned_url('get_object', ExpiresIn=3600, Params=params)
Example of item 2: the backend creates presigned credentials for POSTing the file to S3 and sends s3_credentials to the frontend.
acl_permission = 'private' if private_attachment else 'public-read'
condition = [
    {'acl': acl_permission},
    ["starts-with", "$key", '{0}/'.format(folder_name)],
    {'Content-Type': mimetype},
]
bucket = app.config.get('BOTO3_BUCKET', None)
fields = {'acl': acl_permission, 'Bucket': bucket, 'Content-Type': mimetype}
client = boto_flask.clients.get('s3')
s3_credentials = client.generate_presigned_post(bucket, s3_filename, Fields=fields, Conditions=condition, ExpiresIn=3600)
Example of item 5: how the backend can check that the file on S3 is OK.
from botocore.exceptions import ClientError

bucket = app.config.get('BOTO3_BUCKET', None)
client = boto_flask.clients.get('s3')
try:
    response = client.head_object(Bucket=bucket, Key=s3_filename)
except ClientError:
    # head_object raises (rather than returning None) when the key does not exist
    return None, None
md5 = response.get('ETag').replace('"', '')
size = response.get('ContentLength')
Here is an example of how the frontend asks for credentials, uploads the file to S3, and tells the backend when it is done.
I removed a lot of app-specific code.
// frontend asks backend to create credentials, sending some file metadata
AttachmentService.createPostUrl(payload).then((responseCredentials) => {
  let form = new FormData();
  // copy every presigned field into the form before appending the file
  Object.keys(responseCredentials.s3.fields).forEach(key => {
    form.append(key, responseCredentials.s3.fields[key]);
  });
  form.append("file", file);

  let uploadPayload = { // renamed to avoid shadowing the outer payload
    data: form,
    url: responseCredentials.s3.url
  };
  // frontend sends the file to S3
  axios.post(uploadPayload.url, uploadPayload.data).then((res) => {
    return Promise.resolve(true);
  }).then((result) => {
    // when it is done, frontend informs the backend
    AttachmentService.uploadSuccess(...).then((refreshCase) => {
      // Success
    });
  });
});

Get a non-expiring S3 URL to store in DynamoDB (Flutter)

I am working on an application that needs to upload an image to S3 and keep the URL in a DynamoDB table. However, the getUrl function below generates a URL that is only valid for a certain time before it expires. How do I get a URL with no expiry?
Future<String> getUrl() async {
  try {
    print('In getUrl');
    String key = _uploadFileResult;
    try {
      GetUrlResult result = await Amplify.Storage.getUrl(key: key);
      print(result.url);
      return result.url;
    } on StorageException catch (e) {
      print(e.message);
    }
  } catch (e) {
    print('GetUrl Err: ' + e.toString());
  }
}
All S3 pre-signed URLs have an expiration time. Pre-signed URLs exist to share private S3 objects for a limited time.
If that's a problem for you then one option is to make the object public, if appropriate, and simply store its URL of the form https://mybucket.s3.amazonaws.com/images/cat.png.
Alternatively, write a small application that responds to a specific URL (e.g. https://myapi.mydomain.com/images/cat.png) and have that app create a pre-signed URL for the related object and issue a 302 redirect to send the client to the temporary, pre-signed URL.
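A minimal Express sketch of that redirect approach, assuming aws-sdk v2 and hypothetical bucket and route names:

const express = require("express");
const AWS = require("aws-sdk");

const app = express();
const s3 = new AWS.S3();

// Stable URL for clients; every request mints a fresh pre-signed
// URL and 302-redirects to it.
app.get("/images/:key", (req, res) => {
  const signedUrl = s3.getSignedUrl("getObject", {
    Bucket: "mybucket", // hypothetical
    Key: "images/" + req.params.key,
    Expires: 60, // short-lived; the client follows the redirect immediately
  });
  res.redirect(302, signedUrl);
});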

The best way to send files to GCS without user confirmation

I am developing an application that needs to send files to Google Cloud Storage.
The web app will have an HTML page where the user chooses the files to upload.
The users do not have Google Accounts.
The number of files to send is 5 or fewer.
I do not want to send the files to GAE and have GAE forward them to GCS; I would like my users to upload directly to GCS.
Here is my upload code:
function sentStorage() {
  var file = document.getElementById("myFile").files[0];
  url = 'https://www.googleapis.com/upload/storage/v1/b/XXX/o?uploadType=resumable&name=' + file.name;
  xhr = new XMLHttpRequest();
  var token = 'ya29.XXXXXXXXXXXXXXX';
  xhr.open('POST', url);
  xhr.setRequestHeader('Content-Type', file.type);
  // resumable
  //url = 'https://www.googleapis.com/upload/storage/v1/b/XXXXXX/o?uploadType=resumable&name=' + file.name;
  //xhr.setRequestHeader('Content-Type', 'application/json; charset=UTF-8');
  //xhr.setRequestHeader('Content-Length', file.size);
  xhr.setRequestHeader('x-goog-project-id', 'XXXXXXXXXX');
  xhr.setRequestHeader('Authorization', 'Bearer ' + token);
  xhr.send(file);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      var response = JSON.parse(xhr.responseText);
      if (xhr.status === 200) {
        alert('code 200');
      } else {
        var message = 'Error: ' + response.error.message;
        console.log(message);
        alert(message);
      }
    }
  };
}
I got the service account information from the Google Console and generated a Bearer token for it, using a Python script that reads the JSON account information and generates the token.
My requirement is that users do not have to confirm any Google Account information to send files; that obligation belongs to my application (users do not have Google Accounts). The HTML page sends the files directly to GCS without going through GAE or GCE, so I need to use an HTML form or JavaScript; I prefer JavaScript.
Only users of this application can upload (the application has database-backed authentication), so anonymous users cannot do it.
My questions are:
Will this token expire? I used a service account to generate it.
Is there a better JavaScript API for doing this?
Is this security approach sound, or should I use a different one?
Sending either a refresh or an access token to an untrusted end user is very dangerous. The bearer of an access token has complete authority to act as the associated account (within the scope used to generate it) until the access token expires a few minutes later. You don't want to do that.
There are a few good alternatives. The easiest way is to create exactly the upload request you want, then sign the URL for that request using the private key of a service account. That signed URL, which will be valid for a few minutes, could then be used to upload a single object. You'll need to sign the URL on the server side before giving it to the customer. Here's the documentation on signed URLs: https://cloud.google.com/storage/docs/access-control/signed-urls
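A sketch of that approach with the Node.js @google-cloud/storage client (bucket and file names are hypothetical). The URL is signed server-side with the service account's private key, which never reaches the browser:

const { Storage } = require("@google-cloud/storage");

const storage = new Storage({ keyFilename: "service-account.json" });

async function getUploadUrl(objectName, contentType) {
  // V4 signed URL allowing a single PUT upload for 15 minutes.
  const [url] = await storage
    .bucket("my-upload-bucket") // hypothetical
    .file(objectName)
    .getSignedUrl({
      version: "v4",
      action: "write",
      expires: Date.now() + 15 * 60 * 1000,
      contentType, // the client must send the same Content-Type header
    });
  return url;
}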