How to upload an image using the AWS API Gateway generated SDK

I am working on uploading an image to S3 using only API Gateway. The workflow works and I can upload a file to S3, but the encoding/decoding of the file fails. Uploading an image to S3 through Postman works fine, but I don't know what it does differently.
I am using the JS SDK generated by API Gateway.
Here is the uploading function:
function uploadFile(file) {
  const reader = new FileReader();
  reader.onload = () => {
    var params = {
      'key': file.name,
    };
    // reader.result is a data URL ("data:image/jpeg;base64,...");
    // keep only the base64 payload after the comma
    var body = reader.result.split(',')[1];
    var additionalParams = {
      headers: {
        'Content-Type': file.type,
      }
    };
    sdk.uploadPut(params, body, additionalParams)
      .then(function (result) {
        console.log('this is upload result', result);
      }).catch(function (result) {
        console.log('this is upload error', result);
      });
  };
  reader.readAsDataURL(file);
}
And this is the API Gateway integration request (screenshot omitted).
It seems I am uploading a base64 string to the .jpg object rather than the raw data.
Can I get some tips on this?
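One thing worth trying (a sketch of an assumption on my part, not a confirmed fix): skip the base64 step entirely and send the raw bytes. This only helps if the API's binaryMediaTypes is configured so API Gateway passes the body through to S3 unmodified; the sdk.uploadPut call shape is taken from the question, everything else here is assumed.
function uploadFileBinary(file) {
  const reader = new FileReader();
  reader.onload = () => {
    const params = { 'key': file.name };
    const additionalParams = { headers: { 'Content-Type': file.type } };
    // reader.result is an ArrayBuffer with the raw file bytes.
    // Assumes the API's binaryMediaTypes covers file.type, so the
    // body is not re-encoded as text on its way to S3.
    sdk.uploadPut(params, reader.result, additionalParams)
      .then((result) => console.log('upload result', result))
      .catch((err) => console.log('upload error', err));
  };
  reader.readAsArrayBuffer(file);
}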


Is there a way to do a multipart upload via the browser using a generated presigned URL?
Angular - Multipart AWS Pre-signed URL
Example:
https://multipart-aws-presigned.stackblitz.io/
https://stackblitz.com/edit/multipart-aws-presigned?file=src/app/app.component.html
Download backend:
https://www.dropbox.com/s/9tm8w3ujaqbo017/serverless-multipart-aws-presigned.tar.gz?dl=0
To upload large files into an S3 bucket using pre-signed URLs, it is necessary to use multipart upload: the file is split into many parts, which can then be uploaded in parallel.
Here is a basic example of the backend and the frontend.
Backend (Serverless TypeScript)
import * as AWS from 'aws-sdk';
import { APIGatewayProxyHandler } from 'aws-lambda';

const AWSData = {
  accessKeyId: 'Access Key',
  secretAccessKey: 'Secret Access Key'
};
There are 3 endpoints.
Endpoint 1: /start-upload
Asks S3 to start the multipart upload; the response is an UploadId that must accompany each part to be uploaded.
export const start: APIGatewayProxyHandler = async (event, _context) => {
  const params = {
    Bucket: event.queryStringParameters.bucket, /* Bucket name */
    Key: event.queryStringParameters.fileName /* File name */
  };
  const s3 = new AWS.S3(AWSData);
  const res = await s3.createMultipartUpload(params).promise();
  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
    },
    body: JSON.stringify({
      data: {
        uploadId: res.UploadId
      }
    })
  };
};
Endpoint 2: /get-upload-url
Creates a pre-signed URL for each part that the file was split into.
export const uploadUrl: APIGatewayProxyHandler = async (event, _context) => {
  const params = {
    Bucket: event.queryStringParameters.bucket, /* Bucket name */
    Key: event.queryStringParameters.fileName, /* File name */
    PartNumber: parseInt(event.queryStringParameters.partNumber, 10), /* Part to create pre-signed url for */
    UploadId: event.queryStringParameters.uploadId /* UploadId from Endpoint 1 response */
  };
  const s3 = new AWS.S3(AWSData);
  // getSignedUrl is synchronous; getSignedUrlPromise returns a promise we can await
  const res = await s3.getSignedUrlPromise('uploadPart', params);
  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
    },
    body: JSON.stringify(res)
  };
};
Endpoint 3: /complete-upload
After uploading all the parts of the file, it is necessary to tell S3 that they have all been uploaded, so that it assembles the object correctly.
export const completeUpload: APIGatewayProxyHandler = async (event, _context) => {
  // Parse the post body
  const bodyData = JSON.parse(event.body);
  const s3 = new AWS.S3(AWSData);
  const params: any = {
    Bucket: bodyData.bucket, /* Bucket name */
    Key: bodyData.fileName, /* File name */
    MultipartUpload: {
      Parts: bodyData.parts /* Parts uploaded */
    },
    UploadId: bodyData.uploadId /* UploadId from Endpoint 1 response */
  };
  const data = await s3.completeMultipartUpload(params).promise();
  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
      // 'Access-Control-Allow-Methods': 'OPTIONS,POST',
      // 'Access-Control-Allow-Headers': 'Content-Type',
    },
    body: JSON.stringify(data)
  };
};
Frontend (Angular 9)
The file is divided into 10MB parts:
- Having the file, the multipart upload is requested from Endpoint 1.
- With the UploadId, you divide the file into several 10MB parts and, for each one, get a pre-signed upload URL from Endpoint 2.
- A PUT is made with each part, converted to a blob, to the pre-signed URL obtained from Endpoint 2.
- When you finish uploading every part, you make a final request to Endpoint 3.
All of this is done in the example by the uploadMultipartFile function (sketched below).
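For reference, a minimal sketch of what such an uploadMultipartFile function might look like, uploading parts sequentially for brevity. The endpoint paths and the 10MB part size come from the description above; the baseUrl parameter, the fetch calls, and the JSON shapes are assumptions, since the real implementation lives in the linked StackBlitz example.
async function uploadMultipartFile(file: File, bucket: string, baseUrl: string) {
  const PART_SIZE = 10 * 1024 * 1024; // 10MB parts, as described above

  // Endpoint 1: start the multipart upload and get the UploadId
  const startRes = await fetch(
    `${baseUrl}/start-upload?bucket=${bucket}&fileName=${file.name}`);
  const { data: { uploadId } } = await startRes.json();

  const parts: { ETag: string; PartNumber: number }[] = [];
  const partCount = Math.ceil(file.size / PART_SIZE);

  for (let i = 0; i < partCount; i++) {
    const partNumber = i + 1;
    const blob = file.slice(i * PART_SIZE, (i + 1) * PART_SIZE);

    // Endpoint 2: get a pre-signed URL for this part
    const urlRes = await fetch(
      `${baseUrl}/get-upload-url?bucket=${bucket}&fileName=${file.name}` +
      `&partNumber=${partNumber}&uploadId=${uploadId}`);
    const presignedUrl: string = await urlRes.json();

    // PUT the raw part to S3; S3 answers with an ETag per part
    // (the bucket's CORS config must expose the ETag header)
    const putRes = await fetch(presignedUrl, { method: 'PUT', body: blob });
    parts.push({ ETag: putRes.headers.get('ETag')!, PartNumber: partNumber });
  }

  // Endpoint 3: tell S3 all parts are uploaded so it assembles the object
  await fetch(`${baseUrl}/complete-upload`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ bucket, fileName: file.name, uploadId, parts })
  });
}
In a real implementation the part uploads can run in parallel (e.g. Promise.all), which is the main point of splitting the file.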
I managed to achieve this in a serverless architecture by creating a Canonical Request for each part upload using Signature Version 4. You will find the document here: AWS Multipart Upload Via Presign Url
From the AWS documentation:
"For request signing, multipart upload is just a series of regular requests: you initiate the multipart upload, send one or more requests to upload parts, and finally complete the multipart upload. You sign each request individually; there is nothing special about signing a multipart upload request."
So I think you have to generate a presigned URL for each part of the multipart upload :(
What is your use case? Can't you execute a script from your server, and give S3 access to that server?

Image downloaded from AWS S3 is not opening correctly in any viewer

I created a simple API using API Gateway to upload images to my S3 bucket.
When I try to upload a jpeg file using curl, it gets uploaded to S3:
curl --request PUT -H "Content-Type: image/jpeg" --data-binary "@s.jpeg" https://173xxxxxxxf.execute-api.eu-west-2.amazonaws.com/Test/memxxxx/s.jpeg
But I am noticing multiple issues here:
- The file size is almost doubled (85KB original size, 153KB in S3).
- If I download the file from S3, I am not able to view the image in any default image viewer.
- If I upload the same image to S3 using drag & drop and then download it, it works without any issues.
What's the best way to handle this properly? (I assume the issue is with the header type.)
Edit:
I also tried to send the content as a base64-encoded string and I still face the same issue:
const https = require("https");
const fs = require("fs");

const data = base64_encode("filename.jpg");
const options = {
  hostname: "173ixxxxxxf.execute-api.eu-west-2.amazonaws.com",
  port: 443,
  path: "/Test/xxxx/filename.jpeg",
  method: "PUT",
  headers: {
    "Content-Type": "image/jpg", // note: the standard MIME type is "image/jpeg"
    "Content-Length": data.length,
  },
};

const req = https.request(options, (res) => {
  console.log(`statusCode: ${res.statusCode}`);
  res.on("data", (d) => {
    process.stdout.write(d);
  });
});

req.on("error", (error) => {
  console.error(error);
});

req.write(data);
req.end();

function base64_encode(file) {
  // read binary data and convert it to a base64-encoded string
  // (Buffer.from replaces the deprecated new Buffer())
  return Buffer.from(fs.readFileSync(file)).toString("base64");
}
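For comparison, here is a sketch of the raw-binary variant of the same request, mirroring what curl's --data-binary flag does. This alone may not fix the size doubling unless the API's binary media types are configured to pass the body through unchanged; the host and path are copied from the question, the rest is assumed.
import * as https from "https";
import * as fs from "fs";

// Send the file bytes as-is, the way curl --data-binary does
const body = fs.readFileSync("filename.jpg");
const req = https.request({
  hostname: "173ixxxxxxf.execute-api.eu-west-2.amazonaws.com",
  port: 443,
  path: "/Test/xxxx/filename.jpeg",
  method: "PUT",
  headers: {
    "Content-Type": "image/jpeg",
    "Content-Length": body.length, // byte length of the raw buffer
  },
}, (res) => console.log(`statusCode: ${res.statusCode}`));
req.on("error", console.error);
req.write(body);
req.end();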

Uploaded file to S3 via presigned URL from Flutter app, but the file is corrupted when I download it

I am working on a Flutter app where I upload an image file (PUT request) to AWS S3 using a presigned URL. The upload is successful, as I can see the file in S3, but when I click and download it from the bucket, the downloaded file is corrupted.
I am using the Dio library for uploading the file.
Uploading the image file as binary via Postman works perfectly.
uploadFileToPresignedS3(
    File payload, String fileName, String presignedURL) async {
  try {
    Dio dio = new Dio();
    // The file is wrapped in multipart/form-data here; a presigned PUT
    // stores the request body verbatim, so the form encoding ends up
    // inside the stored object
    FormData formData = new FormData.from(
        {"name": fileName, "file1": new UploadFileInfo(payload, fileName)});
    await dio.put(presignedURL, data: formData);
  } catch (ex) {
    print(ex);
  }
}
Expected: The uploaded file not to be corrupted
Actual result: The uploaded file is corrupted
Use this code to upload a file (image) to an S3 pre-signed URL using Dio and show the upload progress:
await dio.put(
  url,
  data: image.openRead(),
  options: Options(
    contentType: "image/jpeg",
    headers: {
      "Content-Length": image.lengthSync(),
    },
  ),
  onSendProgress: (int sentBytes, int totalBytes) {
    double progressPercent = sentBytes / totalBytes * 100;
    print("$progressPercent %");
  },
);
Note: Do not set the Content-Type header along with Content-Length, like this:
headers: {
  "Content-Length": image.lengthSync(),
  "Content-Type": "image/jpeg",
},
For some reason, this results in a corrupted uploaded file.
Just in case: instead of print("$progressPercent %") you can use setState() to show the progress in the UI.
Hope this helps.
To piggyback off of Rabi Roshan's comment: you need to change contentType to "application/octet-stream", and in your backend's S3 params you need to do the same.
client code
await dio.put(
  url,
  data: image.openRead(),
  options: Options(
    contentType: "application/octet-stream",
    headers: {
      "Content-Length": image.lengthSync(),
    },
  ),
  onSendProgress: (int sentBytes, int totalBytes) {
    double progressPercent = sentBytes / totalBytes * 100;
    print("$progressPercent %");
  },
);
s3 backend
var presignedUrl = s3.getSignedUrl("putObject", {
  Bucket: "your_bucket_name",
  Key: "filename.ext",
  Expires: 120, // expiration in seconds
  ContentType: "application/octet-stream", // this must be added or you will get a 403 error
});
I created this class to send an image to S3 using a pre-signed URL; I'm using the camera lib to take a photo and send it to S3.
import 'package:camera/camera.dart';
import 'package:http/http.dart';

class AwsApi {
  Future<String> uploadToSignedUrl({required XFile file, required String signedUrl}) async {
    Uri uri = Uri.parse(signedUrl);
    // PUT the raw bytes; the Content-Type must match the one the URL was signed with
    Response response = await put(uri, body: await file.readAsBytes(), headers: {"Content-Type": "image/jpg"});
    return response.body;
  }
}

Get PDF from API using AWS API Gateway and Lambda

I have an internal API that provides data in different formats by just passing the id + format. For example, if I want a PDF of a product with ID = 1, I just call the app with apiurl/latest/1.pdf.
This works fine when I am on the internal network, since the host is only available internally. To access it publicly, we have implemented authorization using API Gateway and Lambda. Lambda takes care of the authorization and returns the result just fine when I request JSON data or XML data (screenshots omitted).
Here is a sample version of the Lambda:
var request = require('request');
exports.handler = function(event, context, callback) {
  var fUrl = event.fUrl + event.pid;
  if (event.fUrl.indexOf('product') > -1) {
    fUrl = fUrl + '.' + event.format;
  }
  request({
    url: fUrl,
  }, function(error, response, body) {
    if (error) {
      return callback(error);
    } else {
      return callback(null, response.body);
    }
  });
};
But it does not work for PDF (Postman screenshots omitted; I used both Send and Download in Postman).
Any thoughts?
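One direction to investigate (my assumption, not an answer from this thread): API Gateway treats a Lambda result as text, so binary payloads like PDF get mangled unless the Lambda returns them base64-encoded with isBase64Encoded set and the API's binary media types include application/pdf (or */*). A sketch, assuming a Lambda proxy integration and that the internal API is reachable over HTTPS:
import * as https from "https";

export const handler = async (event: any) => {
  const fUrl = event.fUrl + event.pid + "." + event.format;

  // Fetch the PDF as raw bytes (no string decoding)
  const pdfBuffer: Buffer = await new Promise((resolve, reject) => {
    https.get(fUrl, (res) => {
      const chunks: Buffer[] = [];
      res.on("data", (c) => chunks.push(c));
      res.on("end", () => resolve(Buffer.concat(chunks)));
      res.on("error", reject);
    });
  });

  // API Gateway decodes this back to binary only if the API's
  // binary media types cover application/pdf (or */*)
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/pdf" },
    body: pdfBuffer.toString("base64"),
    isBase64Encoded: true,
  };
};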

PDF uploading to AWS S3 corrupted

I managed to get my generated PDF uploaded to S3 from my Node.js server. The PDF looks okay in my local folder, but when I try to access it from the AWS console, it says "Failed to load PDF document".
I have tried uploading it via both the s3.upload and s3.putObject APIs (for putObject I also used an .on('finish') check to ensure the file had been fully written before sending the request). But the file in the S3 bucket is still the same small size, 26 bytes, and cannot be loaded. Any help is greatly appreciated!
var pdfDoc = printer.createPdfKitDocument(inspectionReport);
var writeStream = fs.createWriteStream('pdfs/inspectionReport.pdf');
pdfDoc.pipe(writeStream);
pdfDoc.end();
writeStream.on('finish', function(){
  const s3 = new aws.S3();
  aws.config.loadFromPath('./modules/awsconfig.json');
  var s3Params = {
    Bucket: S3_BUCKET,
    Key: 'insp_report_test.pdf',
    // Note: this passes the literal path string, not the file contents,
    // which is why the object in S3 is only a few dozen bytes
    Body: '/pdf/inspectionReport.pdf',
    Expires: 60,
    ContentType: 'application/pdf'
  };
  s3.putObject(s3Params, function(err, res){
    if (err)
      console.log(err);
    else
      console.log(res);
  });
});
I realised that pdfDoc.end() must come before piping starts. I have also used a callback to ensure that the S3 upload is called only after the PDF write has finished. See the code below, hope it helps!
var pdfDoc = printer.createPdfKitDocument(inspectionReport);
pdfDoc.end();
async.parallel([
  function(callback){
    var writeStream = fs.createWriteStream('pdfs/inspectionReport.pdf');
    pdfDoc.pipe(writeStream);
    // Wait for the stream to flush before signalling completion
    writeStream.on('finish', function(){
      console.log('pdf write finished!');
      callback();
    });
  }
], function(err){
  const s3 = new aws.S3();
  var s3Params = {
    Bucket: S3_BUCKET,
    Key: 'insp_report_test.pdf',
    Body: pdfDoc, // stream the document itself, not a path string
    Expires: 60,
    ContentType: 'application/pdf'
  };
  s3.upload(s3Params, function(err, result){
    if (err) console.log(err);
    else console.log(result);
  });
});