Unsure why video uploaded to S3 is 0 bytes

After a few days of trying to upload a video to AWS, I have almost been able to do it successfully. The main problem I am seeing is that when I head to my S3 bucket, the file has a size of 0 B. I was hoping to find out what I might be doing wrong to cause this.
On the backend I generate a presigned URL like this:
const s3 = new AWS.S3({
  accessKeyId: ACCESSKEY_ID,
  secretAccessKey: SECRETKEY
});
const s3Params = {
  Bucket: BUCKET_NAME,
  Key: uuidv4() + '.mov',
  Expires: 60 * 10,
  ContentType: 'mov',
  ACL: 'public-read'
};
let url = await s3.getSignedUrl('putObject', s3Params);
return { url };
Once I have the upload URL, this is how I send the file on the frontend:
const uploadFileToS3 = async (uri) => {
  const type = video.uri.split('.').pop();
  const respo = await fetch(uri, {
    method: 'PUT',
    body: {
      url: video.uri,
      type,
      name: 'testing'
    },
    headers: {
      'Content-Type': type,
      'x-amz-acl': 'public-read'
    }
  });
  const some = await JSON.stringify(respo);
};
It does seem to be saving something, since I can see the object in the bucket, but I am unable to download or view it; it just shows an empty page, and it feels like the video itself was never uploaded to S3. Any pointers on where I might be going wrong?
Thank you for all the help.

You cannot specify a URL as the body when you upload a file. You need two fetches:
the first one downloads the video from video.uri
the second one uploads the video to S3, with body: blob
To download the file as a blob, use response.blob(). Then you can use that blob as the body of the upload, as in the sketch below.
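For illustration, here is a minimal sketch of that two-step flow, keeping the names from the question (uri is the presigned PUT URL, video.uri is where the file lives) and assuming the Content-Type sent here matches the ContentType used when the URL was signed:
const uploadFileToS3 = async (uri) => {
  const type = video.uri.split('.').pop();

  // First fetch: download the video itself and turn the response into a Blob
  const fileResponse = await fetch(video.uri);
  const blob = await fileResponse.blob();

  // Second fetch: PUT the blob to the presigned URL; the body is the binary data, not a URL
  const uploadResponse = await fetch(uri, {
    method: 'PUT',
    body: blob,
    headers: {
      'Content-Type': type,        // must match the ContentType the URL was signed with
      'x-amz-acl': 'public-read'   // must match the ACL the URL was signed with
    }
  });
  return uploadResponse.ok;
};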

Is it possible to upload a file over 5GB to S3 via curl with a presigned URL? [duplicate]

Is there a way to do a multipart upload via the browser using a generated presigned URL?
Angular - Multipart Aws Pre-signed URL
Example
https://multipart-aws-presigned.stackblitz.io/
https://stackblitz.com/edit/multipart-aws-presigned?file=src/app/app.component.html
Download Backend:
https://www.dropbox.com/s/9tm8w3ujaqbo017/serverless-multipart-aws-presigned.tar.gz?dl=0
To upload large files into an S3 bucket using pre-signed URLs, it is necessary to use multipart upload: the file is split into many parts, which also allows them to be uploaded in parallel.
Below is a basic example of the backend and frontend.
Backend (Serverless TypeScript)
const AWSData = {
  accessKeyId: 'Access Key',
  secretAccessKey: 'Secret Access Key'
};
There are 3 endpoints
Endpoint 1: /start-upload
Asks S3 to start the multipart upload; the answer is an UploadId that will be associated with each part to be uploaded.
export const start: APIGatewayProxyHandler = async (event, _context) => {
  const params = {
    Bucket: event.queryStringParameters.bucket, /* Bucket name */
    Key: event.queryStringParameters.fileName /* File name */
  };
  const s3 = new AWS.S3(AWSData);
  const res = await s3.createMultipartUpload(params).promise();
  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
    },
    body: JSON.stringify({
      data: {
        uploadId: res.UploadId
      }
    })
  };
}
Endpoint 2: /get-upload-url
Creates a pre-signed URL for each part the file was split into.
export const uploadUrl: APIGatewayProxyHandler = async (event, _context) => {
  let params = {
    Bucket: event.queryStringParameters.bucket, /* Bucket name */
    Key: event.queryStringParameters.fileName, /* File name */
    PartNumber: event.queryStringParameters.partNumber, /* Part to create pre-signed url */
    UploadId: event.queryStringParameters.uploadId /* UploadId from Endpoint 1 response */
  };
  const s3 = new AWS.S3(AWSData);
  const res = await s3.getSignedUrl('uploadPart', params);
  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
    },
    body: JSON.stringify(res)
  };
}
Endpoint 3: /complete-upload
After uploading all the parts of the file, it is necessary to tell S3 that they have all been uploaded so that it assembles the object correctly.
export const completeUpload: APIGatewayProxyHandler = async (event, _context) => {
  // Parse the post body
  const bodyData = JSON.parse(event.body);
  const s3 = new AWS.S3(AWSData);
  const params: any = {
    Bucket: bodyData.bucket, /* Bucket name */
    Key: bodyData.fileName, /* File name */
    MultipartUpload: {
      Parts: bodyData.parts /* Parts uploaded */
    },
    UploadId: bodyData.uploadId /* UploadId from Endpoint 1 response */
  }
  const data = await s3.completeMultipartUpload(params).promise();
  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
      // 'Access-Control-Allow-Methods': 'OPTIONS,POST',
      // 'Access-Control-Allow-Headers': 'Content-Type',
    },
    body: JSON.stringify(data)
  };
}
Frontend (Angular 9)
The file is divided into 10MB parts.
Once you have the file, the multipart upload is requested from Endpoint 1.
With the UploadId, you split the file into several 10MB parts and, for each one, get its pre-signed upload URL from Endpoint 2.
A PUT is made to the pre-signed URL obtained from Endpoint 2, with the part converted to a blob as the body.
When you have finished uploading every part, you make a final request to Endpoint 3.
In the linked example, all of this is done by the uploadMultipartFile function; a simplified sketch of the same flow is shown below.
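As a rough illustration of that flow (not the code from the StackBlitz example), here is a minimal browser-side sketch in plain JavaScript; the endpoint paths follow the three endpoints above, but the base URL, error handling, and sequential (rather than parallel) part upload are assumptions:
const PART_SIZE = 10 * 1024 * 1024; // 10MB parts, as described above
const API = 'https://your-api';     // assumed base URL for the three endpoints

async function uploadMultipartFile(file, bucket) {
  // Endpoint 1: start the multipart upload and get the UploadId
  const startRes = await fetch(`${API}/start-upload?bucket=${bucket}&fileName=${file.name}`);
  const { data: { uploadId } } = await startRes.json();

  // Split the file into 10MB parts and upload each one
  const parts = [];
  const partCount = Math.ceil(file.size / PART_SIZE);
  for (let i = 0; i < partCount; i++) {
    const blob = file.slice(i * PART_SIZE, (i + 1) * PART_SIZE);

    // Endpoint 2: get a pre-signed URL for this part (part numbers start at 1)
    const urlRes = await fetch(
      `${API}/get-upload-url?bucket=${bucket}&fileName=${file.name}` +
      `&partNumber=${i + 1}&uploadId=${uploadId}`
    );
    const presignedUrl = await urlRes.json();

    // PUT the part (as a blob) to the pre-signed URL; S3 returns an ETag per part.
    // Note: the bucket's CORS configuration must expose the ETag header for it to be readable.
    const putRes = await fetch(presignedUrl, { method: 'PUT', body: blob });
    parts.push({ ETag: putRes.headers.get('ETag'), PartNumber: i + 1 });
  }

  // Endpoint 3: tell S3 all parts are uploaded so it assembles the object
  await fetch(`${API}/complete-upload`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ bucket, fileName: file.name, uploadId, parts })
  });
}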
I managed to achieve this in a serverless architecture by creating a Canonical Request for each part upload using Signature Version 4. You will find the document here: AWS Multipart Upload Via Presign Url
From the AWS documentation:
For request signing, multipart upload is just a series of regular requests. You initiate the multipart upload, send one or more requests to upload parts, and finally complete the multipart upload. You sign each request individually; there is nothing special about signing multipart upload requests.
So I think you would have to generate a presigned URL for each part of the multipart upload :(
What is your use case? Can't you execute a script from your server and give S3 access to that server?
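To make that concrete, here is a minimal Node.js sketch (AWS SDK v2) of what "a presigned URL for each part" looks like on the backend; the function name and parameters are placeholders, and uploadId is assumed to come from a prior createMultipartUpload call:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Returns one presigned URL per part of an existing multipart upload.
// partCount is however many chunks the client split the file into.
function getPartUrls(bucket, key, uploadId, partCount) {
  const urls = [];
  for (let partNumber = 1; partNumber <= partCount; partNumber++) {
    urls.push(s3.getSignedUrl('uploadPart', {
      Bucket: bucket,
      Key: key,
      UploadId: uploadId,
      PartNumber: partNumber,
      Expires: 60 * 10 // each URL is only valid for 10 minutes
    }));
  }
  return urls;
}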

Url Image from S3 not displaying the image

I am trying to upload an image to S3 through GraphQL using the apollo-upload-client library, which just gives the ability to send images through a GraphQL query.
The image is being stored in the S3 bucket, but when I try to read the Location URL it doesn't seem to work. When I load the URL with an <img src="img_url" /> it just shows a broken image.
And when I try to manually open the link, it just automatically downloads a strange text file full of weird symbols.
This is what the upload looks like:
export async function uploadImageResolver(
  _parent,
  { file }: MutationUploadImageArgs,
  context: Context,
): Promise<string> {
  // identify(context);
  const { createReadStream, filename, mimetype } = await file;
  const response = await s3
    .upload({
      ACL: 'public-read',
      Bucket: environment.bucketName,
      Body: createReadStream(),
      Key: uuid(),
      ContentType: mimetype,
    })
    .promise();
  return response.Location;
}
An example of the File object looks like this:
{
  filename: 'Screenshot 2021-06-15 at 13.18.10.png',
  mimetype: 'image/png',
  encoding: '7bit',
  createReadStream: [Function: createReadStream]
}
What am I doing wrong? It returns an actual S3 link, but the link itself doesn't display any image, and when I upload the same image to S3 manually it works just fine. Thanks in advance for any advice!
So after deeper research, it seems that the problem is with the Serverless framework, specifically with serverless-offline: it doesn't allow transport of binary data.
So I tried to convert the createReadStream to a base64 string, but that didn't work either.
This is the attempt:
export async function uploadImageResolver(
  _parent,
  { file }: MutationUploadImageArgs,
  context: Context,
): Promise<string> {
  const { createReadStream, filename, mimetype } = await file;
  const response = await s3
    .upload({
      ACL: 'public-read',
      Bucket: environment.bucketName,
      Body: (await stream2buffer(createReadStream())).toString('base64'),
      Key: `${uuid()}${extname(filename)}`,
      ContentEncoding: 'base64',
      ContentType: mimetype // image/jpg, image/png, ...
    })
    .promise();
  return response.Location;
}

async function stream2buffer(stream: Stream): Promise<Buffer> {
  return new Promise<Buffer>((resolve, reject) => {
    let _buf = Array<any>();
    stream.on('data', (chunk) => _buf.push(chunk));
    stream.on('end', () => resolve(Buffer.concat(_buf)));
    stream.on('error', (err) => reject(`error converting stream - ${err}`));
  });
}
I also tried to install the serverless-apigw-binary plugin, but that didn't work either.
plugins:
  - serverless-webpack
  - serverless-offline
  - serverless-apigw-binary
It is still uploading the same corrupted image to S3.
These are some posts with the same problem, but none of them got a solution:
https://stackoverflow.com/questions/61050997/file-uploaded-successfully-to-s3-using-serverless-api-but-it-doesnt-opencorrup
Uploading image to s3 from AWS Lambda with NodeJS results in corrupted file
UPDATE: So it is definitely not a problem with my s3.upload function or with S3 itself. It seems that the issue is getting the image to the server, and I am pretty sure it has something to do with Serverless.
I've created a small function that just receives the image and writes it to a local folder, and the image it receives is already corrupted:
export async function uploadImageResolver(
  _parent,
  { file }: MutationUploadImageArgs,
  context: Context,
): Promise<string> {
  // identify(context);
  const { createReadStream, filename } = await file;
  createReadStream().pipe(
    createWriteStream(__dirname + `/../../../images/${filename}`),
  );
  return '';
}
UPDATE 2: I figured out that it works when deployed, so it has to be something with serverless-offline.

S3 upload using javascript sdk speed calculation

I am using the AWS JavaScript SDK to upload a file to S3 using multipart upload.
// Use S3 ManagedUpload class as it supports multipart uploads
var upload = new AWS.S3.ManagedUpload({
  params: {
    Bucket: albumBucketName,
    Key: photoKey,
    Body: file,
    ACL: "public-read"
  }
});
But I would also like to show the speed at which the upload is happening in the UI. The documentation doesn't provide any API to get the speed, so I would like to know how to calculate the upload speed myself.
Regards,
Achuth
.on('httpUploadProgress', function(e) {
  console.log(e.loaded);
});
You can use the .on('httpUploadProgress') listener; e.loaded gives you the number of bytes uploaded so far, which can be used to calculate the percentage of the upload.
new AWS.S3.ManagedUpload({
  params: {
    Bucket: albumBucketName,
    Key: photoKey,
    Body: file,
    ACL: "public-read"
  }
}).on('httpUploadProgress', function(e) {
  console.log(e.loaded);
});
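To get an actual speed rather than just a percentage, you can compare successive progress events against the time elapsed between them. A rough sketch, assuming upload is the ManagedUpload instance from the question (the MB/s formatting is just illustrative):
var lastLoaded = 0;
var lastTime = Date.now();

upload.on('httpUploadProgress', function (e) {
  var now = Date.now();
  var seconds = (now - lastTime) / 1000;
  if (seconds > 0) {
    // bytes uploaded since the previous event, divided by the time elapsed
    var bytesPerSecond = (e.loaded - lastLoaded) / seconds;
    var percent = e.total ? Math.round((e.loaded / e.total) * 100) : null;
    console.log((bytesPerSecond / (1024 * 1024)).toFixed(2) + ' MB/s', percent + '%');
  }
  lastLoaded = e.loaded;
  lastTime = now;
});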

AWS Lambda function for upload/putObject image works only on local machine

I have a Lambda function which uses 'request' to get a stream of a file by URL and is supposed to upload it to a bucket on S3.
It works perfectly on my local machine using Node, but not inside the Lambda.
After running the Lambda, I have an empty file with the name I wanted.
Stuff you should know
The lambda function is async
The node version is 8.10
In the example you see putObject, but I have also tried with upload
Even when adding a manual sleep of 90-120 seconds to let the lambda run, the file is not uploaded
I tried using context.succeed or callback('some result'), but it still did not work properly
This is the relevant part of the code
module.exports.handler = async(event, context, callback) => {
const path = 'bucketToUpload';
const name = 'imageFileName.jpg';
let options = {
uri: responseUrl, // This is the url of the image
encoding: null
};
let reqRes = await request(options); // Here I have the stream
let awsPutRes = await s3.client.putObject({
Body: reqRes.body,
Key: name,
Bucket: path
}).promise();
};
I would really appreciate any help or direction on this issue.
It looks like your issue is in the S3 putObject call. Use s3.putObject instead of s3.client.putObject:
const s3 = new AWS.S3();
s3.putObject({
  Bucket: process.env.BUCKET,
  Key: event.key,
  Body: buffer,
}).promise()
Also, this link to Upload image on s3 from URL may help you: here
So after some more tries I was able to solve the issue.
Here is part of the code:
async function upload(fileStream, fileName, bucketName) {
  let params = {
    Body: fileStream,
    Key: fileName,
    Bucket: bucketName
  };
  await s3.client.putObject(params).promise();
}

module.exports.handler = async (event, context) => {
  try {
    let s3UploadParams = {
      uri: imageDownloadUrl,
      encoding: null
    };
    let imageFileStream = await request(s3UploadParams);
    await upload(imageFileStream, s3FileName, bucketName);
  } catch (err) {
    context.fail(null, 'Error trying to upload to aws' + err);
  }
}
Instead of using the 'request' lib, I am using 'request-promise-native' to get the stream from the URL.

PDF uploading to AWS S3 corrupted

I managed to get my generated PDF uploaded to S3 from my Node.js server. The PDF looks okay in my local folder, but when I try to access it from the AWS console, it says "Failed to load PDF document".
I have tried uploading it via both the s3.upload and s3.putObject APIs (for putObject I also used an .on('finish') check to ensure that the file has been fully written before sending the request), but the file in the S3 bucket is still the same small size, 26 bytes, and cannot be loaded. Any help is greatly appreciated!
var pdfDoc = printer.createPdfKitDocument(inspectionReport);
var writeStream = fs.createWriteStream('pdfs/inspectionReport.pdf');
pdfDoc.pipe(writeStream);
pdfDoc.end();
writeStream.on('finish', function(){
  const s3 = new aws.S3();
  aws.config.loadFromPath('./modules/awsconfig.json');
  var s3Params = {
    Bucket: S3_BUCKET,
    Key: 'insp_report_test.pdf',
    Body: '/pdf/inspectionReport.pdf',
    Expires: 60,
    ContentType: 'application/pdf'
  };
  s3.putObject(s3Params, function(err, res){
    if(err)
      console.log(err);
    else
      console.log(res);
  });
});
I realised that pdfDoc.end() must come before piping starts. I have also used a callback to ensure that the S3 upload is called after the PDF write is finished. See the code below, hope it helps!
var pdfDoc = printer.createPdfKitDocument(inspectionReport);
pdfDoc.end();
async.parallel([
  function(callback){
    var writeStream = fs.createWriteStream('pdfs/inspectionReport.pdf');
    pdfDoc.pipe(writeStream);
    console.log('pdf write finished!');
    callback();
  }
], function(err){
  const s3 = new aws.S3();
  var s3Params = {
    Bucket: S3_BUCKET,
    Key: 'insp_report_test.pdf',
    Body: pdfDoc,
    Expires: 60,
    ContentType: 'application/pdf'
  };
  s3.upload(s3Params, function(err, result){
    if(err) console.log(err);
    else console.log(result);
  });
});