Recording and uploading a wav to Amazon S3 - amazon-web-services

I want a user to be able to record and upload a .wav file to an S3 bucket. Using this, I was able to get it working correctly as a .webm file. I am now trying to adapt that code to use RecordRTC, a MediaRecorder bolt-on that adds .wav support.
This essentially works, in that I end up with a .wav file in my Amazon S3 bucket, but the file is corrupted. I think the main place of concern is the callback function for ondataavailable (much of the code after it can probably be ignored, but it is included just in case). The line console.log(blob); in the following code shows that the blob type is audio/webm.
Any ideas how this can be fixed?
Edit: The resulting file is actually a .webm file, according to this link, not a .wav after all! So why not? (However, my computer still shows it as a .wav in the File Inspector.)
function isConstructor(obj) {
return !!obj.prototype && !!obj.prototype.constructor.name;
}
class AudioStream {
constructor(region, IdentityPoolId, audioStoreWithBucket) {
this.region = region; //s3 region
this.IdentityPoolId = IdentityPoolId; //identity pool id
this.bucketName = audioStoreWithBucket; //audio file store
this.s3; //variable definition for s3
this.dateinfo = new Date();
this.timestampData = this.dateinfo.getTime(); //timestamp used for file uniqueness
this.etag = []; // etag is used to save the parts of the single upload file
this.recordedChunks = []; //empty Array
this.booleanStop = false; // this is for final multipart complete
this.incr = 0; // multipart requires an incremental part number so that all parts can be merged in ascending order
this.filename = this.timestampData.toString() + ".wav"; //unique filename
this.uploadId = ""; // upload id is required in multipart
this.recorder; //initializing recorder variable
this.audioConstraints = {
audio: true
};
}
audioStreamInitialize() {
var self = this;
AWS.config.region = self.region;
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
IdentityPoolId: self.IdentityPoolId,
});
self.s3 = new AWS.S3();
navigator.mediaDevices.getUserMedia(self.audioConstraints)
.then(function(stream) {
self.recorder = RecordRTC(stream, {
type: 'audio',
mimeType: 'audio/wav',
recorderType: MediaStreamRecorder,
disableLogs: true,
// get intervals based blobs
// value in milliseconds
timeSlice: 1800000,
// requires timeSlice above
// returns blob via callback function
ondataavailable: function(blob) {
console.log("ondata!")
var normalArr = [];
/*
Here we push the stream data to an array for future use.
*/
console.log(blob);
self.recordedChunks.push(blob);
normalArr.push(blob);
/*
here we create a blob from the stream data that we have received.
*/
var bigBlob = new Blob(normalArr, {
type: 'audio/wav'
});
/*
if the length of recordedChunks is 1 then it means it's the 1st part of our data.
So we call createMultipartUpload, which will return an upload id.
The upload id is used to upload the other parts of the stream.
Otherwise, it uploads a part in a multipart upload.
*/
if (self.recordedChunks.length == 1) {
self.startMultiUpload(bigBlob, self.filename)
} else {
/*
self.incr is basically a part number.
Part number of part being uploaded. This is a positive integer between 1 and 10,000.
*/
self.incr = self.incr + 1
self.continueMultiUpload(bigBlob, self.incr, self.uploadId, self.filename, self.bucketName);
}
} // end ondataavailable
});
/*
Called to handle the dataavailable event, which is periodically triggered each time timeslice milliseconds of media have been recorded
(or when the entire media has been recorded, if timeslice wasn't specified).
The event, of type BlobEvent, contains the recorded media in its data property.
You can then collect and act upon that recorded media data using this event handler.
*/
});
}
disableAllButton() {
//$("#formdata button[type=button]").attr("disabled", "disabled");
}
enableAllButton() {
//$("#formdata button[type=button]").removeAttr("disabled");
}
/*
The MediaRecorder method start(), which is part of the MediaStream Recording API,
begins recording media into one or more Blob objects.
You can record the entire duration of the media into a single Blob (or until you call requestData()),
or you can specify the number of milliseconds to record at a time.
Then, each time that amount of media has been recorded, an event will be delivered to let you act upon the recorded media,
while a new Blob is created to record the next slice of the media
*/
startRecording(id) {
var self = this;
//self.enableAllButton();
//$("#record_q1").attr("disabled", "disabled");
/*
1800000 is the number of milliseconds to record into each Blob.
If this parameter isn't included, the entire media duration is recorded into a single Blob unless the requestData()
method is called to obtain the Blob and trigger the creation of a new Blob into which the media continues to be recorded.
*/
/*
PLEASE NOTE YOU CAN CHANGE THIS PARAM OF 1800000, but the resulting part size should be greater than or equal to 5MB,
because a multipart upload requires every part except the last to be at least 5MB
*/
//this.recorder.start(1800000);
this.recorder.startRecording();
Shiny.setInputValue("timecode", self.filename);
}
stopRecording(id) {
var self = this;
self.recorder.stopRecording();
}
pauseRecording(id) {
var self = this;
self.recorder.pauseRecording();
//$("#pause_q1").addClass("hide");
//$("#resume_q1").removeClass("hide");
}
resumeRecording(id) {
var self = this;
self.recorder.resumeRecording();
//$("#resume_q1").addClass("hide");
//$("#pause_q1").removeClass("hide");
}
/*
Initiates a multipart upload and returns an upload ID.
Upload id is used to upload the other parts of the stream
*/
startMultiUpload(blob, filename) {
var self = this;
var audioBlob = blob;
var params = {
Bucket: self.bucketName,
Key: filename,
ContentType: 'audio/wav',
ACL: 'private',
};
self.s3.createMultipartUpload(params, function(err, data) {
if (err) {
console.log(err, err.stack); // an error occurred
} else {
self.uploadId = data.UploadId
self.incr = 1;
self.continueMultiUpload(audioBlob, self.incr, self.uploadId, self.filename, self.bucketName);
}
});
}
continueMultiUpload(audioBlob, PartNumber, uploadId, key, bucketName) {
var self = this;
var params = {
Body: audioBlob,
Bucket: bucketName,
Key: key,
PartNumber: PartNumber,
UploadId: uploadId
};
console.log(params);
self.s3.uploadPart(params, function(err, data) {
if (err) {
console.log(err, err.stack)
} // an error occurred
else {
/*
Once the part of data is uploaded we get an Entity tag for the uploaded object (ETag),
which is used later when we complete our multipart upload.
*/
self.etag.push(data.ETag);
if (self.booleanStop === true) {
self.completeMultiUpload();
}
}
});
}
/*
Completes a multipart upload by assembling previously uploaded parts.
*/
completeMultiUpload() {
var self = this;
var outputTag = [];
/*
here we are constructing the Etag data in the required format.
*/
self.etag.forEach((data, index) => {
const obj = {
ETag: data,
PartNumber: ++index
};
outputTag.push(obj);
});
var params = {
Bucket: self.bucketName, // required
Key: self.filename, // required
UploadId: self.uploadId, // required
MultipartUpload: {
Parts: outputTag
}
};
self.s3.completeMultipartUpload(params, function(err, data) {
if (err) {
console.log(err, err.stack);
} // an error occurred
else {
// initialize variable back to normal
self.etag = [], self.recordedChunks = [];
self.uploadId = "";
self.booleanStop = false;
//self.disableAllButton();
self.removeLoader();
console.log("sent!");
}
});
}
/*
set loader
*/
setLoader() {
//$("#kc-container").addClass("overlay");
//$(".preloader-wrapper.big.active.loader").removeClass("hide");
}
/*
remove loader
*/
removeLoader() {
// $("#kc-container").removeClass("overlay");
//$(".preloader-wrapper.big.active.loader").addClass("hide");
}
getFilename() {
return this.filename;
}
}
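As a side note on the Edit above: rather than trusting the extension or the Blob's reported MIME type, the container can be checked from the first bytes of each blob inside ondataavailable. A real WAV file starts with the ASCII bytes "RIFF", while WebM starts with the EBML signature 0x1A 0x45 0xDF 0xA3. The helper below is only a diagnostic sketch and is not part of the original code:
// Diagnostic sketch: log whether a recorded blob is really WAV (RIFF)
// or WebM (EBML), independent of its MIME type or file extension.
function logBlobContainer(blob) {
  var reader = new FileReader();
  reader.onload = function() {
    var bytes = new Uint8Array(reader.result);
    var ascii = String.fromCharCode(bytes[0], bytes[1], bytes[2], bytes[3]);
    if (ascii === "RIFF") {
      console.log("Blob is a WAV (RIFF) container");
    } else if (bytes[0] === 0x1A && bytes[1] === 0x45 && bytes[2] === 0xDF && bytes[3] === 0xA3) {
      console.log("Blob is a WebM/Matroska (EBML) container");
    } else {
      console.log("Unknown container, first bytes:", bytes);
    }
  };
  // Only the first four bytes are needed for the signature check.
  reader.readAsArrayBuffer(blob.slice(0, 4));
}
Calling logBlobContainer(blob) at the top of ondataavailable shows whether the recorder is still producing WebM data that is merely being saved under a .wav name.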

Related

Can't upload a folder with a large number of files to Google Storage. I'm using "@ffmpeg-installer/ffmpeg" and @google-cloud/storage

I upload files to Google Storage using "@ffmpeg-installer/ffmpeg" and @google-cloud/storage in my Node.js app.
Step 1: the file is written to the filesystem in child processes - one process for each type of resolution (six in total).
Step 2: encryption (converting to a stream).
Step 3: upload to Google Storage.
I use the "Upload a directory to a bucket" approach to send the video from the client to the Google Cloud Storage bucket.
This works fine only with small videos.
For example, when I upload a video with a duration of one hour, it is split into chunks and in total I get more than three thousand files. The problem occurs when there are more than about 1500 files.
So in effect I upload a folder with a large number of files, but not all of these files end up in the cloud.
Maybe someone has had a similar problem and can help fix it.
const uploadFolder = async (bucketName, directoryPath, socketInstance) => {
try {
let dirCtr = 1;
let itemCtr = 0;
const fileList = [];
const onComplete = async () => {
const folderName = nanoid(46);
await Promise.all(
fileList.map(filePath => {
const fileName = path.relative(directoryPath, filePath);
const destination = `${ folderName }/${ fileName }`;
return storage
.bucket(bucketName)
.upload(filePath, { destination })
.then(
uploadResp => ({ fileName: destination, status: uploadResp[0] }),
err => ({ fileName: destination, response: err })
);
})
);
if (socketInstance) socketInstance.emit('uploadProgress', {
message: `Added files to Google bucket`,
last: false,
part: false
});
return folderName;
};
const getFiles = async directory => {
const items = await fs.readdir(directory);
dirCtr--;
itemCtr += items.length;
for(const item of items) {
const fullPath = path.join(directory, item);
const stat = await fs.stat(fullPath);
itemCtr--;
if (stat.isFile()) {
fileList.push(fullPath);
} else if (stat.isDirectory()) {
dirCtr++;
await getFiles(fullPath);
}
}
}
await getFiles(directoryPath);
return onComplete();
} catch (e) {
log.error(e.message);
throw new Error('Can\'t store folder.');
}
};
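One observation on the code above, offered only as an illustration of a common mitigation rather than a confirmed fix: onComplete maps every file into a single Promise.all, so a one-hour video fires thousands of concurrent upload requests at once. Uploading in fixed-size batches keeps the number of in-flight requests bounded. The names uploadInBatches and batchSize below are invented for the sketch; storage, bucketName, directoryPath and path are assumed to exist as in the code above:
// Illustrative only: upload fileList in batches instead of firing every
// request at once. Each batch completes before the next one starts.
const uploadInBatches = async (fileList, folderName, batchSize = 50) => {
  const results = [];
  for (let i = 0; i < fileList.length; i += batchSize) {
    const batch = fileList.slice(i, i + batchSize);
    const batchResults = await Promise.all(
      batch.map(filePath => {
        const fileName = path.relative(directoryPath, filePath);
        const destination = `${folderName}/${fileName}`;
        return storage
          .bucket(bucketName)
          .upload(filePath, { destination })
          .then(
            uploadResp => ({ fileName: destination, status: uploadResp[0] }),
            err => ({ fileName: destination, response: err })
          );
      })
    );
    results.push(...batchResults);
  }
  return results;
};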

Amazon S3 multipart upload part size via lambda

I have a few lambda functions that allow making a multipart upload to an Amazon S3 bucket. These are responsible for creating the multipart upload, then another one for each part upload, and the last one for completing the upload.
The first two seem to work fine (they respond with statusCode 200), but the last one fails. On CloudWatch, I can see an error saying 'Your proposed upload is smaller than the minimum allowed size'.
This is not true, since I'm uploading files bigger than the 5 MB minimum size specified in the docs. However, I think the issue is actually happening in every single part upload.
Why? Because each part only has 2 MB of data. In the docs, I can see that every part but the last needs to be at least 5 MB. However, when I try to upload parts bigger than 2 MB, I get a CORS error, most probably because I have exceeded the 6 MB Lambda payload limit.
Can anyone help me with this? Below I leave my client-side code, just in case you can spot an error in it.
setLoading(true);
const file = files[0];
const size = 2000000;
const extension = file.name.substring(file.name.lastIndexOf('.'));
try {
const multiStartResponse = await startMultiPartUpload({ fileType: extension });
console.log(multiStartResponse);
let part = 1;
let parts = [];
/* eslint-disable no-await-in-loop */
for (let start = 0; start < file.size; start += size) {
const chunk = file.slice(start, start + size + 1);
const textChunk = await chunk.text();
const partResponse = await uploadPart({
file: textChunk,
fileKey: multiStartResponse.data.Key,
partNumber: part,
uploadId: multiStartResponse.data.UploadId,
});
console.log(partResponse);
parts.push({ ETag: partResponse.data.ETag, PartNumber: part });
part++;
}
/* eslint-enable no-await-in-loop */
const completeResponse = await completeMultiPartUpload({
fileKey: multiStartResponse.data.Key,
uploadId: multiStartResponse.data.UploadId,
parts,
});
console.log(completeResponse);
} catch (e) {
console.log(e);
} finally {
setLoading(false);
}
It seems that uploading parts via Lambda is simply not possible, so we need to use a different approach.
Now, our startMultiPartUpload lambda returns not only an upload ID but also a bunch of signed URLs, generated with the aws-sdk S3 class, using the getSignedUrlPromise method with 'uploadPart' as the operation, as shown below:
const getSignedPartURL = (bucket, fileKey, uploadId, partNumber) =>
  s3.getSignedUrlPromise('uploadPart', {
    Bucket: bucket,
    Key: fileKey,
    UploadId: uploadId,
    PartNumber: partNumber
  });
Also, since uploading a part this way does not return an ETag (or maybe it does, but I just couldn't achieve it), we need to call the listParts method on the S3 class after uploading the parts in order to get those ETags. I'll leave my React code below:
const uploadPart = async (url, data) => {
try {
// return await uploadPartToS3(url, data);
return fetch(url, {
method: 'PUT',
body: data,
}).then((e) => e.body);
} catch (e) {
console.error(e);
throw new Error('Unknown error');
}
};
// If file is bigger than 50Mb then perform a multi part upload
const uploadMultiPart = async ({ name, size, originFileObj },
updateUploadingMedia) => {
// chunk size determines each part size. This needs to be > 5Mb
const chunkSize = 60000000;
let chunkStart = 0;
const extension = name.substring(name.lastIndexOf('.'));
const partsQuan = Math.ceil(size / chunkSize);
// Start multi part upload. This returns both uploadId and signed urls for each part.
const startResponse = await startMultiPartUpload({
fileType: extension,
chunksQuan: partsQuan,
});
console.log('start response: ', startResponse);
const {
signedURLs,
startUploadResponse: { Key, UploadId },
} = startResponse.data;
try {
let promises = [];
/* eslint-disable no-await-in-loop */
for (let i = 0; i < partsQuan; i++) {
// Split file into parts and upload each one to it's signed url
const chunk = await originFileObj.slice(chunkStart, chunkStart +
chunkSize).arrayBuffer();
chunkStart += chunkSize;
promises.push(uploadPart(signedURLs[i], chunk));
if (promises.length === 5) {
await Promise.all(promises);
promises = [];
}
}
/* eslint-enable no-await-in-loop */
// wait until every part is uploaded
await allProgress({ promises, name }, (media) => {
updateUploadingMedia(media);
});
// Get parts list to build complete request (each upload does not retrieve ETag)
const partsList = await listParts({
fileKey: Key,
uploadId: UploadId,
});
// build parts object for complete upload
const completeParts = partsList.data.Parts.map(({ PartNumber, ETag }) => ({
ETag,
PartNumber,
}));
// Complete multi part upload
completeMultiPartUpload({
fileKey: Key,
uploadId: UploadId,
parts: completeParts,
});
return Key;
} catch (e) {
console.error('ERROR', e);
const abortResponse = await abortUpload({
fileKey: Key,
uploadId: UploadId,
});
console.error(abortResponse);
}
};
Sorry for the indentation, I corrected it line by line as best as I could :).
Some considerations:
- We use 60 MB chunks because our backend took too long generating all those signed URLs for big files.
- Also, this solution is meant for uploading really big files; that's why we await every 5 parts.
However, we are still facing issues uploading huge files (about 35 GB): after uploading 100-120 parts, the fetch requests suddenly start to fail and no more parts are uploaded. If someone knows what's going on, that would be amazing. I am publishing this as an answer because I think most people will find it very useful.
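A possible simplification of the approach above, offered as an assumption rather than something verified against this setup: S3 does send an ETag response header for each presigned uploadPart PUT, but a browser can only read it if the bucket's CORS configuration lists ETag under ExposeHeaders. With that in place, the listParts round trip can be skipped by collecting the ETags directly:
// Sketch only: assumes the bucket CORS config includes ExposeHeaders: ["ETag"],
// otherwise response.headers.get('ETag') returns null in the browser.
const uploadPartWithETag = async (url, body, partNumber) => {
  const response = await fetch(url, { method: 'PUT', body });
  if (!response.ok) {
    throw new Error(`Part ${partNumber} failed with status ${response.status}`);
  }
  return {
    PartNumber: partNumber,
    // The header value includes surrounding quotes; CompleteMultipartUpload accepts them.
    ETag: response.headers.get('ETag'),
  };
};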

File upload to Amazon S3 from Salesforce LWC (without apex)

I have tried to create an LWC component whose job is to upload a file to an Amazon S3 bucket. I have configured the AWS bucket and tested it by uploading a file from Postman, but I could not upload a file from the LWC component. I was getting this error.
I am following this tutorial.
I have configured CSP Trusted Sites and CORS in Salesforce. Images below:
Here is my code:
import { LightningElement, track, wire } from "lwc";
import { getRecord } from "lightning/uiRecordApi";
import { loadScript } from "lightning/platformResourceLoader";
import AWS_SDK from "@salesforce/resourceUrl/awsjssdk";
import getAWSCredential from '@salesforce/apex/CRM_AWSUtility.getAWSCredential';
export default class FileUploadComponentLWC extends LightningElement {
/*========= Start - variable declaration =========*/
s3; //store AWS S3 object
isAwsSdkInitialized = false; //flag to check if AWS SDK initialized
@track awsSettngRecordId; //store record id of custom metadata type where AWS configurations are stored
selectedFilesToUpload; //store selected file
@track showSpinner = false; //used for when to show spinner
@track fileName; //to display the selected file name
/*========= End - variable declaration =========*/
//Called after every render of the component. This lifecycle hook is specific to Lightning Web Components,
//it isn’t from the HTML custom elements specification.
renderedCallback() {
if (this.isAwsSdkInitialized) {
return;
}
Promise.all([loadScript(this, AWS_SDK)])
.then(() => {
//For demo, hard coded the Record Id. It can dynamically be passed the record id based upon use cases
// this.awsSettngRecordId = "m012v000000FMQJ";
})
.catch(error => {
console.error("error -> " + error);
});
}
//Using wire service getting AWS configuration from Custom Metadata type based upon record id passed
@wire(getAWSCredential)
awsConfigData({ error, data }) {
if (data) {
console.log('data: ',data)
let awsS3MetadataConf = {};
let currentData = data[0]
//console.log("AWS Conf ====> " + JSON.stringify(currentData));
awsS3MetadataConf = {
s3bucketName: currentData.Bucket_Name__c,
awsAccessKeyId: currentData.Access_Key__c,
awsSecretAccessKey: currentData.Secret_Key__c,
s3RegionName: 'us-east-1'
};
this.initializeAwsSdk(awsS3MetadataConf); //Initializing AWS SDK based upon configuration data
} else if (error) {
console.error("error ====> " + JSON.stringify(error));
}
}
//Initializing AWS SDK
initializeAwsSdk(confData) {
const AWS = window.AWS;
AWS.config.update({
accessKeyId: confData.awsAccessKeyId, //Assigning access key id
secretAccessKey: confData.awsSecretAccessKey //Assigning secret access key
});
AWS.config.region = confData.s3RegionName; //Assigning region of S3 bucket
this.s3 = new AWS.S3({
apiVersion: "2006-03-01",
params: {
Bucket: confData.s3bucketName //Assigning S3 bucket name
}
});
console.log('S3: ',this.s3)
this.isAwsSdkInitialized = true;
}
//get the file name from user's selection
handleSelectedFiles(event) {
if (event.target.files.length > 0) {
this.selectedFilesToUpload = event.target.files[0];
this.fileName = event.target.files[0].name;
console.log("fileName ====> " + this.fileName);
}
}
//file upload to AWS S3 bucket
uploadToAWS() {
if (this.selectedFilesToUpload) {
console.log('uploadToAWS...')
this.showSpinner = true;
let objKey = this.selectedFilesToUpload.name
.replace(/\s+/g, "_") //each space character is being replaced with _
.toLowerCase();
console.log('objKey: ',objKey);
//starting file upload
this.s3.putObject(
{
Key: objKey,
ContentType: this.selectedFilesToUpload.type,
Body: this.selectedFilesToUpload,
ACL: "public-read"
},
err => {
if (err) {
this.showSpinner = false;
console.error(err);
} else {
this.showSpinner = false;
console.log("Success");
this.listS3Objects();
}
}
);
}
this.showSpinner = false;
console.log('uploadToAWS Finish...')
}
//listing all stored documents from S3 bucket
listS3Objects() {
console.log("AWS -> " + JSON.stringify(this.s3));
this.s3.listObjects((err, data) => {
if (err) {
console.log("Error listS3Objects", err);
} else {
console.log("Success listS3Objects", data);
}
});
}
}
Please, someone help. Thank you in advance.
Problem solved. We found the problem in our AWS configuration.
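The answer doesn't say which part of the AWS configuration was at fault, but for browser uploads like this the bucket's CORS rules are a frequent culprit. Purely as an illustration (the origin, bucket name and settings below are placeholders, not the poster's actual fix), a rule allowing PUT from a Lightning domain can be applied with the same aws-sdk:
// Hypothetical example: apply a CORS rule that lets a browser on a
// Lightning domain PUT objects into the bucket. All values are placeholders.
const applyCors = (s3, bucketName) =>
  s3.putBucketCors({
    Bucket: bucketName,
    CORSConfiguration: {
      CORSRules: [
        {
          AllowedOrigins: ["https://your-org.lightning.force.com"],
          AllowedMethods: ["GET", "PUT", "POST", "HEAD"],
          AllowedHeaders: ["*"],
          ExposeHeaders: ["ETag"],
          MaxAgeSeconds: 3000
        }
      ]
    }
  }).promise();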

How to get the thumbnail of the image uploaded to S3 in ASP.NET?

I am trying to upload large images to AWS S3 using the Multipart Upload API. From the UI, I am sending the chunks (blobs) of an image, and when the last part arrives, I complete the upload and get the uploaded file URL. It is working very nicely.
Sample Code:
public UploadPartResponse UploadChunk(Stream stream, string fileName, string uploadId, List<PartETag> eTags, int partNumber, bool lastPart)
{
stream.Position = 0;
//Step 1: build and send a multi upload request
if (partNumber == 1)
{
var initiateRequest = new InitiateMultipartUploadRequest
{
BucketName = _settings.Bucket,
Key = fileName
};
var initResponse = _s3Client.InitiateMultipartUpload(initiateRequest);
uploadId = initResponse.UploadId;
}
//Step 2: upload each chunk (this is run for every chunk unlike the other steps which are run once)
var uploadRequest = new UploadPartRequest
{
BucketName = _settings.Bucket,
Key = fileName,
UploadId = uploadId,
PartNumber = partNumber,
InputStream = stream,
IsLastPart = lastPart,
PartSize = stream.Length
};
var response = _s3Client.UploadPart(uploadRequest);
//Step 3: build and send the multipart complete request
if (lastPart)
{
eTags.Add(new PartETag
{
PartNumber = partNumber,
ETag = response.ETag
});
var completeRequest = new CompleteMultipartUploadRequest
{
BucketName = _settings.Bucket,
Key = fileName,
UploadId = uploadId,
PartETags = eTags
};
try
{
var res = _s3Client.CompleteMultipartUpload(completeRequest);
return res.Location;
}
catch
{
//do some logging and return null response
return null;
}
}
response.ResponseMetadata.Metadata["uploadid"] = uploadRequest.UploadId;
return response;
}
Now, I need to get the thumbnail of the uploaded image and upload that image too, into a Thumbnails directory.
So basically, when the last part (chunk) arrives for the original image, I complete the upload and retrieve the file URL. At that time, I also need to upload the thumbnail and get back the thumbnail URL.
I saw that people are referring to a Lambda function, but I don't know how to incorporate that into my multipart API code setup.
Can anyone give me some direction here? Thanks in advance.
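For what it's worth, the Lambda approach people mention usually sits outside the upload code entirely: an S3 ObjectCreated trigger reads the newly completed object, resizes it, and writes the result under a Thumbnails/ prefix, so the ASP.NET multipart code above does not need to change. The sketch below is a rough Node.js illustration; the sharp dependency, the Thumbnails/ prefix and the 200px width are assumptions, not part of the question:
// Sketch of an S3-triggered Lambda (Node.js) that writes a thumbnail for
// every newly uploaded image. Assumes the sharp package is bundled with
// the function and that originals are not stored under Thumbnails/.
const AWS = require('aws-sdk');
const sharp = require('sharp');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const record = event.Records[0].s3;
  const bucket = record.bucket.name;
  const key = decodeURIComponent(record.object.key.replace(/\+/g, ' '));

  // Skip objects this function created itself to avoid an infinite trigger loop.
  if (key.startsWith('Thumbnails/')) return;

  const original = await s3.getObject({ Bucket: bucket, Key: key }).promise();
  const thumbnail = await sharp(original.Body).resize(200).toBuffer();

  await s3.putObject({
    Bucket: bucket,
    Key: `Thumbnails/${key}`,
    Body: thumbnail,
    ContentType: original.ContentType
  }).promise();
};
Because the thumbnail key is predictable, its URL can be derived from the original's as soon as CompleteMultipartUpload returns, although the object itself only appears once the function has run.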

AWS S3 Bucket Upload using CollectionFS and cfs-s3 meteor package

I am using Meteor.js with an Amazon S3 bucket for uploading and storing photos. I am using the meteorite packages collectionFS and aws-s3. I have set up my aws-s3 connection correctly and the images collection is working fine.
Client side event handler:
'click .submit': function(evt, templ) {
var user = Meteor.user();
var photoFile = $('#photoInput').get(0).files[0];
if(photoFile){
var readPhoto = new FileReader();
readPhoto.onload = function(event) {
photodata = event.target.result;
console.log("calling method");
Meteor.call('uploadPhoto', photodata, user);
};
}
And my server side method:
'uploadPhoto': function uploadPhoto(photodata, user) {
var tag = Random.id([10] + "jpg");
var photoObj = new FS.File({name: tag});
photoObj.attachData(photodata);
console.log("s3 method called");
Images.insert(photoObj, function (err, fileObj) {
if(err){
console.log(err, err.stack)
}else{
console.log(fileObj._id);
}
});
The file that is selected is a .jpg image file, but upon upload I get this error in the server method:
Exception while invoking method 'uploadPhoto' Error: DataMan constructor received data that it doesn't support
No matter whether I directly pass the image file, attach it as data, or use the FileReader to read it as text/binary/string, I still get that error. Please advise.
OK, maybe some thoughts. I did things with CollectionFS some months ago, so check the docs, because my examples may not be 100% correct.
Credentials should be set via environment variables, so your key and secret are available on the server only. Check this link for further reading.
OK, first, here is some example code which is working for me. Check yours for differences.
Template helper:
'dropped #dropzone': function(event, template) {
addImagePreview(event);
}
Function addImagePreview:
function addImagePreview(event) {
//Go through each file,
FS.Utility.eachFile(event, function(file) {
//Some Validationchecks
var reader = new FileReader();
reader.onload = (function(theFile) {
return function(e) {
var fsFile = new FS.File(image.src);
//setMetadata, that is validated in collection
//just own user can update/remove fsFile
fsFile.metadata = {owner: Meteor.userId()};
PostImages.insert(fsFile, function (err, fileObj) {
if(err) {
console.log(err);
}
});
};
})(file);
// Read in the image file as a data URL.
reader.readAsDataURL(file);
});
}
Ok, your next point is the validation. The validation can be done with allow/deny rules and with a filter on the FS.Collection. This way you can do all your validation AND insert via client.
Example:
PostImages = new FS.Collection('profileImages', {
stores: [profileImagesStore],
filter: {
maxSize: 3145728,
allow: {
contentTypes: ['image/*'],
extensions: ['png', 'PNG', 'jpg', 'JPG', 'jpeg', 'JPEG']
}
},
onInvalid: function(message) {
console.log(message);
}
});
PostImages.allow({
insert: function(userId, doc) {
return (userId && doc.metadata.owner === userId);
},
update: function(userId, doc, fieldNames, modifier) {
return (userId === doc.metadata.owner);
},
remove: function(userId, doc) {
return false;
},
download: function(userId) {
return true;
},
fetch: []
});
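For completeness, and following the earlier advice about environment variables: the profileImagesStore referenced in the stores array above would typically be defined on the server along these lines. This is only a sketch based on the cfs-s3 README; the region, bucket and environment variable names are placeholders:
// Server-side only: the S3 store that PostImages uses. The key and secret
// are read from environment variables so they never reach the client.
var profileImagesStore = new FS.Store.S3("profileImages", {
  region: "us-east-1",
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  bucket: "my-photo-bucket"
});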
Here you will find another example (link).
Another possible source of error is your AWS configuration. Have you done everything as it is written here?
Based on this post (link), it seems that this error occurs when FS.File() is not constructed correctly. So maybe this should be your first place to start.
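To illustrate that last point: on the client, the simplest construction CollectionFS supports is passing the selected File object straight into FS.File (or into insert), instead of sending a data URL through a Meteor method. A sketch, assuming the Images collection from the question:
// Sketch: insert the selected File object directly on the client;
// CollectionFS streams it to the configured S3 store itself.
'click .submit': function(evt, templ) {
  var photoFile = $('#photoInput').get(0).files[0];
  if (photoFile) {
    var fsFile = new FS.File(photoFile); // construct from the File object itself
    fsFile.metadata = { owner: Meteor.userId() };
    Images.insert(fsFile, function(err, fileObj) {
      if (err) console.log(err);
    });
  }
}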
A lot to read, so I hope this helps you :)