Hope you are having a great day.
I am trying to play an audio list on click. Everything works just fine, but when I keep pressing next and reach the last audio, I get this error:
TypeError: undefined is not an object (evaluating '_data.default[songIndex - 1].audio')
The reason is that there is no file left to play in the array. Now, when I reach the last item, I want playback to automatically start again from the beginning.
const goNext = async () => {
const { sound } = await Audio.Sound.createAsync({
uri: songs[songIndex + 1].audio,
});
setSound(sound);
await sound.playAsync();
slider.current.scrollToOffset({
offset: (songIndex + 1) * width,
});
};
You should use the modulo operator (%) to wrap around the music list:
const goNext = async () => {
const { sound } = await Audio.Sound.createAsync({
uri: songs[(songIndex + 1) % songs.length].audio,
});
setSound(sound);
await sound.playAsync();
slider.current.scrollToOffset({
offset: ((songIndex + 1) % songs.length) * width,
});
};
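If you also have a goPrevious handler (the songIndex - 1 in your error message suggests one), the same idea applies. Here is a sketch assuming the same songs, songIndex, Audio, setSound, slider, and width as in your question, with songs.length added before the modulo so the index never goes negative:
const goPrevious = async () => {
  // Adding songs.length before the modulo keeps the index in range
  // even when songIndex is 0 (JS % can return negative values otherwise).
  const prevIndex = (songIndex - 1 + songs.length) % songs.length;
  const { sound } = await Audio.Sound.createAsync({
    uri: songs[prevIndex].audio,
  });
  setSound(sound);
  await sound.playAsync();
  slider.current.scrollToOffset({
    offset: prevIndex * width,
  });
};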
I have a few Lambda functions that allow making a multipart upload to an Amazon S3 bucket: one creates the multipart upload, another one uploads each part, and the last one completes the upload.
The first two seem to work fine (they respond with statusCode 200), but the last one fails. On CloudWatch, I can see an error saying 'Your proposed upload is smaller than the minimum allowed size'.
This is not true, since I'm uploading files bigger than the 5 MB minimum size specified in the docs. However, I think the issue is actually happening in every single part upload.
Why? Because each part only has 2 MB of data. The docs say that every part but the last needs to be at least 5 MB. However, when I try to upload parts bigger than 2 MB, I get a CORS error, most probably because I have exceeded the 6 MB Lambda payload limit.
Can anyone help me with this? Below is my client-side code, in case you can spot any error in it.
setLoading(true);
const file = files[0];
const size = 2000000;
const extension = file.name.substring(file.name.lastIndexOf('.'));
try {
const multiStartResponse = await startMultiPartUpload({ fileType: extension });
console.log(multiStartResponse);
let part = 1;
let parts = [];
/* eslint-disable no-await-in-loop */
for (let start = 0; start < file.size; start += size) {
const chunk = file.slice(start, start + size + 1);
const textChunk = await chunk.text();
const partResponse = await uploadPart({
file: textChunk,
fileKey: multiStartResponse.data.Key,
partNumber: part,
uploadId: multiStartResponse.data.UploadId,
});
console.log(partResponse);
parts.push({ ETag: partResponse.data.ETag, PartNumber: part });
part++;
}
/* eslint-enable no-await-in-loop */
const completeResponse = await completeMultiPartUpload({
fileKey: multiStartResponse.data.Key,
uploadId: multiStartResponse.data.UploadId,
parts,
});
console.log(completeResponse);
} catch (e) {
console.log(e);
} finally {
setLoading(false);
}
It seems that uploading parts via Lambda is simply not possible, so we need to use a different approach.
Now, our startMultiPartUpload Lambda returns not only an upload ID but also a bunch of signed URLs, generated with the aws-sdk S3 class using the getSignedUrlPromise method and 'uploadPart' as the operation, as shown below:
const getSignedPartURL = (bucket, fileKey, uploadId, partNumber) =>
  s3.getSignedUrlPromise('uploadPart', { Bucket: bucket, Key: fileKey, UploadId: uploadId, PartNumber: partNumber });
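Since startMultiPartUpload hands back one signed URL per part, the Lambda presumably loops over the part numbers. A minimal sketch of that, reusing the getSignedPartURL helper above and the chunksQuan count sent from the client code below:
const getSignedPartURLs = (bucket, fileKey, uploadId, chunksQuan) =>
  // One signed URL per part; S3 part numbers start at 1.
  Promise.all(
    Array.from({ length: chunksQuan }, (_, i) =>
      getSignedPartURL(bucket, fileKey, uploadId, i + 1)
    )
  );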
Also, since uploading a part this way does not return an ETag (or maybe it does, but I just couldn't retrieve it), we need to call the listParts method on the S3 class after uploading the parts in order to get those ETags. I'll leave my React code below.
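Before that, for reference, the server-side call behind the listParts helper used in the React code might look roughly like this (a sketch; the function name and the bucket/fileKey/uploadId parameter names are assumptions):
const listUploadedParts = (bucket, fileKey, uploadId) =>
  // Returns up to 1000 parts by default, each with its ETag and PartNumber.
  s3.listParts({ Bucket: bucket, Key: fileKey, UploadId: uploadId }).promise();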
const uploadPart = async (url, data) => {
try {
// return await uploadPartToS3(url, data);
return fetch(url, {
method: 'PUT',
body: data,
}).then((e) => e.body);
} catch (e) {
console.error(e);
throw new Error('Unknown error');
}
};
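// An aside on the ETag question above: S3 does send an ETag response header for
// each part PUT, but the browser can only read it if the bucket's CORS configuration
// lists ETag under ExposeHeaders. With that in place, a variant of uploadPart could
// return the ETag directly (a sketch, not part of the original code):
const uploadPartWithETag = async (url, data, partNumber) => {
  const response = await fetch(url, { method: 'PUT', body: data });
  if (!response.ok) throw new Error(`Part ${partNumber} failed with status ${response.status}`);
  return { ETag: response.headers.get('ETag'), PartNumber: partNumber };
};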
// If file is bigger than 50Mb then perform a multi part upload
const uploadMultiPart = async ({ name, size, originFileObj }, updateUploadingMedia) => {
// chunk size determines each part size. This needs to be > 5Mb
const chunkSize = 60000000;
let chunkStart = 0;
const extension = name.substring(name.lastIndexOf('.'));
const partsQuan = Math.ceil(size / chunkSize);
// Start multi part upload. This returns both uploadId and signed urls for each part.
const startResponse = await startMultiPartUpload({
fileType: extension,
chunksQuan: partsQuan,
});
console.log('start response: ', startResponse);
const {
signedURLs,
startUploadResponse: { Key, UploadId },
} = startResponse.data;
try {
let promises = [];
/* eslint-disable no-await-in-loop */
for (let i = 0; i < partsQuan; i++) {
// Split file into parts and upload each one to it's signed url
const chunk = await originFileObj.slice(chunkStart, chunkStart + chunkSize).arrayBuffer();
chunkStart += chunkSize;
promises.push(uploadPart(signedURLs[i], chunk));
if (promises.length === 5) {
await Promise.all(promises);
promises = [];
}
}
/* eslint-enable no-await-in-loop */
// wait until every part is uploaded
await allProgress({ promises, name }, (media) => {
updateUploadingMedia(media);
});
// Get parts list to build complete request (each upload does not retrieve ETag)
const partsList = await listParts({
fileKey: Key,
uploadId: UploadId,
});
// build parts object for complete upload
const completeParts = partsList.data.Parts.map(({ PartNumber, ETag }) => ({
ETag,
PartNumber,
}));
// Complete multi part upload
completeMultiPartUpload({
fileKey: Key,
uploadId: UploadId,
parts: completeParts,
});
return Key;
} catch (e) {
console.error('ERROR', e);
const abortResponse = await abortUpload({
fileKey: Key,
uploadId: UploadId,
});
console.error(abortResponse);
}
};
Sorry for the indentation, I corrected it line by line as best as I could :).
Some considerations:
- We use 60 MB chunks because our backend took too long generating all those signed URLs for big files.
- Also, this solution is meant to upload really big files; that's why we await every 5 parts.
However, we are still facing issues uploading huge files (about 35 GB): after uploading 100-120 parts, the fetch requests suddenly start to fail and no more parts are uploaded. If someone knows what's going on, that would be amazing. I am publishing this as an answer because I think most people will find it very useful.
Currently, this is how I read from C++ using Flutter:
final Uint8List result = await platform.invokeMethod(Common.MESSAGE_METHOD, {"message": buffer});
It is handled by Kotlin like this:
MethodChannel(flutterEngine.dartExecutor.binaryMessenger, CHANNEL).setMethodCallHandler { call, result ->
    if (call.method == MESSAGE_METHOD) {
        val message: ByteArray? = call.argument<ByteArray>("message")
        // ... response = read something from C++
        result.success(response)
    }
}
Since this happens on the main thread, if I take too long to answer, Flutter's UI becomes slow.
Is there a solution to get C++ data in an async way?
I know that Flutter has support for event channels to send data back from C++ to Flutter. But what about just requesting the data on the Flutter side and waiting for it to arrive in a Future, so I can have lots of widgets inside a FutureBuilder that resolves to something when ready?
If reading something from C++ is a heavy process, you can use AsyncTask to perform it in the background on Android.
internal class HeavyMsgReader(var result: MethodChannel.Result) : AsyncTask<ByteArray?, Void?, String?>() {
override fun doInBackground(vararg message: ByteArray?): String {
//... //response = Read something FROM C++
return "response"
}
override fun onPostExecute(response: String?) {
result.success(response)
}
}
Calling the async task:
MethodChannel(flutterEngine.dartExecutor.binaryMessenger, CHANNEL).setMethodCallHandler { call, result ->
    if (call.method == MESSAGE_METHOD) {
        val message: ByteArray? = call.argument<ByteArray>("message")
        HeavyMsgReader(result).execute(message)
    }
}
Hopefully this will work:
import 'dart:async';
import 'dart:typed_data';
Future<Uint8List> fetchData(buffer) async {
final Uint8List result = await platform.invokeMethod(Common.MESSAGE_METHOD, {"message": buffer});
return result;
}
And just call it, like this
fetchData(buffer).then((result) {
print(result);
}).catchError(print);
Proof that it's working:
import 'dart:async';
Future<String> fetchUserOrder() async {
await Future.delayed(Duration(seconds: 5));
return 'Callback!';
}
Future<void> main() async {
fetchUserOrder().then((result) {
print(result);
}).catchError(print);
while(true){
print('main_thread_running');
await Future.delayed(Duration(seconds: 1));
}
}
output:
main_thread_running
main_thread_running
main_thread_running
main_thread_running
main_thread_running
Callback!
main_thread_running
main_thread_running
...
I have a video stored in an S3 bucket with an authenticated-read ACL.
I need to read it and make a trailer with ffmpeg (Node.js).
Here's the code I use to generate the trailer:
exports.generatePreview = (req, res) => {
const getParams = {
Bucket: S3_CREDENTIALS.bucketName,
Key: req.params.key
}
s3.getSignedUrl('getObject', getParams, (err, signedRequest) => {
console.log(signedRequest, err, 'getSignedUrl')
ffmpeg(new URL(signedRequest))
.size('640x?')
.aspect('4:3')
.seekInput('3:00')
.duration('0:30')
.then(function (video) {
s3.putObject({ Bucket: S3_CREDENTIALS.bucketName, key: 'preview_' + req.body.key, Body: video }, function (err, data) {
console.log(err, data)
})
});
});
}
Unfortunately, the constructor does not seem to accept a remote URL. If I try to execute an ffmpeg command line with the same signed URL (i.e. ffmpeg -i "https://[bucketname].s3.eu-west-1.amazonaws.com/[key.mp4]?[signedParams]" -vn -acodec pcm_s16le -ar 44100 -ac 2 video.wav), the error I get for the signed URL is 'The input file does not exist'.
It seems https is not supported by fs.readFileSync either; trying the request over http gives the same result, and fs.readFileSync(signedurl) fails the same way.
How can I overcome this issue?
If you're using node-ffmpeg this isn't possible, because that library only accepts a string pointing to a local path, but fluent-ffmpeg does support read streams, so give that a try.
For example (untested, just spitballing):
const ffmpeg = require('fluent-ffmpeg');
const stream = require('stream');
exports.generatePreview = (req, res) => {
  let params = { Bucket: S3_CREDENTIALS.bucketName, Key: req.params.key };
  // Retrieve object stream
  let readStream = s3.getObject(params).createReadStream();
  // Set up the ffmpeg process
  let ffmpegProcess = new ffmpeg(readStream)
    // Add your args here
    .toFormat('mp4');
  // Create a new stream and pipe the ffmpeg output into it
  let pt = new stream.PassThrough();
  ffmpegProcess.on('error', (err, stdout, stderr) => {
    // Handle errors here
  }).on('end', () => {
    // Processing is complete
  }).pipe(pt);
  // Reuse the same params object and set the Body to the stream
  params.Key = 'preview_' + req.body.key;
  params.Body = pt;
  // Upload and wait for the result
  s3.upload(params, (err, data) => {
    if (err)
      return console.error(err);
    console.log("done");
  });
};
This will have high memory requirements, so if this is a Lambda function you might play around with retrieving only the first X bytes of the file and converting only that.
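A rough sketch of that partial-read idea, assuming the same s3 client and params shape as above (the 10 MB figure is arbitrary):
const partialParams = {
  Bucket: S3_CREDENTIALS.bucketName,
  Key: req.params.key,
  // Standard HTTP byte-range syntax: fetch only the first 10 MB of the object
  Range: 'bytes=0-10485759',
};
let partialStream = s3.getObject(partialParams).createReadStream();
// Feed partialStream to fluent-ffmpeg exactly as readStream is used above.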
I'm trying to use ImageMagick in my Google Cloud Function. The function is triggered by uploading a file to a Google Cloud Storage bucket. I have grander plans, but I'm trying to get there one step at a time, starting with identify.
// imagemagick_setup
const gm = require('gm').subClass({imageMagick: true});
const path = require('path');
const {Storage} = require('@google-cloud/storage');
const storage = new Storage();
exports.processListingImage = (event, context) => {
const object = event.data || event; // Node 6: event.data === Node 8+: event
const filename = object.name;
console.log("Filename: ", filename);
const fullFileObject = storage.bucket(object.bucket).file(object.name);
console.log("Calling resize function");
let resizePromise = resizeImage( fullFileObject );
<more stuff>
};
function resizeImage( file, sizes ) {
const tempLocalPath = `/tmp/${path.parse(file.name).base}`;
return file
.download({destination: tempLocalPath})
.catch(err => {
console.error('Failed to download file.', err);
return Promise.reject(err);
})
.then( () => {
// file now downloaded, get its metadata
return new Promise((resolve, reject) => {
gm( tempLocalPath )
.identify( (err, result) => {
if (err)
{
console.log("Error reading metadata: ", err);
}
else
{
console.log("Well, there seems to be metadata: ", result);
}
});
});
});
} // end resizeImage()
The local file path is "/tmp/andy-test.raw". But when the identify function runs, I get an error:
identify-im6.q16: unable to open image `/tmp/magick-12MgKrSna0qp9U.ppm': No such file or directory @ error/blob.c/OpenBlob/2701.
Why is identify looking for a different file than the one I (believe) I told it to look for? Eventually, I am going to resize the image and write it back out to Cloud Storage, but I wanted to get identify to run first.
Mark had the right answer - if I upload a jpg file, it works. Onto the next challenge.
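For reference, once identify runs, the eventual resize-and-reupload step could look roughly like this sketch (it assumes the same gm, path, and storage objects as above; the 640px width and the 'resized/' destination prefix are made up):
function resizeAndUpload(file, tempLocalPath) {
  const resizedPath = `/tmp/resized-${path.parse(file.name).base}`;
  return new Promise((resolve, reject) => {
    gm(tempLocalPath)
      .resize(640) // 640px wide, height scaled to keep the aspect ratio
      .write(resizedPath, (err) => (err ? reject(err) : resolve()));
  }).then(() =>
    // Upload the resized file back to the same bucket under a new name
    storage.bucket(file.bucket.name).upload(resizedPath, {
      destination: `resized/${path.parse(file.name).base}`,
    })
  );
}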
I am trying without success to use the readAsDataURL function of the Cordova File plugin to get a base64 version of a video file. My code looks like this:
recordVideo()
{
return new Promise(resolve =>
{
let options: CaptureVideoOptions = { limit: 1, duration: 2 };
MediaCapture.captureVideo(options)
.then(
(data: MediaFile[]) => {
console.log('Media: recordVideo: cordova.file.dataDirectory = ' + cordova.file.dataDirectory + ', path = ' + data[0].fullPath.substring(1));
// Turn the video file into base64
let base64File = File.readAsDataURL(cordova.file.dataDirectory, data[0].fullPath.substring(1));
console.log('Media: recordVideo: got video with data = ' + JSON.stringify(data));
console.log('Media: recordVideo: base64File = ' + JSON.stringify(base64File));
resolve(data);
},
(err: CaptureError) => console.error('ERROR - Media: recordVideo: captureVideo error = ' + err)
);
});
}
The output from the first console.log shows the values of the parameters passed to readAsDataURL:
Media: recordVideo: cordova.file.dataDirectory = file:///var/mobile/Containers/Data/Application/764345DC-A77D-43C2-9DF7-CDBE6A0DC372/Library/NoCloud/, path = private/var/mobile/Containers/Data/Application/764345DC-A77D-43C2-9DF7-CDBE6A0DC372/tmp/50713961066__4FD8AF8D-BD36-43A4-99CC-F328ADFD7E38.MOV
The second console.log shows the data returned by the MediaCapture plugin:
Media: recordVideo: got video with data = [{"name":"50713961066__4FD8AF8D-BD36-43A4-99CC-F328ADFD7E38.MOV","localURL":"cdvfile://localhost/temporary/50713961066__4FD8AF8D-BD36-43A4-99CC-F328ADFD7E38.MOV","type":"video/quicktime","lastModified":null,"lastModifiedDate":1485446813000,"size":195589,"start":0,"end":0,"fullPath":"/private/var/mobile/Containers/Data/Application/764345DC-A77D-43C2-9DF7-CDBE6A0DC372/tmp/50713961066__4FD8AF8D-BD36-43A4-99CC-F328ADFD7E38.MOV"}]
The last console.log shows the value returned by the readAsDataURL:
Media: recordVideo: base64File = {"__zone_symbol__state":null,"__zone_symbol__value":[]}
There is next to no documentation on using this (that I can find).
The readAsDataURL function takes a path and a filename as parameters and returns a promise. The usage is:
File.readAsDataURL("path_to_the_FileName", "Filename").then(result => {
this.base64File = result;
});
As per the console log, the filename and the full path to the file are obtained from data (the promise returned by MediaCapture.captureVideo).
So you can use it as below:
var path = "file://" + data[0].fullPath.substring(7, data[0].fullPath.lastIndexOf("/"));
File.readAsDataURL(path, data[0].name).then(result => {
  this.base64File = result;
});
If the issue is that File.readAsDataURL is not returning a response nor catching an error, then move the cordova.js script after the vendor.js script (in index.html). I was facing this issue in my Ionic 3 app and solved it using the approach from this link here.