How can I stream video through a REST API?
The request input would be a video identifier and a timestamp.
The response should return the next video chunk for that timeframe, either as raw bytes or as base64 data.
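One possible shape for this, sketched below with Flask (an assumption, not anything from the question): pre-split the video into fixed-length chunk files and have the endpoint return the chunk that follows the requested timestamp, base64-encoded. The chunk duration, directory layout, and route are all made up for illustration.

import base64
import os
from flask import Flask, jsonify, request

app = Flask(__name__)
CHUNK_SECONDS = 10  # assumed fixed chunk length used when pre-splitting the video

@app.route("/videos/<video_id>/next-chunk")
def next_chunk(video_id):
    # Return the chunk that starts after the client's current timestamp.
    ts = float(request.args.get("timestamp", "0"))
    index = int(ts // CHUNK_SECONDS) + 1
    path = os.path.join("chunks", video_id, f"chunk_{index:03d}.mp4")
    if not os.path.exists(path):
        return jsonify(error="no further chunks"), 404
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    return jsonify(chunk_index=index, data=payload)

The pre-splitting itself could be done ahead of time with a tool like ffmpeg's segment muxer; that step is outside the sketch.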
I am recording a time-sensitive video, so I want to reduce upload latency as much as possible. What I want to do is upload the already-recorded chunks while still recording.
So let's say I am recording a 10-minute video. Instead of uploading the entire video at the end, I want to record, say, one minute, upload that as one part of a multipart upload while recording the next minute, and so on. Once the recording is complete, I only need to upload the last minute and then call the S3 multipart complete operation. This way, the latency between the end of the recording and the video being available in S3 is closer to the upload time of one minute of video instead of the upload time of ten minutes of video.
Is this possible with S3 multipart uploads?
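S3 multipart uploads do support exactly this pattern: parts can be uploaded one at a time as they become available, and the object only materializes when you complete the upload. A minimal boto3 sketch (bucket, key, and part file names are placeholders) where each part is uploaded as soon as its minute of video is on disk:

import boto3

s3 = boto3.client("s3")
bucket, key = "my-recordings", "videos/session-42.mp4"

mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
parts = []
# Every part except the last must be at least 5 MB; a minute of video
# normally clears that easily.
for part_number, path in enumerate(["part_001.mp4", "part_002.mp4"], start=1):
    with open(path, "rb") as f:  # upload each chunk as soon as it is recorded
        resp = s3.upload_part(Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
                              PartNumber=part_number, Body=f)
    parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})

# After the final chunk, completing the upload stitches the parts together.
s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
                             MultipartUpload={"Parts": parts})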
I am using a pre-signed URL (generated by our server) to upload to an S3 bucket, with a URLSession background session to upload from a file to the signed URL.
What I have noticed is that if the video is bigger (more than 30 or 50 MB), the upload is really slow. My internet connection is decent at close to 300 Mbps, and real-time speed tests showed more than 10 MBps download and upload.
Here is how I am creating the session and the upload task from a file:
let sessionConfiguration = URLSessionConfiguration.background(withIdentifier: "SOME_REVERSE_DOMAIN_STRING.backgroundSession")
sessionConfiguration.allowsCellularAccess = true
let backgroundSession = URLSession(configuration: sessionConfiguration, delegate: self, delegateQueue: OperationQueue.main)
The upload task is basic usage, nothing fancy:
let task = backgroundSession.uploadTask(with: request, fromFile: fileUrl!)
task.resume()
Should I use the AWS SDK or the Amplify framework to upload? Will it make any difference?
To speed up the process you can use multipart upload. In your case, without the SDK, you have to generate a pre-signed URL for each part operation.
The next option is to use S3 Transfer Acceleration.
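As a rough server-side sketch with boto3 (bucket and key names are placeholders): the server starts the multipart upload and signs one URL per part, and the client then PUTs each chunk to its URL with a plain HTTP request.

import boto3

s3 = boto3.client("s3")
bucket, key = "my-recordings", "videos/session-42.mp4"

upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

# One signed URL per part; the client uploads part N with a plain HTTP PUT.
# The ETag from each PUT response is needed later to complete the upload.
url_for_part_1 = s3.generate_presigned_url(
    "upload_part",
    Params={"Bucket": bucket, "Key": key,
            "UploadId": upload_id, "PartNumber": 1},
    ExpiresIn=3600,
)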
I'm using a Lambda function to receive a byte array of audio data, save it as MP3, store it in S3, and then use the S3 object to start a Transcribe job.
Everything's been processed correctly. I can see the .mp3 file in S3. I've also downloaded it to my local machine and played it, and it plays correctly as mp3.
However, when I start the transcription job I get back an error:
The media format that you specified doesn't match the detected media format. Check the media format and try your request again.
This is my call to start the AWS Transcribe job:
transcribe.start_transcription_job(
    TranscriptionJobName=job_name,
    Media={'MediaFileUri': job_uri},
    MediaFormat='mp3',
    LanguageCode='en-US'
)
Any idea what may be causing this?
Cheers!
MP3 is a compressed format; if you just save the raw byte array, it isn't actually in .mp3 format. You can use soxi to validate audio files: http://sox.sourceforge.net/soxi.html
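For instance, a sketch using pydub to actually encode the raw bytes before saving; the sample width, frame rate, and channel count below are assumptions about how the audio was captured and must match the real capture settings, and pydub needs ffmpeg available to do the MP3 encoding.

from pydub import AudioSegment

def save_as_mp3(raw_bytes: bytes, out_path: str = "/tmp/audio.mp3") -> str:
    # sample_width/frame_rate/channels are assumptions about the capture
    segment = AudioSegment(data=raw_bytes, sample_width=2,
                           frame_rate=16000, channels=1)
    segment.export(out_path, format="mp3")  # pydub shells out to ffmpeg here
    return out_path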
I'm trying to record audio and save it to AWS S3. I'm able to record and play the newly recorded audio in the app, but when trying to save the following file (using https://github.com/benjreinhart/react-native-aws3), the file turns up empty in S3:
const file = { uri: sound, name: `${name}.caf`, type: "audio/x-caf" };
Has anyone found an audio type that works with .caf?
I use AWS Elastic Transcoder to encode to HLS (with success), and have been trying to get the same mp4 files transcoded to play as MPEG-DASH.
When I transcode into HLS, I typically choose 30-second segments, and for a 5-minute video I get 12 files and a playlist (using one of the built-in presets).
When I transcode the same file into MPEG-DASH (using 30-second segments), I still get one large file. No segments, and no audio. The playlist format seems to be OK, in .mpd format. I am using a built-in preset.
Am I supposed to do TWO transcodes for every MPEG-DASH job: one for video and the other for audio, with a playlist to tie the two together?
Is there an online tutorial that outlines how to encode into MPEG-DASH format?
Or what do most of you use?
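For what it's worth, here is a hedged boto3 sketch of the two-output approach the question guesses at: separate video-only and audio-only outputs tied together by an MPEG-DASH playlist. The pipeline ID, preset IDs, and key names are placeholders, not real values.

import boto3

et = boto3.client("elastictranscoder")

et.create_job(
    PipelineId="1111111111111-abcde1",  # hypothetical pipeline ID
    Input={"Key": "source/input.mp4"},
    Outputs=[
        {"Key": "dash/video", "PresetId": "VIDEO_PRESET_ID",  # video-only preset
         "SegmentDuration": "30"},
        {"Key": "dash/audio", "PresetId": "AUDIO_PRESET_ID",  # audio-only preset
         "SegmentDuration": "30"},
    ],
    Playlists=[{
        "Name": "dash/index",          # produces dash/index.mpd
        "Format": "MPEG-DASH",
        "OutputKeys": ["dash/video", "dash/audio"],
    }],
)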