Elastic Transcoder - MPEG Dash output - tutorials? - amazon-web-services

I use AWS Elastic Transcoder to encode to HLS (with success), and have been trying to get the same mp4 files transcoded to play as MPEG-DASH.
When I transcode into HLS, I typically choose 30-second segments, and for a 5-minute video I get 12 files and a playlist (using one of the built-in presets).
When I transcode the same file into MPEG-DASH (using 30-second segments), I still get one large file. No segments. And no audio. The playlist format seems to be OK - it's in .mpd format. I am using a built-in preset.
Am I supposed to do TWO transcodes for every MPEG-DASH job - one for video and the other for audio, with a playlist to tie the two together?
Is there an online tutorial which outlines how to encode into MPEG-Dash format?
Or what do most of you use?

Related

Get AWS MediaLive video duration after live stream ends

I'm using AWS MediaLive & MediaStore for live streaming, and I'm looking for a way to get the duration of the final video after the live stream ends.
I'm using the HLS output group type and archiving it to S3. One way I was able to do this is to get the m3u8 file, which contains all the segments, and sum the durations of all the segments.
Is there any better way? Maybe by using MediaPackage ?
Thank you!
Using a VOD type HLS output is the best way, since the manifest of a VOD HLS rendition contains a list of all segments and the duration of each segment in the EXTINF tag. Adding EXT-X-PROGRAM-DATE-TIME tags to the manifest may also help you to determine the start time of the live event.
Any other option, such as trying to determine the start and end time based on the MediaLive channel start/stop, is not as accurate, since it does not take into account the fact that the source could start minutes if not hours after the channel start.
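Summing the EXTINF durations from the VOD manifest can be sketched as below (the manifest contents are illustrative, not from an actual MediaLive output):

```python
import re

def hls_vod_duration(manifest_text):
    """Sum the EXTINF segment durations in a VOD HLS manifest (seconds)."""
    durations = re.findall(r'#EXTINF:([\d.]+)', manifest_text)
    return sum(float(d) for d in durations)

# Minimal example manifest with three segments:
manifest = """#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:7
#EXTINF:6.006,
seg_00001.ts
#EXTINF:6.006,
seg_00002.ts
#EXTINF:4.171,
seg_00003.ts
#EXT-X-ENDLIST
"""
print(hls_vod_duration(manifest))  # approximately 16.183 seconds
```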

Aws MediaConvert - Create one output video file with a single audio track and multiple video inputs

I'm working with AWS MediaConvert in order to concatenate video files.
I'm currently able to concatenate n videos into one output mpeg4 file, with or without audio inside each video input.
What I'm looking to achieve is the same thing, but with one single audio track for the whole video, which I would import, muting each video input's own audio if present.
I don't know if MediaConvert allows that (I haven't found my case in the AWS MediaConvert documentation).
I made a small schema representing what I'm trying to achieve:
I figured out I can do this with two jobs: one that concatenates all my video inputs and mutes their audio if present, and a second one that merges the single audio track into the result of the first.
This solution, however, doesn't feel like the best one.
Do you know if I can achieve what I'm trying to do in one job with AWS MediaConvert, and if so, which settings I have to tweak?
Many thanks in advance !
Maybe you can have a look at this link, especially the following part:
If your audio is in a separate file from your video, choose the External file slider switch element and provide the URI to your audio input file that is stored in Amazon S3 ...
By choosing audio from an external file and setting a proper timestamp offset, you may be able to combine your two jobs into one.
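A minimal sketch of the relevant job-settings fragment, assuming the external-file audio selector approach works for your case (the S3 URIs and the selector key are placeholders, not values from the question):

```python
def build_inputs(video_uris, audio_uri):
    """Build MediaConvert Inputs where each clip takes its audio from one
    external file instead of its own embedded track (which mutes it)."""
    inputs = []
    for uri in video_uris:
        inputs.append({
            "FileInput": uri,
            "AudioSelectors": {
                "Audio Selector 1": {
                    "ExternalAudioFileInput": audio_uri,
                    # Offset (milliseconds) shifts the external audio
                    # relative to this input's start; tune it per clip so
                    # the single track lines up across the concatenation.
                    "Offset": 0,
                }
            },
        })
    return inputs

inputs = build_inputs(
    ["s3://bucket/clip1.mp4", "s3://bucket/clip2.mp4"],
    "s3://bucket/soundtrack.wav",
)
```

This fragment would go under `Settings.Inputs` in the job you pass to `create_job`; whether a single external track can span multiple concatenated inputs this way is something to verify against your own content.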

AWS Transcribe is not recognizing the media format of my file correctly

I'm using a lambda function to receive a bytes array of audio data, save it as mp3, store it in S3, and then use the S3 object to start a Transcribe job.
Everything's been processed correctly. I can see the .mp3 file in S3. I've also downloaded it to my local machine and played it, and it plays correctly as mp3.
However, when I start the transcription job I get back an error:
The media format that you specified doesn't match the detected media format. Check the media format and try your request again.
This is my call to start the AWS Transcribe job:
transcribe.start_transcription_job(
    TranscriptionJobName=job_name,
    Media={'MediaFileUri': job_uri},
    MediaFormat='mp3',
    LanguageCode='en-US'
)
Any idea what may be causing this?
Cheers!
mp3 is a compressed format; if you just save a raw byte array, it's not in .mp3 format. You can use soxi to validate audio files: http://sox.sourceforge.net/soxi.html
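A quick in-Lambda sanity check before uploading can be sketched as below: real MP3 data starts with either an ID3v2 tag or an MPEG audio frame-sync pattern (11 set bits). This is a heuristic, not a full validation like soxi performs:

```python
def looks_like_mp3(data: bytes) -> bool:
    """Heuristic: MP3 files begin with 'ID3' (ID3v2 tag) or an MPEG
    frame sync (0xFF followed by a byte whose top 3 bits are set)."""
    if data[:3] == b"ID3":
        return True
    return len(data) >= 2 and data[0] == 0xFF and (data[1] & 0xE0) == 0xE0

print(looks_like_mp3(b"ID3\x04\x00"))       # ID3v2 header -> True
print(looks_like_mp3(b"\xff\xfb\x90\x00"))  # MPEG frame sync -> True
print(looks_like_mp3(b"RIFF....WAVE"))      # WAV data -> False
```

If the check fails, the bytes you received are likely raw PCM or another container, and you would need to actually encode them to MP3 (or set `MediaFormat` to match the real format) before starting the Transcribe job.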

What is usage and purpose of Bitrate in AWS Elastic Transcoder Presets

I want to transcode a video using AWS Elastic Transcoder jobs. I have checked video transcoding with preset id "1351620000001-400050" ("Smooth 800k"), and this preset has a max bitrate of 688.
Does this mean 688 kbps?
And will the input video be transcoded to a bitrate at or below 688?
Refer to the image.
If so, in my case it behaves differently:
An input video with a bitrate of 10479 kbps was transcoded into 5812 kbps.
Is this expected behaviour?
What is the purpose and usage of Bitrate in AWS Elastic Transcoder presets?
Kindly provide your inputs.
Bit Rate is the video bit rate of the output file in kilobits/second. If the input video has a lower bit rate than the selected bit rate, the output's bit rate will be lower. Valid values for bit rate depend on the codec that you chose.
You can encode videos in different bit rates to support different types of devices and different types of connection e.g. bandwidth available.
Amazon has a good page describing all of this.
Elastic Transcoder Preset
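To get a feel for what a bitrate number means in practice, a rough size estimate is just bitrate times duration (this is back-of-the-envelope arithmetic, ignoring audio and container overhead):

```python
def approx_size_mb(bitrate_kbps, duration_s):
    """Rough video data size: kilobits/second x seconds, converted to MB."""
    return bitrate_kbps * duration_s / 8 / 1000  # kbits -> kB -> MB

# A 5-minute video at the "Smooth 800k" preset's 688 kbps video cap:
print(round(approx_size_mb(688, 300), 1))  # about 25.8 MB of video data
```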

Video Streaming: MPEG-DASH , AWS cloudfront, dash.js

I am creating a video streaming application hosted on AWS. I have mp4 files hosted on AWS S3. To stream the video files, I want to transcode the mp4s to MPEG-DASH (mpd) format and store them in a different AWS S3 bucket. I will be using AWS CloudFront to stream the transcoded mpd files and dash.js or videogular to play them on the client side.
The problem I am facing is how to transcode mp4 to mpd (without using AWS Elastic Transcoder, which is a bit expensive). I was thinking of leveraging AWS Lambda to listen to the source S3 bucket and output to a different S3 bucket, but I could not find a module to transcode programmatically (to turn into a Lambda function). Has anyone done this yet and would like to give some insight?
An mpd file is actually just a text-based index file - it contains URLs to the video and audio streams but no media itself.
The media for MPEG DASH is stored in segments, for mp4 in a fragmented mp4 format.
If you want to create fragmented mp4 from mp4 yourself, then there are some tools which you can look at to do this, or even use as part of a batch process.
One example is mp4dash (https://www.bento4.com/documentation/mp4dash/). You can see examples on that link for converting a single mp4 file, or for converting multiple bit-rate versions of a single file, which is more typical when using DASH for Adaptive Bit Rate Streaming (ABR - allows the client to choose the bit rate of the next segment to download depending on the current network conditions):
Single MP4 input file
mp4dash video.mp4
Multi-bitrate set of MP4 files
mp4dash video_1000.mp4 video_2000.mp4 video_3000.mp4
Another example is mp4Box: https://gpac.wp.imt.fr/mp4box/dash/
It's worth noting that there are actually multiple ways to stream DASH in AWS - Elastic Transcoder can create an MPEG-DASH stream which you can store in and stream from S3, and you can use CloudFront and services like Unified Streaming or Wowza, etc. Streaming is complicated, so if this is for a high-volume, important service, it may be worth looking at these and seeing if there is an option or combination which meets your needs without being too expensive.
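If you end up batching mp4dash in a Lambda or a small script, building the multi-bitrate invocation can be sketched as below (this assumes Bento4's mp4dash is installed and the inputs are already fragmented mp4, which mp4fragment can produce from plain mp4; the file names are placeholders):

```python
def mp4dash_command(renditions, output_dir="dash_out"):
    """Build an mp4dash command line for a multi-bitrate DASH set.
    The -o flag names the output directory for the .mpd and segments."""
    return ["mp4dash", "-o", output_dir] + list(renditions)

cmd = mp4dash_command(["video_1000.mp4", "video_2000.mp4", "video_3000.mp4"])
print(" ".join(cmd))  # the command you would pass to subprocess.run
```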