What is the usage and purpose of Bitrate in AWS Elastic Transcoder presets?

I want to transcode a video using AWS Elastic Transcoder jobs. I tested transcoding with preset id "1351620000001-400050" ("Smooth 800k"), which has a max bitrate of 688.
Does this mean 688 kbps?
And will the input video be transcoded at or below 688 kbps?
If so, in my case it behaves differently: an input video with a bitrate of 10479 kbps was transcoded to 5812 kbps.
Is this the expected behaviour?
What is the purpose and usage of Bitrate in AWS Elastic Transcoder presets?
Kindly provide your inputs.

Bit Rate is the video bit rate of the output file in kilobits per second. If your input video has a lower bit rate than the one selected in the preset, the output bit rate will be lower as well. Valid values for bit rate depend on the codec that you chose.
You can encode videos at different bit rates to support different types of devices and different types of connections, e.g. the bandwidth available.
Amazon has a good page describing all of this.
Elastic Transcoder Preset
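To see concretely what a preset defines, you can read it back with the API. Here is a minimal boto3 sketch (the preset id is the one from the question, the region is an assumption, and the printed fields follow the documented preset structure):

import boto3

# Elastic Transcoder is regional; the region here is an assumption.
et = boto3.client("elastictranscoder", region_name="us-east-1")

# Read the system preset mentioned in the question ("Smooth 800k").
resp = et.read_preset(Id="1351620000001-400050")
video = resp["Preset"]["Video"]

# BitRate is a string expressed in kilobits per second, so "688" means
# 688 kbps for the video stream; audio is configured separately under
# resp["Preset"]["Audio"], and codec-specific caps live in CodecOptions.
print("Video BitRate (kbps):", video.get("BitRate"))
print("CodecOptions:        ", video.get("CodecOptions"))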

Related

AWS service for video optimization and compression

I am trying to build a video/audio/image upload feature for a mobile application. Currently we have set the file size limit to 1 GB for video and 50 MB for audio and images. These uploaded files will be stored in an S3 bucket and we will use the AWS CloudFront CDN to serve them to users.
I am trying to compress/optimize the size of the media content using some AWS service after the files are stored in the S3 bucket. Ideally it would be great if I could put some restriction on the output file, e.g. no video file should be larger than 200 MB or have a quality greater than 720p. Can someone please help me with which AWS service I should use, with some helpful links if available? Thanks
The AWS Elemental MediaConvert service transcodes files on-demand. The service supports output templates which can specify output parameters including resolution, so guaranteeing a 720P maximum resolution is simple.
AWS S3 supports File Events to trigger other AWS actions, such as running a Lambda Function when a new file arrives in a bucket. The Lambda function can load & customize a job template, then submit a transcoding job to MediaConvert to transcode the newly arrived file. See https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html for details.
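As a rough sketch of that flow (the job template name, role ARN, and region below are placeholders, and error handling is omitted):

import boto3
import urllib.parse

# MediaConvert uses an account-specific endpoint; look it up once, then reuse it.
mc = boto3.client("mediaconvert", region_name="us-east-1")
endpoint = mc.describe_endpoints()["Endpoints"][0]["Url"]
mediaconvert = boto3.client("mediaconvert", region_name="us-east-1", endpoint_url=endpoint)

# Placeholders -- substitute your own job template and IAM role.
JOB_TEMPLATE = "720p-max-template"
ROLE_ARN = "arn:aws:iam::123456789012:role/MediaConvertRole"

def handler(event, context):
    # Triggered by an S3 ObjectCreated event; submits a MediaConvert job.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    job = mediaconvert.create_job(
        JobTemplate=JOB_TEMPLATE,
        Role=ROLE_ARN,
        Settings={
            # Output groups (destination, codec, QVBR cap, 720p max resolution)
            # are assumed to come from the job template; only the input is set here.
            "Inputs": [{"FileInput": f"s3://{bucket}/{key}"}],
        },
    )
    return {"jobId": job["Job"]["Id"]}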
Limiting the size of an output file is not currently a feature within MediaConvert, but you could leverage other AWS tools to do this. Checking the size of a transcoded output could be done with another Lambda Function when the output file arrives in a certain bucket. This second Lambda Fn could then decide to re-transcode the input file with more aggressive job settings (higher compression, different codec, time clipping, etc) in order to produce a smaller output file.
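That check is a small Lambda in itself; a sketch, with the 200 MB budget hard-coded and the re-transcode step left as a placeholder:

import boto3

s3 = boto3.client("s3")
MAX_BYTES = 200 * 1000 * 1000  # the 200 MB per-file budget

def handler(event, context):
    # Triggered when a transcoded output lands in the destination bucket.
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    size = s3.head_object(Bucket=bucket, Key=key)["ContentLength"]
    if size > MAX_BYTES:
        # Placeholder: resubmit the original source here with a more
        # aggressive (lower bitrate / higher compression) job template.
        print(f"{key} is {size} bytes, over budget; re-transcode needed")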
Since file size is a factor for you, I recommend using QVBR or VBR rate control with a max bitrate cap, which lets you better predict the worst-case file size at a given quality, duration & bitrate. You can allocate your '200MB' per file budget in different ways. For example, you could make 800 seconds (~13min) of 2mbps video, or 1600 seconds (~26min) of 1mbps video, et cetera. You may want to consider several quality tiers, or have your job assembly Lambda Fn do the math for you based on input file duration, which could be determined using mediainfo, ffprobe or other utilities.
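The budget arithmetic above, written out (size is roughly bitrate times duration divided by 8, ignoring audio and container overhead):

# File-size budget math: bits = bitrate_kbps * 1000 * seconds, bytes = bits / 8
BUDGET_MB = 200

def max_duration_seconds(bitrate_kbps, budget_mb=BUDGET_MB):
    budget_bits = budget_mb * 8_000_000  # 1 MB taken as 1,000,000 bytes
    return budget_bits / (bitrate_kbps * 1000)

print(max_duration_seconds(2000))  # ~800 s  (~13 min) at 2 mbps
print(max_duration_seconds(1000))  # ~1600 s (~26 min) at 1 mbps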
FYI there are three ways customers can obtain help with AWS solution design and implementation:
[a] AWS Paid Professional Services - There is a large global AWS ProServices team able to help via paid service engagements.
The fastest way to start this dialog is by submitting the AWS Sales team 'contact me' form found here, and specifying 'Sales Support' : https://aws.amazon.com/contact-us/
[b] AWS Certified Consulting Partners -- AWS certified partners with expertise in many verticals. See search tool & listings here: https://iq.aws.amazon.com/services
[c] AWS Solutions Architects -- these services are focused on Enterprise-level AWS accounts. The Sales contact form in item [a] is the best way to engage them. Purchasing AWS Enterprise Support will entitle the customer to a dedicated TAM/SA combination.

Video Streaming: MPEG-DASH, AWS CloudFront, dash.js

I am creating a video streaming application hosted on AWS. I have mp4 files hosted on AWS S3. To stream the video files, I want to transcode the mp4s to MPEG-DASH (mpd) format and store them in a different AWS S3 bucket. I will be using AWS CloudFront to stream the transcoded mpd files and dash.js or videogular to play them on the client side.
The problem I am facing is how to transcode mp4 to mpd (without using AWS Elastic Transcoder, which is a bit expensive). I was thinking of leveraging AWS Lambda to listen to the source S3 bucket and output to a different S3 bucket, but I could not find a module to transcode programmatically (to turn into a Lambda function). Has anyone done this yet and would like to give some insight?
An mpd file is actually just a text based index file - it contains URLs to the video and audio streams but no media itself.
The media for MPEG DASH is stored in segments, for mp4 in a fragmented mp4 format.
If you want to create fragmented mp4 from mp4 yourself, then there are some tools which you can look at to do this, or even use as part of a batch process.
One example is mp4dash (https://www.bento4.com/documentation/mp4dash/). You can see examples here on this link for converting a single mp4 file, or for converting multiple bit rate versions of a single file, which is more typical when using DASH for Adaptive Bit Rate streaming (ABR - allows the client to choose the bit rate of the next segment to download depending on the current network conditions):
Single MP4 input file
mp4dash video.mp4
Multi-bitrate set of MP4 files
mp4dash video_1000.mp4 video_2000.mp4 video_3000.mp4
Another example is mp4Box: https://gpac.wp.imt.fr/mp4box/dash/
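If you do go the Lambda route from the question, the general shape is: download the source from S3, shell out to one of these tools, and upload the result. A rough sketch, assuming you have packaged the Bento4 binaries (mp4fragment and mp4dash) in a Lambda layer; the output bucket name is a placeholder, and keep Lambda's /tmp space and timeout limits in mind for large files:

import boto3
import os
import subprocess

s3 = boto3.client("s3")
OUTPUT_BUCKET = "my-dash-output-bucket"  # placeholder

def handler(event, context):
    # Triggered by an ObjectCreated event on the source mp4 bucket.
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    local_in = "/tmp/" + os.path.basename(key)
    frag = "/tmp/fragmented.mp4"
    out_dir = "/tmp/dash_out"
    s3.download_file(bucket, key, local_in)

    # mp4dash expects fragmented mp4 input, so fragment first,
    # then run the same mp4dash command shown above.
    subprocess.run(["mp4fragment", local_in, frag], check=True)
    subprocess.run(["mp4dash", "-o", out_dir, frag], check=True)

    # Upload the generated .mpd and segment files to the output bucket.
    for root, _, files in os.walk(out_dir):
        for name in files:
            path = os.path.join(root, name)
            s3.upload_file(path, OUTPUT_BUCKET, os.path.relpath(path, out_dir))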
It's worth noting that there are actually multiple ways to stream DASH in AWS - Elastic Transcoder can create MPEG-DASH streams which you can store in and stream from S3, you can use CloudFront, and there are services like Unified Streaming or Wowza, etc. Streaming is complicated, so if this is for a high volume, important service it may be worth looking at these and seeing if there is an option or combination which meets your needs without being too expensive.

Elastic Transcoder - MPEG Dash output - tutorials?

I use AWS Elastic Transcoder to encode to HLS (with success), and have been trying to get the same mp4 files transcoded to play in MPEG-DASH.
When I transcode into HLS, I typically choose 30 sec segments, and for a 5 min video I get 12 files and a playlist (using one of the built-in presets).
When I transcode the same file into MPEG-DASH (using 30 second segments), I still get one large file. No segments. And no audio. The playlist format seems to be ok - in .mpd format. I am using a built-in preset.
Am I supposed to do TWO transcodes for every MPEG-DASH transcode? One for video and the other for audio, with a playlist to tie the two together?
Is there an online tutorial which outlines how to encode into MPEG-Dash format?
Or what do most of you use?

How to distinguish presets in an AWS transcoding job if they all produce the same quality?

I am transcoding a video from any format to HLS formats using the AWS Elastic Transcoder service. I am using five presets in a single job for adaptive bit rate.
If the input video is high quality, it is transcoded into different output qualities like 224p, 270p, 360p, 540p, 720p.
But if the input video is low quality, it is transcoded into output qualities like 224p, 270p, 360p, 360p, 360p. For low input quality there are three identical 360p outputs, which is an unnecessary transcoding cost. How can I avoid the duplicate 360p presets in the Elastic Transcoder job and generate only 224p, 270p, and 360p?
You could use Lambda and mediainfo/ffmpeg to determine the resolution of the source and drop the file into a separate bucket/pipeline for the appropriate encoding stack.
Though it may be overkill, here's an example of using mediainfo on Lambda to extract and store the data in DynamoDB.
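A minimal sketch of that routing idea using ffprobe instead of mediainfo (ffprobe would have to be bundled as a Lambda layer and needs a build with https support to read the presigned URL; the ladder groupings are placeholders):

import boto3
import json
import subprocess

s3 = boto3.client("s3")

# Placeholder ladders keyed by the source height they require.
HIGH_LADDER = ["224p", "270p", "360p", "540p", "720p"]
LOW_LADDER = ["224p", "270p", "360p"]

def source_height(bucket, key):
    # Probe the first video stream of the source via a presigned URL.
    url = s3.generate_presigned_url("get_object", Params={"Bucket": bucket, "Key": key})
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=height", "-of", "json", url],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)["streams"][0]["height"]

def handler(event, context):
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    ladder = HIGH_LADDER if source_height(bucket, key) >= 720 else LOW_LADDER
    print(f"{key}: creating outputs {ladder}")
    # From here, submit the Elastic Transcoder job (or copy the file to the
    # matching pipeline's bucket) with only the presets in `ladder`;
    # preset IDs are account/region specific and omitted here.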

Get output frame rate from AWS Elastic Transcoder?

I've tried the get-job method to retrieve information about the frame rate.
But it seems that I can only specify the input frame rate. What I want to do is to set the input frame rate to auto and retrieve the frame rate from the output.
Does anyone know if this is possible or do I have to choose another transcoding service?
You can do it with Elastic Transcoder, but it takes two steps. You first must retrieve the preset ID used for the job. Then, retrieve that preset to get the frame rate that was used for the transcoding job.
Here are the docs for getting a job:
http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/get-job.html
And here are the docs for retrieving preset info:
http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/get-preset.html
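In boto3 terms, that two-step lookup looks roughly like this (the job id and region are placeholders; note that if the preset's FrameRate is "auto", the preset alone will not tell you the concrete output frame rate):

import boto3

et = boto3.client("elastictranscoder", region_name="us-east-1")  # region is an assumption

# Step 1: read the job to find which preset its output used.
job = et.read_job(Id="1234567890123-abcdef")["Job"]  # placeholder job id
preset_id = job["Output"]["PresetId"]

# Step 2: read that preset to see the frame rate it was configured with.
preset = et.read_preset(Id=preset_id)["Preset"]
print("Preset FrameRate:", preset["Video"]["FrameRate"])  # e.g. "29.97" or "auto"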