Get output frame rate from AWS elastic transcoder? - amazon-web-services

I've tried the get-job method to retrieve information about the frame rate.
But it seems that I can only specify the input frame rate. What I want to do is to set the input frame rate to auto and retrieve the frame rate from the output.
Does anyone know if this is possible or do I have to choose another transcoding service?

You can do it with Elastic Transcoder, but it takes two steps. First, retrieve the preset ID used for the job; then retrieve that preset to get the frame rate that was used for the transcoding job.
Here are the docs for getting a job:
http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/get-job.html
And here are the docs for retrieving preset info:
http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/get-preset.html
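A minimal boto3 sketch of those two steps (the job ID is a placeholder; `read_job` and `read_preset` are boto3's names for the Get Job and Get Preset operations above):

```python
def frame_rate_from_preset(preset_response):
    """Pull the frame rate out of a Read Preset response dict."""
    return preset_response["Preset"]["Video"]["FrameRate"]


def output_frame_rate(job_id, region="us-east-1"):
    """Step 1: read the job to find its output preset ID.
    Step 2: read that preset to find the frame rate it applies."""
    import boto3  # imported here so the helper above stays dependency-free
    et = boto3.client("elastictranscoder", region_name=region)
    job = et.read_job(Id=job_id)
    preset_id = job["Job"]["Output"]["PresetId"]
    return frame_rate_from_preset(et.read_preset(Id=preset_id))
```

Note the caveat: if the preset's own `FrameRate` is `"auto"`, the preset alone won't tell you the realized output rate, only the configuration that was applied.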

Related

AWS service for video optimization and compression

I am trying to build a video/audio/image upload feature for a mobile application. Currently we have set the file size limit to 1 GB for video and 50 MB for audio and images. These uploaded files will be stored in an S3 bucket, and we will use the AWS CloudFront CDN to serve them to users.
I am trying to compress/optimize the size of the media content using some AWS service after it is stored in the S3 bucket. Ideally it would be great if I could put some restriction on the output file, e.g. no video file should be greater than 200 MB or have a quality greater than 720p. Can someone please tell me which AWS service I should use, with some helpful links if available? Thanks
The AWS Elemental MediaConvert service transcodes files on demand. The service supports output templates that can specify output parameters including resolution, so guaranteeing a 720p maximum resolution is simple.
AWS S3 supports event notifications to trigger other AWS actions, such as running a Lambda function when a new file arrives in a bucket. The Lambda function can load and customize a job template, then submit a transcoding job to MediaConvert to transcode the newly arrived file. See https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html for details.
Limiting the size of an output file is not currently a feature within MediaConvert, but you could leverage other AWS tools to do this. Checking the size of a transcoded output could be done with another Lambda Function when the output file arrives in a certain bucket. This second Lambda Fn could then decide to re-transcode the input file with more aggressive job settings (higher compression, different codec, time clipping, etc) in order to produce a smaller output file.
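As a sketch of the first Lambda function: the role ARN and template name below are placeholders for your account, and the `describe_endpoints`/`create_job` calls are the standard boto3 MediaConvert API (MediaConvert requires an account-specific endpoint before jobs can be submitted):

```python
def build_job(input_url, role_arn, template="720p-default"):
    """Assemble a minimal MediaConvert job request for one input file.
    The template name is a placeholder for a template you define,
    which would carry the 720p output settings."""
    return {
        "Role": role_arn,
        "JobTemplate": template,
        "Settings": {
            "Inputs": [{
                "FileInput": input_url,
                "AudioSelectors": {
                    "Audio Selector 1": {"DefaultSelection": "DEFAULT"}
                },
            }]
        },
    }


def lambda_handler(event, context):
    """S3 event notification -> MediaConvert job submission."""
    import boto3
    record = event["Records"][0]["s3"]
    input_url = f"s3://{record['bucket']['name']}/{record['object']['key']}"
    # MediaConvert needs its account-specific endpoint discovered first.
    mc = boto3.client("mediaconvert")
    endpoint = mc.describe_endpoints()["Endpoints"][0]["Url"]
    mc = boto3.client("mediaconvert", endpoint_url=endpoint)
    role = "arn:aws:iam::123456789012:role/MediaConvertRole"  # placeholder
    return mc.create_job(**build_job(input_url, role))
```

The second, size-checking Lambda function would follow the same shape, triggered by the output bucket instead.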
Since file size is a factor for you, I recommend using QVBR or VBR rate control with a max bitrate cap, so you can better predict the worst-case file size at a given quality, duration & bitrate. You can allocate your 200 MB-per-file budget in different ways. For example, you could make 800 seconds (~13 min) of 2 Mbps video, or 1600 seconds (~26 min) of 1 Mbps video, et cetera. You may want to consider several quality tiers, or have your job-assembly Lambda function do the math for you based on the input file duration, which could be determined using mediainfo, ffprobe or other utilities.
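The arithmetic behind that budget is just bits = bytes × 8; a tiny helper makes the trade-off explicit:

```python
def max_seconds(budget_mb, cap_mbps):
    """Seconds of video that fit in budget_mb megabytes at a capped
    bitrate of cap_mbps megabits/second (1 MB = 8 megabits)."""
    return budget_mb * 8 / cap_mbps


def cap_for_duration(budget_mb, seconds):
    """Inverse: the bitrate cap (Mbps) that keeps a clip of the given
    duration inside the file-size budget."""
    return budget_mb * 8 / seconds
```

For the 200 MB budget above, `max_seconds(200, 2)` gives 800 seconds (~13 min) and `max_seconds(200, 1)` gives 1600 seconds (~26 min), matching the figures in the answer. (Audio and container overhead eat into the budget slightly, so leave some headroom.)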
FYI there are three ways customers can obtain help with AWS solution design and implementation:
[a] AWS Paid Professional Services - There is a large global AWS ProServices team able to help via paid service engagements.
The fastest way to start this dialog is by submitting the AWS Sales team 'contact me' form found here, and specifying 'Sales Support': https://aws.amazon.com/contact-us/
[b] AWS Certified Consulting Partners -- AWS certified partners with expertise in many verticals. See search tool & listings here: https://iq.aws.amazon.com/services
[c] AWS Solutions Architects -- these services are focused on Enterprise-level AWS accounts. The Sales contact form in item [a] is the best way to engage them. Purchasing AWS Enterprise Support entitles the customer to a dedicated TAM/SA combination.

Get AWS MediaLive video duration after live stream ends

I'm using AWS MediaLive & MediaStore for live streaming and I'm looking for a way to get the duration of the final video, after the live stream ends.
I'm using the HLS output group type and I'm archiving it to S3. One way I was able to do this is to get the m3u8 file, which contains all the segments, and sum the durations of all the segments.
Is there any better way? Maybe by using MediaPackage?
Thank you!
Using a VOD type HLS output is the best way, since the manifest of a VOD HLS rendition contains a list of all segments and the duration of each segment in the EXTINF tag. Adding EXT-X-PROGRAM-DATE-TIME tags to the manifest may also help you to determine the start time of the live event.
Any other option, such as trying to determine the start and end time based on the MediaLive channel start/stop, is not as accurate, since it does not take into account the fact that the source could start minutes, if not hours, after the channel starts.
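Summing the EXTINF durations is only a few lines; this sketch parses a manifest you have already fetched from S3/MediaStore:

```python
import re


def hls_duration_seconds(manifest_text):
    """Total duration of a VOD HLS rendition: the sum of every
    #EXTINF:<seconds>, tag in the manifest."""
    return sum(float(d) for d in re.findall(r"#EXTINF:([\d.]+)", manifest_text))
```

For a master playlist you would first pick one variant playlist and fetch that, since only the media playlists carry EXTINF tags.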

Aws MediaConvert - Create one output video file with a single audio track and multiple video inputs

I'm working with AWS MediaConvert in order to concatenate video files.
For now, I'm able to concatenate n videos into one output MPEG-4 file, with or without the audio embedded in each video input.
What I'm looking to achieve is the same, but with one single audio track for the whole video, which I would import, while muting each video input's audio if there is any.
I don't know if MediaConvert allows that (I haven't found my case in the AWS MediaConvert documentation).
I made a small schema representing what I'm trying to achieve:
I figured out I can do it with two jobs: one that concatenates all my video inputs and mutes their audio if present, and a second one that merges the single audio track into the result of the first.
This solution, however, doesn't feel like the best one.
Do you know if I can achieve what I'm trying to do in one job with AWS MediaConvert, and if yes, which settings do I have to tweak?
Many thanks in advance!
Maybe you can have a look at this link, especially the following part:
If your audio is in a separate file from your video, choose the External file slider switch element and provide the URI to your audio input file that is stored in Amazon S3 ...
By choosing audio from an external file and setting a proper timestamp offset, maybe you can combine your two jobs into one.
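Sketched as one MediaConvert input (the bucket names and selector label are placeholders), the external-audio settings from the quoted docs look like this; `ExternalAudioFileInput` and `Offset` are the relevant AudioSelector fields:

```python
def external_audio_input(video_url, audio_url, offset_ms=0):
    """A MediaConvert input whose audio selector points at a separate
    S3 audio file instead of the track embedded in the video."""
    return {
        "FileInput": video_url,
        "AudioSelectors": {
            "Audio Selector 1": {
                "ExternalAudioFileInput": audio_url,
                "Offset": offset_ms,  # shift the audio, in milliseconds
            }
        },
    }
```

Whether a single external track can cleanly span several concatenated inputs is worth testing on a short sample first; the two-job approach from the question remains a safe fallback.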

What is usage and purpose of Bitrate in AWS Elastic Transcoder Presets

I want to transcode a video using AWS Elastic Transcoder jobs. I have tested video transcoding with preset ID "1351620000001-400050" ("Smooth 800k"), and this preset has a max bitrate of 688.
Does this mean 688 kbps?
And will the input video be transcoded at or below 688?
Refer to the image.
If so, in my case it behaves differently:
An input video with a bitrate of 10479 kbps was transcoded to 5812 kbps.
Is this expected behaviour?
What is the purpose and usage of Bitrate in AWS Elastic Transcoder presets?
Kindly provide your inputs.
Bit rate is the video bit rate of the output file in kilobits/second, so 688 does mean 688 kbps. If your input video has a lower bit rate than the preset's bit rate, your output bit rate will be lower as well. Valid values for bit rate depend on the codec that you chose.
You can encode videos in different bit rates to support different types of devices and different types of connection e.g. bandwidth available.
Amazon has a good page describing all of this.
Elastic Transcoder Preset

How to distinct presets from AWS transcoding job if all providing same quality?

I am transcoding a video from any format to HLS using the AWS Elastic Transcoder service. I am using five presets in a single job for adaptive bitrate.
If the input video is of high quality, it transcodes into different output qualities like 224p, 270p, 360p, 540p, 720p.
But if the input video is of low quality, it transcodes into output qualities like 224p, 270p, 360p, 360p, 360p. For a low-quality input there are three identical output qualities, i.e. 360p, 360p, 360p, which is unnecessary transcoding cost. How can I avoid the two extra presets producing 360p output in the AWS Elastic Transcoder job? I want to generate only output qualities like 224p, 270p, 360p.
You could use Lambda and mediainfo/ffmpeg to determine the resolution of the source and drop the file into a separate bucket/pipeline for the appropriate encoding stack.
Though it may be overkill, here's an example of using mediainfo on Lambda to extract the data and store it in DynamoDB.
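The routing decision itself can be a small pure function once you have the source height from mediainfo/ffprobe; the ladder below simply mirrors the renditions named in the question:

```python
# Rendition ladder from the question: output heights, lowest to highest.
LADDER = [224, 270, 360, 540, 720]


def renditions_for(source_height):
    """Keep only ladder rungs at or below the source height, so a
    low-quality source is not transcoded into several identical
    360p outputs."""
    rungs = [h for h in LADDER if h <= source_height]
    return rungs or LADDER[:1]  # always produce at least the lowest rung
```

A 1080p source would get all five presets, while a 360p source would get only the 224p/270p/360p stack the question asks for.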