Artifacts on video transcoded from GIF by Elastic Transcoder

I'm trying to transcode a GIF to MP4 (H.264) using AWS Elastic Transcoder. I noticed strange color artifacts on the transcoded video: color stripes on the right.
Original GIF:
Transcoded video screenshot:
I've tried changing the settings, and even switched the output format to WebM+VP8: same result. My local ffmpeg has no such issue, so I think the problem is on the Elastic Transcoder side.
My MP4 settings, in case they help:
Do you have any ideas?

The width of the image is 381, an odd number that might be getting mishandled during re-scaling.
Try cropping off one pixel on the right to make the image 380x150.
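One way to avoid this class of problem before uploading is to round the dimensions down to even values and crop locally with ffmpeg. A minimal sketch, assuming a local ffmpeg install; the file paths and the use of the `crop` filter are illustrative, not part of the original setup:

```python
# Sketch: round dimensions down to even values and build an ffmpeg crop command.
# Assumes a local ffmpeg install; paths are placeholders.

def even_dims(width, height):
    """Round each dimension down to the nearest even number."""
    return width - (width % 2), height - (height % 2)

def ffmpeg_crop_cmd(src, dst, width, height):
    w, h = even_dims(width, height)
    # crop=w:h:x:y with x=0, y=0 drops pixels from the right/bottom edges
    return ["ffmpeg", "-i", src, "-vf", f"crop={w}:{h}:0:0", dst]

print(ffmpeg_crop_cmd("input.gif", "input_even.gif", 381, 150))
```

For the 381x150 GIF above this produces a `crop=380:150:0:0` filter, i.e. the same one-pixel crop on the right suggested here.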

Related

Cannot play mp4 input in AWS MediaLive

I'm using AWS MediaLive for live streaming (together with AWS MediaStore). My streaming flow works well, but for some reason I cannot broadcast an MP4 file as filler. Everything seems to be OK, and I can't find anything wrong in the MediaLive logs, but all I see in the player is a black screen.
What I've checked:
MP4 source file is encoded with H.264
My input class is SINGLE_INPUT
My s3 URL for input is s3://assets/streaming/fill.mp4
Inside channel Input codec is AVC
The input is attached to channel
Could you help me resolve this? The streaming input works like a charm, but I don't know why I get a black screen when switching the input to an MP4 file.
Based on the description of the problem you are facing, it looks like a permissions issue with your S3/MediaStore MP4 asset. Please make sure you have granted adequate access so that MediaLive can reach the asset.
Please analyze the Alerts tab on the MediaLive channel for any 403 errors accessing the mp4 file.
Please review the following article for How to setup MP4 sources for MediaLive Channels:
https://docs.aws.amazon.com/medialive/latest/ug/mp4-upstream.html
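For the S3 case, the IAM role attached to the MediaLive channel needs read access to the object. A minimal policy sketch, using the bucket and key from the question; the role attachment itself (and any bucket policy on the MediaStore side) is assumed to be configured separately:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::assets/streaming/fill.mp4"
    }
  ]
}
```

If MediaLive lacks this permission, the 403 errors mentioned above should show up in the channel's Alerts tab.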
Thank you,
Hussain

Get AWS MediaLive video duration after live stream ends

I'm using AWS MediaLive & MediaStore for live streaming and I'm looking for a way to get the duration of the final video, after the live stream ends.
I'm using the HLS Output group type and I'm archiving it to S3. One way I was able to do this is to fetch the m3u8 file, which lists all segments, and sum the durations of all the segments.
Is there any better way? Maybe by using MediaPackage ?
Thank you!
Using a VOD type HLS output is the best way, since the manifest of a VOD HLS rendition contains a list of all segments and the duration of each segment in the EXTINF tag. Adding EXT-X-PROGRAM-DATE-TIME tags to the manifest may also help you to determine the start time of the live event.
Any other option, such as trying to determine the start and end time based on the MediaLive channel start/stop, is not as accurate, since it does not take into account the fact that the source could start minutes, if not hours, after the channel start.
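Summing the EXTINF durations from the VOD manifest takes only a few lines. A sketch, assuming the manifest has already been downloaded as text; the inline sample manifest here is made up for illustration:

```python
# Sketch: sum the EXTINF segment durations in an HLS VOD manifest.
# The manifest text would normally come from the archived .m3u8 in S3;
# this inline sample is made up for illustration.

def hls_duration(manifest_text):
    total = 0.0
    for line in manifest_text.splitlines():
        line = line.strip()
        if line.startswith("#EXTINF:"):
            # "#EXTINF:6.0," -> 6.0
            total += float(line[len("#EXTINF:"):].split(",")[0])
    return total

sample = """#EXTM3U
#EXT-X-PLAYLIST-TYPE:VOD
#EXTINF:6.0,
seg0.ts
#EXTINF:6.0,
seg1.ts
#EXTINF:3.5,
seg2.ts
#EXT-X-ENDLIST"""

print(hls_duration(sample))  # 15.5
```

The result is the total duration in seconds; for the real archive you would read the manifest from the S3 output bucket instead of a hard-coded string.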

Portrait video converted to Landscape in AWS Elemental Mediaconvert

I'm using the AWS 'Elemental MediaConvert' service to get the HLS format of an uploaded video. We are using this as a Video-On-Demand service. Everything works fine. A video uploaded to the 's3-input' bucket is picked up by a Lambda function and processed via the boto3 Elemental MediaConvert client. The output of the video is stored in the 's3-output' bucket. The one problem is that portrait videos appear in landscape mode in the 's3-output' bucket, and also when the HLS URL is played in a mobile app or browser.
If you use boto3, make sure you're on the latest version. Then:
Add "Rotate": "AUTO" to VideoSelector in inputs. In this case, EMC will try to automatically rotate the video based on metadata if it's available.
These links were really useful for me:
https://www.mandsconsulting.com/lambda-functions-with-newer-version-of-boto3-than-available-by-default/
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/mediaconvert.html
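In boto3 terms, the setting lives under each input's VideoSelector. A minimal sketch of the relevant fragment of the job settings; the file path is a placeholder and the rest of the job settings (output groups, role, etc.) are omitted, not the asker's actual configuration:

```python
# Sketch: the relevant fragment of a MediaConvert job's Settings dict.
# Only the "Rotate": "AUTO" key is the point here; the path is a placeholder.

job_settings = {
    "Inputs": [
        {
            "FileInput": "s3://s3-input/portrait.mp4",
            "VideoSelector": {
                # AUTO rotates the output based on the source's rotation metadata
                "Rotate": "AUTO"
            },
        }
    ],
    # ... OutputGroups etc. omitted ...
}

print(job_settings["Inputs"][0]["VideoSelector"]["Rotate"])
```

A dict like this would then be passed as the `Settings` argument of the MediaConvert client's `create_job` call.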
Well... in your place, here is what I'd do: go to the last "job", open the "output" section, then the output format sub-sections, and set the size of the desired output manually.
I don't think Elemental MediaConvert has presets for vertical video. It's a pretty new (awesome) product.
Good luck!

Video Streaming: MPEG-DASH , AWS cloudfront, dash.js

I am creating a video streaming application hosted on AWS. I have MP4 files hosted on AWS S3. To stream the video files, I want to transcode the MP4s to MPEG-DASH (mpd) format and store them in a different AWS S3 bucket. I will be using AWS CloudFront to serve the transcoded mpd files and dash.js or videogular to stream on the client side.
The problem I am facing is how to transcode MP4 to mpd (without using the AWS transcoder, which is a bit expensive). I was thinking of leveraging AWS Lambda to listen to the source S3 bucket and output to a different S3 bucket, but I could not find a module to transcode programmatically (to turn into a Lambda function). Has anyone done this and would like to share some insight?
An mpd file is actually just a text-based index file - it contains URLs to the video and audio streams but no media itself.
The media for MPEG DASH is stored in segments, for mp4 in a fragmented mp4 format.
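You can see this for yourself by parsing an MPD with a plain XML parser. A sketch using a stripped-down, made-up manifest; real manifests carry much more (segment templates, timings, codecs), but the index-of-URLs nature is the same:

```python
# Sketch: an MPD is plain XML; parse a minimal made-up manifest and list
# the media URLs it points at. Real manifests are far richer than this.
import xml.etree.ElementTree as ET

sample_mpd = """<?xml version="1.0"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static">
  <Period>
    <AdaptationSet mimeType="video/mp4">
      <Representation id="720p" bandwidth="2000000">
        <BaseURL>video_2000.mp4</BaseURL>
      </Representation>
      <Representation id="480p" bandwidth="1000000">
        <BaseURL>video_1000.mp4</BaseURL>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>"""

ns = {"dash": "urn:mpeg:dash:schema:mpd:2011"}
root = ET.fromstring(sample_mpd)
urls = [e.text for e in root.findall(".//dash:BaseURL", ns)]
print(urls)  # ['video_2000.mp4', 'video_1000.mp4']
```

The two Representations here correspond to two bit-rate renditions, which is what the multi-bitrate mp4dash example below produces.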
If you want to create fragmented mp4 from mp4 yourself, then there are some tools which you can look at to do this, or even use as part of a batch process.
One example is mp4dash (https://www.bento4.com/documentation/mp4dash/). That link shows examples of converting a single mp4 file, or converting multiple bit-rate versions of a single file, which is more typical when using DASH for Adaptive Bit Rate streaming (ABR allows the client to choose the bit rate of the next segment to download depending on the current network conditions):
Single MP4 input file
mp4dash video.mp4
Multi-bitrate set of MP4 files
mp4dash video_1000.mp4 video_2000.mp4 video_3000.mp4
Another example is mp4Box: https://gpac.wp.imt.fr/mp4box/dash/
It's worth noting that there are actually multiple ways to stream DASH in AWS - Elastic Transcoder can create an MPEG-DASH stream which you can store and serve from S3, and you can use CloudFront and services like Unified Streaming or Wowza, etc. Streaming is complicated, so if this is for a high-volume, important service it may be worth looking at these and seeing if there is an option or combination which meets your needs without being too expensive.

Elastic Transcoder - MPEG Dash output - tutorials?

I use AWS Elastic Transcoder to encode to HLS (with success), and have been trying to get the same mp4 files transcoded to play in MPEG-DASH.
When I transcode into HLS, I typically choose 30 sec segments, and for a 5 min video, I get 12 files and a playlist (using one of the built in presets)
When I transcode the same file into MPEG-DASH (using 30-second segments), I still get one large file. No segments. And no audio. The playlist format seems to be OK - in .mpd format. I am using a built-in preset.
Am I supposed to do TWO transcodes for every mpeg-dash transcode? One in video, and the other in audio, with a playlist to tie the two together?
Is there an online tutorial which outlines how to encode into MPEG-Dash format?
Or what do most of you use?