I'm trying to work with AWS Elemental Live and I've managed to do the following:
• create a channel and an endpoint with AWS Elemental MediaPackage
• configure channel, input and output with AWS Elemental MediaLive
• stream a random video using OBS, and check that it plays properly by using the "play" link in the AWS console
My next step would be to test out the graphic overlay, so I checked this doc but I can't figure out where to find the Live IP.
Any insight?
If you are trying to work with a static overlay on AWS Elemental Live (on-premises hardware), then you are referring to the correct documentation, and you would use the Elemental Live encoder's IP address, which can be found in the web interface under Settings --> Network --> Current Settings.
If you are trying to work with image overlays on an AWS Elemental MediaLive channel, the correct document to refer to is this one.
You can use the Elemental Live ingest endpoint IP as the Live IP address.
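For the MediaLive case, overlays are inserted through the channel schedule rather than an encoder IP. As a sketch (channel ID, action name, and S3 URI below are placeholders, and the boto3 call is shown only in a comment), the schedule action for a static image overlay looks like this:

```python
# Sketch: a MediaLive schedule action that activates a static image overlay.
# All IDs and URIs are placeholders for illustration.

def build_overlay_action(action_name, image_uri):
    """Build a schedule action that turns on a static image overlay immediately."""
    return {
        "ActionName": action_name,
        "ScheduleActionStartSettings": {
            # Apply as soon as MediaLive processes the request
            "ImmediateModeScheduleActionStartSettings": {}
        },
        "ScheduleActionSettings": {
            "StaticImageActivateSettings": {
                "Image": {"Uri": image_uri},  # image file stored in S3
                "Layer": 1,                   # overlay layer 0-7
                "Opacity": 100,
            }
        },
    }

action = build_overlay_action("logo-on", "s3://my-bucket/logo.png")
# You would apply it with boto3 (not imported here), e.g.:
#   boto3.client("medialive").batch_update_schedule(
#       ChannelId="1234567", Creates={"ScheduleActions": [action]})
print(action["ActionName"])
```

The channel must be running for immediate-mode actions to take effect.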
Related
If the resolution is 1080p, how much does it cost if 10,000 viewers watch the live stream for 1 hour?
You have several options:
MediaLive plus S3 plus CloudFront is one valid workflow if you don't need DASH formats or Encryption. Each service in the workflow has its own line item in the combined price.
MediaLive plus MediaPackage plus CloudFront is the high-end workflow most often chosen by media professionals. It provides multi-format streaming (creating DASH + CMAF automatically for example), and provides optional Encryption and optional Ad Insertion via MediaTailor. Each service in the workflow has its own line item in the combined price.
Amazon Interactive Video Service (IVS) offers "Twitch-quality" 1080p HLS streaming at attractive prices with very fast, easy configuration. This would be a one-stop-shop solution with simplified pricing.
To answer your question you can look at the public pricing guides, or start a conversation with an AWS salesperson via the web form at: https://aws.amazon.com/contact-us/sales-support/
Good luck with your project!
AWS Elemental MediaLive is a live encoder; it does not serve content to viewers. To serve the video content to viewers, you have to send the output to an origin server such as Amazon S3 or AWS Elemental MediaPackage. If you have 10,000 viewers, it is advisable to use CloudFront to deliver the content.
AWS Elemental MediaLive is charged at a fixed rate based on the combination of codec, bitrate, and resolution of the input and output streams. Details of the pricing can be found here:
https://aws.amazon.com/medialive/pricing/
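The dominant cost at 10,000 viewers is usually delivery, which you can estimate from the bitrate. A back-of-envelope sketch (the bitrate and the per-GB rate below are assumptions for illustration only; check the pricing pages for real numbers):

```python
# Back-of-envelope bandwidth/cost estimate for 10,000 viewers watching a
# 1080p stream for 1 hour. BITRATE and the per-GB rate are ASSUMPTIONS --
# consult the AWS pricing pages for actual figures.

BITRATE_MBPS = 5.0             # assumed average 1080p rendition bitrate
VIEWERS = 10_000
HOURS = 1.0
CLOUDFRONT_USD_PER_GB = 0.085  # hypothetical rate; varies by region and volume

# Mbit/s * seconds -> Mbit, /8 -> MB, /1000 -> GB (decimal units)
gb_per_viewer = BITRATE_MBPS * HOURS * 3600 / 8 / 1000
total_gb = gb_per_viewer * VIEWERS
delivery_cost = total_gb * CLOUDFRONT_USD_PER_GB

print(f"{gb_per_viewer:.2f} GB per viewer, {total_gb:,.0f} GB total")
print(f"~${delivery_cost:,.0f} delivery at the assumed rate")
```

At an assumed 5 Mbps that works out to about 2.25 GB per viewer-hour, or roughly 22,500 GB of egress for the whole audience, before adding the MediaLive/MediaPackage line items.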
I want to upscale a live video on AWS. The input will be an RTMP stream that I want to upscale using my own AI upscaling model, and the output will be distributed through a CDN.
I searched the internet for upscaling on AWS, but I couldn't find a way to do it using my own models. I already have a streaming pipeline set up where I stream my screen from my phone; the stream goes to AWS Elemental MediaLive, then to AWS Elemental MediaPackage, and then to a CDN for distribution across the globe. I don't understand how to include upscaling in the pipeline, or where in the pipeline it should be done to save transmission cost.
I already have a pipeline set up for streaming using AWS MediaLive and AWS MediaPackage.
Thanks for your message.
The scaling operation will need a compute resource, probably EC2.
Your scaler could in theory be configured to accept either a continuous bitstream or a series of flat files (TS segments). The bitstream option requires that you implement a streaming receiver/decoder, potentially based on the NGINX streaming proxy. The flat-file option might be simpler, as you could configure the scaler to read those files from an S3 bucket. The resulting output can be delivered to MediaLive either as a continuous bitstream or as a series of flat files.
Regarding order of operations, placing the scaler before MediaLive makes the most sense as you want to deliver the optimized content to MediaLive for further encoding into ABR stack renditions, and leverage other features such as logo branding, input switching, output stabilization in the event of input loss, et cetera. Note: at present, UHD or "4k" is the largest input resolution supported by MediaLive.
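The flat-file option above reduces to simple bookkeeping: track which ingest segments already have an upscaled counterpart and process the rest in order. A minimal sketch (the S3 prefixes are assumptions, and the upscaler itself is a stub you would replace with your own model):

```python
# Sketch of the flat-file approach: find TS segments under an "ingest/"
# prefix that have no counterpart under "upscaled/" yet, run them through
# the model, and write results where the downstream encoder can read them.
# The prefixes and the upscaler are placeholders/assumptions.

def pending_segments(input_keys, processed_keys):
    """Return ingest segments that have no upscaled counterpart yet, in order."""
    done = {k.replace("upscaled/", "ingest/") for k in processed_keys}
    return sorted(k for k in input_keys if k not in done)

def upscale_segment(data: bytes) -> bytes:
    # Placeholder: decode the TS segment, run your AI upscaler on each
    # frame, re-encode, and return the new segment bytes.
    raise NotImplementedError

# Bookkeeping logic only (no S3 access here); with boto3 you would list
# the two prefixes, download each pending segment, upscale, and upload.
todo = pending_segments(
    ["ingest/seg_001.ts", "ingest/seg_002.ts", "ingest/seg_003.ts"],
    ["upscaled/seg_001.ts"],
)
print(todo)
```

In a real deployment the loop would run continuously on the EC2 instance, with segment duration chosen small enough to keep the added latency acceptable.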
I think what I want to do is utilize MediaStore as a backend to MediaPackage, but it's possible MediaPackage has everything I need; I just haven't been able to find any answers.
What I'm looking for is a way to record live video and have it available for playback. I was looking at this solution from AWS for livestreaming, and while it is close, I want to store the video for playback at a later date as well as broadcast it live.
My customer also wants the ability to upload videos that were not recorded live, so I think what I want to do is add MediaStore between the Lambda function and MediaPackage, so I can upload videos to MediaStore manually or set up a channel within MediaStore for live streams. Then I can have MediaPackage reference MediaStore to create the different file formats for consumption. The problem is that MediaPackage doesn't accept a MediaStore endpoint, only an S3 endpoint.
Any advice?
TIA
Using S3 and MediaPackage should be sufficient in your case. It is not necessary to use MediaStore.
I am assuming you are using AWS Elemental MediaLive, or an encoder from another vendor, to create an HLS feed that ingests into MediaPackage. In MediaPackage, you can create endpoints as needed. This AWS Media Services Simple Live workflow should give you an idea of how to build the workflow. [1]
To record the live video or create a live-to-VOD asset, you can create a harvest job in MediaPackage. MediaPackage will harvest the time frame that you indicated in the harvest job and save a copy in your S3 bucket. For more information, please read this article. [2]
To play back the live-to-VOD asset or an uploaded video, you can use the VOD functionality in MediaPackage to make the asset available for playback. For more information, please read this article. [3]
[1] https://github.com/aws-samples/aws-media-services-simple-live-workflow
[2] https://docs.aws.amazon.com/mediapackage/latest/ug/ltov-how.html
[3] https://docs.aws.amazon.com/mediapackage/latest/ug/vod-content.html
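As a concrete sketch of the harvest-job step, these are the parameters such a job takes (IDs, times, bucket, and role below are placeholders; with boto3 you would submit it via the MediaPackage client's `create_harvest_job` call, shown only in a comment):

```python
# Sketch: parameters for a MediaPackage live-to-VOD harvest job.
# All names, times, and ARNs are placeholders for illustration.

def build_harvest_job(job_id, endpoint_id, start, end, bucket, role_arn):
    return {
        "Id": job_id,
        "OriginEndpointId": endpoint_id,
        "StartTime": start,  # ISO-8601; must fall within the endpoint's startover window
        "EndTime": end,
        "S3Destination": {
            "BucketName": bucket,
            "ManifestKey": f"vod/{job_id}/index.m3u8",
            "RoleArn": role_arn,  # role MediaPackage assumes to write into S3
        },
    }

params = build_harvest_job(
    "game-highlights-01", "my-live-endpoint",
    "2023-01-01T12:00:00Z", "2023-01-01T13:00:00Z",
    "my-vod-bucket", "arn:aws:iam::123456789012:role/MediaPackageHarvestRole",
)
# With boto3:  boto3.client("mediapackage").create_harvest_job(**params)
print(params["S3Destination"]["ManifestKey"])
```

Note that harvesting only works for time ranges still inside the endpoint's startover window, so configure that window before the content you want to keep airs.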
I want to extract one frame (screenshot) of a video stored in S3 at a specific time. What can I use to do that?
Lambda functions
Using the SDK
Amazon Elastic Transcoder has the ability to create videos from source files. For example, it can stitch together multiple videos or extract a portion of a video.
Elastic Transcoder also has the ability to generate thumbnails of videos that it is processing.
Thus, you should be able to:
Create a job in Elastic Transcoder to create a very short-duration video from the desired time in the source video
Configure it to output a thumbnail of the new video to Amazon S3
You can then dispose of the video (configure S3 to delete it after a day) and just use the thumbnail.
Please note that Elastic Transcoder works asynchronously, so you would create a Job to trigger the above activities, then come back later to retrieve the results.
The benefit of the above method is that there is no need to download or process the video file on your own Amazon EC2 instance. It is all done within Elastic Transcoder.
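A sketch of that job definition follows (pipeline ID, preset ID, and keys are placeholders; with boto3 you would submit it via the Elastic Transcoder client's `create_job` call, shown only in a comment):

```python
# Sketch: an Elastic Transcoder job that clips ~1 second from the source at
# the desired timestamp and emits a thumbnail of that clip to the pipeline's
# output bucket. IDs and keys are placeholders for illustration.

def build_clip_job(pipeline_id, source_key, start_time, preset_id):
    return {
        "PipelineId": pipeline_id,
        "Input": {
            "Key": source_key,
            # Clip a short span starting at the frame you want
            "TimeSpan": {"StartTime": start_time, "Duration": "00:00:01.000"},
        },
        "Output": {
            "Key": f"clips/{source_key}",
            "PresetId": preset_id,
            # Thumbnail patterns must include the {count} placeholder
            "ThumbnailPattern": f"thumbs/{source_key}-{{count}}",
        },
    }

params = build_clip_job("1111111111111-abcde1", "movie.mp4",
                        "00:01:30.000", "1351620000001-000010")
# With boto3:  boto3.client("elastictranscoder").create_job(**params)
print(params["Output"]["ThumbnailPattern"])
```

Pair this with an S3 lifecycle rule on the `clips/` prefix to dispose of the short video automatically, as suggested above.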
The AWS SDK does not have an API that extracts pictures from a video. You can use AWS to analyze videos, such as with the Amazon Rekognition service. For example:
Creating AWS video analyzer applications using the AWS SDK for Java
You can use Amazon Rekognition to detect faces, objects, and text in videos. This example detects text in a video:
https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/rekognition/src/main/java/com/example/rekognition/VideoDetectText.java
The Amazon S3 API has many operations, but extracting a picture from a video is not one of them. You can get an input stream of an object located in a bucket.
To extract a pic from a video, you would need to use a 3rd party API.
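The usual third-party route is ffmpeg: download the object from S3 (for example with the SDK's get-object input stream mentioned above), then grab a single frame at the timestamp. A sketch of the command (paths and timestamp are placeholders; the actual execution is left as a comment):

```python
# Sketch: build the ffmpeg command that extracts one frame at a timestamp.
# Paths and the timestamp are placeholders; the video would first be
# downloaded from S3 (e.g. via the SDK) to a local path.

def ffmpeg_frame_cmd(video_path, timestamp, out_path):
    return [
        "ffmpeg",
        "-ss", timestamp,   # seeking before the input is fast for one frame
        "-i", video_path,
        "-frames:v", "1",   # stop after a single video frame
        out_path,
    ]

cmd = ffmpeg_frame_cmd("/tmp/movie.mp4", "00:01:30", "/tmp/frame.jpg")
print(" ".join(cmd))
# To execute (requires ffmpeg installed):
#   subprocess.run(cmd, check=True)
```

This approach works anywhere you can run ffmpeg, including inside a Lambda function packaged with an ffmpeg layer.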
I want to set up live audio streaming and broadcasting. I would stream audio from a laptop or mobile device through Altacast/rtpmic (or any caster) to an AWS Elemental MediaLive input.
I set up an AWS Elemental MediaLive input (RTMP) and then configured AWS Elemental MediaPackage after it. I created endpoints from MediaPackage.
I tried streaming audio from rtpmic to the IPv4 address and port number that I got from Elemental MediaLive, then tried to hit the endpoints (output) that I got from MediaPackage, but I keep getting "error 404" at the endpoints. I have also set up channels that take input from the MediaLive output, and the channels have endpoints that I am checking for output.
Where do you think I might be going wrong? How can I check whether the Elemental MediaLive input is receiving audio? Can't I use AWS Elemental MediaLive for audio streaming?
I am new to this, so please excuse me if I am stating anything incorrectly or not providing any information correctly.
I spoke with AWS MediaLive chat support, and I understand that Elemental MediaLive was not created for audio-only streaming, so what I was trying to do cannot be done. Users have tried sending a black screen as the video input along with the audio, but that's not an ideal way of doing it.
I am trying Wowza now.