Audio streaming through MediaLive server - amazon-web-services

I want to set up live audio streaming and broadcasting. I would stream audio from a laptop or mobile device through Altacast/rtpmic (or any similar caster) to an AWS Elemental MediaLive input.
I set up an AWS Elemental MediaLive input (RTMP) and then configured AWS MediaPackage downstream of it. I created endpoints from MediaPackage.
I tried streaming audio from rtpmic to the IPv4 address and port number that Elemental MediaLive provided, and then tried to hit the endpoints (outputs) that I got from MediaPackage, but I keep getting a 404 error at those endpoints. I have also set up MediaPackage channels that take their input from the MediaLive output, and those channels have the endpoints I am checking for output.
Where do you think I might be going wrong? How can I check whether the Elemental MediaLive input is actually receiving audio? Can't I use AWS Elemental MediaLive for audio-only streaming?
I am new to this, so please excuse me if I am stating anything incorrectly or leaving out information.
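One way to check whether a MediaLive input is receiving anything at all is to look at the channel's CloudWatch metrics (MediaLive publishes metrics such as NetworkIn and ActiveAlerts under the AWS/MediaLive namespace). A minimal boto3 sketch, assuming a running channel whose ID and region you substitute for the placeholders below:

```python
# Hypothetical sketch: verify that a MediaLive channel is receiving data on its input.
# CHANNEL_ID and the region are placeholders for your own setup.
import boto3
from datetime import datetime, timedelta

CHANNEL_ID = "1234567"  # placeholder

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/MediaLive",
    MetricName="NetworkIn",                       # bytes arriving on the channel's input
    Dimensions=[
        {"Name": "ChannelId", "Value": CHANNEL_ID},
        {"Name": "Pipeline", "Value": "0"},
    ],
    StartTime=datetime.utcnow() - timedelta(minutes=15),
    EndTime=datetime.utcnow(),
    Period=60,
    Statistics=["Average"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
# A flat zero here suggests the RTMP push never reached the MediaLive input.
```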

I spoke with AWS chat support for MediaLive, and I understand that Elemental MediaLive was not designed for audio-only streaming, so what I was trying to do cannot be done. Users have worked around this by sending a black screen as the video input along with the audio, but that's not an ideal way of doing it.
I am trying Wowza now.
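For reference, the black-screen workaround mentioned above can be approximated with ffmpeg by generating a synthetic black video track and muxing it with the audio before pushing to the MediaLive RTMP input. A rough sketch, assuming ffmpeg is on the PATH and that the RTMP URL and audio source are replaced with your own:

```python
# Hypothetical sketch of the "black video + real audio" workaround.
# The RTMP URL and audio file are placeholders; ffmpeg must be installed.
import subprocess

RTMP_URL = "rtmp://<medialive-input-ip>:1935/<app>/<stream-key>"  # from the MediaLive input

cmd = [
    "ffmpeg",
    "-re", "-f", "lavfi", "-i", "color=black:s=1280x720:r=30",  # synthetic black video, real-time
    "-re", "-i", "mic_capture.wav",                              # placeholder audio source, real-time
    "-c:v", "libx264", "-preset", "veryfast", "-g", "60",
    "-c:a", "aac", "-b:a", "128k",
    "-shortest",
    "-f", "flv", RTMP_URL,
]
subprocess.run(cmd, check=True)
```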

Related

I am trying to understand how to upscale live videos using my own AI models on AWS

I want to upscale a live video on AWS. The input will be an RTMP stream which I want to upscale using my own AI upscaling model, and the output will then be distributed through a CDN.
I tried searching the internet for upscaling on AWS, but I couldn't find a way to do it using my own models. I already have a streaming pipeline set up where I stream my screen from my phone; the stream goes to AWS Elemental MediaLive, then to AWS Elemental MediaPackage, and then to a CDN for distribution across the globe. I don't understand how to include the upscaling in the pipeline, and where in the pipeline the upscaling should be done to save transmission cost.
I already have a pipeline set up for streaming using AWS MediaLive and AWS MediaPackage.
Thanks for your message.
The scaling operation will need a compute resource, probably EC2.
Your scaler could in theory be configured to accept either a continuous bitstream or a series of flat files (TS segments). The 'bitstream' option will require that you implement a streaming receiver/decoder, potentially based on the NGINX streaming proxy. The flat file option might be simpler, as you could configure the scaler to read those files from an S3 bucket. The resulting output can be delivered to MediaLive either as a continuous bitstream or as a series of flat files.
Regarding order of operations, placing the scaler before MediaLive makes the most sense as you want to deliver the optimized content to MediaLive for further encoding into ABR stack renditions, and leverage other features such as logo branding, input switching, output stabilization in the event of input loss, et cetera. Note: at present, UHD or "4k" is the largest input resolution supported by MediaLive.
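To make the flat-file option concrete, here is a hedged sketch of the kind of loop a scaler instance might run: it polls an S3 bucket for new TS segments, runs each through a placeholder upscale_segment() call standing in for your own model, and writes the result to an output prefix that the next stage can pull from. The bucket name, prefixes, and the model call are all assumptions.

```python
# Rough sketch of an S3-driven segment scaler (names and the model call are placeholders).
import time
import boto3

s3 = boto3.client("s3")
BUCKET = "my-live-segments"          # assumed bucket
IN_PREFIX, OUT_PREFIX = "incoming/", "upscaled/"

def upscale_segment(ts_bytes: bytes) -> bytes:
    """Placeholder for your own AI upscaling model (decode -> infer -> re-encode)."""
    raise NotImplementedError

seen = set()
while True:
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=IN_PREFIX)
    for obj in resp.get("Contents", []):
        key = obj["Key"]
        if key in seen or not key.endswith(".ts"):
            continue
        body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
        out = upscale_segment(body)
        s3.put_object(Bucket=BUCKET, Key=OUT_PREFIX + key.split("/")[-1], Body=out)
        seen.add(key)
    time.sleep(2)   # poll for newly written segments
```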

AWS Elemental Live - Where do I find the Live IP?

I'm trying to work with AWS Elemental Live and I've managed to do the following:
• create a channel and an endpoint with AWS Elemental MediaPackage
• configure channel, input and output with AWS Elemental MediaLive
• stream a random video using OBS and check that it's showing properly using the "play" link in the AWS console
My next step would be to test out the Graphic Overlay, so I checked this doc but I can't figure out where to find the Live IP.
Any insight?
If you are trying to work with a static overlay on AWS Elemental Live (on-prem hardware), then you are referring to the correct documentation, and you would use the Elemental Live encoder's IP address, which can be found in the web interface by navigating to Settings --> Network --> Current Settings.
If you are trying to work with image overlays on an AWS MediaLive channel, the correct document to refer to is this one.
You can use the Elemental Live ingest endpoint IP as the Live IP address.
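For the MediaLive case, image overlays are applied through the channel schedule rather than through a Live IP. A hedged boto3 sketch, assuming a running channel and an overlay PNG already in S3 (the channel ID, bucket, and action name are placeholders):

```python
# Hypothetical sketch: activate a static image overlay on a running MediaLive channel.
import boto3

medialive = boto3.client("medialive", region_name="us-east-1")

medialive.batch_update_schedule(
    ChannelId="1234567",                          # placeholder channel ID
    Creates={
        "ScheduleActions": [
            {
                "ActionName": "show-logo",
                "ScheduleActionStartSettings": {
                    "ImmediateModeScheduleActionStartSettings": {}
                },
                "ScheduleActionSettings": {
                    "StaticImageActivateSettings": {
                        "Image": {"Uri": "s3ssl://my-bucket/logo.png"},  # assumed S3 location
                        "Layer": 1,
                        "Opacity": 100,
                    }
                },
            }
        ]
    },
)
```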

How to send AWS Kinesis Video Stream (frames) to EC2 instance?

Requirement: For deep learning predictions, I want to send frames from my local system's camera to an EC2 instance.
Work done till now:
I am able to test my deep learning code on my local system.
I have uploaded the code on EC2 instance.
I am able to send the live feed from my local camera to AWS Kinesis Video Stream.
Problem: I don't know how to send the AWS Kinesis Video Stream frames to the EC2 instance for predictions. I have searched everywhere; I know sending frames to EC2 is one of the use cases of AWS Kinesis Video Streams, but I don't know how it can be done.
If I understand correctly, you want the code on the EC2 instance to consume the frames being sent by your system camera. You can base your application on the parser library (https://github.com/aws/amazon-kinesis-video-streams-parser-library) and run it on the EC2 instance to capture the frames and perform deep learning predictions. Hope this gives an idea!
Per divku's suggestion, you can use the GetMedia API and the parser library she referenced to read, parse out, and "consume" the frames. You can cut GitHub issues against the assets in question to get more precise and timely responses.
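As a rough illustration of the GetMedia path from Python (the parser library above is Java), the sketch below only fetches the raw stream on the consumer side; extracting individual frames still requires parsing the returned MKV fragments, for example with that parser library or an equivalent. The stream name and region are placeholders.

```python
# Hypothetical sketch: pull the raw Kinesis Video stream on the consumer (EC2) side.
import boto3

STREAM_NAME = "my-camera-stream"          # placeholder

kvs = boto3.client("kinesisvideo", region_name="us-east-1")
endpoint = kvs.get_data_endpoint(
    StreamName=STREAM_NAME,
    APIName="GET_MEDIA",
)["DataEndpoint"]

media = boto3.client("kinesis-video-media", endpoint_url=endpoint, region_name="us-east-1")
resp = media.get_media(
    StreamName=STREAM_NAME,
    StartSelector={"StartSelectorType": "NOW"},   # start from the live position
)

# resp["Payload"] is a stream of MKV fragments; a real consumer would parse these
# into frames before running the deep learning model on them.
chunk = resp["Payload"].read(8192)
print(len(chunk), "bytes received")
```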

How to live stream an existing video stored in S3?

I upload a video to S3, use AWS MediaConvert for transcoding, and then deliver it to end users using MediaPackage (VOD). Now I need to live stream a video that is already available in S3.
I know about MediaLive; the documentation says the input of MediaLive is a live stream source such as a camera or broadcasting software. I'm not sure whether MediaLive accepts a source from a video stored in S3.
Please let me know how to solve this problem.
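For what it's worth, MediaLive also supports file-based inputs (for example an MP4 pulled from S3 or HTTP) in addition to push sources like RTMP. A minimal boto3 sketch of creating such an input, with the bucket, object key, and input name all being assumptions:

```python
# Hypothetical sketch: create a MediaLive input that pulls an MP4 file from S3.
import boto3

medialive = boto3.client("medialive", region_name="us-east-1")

resp = medialive.create_input(
    Name="vod-as-live-source",                        # placeholder name
    Type="MP4_FILE",                                  # file-based input type
    Sources=[
        {"Url": "s3ssl://my-bucket/videos/show.mp4"}  # assumed S3 location
    ],
)
print(resp["Input"]["Id"])
# A channel attached to this input can then play (or loop) the file as a live stream.
```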

What is the difference between the AWS Transcribe Streaming Transcription feature and Kinesis Video Streams (for audio input) for live streaming audio?

Hi, my requirement is this: I have a live audio stream as input, say a call between two people. I need to convert that audio to text live, pick certain keywords from the extracted text, and insert them into a database.
As per the architecture in https://github.com/aws-samples/amazon-connect-realtime-transcription, both the AWS Kinesis Video Streams service and AWS Transcribe are used for live streaming. But as per this link: https://aws.amazon.com/blogs/machine-learning/amazon-transcribe-now-supports-real-time-transcriptions/, AWS Transcribe supports live transcription, so why is Kinesis used in that architecture?
If anyone knows, please help me understand. I hope Amazon Connect can ingest live audio into AWS Transcribe for live transcription.
Amazon Kinesis Video Streams is the service that enables streaming voice data from Amazon Connect. Amazon Transcribe can ingest streams from any source for real-time transcription, but the only way to get that real-time data from Amazon Connect is via Kinesis. The launch announcement for real-time streams might help make this more clear:
With the customer voice stream feature, your customer audio is automatically sent to Amazon Kinesis Video Streams, where it can be accessed by the integrations that you allow. For example, you could integrate customer voice stream with real-time text transcription and sentiment analysis for immediate feedback on call quality, or use this feature with a 3rd party voice biometric product to authenticate the caller automatically without having to enter a password or confirm personal information.
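To illustrate the Transcribe side of that flow, here is a rough sketch using the amazon-transcribe streaming SDK for Python. The audio source (the part Connect and Kinesis Video Streams would supply) is stubbed out, and the sample rate, region, and keyword list are assumptions.

```python
# Hypothetical sketch: feed live PCM audio chunks to Amazon Transcribe streaming
# and watch the partial transcripts for keywords (the audio source is a stub).
import asyncio
from amazon_transcribe.client import TranscribeStreamingClient
from amazon_transcribe.handlers import TranscriptResultStreamHandler
from amazon_transcribe.model import TranscriptEvent

KEYWORDS = {"refund", "cancel"}        # assumed keywords of interest

class KeywordHandler(TranscriptResultStreamHandler):
    async def handle_transcript_event(self, event: TranscriptEvent):
        for result in event.transcript.results:
            for alt in result.alternatives:
                hits = KEYWORDS & set(alt.transcript.lower().split())
                if hits:
                    print("keyword hit:", hits, "->", alt.transcript)
                    # a real system would insert the hit into a database here

async def main(audio_chunks):
    client = TranscribeStreamingClient(region="us-east-1")
    stream = await client.start_stream_transcription(
        language_code="en-US",
        media_sample_rate_hz=8000,     # telephony audio is typically 8 kHz
        media_encoding="pcm",
    )

    async def send_audio():
        async for chunk in audio_chunks:   # e.g. audio read out of Kinesis Video Streams
            await stream.input_stream.send_audio_event(audio_chunk=chunk)
        await stream.input_stream.end_stream()

    handler = KeywordHandler(stream.output_stream)
    await asyncio.gather(send_audio(), handler.handle_events())
```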