I cannot find any information on how to handle a situation like this:
The stream is scheduled to start at about 3 o'clock.
1. Before the person who is streaming (let's call him the streamer) starts to stream, I would like to show a static image saying something like 'The event will start soon'.
2. The streamer starts pushing his stream to the RTMP endpoint, but he is late and starts at 3:02. Up until 3:02 the same picture should be visible (as in point 1).
3. The streamer is supposed to finish at 4 o'clock, but he finishes 5 minutes early (pressing stop on his device).
4. The ending screen should then be visible from 3:55 onwards.
I know that inputs can be switched in order to change the view, and that a switch can be scheduled for a fixed time, but I would like the switch to happen dynamically, i.e. when the streamer starts pushing to the RTMP URL and when he stops pushing to it (e.g. from the Larix software). How can I handle that in AWS MediaLive?
Thank you for asking this question on Stack Overflow. The easiest way to achieve what you are looking to do is with an Input Prepare scheduled action. The channel will then monitor the prepared input and raise an alert if the RTMP source is not there. When the RTMP source begins, the alert clears. You can send these alerts to a Lambda function that watches for them and performs the switch from the slate MP4 to the RTMP source when it sees the 'input missing' alert clear. The same can be done for when the RTMP input goes away.
Information on Prepare Inputs:
https://docs.aws.amazon.com/medialive/latest/ug/feature-prepare-input.html
Global configuration - Input loss behavior:
https://docs.aws.amazon.com/medialive/latest/ug/creating-a-channel-step3.html
Zach
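The Lambda side of this can be sketched roughly as follows. This is a minimal sketch, not a definitive implementation: the channel ID, the input attachment names, and the exact field names in the MediaLive alert event are assumptions you would adapt to your own channel and to the actual EventBridge payload.

```python
import json

# Assumed values: adapt to your channel's actual ID and
# input attachment names.
CHANNEL_ID = "1234567"
RTMP_INPUT = "rtmp-input"
SLATE_INPUT = "slate-mp4-input"

def build_input_switch(action_name, input_attachment_name):
    """Build an immediate input-switch schedule action in the
    shape expected by MediaLive's BatchUpdateSchedule API."""
    return {
        "ActionName": action_name,
        "ScheduleActionStartSettings": {
            "ImmediateModeScheduleActionStartSettings": {}
        },
        "ScheduleActionSettings": {
            "InputSwitchSettings": {
                "InputAttachmentNameReference": input_attachment_name
            }
        },
    }

def handle_alert(event):
    """Decide which input to switch to based on a MediaLive
    channel alert event (the 'alarm_state' field name is an
    assumption; check the real payload in your account)."""
    state = event.get("detail", {}).get("alarm_state")
    if state == "CLEARED":   # RTMP source appeared: go live
        return build_input_switch("to-live", RTMP_INPUT)
    if state == "SET":       # RTMP source lost: back to slate
        return build_input_switch("to-slate", SLATE_INPUT)
    return None

def lambda_handler(event, context):
    action = handle_alert(event)
    if action is None:
        return
    # In the real function, apply the switch via boto3
    # (requires AWS credentials, so it is only sketched here):
    # import boto3
    # boto3.client("medialive").batch_update_schedule(
    #     ChannelId=CHANNEL_ID,
    #     Creates={"ScheduleActions": [action]},
    # )
    print(json.dumps(action))
```

Note that schedule action names must be unique within a channel's schedule, so in practice you would append a timestamp to "to-live"/"to-slate".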
We are trying to receive customer calls through Amazon Connect and leave messages in Amazon Kinesis.
When we call Amazon Connect from our cell phones, the expected message plays and the beep tone sounds as expected, but then the call ends and we cannot leave a message. We tried removing the Wait and Stop media streaming blocks, but the problem persisted. What are we doing wrong?
Set Voice: OK
Play prompt(Message): OK
Play prompt(Beep): OK
Start media streaming: NG (fails here)
If you have a simple, easy to understand sample for this application, let me know!
It looks like the problem is your Wait block. Wait isn't supported for voice calls, so it errors immediately.
Replace the Wait block with a Get customer input block. Use text-to-speech for the prompt, set the prompt value manually to <speak></speak>, and set 'Interpret as' to SSML. Set it to detect DTMF and set the timeout to however long the message is allowed to be; from your flow above, that is 10 seconds.
This should get the customer's voice sent to the Kinesis stream, and you can process the stream from there.
There is a really thorough implementation guide for voicemail here. I've used this and then altered it to suit my exact needs in the past.
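Pulling the recorded audio back out of the Kinesis Video stream can be sketched as below with boto3. The stream name is a placeholder; `get_data_endpoint`/`get_media` are the standard calls for fetching the raw MKV fragments, but treat the rest as a sketch to adapt rather than a finished consumer.

```python
def build_start_selector(from_beginning=True):
    """StartSelector for Kinesis Video GetMedia: either the
    oldest retained fragment, or only fragments from now on."""
    if from_beginning:
        return {"StartSelectorType": "EARLIEST"}
    return {"StartSelectorType": "NOW"}

def read_voicemail(stream_name):
    """Fetch the raw MKV fragment stream for a voicemail.
    Requires boto3 and AWS credentials; not executed here."""
    import boto3
    kv = boto3.client("kinesisvideo")
    endpoint = kv.get_data_endpoint(
        StreamName=stream_name, APIName="GET_MEDIA"
    )["DataEndpoint"]
    media = boto3.client("kinesis-video-media", endpoint_url=endpoint)
    resp = media.get_media(
        StreamName=stream_name,
        StartSelector=build_start_selector(),
    )
    return resp["Payload"]  # streaming body of MKV fragments
```

The payload is MKV, so you would still need to demux the audio track (e.g. with ffmpeg) before transcribing or storing it.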
I am new to Event Hubs and am trying to integrate using .NET Core. I am able to read the incoming event data successfully, but I want to re-read the data. Is that possible?
Yes - assuming that the data hasn't passed the retention period.
Events are not removed from the stream when read; they remain available for any consumer who wishes to read them until they pass the configured retention period and age out of the stream.
When connecting, Event Hub consumers request a position in the stream to start reading from, which allows them to specify any event with a known offset or sequence number. Consumers may also begin reading at a specific point in time or request to begin at the first available event.
More information can be found in the Reading Events sample for the Azure.Messaging.EventHubs package.
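The question is about .NET, but the same replay options exist across the client libraries; here is a rough sketch with the Python `azure-eventhub` package (connection string and hub name are placeholders) showing how the starting position maps onto the replay choices described above.

```python
def starting_position(mode, value=None):
    """Map a replay choice onto the azure-eventhub
    `starting_position` argument: "-1" means the first available
    event, "@latest" means only new events, and an offset,
    sequence number, or datetime replays from that point."""
    if mode == "earliest":
        return "-1"
    if mode == "latest":
        return "@latest"
    if mode in ("offset", "sequence_number", "enqueued_time"):
        return value
    raise ValueError(f"unknown mode: {mode}")

def replay_all(conn_str, hub_name):
    """Re-read every retained event. Requires the azure-eventhub
    package and a live namespace; not executed here."""
    from azure.eventhub import EventHubConsumerClient
    client = EventHubConsumerClient.from_connection_string(
        conn_str, consumer_group="$Default", eventhub_name=hub_name
    )
    with client:
        client.receive(
            on_event=lambda ctx, ev: print(ev.body_as_str()),
            starting_position=starting_position("earliest"),
        )
```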
I want to record audio from my browser and live-stream it for storage in Amazon S3. I cannot wait until the recording is finished, as the client can close the browser, so I would like to store what has been spoken so far (or the nearest 5-10 seconds).
The issue is that multipart upload does not support parts smaller than 5 MiB (except the last part), and audio files will for the most part be less than 5 MiB.
Ideally I would like to send a chunk every 5 seconds, so that whatever was said in the last 5 seconds is uploaded.
Can S3 support this, or should I use another AWS service to hold the recording parts first? I have heard about Kinesis streams but am not sure whether they serve the purpose.
In a web application, people upload files to be processed. File processing can take anywhere between 30 seconds and 30 minutes per file depending on the size of the file. Within an upload session, people upload anywhere between 1 and 20 files and these may be uploaded within multiple batches with the time lag between the batches being up to 5 minutes.
I want to notify the uploader when processing has completed, but I do not want to send a notification when a first batch finishes processing while another batch is still going to be uploaded within the 2-5 minute window. I.e., the uploader sees multiple batches of files as one single "work period", which he may only do every couple of days.
Instead of implementing a regular check, I have implemented the notification with AWS SQS:
- on completion of each file being processed, a message is sent to the queue with a 5-minute delivery delay.
- when this message is processed, it checks whether any other file is still being processed; if not, it sends the email notification.
This approach leads to multiple emails being sent if multiple files complete processing within the last 5 minutes of all file processing.
As a way to fix this, I have thought of using an AWS SQS FIFO queue with the same MessageDeduplicationId; however, I understand I would need to let through the last message with a given deduplication id, not the first.
Is there a better way to solve this with event driven systems? Ideally I want to limit the amount of queues needed, as this system is very prototype driven and also not introduce another place to store state - I already have a relational database.
You can use AWS Step Functions to control this type of workflow.
1. Upload files to s3
2. Store jobs in DynamoDB
3. Start StepFunction flow with job Id
4. Last step of flow is sending email notification
...
PROFIT!
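Steps 3-4 could be described with a state machine definition along these lines. This is a sketch in Amazon States Language (built as a Python dict); the Lambda ARNs are placeholders, and a real definition would add retries and error handling.

```python
import json

# Lambda ARNs below are placeholders.
definition = {
    "Comment": "Process one upload session, then notify once",
    "StartAt": "ProcessFiles",
    "States": {
        "ProcessFiles": {
            # Map runs the per-file processing for every key in
            # the session's input, in parallel.
            "Type": "Map",
            "ItemsPath": "$.fileKeys",
            "Iterator": {
                "StartAt": "ProcessOneFile",
                "States": {
                    "ProcessOneFile": {
                        "Type": "Task",
                        "Resource": "arn:aws:lambda:...:function:process-file",
                        "End": True,
                    }
                },
            },
            "Next": "SendEmail",
        },
        "SendEmail": {
            # Runs exactly once, after every file has finished.
            "Type": "Task",
            "Resource": "arn:aws:lambda:...:function:send-notification",
            "End": True,
        },
    },
}

def state_names(defn):
    """Top-level state names, a small sanity-check helper."""
    return list(defn["States"].keys())

print(json.dumps(definition, indent=2))
```

Because the notification is the last state of the execution, the "send one email per session" behaviour falls out of the flow itself instead of needing delayed-message bookkeeping.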
I was not able to find a way to do this without using some sort of atomic central storage, as suggested by @Ivan Shumov.
In my case, a relational database is used to store file data and various processing metrics, so I have refined the process as follows:
- on completion of each file being processed, a message is sent to the queue with a 5-minute delivery delay. The 5 minutes represents the largest time lag between upload batches in a single work session.
- when the first file is processed, a unique processing id is established, stored with the user account, and linked to all files in this session.
- when the delayed message is processed, it checks whether any other file is still being processed and whether there is a processing id against the user.
- if there is a processing id against the user, it clears the processing ids from the user and file records, then sends the email notification.
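The check-and-clear step is essentially an atomic "claim" on the processing id. A sketch of that handler logic, with an in-memory stand-in for the relational database (all table and field names here are made up for illustration):

```python
# In-memory stand-in for the relational database. In production
# the clear step should be a single atomic UPDATE so that only
# one delayed message can claim the processing id.
db = {"users": {}, "files_in_progress": set()}
emails_sent = []

def file_completed(user_id, file_id, processing_id):
    """Called when a file finishes: record the session's
    processing id (the delayed SQS send is elided)."""
    db["users"].setdefault(user_id, {})["processing_id"] = processing_id
    db["files_in_progress"].discard(file_id)

def handle_delayed_message(user_id):
    """Runs ~5 minutes after each file completes."""
    if db["files_in_progress"]:
        return  # another file still processing: do nothing
    user = db["users"].get(user_id, {})
    if user.get("processing_id") is None:
        return  # already claimed earlier: no duplicate email
    user["processing_id"] = None  # clearing the id is the claim
    emails_sent.append(user_id)

# Simulate three files finishing within the same 5-minute window:
db["files_in_progress"] = {"f1", "f2", "f3"}
for f in ("f1", "f2", "f3"):
    file_completed("alice", f, "session-42")
for _ in range(3):  # three delayed messages then arrive
    handle_delayed_message("alice")
print(emails_sent)  # only one notification goes out
```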
I am trying to implement the producer mentioned here (https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-java/blob/master/src/main/demo/com/amazonaws/kinesisvideo/demoapp/PutMediaDemo.java).
I have an MKV file which I want to upload in a loop to act as a producer for a Kinesis video stream, but the program hangs on line 122 (latch.await()). It is stuck at this line without giving any error, and I cannot see anything on the Amazon video preview tab.
What am I doing wrong?
Line 122 (latch.await()) is waiting for an acknowledgement or a connection-close event. A firewall or network condition could be causing it to wait forever. Before trying your own MKV file, were you able to get the demo running with the sample MKV files and see playback in the console? Let us know if that succeeds in your environment.
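One way to make this failure mode visible instead of hanging forever is to wait with a timeout; in the Java demo that would be `latch.await(30, TimeUnit.SECONDS)`. The same idea, sketched in Python with a `threading.Event` standing in for the latch (the names and the 30-second figure are illustrative, not from the demo):

```python
import threading

# Stand-in for the latch the demo counts down when an ACK or
# connection-close event arrives.
ack_received = threading.Event()

def wait_for_ack(timeout_seconds):
    """Wait for the ACK, but give up after a timeout so a
    firewall or network problem surfaces as an error rather
    than an indefinite hang."""
    if not ack_received.wait(timeout=timeout_seconds):
        raise TimeoutError(
            "no ACK from Kinesis Video within "
            f"{timeout_seconds}s; check network/firewall, "
            "credentials, and the stream name"
        )

# With a short timeout and no ACK, this raises instead of hanging:
try:
    wait_for_ack(0.1)
except TimeoutError as e:
    print("diagnosed:", e)
```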