How to get the stream URL and key from AWS MediaLive? - amazon-web-services

I need to design a solution where I take a Zoom live stream as input and save 10-second chunks in an S3 bucket, so that I can run AWS Transcribe on them.
For live streaming to a custom client, Zoom takes a stream URL and stream key. I first tried AWS IVS for streaming. IVS gives a stream URL and key, which I supplied to Zoom, but I couldn't find a way to intercept the stream and store audio chunks in S3.
Next I found MediaLive, which seemed promising as it takes an input source and an output destination. I set the input type to RTMP (Push), but I am not getting a stream URL or stream key that I can give to Zoom.
How can I get this stream URL and key? Or am I approaching this all wrong? Any help is appreciated.

Thanks for your message. The RTMP details belong to the MediaLive Input you defined, independent of whichever Channel the Input might be attached to. Have a look at the Inputs section in your Console.
Alternatively, you can run a command like this from your AWS CLI or your CloudShell prompt:
aws medialive describe-input --input-id 1493107

which returns a description like this:

{
    "Arn": "arn:aws:medialive:us-west-2:123456123456:input:1493107",
    "AttachedChannels": [],
    "Destinations": [
        {
            "Ip": "44.222.111.85",
            "Port": "1935",
            "Url": "rtmp://44.222.111.85:1935/live/1"
        }
    ],
    "Id": "1493107",
    "InputClass": "SINGLE_PIPELINE",
    "InputDevices": [],
    "InputPartnerIds": [],
    "InputSourceType": "STATIC",
    "MediaConnectFlows": [],
    "Name": "RTMP-push-6",
    "SecurityGroups": [
        "313985"
    ],
    "Sources": [],
    "State": "DETACHED",
    "Tags": {},
    "Type": "RTMP_PUSH"
}
The two parameters after the ":1935/" in the URL are the App name and Instance name. They should be unique and not blank. You can use simple values as per my example. The stream key can be left blank on your transmitting device.
You can test the connectivity into the MediaLive Channel using an alternate source of RTMP to confirm the cloud side is listening correctly. There are various phone apps that will push RTMP; ffmpeg also works.
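For example, here is a rough sketch of pushing a short test pattern and tone with ffmpeg from Python (the RTMP URL is the Destinations Url from the describe-input output above; substitute your own Input's URL):

import subprocess

# Push a synthetic test picture plus a test tone to the MediaLive RTMP input
# for 60 seconds. The URL is the Destinations[].Url from describe-input.
rtmp_url = "rtmp://44.222.111.85:1935/live/1"

subprocess.run([
    "ffmpeg", "-re",
    "-f", "lavfi", "-i", "testsrc=size=1280x720:rate=30",          # test video
    "-f", "lavfi", "-i", "sine=frequency=440:sample_rate=44100",   # test audio
    "-c:v", "libx264", "-preset", "veryfast", "-pix_fmt", "yuv420p",
    "-c:a", "aac", "-b:a", "128k",
    "-t", "60",
    "-f", "flv", rtmp_url,
], check=True)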
I suggest adding a VOD source as the first input to your MediaLive channel, to confirm the channel starts correctly and produces a short bit of good output to your intended destinations. All the metrics and alarms should be healthy. When that works as intended, switch to your intended RTMP input.
You can monitor network-in bytes and input video frame rate metrics from AWS CloudWatch. Channel event logs will also be logged to CloudWatch if you enable the Channel logging option on your MediaLive channel (recommended).
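If you prefer to script the check, a minimal boto3 sketch for pulling the input frame rate metric might look like this (the channel ID is a placeholder, and I'm assuming the ChannelId/Pipeline dimensions shown in the MediaLive console under the AWS/MediaLive namespace):

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

# "1234567" is a placeholder MediaLive channel ID.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/MediaLive",
    MetricName="InputVideoFrameRate",
    Dimensions=[
        {"Name": "ChannelId", "Value": "1234567"},
        {"Name": "Pipeline", "Value": "0"},
    ],
    StartTime=datetime.utcnow() - timedelta(minutes=15),
    EndTime=datetime.utcnow(),
    Period=60,
    Statistics=["Average"],
)
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])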
I hope this helps!

Input security groups can be created from the AWS MediaLive Console or via the AWS CloudShell CLI, or your local aws-cli, using a command of the form:
'aws medialive create-input-security-group'. Add the 'help' parameter for details on syntax.
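The equivalent with boto3, as a quick sketch (the wide-open CIDR is only sensible for a quick test; lock it down to your encoder's address range for anything real):

import boto3

medialive = boto3.client("medialive")

# Whitelist the CIDR range your encoder (Zoom, ffmpeg, etc.) will push from.
# 0.0.0.0/0 is wide open and only reasonable for testing.
response = medialive.create_input_security_group(
    WhitelistRules=[{"Cidr": "0.0.0.0/0"}]
)
print(response["SecurityGroup"]["Id"])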

Related

How to redirect multiple ECS log streams into a single log stream in CloudWatch

I currently have my application running in ECS. I have enabled the awslogs agent, indicating the Log group and the region. Everything works great: it sends the logs to the Log group and creates a Log stream. However, every time I restart the container, it creates a new Log stream.
Is there a way that, instead of creating a new Log stream each time the container restarts, everything goes into a single Log stream?
I've been looking for a solution for a long time and I haven't found anything.
For example, instead of there being 2 Log streams, there would be only 1, no matter how many times the container is restarted.
The simplest way is to use the PutLogEvents API directly. Beyond that you can get as fancy as you want. You could use a FireLens sidecar container in your task to handle all events using a logging API that writes directly to CloudWatch.
For example, you can do this in Python with the boto3 CloudWatch Logs put_log_events call:
import time
import boto3

# The log group and log stream must already exist; timestamps are in
# milliseconds since the epoch.
response = boto3.client("logs").put_log_events(
    logGroupName="your-log-group",
    logStreamName="your-log-stream",
    logEvents=[
        {"timestamp": int(time.time() * 1000), "message": "log message"},
    ],
)

how to stream microphone audio from browser to S3

I want to stream the microphone audio from the web browser to AWS S3.
Got it working
this.recorder = new window.MediaRecorder(...);
this.recorder.addEventListener('dataavailable', (e) => {
    this.chunks.push(e.data);
});
and then, when the user clicks stop, upload the chunks (new Blob(this.chunks, { type: 'audio/wav' })) as a multipart upload to AWS S3.
But the problem is that if the recording is 2-3 hours long, the upload might take exceptionally long, and the user might close the browser before the recording finishes uploading.
Is there a way we can stream the web audio directly to S3 while it's going on?
Things I tried but couldn't get a working example for:
Kinesis Video Streams: it looks like it's only for real-time streaming between multiple clients, and I would have to write my own client to then save the stream to S3.
I thought about using Kinesis Data Firehose, but couldn't find any client-side data producer for the browser.
I even looked into AWS Lex and AWS IVS, but I think they are over-engineering for my use case.
Any help will be appreciated.
You can set the timeslice parameter when calling start() on the MediaRecorder. The MediaRecorder will then emit chunks which roughly match the length of the timeslice parameter.
You could upload those chunks using S3's multipart upload feature as you already mentioned.
Please note that you need a library like extendable-media-recorder if you want to record a WAV file since no browser supports that out of the box.
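For the server-side half, a rough boto3 sketch of feeding those chunks into an S3 multipart upload could look like this (the bucket, key, and the way chunks arrive are all placeholders; note that every part except the last must be at least 5 MB, so small MediaRecorder chunks need to be buffered before each part is sent):

import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "your-bucket", "recordings/session-1.webm"  # placeholder names

upload = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY)
parts = []

def upload_part(part_number, buffered_bytes):
    # Each part except the last must be >= 5 MB.
    resp = s3.upload_part(
        Bucket=BUCKET, Key=KEY, UploadId=upload["UploadId"],
        PartNumber=part_number, Body=buffered_bytes,
    )
    parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})

# ... call upload_part() as buffered MediaRecorder chunks arrive from the browser ...

s3.complete_multipart_upload(
    Bucket=BUCKET, Key=KEY, UploadId=upload["UploadId"],
    MultipartUpload={"Parts": parts},
)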

AWS IVS get notified for the stream start and stream end

I am using AWS IVS (Interactive Video Service) for live streaming, and I need a notification when the stream starts and when it ends. In Amazon EventBridge I created a rule with IVS as the source and a queue as the target, but I am not getting any messages in the queue when the stream starts or ends. I am polling the queue but it is empty. I think the event pattern in the EventBridge rule is wrong. Can someone help me validate the event pattern below, or explain how to get notified when a stream starts or ends in AWS IVS?
{
    "source": [
        "aws.ivs"
    ],
    "detail": {
        "stream_status": [
            "Stream End",
            "Stream Start",
            "Session Created"
        ]
    }
}
The EventBridge sample event had a bug where event_name was shown improperly as eventName. If you manually specify event_name, the events will properly fire and you should be good to use this rule for your needs.
Refer to the documentation here.
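As a sketch, creating the rule with boto3 and the corrected event_name field might look like this (the rule name, detail-type, and queue ARN are placeholders/assumptions; the SQS queue also needs a resource policy that allows EventBridge to send messages to it):

import json
import boto3

events = boto3.client("events")

# Assumed pattern: source aws.ivs with the corrected event_name detail field.
pattern = {
    "source": ["aws.ivs"],
    "detail-type": ["IVS Stream State Change"],
    "detail": {
        "event_name": ["Stream Start", "Stream End", "Session Created"]
    },
}

events.put_rule(Name="ivs-stream-state", EventPattern=json.dumps(pattern))
events.put_targets(
    Rule="ivs-stream-state",
    Targets=[{
        "Id": "ivs-events-queue",
        "Arn": "arn:aws:sqs:us-west-2:123456789012:ivs-events",  # placeholder queue ARN
    }],
)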
IMO, you have to manage it yourself. AWS does not provide any automated messages when your IVS endpoint is ingesting data.
The best solution I can think of right now is an observer pattern using WebSockets.
The dirtier implementation would be to send a message over a WebSocket whenever your data source is streaming. This means you have to trigger it somewhere in your interface if you're using another broadcasting service.
The better way would be a service that checks your stream health and sessions regularly and notifies your clients whenever you have a live session, as well as providing info whenever your session health is dropping.
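If you go the polling route, boto3's IVS client can tell you whether a channel is currently live; a minimal sketch (the channel ARN is a placeholder):

import boto3

ivs = boto3.client("ivs")
CHANNEL_ARN = "arn:aws:ivs:us-west-2:123456789012:channel/abcd1234"  # placeholder

try:
    # get_stream only succeeds while the channel is broadcasting.
    stream = ivs.get_stream(channelArn=CHANNEL_ARN)["stream"]
    print("state:", stream["state"], "health:", stream["health"])
except ivs.exceptions.ChannelNotBroadcasting:
    print("channel is offline")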

Kinesis agent not sending .log files through firehose

I've set up a Kinesis Firehose and installed the Kinesis agent as described in the AWS docs. I can get test data through to the S3 bucket, but the Kinesis agent won't send any .log files through. I suspect a problem connecting the agent to the Firehose.
My /etc/aws-kinesis/agent.json file is below. I've also tried the "firehose.endpoint" without the https://, but I still can't get any data through.
I've verified that the aws-kinesis-agent service is running.
I'm not using the kinesis.endpoint/kinesisStream, but I've left the flow in the agent.json file. Could this be a problem?
What am I missing?
{
    "cloudwatch.emitMetrics": true,
    "kinesis.endpoint": "",
    "firehose.endpoint": "https://firehose.us-west-2.amazonaws.com",
    "flows": [
        {
            "filePattern": "/home/ec2-user/src/Fake-Apache-Log-Generator/*.log*",
            "kinesisStream": "yourkinesisstream",
            "partitionKeyOption": "RANDOM"
        },
        {
            "filePattern": "/home/ec2-user/src/Fake-Apache-Log-Generator/*.log*",
            "deliveryStream": "apachelogfilesdeliverystream"
        }
    ]
}
EDIT:
The log file at /var/log/aws-kinesis-agent/aws-kinesis-agent.log showed 0 records being parsed. The log message led me to this post, and I made the recommended fixes. In addition, I had to remove the Kinesis flow from the /etc/aws-kinesis/agent.json file to avoid an Exception that showed up in the log files.
Bottom line is that the aws-kinesis-agent can't read files from /home/ec2-user/ or its subdirectories, and you have to fix up the agent.json file.
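For reference, a fixed-up agent.json with only the Firehose flow, pointing at a log location the agent user can actually read (the path shown is just an example), would look roughly like this:

{
    "cloudwatch.emitMetrics": true,
    "firehose.endpoint": "https://firehose.us-west-2.amazonaws.com",
    "flows": [
        {
            "filePattern": "/tmp/logs/*.log*",
            "deliveryStream": "apachelogfilesdeliverystream"
        }
    ]
}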
The Kinesis agent is not able to read logs from files under /home/ec2-user/ due to a permissions issue. Try changing your log location to /tmp/logs/<log-file>.
Add the kinesis agent to the sudoers group:
sudo usermod -aG sudo aws-kinesis-agent-user
Another possibility is data flow, see this answer: https://stackoverflow.com/a/64610780/5697992

How can I customize the entire email notification in Stackdriver Alerting?

Currently, the message specified in the Document field while creating an alerting policy appears in the Document field of the Stackdriver alert email.
I would like to overwrite the entire email message body with my custom content.
How can I overwrite the message body of Stackdriver Alert email with my custom message?
Is there any other workaround to do this?
You should be able to send the notification to a webhook, and this could directly be an HTTP-triggered Cloud Function.
This Cloud Function would receive all the information from the alert, and you can follow this tutorial to use SendGrid to send your alerts.
This is a lot more complex than just setting up the email notifications, but it also gives you amazing flexibility with alerts: not only can you write the message however you want, you can also process the data in any way you want:
You have low-priority alerts? Then store them and just send a digest once in a while instead of spamming.
Want to change who is sent the alert depending on a calendar rotation? Use the function to look up who should be notified.
And those are just some random quick ideas I got while writing this message.
The information provided in the POST body is this one (that's just a sample):
{
    "incident": {
        "incident_id": "f2e08c333dc64cb09f75eaab355393bz",
        "resource_id": "i-4a266a2d",
        "resource_name": "webserver-85",
        "state": "open",
        "started_at": 1385085727,
        "ended_at": null,
        "policy_name": "Webserver Health",
        "condition_name": "CPU usage",
        "url": "https://app.google.stackdriver.com/incidents/f333dc64z",
        "summary": "CPU for webserver-85 is above the threshold of 1% with a value of 28.5%"
    },
    "version": 1.1
}
You can create a single webhook that handles all the alerts, or you can create a webhook on a per-policy basis to handle things separately.
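To make that concrete, here is a rough sketch of an HTTP-triggered Cloud Function in Python that turns the payload above into a custom SendGrid email (the function name, email addresses, and the SENDGRID_API_KEY environment variable are all placeholders, not part of the original answer):

import os
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail

def handle_alert(request):
    # "request" is the Flask request object passed to an HTTP Cloud Function.
    payload = request.get_json(silent=True) or {}
    incident = payload.get("incident", {})

    # Build whatever message body you want from the incident fields.
    message = Mail(
        from_email="alerts@example.com",
        to_emails="oncall@example.com",
        subject=f"[{incident.get('state', 'unknown')}] {incident.get('policy_name', 'Alert')}",
        html_content=(
            f"<p>{incident.get('summary', '')}</p>"
            f"<p><a href=\"{incident.get('url', '')}\">Open the incident</a></p>"
        ),
    )
    SendGridAPIClient(os.environ["SENDGRID_API_KEY"]).send(message)
    return "ok", 200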