I have a contact flow in Amazon Connect with customer audio streaming enabled. I get the customer audio stream in KVS, and I can read bytes from the stream in Java and convert them to an audio file once the call is completed, using the examples provided by AWS.
But I want to stream the audio in a web page for real-time monitoring, exactly like the real-time monitoring AWS provides in the built-in CCP.
I get the stream ARN and other contact data. How can I use that stream for real-time monitoring/streaming?
Any heads up will be appreciated.
You're going to want to use a WebRTC client in the browser/page from which you want to monitor and control the stream. AWS provides a WebRTC SDK for Kinesis Video Streams that can be used for this. The SDK documentation can be found here, and it includes a link to samples and configuration details on GitHub.
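A rough viewer-side sketch with the JavaScript/TypeScript KVS WebRTC SDK, assuming you have a signaling channel to connect to; the channel ARN, endpoint, credentials and ICE servers below are placeholders you would resolve yourself (e.g. via GetSignalingChannelEndpoint and GetIceServerConfig, ideally from a backend rather than in the browser):

```typescript
// Rough viewer-side sketch using the amazon-kinesis-video-streams-webrtc SDK.
// All ARNs, endpoints and credentials are placeholders.
import { SignalingClient, Role } from 'amazon-kinesis-video-streams-webrtc';

async function startAudioViewer(audioEl: HTMLAudioElement): Promise<void> {
  const signalingClient = new SignalingClient({
    channelARN: 'arn:aws:kinesisvideo:us-east-1:123456789012:channel/...', // placeholder
    channelEndpoint: 'wss://...', // WSS endpoint from GetSignalingChannelEndpoint
    clientId: `viewer-${Date.now()}`,
    role: Role.VIEWER,
    region: 'us-east-1',
    credentials: { accessKeyId: '...', secretAccessKey: '...' }, // placeholder
  });

  const peerConnection = new RTCPeerConnection({ iceServers: [] }); // add STUN/TURN from GetIceServerConfig
  peerConnection.addTransceiver('audio', { direction: 'recvonly' });

  // Play whatever remote audio the master side sends.
  peerConnection.ontrack = (event) => {
    audioEl.srcObject = event.streams[0];
  };
  peerConnection.onicecandidate = ({ candidate }) => {
    if (candidate) signalingClient.sendIceCandidate(candidate);
  };

  signalingClient.on('open', async () => {
    const offer = await peerConnection.createOffer();
    await peerConnection.setLocalDescription(offer);
    signalingClient.sendSdpOffer(peerConnection.localDescription!);
  });
  signalingClient.on('sdpAnswer', async (answer) => {
    await peerConnection.setRemoteDescription(answer);
  });
  signalingClient.on('iceCandidate', (candidate) => {
    peerConnection.addIceCandidate(candidate);
  });

  signalingClient.open();
}
```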
Related
I am using Twilio Programmable Video, and I am trying to pipe a remote participant's audio in real time to the Google Cloud Media Translation client.
There is sample code showing how to use the Google Cloud Media Translation client with a microphone here.
What I am trying to accomplish is that, instead of using a microphone and node-record-lpcm16, I want to pipe what I am getting from Twilio's AudioTrack to the Google Cloud Media Translation client. According to this doc,
Tracks represent the individual audio, data, and video media streams that are shared within a Room.
Also, according to this doc, an AudioTrack contains an audio MediaStreamTrack. I am guessing this can be used to extract the audio and pipe it somewhere else.
What's the best way of tackling this problem?
Twilio developer evangelist here.
With the MediaStreamTrack, you can compose it back into a MediaStream object and then pass it to a MediaRecorder. When you start the MediaRecorder it will fire dataavailable events, each carrying a chunk of audio in WebM format. You can then pipe those chunks elsewhere to do the translation. I wrote a blog post on recording using the MediaRecorder, which should give you a better idea of how the MediaRecorder works, but you will have to complete the work to stream the audio chunks to the server to be translated.
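As a rough, untested sketch of that idea (the WebSocket endpoint and the one-second timeslice are just illustrative assumptions, and your server would still need to repackage the WebM/Opus chunks into an encoding the Media Translation API accepts):

```typescript
// Wrap the Twilio AudioTrack's underlying MediaStreamTrack in a MediaStream,
// record it with MediaRecorder, and forward each WebM chunk over a WebSocket.
function streamRemoteAudio(mediaStreamTrack: MediaStreamTrack): MediaRecorder {
  const stream = new MediaStream([mediaStreamTrack]);
  const recorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });
  const socket = new WebSocket('wss://your-server.example.com/translate'); // hypothetical endpoint

  recorder.ondataavailable = (event: BlobEvent) => {
    // event.data is a Blob containing a chunk of WebM audio.
    if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
      socket.send(event.data);
    }
  };

  recorder.start(1000); // emit a dataavailable event roughly every second
  return recorder;
}

// Usage with a Twilio RemoteAudioTrack (e.g. inside a `trackSubscribed` handler):
// streamRemoteAudio(remoteAudioTrack.mediaStreamTrack);
```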
I am trying to use Amazon's Kinesis Video Streams signaling service to create a multi-user video chat system. It appears as if the only supported topology is one-to-many. Does KVS support many-to-many?
i.e. one WebRTC session can feed multiple peers, but I can't mesh them so that everyone can communicate with everyone.
We do not currently support the mesh scenario with the signaling service out of the box. This is something we are looking at supporting out of the box, but for now it requires some solution engineering, such as introducing a higher-order coordinator.
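For illustration only (none of this is provided by KVS): a mesh comes down to each participant holding one RTCPeerConnection per remote peer and exchanging SDP/ICE through whatever coordinator you build, e.g. one signaling channel per peer pair or your own WebSocket fan-out. The sendSignal/onSignal plumbing below is a stand-in for that coordinator:

```typescript
// Illustrative mesh sketch: one RTCPeerConnection per remote peer.
type Signal =
  | { kind: 'offer' | 'answer'; sdp: RTCSessionDescriptionInit }
  | { kind: 'ice'; candidate: RTCIceCandidateInit };

class MeshCoordinator {
  private peers = new Map<string, RTCPeerConnection>();

  constructor(
    private localStream: MediaStream,
    private sendSignal: (peerId: string, signal: Signal) => void, // your transport
  ) {}

  private getOrCreatePeer(peerId: string): RTCPeerConnection {
    const existing = this.peers.get(peerId);
    if (existing) return existing;
    const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });
    this.localStream.getTracks().forEach((track) => pc.addTrack(track, this.localStream));
    pc.onicecandidate = ({ candidate }) => {
      if (candidate) this.sendSignal(peerId, { kind: 'ice', candidate: candidate.toJSON() });
    };
    this.peers.set(peerId, pc);
    return pc;
  }

  // Call when a new participant joins and this side should initiate.
  async callPeer(peerId: string): Promise<void> {
    const pc = this.getOrCreatePeer(peerId);
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    this.sendSignal(peerId, { kind: 'offer', sdp: offer });
  }

  // Call for every signaling message received from your coordinator.
  async onSignal(peerId: string, signal: Signal): Promise<void> {
    const pc = this.getOrCreatePeer(peerId);
    if (signal.kind === 'offer') {
      await pc.setRemoteDescription(signal.sdp);
      const answer = await pc.createAnswer();
      await pc.setLocalDescription(answer);
      this.sendSignal(peerId, { kind: 'answer', sdp: answer });
    } else if (signal.kind === 'answer') {
      await pc.setRemoteDescription(signal.sdp);
    } else {
      await pc.addIceCandidate(signal.candidate);
    }
  }
}
```

Keep in mind a full mesh grows as O(n²) connections, so it is usually only practical for small rooms.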
I am using AWS Kinesis Video Streams to stream live video and perform facial recognition on the image feed. I need assistance in understanding some basic concepts regarding it:
1) If I want to use WebRTC for live streaming, how do I do that?
2) In Kinesis Video Streams there is a channel and a stream (when using WebRTC it connects to a channel, so how do I connect it to a video stream?)
You can follow the steps and use the Kinesis Video Streams Producer SDK GStreamer sample to do live streaming from your laptop camera: https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp#build-and-install-kinesis-video-streams-producer-sdk-and-sample-applications
(1) Currently KVS WebRTC is only for real-time peer-to-peer streaming; it is not applicable to facial recognition at the moment.
(2) If you want both real-time peer-to-peer playback and cloud storage, you will need to do the former with KVS WebRTC and the latter with the KVS producer. A reference for how to do both at the same time: https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c/issues/161#issuecomment-579621542
I have a requirement for a live streaming solution. Here it is.
There will be 5000 IoT devices. Each device is capable of streaming live video. There will be about 1000 users. Each user can own one or multiple devices. Whenever a user wants to view the live stream of a device they own, they should be able to do so. So if user1 owns device1, only user1 should be able to view the live stream from this device and no one else. The user credentials and device mappings are stored in a database. The devices are connected to the server using the MQTT protocol, and the users connect to the server using an HTTPS REST API.
How do I go about implementing the server for this? What protocol should I use?
I have been searching for a solution on the internet. I came across AWS Elemental MediaLive, but it seemed limited in that I could only have 100 inputs per channel and 5 channels. Also, the documentation states that the streaming inputs must already be streaming when the channel is started, whereas my requirement is more like the streaming source initiating streaming whenever required.
Does anyone have any idea how to use AWS MediaLive for this task, or whether I should use MediaLive at all?
Peer-to-peer streaming of video from the device to the user's app is also a possibility. Assuming the embedded device runs Linux, is there a viable peer-to-peer solution for this problem where the device streams the video directly to multiple users on mobile apps? I have not been able to find any such solutions on the internet.
You can use DXS (Data Stream Exchange system), and you can also look at this tech talk, which explains how to do it:
https://www.youtube.com/watch?v=DoDzfRU4rEU&list=PLZWI9MjJG-V_Y52VWLPZE1KtUTykyGTpJ&index=2&t=0s
For anyone doing something similar in the future: I did some more research on the internet, and it seems like Amazon Kinesis Video Streams does what is required. I have not implemented anything yet, but hopefully it will work well for the requirements.
I would like to create a distribution network for the MP3 files in my Amazon S3 bucket. I managed to achieve it using AWS CloudFront with the RTMP protocol. But since Android has no native support for RTMP, I am exploring the idea of doing it with the RTSP protocol. Can someone help me figure out how to achieve this?
RTSP is stateful, as it maintains a connection through RTCP, it requires multiple ports, and you will have issues with firewall traversal. Plus, if you want to take advantage of S3, you should instead use an HTTP streaming protocol, unless you want to serve the MP3 files directly via progressive download.
There are two alternatives: HLS and DASH, with HLS being the most widely used format at the moment.
Android 4.x+ has native support for HLS; it works out of the box on iOS, since it's made by Apple, and on desktops it works natively in Safari 6+ and in all other browsers with a Flash fallback. There are many web players available, the most noteworthy being JWPlayer (paid) and Clappr (open-source).
Amazon Elastic Transcoder supports HLS transcoding, and you can also use an open-source solution like ffmpeg.
https://developer.apple.com/streaming/
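For the playback side, a hedged sketch of a browser player (hls.js is used here as one open-source option alongside the players mentioned above; the playlist URL is a placeholder for the HLS output served through your CloudFront distribution):

```typescript
// Play an HLS playlist natively where supported, otherwise via hls.js (MSE).
import Hls from 'hls.js';

function playHls(video: HTMLVideoElement, playlistUrl: string): void {
  if (video.canPlayType('application/vnd.apple.mpegurl')) {
    // Safari / iOS / many Android browsers can play HLS natively.
    video.src = playlistUrl;
  } else if (Hls.isSupported()) {
    // Other browsers play it via Media Source Extensions.
    const hls = new Hls();
    hls.loadSource(playlistUrl);
    hls.attachMedia(video);
  } else {
    console.error('HLS playback is not supported in this browser');
  }
  video.play().catch(() => {
    // Autoplay may be blocked until a user gesture.
  });
}

// playHls(document.querySelector('video')!, 'https://dXXXXXXXX.cloudfront.net/audio/playlist.m3u8');
```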