How to get flavor-specific information for HLS (Cupertino) live streaming using the Wowza server module API

I am working on a Wowza live-streaming analytics module for HLS and RTMP streams.
I want to fetch the following information from HLS and RTMP live streams:
viewer's information
viewer-end user-agent
total data streamed to the viewer
total play duration of the media
flavors played during the stream
I have fetched this data for RTMP live streaming using the following listener and classes:
Listener: IMediaStream.addClientListener()
Notifier: IMediaStreamActionNotify3
Some of Wowza's basic server APIs
I am not aware of a listener that provides stream-specific information in the case of HLS (Cupertino) live streaming.
I have achieved the following for HLS (Cupertino) live streaming:
total data transferred to viewer
user-agent
viewer's information
And the information not yet obtained for HLS live streaming:
total play duration of media
flavors played during the stream
I have tried the listener HTTPStreamerApplicationContextCupertinoStreamer.addVODActionListener(), but it works with VOD applications only.
Can you please help me achieve this?
Thanks.

Related

How to get audio data in a specific format in real time from a Twilio MediaStreamTrack?

I am using Twilio Programmable video, and trying to pipe remote participant's audio in real time to Google Cloud Media Translation client.
There is sample code here on how to use the Google Cloud Media Translation client via a microphone.
What I am trying to accomplish is that, instead of using a microphone and node-record-lpcm16, I want to pipe what I am getting from Twilio's AudioTrack to the Google Cloud Media Translation client. According to this doc:
Tracks represent the individual audio, data, and video media streams that are shared within a Room.
Also, according to this doc, an AudioTrack contains an audio MediaStreamTrack. I am guessing this can be used to extract the audio and pipe it somewhere else.
What's the best way of tackling this problem?
Twilio developer evangelist here.
With the MediaStreamTrack you can compose it back into a MediaStream object and then pass it to a MediaRecorder. When you start the MediaRecorder, it will emit dataavailable events, each carrying a chunk of audio in the webm format. You can then pipe those chunks elsewhere to do the translation. I wrote a blog post on recording with the MediaRecorder, which should give you a better idea of how the MediaRecorder works, but you will have to complete the work of streaming the audio chunks to the server to be translated.
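A minimal sketch of that approach, assuming a remote AudioTrack from the Twilio Video SDK; the function name and the onChunk callback are placeholders, not part of either SDK:

```javascript
// Wrap a Twilio AudioTrack's underlying MediaStreamTrack in a MediaStream
// and record it with MediaRecorder (both are browser-only APIs).
function recordAudioTrack(audioTrack, onChunk) {
  // audioTrack.mediaStreamTrack is the raw MediaStreamTrack exposed
  // by the Twilio Video SDK.
  const stream = new MediaStream([audioTrack.mediaStreamTrack]);
  const recorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });

  // Each dataavailable event carries a webm-encoded Blob of audio;
  // forward it (e.g. over a WebSocket) to the server doing the translation.
  recorder.addEventListener('dataavailable', (event) => {
    if (event.data.size > 0) onChunk(event.data);
  });

  // Emit a chunk every 500 ms instead of one blob at stop time.
  recorder.start(500);
  return recorder; // call recorder.stop() when the participant leaves
}
```

Note that the server still has to transcode the webm chunks to an encoding the Media Translation API accepts (such as linear PCM) before forwarding them.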

Stream Amazon Connect audio in real time with KVS

I have a contact flow in AWS Connect with customer audio streaming enabled. I get the customer audio stream in KVS and, using the examples provided by AWS, can read bytes from the stream in Java and convert them to an audio file when the call is completed.
But I want to stream the audio to a web page for real-time monitoring, exactly like the real-time monitoring AWS provides in the built-in CCP.
I get the stream ARN and other contact data. How can I use that stream for real-time monitoring/streaming?
Any heads up will be appreciated.
You're going to want to use a WebRTC client in the browser/page you want to use for monitoring and controlling the stream. AWS provides a WebRTC SDK for Kinesis Video Streams that can be used for this. The SDK documentation can be found here; it includes a link to samples and config details on GitHub.
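The browser side of that SDK can be sketched roughly like this; it assumes a SignalingClient from the amazon-kinesis-video-streams-webrtc SDK has already been constructed for the channel with the VIEWER role, and the element and parameter names are illustrative:

```javascript
// Attach a viewer RTCPeerConnection to an existing KVS SignalingClient
// and play the incoming remote audio in a media element on the page.
function startViewer(signalingClient, audioElement, iceServers) {
  const peer = new RTCPeerConnection({ iceServers });

  // Play whatever remote track the channel delivers.
  peer.addEventListener('track', (event) => {
    audioElement.srcObject = event.streams[0];
  });

  // Trickle local ICE candidates to the signaling channel.
  peer.addEventListener('icecandidate', ({ candidate }) => {
    if (candidate) signalingClient.sendIceCandidate(candidate);
  });

  signalingClient.on('open', async () => {
    // As the viewer, create the offer and wait for the master's answer.
    const offer = await peer.createOffer({ offerToReceiveAudio: true });
    await peer.setLocalDescription(offer);
    signalingClient.sendSdpOffer(peer.localDescription);
  });

  signalingClient.on('sdpAnswer', (answer) => peer.setRemoteDescription(answer));
  signalingClient.on('iceCandidate', (candidate) => peer.addIceCandidate(candidate));

  signalingClient.open();
  return peer;
}
```

Constructing the SignalingClient itself (channel ARN, signaling endpoints, credentials) is covered in the SDK's GitHub samples linked from the documentation.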

Live streaming from webcam in a browser

I am working on a live-streaming prototype. I have been reading a lot about how live streaming works and about many different approaches, but I still can't find a live-streaming stack that suits my needs...
These are the requirements for my prototype:
1) The video/audio recording must come from a web browser using the webcam. The idea is that the client preferably shouldn't need to install plugins or do anything complicated (maybe installing the Flash Player plugin is acceptable, but only for recording the video; the viewers should be able to view the stream without plugins).
2) It can't be peer-to-peer, since I also need to store the entire video on my server (or on Amazon S3, for example) for later viewing.
3) The viewers should also be able to watch the stream from their web browsers (say Chrome and Firefox) without installing anything. We want to use the HTML5 video tag if possible.
4) The prototype should preferably be built without spending money. I have seen that AWS CloudFront and Wowza offer free trials, so we are thinking about using these two services.
5) The prototype only needs to handle 1 live stream at a time and 2 viewers, just that, so there are no restrictions in this regard.
Any suggestions?
I am especially stuck/confused with the uploading/encoding part of the architecture (I am new to streaming, and all the formats/codecs/protocols/technologies make it really hard to digest).
So far I have come across WebRTC, which apparently lets me do what I want: record and encode video from the browser using the webcam. But this API only works on HTTPS sites. Are there any alternatives that work on HTTP sites?
The other part I am not completely sure about is the need for an encoding server such as Wowza Streaming Engine. Why do I need it? Isn't it enough to use, for example, WebRTC to encode the video and then just send it to the distribution service (AWS CloudFront, for example)? I understand that the encoding server would let me support many different devices, since it creates lots of different encodings and serves many different HTTP protocols, but do I need it for this prototype? I just want a single-format (MP4, for example) live stream that can be viewed in 2 web browsers, that's all; I don't need a variety of formats or support for different bandwidths or devices.
Based on your requirements, WebRTC is a good way to go.
this API only works with HTTPS sites. Are there any alternatives that work with HTTP sites?
No. Currently Firefox is the only browser that allows WebRTC on HTTP, but eventually it will require HTTPS too.
For this prototype you can go with Wowza WebRTC.
With Wowza, all the streams are delivered from Wowza only, so it becomes routed WebRTC.
Install Wowza - https://www.wowza.com/docs/how-to-install-and-configure-wowza-streaming-engine
Enable the WebRTC - https://www.wowza.com/docs/how-to-use-webrtc-with-wowza-streaming-engine
Download and configure StreamLock, or a self-signed JKS file - https://www.wowza.com/docs/how-to-request-an-ssl-certificate-from-a-certificate-authority
Download the sample WebRTC - https://www.wowza.com/_private/webrtc/
Publish a stream using the publish HTML page and play it through the play HTML page (supported in the Chrome, Firefox & Opera browsers)
For MP4 files in WebRTC: you need to enable the transcoder with H.264 & AAC. You also need to enable the option to record all incoming streams in the properties of the application you created for WebRTC (not the DVR). Using the file writer module, save all the recorded files to a custom location. Then, using a custom script (Bash, Python), move all the transcoded files to an S3 bucket and deliver them through CloudFront.
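On the capture side (requirement 1), the browser part is just getUserMedia; a minimal sketch, independent of any particular server, with illustrative constraints:

```javascript
// Capture webcam + microphone in the browser and preview it locally.
// The resulting MediaStream is what a WebRTC publish page
// (e.g. Wowza's sample publish HTML) adds to its RTCPeerConnection.
async function captureWebcam(videoElement) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: 1280, height: 720 },
    audio: true,
  });
  videoElement.srcObject = stream; // local preview
  videoElement.muted = true;       // avoid audio feedback while previewing
  return stream;                   // hand this to the peer connection
}
```

Note that getUserMedia is only available in secure contexts (HTTPS or localhost), which is why the answer above says HTTPS is ultimately required.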

Wowza Streaming Live Server Data two questions

Does Wowza Streaming Engine (used as a live-streaming service) care about the direction of the data stream?
Is it possible for Wowza Server to operate in a server-side mode and a client-side mode?
Since I am sending the data stream via LTE, the cost of sending it is very high. So I am wondering if it is possible to send live stream data only when a request is present.
Thank you
It's not 100% clear to me what you mean by "Wowza acting as a client". Wowza can be configured as an edge instance that pulls the stream from an origin Wowza server, for when you want multiple, possibly geographically distributed stream sources.
Or, in the other case, Wowza accepts the stream from encoders, in which case the data also flows towards Wowza. This is the case when you install, for example, Flash Media Live Encoder or VLC and push your encoded stream "into" Wowza, which then distributes this stream to all the players.
But obviously Wowza won't act as a video player for you. :-)
Can you clarify your first question? Maybe I can give a better answer then.
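For reference, the origin/edge pull setup mentioned above is configured on the edge instance in its Application.xml; a minimal sketch, where the origin host and application name are placeholders and the exact elements should be checked against the Wowza docs:

```xml
<!-- Edge application: pull the live stream from the origin -->
<StreamType>liverepeater-edge</StreamType>
<Repeater>
    <!-- Placeholder origin host and application -->
    <OriginURL>rtmp://origin.example.com:1935/live</OriginURL>
</Repeater>
```

In this mode the edge pulls from the origin only while players are connected, which is close to the "stream only when a request is present" behavior the question asks about.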

RTMP Streaming using ffserver

I would like to be able to stream media content originating from, e.g., a file to a Flash player using RTMP.
I have considered librtmp, though it seems ffmpeg supports RTMP more as a client than as a server; that is, it implements the push/pull models without a server model.
With ffserver in mind: does it support RTMP in the above-mentioned manner? Is it possible to expose H.264/AAC content via RTMP using ffserver?
Any help will be appreciated.
Nadav at Sophin
Have you looked into Red5? http://www.red5.org/
I have used CRTMP-Server and have to say it's amazing, and it's C/C++.
http://www.rtmpd.com/
It worked great for me. I used it to send an MPEG-TS stream to a Flash client, for a live desktop-capture application.
Basically I had a DirectShow filter that captured the desktop area, fed it to an H.264 encoder filter, wrapped it in a TS container, and fed it via TCP to rtmpd. It worked pretty well.