RTSP streaming service using CloudFront and S3 - amazon-web-services

I would like to create a distribution network for the MP3 files in my Amazon S3 bucket. I managed to achieve this with Amazon CloudFront using the RTMP protocol. But as Android has no native support for RTMP, I am exploring the idea of doing it with the RTSP protocol. Can someone help me figure out how to achieve this?

RTSP is stateful, as it maintains a session through RTCP; it requires multiple ports, and you will run into issues with firewall traversal. Besides, if you want to take advantage of S3 you should instead use an HTTP streaming protocol, unless you want to serve the mp3 files directly via progressive download.
There are two main alternatives: HLS and DASH, with HLS being the most widely used format at the moment.
Android 4.x+ has native support for HLS; it works on iOS out of the box, since HLS is made by Apple, and on desktops it works natively in Safari 6+ and in all other browsers with a Flash fallback. There are many web players available, the most noteworthy being JWPlayer (paid) and Clappr (open source).
Amazon Elastic Transcoder supports HLS transcoding, and you can also use an open-source tool like ffmpeg.
https://developer.apple.com/streaming/
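If you go the ffmpeg route, a minimal sketch of the packaging step could look like this (Node.js here, since it fits the rest of the stack; input.mp3 and the audio/ output directory are placeholders). The resulting .m3u8 playlist and segments can then be uploaded to S3 and served through CloudFront over plain HTTPS:

```typescript
// Sketch: repackage an MP3 as HLS (AAC segments + .m3u8 playlist).
import { spawnSync } from "node:child_process";
import { mkdirSync } from "node:fs";

mkdirSync("audio", { recursive: true });
const result = spawnSync("ffmpeg", [
  "-i", "input.mp3",             // source file (assumed to exist locally)
  "-c:a", "aac", "-b:a", "128k", // HLS audio is typically AAC
  "-hls_time", "10",             // ~10-second segments (Apple's recommendation)
  "-hls_list_size", "0",         // keep every segment in the playlist (VOD style)
  "-f", "hls",
  "audio/playlist.m3u8",         // playlist + segments land in ./audio/
], { stdio: "inherit" });

if (result.status !== 0) throw new Error(`ffmpeg exited with ${result.status}`);
```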

Related

Performance Testing of an Amazon AppStream 2.0 Desktop Application

I have a desktop application which is managed in AWS AppStream 2.0, and I want to conduct a performance test for it.
I tried multiple ways to record the application using JMeter/LoadRunner (with different protocols), but the tools are not able to capture any server/network calls for the application.
Is there any way we can record these kinds of applications using LoadRunner or JMeter?
As per Amazon AppStream 2.0 FAQs:
Streaming
Q: What streaming protocol does Amazon AppStream 2.0 use?
Amazon AppStream 2.0 uses NICE DCV to stream your applications to your users. NICE DCV is a proprietary protocol used to stream high-quality, application video over varying network conditions. It streams video and audio encoded using standard H.264 over HTTPS. The protocol also captures user input and sends it over HTTPS back to the applications being streamed from the cloud. Network conditions are constantly measured during this process and information is sent back to the encoder on the server. The server dynamically responds by altering the video and audio encoding in real time to produce a high-quality stream for a wide variety of applications and network conditions.
So I doubt this is something you can really record and replay; with JMeter you can record only HTTP and HTTPS (see How to Run Performance Tests of Desktop Applications Using JMeter for details).
With regards to LoadRunner, I don't see any mention of the NICE DCV protocol in the LoadRunner Professional and LoadRunner Enterprise 2021 License Bundles.
The only option I can think of is downloading the client from https://www.nice-dcv.com/; the bundle contains a number of .dll files, and you can invoke the exported functions from the .dlls via JNA.
Starting at the top of the stack (for LoadRunner):
Citrix
Terminal Server
GUI Virtual User
Template, Visual Studio, using NICE API application source (if available in C, C++, C#, or VB)
Template Java, using client NICE application source in Java (if available)
Bigger questions: as you are using an Amazon service, what is your SLA for response time, bit rate, and mean QoS for video under load? If you have no contractual SLA, how (and through whom) will you get the issue fixed at Amazon?

IoT - live video streaming from devices

I have a requirement which requires live streaming solution. Here is the requirement.
There will be 5000 IoT devices. Each device is capable of streaming live video. There will be about 1000 users. Each user can own one or multiple devices. Whenever a user wants to view the live stream of a device they own, they should be able to do so. So if user1 owns device1, only user1 should be able to view the live stream from this device and no one else. The user credentials and device mappings are stored in a database. The devices connect to the server using the MQTT protocol, and the users connect to the server using an HTTPS REST API.
How do I go about implementing the server for this? What protocol should I use?
I have been searching for a solution on the internet. I came across AWS Elemental MediaLive, but it seemed limited in that I could only have 100 inputs per channel and 5 channels. Also, the documentation states that the streaming inputs must already be streaming when the channel is started, whereas my requirement is more that the streaming source initiates streaming whenever required.
Does anyone have any idea how to use AWS MediaLive for this task, or whether I should use MediaLive at all?
Peer-to-peer streaming of video from the device to the user's app is also a possibility. Assuming the embedded device runs Linux, is there a viable peer-to-peer solution for this problem where the device streams the video directly to multiple users on mobile apps? I have not been able to find any such solutions on the internet.
You can use DXS (Data Stream Exchange system); you can also look at this tech talk, which explains how to do it:
https://www.youtube.com/watch?v=DoDzfRU4rEU&list=PLZWI9MjJG-V_Y52VWLPZE1KtUTykyGTpJ&index=2&t=0s
For anyone doing something similar in the future: I did some more research, and it seems like Amazon Kinesis Video Streams does what is required. I have not implemented anything yet, but hopefully it will work well for the requirements.
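If Kinesis Video Streams does fit, the playback side could look roughly like this sketch. The stream-per-device naming scheme and region are assumptions, and the check that the requesting user actually owns the device is presumed to happen in your REST layer before this call:

```typescript
// Sketch: mint a short-lived HLS URL for one device's live stream.
import { KinesisVideoClient, GetDataEndpointCommand } from "@aws-sdk/client-kinesis-video";
import {
  KinesisVideoArchivedMediaClient,
  GetHLSStreamingSessionURLCommand,
} from "@aws-sdk/client-kinesis-video-archived-media";

async function livePlaybackUrl(deviceId: string): Promise<string> {
  const StreamName = `device-${deviceId}`; // hypothetical naming scheme
  const kv = new KinesisVideoClient({ region: "us-east-1" });

  // The HLS API is served from a stream-specific data endpoint.
  const { DataEndpoint } = await kv.send(
    new GetDataEndpointCommand({ StreamName, APIName: "GET_HLS_STREAMING_SESSION_URL" }),
  );

  const media = new KinesisVideoArchivedMediaClient({
    region: "us-east-1",
    endpoint: DataEndpoint,
  });
  const { HLSStreamingSessionURL } = await media.send(
    new GetHLSStreamingSessionURLCommand({
      StreamName,
      PlaybackMode: "LIVE", // follow the live edge rather than on-demand
      Expires: 300,         // URL valid for 5 minutes
    }),
  );
  return HLSStreamingSessionURL!;
}
```

Because the REST layer only hands this URL to the owning user, the per-user access control stays in your database while KVS handles ingestion and delivery.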

Live streaming from webcam in a browser

I am working on a live-streaming prototype. I have been reading a lot about how live-streaming works, and about many different approaches, but I still can't find a live-streaming stack that suits my needs...
These are the requirements for my prototype:
1) The video/audio recording must come from a web browser using the webcam. The idea is that the client preferably shouldn't need to install plugins or do anything complicated (maybe installing the Flash Player plugin is acceptable, but only for recording the video; the viewers should be able to view the stream without plugins).
2) It can't be peer-to-peer, since I also need to store the entire video on my server (or on Amazon S3, for example) for later viewing.
3) The viewers should also be able to watch the stream from their web browsers (say Chrome and Firefox, for example) without installing anything. We want to use the HTML5 video tag if possible.
4) The prototype should preferably be built without spending money. I have seen that AWS CloudFront and Wowza offer free trials, so we are thinking about using these two services.
5) The prototype only needs to sustain one live stream at a time with two viewers, just that, so there are no demanding requirements in this regard.
Any suggestions?
I am especially stuck/confused with the uploading/encoding part of the architecture (I am new to streaming, and all the formats/codecs/protocols/technologies make it really hard to digest).
As of right now, I have come across WebRTC, which apparently allows me to do what I want: record and encode video from the browser using the webcam. But this API only works on HTTPS sites. Are there any alternatives that work on HTTP sites?
The other part I am not completely sure about is the need for an encoding server, for example Wowza Streaming Engine. Why do I need it? Isn't it enough to use, for example, WebRTC to encode the video and then just send it to the distribution service (AWS CloudFront, for example)? I do understand that an encoding server would let me support many different devices, since it creates lots of different encodings and serves many different HTTP protocols, but do I need it for this prototype? I just want a single-format (MP4, for example) live stream that can be viewed in two web browsers, that's all; I don't need a variety of formats, nor support for different bandwidths or devices.
Based on your requirements, WebRTC is a good way to go.
API only works with HTTPS sites. Are there any alternatives that work with HTTP sites?
No. Currently Firefox is the only browser that allows WebRTC on plain HTTP, and even there you will ultimately need HTTPS.
For this prototype you should go with Wowza WebRTC.
When going through Wowza, all the streams are delivered from Wowza only, so it becomes routed WebRTC.
Install Wowza - https://www.wowza.com/docs/how-to-install-and-configure-wowza-streaming-engine
Enable WebRTC - https://www.wowza.com/docs/how-to-use-webrtc-with-wowza-streaming-engine
Download and configure StreamLock, or a self-signed JKS file - https://www.wowza.com/docs/how-to-request-an-ssl-certificate-from-a-certificate-authority
Download the sample WebRTC pages - https://www.wowza.com/_private/webrtc/
Publish the stream using the publish HTML page and play it through the play HTML page (supported in the Chrome, Firefox & Opera browsers)
For MP4 files in WebRTC: you need to enable the transcoder with H.264 & AAC. You also need to enable the option to record all incoming streams in the properties of the application you create for WebRTC (not the DVR). Using the File Writer module, save all the recorded files to a custom location. Then, using a custom script (Bash, Python), move all the transcoded files to the S3 bucket and deliver them through CloudFront; a sketch of such a mover script follows below.
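A rough sketch of that mover script in Node.js, where the recordings directory and the bucket name are placeholders (CloudFront would then sit in front of the bucket):

```typescript
// Sketch: upload Wowza's recorded/transcoded MP4s to S3 for delivery
// through CloudFront. Paths and bucket name are placeholders.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { readdir, readFile } from "node:fs/promises";
import { join } from "node:path";

const s3 = new S3Client({ region: "us-east-1" });
const recordingsDir = "/usr/local/WowzaStreamingEngine/content/records"; // assumed location

for (const name of await readdir(recordingsDir)) {
  if (!name.endsWith(".mp4")) continue;
  await s3.send(new PutObjectCommand({
    Bucket: "my-vod-bucket",            // placeholder bucket behind CloudFront
    Key: `recordings/${name}`,
    Body: await readFile(join(recordingsDir, name)),
    ContentType: "video/mp4",
  }));
  console.log(`uploaded ${name}`);
}
```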

How to work with Wowza Media Engine on AWS

We have an iOS mobile app. We need to implement video streaming with Wowza Media Engine. How do we work with Wowza Media Engine on AWS?
Since it sounds like you are trying to stream VOD files from a web server or an S3 bucket, it's best to use the Wowza Streaming Engine MediaCache functionality. This is a more optimal way of streaming content that is not located locally. On the initial player request, it grabs the specified number of blocks from the remote location and caches the segments locally, which it then serves to all subsequent player requests.
To use MediaCache, you need to first create the MediaCache store (where the cached content is stored) and the MediaCache sources (where your Wowza server will obtain the remote content). MediaCache sources can be a cloud storage provider (currently AWS S3, Google Cloud, or Microsoft Azure), a file server, or a web server. Each of these sources is identified by a prefix (for example, amazons3). You will then need to create a VOD Edge type of application which can access these MediaCache sources.
If your application name is vodedge, and you are streaming sample.mp4 from your amazons3 source, your example playback URL would then be:
http://localhost:1935/vodedge/_definst_/mp4:amazons3/sample.mp4/playlist.m3u8
Note that you need to include the application instance (default is _definst_).
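For illustration, a tiny helper that assembles HLS playback URLs following that pattern (host, port 1935, and all names taken from the example above are placeholders):

```typescript
// Sketch: build a MediaCache playback URL from its parts.
function hlsPlaybackUrl(host: string, app: string, prefix: string, file: string): string {
  // <host>:1935/<app>/_definst_/mp4:<prefix>/<file>/playlist.m3u8
  return `http://${host}:1935/${app}/_definst_/mp4:${prefix}/${file}/playlist.m3u8`;
}

console.log(hlsPlaybackUrl("localhost", "vodedge", "amazons3", "sample.mp4"));
// -> http://localhost:1935/vodedge/_definst_/mp4:amazons3/sample.mp4/playlist.m3u8
```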
The playback formats you choose really depend on your target audience and players. Mobile devices don't support RTMP unless you use an app (like the VLC mobile app). But if you know that your target audience will only be using desktops, you control those machines (such as in an internal corporate network) and can install the required plugins, and latency is a paramount requirement, then RTMP might be a better choice for you, as RTMP is a streaming protocol and is inherently less latent.
If you do need to stream to mobile devices and latency is important, you can opt to tweak the HTTP streaming packetization on your Wowza server so that your target chunk durations are shorter. You can do this by selecting your Wowza live application and opening the Properties tab (visible only if you have enabled Advanced Settings on your Manager UI account). Do note that the Apple spec recommends a 10-second segment length (which is the Wowza default), and you may run into bandwidth issues, as requests for the chunks will be more frequent.
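One quick way to verify the tweak took effect is to fetch the media playlist (the chunklist referenced by playlist.m3u8) and read its #EXTINF segment durations. A sketch with Node 18+ (global fetch); the URL is a placeholder for your live application's chunklist:

```typescript
// Sketch: inspect #EXTINF durations to confirm the chunk-duration change.
const chunklistUrl = "http://localhost:1935/live/myStream/chunklist_w1.m3u8"; // placeholder

const body = await (await fetch(chunklistUrl)).text();
const durations = [...body.matchAll(/#EXTINF:([\d.]+)/g)].map((m) => Number(m[1]));

if (durations.length === 0) {
  console.log("no segments listed yet");
} else {
  const avg = durations.reduce((a, b) => a + b, 0) / durations.length;
  console.log(`${durations.length} segments, average ${avg.toFixed(2)}s each`);
}
```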

WebRTC and gstreamer on linux device

I have a small computer (something like an Arduino or a Raspberry Pi) with Linux, a camera, and gstreamer installed on it.
I need to stream H.264 video from this device to a browser using WebRTC. I also use NodeJS as a signaling server.
In simple words, I need to turn my device into a WebRTC client. What is the best way to do this? Can I use the WebRTC Native API for this goal? How can I install it on my small device? Or maybe I just need to play with my gstreamer setup and install some WebRTC plugins for it?
Since you will have to use a signalling server anyway, I would say you should use Janus-Gateway. You mention CentOS for your signalling server; I am not 100% sure it will run on CentOS specifically, but I have run it successfully on a Debian Jessie build with just a few dependency installations.
Janus handles the entire call setup with the gateway (signalling and everything). So some port forwarding will probably have to be done so that the SDP exchange can occur (which you would have to worry about with any signalling server).
Install the gateway; there are a few dependencies, but all were simple installations.
Take a look at the janus_streaming plugin. It has a gstreamer example that will stream from a gstreamer pipeline. Also see the streamingtest demo page to see how the JavaScript API works for that plugin.
The plugin listens on the ports given in the configuration file and will accept traffic from any IP address. So I expect you can run a gstreamer pipeline on a different machine on the same network and send it to the plugin; see the sketch below.
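A pipeline along these lines could feed the plugin (spawned from Node.js here for consistency with the rest of the stack). This is a sketch; the host/port must match whatever mountpoint your janus.plugin.streaming configuration defines, and 8004 is only the value used in the stock demo config:

```typescript
// Sketch: RTP-stream H.264 from the local camera to the janus_streaming plugin.
import { spawn } from "node:child_process";

const gst = spawn("gst-launch-1.0", [
  "v4l2src",                                       // capture from the default camera
  "!", "video/x-raw,width=640,height=480,framerate=15/1",
  "!", "videoconvert",
  "!", "x264enc", "tune=zerolatency", "bitrate=512",
  "!", "rtph264pay", "config-interval=1", "pt=96", // RTP/H.264 packetization
  "!", "udpsink", "host=127.0.0.1", "port=8004",   // must match the mountpoint config
], { stdio: "inherit" });

gst.on("exit", (code) => console.log(`gstreamer pipeline exited (${code})`));
```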
NOTE: You will have to modify the SDP that the JavaScript sends to the gateway so that it includes H264 (and probably get rid of all other codecs as well, just to force negotiation). You can do this by accessing the SDP through the jsep object passed to the success callback of the createOffer function in the Janus JavaScript API (jsep.sdp).
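A naive illustration of that munging step (deliberately not a full SDP parser; you would call it on jsep.sdp in the createOffer success callback before handing the offer to the gateway):

```typescript
// Sketch: keep only H264 payload types in the video m-line and drop the
// attribute lines that describe the removed video codecs.
function preferH264(sdp: string): string {
  const lines = sdp.split("\r\n");

  // Payload types advertised as H264, e.g. "a=rtpmap:96 H264/90000".
  const h264 = new Set(
    lines.flatMap((l) => {
      const m = /^a=rtpmap:(\d+) H264\//.exec(l);
      return m ? [m[1]] : [];
    }),
  );

  // Video payload types that are NOT H264 get dropped entirely.
  const mVideo = lines.find((l) => l.startsWith("m=video "));
  const dropped = new Set(
    (mVideo ? mVideo.split(" ").slice(3) : []).filter((pt) => !h264.has(pt)),
  );

  return lines
    .map((l) =>
      l.startsWith("m=video ")
        ? l.split(" ").filter((tok, i) => i < 3 || h264.has(tok)).join(" ")
        : l,
    )
    // Remove rtpmap/fmtp/rtcp-fb lines for the dropped payload types;
    // audio lines are untouched because their PTs are never in `dropped`.
    .filter((l) => {
      const m = /^a=(?:rtpmap|fmtp|rtcp-fb):(\d+)/.exec(l);
      return !m || !dropped.has(m[1]);
    })
    .join("\r\n");
}

// Usage inside the createOffer success callback (hypothetical):
// jsep.sdp = preferH264(jsep.sdp); // then send jsep to the gateway as usual
```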
Another possibility is to use the Kurento Media Server (KMS), which has been written on top of GStreamer. I see two options:
You install KMS on an Ubuntu 14.04 box and bridge it with your device, so that the device generates the video stream and sends it to the KMS box. From there, you can transcode it to VP9 and distribute it as a WebRTC stream quite easily using the Kurento client APIs (which may be used from Node.js). The application doing the transcoding will require an RtpEndpoint (receiving video from the device as RTP/H.264) connected to a WebRtcEndpoint (capable of sending the video stream through WebRTC). This option is quite simple to implement because it's the standard way of using KMS. However, you will need to generate the RTP/H.264 stream on the device, plus an appropriate SDP for it (this can be done using standard GStreamer elements).
You try to install KMS on your box directly. This might be more complex because it requires compiling KMS for the specific device, which may require some time investment. In addition, performing the transcoding on the device might be too expensive and you could starve its CPU.
Disclaimer: I'm a member of the Kurento development team.
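For the first option, a minimal sketch with the kurento-client Node module (there are no official TypeScript typings, hence the `any`s; the KMS address and how the two SDP offers arrive over your signalling channel are assumptions):

```typescript
// Sketch: bridge RTP in from the device to WebRTC out to the browser via KMS.
const kurento = require("kurento-client");

async function bridgeDeviceToBrowser(deviceRtpOffer: string, browserOffer: string) {
  const client: any = await kurento("ws://kms-host:8888/kurento"); // assumed KMS box
  const pipeline: any = await client.create("MediaPipeline");

  // RTP/H.264 in from the device, WebRTC out to the browser;
  // KMS transcodes between the two endpoints.
  const rtp: any = await pipeline.create("RtpEndpoint");
  const webrtc: any = await pipeline.create("WebRtcEndpoint");
  await rtp.connect(webrtc);

  const rtpAnswer: string = await rtp.processOffer(deviceRtpOffer);
  const webrtcAnswer: string = await webrtc.processOffer(browserOffer);
  await webrtc.gatherCandidates(); // start ICE gathering for the browser leg

  return { rtpAnswer, webrtcAnswer }; // relay these back over your signalling channel
}
```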
You mentioned that you use a NodeJS signaling server. Ericsson recently released an open-source WebRTC GStreamer element: http://www.openwebrtc.io/, and along with their release they also published a WebRTC demo using node.js: http://demo.openwebrtc.io:38080/; the code is here: https://github.com/EricssonResearch/openwebrtc-examples/tree/master/server.
For WebRTC on the Raspberry Pi 2 you may want to consider UV4L. It allows you to stream live audio & video from the RPi to any browser on a PC (HTML5).