I have a requirement for a live-streaming solution. Here are the details.
There will be 5000 IoT devices. Each device is capable of streaming live video. There will be about 1000 users. Each user can own one or more devices. Whenever a user wants to view the live stream of a device they own, they should be able to do so. So if user1 owns device1, only user1 should be able to view the live stream from that device and no one else. The user credentials and device mappings are stored in a database. The devices connect to the server using the MQTT protocol, and the users connect to the server using an HTTPS REST API.
How do I go about implementing the server for this? What protocol should I use?
I have been searching for a solution on the internet. I came across AWS Elemental MediaLive, but it seemed limited in that I could only have 100 inputs per channel and 5 channels. Also, the documentation states that the streaming inputs must already be streaming when the channel is started, whereas my requirement is that the streaming source initiates streaming only when required.
Does anyone have any idea how to use AWS MediaLive for this task, or whether I should use MediaLive at all?
Peer-to-peer streaming of video from the device to the user's app is also a possibility. Assuming the embedded device runs Linux, is there a viable peer-to-peer solution for this problem where the device streams the video directly to multiple users on mobile apps? I have not been able to find any such solution on the internet.
You can use DXS (Data Stream Exchange system); this tech talk explains how to do it:
https://www.youtube.com/watch?v=DoDzfRU4rEU&list=PLZWI9MjJG-V_Y52VWLPZE1KtUTykyGTpJ&index=2&t=0s
For anyone doing something similar in the future: I did some more research, and it seems Amazon Kinesis Video Streams does what is required. I have not implemented anything yet, but hopefully it will work well for the requirements.
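As a rough illustration of how the ownership check and playback could fit together, here is a minimal sketch assuming boto3 and Kinesis Video Streams' HLS playback; the Flask endpoint, stream naming convention, and database helper are hypothetical, not part of any official solution:

```python
# Hypothetical sketch: a user requests live playback of a device they own.
# Assumes each device produces into a KVS stream named "device-<id>" (e.g. via
# the KVS Producer SDK on the device) and that check_ownership() queries the
# existing user/device mapping database.
import boto3
from flask import Flask, abort, jsonify

app = Flask(__name__)
kvs = boto3.client("kinesisvideo")

def check_ownership(user_id, device_id):
    # Placeholder for the real lookup against the user -> device mapping table.
    raise NotImplementedError

@app.route("/devices/<device_id>/live")
def get_live_url(device_id):
    user_id = "user-from-auth-token"  # resolved by the HTTPS auth layer
    if not check_ownership(user_id, device_id):
        abort(403)

    stream_name = f"device-{device_id}"
    # KVS requires fetching a per-stream data endpoint before requesting HLS.
    endpoint = kvs.get_data_endpoint(
        StreamName=stream_name,
        APIName="GET_HLS_STREAMING_SESSION_URL",
    )["DataEndpoint"]
    archived = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
    url = archived.get_hls_streaming_session_url(
        StreamName=stream_name,
        PlaybackMode="LIVE",
    )["HLSStreamingSessionURL"]
    return jsonify({"hls_url": url})
```

The returned HLS URL is short-lived and scoped to one stream, so only the owner who passed the database check ever receives a playable link.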
I have a need to poll for a close-to-real-time reading from a serial device (an ESP32) from a web application. I am currently doing this using Particle Photons and the Particle Cloud API, and am wondering if there is a way to achieve something similar using Google Cloud IoT.
From reading the documentation, it seems a common way to do this is via Pub/Sub, and then to publish to BigQuery via Dataflow or to Firebase via Cloud Functions. However, to reduce pricing overhead, I am hoping to only trigger a data exchange when the device receives an external request.
It looks like there is a way to send commands to the IoT device; am I on the right track with this? I can't seem to find the documentation for it, but after receiving a command the device would use Pub/Sub to publish to a topic, which could trigger a Cloud Function to update Firebase?
Lastly, it also looks like there is a way to do a GET request for the device's DeviceState, but this can only be updated once per second (which might also work, though the documentation generally discourages using state for this purpose).
If there is another low-latency, low-cost way to allow a client to poll for a real-time value from the IoT device that I've missed, please let me know. Thank you!
Espressif has integrated Google's Cloud IoT Device SDK, which creates an authenticated bidirectional MQTT pipe between the device and IoT Core. As you've already discovered, you can send anything from the cloud to the device (it's called a "command", but it's just an MQTT payload, so you can put almost anything you want in it) and vice versa (it's called "telemetry", but again it's just an MQTT payload). Once incoming messages from devices reach the cloud, Pub/Sub can route them wherever you want. I don't know if I'd call it real-time, but latencies on a good WiFi network tend to be under a second.
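For reference, this is roughly what sending such a command from the cloud side might look like, assuming the google-cloud-iot Python client; the project, region, registry, and device IDs are placeholders:

```python
# Hypothetical sketch: push a "send me a reading" command down the MQTT pipe.
# The device (e.g. an ESP32 running the Cloud IoT Device SDK) receives it on
# its commands topic and can respond by publishing telemetry.
from google.cloud import iot_v1

client = iot_v1.DeviceManagerClient()
device_path = client.device_path(
    "my-project", "us-central1", "my-registry", "my-esp32"  # placeholders
)
client.send_command_to_device(
    request={
        "name": device_path,
        "binary_data": b'{"action": "report_reading"}',
    }
)
```

A Cloud Function (or any backend handling the web app's request) could issue this call on demand, so the device only exchanges data when something actually asks for it.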
I am trying to use Amazon's Kinesis Video Streams signaling service to create a multi-user video chat system. It appears as if the only supported topology is one-to-many. Does KVS support many-to-many?
i.e. one WebRTC session can feed multiple peers, but I can't mesh them so that everyone can communicate with everyone.
We do not currently support the mesh scenario with the signaling service out of the box. This is something we are looking at supporting out of the box, but for now it requires some solution engineering in the form of a higher-order coordinator.
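To illustrate what such a coordinator might do, here is a minimal sketch that provisions one signaling channel per peer pair so every participant can reach every other one; the naming scheme and boto3-based coordinator are assumptions, not an out-of-the-box KVS feature:

```python
# Hypothetical coordinator: create a SINGLE_MASTER signaling channel for each
# pair of participants, approximating a mesh on top of KVS WebRTC signaling.
from itertools import combinations
import boto3

kvs = boto3.client("kinesisvideo")

def provision_mesh(session_id, participants):
    channels = {}
    for a, b in combinations(sorted(participants), 2):
        name = f"{session_id}-{a}-{b}"
        resp = kvs.create_signaling_channel(
            ChannelName=name,
            ChannelType="SINGLE_MASTER",  # currently the only supported type
        )
        channels[(a, b)] = resp["ChannelARN"]
    # Each client then acts as master on its "own" channels and as viewer on
    # the others, so every pair has a dedicated signaling path.
    return channels
```

The coordinator also has to hand each client the list of channel ARNs it should join, which is the "solution engineering" part mentioned above.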
I am working on a live-streaming prototype. I have been reading a lot about how live streaming works and about many different approaches, but I still can't find a live-streaming stack that suits my needs...
These are the requirements for my prototype:
1) The video/audio recording must come from a web browser using the webcam. The idea is that the client preferably shouldn't need to install plugins or do anything complicated (maybe installing the Flash Player plugin is acceptable, but only for recording the video; the viewers should be able to watch the stream without plugins).
2) It can't be peer-to-peer, since I also need to store the entire video on my server (or on Amazon S3, for example) for viewing later.
3) The viewers should also be able to watch the stream from their web browsers (say Chrome and Firefox, for example) without needing to install anything. We want to use the HTML5 video tag if possible.
4) The prototype should preferably be built without spending money. I have seen that AWS CloudFront and Wowza offer free trials, so we are thinking about using these two services.
5) The prototype only needs to sustain one live stream at a time with two viewers, so there are no scaling requirements beyond that.
Any suggestions?
I am especially stuck/confused with the uploading/encoding part of the architecture (I am new to streaming, and all the formats/codecs/protocols/technologies are making it really hard to digest).
As of right now, I have come across WebRTC, which apparently allows me to do what I want: record and encode video from the browser using the webcam. But this API only works on HTTPS sites. Are there any alternatives that work on HTTP sites?
The other part I am not completely sure about is the need for an encoding server, for example Wowza Streaming Engine. Why do I need it? Isn't it enough if I use, for example, WebRTC to encode the video and then just send it to the distribution service (AWS CloudFront, for example)? I do understand that the encoding server would allow me to support many different devices, since it would create lots of different encodings and serve many different HTTP protocols, but do I need it for this prototype? I just want to make a single-format (MP4, for example) live stream that can be viewed in two web browsers, that's all; I don't need a variety of formats, nor support for different bandwidths or devices.
Based on your requirements, WebRTC is a good way to go.
"This API only works with HTTPS sites. Are there any alternatives that work with HTTP sites?"
No. Currently, Firefox is the only browser that allows WebRTC on HTTP, but ultimately it needs HTTPS.
For this prototype you can go with Wowza's WebRTC support.
With Wowza, all the streams are delivered from the Wowza server only, so it becomes routed WebRTC.
Install Wowza - https://www.wowza.com/docs/how-to-install-and-configure-wowza-streaming-engine
Enable WebRTC - https://www.wowza.com/docs/how-to-use-webrtc-with-wowza-streaming-engine
Download and configure StreamLock, or use a self-signed JKS file - https://www.wowza.com/docs/how-to-request-an-ssl-certificate-from-a-certificate-authority
Download the sample WebRTC - https://www.wowza.com/_private/webrtc/
Publish a stream using the Publish HTML page and play it through the Play HTML page (supported in Chrome, Firefox & Opera browsers).
For MP4 files in WebRTC: you need to enable the transcoder with H.264 & AAC. You also need to enable the option to record all incoming streams in the properties of the application you created for WebRTC (not the DVR). Using the file writer module, save all the recorded files to a custom location. Then, using a custom script (Bash, Python), move all the transcoded files to an S3 bucket and deliver them through CloudFront, for example as sketched below.
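The "custom script" step could be as simple as the following sketch, assuming boto3 and that the Wowza recordings land in a local directory; the bucket name and paths are placeholders:

```python
# Hypothetical sketch: move finished MP4 recordings to S3, where CloudFront
# (configured with the bucket as its origin) can serve them to viewers.
from pathlib import Path
import boto3

RECORDINGS_DIR = Path("/usr/local/WowzaStreamingEngine/content")  # placeholder
BUCKET = "my-recordings-bucket"  # placeholder

s3 = boto3.client("s3")

for mp4 in RECORDINGS_DIR.glob("*.mp4"):
    s3.upload_file(str(mp4), BUCKET, f"recordings/{mp4.name}")
    mp4.unlink()  # remove the local copy once uploaded
```

Run it from cron (or trigger it from Wowza's record-complete hook) so finished recordings are pushed up shortly after each stream ends.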
I am trying to send chunks of data using MQTT to AWS IoT, and via the rules engine the data will be streamed to a web app in real time (or near real time).
I am trying to find a way to get some kind of acknowledgement. The acknowledgement is important for my use case, as I am trying to send medical equipment data (let's say blood pressure data during an operation).
Thus, in a nutshell, I need the quickest way to transfer data (just like MQTT) between the device and AWS IoT, with some kind of acknowledgement mechanism.
I would also like to add: if the device or web server could not receive a message sent over MQTT due to some internet issue, it would be lost, right? I need some sort of mechanism by which the MQTT messages can be buffered and queued for some time, so that once the device or web app comes back online it can get the queued data (see the sketch below). I also know that there is something called a device shadow, but we had thought of using it differently. Can you suggest something for this?
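For reference, this is roughly what a QoS 1 publish with an explicit broker acknowledgement (PUBACK) looks like; a minimal sketch assuming the paho-mqtt 1.x client, with the AWS IoT endpoint and certificate paths as placeholders:

```python
# Hypothetical sketch: publish with MQTT QoS 1 so the broker acknowledges
# receipt (PUBACK), and use a persistent session (clean_session=False) so
# QoS 1 messages can be queued for a subscriber that is temporarily offline.
import paho.mqtt.client as mqtt

ENDPOINT = "xxxxxxxx-ats.iot.us-east-1.amazonaws.com"  # placeholder
TOPIC = "devices/bp-monitor-01/readings"               # placeholder

client = mqtt.Client(client_id="bp-monitor-01", clean_session=False)
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="device.pem.crt",
               keyfile="private.pem.key")
client.connect(ENDPOINT, port=8883)
client.loop_start()

info = client.publish(TOPIC, payload='{"systolic": 120, "diastolic": 80}', qos=1)
info.wait_for_publish()  # blocks until the broker's PUBACK arrives
print("acknowledged:", info.is_published())
```

Note that AWS IoT Core supports QoS 0 and 1 (not QoS 2), and its persistent sessions can hold QoS 1 messages for a disconnected client for a limited time, which may cover part of the buffering requirement.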
I am sure that some of you have faced this problem and may also have found an alternative to MQTT for transferring data to AWS IoT.
Kindly share your thoughts.
Thanks.
I'm in the testing stage of launching an online radio. I'm using an AWS CloudFormation stack with Adobe Media Server.
My existing instance type is m1.large and my Flash Media Live Encoder is streaming MP3 at 128 kbps, which I think is pretty normal, but it's producing a stream that isn't smooth or stable at all and seems to have a lot of breaks.
Should I pick an instance type with higher specs?
I'm running my test directly off the LiveHLSManifest link, which I open in my iPhone's Safari and play in the browser's built-in player, which doesn't set any buffering on the client side. Could this be the issue?
Testing the HLS/HDS links directly in iPhone's Safari was a bad idea. I relied on the built-in player already having some sort of buffering configuration by default, but no... I was able to receive a stable & smooth stream when I used players like Strobe Media Playback, FlowPlayer, etc. Hopefully, this answer will save someone some time.