I am trying to use Amazon's Kinesis Video Streams (KVS) signaling service to create a multi-user video chat system. It appears that the only supported topology is one-to-many. Does KVS support many-to-many?
I.e., one WebRTC session can feed multiple peers, but I can't mesh them so that everyone can communicate with everyone.
We do not currently support the mesh scenario with the signaling service out of the box. It is something we are looking at supporting, but for now it requires some solution engineering, i.e. putting a higher-order coordinator on top of the signaling service.
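For anyone who needs a starting point, here is a rough sketch of what such a coordinator could look like: it creates one SINGLE_MASTER signaling channel per participant, each participant acts as master of its own channel and as viewer on everyone else's, and the mesh falls out of N one-to-many sessions. This is only a sketch using the AWS SDK for Java v2; the room roster and the channel naming scheme are made up for illustration.

```java
import java.util.List;

import software.amazon.awssdk.services.kinesisvideo.KinesisVideoClient;
import software.amazon.awssdk.services.kinesisvideo.model.ChannelType;
import software.amazon.awssdk.services.kinesisvideo.model.CreateSignalingChannelRequest;

public class MeshCoordinator {
    public static void main(String[] args) {
        // Assumption: the coordinator knows the room roster, e.g. from its own DB.
        List<String> participants = List.of("alice", "bob", "carol");
        try (KinesisVideoClient kvs = KinesisVideoClient.create()) {
            for (String participant : participants) {
                // One SINGLE_MASTER channel per participant; everyone else
                // joins this channel as a VIEWER, producing a full mesh.
                String channelArn = kvs.createSignalingChannel(
                        CreateSignalingChannelRequest.builder()
                                .channelName("room42-" + participant) // hypothetical naming scheme
                                .channelType(ChannelType.SINGLE_MASTER)
                                .build())
                        .channelARN();
                System.out.println(participant + " is master of " + channelArn);
            }
        }
    }
}
```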
I have a desktop application which is hosted in AWS AppStream 2.0, and I want to conduct a performance test for it.
I tried multiple ways to record the application using JMeter/LoadRunner (with different protocols), but neither tool is able to capture any server/network calls for the application.
Is there any way we can record these kinds of applications using LoadRunner or JMeter?
As per Amazon AppStream 2.0 FAQs:
Streaming
Q: What streaming protocol does Amazon AppStream 2.0 use?
Amazon AppStream 2.0 uses NICE DCV to stream your applications to your users. NICE DCV is a proprietary protocol used to stream high-quality, application video over varying network conditions. It streams video and audio encoded using standard H.264 over HTTPS. The protocol also captures user input and sends it over HTTPS back to the applications being streamed from the cloud. Network conditions are constantly measured during this process and information is sent back to the encoder on the server. The server dynamically responds by altering the video and audio encoding in real time to produce a high-quality stream for a wide variety of applications and network conditions.
So I doubt this is something you can really record and replay; with JMeter you can record only HTTP and HTTPS traffic (see How to Run Performance Tests of Desktop Applications Using JMeter for details).
With regard to LoadRunner, I don't see any mention of the NICE DCV protocol in the LoadRunner Professional and LoadRunner Enterprise 2021 License Bundles.
The only option I can think of is downloading the client from https://www.nice-dcv.com/; the bundle contains a number of .dll files, and you can invoke the functions they export via JNA.
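To make that concrete, here is a minimal JNA sketch. Note that the .dll base name and both exported functions below are hypothetical: you would have to inspect the actual exports of the NICE DCV client libraries (e.g. with dumpbin /exports) and map them one-to-one.

```java
import com.sun.jna.Library;
import com.sun.jna.Native;

public class DcvClientProbe {

    // Map the exported C functions you discover onto a Java interface.
    public interface DcvLibrary extends Library {
        // "dcvclient" is an ASSUMED .dll base name; adjust to the real file.
        DcvLibrary INSTANCE = Native.load("dcvclient", DcvLibrary.class);

        int dcv_connect(String host, int port);                        // hypothetical export
        int dcv_send_input(int connection, byte[] payload, int length); // hypothetical export
    }

    public static void main(String[] args) {
        // Hypothetical usage: open a connection and replay one input event.
        int conn = DcvLibrary.INSTANCE.dcv_connect("appstream-host.example", 8443);
        byte[] keystroke = {0x1C};
        DcvLibrary.INSTANCE.dcv_send_input(conn, keystroke, keystroke.length);
    }
}
```

Since JMeter runs on the JVM, an interface like this could be driven from a JSR223 sampler once the real exports are mapped.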
Starting at the top of the stack (for LoadRunner):
Citrix
Terminal Server
GUI Virtual user
Template, Visual Studio, using NICE API application source (if available in C, C++, C#, or VB)
Template Java, using NICE client application source in Java (if available)
The bigger questions, since you are using an Amazon service: what is your SLA for response time, bit rate, and mean QoS for video under load? And if you have no contractual SLA, how, and by whom, will you get the issue fixed at Amazon?
I have a need to poll for a close-to-real-time reading from a serial device (using an ESP32) from a web application. I am currently doing this with Particle Photons and the Particle Cloud API, and am wondering if there is a way to achieve something similar using Google Cloud IoT.
From reading the documentation, it seems a common way to do this is via Pub/Sub, then publishing to BigQuery via Dataflow or to Firebase via Cloud Functions. However, to reduce pricing overhead, I am hoping to trigger a data exchange only when the device receives an external request.
It looks like there is a way to send commands to the IoT device; am I on the right track with this? I can't seem to find the documentation for it, but after receiving a command the device would use Pub/Sub to publish to a topic, which could trigger a Cloud Function to update Firebase?
Lastly, it also looks like there is a way to do a GET request to the device's DeviceState, but this can only be updated once per second (which might also work, though it sounds like they generally discourage using state for this purpose).
If there is another low-latency, low-cost way to allow a client to poll for a real-time value from the IoT device that I've missed, please let me know. Thank you!
Espressif has integrated Google's Cloud IoT Device SDK, which creates an authenticated bidirectional MQTT pipe between the device and IoT Core. As you've already discovered, you can send anything from the cloud to the device (it's called a "command," but it's just an MQTT payload, so you can put almost anything you want in it) and vice versa (it's called "telemetry," but again, it's just an MQTT payload). Once incoming messages from devices reach the cloud, Pub/Sub can route them wherever you want. I don't know if I'd call it real-time, but latencies on a good WiFi network tend to be under a second.
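For illustration, a hedged sketch of the "poll on demand" flow from the cloud side using the google-cloud-iot Java client: the backend sends a command, and the device answers by publishing telemetry. The project, region, registry, and device IDs, as well as the "read-now" payload, are assumptions for this example.

```java
import com.google.cloud.iot.v1.DeviceManagerClient;
import com.google.cloud.iot.v1.DeviceName;
import com.google.protobuf.ByteString;

public class PollDevice {
    public static void main(String[] args) throws Exception {
        try (DeviceManagerClient client = DeviceManagerClient.create()) {
            // All four identifiers below are assumptions for illustration.
            DeviceName device = DeviceName.of(
                    "my-project", "us-central1", "my-registry", "esp32-sensor-01");
            // The device's MQTT loop receives this on /devices/<id>/commands/#
            // and can reply by publishing a telemetry message with the reading.
            client.sendCommandToDevice(device, ByteString.copyFromUtf8("read-now"));
        }
    }
}
```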
I have a contact flow in AWS Connect with customer audio streaming enabled. I get the customer audio stream in KVS, and with the examples provided by AWS I can read bytes from the stream in Java and convert them to an audio file once the call is completed.
But I want to stream the audio to a web page for real-time monitoring, exactly like the real-time monitoring AWS provides in the built-in CCP.
I get the stream ARN and other contact data. How can I use that stream for real-time monitoring/streaming?
Any heads-up will be appreciated.
You're going to want to use a WebRTC client in the browser/page from which you want to monitor and control the stream. AWS provides a WebRTC SDK for Kinesis Video Streams that can be used for this. The SDK documentation can be found here; it includes a link to samples and config details on GitHub.
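As a hedged illustration of the server-side half (the in-browser viewer itself would use the JavaScript SDK linked above), this sketch uses the AWS SDK for Java v2 to look up the signaling endpoints a viewer needs, assuming the call audio is being republished over a KVS signaling channel whose ARN your backend passes in.

```java
import software.amazon.awssdk.services.kinesisvideo.KinesisVideoClient;
import software.amazon.awssdk.services.kinesisvideo.model.ChannelProtocol;
import software.amazon.awssdk.services.kinesisvideo.model.ChannelRole;
import software.amazon.awssdk.services.kinesisvideo.model.GetSignalingChannelEndpointRequest;
import software.amazon.awssdk.services.kinesisvideo.model.GetSignalingChannelEndpointResponse;
import software.amazon.awssdk.services.kinesisvideo.model.SingleMasterChannelEndpointConfiguration;

public class ViewerEndpoint {
    public static void main(String[] args) {
        // Assumption: a signaling channel carrying the call audio already exists.
        String channelArn = args[0];
        try (KinesisVideoClient kvs = KinesisVideoClient.create()) {
            GetSignalingChannelEndpointResponse resp = kvs.getSignalingChannelEndpoint(
                    GetSignalingChannelEndpointRequest.builder()
                            .channelARN(channelArn)
                            .singleMasterChannelEndpointConfiguration(
                                    SingleMasterChannelEndpointConfiguration.builder()
                                            .protocols(ChannelProtocol.WSS, ChannelProtocol.HTTPS)
                                            .role(ChannelRole.VIEWER)
                                            .build())
                            .build());
            // The WSS endpoint is what the browser's WebRTC viewer connects to.
            resp.resourceEndpointList().forEach(e ->
                    System.out.println(e.protocol() + " -> " + e.resourceEndpoint()));
        }
    }
}
```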
I'm making a project where temperature and humidity levels are sensed by an Arduino and sent to AWS via an ESP-8266-01s. At the same time, that data should also be shown in a web application (it may be Node.js/Java, etc.).
So what I'm asking is what the architecture should look like. What is the best practice? Does AWS also provide a web app I can use both as the cloud database and as the web application, or should I build a separate web app project that connects to AWS?
I searched on Google, but the only answers I can find cover two pieces, the Arduino and AWS, without the other piece in my case: the web app.
Make use of the MQTT protocol.
Components required:
The PubSubClient.h library on the ESP8266, used to publish temperature and humidity data to an MQTT broker on AWS
A Mosquitto MQTT broker set up on AWS to accept data from the ESP8266
A Python script that subscribes to data from the Mosquitto broker and dumps it into a database (my suggestion is InfluxDB); see the sketch below
A graphing platform to query the database and display visual time-series graphs (my suggestion is Grafana)
Use AWS only for purchasing a virtual machine; the rest can be taken care of using open-source platforms.
Assuming you want to display graphs of temperature and humidity, using Grafana is the best practice.
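The list above names a Python script for the subscriber step; to keep the snippets in this post in one language, here is the same bridge idea sketched in Java with the Eclipse Paho MQTT client and influxdb-java. The broker host, topic name, payload format, and database name are all assumptions.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;
import org.influxdb.InfluxDB;
import org.influxdb.InfluxDBFactory;
import org.influxdb.dto.Point;

public class SensorBridge {
    public static void main(String[] args) throws Exception {
        // Assumption: InfluxDB runs alongside the broker and the "sensors"
        // database has already been created.
        InfluxDB influx = InfluxDBFactory.connect("http://localhost:8086", "admin", "admin");
        influx.setDatabase("sensors");

        // Assumption: Mosquitto listens on the default port of your EC2 host.
        MqttClient mqtt = new MqttClient("tcp://<ec2-host>:1883", "sensor-bridge",
                new MemoryPersistence());
        mqtt.connect();

        // Assumed payload format from the ESP8266: "<tempC>,<humidity%>"
        mqtt.subscribe("home/dht22", (topic, msg) -> {
            String[] parts = new String(msg.getPayload()).split(",");
            influx.write(Point.measurement("climate")
                    .addField("temperature", Double.parseDouble(parts[0]))
                    .addField("humidity", Double.parseDouble(parts[1]))
                    .build());
        });

        Thread.currentThread().join(); // keep the bridge running
    }
}
```

Point a Grafana data source at the same InfluxDB database and the time-series graphs come essentially for free.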
You will not find a silver bullet here. A proper architecture for your case depends on many things, and there can be different approaches, each with its own pros and cons.
There are many aspects to cover, including connectivity, security, updates, availability, and cost.
Usually IoT devices are not connected directly to the cloud, because they don't have a constant connection, or any network connection at all. Instead there is a hub (or middleware) that collects data from the sensors/devices and sends it to the cloud for processing.
Many cloud vendors, including AWS, provide complex out-of-the-box solutions here.
These are just examples.
I have a requirement that calls for a live-streaming solution. Here is the requirement.
There will be 5000 IoT devices. Each device is capable of streaming live video. There will be about 1000 users. Each user can own one or multiple devices. Whenever a user wants to view the live stream of a device they own, they should be able to do so. So if user1 owns device1, only user1 should be able to view the live stream from this device, and no one else. The user credentials and device mappings are stored in a database. The devices are connected to the server using the MQTT protocol, and the users connect to the server using an HTTPS REST API.
How do I go about implementing the server for this? What protocol should I use?
I have been searching for a solution on the internet. I came across AWS MediaLive, but it seemed limited, in that I could have only 100 inputs per channel and 5 channels. Also, the documentation states that the inputs must already be streaming when the channel is started, whereas in my case the streaming source would initiate streaming whenever required.
Does anyone have any idea how to use AWS MediaLive for this task, or whether I should use MediaLive at all?
Peer-to-peer streaming of video from the device to the user's app is also a possibility. Assuming the embedded device runs Linux, is there a viable peer-to-peer solution where the device streams the video directly to multiple users on mobile apps? I have not been able to find any such solution on the internet.
You can use DXS (Data Stream Exchange system). You can also look at this tech talk, which explains how to do it:
https://www.youtube.com/watch?v=DoDzfRU4rEU&list=PLZWI9MjJG-V_Y52VWLPZE1KtUTykyGTpJ&index=2&t=0s
For anyone doing something similar in the future: I did some more research on the internet, and it seems Amazon Kinesis Video Streams does what is required. I have not implemented anything yet, but hopefully it will work well for these requirements.
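In case it helps anyone evaluating the same path, here is a hedged sketch (AWS SDK for Java v2) of the shape KVS suggests for this requirement: one stream per device (the device produces into it with the KVS producer SDK), and the REST backend checks the user-to-device mapping in your database before resolving an HLS playback endpoint for the streams that user owns. The stream naming scheme and retention period are assumptions.

```java
import software.amazon.awssdk.services.kinesisvideo.KinesisVideoClient;
import software.amazon.awssdk.services.kinesisvideo.model.APIName;
import software.amazon.awssdk.services.kinesisvideo.model.CreateStreamRequest;
import software.amazon.awssdk.services.kinesisvideo.model.GetDataEndpointRequest;

public class DeviceStreams {
    public static void main(String[] args) {
        try (KinesisVideoClient kvs = KinesisVideoClient.create()) {
            // Provisioning: one stream per IoT device (hypothetical naming scheme).
            kvs.createStream(CreateStreamRequest.builder()
                    .streamName("device-0001")
                    .dataRetentionInHours(1)
                    .build());

            // Per user request, AFTER the ownership check against your database:
            // resolve the endpoint used for GetHLSStreamingSessionURL, which in
            // turn produces a short-lived playback URL for that stream only.
            String endpoint = kvs.getDataEndpoint(GetDataEndpointRequest.builder()
                    .streamName("device-0001")
                    .apiName(APIName.GET_HLS_STREAMING_SESSION_URL)
                    .build())
                    .dataEndpoint();
            System.out.println("HLS API endpoint: " + endpoint);
        }
    }
}
```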