I need to poll a close-to-real-time reading from a serial device (an ESP32) from a web application. I am currently doing this with Particle Photons and the Particle Cloud API, and am wondering whether there is a way to achieve something similar with Google Cloud IoT.
From reading the documentation, it seems a common way to do this is via Pub/Sub, then publishing to BigQuery via Dataflow or to Firebase via Cloud Functions. However, to reduce pricing overhead, I am hoping to trigger a data exchange only when the device receives an external request.
It looks like there is a way to send commands to the IoT device; am I on the right track with this? I can't seem to find the documentation for this flow, but after receiving a command the device would publish to a Pub/Sub topic, which could trigger a Cloud Function to update Firebase?
Lastly, it also looks like there is a way to do a GET request for the device's DeviceState, but state can only be updated once per second (which might also work, though the docs generally discourage using device state for this purpose).
If there is another low-latency, low-cost way to allow a client to poll for a real-time value from the IoT device that I've missed, please let me know. Thank you!
Espressif has integrated Google's Cloud IoT Device SDK, which creates an authenticated, bidirectional MQTT pipe between the device and IoT Core. As you've already discovered, you can send anything from the cloud to the device (it's called a "command", but it's just an MQTT payload, so you can put almost anything you want in it) and vice versa (it's called "telemetry", but again it's just an MQTT payload). Once incoming messages from devices reach the cloud, Pub/Sub can route them wherever you want. I don't know if I'd call it real-time, but latencies on a good WiFi network tend to be under a second.
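For the poll-on-request pattern in the question, the backend can send a command through the Cloud IoT Core API and let the device answer with telemetry. A minimal sketch in Python, assuming application default credentials and hypothetical project/registry/device names:

```python
import base64

from googleapiclient import discovery  # pip install google-api-python-client

# Hypothetical identifiers; replace with your own.
PROJECT, REGION = "my-project", "us-central1"
REGISTRY, DEVICE = "my-registry", "esp32-serial-1"

service = discovery.build("cloudiot", "v1")
device_path = (
    f"projects/{PROJECT}/locations/{REGION}"
    f"/registries/{REGISTRY}/devices/{DEVICE}"
)

# The command payload is opaque bytes; here we ask the device to read its
# serial line and publish the value as telemetry.
service.projects().locations().registries().devices().sendCommandToDevice(
    name=device_path,
    body={"binaryData": base64.b64encode(b"read_serial").decode("utf-8")},
).execute()
```

The device receives the command on the MQTT topic /devices/DEVICE_ID/commands/#, and whatever it publishes as telemetry can then be routed by Pub/Sub to a Cloud Function that updates Firebase.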
I'm trying to play with AWS IoT to communicate with multiple identical devices.
So far, so good: all my devices are connected to it, and the only difference between them could be a single device ID (like a MAC address or a serial number).
Now I'd like to send a message to a single specific device using its device ID, and I don't know if there is a good way to do that.
I could make each device subscribe to a topic like /<DEVICE_ID>, but that doesn't seem like good practice, especially if I have thousands of devices.
Plus, AWS discourages it as stated in the AWS IoT documentation:
Note: We do not recommend using personally identifiable information in your topics.
Is there a good way to handle this use case? Or is AWS IoT only useful for managing multiple devices at once?
Here are the best practices for creating MQTT topics:
https://www.hivemq.com/blog/mqtt-essentials-part-5-mqtt-topics-best-practices/
Regarding your specific case: each device needs a unique identity so that a command can be addressed to a particular device, which means you need to include the device_id in your MQTT topic.
You can use the following pattern to send a message to a destination device:
protocol_prefix / type_of_message / dest_id / message_id
hexaiot/controldevice/d12345/x123
Have each device subscribe using a wildcard character, so that it receives every message addressed to its ID.
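A minimal sketch of that scheme with the Python paho-mqtt client (1.x-style API); the broker host and device ID are hypothetical placeholders:

```python
import paho.mqtt.client as mqtt  # pip install paho-mqtt

DEVICE_ID = "d12345"  # this device's unique identity (hypothetical)

def on_message(client, userdata, msg):
    # Every message published under .../<DEVICE_ID>/<message_id> lands here.
    print(f"command on {msg.topic}: {msg.payload!r}")

device = mqtt.Client(client_id=DEVICE_ID)
device.on_message = on_message
device.connect("broker.example.com", 1883)  # hypothetical broker
# '+' matches any single topic level, i.e. any message_id for this device.
device.subscribe(f"hexaiot/controldevice/{DEVICE_ID}/+", qos=1)
device.loop_start()

# Elsewhere, the sender addresses exactly one device by putting its ID
# in the topic:
sender = mqtt.Client(client_id="backend")
sender.connect("broker.example.com", 1883)
sender.publish("hexaiot/controldevice/d12345/x123", b"turn_on", qos=1)
```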
I have a requirement for a live streaming solution. Here is the requirement.
There will be 5000 IoT devices, each capable of streaming live video, and about 1000 users. Each user can own one or more devices. Whenever a user wants to view the live stream of a device they own, they should be able to do so; if user1 owns device1, only user1 should be able to view the live stream from that device and no one else. The user credentials and device mappings are stored in a database. The devices are connected to the server using MQTT, and the users connect to the server using an HTTPS REST API.
How do I go about implementing the server for this? What protocol should I use?
I have been searching for a solution on the internet. I came across Amazon MediaLive, but it seemed limited in that I could only have 100 inputs per channel and 5 channels. Also, the documentation states that the streaming inputs must already be streaming when the channel is started, whereas my requirement is more that the streaming source would initiate streaming whenever required.
Does anyone have any idea how to use AWS MediaLive for this task, or whether I should use MediaLive at all?
Peer-to-peer streaming of video from the device to the user's app is also a possibility. Assuming the embedded device runs Linux, is there a viable peer-to-peer solution where the device streams the video directly to multiple users on mobile apps? I have not been able to find any such solutions on the internet.
You can use DXS (Data Stream Exchange system); you can also look at this tech talk, which explains how to do it:
https://www.youtube.com/watch?v=DoDzfRU4rEU&list=PLZWI9MjJG-V_Y52VWLPZE1KtUTykyGTpJ&index=2&t=0s
For anyone doing something similar in the future: I did some more research on the internet, and it seems like Amazon Kinesis Video Streams does what is required. I have not implemented anything yet, but hopefully it will work well for these requirements.
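If Kinesis Video Streams does fit, the per-user authorization in the question can be handled server-side: check the user-to-device mapping in the database and, only if the user owns the device, mint a short-lived HLS playback URL for that device's stream. A sketch with boto3, assuming one stream per device and a hypothetical stream name:

```python
import boto3  # pip install boto3

STREAM_NAME = "device1-stream"  # hypothetical: one stream per device

kvs = boto3.client("kinesisvideo")
# Find the endpoint that serves HLS sessions for this stream.
endpoint = kvs.get_data_endpoint(
    StreamName=STREAM_NAME, APIName="GET_HLS_STREAMING_SESSION_URL"
)["DataEndpoint"]

media = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
# Ask for a live HLS playback URL; hand this to the user's player only
# after checking the user->device ownership mapping in your database.
url = media.get_hls_streaming_session_url(
    StreamName=STREAM_NAME, PlaybackMode="LIVE"
)["HLSStreamingSessionURL"]
print(url)
```

The returned URL expires, so a user who loses ownership of a device simply stops being issued new URLs.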
I'm a fresh user of ActiveMQ, and I'm having some trouble getting started with this technology.
I have the following situation:
I have a piece of software, running on an embedded (offline) ARM device, that archives a set of videos on a removable hard disk at run time.
Sometimes (4-5 events a day) I have to associate an alarm event with those videos and queue the alarm on a persistent queue.
Once a month we extract the hard disk and connect it to another embedded (online) ARM device, which should notify an ActiveMQ server about the alarms generated by the offline ARM device.
And now my question: how can I store the persistent queue on the hard disk, so that the events generated by the offline ARM device are available to the online ARM system (the only "connection" between the online and offline embedded devices is the hard disk)?
Please note that I cannot change the way I transmit messages to the online server, since it is a system not developed by my company.
Best regards
Giovanni
It sounds like you want a "store-and-forward" messaging pattern. You could configure the "offline" ActiveMQ broker to attempt to connect to the "online" ActiveMQ broker. The network connector will attempt to connect at configurable intervals and when it is "online" it will begin to send messages automatically.
The slight downside is that the broker will attempt to connect to the remote broker even while offline, so you'll need to manage log rotation or logging levels to accommodate this.
Look for the static:// network connector URI.
Network of brokers
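A minimal sketch of the broker configuration this answer describes, assuming KahaDB persistence placed on the removable disk; starting a broker with the same configuration (and the disk mounted at the same path) on the online ARM device will replay the stored queue through the network connector. The host name and paths are hypothetical:

```xml
<!-- Inside activemq.xml; host name and paths are hypothetical. -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="offline-broker">
  <!-- Persist queued messages on the removable disk. -->
  <persistenceAdapter>
    <kahaDB directory="/mnt/removable-disk/kahadb"/>
  </persistenceAdapter>
  <networkConnectors>
    <!-- static:// lists the remote broker explicitly; the connector keeps
         retrying and forwards stored messages once it can connect. -->
    <networkConnector uri="static:(tcp://online-broker.example.com:61616)"/>
  </networkConnectors>
</broker>
```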
I'm trying to come up with the best solution for scaling a chat service in AWS. I've come up with a couple of potential solutions:
Redis Pub/Sub - When a user establishes a connection to a server, that server subscribes to that user's ID. When someone sends a message to that user, a server performs a publish to the channel with the user's ID. The server the user is connected to receives the message and pushes it down to the appropriate client.
SQS - I've thought of creating a queue for each user. The server the user is connected to will poll (or use SQS long-polling) that queue. When a new message is discovered, it will be pushed to the user from the server.
SNS - I really liked this solution until I discovered the 100 topic limit. I would need to create a topic for each user, which would only support 100 users.
Are there any other ways chat could be scaled using AWS? Is the SQS approach viable? How long does it take AWS to add a message to a queue?
Building a chat service isn't as easy as you would think.
I've built full XMPP servers, clients, and SDKs and can attest to some of the subtle and difficult problems that arise. A prototype where users see each other and chat is easy. A full-featured system with account creation, security, discovery, presence, offline delivery, and friend lists is much more of a challenge. To then scale that across an arbitrary number of servers is especially difficult.
PubSub is a feature offered by chat services (see XEP-0060) rather than a traditional means of building a chat service. I can see the allure, but PubSub can have drawbacks.
Some questions for you:
Are you doing this over the Web? Are users going to be connecting and long-polling, or do you have a WebSockets solution?
How many users? How many connections per user? Ratio of writes to reads?
Your idea for using SQS that way is interesting, but it probably won't scale. It's not unusual to have 50k or more users on a chat server, and if you're polling a separate SQS queue for each user you won't get anywhere near that. You would be better off having a queue for each server, with each server polling only its own queue. Then it's on you to figure out which server a user is on and put the message into the right queue.
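A sketch of that per-server consumer with boto3; the queue URL, message attribute, and delivery function are hypothetical:

```python
import boto3  # pip install boto3

# Hypothetical: this server's own queue, one queue per front-end server.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/chat-server-1"
sqs = boto3.client("sqs")

def deliver_to_local_user(user_id, body):
    """Hypothetical: push over the websocket/long-poll connection this
    server holds for user_id."""

while True:
    # Long polling: the call blocks for up to 20s, so an idle server makes
    # ~3 requests a minute instead of hammering the API.
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,
        MessageAttributeNames=["recipient"],
    )
    for msg in resp.get("Messages", []):
        user_id = msg["MessageAttributes"]["recipient"]["StringValue"]
        deliver_to_local_user(user_id, msg["Body"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```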
I suspect you'll want to go something like:
A big RDS database on the backend.
A bunch of front-end servers handling the client connections.
Some middle tier Java / C# code tracking everything and routing messages to the right place.
To get an idea of the complexity of building a chat server, read the XMPP RFCs:
RFC 3920
RFC 3921
SQS/SNS might not fit your chat requirement. We have observed some latency in SQS that might not be suitable for a chat application, and SQS does not guarantee FIFO ordering. I have worked with Redis on AWS; it is quite easy and stable if it is configured with all the best practices in mind.
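For reference, the Redis Pub/Sub option from the question looks roughly like this with redis-py; the host, user ID, and delivery function are hypothetical:

```python
import json

import redis  # pip install redis

r = redis.Redis(host="chat-cache.example.com", port=6379)  # hypothetical host

# Sender side (any server): publish to the recipient's channel.
def send_message(to_user_id, payload):
    r.publish(f"user:{to_user_id}", json.dumps(payload))

# Receiver side (the server holding the user's connection): subscribe when
# the user connects, then push each message down their socket.
pubsub = r.pubsub()
pubsub.subscribe("user:42")  # hypothetical user ID
for item in pubsub.listen():
    if item["type"] == "message":
        deliver_over_socket(item["data"])  # hypothetical delivery function
```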
I've thought about building a chat server using SNS, but instead of one topic per user, as you describe, I would use one topic for the entire chat system and have each server subscribe to it, where each server runs some sort of long-polling or WebSockets chat system. Then, when an event occurs, the data is sent in the payload of the SNS notification. The server can then use this payload to determine which clients in its queue should receive the response, leaving any unrelated clients untouched. I actually built a small prototype of this, but haven't done enough testing to see if it's robust enough for a large number of users.
Hi. Realtime chat doesn't work well with SNS; it's designed for email/SMS, or for services where a latency of one or a few seconds is acceptable. In realtime chat, one or a few seconds is not acceptable.
Check this link:
Latency (i.e. “Realtime”) for PubNub vs SNS
Amazon SNS provides no latency guarantees, and the vast majority of latencies are measured over 1 second, and often many seconds slower. Again, this is somewhat irrelevant; Amazon SNS is designed for server-to-server (or email/SMS) notifications, where a latency of many seconds is often acceptable and expected.
Because PubNub delivers data via an existing, established open network socket, latencies are under 0.25 seconds from publish to subscribe for the 95th percentile of subscribed devices. Most humans perceive something as “realtime” if the event is perceived within 0.6 – 0.7 seconds.
The way I would implement such a thing (if not using some framework) is the following:
Have a webserver (on EC2) which accepts the messages from the users.
Put this webserver in an Auto Scaling group. The webserver can update a DB on Amazon RDS, which can scale easily.
If you are using your own DB, you might consider decoupling the DB from the webserver using SQS (by sending all requests to the same queue), and then having a consumer that consumes the queue. This consumer can also be placed behind an Auto Scaling group, so that if the queue grows beyond X messages, it will scale out (you can set this up with alarms; a sketch of the alarm setup follows below).
SQS normally delivers pretty fast, i.e. less than one second from the moment you send a message to the moment it appears on the queue, and rarely more than that.
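Setting up the queue-depth alarm mentioned above might look like this with boto3; the queue name and scaling policy ARN are hypothetical:

```python
import boto3  # pip install boto3

cw = boto3.client("cloudwatch")
# Alarm when the queue backlog exceeds a threshold; AlarmActions points at
# a (hypothetical) Auto Scaling policy ARN that adds consumer instances.
cw.put_metric_alarm(
    AlarmName="chat-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "chat-requests"}],  # hypothetical queue
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:example"],
)
```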
Since AWS IoT started supporting WebSockets, custom keepalive intervals, and Pub/Sub a couple of months ago, you can easily build an elastic chat on it. AWS IoT is a managed service with lots of SDKs for different languages, including JavaScript, that was built to handle monster loads (billions of messages) with zero administration.
You can read more about update here:
https://aws.amazon.com/ru/about-aws/whats-new/2016/01/aws-iot-now-supports-websockets-custom-keepalive-intervals-and-enhanced-console/
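A sketch of a chat client with the AWS IoT Python SDK over WebSockets; the endpoint, IAM keys, and topic are hypothetical, and WebSocket connections authenticate with SigV4/IAM rather than client certificates:

```python
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient  # pip install AWSIoTPythonSDK

client = AWSIoTMQTTClient("chat-user-42", useWebsocket=True)
client.configureEndpoint("a1b2c3d4e5f6g7.iot.us-east-1.amazonaws.com", 443)  # hypothetical
client.configureCredentials("root-ca.pem")  # CA only; no client cert over WebSockets
client.configureIAMCredentials("ACCESS_KEY", "SECRET_KEY")  # hypothetical IAM keys

def on_message(client_, userdata, message):
    # Each subscriber receives every message published to the topic.
    print(f"{message.topic}: {message.payload!r}")

client.connect()
client.subscribe("chat/room/lobby", 1, on_message)  # hypothetical room topic
client.publish("chat/room/lobby", "hello from user 42", 1)
```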
Edit:
The latest SQS update (2016/11): you can now use Amazon Simple Queue Service (SQS) for applications that require messages to be processed in a strict sequence and exactly once, using First-In-First-Out (FIFO) queues. FIFO queues are designed to ensure that the order in which messages are sent and received is strictly preserved and that each message is processed exactly once.
Source:
https://aws.amazon.com/about-aws/whats-new/2016/11/amazon-sqs-introduces-fifo-queues-with-exactly-once-processing-and-lower-prices-for-standard-queues/
From now on, implementing SQS + SNS looks like a good idea too.
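A sketch of the FIFO flavor with boto3; the queue name and group ID are hypothetical. Ordering is preserved per MessageGroupId, so using one group per conversation keeps each chat in order:

```python
import boto3  # pip install boto3

sqs = boto3.client("sqs")
# FIFO queue names must end in ".fifo".
queue_url = sqs.create_queue(
    QueueName="chat-messages.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody="hello",
    MessageGroupId="conversation-42",  # hypothetical: one group per conversation
)
```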