I am trying to publish/subscribe to the AWS IoT MQTT broker from a client that does not support SigV4 or client certificates; it only supports SSL with a username and password. From what I can tell this won't be possible, so what is the best way to integrate this client?
Currently the client publishes to a CloudMQTT broker, which works nicely, but I want to integrate Amazon Echo/Alexa into the solution to allow voice control. That means I need some way to connect the client to the AWS IoT MQTT broker instead, where I already have Alexa publishing data (using Lambda and IoT Device Shadows).
What is the best approach? As far as I can tell I can't connect the client to AWS MQTT using plain SSL; it insists on certificates. Should I try to bridge CloudMQTT to AWS MQTT? Or is there some way to get the Echo to publish to a different MQTT broker than Amazon's?
Bridging the brokers is one possible solution, as described at
https://aws.amazon.com/blogs/iot/how-to-bridge-mosquitto-mqtt-broker-to-aws-iot/
This did turn out to be quite a complicated process, though. I first bridged using a local mosquitto install, and it failed to connect with 'unknown error'. Having done some searching online, it looks like this problem appeared in the latest release of mosquitto. Instead I tried bridging with a mosquitto broker running on an AWS Linux EC2 instance, and that was successful; a sketch of the bridge configuration is below.
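For reference, the bridge configuration from that blog post boils down to a mosquitto.conf stanza roughly like the following (the endpoint, certificate paths, and topic pattern here are placeholders for your own values):

# Bridge everything under awsiot/ to AWS IoT, both directions, QoS 1
connection awsiot
address YOUR-ENDPOINT.iot.eu-west-1.amazonaws.com:8883
topic awsiot/# both 1
bridge_protocol_version mqttv311
bridge_cafile /etc/mosquitto/certs/rootCA.pem
bridge_certfile /etc/mosquitto/certs/cert.crt
bridge_keyfile /etc/mosquitto/certs/private.key
cleansession true
clientid bridgeawsiot
start_type automatic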
The better solution I came up with is to modify my Lambda function to publish directly to the MQTT broker I was already using. To do this you need to include the Node.js module 'mqtt.js' (or a similar library), which is not in the aws-sdk, so it takes a bit of reading to figure out how to do it. Until now I had just been using the inline editor in the AWS Lambda web interface to write code, which unfortunately doesn't allow you to include external libraries. Instead you need to create your own deployment package.
Below are two useful links to get you started with making your own deployment package, but they are missing a couple of crucial bits of info, which I cover below:
https://aws.amazon.com/blogs/compute/nodejs-packages-in-lambda/
http://docs.aws.amazon.com/lambda/latest/dg/nodejs-create-deployment-pkg.html
You will need to write the code in a file on your hard drive, then run npm install from the command line to put the required dependencies into the folder your code is in. You then need to zip the whole lot so that there is no top-level folder containing it all. That is to say, your code needs to be in the root of the zip, not in a folder in the root of the zip (which is what you get if you right-click your code-containing folder and send to zip). From the command line: cd into the folder and run zip -r ../function.zip . so the files sit at the root of the archive.
What is also not mentioned is that, if you are moving from working in the online editor, you need to include a couple of lines at the top of your JavaScript so that paths resolve correctly. You need to add the following:
var child_process = require('child_process');
var path = require('path');
You can then upload this zip in the Lambda function web editor and build your function as normal. Unfortunately you can no longer use the inline web editor, so you need to re-zip and upload again to make changes. A minimal sketch of the resulting handler is shown below.
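For illustration, here is a minimal sketch of such a handler using mqtt.js; the broker URL, credentials, and topic are placeholders for your own details (e.g. your CloudMQTT instance), not anything AWS provides:

var mqtt = require('mqtt');

exports.handler = function (event, context, callback) {
    // Connect to the existing broker over TLS with username/password.
    var client = mqtt.connect('mqtts://your-broker.example.com:8883', {
        username: 'your-username',
        password: 'your-password'
    });

    client.on('connect', function () {
        // Publish the payload Alexa handed us, then disconnect cleanly.
        client.publish('home/alexa/command', JSON.stringify(event), function (err) {
            client.end();
            callback(err, err ? null : 'published');
        });
    });

    client.on('error', function (err) {
        client.end();
        callback(err);
    });
};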
Related
I'm trying to publish to the device's topic from a Raspberry Pi and read it on a Node.js server.
I'm using the GCP IoT contrib nodes for Node-RED.
I managed to write a message to the cloud with another node (on a general topic) and read it on the server, so the private key and credentials are fine.
I use the same project as above, the right registryId, the right region, and an already allowed deviceId.
The node is connected to the cloud, and when I inject a timestamp into it I can see a successful log, but in between I get some unsuccessful ones.
I cannot find how to read the sent messages in the GCP Console (the way you can with normal messages from subscriptions).
And I don't know how to read them from code. Maybe like this?
gcp code link
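Something along these lines, using the @google-cloud/pubsub Node.js client to pull from a subscription on the registry's telemetry topic (the subscription name here is just a placeholder):

const {PubSub} = require('@google-cloud/pubsub');

const pubsub = new PubSub();
// Placeholder: a subscription attached to the registry's Pub/Sub topic.
const subscription = pubsub.subscription('my-telemetry-subscription');

subscription.on('message', message => {
    // IoT Core attaches the device ID as a message attribute.
    console.log('deviceId:', message.attributes.deviceId);
    console.log('payload:', message.data.toString());
    message.ack();
});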
Also:
When I send messages to a normal topic and let Node-RED read them with the Read from Google node, it crashes the app.
Thank you!
As part of my project, I am trying to integrate an ESP32 with some AWS services.
One of the tasks demands storing certain ESP32 code (as '.ino' files, written in the Arduino IDE) in Amazon S3 buckets (cloud storage). Whenever a user (for example, a mobile app) or the ESP32 itself sends a request to the AWS S3 server to retrieve a particular .ino file, that file should be received and executed immediately by the ESP32.
While trying to figure out a solution for the above, I am stuck on how exactly the ESP32 will execute the code retrieved from the cloud while maintaining a WebSocket connection with AWS servers.
One possible solution I thought of was using dual-core processing on the ESP32, but I am not exactly sure how and where this would be used in my Arduino IDE code.
It would be great if someone could throw some light on the above query!
I have a legacy app accessing MQ Server from an MQ Client using the C++ API. How is this API used to add encryption over the Server Connection Channel? I can't find a location where the certificate is provided to the imqChannel object.
You don't provide the code that is not working, so I can only give some general direction.
You specify the cipher like this:
pchannel->setSslCipherSpecification("TLS_RSA_WITH_AES_256_CBC_SHA256");
You can specify the location of the key database (.kdb) and stash (.sth) files like this:
(note in this example it would expect to find two files, /tmp/key.kdb and /tmp/key.sth)
manager.setKeyRepository("/tmp/key");
You can also specify the location of the key repository non-programmatically, using mqclient.ini or by setting the MQSSLKEYR environment variable. If you are interested in these options, comment and I'll expand this answer.
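In brief, the environment-variable form points at the same stem as the programmatic example above (so /tmp/key.kdb and /tmp/key.sth):

export MQSSLKEYR=/tmp/key

and in mqclient.ini it is an SSL stanza (stanza and attribute names per IBM MQ's client configuration docs):

SSL:
   SSLKeyRepository=/tmp/key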
The Google Cloud IoT Core platform seems to be built around the idea of sending configurations down to the device and receiving state back from it.
Google's own documentation suggests using that approach instead of building around sending commands down (as a config) and getting responses back (as a state).
However, at the very end of the documentation they show an example of exactly that.
I am struggling to understand how one supports both approaches. I can see the benefit of how it was designed, but I am also struggling to understand how one would talk to the device using such an idiom, with values sent as config and results read back as state.
Has anybody implemented a command/response flow? Is it possible to subscribe to the state topic to retrieve the state of the device in my own application?
Edit based on clarifying comment below:
We've got a beta feature we're calling "Commands" which will do the reboot you're talking about. So the combination of config messages (for persistent configuration that you want to send a device on startup/connect to IoT Core) and Commands for fire-and-forget messages like a reboot can do what you're talking about. Current state is a bit trickier, in that you could either have a callback mechanism where you send a command to ask and listen on the /events/ topic for a response, or have the device report state (the /state/ MQTT topic) and just ask IoT Core's admin SDK rather than the device.
Commands just went open beta, you should have access to it now. If you're using the gcloud SDK from command line, you'll need to do a gcloud components update and then gcloud beta iot devices --help will show the commands group. If you're using the console, when you drill down to a single device, you should now see "Send Command" next to "Update Configuration" on the top bar.
Old Answer: As a stab at answering, it sounds like rather than using the state topic, you could/should just use the standard /events/ topic and subscribe to the Pub/Sub topic the device events go into instead.
It really depends on the volume and number of devices we're talking about in terms of keeping that state machine in sync.
Without knowing what specifically you're implementing, I'd probably do something like this: send configs down, respond from the device on the /events/ topic, and have a Cloud Function that watches the Pub/Sub topic and updates something like a Firestore instance with the state of the devices, rather than using the /state/ topic; especially if you're doing something in direct response to the state reporting of the device. A sketch of such a function is below.
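A minimal sketch of that function, assuming a Node.js background Cloud Function triggered by the registry's Pub/Sub topic, a 'devices' Firestore collection (a placeholder name), and JSON payloads from the devices:

const admin = require('firebase-admin');
admin.initializeApp();

// Triggered by messages on the registry's telemetry Pub/Sub topic.
exports.trackDeviceState = (message, context) => {
    // IoT Core attaches the device ID as a message attribute;
    // the payload itself arrives base64-encoded.
    const deviceId = message.attributes.deviceId;
    const state = JSON.parse(Buffer.from(message.data, 'base64').toString());
    // Merge the latest reported values into a per-device document.
    return admin.firestore()
        .collection('devices')
        .doc(deviceId)
        .set(state, {merge: true});
};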
Send command to device
To send a command to a device, you will need to use the sendCommandToDevice API call.
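With the Node.js googleapis client that looks roughly like this (the project, region, registry, and device IDs are placeholders):

const {google} = require('googleapis');

async function sendCommand() {
    const auth = await google.auth.getClient({
        scopes: ['https://www.googleapis.com/auth/cloud-platform'],
    });
    const iot = google.cloudiot({version: 'v1', auth});

    const name = 'projects/my-project/locations/us-central1/' +
        'registries/my-registry/devices/my-device';

    // The command payload is sent as base64-encoded binaryData.
    const res = await iot.projects.locations.registries.devices.sendCommandToDevice({
        name,
        requestBody: {binaryData: Buffer.from('reboot').toString('base64')},
    });
    console.log('status:', res.status);
}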
Receive a command on the device
To receive commands on a device, subscribe to the /devices/<your-device-id>/commands/# topic.
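A sketch of the device side using mqtt.js and jsonwebtoken (project, region, registry, device, and key path are placeholders; IoT Core ignores the MQTT username and expects a JWT as the password):

const mqtt = require('mqtt');
const jwt = require('jsonwebtoken');
const fs = require('fs');

const projectId = 'my-project';
const deviceId = 'my-device';

// Devices authenticate with a short-lived JWT signed by their private
// key; the audience claim must be the GCP project ID.
const now = Math.floor(Date.now() / 1000);
const token = jwt.sign(
    {aud: projectId, iat: now, exp: now + 20 * 60},
    fs.readFileSync('rsa_private.pem'),
    {algorithm: 'RS256'});

const client = mqtt.connect('mqtts://mqtt.googleapis.com:8883', {
    clientId: `projects/${projectId}/locations/us-central1/registries/my-registry/devices/${deviceId}`,
    username: 'unused',  // ignored by IoT Core
    password: token,
});

client.on('connect', () => {
    // The # wildcard also catches commands sent to subfolders.
    client.subscribe(`/devices/${deviceId}/commands/#`, {qos: 1});
});

client.on('message', (topic, message) => {
    console.log(`command on ${topic}: ${message.toString()}`);
});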
Full examples will eventually be published to the Google Cloud IoT Core samples repos:
Java
NodeJS
Python
I am a little new to ActiveMQ, so please bear with me.
I am trying to take advantage of the ActiveMQ priority backup feature for some of my Java and CPP applications. I have two brokers on two different servers (local and remote), and I want the following behavior for my apps.
Always connect to local broker on startup
If local broker goes down, connect to remote
While connected to remote, if local comes back up, we then reconnect to local.
I have had success testing it in the Java apps by simply adding priorityBackup to my URI options, i.e.
failover:(tcp://local:61616,tcp://remote:61616)?randomize=false&priorityBackup=true
However, things aren't going as smoothly on the CPP side.
The following works fine in the CPP apps (with basic failover functionality working, i.e. jumping to remote when local goes down):
failover:(tcp://local:61616,tcp://remote:61616)?randomize=false
But updating the URI options with priorityBackup seems to break failover completely; my apps never fail over to the remote broker, they just sit in a broker-less limbo state when their local broker goes down:
failover:(tcp://local:61616,tcp://remote:61616)?randomize=false&priorityBackup=true
Is there anything I am missing here? Extra uri options that I should have included?
UPDATE: Transport connector info
<transportConnectors>
<transportConnector name="ClientOpenwire" uri="tcp://0.0.0.0:61616?wireFormat.maxInactivityDuration=7000"/>
<transportConnector name="Broker2BrokerOpenwire" uri="tcp://0.0.0.0:62627?wireFormat.maxInactivityDuration=5000"/>
<transportConnector name="stompConnector" uri="stomp://0.0.0.0:62623"/>
</transportConnectors>
The backup and priorityBackup parameters are handled in completely different ways in the Java and C++ implementations of the library.
The Java implementation works well, but unfortunately the C++ implementation is broken. There are no extra options that can fix this issue; serious changes to the library are required to resolve it.
I tested this using activemq-cpp-library-3.8.3 and brokers in various versions (5.10.0, 5.11.1). The issue is not fixed in the 3.8.4 release.