AWS Greengrass V2 Modbus RTU adapter connectivity issue - amazon-web-services

I have deployed the Greengrass Modbus component to my Raspberry Pi 4B with the required dependencies successfully. The Pi is connected to a Modbus temperature and humidity sensor via an RS485 USB dongle. I have confirmed that I can read the holding registers using the Modpoll command-line master simulator tool, so all is well so far. The sensor is accessible through the path "/dev/ttyUSB0".
I want Greengrass to report the values of these sensor registers to AWS IoT Core.
To request data, I send a payload from the AWS IoT broker to a specific topic on the Greengrass core. In my case it is the default topic "modbus/adapter/request". I sent the following payload:
{
  "request": {
    "operation": "ReadHoldingRegistersRequest",
    "device": 4,
    "address": 1,
    "count": 2
  },
  "id": "TestRequest"
}
And I get the following response on the topic "modbus/adapter/response":
{
  "response": {
    "status": "fail",
    "error_message": "Modbus Error: [Connection] Failed to connect[rtu baud[9600]]",
    "error": "Exception",
    "payload": {
      "error": "Modbus Error: [Connection] Failed to connect[rtu baud[9600]]"
    },
    "operation": "ReadHoldingRegistersRequest",
    "device": 4
  },
  "id": "TestRequest"
}
To reiterate, I know the port is correct because I can read the holding registers with the command-line Modpoll tool. However, with that tool I can specify more parameters, such as parity type, baud rate, etc., which I don't seem to be able to do here. For reference, the parameters I used with Modpoll are the following:
Address: 4
Baud rate: 9600
Parity: none
Registers: 1 (or 40001 for PLC)
Path: /dev/ttyUSB0
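As a cross-check outside Modpoll, a Python sketch with pymodbus can exercise the same serial settings (the pymodbus 3.x API is assumed here; also note that Modpoll's "register 1" / 40001 in PLC numbering usually corresponds to protocol address 0):

```python
# Serial parameters mirroring the Modpoll run above.
SERIAL_PARAMS = {
    "port": "/dev/ttyUSB0",
    "baudrate": 9600,
    "parity": "N",
    "stopbits": 1,
    "bytesize": 8,
    "timeout": 1,
}

def read_sensor(slave=4, address=0, count=2, params=SERIAL_PARAMS):
    # Modpoll's register 1 (40001 in PLC numbering) is usually
    # protocol address 0, hence address=0 here.
    from pymodbus.client import ModbusSerialClient  # pip install pymodbus
    client = ModbusSerialClient(**params)
    if not client.connect():
        raise ConnectionError("cannot open " + params["port"])
    try:
        result = client.read_holding_registers(address=address, count=count, slave=slave)
        if result.isError():
            raise IOError(str(result))
        return result.registers
    finally:
        client.close()
```

If this reads the registers but Greengrass still fails, the problem is on the Greengrass side (configuration or permissions), not the wiring.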
For your information, all of the steps above follow this guide: https://docs.aws.amazon.com/greengrass/v2/developerguide/modbus-rtu-protocol-adapter-component.html#modbus-rtu-protocol-adapter-component-input-data
Any suggestions?

Upon looking at the log /greengrass/v2/logs/aws.greengrass.Modbus.log, I quickly realized that it was a permissions issue on the Pi. I was able to fix it quickly by running sudo chmod 777 on the port.

Related

Difference between AWS IoT shadow file desired and reported fields

I'm looking at the AWS IoT documentation for shadow states and trying to better understand the use of desired and reported in the shadow file.
The documentation states:
When the shadow's state changes, AWS IoT sends /delta messages to all MQTT subscribers with the difference between the desired and the reported states.
After looking through the rest of the documentation I don't feel like I have a clear grasp of the use case for desired vs reported. Can someone explain the use case? When do we use one vs. the other?
Let's start from the beginning. A device shadow is a persistent virtual shadow of a Thing defined in the AWS IoT Registry. Basically, it's a JSON state document that is used to store and retrieve current state information for a Thing. You can interact with a device shadow using MQTT topics or REST API calls. The main advantage of shadows is that you can interact with them regardless of whether the thing is connected to the Internet or not.
A shadow’s document contains a state property that describes aspects of the device’s state:
{
  "state": {
    "desired": {
      "color": "RED"
    },
    "reported": {
      "color": "GREEN"
    },
    "delta": {
      "color": "RED"
    }
  }
}
Here's a description of each state:
Apps specify the desired states of device properties by updating the desired object.
Devices report their current state in the reported object.
AWS IoT reports differences between the desired and the reported state in the delta object.
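Conceptually, the delta is the set of desired fields whose values differ from (or are missing in) the reported state. A minimal flat-document sketch of that idea (AWS's real computation also recurses into nested objects):

```python
def compute_delta(desired, reported):
    # Keep every desired field whose value differs from the reported one
    # (or that the device has not reported at all).
    return {key: value for key, value in desired.items()
            if reported.get(key) != value}

# With the shadow document above:
compute_delta({"color": "RED"}, {"color": "GREEN"})  # -> {"color": "RED"}
compute_delta({"color": "RED"}, {"color": "RED"})    # -> {} : nothing to do
```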
Every shadow has a reserved MQTT Topic and HTTP URL that supports the get, update, and delete actions on the shadow. Let's take a look:
$aws/things/THING_NAME/shadow/update: publish to this topic to update or create the thing shadow;
$aws/things/THING_NAME/shadow/update/accepted: AWS IoT publishes the reported or desired portion of the state document to this topic when it accepts the update request;
$aws/things/THING_NAME/shadow/update/rejected: AWS IoT publishes an error message to this topic when it rejects an update request;
$aws/things/THING_NAME/shadow/update/documents: AWS IoT publishes a state document with previous and current state information to this topic whenever an update to the shadow is successfully performed;
$aws/things/THING_NAME/shadow/update/delta: AWS IoT publishes a delta state document to this topic when it accepts a change for the thing shadow and the resulting state document contains different values for the desired and reported states.
Here's an example. Let's say that we have an air purifier and we want to change the fan speed. The flow will be the following:
User changes the fan speed from the air purifier mobile application
The mobile application publishes the following JSON message to this MQTT topic: $aws/things/THING_NAME/shadow/update to update the device shadow with a new desired state: "fanSpeed": 50. It will look like this:
{
  "state": {
    "desired": {
      "fanSpeed": 50
    }
  }
}
On a successful shadow update, if the previous reported state differs from "fanSpeed": 50, AWS IoT will publish the desired state to the delta topic $aws/things/THING_NAME/shadow/update/delta.
The shadow state document may look like this:
{
  "state": {
    "desired": {
      "fanSpeed": 50
    },
    "reported": {
      "fanSpeed": 100
    },
    "delta": {
      "fanSpeed": 50
    }
  }
}
The device (our air purifier), which is subscribed to the delta topic, will perform the requested operation (set the fan speed to 50 in this case) and report the new state back to the AWS IoT Device Shadow, using the update topic $aws/things/THING_NAME/shadow/update with the following JSON message:
{
  "state": {
    "reported": {
      "fanSpeed": 50
    }
  }
}
Now our air purifier has a fan speed of 50... and that's how it works ;)
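On the device side, the flow above boils down to: receive a message on the delta topic, apply it, and publish the applied fields back as reported. A transport-agnostic sketch (the set_fan_speed callback is hypothetical; wire the returned JSON to your MQTT client's publish):

```python
import json

def handle_delta(payload, set_fan_speed):
    # 'payload' is the message received on
    # $aws/things/THING_NAME/shadow/update/delta; its "state" object
    # contains only the fields that need to change.
    delta = json.loads(payload)["state"]
    if "fanSpeed" in delta:
        set_fan_speed(delta["fanSpeed"])
    # Publish this to $aws/things/THING_NAME/shadow/update so the
    # reported state catches up and the shadow's delta clears.
    return json.dumps({"state": {"reported": delta}})
```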

AWS IoT thing connectivity status not reliable

I need to get the status of IoT devices reliably.
Right now, I have a Lambda connected to the SELECT * FROM '$aws/events/presence/#' lifecycle events on IoT.
But I can't get a reliable device status in the case where a connected device is disconnected and reconnected within ~40 seconds. In this scenario I receive events in the following order:
1. Connected - shortly after device was connected again
2. Disconnected - after ~ 40 seconds.
It looks like the disconnected message is not discarded when the device reconnects; it is emitted after the connection timeout in any case.
I've found a workaround: request the device connectivity from the AWS_Things IoT index. I still receive the previous connectivity state, but it has a timestamp field. I then compare the current event.timestamp with the timestamp from the index, and if the difference is greater than 30 seconds I silently discard the disconnected event. But this approach is not reliable either, because I can still get wrong behavior when toggling the device faster, with a 5-second interval. This is not acceptable for my project.
Is it possible to use IoT events to solve my problem? I would rather not resort to polling the devices index.
You can also use an SQS delay queue and check after 5 seconds whether the disconnect is real. That is much cheaper than using Step Functions. This is also the officially recommended approach:
Handling client disconnections
The best practice is to always have a wait state implemented for lifecycle events, including Last Will and Testament (LWT) messages. When a disconnect message is received, your code should wait a period of time and verify a device is still offline before taking action. One way to do this is by using SQS Delay Queues. When a client receives a LWT or a lifecycle event, you can enqueue a message (for example, for 5 seconds). When that message becomes available and is processed (by Lambda or another service), you can first check if the device is still offline before taking further action.
https://docs.aws.amazon.com/iot/latest/developerguide/life-cycle-events.html#connect-disconnect
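A sketch of that pattern with boto3 (the queue URL and the exact shape of the fleet-indexing connectivity record are assumptions):

```python
import json

def enqueue_disconnect_check(sqs, queue_url, lifecycle_event, delay_seconds=5):
    # Park the lifecycle event in an SQS delay queue; the consumer
    # (e.g. a Lambda) will only see the message after delay_seconds.
    return sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps(lifecycle_event),
        DelaySeconds=delay_seconds,
    )

def still_offline(connectivity):
    # 'connectivity' is the record fleet indexing returns for a thing,
    # e.g. {"connected": False, "timestamp": 1628000000000}; only act
    # on the disconnect if the device has not reconnected meanwhile.
    return not connectivity.get("connected", False)
```

When the delayed message arrives, the consumer re-queries the index and calls still_offline before treating the device as gone.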
Well, at the moment I just use a Step Function, connected to the SELECT * FROM '$aws/events/presence/#' event, that checks the actual thing state after a delay:
{
  "StartAt": "ChoiceEvent",
  "States": {
    "ChoiceEvent": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.eventType",
          "StringEquals": "disconnected",
          "Next": "WaitDelay"
        }
      ],
      "Default": "CheckStatus"
    },
    "WaitDelay": {
      "Type": "Wait",
      "Seconds": 30,
      "Next": "CheckStatus"
    },
    "CheckStatus": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:xxxxxxx:function:connectivity-check",
      "End": true
    }
  }
}
The connectivity-check Lambda simply checks the actual thing state in the IoT registry when eventType is disconnected.
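A possible shape for that connectivity-check Lambda, assuming fleet indexing with connectivity is enabled; the query string and response shape follow the AWS_Things index, but treat the details as assumptions:

```python
def is_connected(search_response):
    # Parse an iot.search_index response: a 'things' list where each
    # entry may carry a 'connectivity' record.
    things = search_response.get("things", [])
    if not things:
        return False
    return things[0].get("connectivity", {}).get("connected", False)

def lambda_handler(event, context):
    # Hypothetical handler: re-check the thing only for disconnect events.
    if event.get("eventType") != "disconnected":
        return {"connected": True}
    import boto3  # resolved at runtime inside Lambda
    iot = boto3.client("iot")
    resp = iot.search_index(queryString="thingName:" + event["clientId"])
    return {"connected": is_connected(resp)}
```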

GCP Stackdriver for on-prem

Based on Stackdriver, I want to send notifications to my Centreon monitoring (based on Nagios) for workflow reasons. Do you have any idea how to do so?
Thank you
Stackdriver alerting allows webhook notifications, so you can run a server to forward the notifications anywhere you need to (including Centreon), and point the Stackdriver alerting notification channel to that server.
There are two ways to send external information in the Centreon queue without a traditional passive agent mode.
First, you can use the Centreon DSM (Dynamic Services Management) addon.
It is interesting because you don't have to register a dedicated, already-known service in your configuration to match the notification.
With Centreon DSM, Centreon can receive events such as SNMP traps resulting from the detection of a problem and assign the event dynamically to a slot defined in Centreon, like a tray event.
A resource has a set number of "slots" to which alerts are assigned (stored). Until the event has been acknowledged by a human, it remains visible in the Centreon web frontend. When the event is acknowledged, the slot becomes available for new events.
The event must be transmitted to the server via an SNMP Trap.
All the configuration is made through Centreon web interface after the module installation.
Complete explanations, screenshots, and tips are described on the online documentation: https://documentation.centreon.com/docs/centreon-dsm/en/latest/user.html
Secondly, the Centreon developers added a Centreon REST API you can use to submit information to the monitoring engine.
This feature is easier to use than the SNMP trap route.
In that case, you have to create both host and service objects before any API use.
To send a status, use the following URL with the POST method:
api.domain.tld/centreon/api/index.php?action=submit&object=centreon_submit_results
Headers:
  Content-Type: application/json
  centreon-auth-token: the value of authToken you got in the authentication response
Example of a service body submit: the body is JSON with the parameters described above, formatted as below:
{
  "results": [
    {
      "updatetime": "1528884076",
      "host": "Centreon-Central",
      "service": "Memory",
      "status": "2",
      "output": "The service is in CRITICAL state",
      "perfdata": "perf=20"
    },
    {
      "updatetime": "1528884076",
      "host": "Centreon-Central",
      "service": "fake-service",
      "status": "1",
      "output": "The service is in WARNING state",
      "perfdata": "perf=10"
    }
  ]
}
Example of a response body: the response is JSON with an HTTP return code and a message for each submit:
{
  "results": [
    {
      "code": 202,
      "message": "The status send to the engine"
    },
    {
      "code": 404,
      "message": "The service is not present."
    }
  ]
}
More information is available in the online documentation: https://documentation.centreon.com/docs/centreon/en/19.04/api/api_rest/index.html
The Centreon REST API also allows you to get real-time status for hosts and services, and to do object configuration.
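Putting the two calls together in Python with the requests library — authenticate, then submit — treating the endpoint paths and field names above as given by the Centreon docs (the host name and credentials are placeholders, and the authentication call shape is an assumption):

```python
BASE = "https://api.domain.tld/centreon/api/index.php"  # placeholder host

def service_result(host, service, status, output, perfdata="", updatetime=None):
    # Build one entry of the "results" array in the submit body.
    import time
    return {
        "updatetime": str(updatetime or int(time.time())),
        "host": host,
        "service": service,
        "status": str(status),
        "output": output,
        "perfdata": perfdata,
    }

def submit_status(username, password, results):
    import requests  # pip install requests
    # 1) Authenticate to obtain the token used in centreon-auth-token.
    auth = requests.post(
        BASE, params={"action": "authenticate"},
        data={"username": username, "password": password},
    )
    token = auth.json()["authToken"]
    # 2) Submit the results payload shown above.
    resp = requests.post(
        BASE,
        params={"action": "submit", "object": "centreon_submit_results"},
        headers={"centreon-auth-token": token},
        json={"results": results},
    )
    return resp.json()
```

A Stackdriver webhook receiver could call submit_status with one service_result per incident.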

AWS EFS - Script to create mount target after creating the file system

I am writing a script that will create an EFS file system with a name from input. I am using the AWS SDK for PHP Version 3.
I am able to create the file system using the createFileSystem command. This new file system is not usable until a mount target has been created for it. If I run the createMountTarget command right after createFileSystem, I receive an error that the file system's life cycle state is not 'available'.
I have tried using createFileSystemAsync to create a promise and calling the wait function on that promise to force the script to run synchronously. However, the promise is always fulfilled while the file system is still in 'creating' life cycle state.
Is there a way to force the script to wait for the file system to be in the available state using the AWS SDK?
One way is to check the status of the file system using the DescribeFileSystems API. In the response, look at LifeCycleState; if it is available, fire the CreateMountTarget API. You can keep checking DescribeFileSystems in a loop with a few seconds' delay until the LifeCycleState is available.
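That loop can be written transport-agnostically. Here is a Python sketch for illustration (the question uses the PHP SDK, so treat this as the pattern, not the exact API): get_state is any callable returning the current LifeCycleState, e.g. wrapping boto3's describe_file_systems.

```python
import time

def wait_for_available(get_state, delay=5, max_attempts=60):
    # Poll until the file system reports 'available'; bail out early if
    # it is being deleted, and give up after max_attempts polls.
    for _ in range(max_attempts):
        state = get_state()
        if state == "available":
            return True
        if state in ("deleting", "deleted"):
            raise RuntimeError("file system entered state " + state)
        time.sleep(delay)
    return False
```

With boto3, get_state could be: lambda: efs.describe_file_systems(FileSystemId=fs_id)["FileSystems"][0]["LifeCycleState"].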
It looks like you want a waiter for FileSystemAvailable, but the elasticfilesystem files don't specify one. I'd file an issue on GitHub asking for one. You'd need to wait for DescribeFileSystems to have a LifeCycleState of available.
In the meantime, you can probably write your own with something like the following, per the waiters guide.
{
  "version": 2,
  "FileSystemAvailable": {
    "delay": 15,
    "operation": "DescribeFileSystems",
    "maxAttempts": 40,
    "acceptors": [
      {
        "expected": "available",
        "matcher": "pathAll",
        "state": "success",
        "argument": "FileSystems[].LifeCycleState"
      },
      {
        "expected": "deleted",
        "matcher": "pathAny",
        "state": "failure",
        "argument": "FileSystems[].LifeCycleState"
      },
      {
        "expected": "deleting",
        "matcher": "pathAny",
        "state": "failure",
        "argument": "FileSystems[].LifeCycleState"
      }
    ]
  }
}
Promises in the AWS SDK for PHP are used for making the HTTP request concurrently. This doesn't help in this case because the behavior of the API call is to start an asynchronous task in EFS.

AWS IoT to send a once-off real-time command to a device

I have a weather station which is publishing to AWS IoT.
It reports its state as well as measurements of the environment by publishing messages of the following format to the shadow service:
{
  "state": {
    "reported": {
      "temperature": 22,
      "humidity": 70,
      ....
      "wind": 234,
      "air": 345
    }
  }
}
The station has some interactive properties like _led1 and _led2, which I can also report and update via the shadow service by setting the "desired" state. To do that, I can send the device a message like this:
{
  "state": {
    "desired": {
      "_led1": "on",
      "_led2": "off",
      ....
      "_lock99": "open"
    }
  }
}
Thanks to the shadow service, whenever the device comes online it receives the synchronized state and turns the LEDs and locks into the desired position.
However, sometimes I want to operate the device in real time. When troubleshooting a device, I want to send a real-time command to reboot it; if the device is live and receives the message, it reboots. If the device was offline, nothing happens (the reboot command never reaches the device).
So what would be the best way to control the device in real time? Should I still try to use the shadow service for that, or simply create a separate topic, e.g. my-things/{thing_name}/real-time-commands, and have the device subscribe to it?
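For the separate-topic approach, here is a minimal device-side sketch with paho-mqtt. The topic layout, command names, and reboot hook are assumptions; note that QoS 1 only helps while the device is connected, so commands sent while it is offline are simply lost, which matches the fire-and-forget behavior described above.

```python
import json

def dispatch(payload, handlers):
    # Route messages like {"command": "reboot"} to a handler table;
    # unknown commands are ignored.
    message = json.loads(payload)
    handler = handlers.get(message.get("command"))
    if handler:
        handler(message)
        return True
    return False

def run(thing_name, handlers, endpoint, ca, cert, key):
    # Hypothetical wiring with paho-mqtt against the AWS IoT endpoint.
    import paho.mqtt.client as mqtt  # pip install paho-mqtt
    client = mqtt.Client()
    client.tls_set(ca_certs=ca, certfile=cert, keyfile=key)
    client.on_message = lambda c, userdata, msg: dispatch(msg.payload, handlers)
    client.connect(endpoint, 8883)
    client.subscribe("my-things/" + thing_name + "/real-time-commands", qos=1)
    client.loop_forever()
```

A handler table might look like {"reboot": lambda msg: os.system("reboot")} on the device.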