According to https://docs.wso2.com/display/IoTS310/Analyzing+Data I should be able to do some Machine Learning tasks in IoT Server but the menu, usually available in WSO2 DAS, is missing, as is the Machine Learner features in "Configure->Features->Installed features" or "Configure->Features->Available features".
What can I do?
Should I use an external DAS, as described here https://docs.wso2.com/display/IoTS310/Configuring+WSO2+IoT+Server+with+WSO2+Data+Analytics+Server?
It depends on the event load and the number of IoT devices you are dealing with. If the load is not significant, you can install the WSO2 DAS features in the IoT Server node and operate from there.
Going forward, scaling up will be difficult if the IoT event throughput is high, because then you need multiple nodes and clustering. Therefore you can simply set up a separate DAS node as described in the documentation, publish events from the IoT Server to it, and leave the analytics part there. When scaling up, you can then run separate clusters for IoT and for analytics depending on the load.
Is there a way to integrate on-premise IBM MQ with AWS SQS/API Gateway? I checked lots of links but found only that we can migrate the whole of IBM MQ to AWS MQ, not call from AWS to the on-premise MQ. Please suggest if anyone has tried this kind of integration.
I’m assuming you have an AWS based application that integrates with SQS and an on-premise application that integrates with IBM MQ, and ultimately you want to communicate effectively between the two applications.
At a functional level IBM MQ provides a client interface, and a bridge between this and the AWS SQS interface is relatively straightforward to create. The non-functional aspects are an important consideration. The IBM MQ client can communicate either directly back to the on-premise MQ instance, or via an AWS MQ instance. Although it may appear more straightforward to communicate directly with the on-premise MQ instance, there are a few considerations that may make an MQ instance in AWS the more sensible approach.
Applications often use IBM MQ for its assured delivery capabilities; by building a bridge to AWS SQS, which is a non-assured delivery provider, there is a risk that messages can be lost or duplicated (depending on the implementation of the bridging logic). To minimize the chance of this occurring you want to ensure that you have a reliable network between MQ, the bridge, and the SQS instance. This removes any fragile network links, as MQ can transfer the message reliably from on-premise to an MQ instance deployed in AWS, overcoming any network issues transparently.
The MQ Client is relatively chatty compared to two MQ instances exchanging messages. Due to the network latency between the on-premise and AWS data center, the chatty nature of the MQ Client can impact the overall performance of the solution.
Therefore, it is often sensible to install a lightweight instance of MQ within your AWS availability zone and allow MQ to transfer the messages from on-premise to AWS efficiently and reliably. To help get you up and running quickly, you can grab the IBM MQ developer container for free on DockerHub here.
I created an SQS adapter on the on-premise server and called my SQS directly from there.
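The bridging logic discussed above can be sketched as a small relay loop. This is a minimal sketch, not a production bridge: the transport clients are injected as plain callables so the relay logic stays testable. In a real deployment, `receive_fn` would wrap an IBM MQ client call (for example via the pymqi library) and `send_fn` would wrap boto3's `sqs.send_message`; those integrations are assumptions, not shown here.

```python
def relay_messages(receive_fn, send_fn, max_messages=None):
    """Relay messages from MQ to SQS until receive_fn returns None.

    Returns the number of messages forwarded. Note: without a
    transactional get/put this is at-least-once delivery at best --
    a crash between receive and send can lose or duplicate a message,
    which is exactly the risk the answer above warns about.
    """
    forwarded = 0
    while max_messages is None or forwarded < max_messages:
        message = receive_fn()   # e.g. a queue.get() via pymqi
        if message is None:      # queue drained, stop polling
            break
        send_fn(message)         # e.g. sqs.send_message(MessageBody=...)
        forwarded += 1
    return forwarded
```

Keeping the loop free of any SDK-specific code makes it easy to unit-test the bridge with in-memory queues before wiring in the real clients.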
I'm new to AWS IoT and want to know how to get my things connectivity status.
I've read about managing indexes and believe that this is what I'm looking for.
However, in my architecture I have an IoT Greengrass core, which is an edge device directly connected to the AWS cloud, and Greengrass devices that are connected to this edge device over Bluetooth and that I'm also creating in AWS IoT as IoT Things. (An IoT Thing for both the Greengrass core and the Greengrass devices.)
I believe that once the edge device is connected to AWS, its connectivity status in the AWS_Things index will be updated to "true". But what about the IoT devices that are not directly connected to AWS but go through the edge device? Are their connectivity states going to be updated too? How does it work?
Or should I make use of shadow attributes for the connectivity states of these IoT Things that don't connect directly to the AWS IoT cloud platform?
So I would do this in one of two ways:
One is with attributes, probably the best and "proper" way if you only need the connected/disconnected status.
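For the directly-connected core, the connectivity status mentioned in the question can be read back from the AWS_Things fleet index. A minimal sketch, assuming boto3 and that fleet indexing with thing-connectivity indexing is enabled; the client is passed in so the helper is testable, and in practice you would pass `boto3.client("iot")`:

```python
def connected_thing_names(iot_client, index_name="AWS_Things"):
    """Return the names of things the fleet index reports as connected.

    Uses the search_index API with a connectivity query; the
    "connectivity.connected" field is only populated for things that
    maintain their own MQTT connection to AWS IoT, which is why the
    Bluetooth-attached Greengrass devices need another mechanism.
    """
    response = iot_client.search_index(
        indexName=index_name,
        queryString="connectivity.connected:true",
    )
    return [thing["thingName"] for thing in response.get("things", [])]
```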
The other way is to have the Greengrass device query the BT-attached device in the background locally, perhaps with bt-device -l (from the bluez-tools apt package). Upon detecting that it is no longer attached, you could publish an alert on a separate topic. The benefit of this method is that you could potentially also query battery status or other properties periodically and publish them under a device-specific topic, i.e.:
IoTDevice1/BTDevice1/running True
IoTDevice1/BTDevice1/battery 80%
etc.
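The local polling idea above can be sketched as follows. This is a rough sketch: the `bt-device -l` output format assumed here ("Name (AA:BB:CC:DD:EE:FF)") may differ between bluez-tools versions, so the parser is kept separate from the subprocess call and should be checked against your system's actual output.

```python
import re
import subprocess

# MAC addresses printed by bt-device -l, e.g. "Sensor One (00:11:22:33:44:55)".
# The parenthesised-MAC format is an assumption about bluez-tools output.
MAC_RE = re.compile(r"\(([0-9A-Fa-f:]{17})\)")

def parse_bt_device_list(output):
    """Extract MAC addresses from `bt-device -l` output."""
    return MAC_RE.findall(output)

def attached_macs():
    """Return the MACs of currently known Bluetooth devices."""
    result = subprocess.run(
        ["bt-device", "-l"], capture_output=True, text=True, check=True
    )
    return parse_bt_device_list(result.stdout)
```

A background loop could then compare `attached_macs()` against the expected set each poll interval and publish on a topic like `IoTDevice1/BTDevice1/running` when a device disappears.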
About getting the state of GG* devices, I suggest the following:
The physical GG devices communicate with the GG Core via local Lambda functions. Once the Core gets the state from a given device, a Lambda function (which runs on the Core) sets the status of the corresponding IoT Thing in the AWS cloud platform.
*GG : Greengrass
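The Lambda-on-the-Core idea above can be sketched as a shadow update: when the Core learns a Bluetooth device's state, it reports it through that device's thing shadow. The payload shape follows the standard shadow document; the client would be `boto3.client("iot-data")` from the cloud side or the greengrasssdk iot-data client on the Core. Thing and field names here are illustrative assumptions.

```python
import json

def connectivity_shadow_payload(connected):
    """Build a shadow update document reporting the device's connectivity."""
    return json.dumps({"state": {"reported": {"connected": bool(connected)}}})

def report_device_state(iot_data_client, thing_name, connected):
    """Push the connectivity state into the thing's shadow.

    iot_data_client is expected to expose update_thing_shadow, as the
    boto3 / greengrasssdk iot-data clients do.
    """
    iot_data_client.update_thing_shadow(
        thingName=thing_name,
        payload=connectivity_shadow_payload(connected),
    )
```

Consumers can then read the device's state from the shadow instead of the fleet index, which only tracks things with their own direct connection.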
I have to create and configure a two node WSO2 EI cluster. In particular I have to cluster an ESB profile and MB profile.
I have some architectural doubts about this:
CLUSTERING ESB PROFILE DOUBTS:
I based my assumptions on this documentation: https://docs.wso2.com/display/EI640/Clustering+the+ESB+Profile
I found this section:
Note that some production environments do not support multicast.
However, if your environment supports multicast, there are no issues
in using this as your membership scheme
What could be the reason for not supporting multicast? (So I can ask about possible issues with it.) Looking at the table (inside the previous link), it seems that possible problems could be related to the following points:
All nodes should be in the same subnet
All nodes should be in the same multicast domain
Multicasting should not be blocked
Is obtaining this information from system/network engineers enough to decide whether to proceed with the multicast option?
Using multicast instead of WKA, would I need to do the same configuration steps listed in the first deployment scenario (the WKA-based one) related to "mounting the registry" and "creating/connecting to databases" (as shown in the first documentation link)?
Does using Multicast instead of WKA allow me to not stop the service when I add a new node to the cluster?
CLUSTERING MB PROFILE:
From what I understand, the MB profile cluster can use only WKA as its membership scheme.
Does using WKA mean that I have to stop the service when I add a new node to the cluster?
So, in the end, can we consider the ESB cluster and the MB cluster two different clusters? Would the ESB cluster (if configured using multicast) not require stopping the service when a new node is added, while the MB cluster would have to be stopped to add a new one?
Many virtual private cloud networks, including Google Cloud Platform, Microsoft Azure, Amazon Web Services, and the public Internet, do not support multicast. If you configure WSO2 products with multicast as the membership scheme on such a platform, the cluster will not work as expected. That is the main reason for the warning in the official documentation.
You can consider the platform's capabilities and choose any of the following membership schemes when configuring Hazelcast clustering in WSO2 products:
WKA
Multicast
AWS
Kubernetes
Other than WKA, the membership schemes above do not require you to include all the members' IPs in the configuration, so newly introduced nodes can join the cluster with ease.
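For reference, the membership scheme is set in the `<clustering>` section of `repository/conf/axis2.xml`. The fragment below is a sketch for the WKA case; hostnames, ports, and the domain name are placeholders, and the exact parameters should be checked against your product version's clustering documentation. With `multicast`, the `<members>` list is not needed and multicast address/port parameters are used instead.

```xml
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
    <!-- "wka" needs the well-known members listed below; "multicast" does not. -->
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.ei.domain</parameter>
    <parameter name="localMemberHost">10.0.0.1</parameter>
    <parameter name="localMemberPort">4000</parameter>
    <members>
        <member>
            <hostName>10.0.0.2</hostName>
            <port>4000</port>
        </member>
    </members>
</clustering>
```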
Even with the WKA membership scheme, if you have at least one known member active, you can join a new member to the cluster, then apply the configuration change and restart the other servers without any service interruption.
Please note that with all the above membership schemes, the rest of the product-specific configuration is still needed to successfully complete the cluster.
Regarding your concern about Clustering the MB Profile,
You can use any of the above-mentioned membership schemes that matches your deployment environment.
Regarding adding new members with WKA: you can maintain service availability and apply the changes to the servers one by one. You only need at least one WKA member running to introduce a new member to the cluster.
The WSO2 MB profile introduces cluster coordination through an RDBMS. With this new feature, cluster coordination is by default not handled by the Hazelcast engine. When coordination through an RDBMS is disabled, the Hazelcast engine manages cluster coordination instead.
Please note that when RDBMS coordination is used, no server restarts are required.
I hope this was helpful.
I have an Active-Active Deployment of WSO2 API Manager. I don't know if I should enable Hazelcast Clustering, because:
A) On the one hand, the official documentation link that I followed to deploy doesn't mention Hazelcast.
B) On the other hand, this link in the official documentation says that backend throttling limits will not be shared across the cluster when Hazelcast clustering is disabled (and of course I want backend throttling limits to be shared across the cluster!). But that link is under the section "Distributed Deployment", and I don't have a distributed deployment. As I said, I have an active-active deployment, so I don't know whether I should follow that link and install Hazelcast.
If you need backend throttling, then you have to enable clustering on the nodes. Although it is mentioned under the distributed deployment section, an active-active deployment also needs clustering if you require backend service throttling.
The idea here is that the two nodes serve requests while they are in a cluster, with backend service throttling enabled.
if I should follow that link and install Hazelcast
You don't need to install anything; just enable clustering and set up the IP addresses if the WKA membership scheme is used (please note that many cloud providers and native Docker don't support multicast).
The Hazelcast cluster is used to broadcast token-invalidation messages and throttling limits. You don't need to enable the cluster at all, but then you may miss those messages between nodes.
From what I have read and understood from their documentation, only single-layer IoT hierarchies can be maintained with either vendor's gateway. That is, a field gateway (sort of like a smart router) sits between the server and the edge devices and does the preprocessing and edge computing.
What I am wondering is whether the field gateways provided by either vendor (AWS or Azure) can be nested as parents and children to create multi-layered IoT device hierarchies. That is, gateways connected to gateways and so on.
EDIT - Such hierarchies would create fog networks, which would enable sub-networks within the hierarchy to function more independently without being over-reliant on the server. They would also reduce the load on the server, since the edge gateways could do edge computing, while reducing latency as well.
Currently, an edge device is only able to communicate with Azure IoT Hub directly; nesting multiple Azure IoT Edge gateways is not supported.
However, this capability is planned for the product. The feature will be considered in the future, but unfortunately there is no timeline yet.