From what I have read in their documentation, only single-layer IoT hierarchies can be maintained with either vendor's gateway. That is, a field gateway (sort of like a smart router) sits between the server and the edge devices and does the preprocessing and edge computing.
What I am wondering is whether field gateways from either vendor (AWS or Azure) can be nested as parents and children to create multi-layered IoT device hierarchies. That is, gateways connected to gateways and so on.
EDIT - This kind of hierarchy would create fog networks, which would enable sub-networks within the hierarchy to function more independently without being over-reliant on the server. They would also reduce the load on the server, since the edge gateways could do edge computing, while reducing latency as well.
Currently, the edge device is only able to communicate with Azure IoT Hub directly. Nesting multiple Azure IoT Edge gateways is not supported.
However, there are plans to add this capability to the product. The feature will be considered in the future, but unfortunately there is no timeline yet.
What is the AWS recommendation for hosting multiple microservices? Can we host all of them on the same API Gateway/Lambda? With this approach, I see that when we want to update the API of one service, we end up deploying the APIs of all services, which means our regression testing must cover access to all of them. On the other hand, creating a separate API Gateway/Lambda per service leaves us with multiple resources (and multiple accounts) to manage — could that become an operational burden later on?
A microservice is developed autonomously and should be built, tested, deployed, scaled, etc. independently. There are many ways to do it, and many ways you could split your product into multiple services. One example pattern is having the API Gateway as the front door to all the services behind it, so the gateway itself would be its own service.
A Lambda usually performs a single task, and a service can be composed of multiple Lambdas. I don't see how multiple services could be executed from the same Lambda.
It can be a burden, especially if there is no proper tooling and no processes to manage all the systems in an automated and scalable way. There are pros and cons to any architecture, but the complexity is definitely reduced for serverless applications, since all the compute is managed by AWS.
AWS actually has a whitepaper that talks about microservices on AWS.
I'm new to AWS IoT and want to know how to get my things' connectivity status.
I've read about managing indexes and believe that this is what I'm looking for.
However, in my architecture I have an IoT Greengrass core, which is an edge device physically linked to the AWS cloud, and Greengrass devices that are connected to this edge device over Bluetooth and that I'm also creating in AWS IoT as IoT things. (An IoT thing for both the Greengrass core and the Greengrass devices.)
I believe that once the edge device is connected to AWS, its connectivity status in the AWS_Things index will be updated to "true". But what about the IoT devices that are not directly connected to AWS but go through the edge device? Are their connectivity states going to be updated too? How does it work?
Or should I make use of shadow attributes for the connectivity states of the IoT things that don't connect directly to the AWS IoT cloud platform?
So I would do this in one of two ways:
One is with attributes, probably the best and "proper" way if you only need the connected/disconnected status.
The other way is to have the Greengrass device query the BT-attached device locally in the background, perhaps with bt-device -l (from the bluez-tools apt package). Upon detecting that it is no longer attached, you could publish an alert on a separate topic. The benefit of this method is that you could also periodically query battery status or other properties and publish them under a device-specific topic, i.e.:
IoTDevice1/BTDevice1/running True
IoTDevice1/BTDevice1/battery 80%
etc.
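The polling approach above can be sketched roughly as follows. This is a minimal sketch, not a definitive implementation: the MAC-address format assumed for the `bt-device -l` output and the `IoTDevice1/BTDevice1/...` topic naming are taken from the examples above, and the actual publish step (via the Greengrass SDK or an MQTT client) is left to the caller.

```python
import json
import re
import subprocess

# bt-device -l lists each paired device with its MAC in parentheses,
# e.g. "My Sensor (00:1A:7D:DA:71:13)" -- this regex extracts that MAC.
MAC_RE = re.compile(r"\(([0-9A-Fa-f:]{17})\)")


def parse_bt_device_output(text):
    """Extract MAC addresses from `bt-device -l` style output."""
    return MAC_RE.findall(text)


def list_attached_devices():
    """Run `bt-device -l` (bluez-tools) and return the attached device MACs."""
    out = subprocess.run(["bt-device", "-l"], capture_output=True, text=True)
    return parse_bt_device_output(out.stdout)


def build_status_message(gateway_id, device_id, attached, battery=None):
    """Build the topic/payload pair for the per-device status topic."""
    topic = "{}/{}/running".format(gateway_id, device_id)
    payload = json.dumps({"running": attached, "battery": battery})
    return topic, payload
```

A background loop would then call `list_attached_devices()` periodically, compare against the previous snapshot, and publish `build_status_message(...)` on the alert topic whenever a device disappears.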
About getting the state of GG* devices, I suggest the following:
The physical GG devices communicate with the GG Core via local Lambda functions. Once the Core gets the state from a given device, a Lambda function (which runs on the Core) sets the status of the corresponding IoT thing in the AWS cloud platform.
*GG : Greengrass
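As a rough sketch of that Core-side Lambda, the update could be written against the device's thing shadow. The payload shape follows the standard shadow document format; the `connected` attribute name is an assumption for illustration, and on the Core you would typically obtain the client from the Greengrass SDK rather than boto3.

```python
import json


def connectivity_shadow_payload(connected):
    """Shadow document reporting the BT device's connectivity as seen by the Core."""
    return json.dumps({"state": {"reported": {"connected": bool(connected)}}})


def report_device_state(thing_name, connected, client=None):
    """Update the device's thing shadow from a Lambda running on the GG Core.

    In a Greengrass Lambda you would usually create the client with
    greengrasssdk.client('iot-data'); boto3's 'iot-data' client exposes the
    same update_thing_shadow call when talking to the cloud endpoint.
    """
    if client is None:
        import boto3  # deferred so the helper stays importable without AWS credentials
        client = boto3.client("iot-data")
    client.update_thing_shadow(
        thingName=thing_name,
        payload=connectivity_shadow_payload(connected),
    )
```

The Lambda handler would simply call `report_device_state("BTDevice1", attached)` whenever the Core's local view of the Bluetooth device changes.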
I have to create and configure a two-node WSO2 EI cluster. In particular, I have to cluster an ESB profile and an MB profile.
I have some architectural doubts about this:
CLUSTERING ESB PROFILE DOUBTS:
I based my assumptions on this documentation: https://docs.wso2.com/display/EI640/Clustering+the+ESB+Profile
I found this section:
Note that some production environments do not support multicast.
However, if your environment supports multicast, there are no issues
in using this as your membership scheme
What could be the reason for not supporting multicast? (So that I can ask about possible issues with it.) Looking at the table (inside the previous link), it seems to me that possible problems could be related to the following points:
All nodes should be in the same subnet
All nodes should be in the same multicast domain
Multicasting should not be blocked
Is obtaining this information from the system/network engineers enough to decide whether to proceed with the multicast option?
If I use multicast instead of WKA, would I need to do the same configuration steps listed in the first deployment scenario (the WKA-based one) related to "mounting the registry" and "creating/connecting to databases" (as shown in the first documentation link)?
Does using multicast instead of WKA allow me to avoid stopping the service when I add a new node to the cluster?
CLUSTERING MB PROFILE:
From what I understand, the MB profile cluster can use only WKA as its membership scheme.
Does using WKA mean that I have to stop the service when I add a new node to the cluster?
So, in the end, can we consider the ESB cluster and the MB cluster two different clusters? And is it the case that the ESB cluster (if configured with multicast) does not need the service to be stopped when a new node is added, while the MB cluster does?
Many virtual private cloud networks, including Google Cloud Platform, Microsoft Azure, Amazon Web Services, and the public Internet, do not support multicast. If you configure WSO2 products with multicast as the membership scheme on such a platform, clustering will not work as expected. That is the main reason for the warning in the official documentation.
You can consider the platform's capabilities and choose any of the following membership schemes when configuring Hazelcast clustering in WSO2 products:
WKA
Multicast
AWS
Kubernetes
Other than WKA, the membership schemes above do not require you to include all the members' IPs in the configuration, so newly introduced nodes can join the cluster with ease.
Even with the WKA membership scheme, if you have at least one known member active you can join a new member to the cluster, then apply the configuration change and restart the other servers one by one without any service interruption.
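For reference, the membership scheme is selected in each profile's axis2.xml clustering section. This is only a sketch: the hosts, ports, and domain name below are placeholders, and only the WKA scheme needs the static member list.

```xml
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
    <!-- "multicast", "wka", "aws" or "kubernetes" -->
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.ei.domain</parameter>
    <parameter name="localMemberHost">10.0.0.1</parameter>
    <parameter name="localMemberPort">4000</parameter>
    <!-- Only required for WKA: the well-known members of the cluster -->
    <members>
        <member>
            <hostName>10.0.0.2</hostName>
            <port>4000</port>
        </member>
    </members>
</clustering>
```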
Please note that with all of the above membership schemes, the rest of the product-specific configuration is still needed to successfully complete the cluster.
Regarding your concern about clustering the MB profile: you can use any of the above-mentioned membership schemes that matches your deployment environment.
Regarding adding new members with WKA: you can maintain service availability and apply the changes to the servers one by one. You only need at least one WKA member running to introduce a new member to the cluster.
The WSO2 MB profile introduces cluster coordination through an RDBMS. With this new feature, cluster coordination is not handled by the Hazelcast engine by default. When cluster coordination through an RDBMS is disabled, the Hazelcast engine manages cluster coordination instead.
Please note that when RDBMS coordination is used, no server restarts are required.
I hope this was helpful.
This is not a duplicate question. I am just confused about IaaS and SaaS with respect to AWS services like DynamoDB, RDS, Redshift, Kinesis, etc. They help users create databases, so should we categorize them as IaaS or SaaS?
Thanks
To help you understand: SaaS is Software as a Service. It's more like an on-demand application where you don't have to worry about configuration, access, whitelisting, etc. For instance, Google Maps (or Google Apps).
IaaS, or Infrastructure as a Service, gives you more flexibility in terms of spawning nodes and clusters, dealing with security at the IP and port level, managing access control and authentication, etc. On AWS, you may specify which private or public IPs will have access to your system, whether you prefer dense-storage or dense-compute nodes for your warehouse, how to rotate your log files, and so on.
A page on Amazon RDS reads -
When you buy a server, you get CPU, memory, storage, and IOPS, all
bundled together. With Amazon RDS, these are split apart so that you
can scale them independently.
So, in short: offerings like these from AWS and Azure are now mostly either IaaS or PaaS.
According to https://docs.wso2.com/display/IoTS310/Analyzing+Data I should be able to do some Machine Learning tasks in IoT Server but the menu, usually available in WSO2 DAS, is missing, as is the Machine Learner features in "Configure->Features->Installed features" or "Configure->Features->Available features".
What can I do?
Should I use an external DAS, as described here https://docs.wso2.com/display/IoTS310/Configuring+WSO2+IoT+Server+with+WSO2+Data+Analytics+Server?
It depends on the load of events and the number of IoT devices you are dealing with. If the load is not significant, you can install the WSO2 DAS features in the IoT Server node and operate from there.
Going forward, it will be difficult to scale up: if the IoT event throughput is high, you will need multiple nodes and clustering. In that case you can simply set up another DAS node as described in the documentation, publish events from the IoT Server to it, and leave the analytics part there. When scaling up, you can have separate clusters for IoT and for analytics, depending on the load.