I have an Active-Active Deployment of WSO2 API Manager. I don't know if I should enable Hazelcast Clustering, because:
A) On one hand, Hazelcast does not appear in the official documentation page that I followed for the deployment.
B) On the other hand, this official documentation page says that backend throttling limits will not be shared across the cluster when Hazelcast clustering is disabled (and I certainly want backend throttling limits to be shared across the cluster!). But that page is under the section "Distributed Deployment", and I don't have a "Distributed Deployment". As I said, I have an "Active-Active Deployment", so I don't know whether I should follow that page and install Hazelcast.
If you need backend throttling, then you have to enable clustering on the nodes. Although it is mentioned under the distributed deployment section, an Active-Active deployment also needs clustering if you require backend service throttling.
The idea here is that the two nodes serve requests while they are in a cluster, with backend service throttling enabled.
if I should follow that link and install Hazelcast
You don't need to install anything; just enable clustering and set up the member IP addresses if the WKA membership scheme is used (please note that many cloud providers and native Docker networks don't support multicast).
The Hazelcast cluster is used to broadcast token invalidation messages and throttling limits. You don't need to enable clustering at all, but then you may miss those messages between nodes.
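For reference, this is roughly what enabling Hazelcast clustering with the WKA scheme looks like in <APIM_HOME>/repository/conf/axis2/axis2.xml. A minimal sketch with placeholder IPs and the usual port 4000; check it against the axis2.xml shipped with your API Manager version:

    <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
                enable="true">
        <parameter name="membershipScheme">wka</parameter>
        <parameter name="domain">wso2.am.domain</parameter>
        <!-- this node's own IP and Hazelcast port -->
        <parameter name="localMemberHost">10.0.0.1</parameter>
        <parameter name="localMemberPort">4000</parameter>
        <members>
            <!-- list every well-known member, including this node -->
            <member>
                <hostName>10.0.0.1</hostName>
                <port>4000</port>
            </member>
            <member>
                <hostName>10.0.0.2</hostName>
                <port>4000</port>
            </member>
        </members>
    </clustering>

With both nodes listing each other as well-known members, either one can be restarted without breaking cluster discovery.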
There is a project that was migrated from legacy infrastructure to GCP.
On GCP everything runs as microservices.
There are maybe around 40-50 microservices.
I would like to automate testing of these microservices, but there is no endpoint exposed in this project.
How could you automate a microservice where there are no endpoints?
What type of architecture could you use to test this?
DB: Firestore (NoSQL)
Thanks, M
In my view you can do it the following way:
Use a ClusterIP or NodePort Service to access those Pods.
Spin up a new Pod that accesses your target Pod to communicate with it.
You can enable Pod-to-Pod communication based on labels by defining a network policy.
You can use Calico as the network policy agent.
You can view the logs of your testing Pod using kubectl logs <pod-name>, through your cloud provider's logging service, or even through a log-collecting DaemonSet that you install.
The testing Pod can periodically send traffic: you can use a thread that calls the target service and then sleeps for a while, or you can use a Kubernetes CronJob to call the target service; which one to choose depends on your use case. A sketch of the CronJob option is shown below.
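As an illustration of the CronJob option, here is a minimal sketch. The Service name target-service, the default namespace, port 8080, and the /health path are all assumptions to replace with your own values (batch/v1 CronJob requires Kubernetes 1.21+; older clusters use batch/v1beta1):

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: service-probe
    spec:
      schedule: "*/5 * * * *"            # run every 5 minutes
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: Never
              containers:
              - name: probe
                image: curlimages/curl:8.5.0
                # call the target service via its cluster-internal DNS name
                args: ["-s", "http://target-service.default.svc.cluster.local:8080/health"]

The Job's Pod logs then serve as your test record, which you can read with kubectl logs as mentioned above.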
Let me know if this meets your requirements or if you would like me to elaborate further.
In terms of finding out how to test your microservices on the Google Cloud Platform, I would suggest referencing our documentation on "Microservices Architecture on Google App Engine", as it will explain and guide you through implementing your services on GCP. You may also look into this document as well, as it provides best practices for designing APIs to communicate between microservices.
Additionally, user "ARINDAM BANERJEE" has a great example you can follow as well.
Google Cloud Platform has made hybrid and multi-cloud computing a reality through Anthos, which is an open application modernization platform. How does Anthos work for distributed data platforms?
For example, I have my data in Teradata on-premise, AWS Redshift, and Snowflake on Azure. Can Anthos join all the datasets and allow users to query or perform reporting with low latency? What is the equivalent of GCP Anthos in AWS and Azure?
Your question is broad. Anthos is designed for managing and distributing containers across several Kubernetes clusters.
For a simpler view, imagine this: you have the Anthos master, and its direct nodes are Kubernetes masters. If you ask the Anthos master to deploy a pod on AWS, for example, the Anthos master forwards the request to the Kubernetes master deployed on EKS, and your pod is deployed on AWS.
Now, rethink your question: what about the data? Nothing magic here: if your data is spread across several clusters, you have to federate it with a system designed for that. It's quite similar to a single cluster with data on different nodes.
Anyway, you are pointing at the real next challenge of multi-cloud/hybrid deployment. Solutions will emerge from this empty space.
Finally, your last point: the Azure and AWS equivalents. There aren't any.
The newest Azure Arc seems light: it only allows you to manage VMs outside the Azure platform by installing an agent on them. Nothing as manageable as Anthos. For example: you have 3 VMs on GCP and you manage them with Azure Arc. You deploy NGINX on each and you want to set up a load balancer in front of your 3 VMs. I don't see how you can do this with Azure Arc. With Anthos, it's simply a Kubernetes service exposition; the load balancer will be deployed according to the underlying cloud platform's implementation, as sketched below.
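To make that last point concrete, exposing the NGINX Pods behind a cloud load balancer is just a Service manifest. A minimal sketch, assuming the Pods carry the label app: nginx:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-lb
    spec:
      type: LoadBalancer        # the hosting platform provisions its own load balancer
      selector:
        app: nginx              # assumed label on the NGINX Pods
      ports:
      - port: 80
        targetPort: 80

The same manifest works on any conformant cluster; GKE, EKS, or AKS each provision their native load balancer behind it.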
About AWS: Outposts is a hardware solution. You have to buy AWS-specific hardware and plug it into your on-premises infrastructure. More on-premises investment in your move-to-cloud strategy? Hard to sell. And not compatible with other cloud providers. BUT re:Invent is coming next month. Maybe an outsider will appear?
I have to create and configure a two-node WSO2 EI cluster. In particular, I have to cluster an ESB profile and an MB profile.
I have some architectural doubts about this:
CLUSTERING ESB PROFILE DOUBTS:
I based my assumptions on this documentation: https://docs.wso2.com/display/EI640/Clustering+the+ESB+Profile
I found this section:
Note that some production environments do not support multicast. However, if your environment supports multicast, there are no issues in using this as your membership scheme.
What could be the reason for not supporting multicast? (So I can ask about possible issues with it.) Looking at the table (inside the previous link), it seems to me that possible problems could be related to the following points:
All nodes should be in the same subnet
All nodes should be in the same multicast domain
Multicasting should not be blocked
Is obtaining this information from the system/network engineers enough to decide whether to proceed with the multicast option?
Using multicast instead of WKA, would I need to do the same configuration steps listed in the first deployment scenario (the WKA-based one) related to "mounting the registry" and "creating/connecting to the databases" (as shown in the first documentation link)?
Does using multicast instead of WKA allow me to avoid stopping the service when I add a new node to the cluster?
CLUSTERING MB PROFILE:
From what I understand, the MB profile cluster can use only WKA as its membership scheme.
Does using WKA mean that I have to stop the service when I add a new node to the cluster?
So, in the end, can we consider the ESB cluster and the MB cluster two different clusters? Does the ESB cluster (if configured using multicast) keep running when a new node is added, while the MB cluster has to be stopped to add a new one?
Many virtual private cloud networks, including Google Cloud Platform, Microsoft Azure, Amazon Web Services, and the public Internet, do not support multicast. If you configure WSO2 products with multicast as the membership scheme on such a platform, the cluster will not work as expected. That is the main reason for the warning in the official documentation.
You can consider the platform's capabilities and choose any of the following membership schemes when configuring Hazelcast clustering in WSO2 products:
WKA
Multicast
AWS
Kubernetes
Other than WKA, none of the membership scheme options require you to include all the members' IPs in the configuration, so newly introduced nodes can join the cluster with ease; see the multicast sketch below.
Even with the WKA membership scheme, as long as you have at least one known member active, you can join a new member to the cluster, then apply the configuration change and restart the other servers one by one without any service interruption.
Please note that with all the above membership schemes, the rest of the configurations related to each product are still needed to successfully complete the cluster.
Regarding your concern about clustering the MB profile:
You can use any of the above-mentioned membership schemes that matches your deployment environment.
Regarding adding new members with WKA: you can maintain service availability and apply the changes to the servers one by one. You only need at least one WKA member running to introduce a new member to the cluster.
The WSO2 MB profile introduces cluster coordination through an RDBMS. With this new feature, cluster coordination is by default not handled by the Hazelcast engine; only when RDBMS coordination is disabled does the Hazelcast engine manage cluster coordination.
Please note that when RDBMS coordination is used, no server restarts are required.
I hope this was helpful.
I have gone through the Cloudbreak documentation and I am still not sure what the exact purpose of this component is.
Is it actually useful only for deploying a cluster on any cloud service, and if so, can we customise the components that need to be installed in the cluster?
If it is only for managing the deployment of a cluster, is there any cost involved in using Cloudbreak?
Cloudbreak's main purpose is HDP or HDF cluster management. It provides a UI and an API to access, create, and edit clusters, and it also provides access control management for the clusters. Yes, you can customise which components are installed via an Ambari blueprint, as sketched below.
An additional benefit comes from its Periscope component, which provides autoscaling based on Ambari alerts.
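To give an idea of the blueprint-based customisation, here is a minimal sketch of an Ambari blueprint; the blueprint name, stack version, and component layout are illustrative assumptions, not a recommended topology:

    {
      "Blueprints": {
        "blueprint_name": "hdp-minimal",
        "stack_name": "HDP",
        "stack_version": "3.1"
      },
      "host_groups": [
        {
          "name": "master",
          "cardinality": "1",
          "components": [
            { "name": "NAMENODE" },
            { "name": "RESOURCEMANAGER" }
          ]
        },
        {
          "name": "worker",
          "cardinality": "3",
          "components": [
            { "name": "DATANODE" },
            { "name": "NODEMANAGER" }
          ]
        }
      ]
    }

Cloudbreak lets you register such a blueprint when creating a cluster, so only the components you list get installed on each host group.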
Can IBM Integration Bus (and/or WebSphere Message Broker) be implemented on AWS? Can my on-premise ESB be migrated to the AWS Cloud?
Thanks in Advance
AWS EC2 allows importing VMs as AMIs, and then you can start an EC2 instance from that image. If you are new to AWS, you can check the link below.
https://aws.amazon.com/ec2/vm-import/
However, you should be careful about the IIB license and how many machines you are allowed to install it on before registering the AMI in a launch configuration, creating an Auto Scaling group, and setting a scaling policy that could start more instances than what you purchased.
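For reference, the import itself is a single CLI call. A sketch, assuming the VM disk has already been exported as a VMDK and uploaded to an S3 bucket you own (the bucket and key are placeholders, and the vmimport IAM service role must be set up first):

    aws ec2 import-image \
        --description "IIB on-prem VM" \
        --disk-containers "Format=vmdk,UserBucket={S3Bucket=my-bucket,S3Key=vms/iib.vmdk}"

The command returns an import task ID that you can poll with aws ec2 describe-import-image-tasks until the AMI is ready.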
That's very much possible. There are several possible approaches.
1. IIB on EC2
Installing and configuring IIB on an EC2 instance is very similar to doing the same on on-premise servers. The only difference is that the physical server is in the AWS Cloud. While this approach gives you maximum flexibility to design your architecture any way you like, it does not take advantage of the basic features of the cloud.
2. Quick Start
IIB is available for deployment under AWS Quick Start. You can read more about this here. It helps you get started quickly by setting up the entire environment in a few clicks. But if you're planning to migrate your existing architecture to AWS, this may not suit you, as the architecture is pre-defined with limited options for customization.
3. IIB on Containers
ACE 11 provides better support for containerization. You can read more about running IIB 10 on containers here and ACE 11 on containers here. The containers can then be deployed onto a fully managed container service such as AWS Elastic Container Service, or onto your own container setup such as Docker on EC2. A sketch of running the ACE image is shown below.
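As an illustration, starting an ACE 11 server from IBM's public image is a one-liner along these lines; the image name ibmcom/ace and the server name are taken from IBM's ot4i examples of that era, so check the current image location before relying on it:

    docker run --name aceserver \
        -p 7600:7600 -p 7800:7800 \
        --env LICENSE=accept \
        --env ACE_SERVER_NAME=ACESERVER \
        ibmcom/ace:latest

Port 7600 exposes the web administration UI and 7800 the default HTTP listener for deployed flows.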
Yes, of course. AWS provides the IaaS and you just install whatever you want inside. Make sure you open the required ports and use dedicated credentials for the installation (don't use admin), and everything should work.
IBM also provides Docker images of Integration Bus v10 and App Connect Enterprise v11. This is true for all of their integration tools: MQ, API Management, and more.
Not restricted to AWS.