Can we scale a service horizontally based on load using ZooKeeper?
I have read in the Apache ZooKeeper docs that it can be used for barriers, queues, service discovery, and leader election.
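ZooKeeper itself won't start or stop instances; it provides coordination primitives (ephemeral znodes, watches, leader election) that a scaler you write can build on: each worker registers an ephemeral node, an elected leader watches a load metric and decides the desired worker count. Below is a minimal sketch of that decision logic, with a plain Map standing in for the znode tree; this is an illustration of the pattern, not the real ZooKeeper client API.

```javascript
// Sketch of load-based scaling driven by a coordination store.
// A plain Map stands in for ZooKeeper's znode tree; in a real system
// you would use a ZooKeeper client library with ephemeral znodes
// under /workers and a watch on the queue metric.
const znodes = new Map();

// Each worker registers itself (like an ephemeral znode: it
// disappears automatically if the worker's session dies).
function registerWorker(id) { znodes.set(`/workers/${id}`, Date.now()); }

function liveWorkers() {
  return [...znodes.keys()].filter(k => k.startsWith('/workers/')).length;
}

// The elected leader runs this periodically: compare queue depth to
// per-worker capacity and decide how many workers the group needs.
function desiredWorkers(queueDepth, perWorkerCapacity, min = 1, max = 10) {
  const needed = Math.ceil(queueDepth / perWorkerCapacity);
  return Math.min(max, Math.max(min, needed));
}

registerWorker('w1');
registerWorker('w2');
console.log(liveWorkers());          // 2
console.log(desiredWorkers(45, 10)); // 5 -> the leader would start 3 more
```

The leader then launches or terminates instances through whatever your platform offers (an autoscaling API, a container orchestrator); ZooKeeper's role is only the membership view and the single-leader guarantee.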
Thanks.
I have three Docker containers, each running in its own task (3 tasks), and each task running in a separate Fargate service (3 services) on ECS Fargate. I have all the services behind an application load balancer (ALB) with path-based routing set up. Below is how the path-based routing works:
example.com/app_one forwards traffic to service_one_target_group
example.com/app_two forwards traffic to service_two_target_group
example.com/app_three forwards traffic to service_three_target_group
One thing to note: app_one and app_two are Node.js apps, and app_three is a GraphQL middleware server used to connect to a database.
I need the services for app_one and app_two to be able to discover the app_three service.
I know Service Discovery is an option but I am unsure how to implement the Service Discovery in a scenario with path based routing. Any advice would be helpful.
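One common pattern: keep the ALB's path-based routing for traffic coming from outside, and let app_one and app_two reach app_three directly through ECS Service Discovery (AWS Cloud Map), which registers the service under a private DNS name inside the VPC. The two are independent, so service discovery doesn't interfere with the path rules. The namespace and name below (`app-three.internal.local`) are hypothetical placeholders; a Node.js sketch of the client side:

```javascript
// app_one / app_two side: talk to app_three via its service-discovery
// DNS name instead of going back out through the ALB.
// "app-three.internal.local" is a hypothetical Cloud Map name; the real
// name depends on the namespace you create when enabling Service Discovery.
const APP_THREE_URL =
  process.env.APP_THREE_URL || 'http://app-three.internal.local:4000';

function graphqlEndpoint(path = '/graphql') {
  return new URL(path, APP_THREE_URL).toString();
}

console.log(graphqlEndpoint()); // http://app-three.internal.local:4000/graphql
```

Making the base URL an environment variable keeps local development and the Fargate deployment on the same code path.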
We are planning to run a service on ECS which isn't a webserver, but a (Node.js based) background daemon which we are going to use for processing asynchronous tasks. I wanted to add a health check to it so that the task is restarted in case the daemon dies or gets killed. The target group health checks only support the HTTP and HTTPS protocols and probably are not meant for this purpose. Any insights into how I can monitor non-web-based services on ECS and ensure they're always up and running?
How is high availability achieved with WSO2 ESB clustering?
Suppose there are 2 nodes clustered behind a load balancer. What happens when a node which is handling a few HTTP requests goes down? Will those requests be lost, or will the pending requests be moved to the other node in the cluster because of the clustering?
What needs to be done to achieve this? Can you please suggest?
Thanks
HA is handled by the load balancer, not by the ESB. Basically, if an ESB node fails, the load balancer or the invoking client should handle that situation. If neither the LB nor the client has been implemented to handle such a failure scenario, there will be message loss. The LB has to route new requests to the other available node.
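If the invoking client has to handle the failure itself, the usual approach is to try each node in order and fall through to the next on a connection error; this is only safe for idempotent requests. A sketch in Node.js, with hypothetical node URLs and a stubbed request function so it is self-contained:

```javascript
// Client-side failover: try each node in order, so a request that
// fails against a dead node is retried on the next one.
async function callWithFailover(nodes, doRequest) {
  let lastError;
  for (const node of nodes) {
    try {
      return await doRequest(node);
    } catch (err) {
      lastError = err; // node unreachable: fall through to the next one
    }
  }
  throw lastError; // every node failed
}

// Usage with a stub standing in for the real HTTP call:
const nodes = ['http://esb-node1:8280', 'http://esb-node2:8280'];
callWithFailover(nodes, async (node) => {
  if (node.includes('node1')) throw new Error('connection refused');
  return `handled by ${node}`;
}).then(r => console.log(r)); // handled by http://esb-node2:8280
```

In a real client, `doRequest` would be your HTTP call with a short connect timeout, so a dead node fails fast instead of stalling the request.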
WSO2 recommends using Nginx as the default load balancer. ESB clustering documentation can be found in the cluster doc.
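As a concrete illustration, a minimal Nginx config fronting two ESB nodes could look like the fragment below. The hostnames are placeholders and 8280 is assumed as the ESB's HTTP transport port; note that even with retries, requests already in flight on a node that dies mid-processing are still lost.

```nginx
# Hypothetical two-node ESB cluster behind Nginx.
upstream wso2esb {
    server esb-node1.example.com:8280 max_fails=3 fail_timeout=30s;
    server esb-node2.example.com:8280 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://wso2esb;
        # On a connection error or timeout, retry the request
        # on the next upstream node instead of failing it.
        proxy_next_upstream error timeout;
    }
}
```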
I am using Amazon Web Services (AWS) cloud servers to create Mule server nodes. The issue with AWS is that it doesn't support multicast, but MuleSoft requires all the nodes to be in the same network with multicast enabled for clustering.
Amazon FAQ:
https://aws.amazon.com/vpc/faqs/
Q. Does Amazon VPC support multicast or broadcast?
A: No.
A Mule cluster doesn't show a proper heartbeat without multicast enabled; the mule_ee.log file should show:
Cluster OK
Members [2] {
Member [<IP-Node1>]:5701 this
Member [<IP-Node2>]:5701
}
but my cluster shows:
Members [1] {
Member [<IP-Node1>]:5701 this
}
which is wrong according to MuleSoft standards. I created a sample poll scheduler application and deployed it to the Mule cluster; because the cluster is not formed properly, it runs on both nodes.
But my organization requires us to continue with AWS for the server configuration.
Questions
1) Is there any other approach, instead of using a Mule cluster, where I can use both Mule server nodes in an HA (active-active) configuration?
2) Is it possible to have one server up and running (active) and the other one in passive mode, instead of Mule HA (active-active) mode?
3) CloudHub and Anypoint MQ are deployed on AWS. How did MuleSoft handle the multicast issue with AWS?
According to the MuleSoft support team, they don't advise managing Mule HA in AWS, regardless of whether we are managing it with ARM or MMC.
The Mule instances communicate with each other and guarantee HA, as well as ensuring that a single request is not processed more than once, but that does not work on AWS because latency may cause the instances to disconnect from one another. We need to have the servers on-prem to have the HA model.
Multicast and unicast are just used so that the nodes can discover each other automatically, as explained further in the documentation.
Mule cluster config
AWS known limitation: here
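For reference, Mule EE clusters can be configured for unicast (TCP/IP) discovery instead of multicast, which is the usual workaround where multicast is unavailable. The fragment below reflects the property names as documented for Mule HA clustering; verify them against your runtime version, and treat the IDs and IPs as placeholders.

```
# {MULE_HOME}/.mule/mule-cluster.properties, on each node
mule.clusterId=myCluster
# use 2 on the second node
mule.clusterNodeId=1
# disable multicast discovery and list the members explicitly
mule.cluster.multicastenabled=false
# private IPs of both nodes
mule.cluster.nodes=10.0.1.10,10.0.1.11
```

Whether this performs acceptably on AWS is a separate question; per the support guidance above, inter-node latency remains the risk even with unicast discovery.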
I want to make a cluster of Data Services Servers (DSS) and use an Enterprise Service Bus (ESB) as the load balancer. In this deployment, what is the purpose of having a manager DSS in the cluster, and if there is a manager, is it a single point of failure?
These are the references which I used for load balancing and DSS clustering:
Dynamic load balancing between 3 nodes
How to install WSO2 Carbon cluster management feature?
The dynamic load balancing mechanism in WSO2 ESB discovers the DSS members in an application group using a group communication framework and shares the load at runtime.
The load balancer is not bound or coupled to any cluster manager; it will simply distribute the load among the nodes in the applicationDomain.
So, at runtime, the cluster manager doesn't create a single point of failure.
If you want, you can set up a DSS cluster even without a cluster manager and distribute the load among the nodes via the ESB.
The cluster manager is simply a component installed to manage your cluster.
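As an illustration, distributing load from the ESB across two DSS nodes without any cluster manager can be done with a plain load-balanced endpoint in the Synapse configuration. The hostnames below are placeholders and 9763 is assumed as the DSS HTTP port:

```xml
<endpoint name="dss-lb">
  <loadbalance algorithm="org.apache.synapse.endpoints.algorithms.RoundRobin">
    <endpoint>
      <address uri="http://dss-node1:9763/services/"/>
    </endpoint>
    <endpoint>
      <address uri="http://dss-node2:9763/services/"/>
    </endpoint>
  </loadbalance>
</endpoint>
```

Round-robin with statically listed members suits the stateless-services approach; session-aware dispatching would need the clustered (session-sharing) setup instead.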
This is an extension to Prabath's answer.
DSS can be configured to work in a cluster, so that all DSS nodes act as members of a single cluster. This facilitates sharing sessions among the nodes.
Alternatively, you can have all DSS nodes running in isolation (using the same configuration), fronted by a load balancer (LB). Unlike the previous approach, this method does not support sharing sessions between DSS nodes, and thus only supports stateless services.
WSO2 ESB can act as an LB, but having a single instance of the LB will make it a SPoF. The LB can be configured to run in a cluster as well.
I don't know what's behind the decision to use an ESB instead of an ELB for the LB, but it's up to you which one to use.
The manager is not a single point of failure; it's just a way to manage the entire cluster from a single management console (with limitations), and it can be configured to act as a worker at the same time.
Regarding the LB layer, you can use keepalived to avoid having a SPoF in the ESB acting as the LB, the same way it's done for WSO2 ELBs.
Take a look at Failover for ELB with keepalived.
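A minimal keepalived VRRP block for the two LB nodes might look like the fragment below; the interface name, router id, and virtual IP are placeholders. Clients point at the virtual IP, which moves to the backup node if the master dies, so neither LB instance is a SPoF.

```
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the standby node
    interface eth0
    virtual_router_id 51
    priority 100              # use a lower priority on the standby
    virtual_ipaddress {
        192.168.1.100         # floating IP that fronts the ESB/LB
    }
}
```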