ELB with auto scaling - wso2

I want a suggestion for ELB with auto scaling (automatically start a new instance when the load is more) for WSO2 ESB.
Thanks.

Please use "WSO2 Private PaaS". You can have "auto scaling" WSO2 ESB instances with the WSO2 Private PaaS.
As I mentioned in my previous answer, auto scaling with a load balancer was not very successful, which is why the WSO2 ELB is no longer recommended for auto scaling.
It's not mandatory to use WSO2 Private PaaS to auto scale WSO2 products. You can use your preferred IaaS features for auto scaling.
For example, you can use Amazon EC2 Auto Scaling. You can create your own AMIs with WSO2 products and use a configuration management solution such as Puppet to configure the products when a new instance is spawned. WSO2 Private PaaS also uses Puppet to configure WSO2 cartridge instances.
In EC2, you can dynamically scale using various metrics. For more info, see the Auto Scaling documentation. When you use EC2, you can also use the Amazon ELB.
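As a hedged sketch of the EC2 approach, the snippet below builds the parameters for a target-tracking scaling policy that adds WSO2 ESB instances when average CPU climbs. The group and policy names are placeholders, not resources from the question; with credentials configured, the dictionary would be passed to boto3's `put_scaling_policy`.

```python
import json

# Hypothetical sketch: parameters for an EC2 Auto Scaling target-tracking
# policy. The group and policy names below are assumptions for illustration.
policy = {
    "AutoScalingGroupName": "wso2-esb-asg",   # placeholder Auto Scaling group
    "PolicyName": "scale-on-cpu",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        # Auto Scaling adds instances when average CPU rises above ~60%
        # and removes them when it falls well below.
        "TargetValue": 60.0,
    },
}

# Applied (with AWS credentials configured) as:
#   boto3.client("autoscaling").put_scaling_policy(**policy)
print(json.dumps(policy, indent=2))
```

With a policy like this, scale-out capacity is handled by EC2 itself, and Puppet (as described above) configures each new WSO2 instance as it comes up.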
With WSO2 Private PaaS, WSO2 cartridges are readily available (with Puppet configurations included), and those cartridges can auto scale according to the configured policies.

The WSO2 Elastic Load Balancer has been discontinued. You can download NGINX Plus, the load balancer by NGINX, for which we provide support.
If you are currently using WSO2 ELB and need guidance, please visit our documentation page, especially Auto-Scaling in Load Balancer.

Related

Istio with AWS ECS

My company is using AWS ECS as container orchestration service. From Istio's documentation I have understood that it works primarily with Kubernetes. Does Istio work with ECS also?
I suggest you use Service Discovery for ECS, where your microservices can connect to each other without hard-coding any task IPs. Istio is not integrated with ECS as of now. With service discovery, you get a Route 53 private hosted zone that can be used to connect to other microservices.
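A hedged sketch of that wiring: first create a Cloud Map private DNS namespace (which backs the Route 53 private hosted zone), then point the ECS service at a discovery service so task IPs are registered automatically. All names, IDs, and ARNs below are placeholders; the request dictionaries would be passed to boto3's `create_private_dns_namespace` and `create_service`.

```python
import json

# Step 1 (placeholder values): a private DNS namespace, which becomes a
# Route 53 private hosted zone. Applied as:
#   boto3.client("servicediscovery").create_private_dns_namespace(**namespace_request)
namespace_request = {
    "Name": "internal.local",        # services resolve as <service>.internal.local
    "Vpc": "vpc-0123456789abcdef0",  # placeholder VPC id
}

# Step 2 (placeholder values): attach the ECS service to a discovery service
# so ECS registers and deregisters task IPs for you. Applied as:
#   boto3.client("ecs").create_service(**service_request)
service_request = {
    "cluster": "my-cluster",
    "serviceName": "orders",
    "taskDefinition": "orders:1",
    "desiredCount": 2,
    "serviceRegistries": [
        # ARN of the Cloud Map service created for "orders" (placeholder)
        {"registryArn": "arn:aws:servicediscovery:us-east-1:111122223333:service/srv-placeholder"}
    ],
}

print(json.dumps(service_request, indent=2))
```

Other services can then reach this one at `orders.internal.local` instead of a hard-coded task IP.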

AWS & Azure Hybrid Cloud Setup - is this configuration at all possible (Azure Load Balancer -> AWS VM)?

We have all of our cloud assets currently inside Azure, which includes a Service Fabric Cluster containing many applications and services which communicate with Azure VM's through Azure Load Balancers. The VM's have both public and private IP's, and the Load Balancers' frontend IP configurations point to the private IP's of the VM's.
What I need to do is move my VM's to AWS. Service Fabric has to stay put on Azure though. I don't know if this is possible or not. The Service Fabric services communicate with the Azure VM's through the Load Balancers using the VM's private IP addresses. So the only way I could see achieving this is either:
1. Keep the load balancers in Azure and direct the traffic from them to the AWS VMs.
2. Point Azure Service Fabric to AWS load balancers.
I don't know if either of the above are technologically possible.
For #1, if I used Azure's load balancing, I believe the load balancer front-end IP config would have to use the public IP of the AWS VM, right? Is that not less secure? If I set it up to go through a VPN (if even possible) is that as secure as using internal private ip's as in the current load balancer config?
For #2, again, not sure if this is technologically achievable - can we even have Service Fabric Services "talk" to AWS load balancers? If so, what is the most secure way to achieve this?
I'm not new to the cloud engineering game, but very new to the idea of using two cloud services as a hybrid solution. Any thoughts would be appreciated.
As far as I know, creating a multi-region / multi-datacenter cluster in Service Fabric is possible.
Here is a brief list of requirements to get an initial sense of how this would work, and here is a sample (not approved by Microsoft) of a cross-region Service Fabric cluster configuration. I know these are different regions within Azure rather than different cloud providers, but the sample is useful for seeing how some of these things are configured.
Hope this helps.
Based on the details provided in the comments on your own question:
SF is cloud agnostic; you could deploy your entire cluster without any dependencies on Azure at all.
The cluster you see in your Azure portal is just an Azure resource screen used to describe the details of your cluster.
You are better off creating the entire cluster in AWS than taking the requested approach, because in the end, the only thing left in Azure would be this Azure resource screen.
Extending Oleg's answer ("creating a multi-region / multi-datacenter cluster in Service Fabric is possible"), I would add that it is also possible to create an Azure-agnostic cluster that you can host on AWS, Google Cloud, or on premises.
The one detail that is not well understood is that any option not hosted in Azure requires an extra level of management, because you have to manage the resources (VMs, load balancers, auto scaling, OS updates, and so on) yourself to keep the cluster updated and running.
Also, multi-region and multi-zone clusters were left aside on the SF roadmap for a long time because they are very complex to implement; this is why Microsoft avoids recommending them, but it is possible.
If you want to take the AWS approach, I would point you to this tutorial: Create AWS infrastructure to host a Service Fabric cluster
It is the first of a four-part tutorial with guidance on how you can set up an SF cluster on AWS infrastructure.
Regarding the other resources hosted on Azure, you can still access them from AWS without any problems.

Google Kubernetes Engine: Cloud Platform Service Broker vs Alias IPs

What is the difference between using the Cloud Platform Service Broker and using Alias IPs when configuring Kubernetes Engine?
The Service Broker doesn't have anything to do with Alias IPs. The Cloud Platform Service Broker is called by the Kubernetes Service Catalog to provision GCP services from Kubernetes manifests (for example, you can create a Cloud SQL database by deploying a Kubernetes manifest thanks to this feature). Alias IPs don't have anything to do with this.
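To make the "provision a GCP service from a manifest" idea concrete, here is a hedged sketch of a Service Catalog `ServiceInstance`. The class, plan, and parameter names are placeholders, not the broker's actual catalog entries; you would list the real ones (e.g. with `svcat get classes`) before using this.

```yaml
# Hypothetical sketch: asking the Cloud Platform Service Broker, via the
# Kubernetes Service Catalog, to provision a Cloud SQL instance.
# Class, plan, and parameter names below are placeholders.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-cloudsql-instance
  namespace: default
spec:
  clusterServiceClassExternalName: cloud-sql-mysql   # placeholder class name
  clusterServicePlanExternalName: beta               # placeholder plan name
  parameters:
    instanceId: my-database                          # placeholder parameter
```

Deploying a manifest like this is what triggers the broker to create the database; it never touches pod networking, which is where Alias IPs live.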

Google Container Engine Clusters in different regions with cloud load balancer

Is it possible to run one Google Container Engine cluster in the EU and one in the US, and load balance between the apps running on these Google Container Engine clusters?
Google Cloud HTTP(S) Load Balancing, TCP Proxy and SSL Proxy support cross-region load balancing. You can point it at multiple different GKE clusters by creating a backend service that forwards traffic to the instance groups for your node pools, and sends traffic on a NodePort for your service.
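The NodePort piece of that setup can be sketched as a manifest like the one below (app label and port numbers are placeholders). Each cluster exposes the service on the same fixed NodePort, so a single cross-region backend service can target the node instance groups in every region on that port.

```yaml
# Hypothetical sketch: expose the app on a fixed NodePort in every cluster
# so a cross-region load balancer can reach it on each cluster's nodes.
# The label and port values are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web          # placeholder pod label
  ports:
  - port: 80
    targetPort: 8080  # placeholder container port
    nodePort: 30080   # the backend service forwards and health-checks on this node port
```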
However, it would be preferable to create the LB automatically, as Kubernetes does for an Ingress. One way to do this is with Cluster Federation, which has support for Federated Ingress.
Try kubemci for some help in getting this set up. GKE does not currently support or recommend Kubernetes cluster federation.
From their docs:
kubemci allows users to manage multicluster ingresses without having to enroll all the clusters in a federation first. This relieves them of the overhead of managing a federation control plane in exchange for having to run the kubemci command explicitly each time they want to add or remove a cluster.
Also since kubemci creates GCE resources (backend services, health checks, forwarding rules, etc) itself, it does not have the same problem of ingress controllers in each cluster competing with each other to program similar resources.
See https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress

ElasticSearch AWS plugin is a must to deploy on AWS?

We are planning to deploy our ElasticSearch on Amazon Web Services. I noticed that there is a plugin from ElasticSearch that allows ElasticSearch to use the AWS API for the unicast discovery mechanism: ElasticSearch Cloud AWS.
My questions are:
Should I use that plugin, or is it something nice to have but not required?
What is the effect of not using it?
You don't have to use the plugin.
If you don't, then you'll have to put the addresses of the nodes in your configuration file by hand (since multicast is not available on EC2).
The ec2 plugin can also set the availability zone of instances as node attributes; this can be used to tell Elasticsearch not to put primary and replica shards in the same availability zone. Again, you could do this by hand.
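The by-hand alternative might look like the fragment below, assuming the pre-5.x `discovery.zen` settings implied by the question (hostnames and zone names are placeholders). The plugin automates both halves: it discovers the unicast hosts via the EC2 API and can populate the zone attribute from instance metadata.

```yaml
# Hypothetical elasticsearch.yml fragment: the manual alternative to the
# cloud-aws plugin. Addresses and zone names are placeholders.

# Unicast discovery by hand, since multicast is unavailable on EC2:
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.1.10", "10.0.2.10"]

# Zone awareness by hand: tag each node with its availability zone and
# tell the allocator to spread primaries and replicas across zones.
node.zone: us-east-1a
cluster.routing.allocation.awareness.attributes: zone
```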