If I have an architecture in AWS with many different microservices, each in its own AWS ECS cluster with an AWS Application Load Balancer in front of each one, would I still need Consul or Linkerd?
Conversely, if I don't use an AWS Application Load Balancer, would I be required to use something like Consul or Linkerd to handle the load balancing as the ECS clusters scale up/down?
I'm trying to understand whether these all are complementary services or whether they are competing services.
Meshes are really designed to mediate communications within the cluster. Linkerd could secure communications across this kind of multicluster setup, but to be candid, I find myself wondering if you'd be better served with all the microservices in the same cluster. That would let you use an API gateway and a service mesh to secure everything, provide better reliability and observability, and save you money on load balancers.
(Full disclosure: I have ties to Linkerd and to Emissary-ingress, so I'm a touch biased. :) )
I know that with AWS ECS's service discovery you can make one service talk to another by using the service.cluster pattern in the URL (and service discovery does the rest for you). But I want to know if there is a way of making one instance of a service talk to another container in the same service on AWS Elastic Container Service.
I have searched for this for a while now and decided to ask here as I wasn't able to find any conclusive ideas on how to make it work.
You need to associate your ECS service with an Application Load Balancer (ALB), which ensures ECS registers the tasks (i.e. one or more containers) with the ALB, providing a uniform URL for the services to communicate with each other. The ALB supports path-based routing to multiple target groups, allowing microservice-style communication between tasks/services.
As per the AWS documentation, you can use a single ALB for multiple services, since the ALB provides path-based routing and priority rules. For further information you can also view this link, which explains how path-based routing can be used to forward requests to different target groups with different ECS tasks/services.
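For illustration, here's a minimal boto3 sketch of adding such a path-based rule; the listener and target group ARNs are hypothetical placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical ARNs -- substitute your own listener and target group.
LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def"
USERS_TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/users-svc/abc"

# Forward /users/* to the target group the "users" ECS service registers with.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,  # lower number = evaluated first
    Conditions=[{"Field": "path-pattern", "Values": ["/users/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": USERS_TG_ARN}],
)
```

Each additional service gets its own target group and rule (with a distinct path pattern and priority) on the same listener.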
My applications run on Elastic Beanstalk and communicate purely with internal services like Kinesis and DynamoDB. There is no web traffic needed? Do I need an Elastic Load Balancer in order to scale my instances up and down? I want to add and remove instances purely based on some CloudWatch metrics. Do I need the ELB to do managed updates etc.?
If there is no traffic to the service, then there is no need to have a load balancer.
The load balancer exists primarily to distribute inbound traffic such as web requests.
Autoscaling can still be accomplished without a load balancer, with scaling based on whatever CloudWatch metric you want to use. In fact, this is generally how consumer-based applications tend to work.
To do this without a load balancer, you would want to configure your environment as a worker environment.
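As a rough sketch of what that scaling configuration could look like with boto3 (the environment name is hypothetical; the option names come from the aws:autoscaling:trigger namespace):

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Hypothetical environment name -- substitute your own worker environment.
eb.update_environment(
    EnvironmentName="my-worker-env",
    OptionSettings=[
        # Scale on CPU instead of the default network I/O trigger.
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "MeasureName", "Value": "CPUUtilization"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "Statistic", "Value": "Average"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "Unit", "Value": "Percent"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "UpperThreshold", "Value": "70"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "LowerThreshold", "Value": "30"},
    ],
)
```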
@Chris already answered, but I would like to complement his answer with the following:
There is no web traffic needed?
Even if you communicate with Kinesis and DynamoDB only, your instances still need to be able to access the internet to communicate with those AWS services. So outbound web traffic is required from your instances; direct inbound traffic to your instances is not needed.
To fully separate your EB environment from the internet, you should have a look at the following:
Using Elastic Beanstalk with Amazon VPC
The document describes what you can do and what can't be done when using private subnets.
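If you want the instances to reach DynamoDB and Kinesis without any internet access at all, VPC endpoints are the usual building block. A minimal boto3 sketch, assuming hypothetical VPC, route table, subnet, and security group IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical IDs -- substitute your own VPC resources.
VPC_ID = "vpc-0123456789abcdef0"

# DynamoDB is reachable through a (free) gateway endpoint.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],
)

# Kinesis Data Streams needs an interface endpoint (PrivateLink).
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.kinesis-streams",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
```

With those in place, instances in private subnets can call both services without a NAT gateway or public IPs.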
I have a set of ASP.NET Core APIs running on AWS Lambda. I have set up custom URL paths that are mapped to these APIs. I also have the same set of APIs exposed on EC2, and at deploy time I deploy to both automatically.
The reason I do this is that the EC2 version is fast for light-load scenarios, with no cold-start issues. However, as load increases I'd like my requests to be directed to the API Gateway/Lambda APIs. Basically, I'd be using Lambda to add resiliency and take the strain of heavy load without sacrificing performance in light-load scenarios due to cold starts. I have an ELB sitting in front of my EC2 APIs, and I thought that maybe I could use the load balancer to redirect traffic to API Gateway based on some custom health monitoring.
My question is: is this possible, or even a reasonable approach? Is there perhaps a better way, where I have API Gateway forward to EC2 or Lambda instead?
Any guidance appreciated.
Thanks
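For what it's worth, an ALB can register Lambda functions as targets and split traffic across target groups by weight, which is one way to sketch the idea in this question. A hedged boto3 example with hypothetical ARNs (the Lambda target group would need TargetType="lambda" and the function registered to it beforehand):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical ARNs -- substitute your own listener and target groups.
LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/api-alb/abc/def"
EC2_TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/ec2-apis/abc"
LAMBDA_TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/lambda-apis/abc"

# Send most traffic to the always-warm EC2 fleet, spilling a share to Lambda.
elbv2.modify_listener(
    ListenerArn=LISTENER_ARN,
    DefaultActions=[{
        "Type": "forward",
        "ForwardConfig": {
            "TargetGroups": [
                {"TargetGroupArn": EC2_TG_ARN, "Weight": 90},
                {"TargetGroupArn": LAMBDA_TG_ARN, "Weight": 10},
            ]
        },
    }],
)
```

Shifting the weights (manually, or from a CloudWatch alarm handler) would move load toward Lambda as traffic grows, without involving API Gateway at all.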
We have all of our cloud assets currently inside Azure, which includes a Service Fabric cluster containing many applications and services that communicate with Azure VMs through Azure Load Balancers. The VMs have both public and private IPs, and the load balancers' frontend IP configurations point to the private IPs of the VMs.
What I need to do is move my VMs to AWS. Service Fabric has to stay put on Azure, though. I don't know if this is possible or not. The Service Fabric services communicate with the Azure VMs through the load balancers using the VMs' private IP addresses. So the only way I could see achieving this is either:
1. Keep the load balancers in Azure and direct the traffic from them to the AWS VMs.
2. Point Azure Service Fabric to AWS load balancers.
I don't know if either of the above are technologically possible.
For #1, if I used Azure's load balancing, I believe the load balancer frontend IP config would have to use the public IP of the AWS VM, right? Is that not less secure? If I set it up to go through a VPN (if even possible), is that as secure as using internal private IPs as in the current load balancer config?
For #2, again, not sure if this is technologically achievable - can we even have Service Fabric Services "talk" to AWS load balancers? If so, what is the most secure way to achieve this?
I'm not new to the cloud engineering game, but very new to the idea of using two cloud services as a hybrid solution. Any thoughts would be appreciated.
As far as I know, creating a multi-region / multi-datacenter cluster in Service Fabric is possible.
Here is a brief list of requirements to get an initial sense of how this would work, and here is a sample (not approved by Microsoft) with a cross-region Service Fabric cluster configuration. (I know these are different regions in Azure, not different cloud providers, but this sample can be of use to see how some of these things are configured.)
Hope this helps.
Based on the details provided in the comments of your own question:
SF is cloud-agnostic; you could deploy your entire cluster without any dependencies on Azure at all.
The cluster you see in your Azure portal is just an Azure Resource Screen used to describe the details of your cluster.
You are better off creating the entire cluster in AWS than doing the requested approach, because in the end the only thing left in Azure would be this Azure Resource Screen.
Extending Oleg's answer ("creating a multi-region / multi-datacenter cluster in Service Fabric is possible"), I would add that it is also possible to create an Azure-agnostic cluster that you can host on AWS, Google Cloud, or on-premises.
The one detail that is not well known is that any option not hosted in Azure requires an extra level of management, because you have to manage the resources (VMs, load balancers, auto scaling, OS updates, and so on) yourself to keep the cluster updated and running.
Also, multi-region and multi-zone clusters were left aside for a long time on the SF roadmap because they are very complex to do, which is why they avoid recommending them, but it is possible.
If you want to go for the AWS approach, I'd point you to this tutorial: Create AWS infrastructure to host a Service Fabric cluster
This is the first of a four-part tutorial with guidance on how you can set up an SF cluster on AWS infrastructure.
Regarding the other resources hosted on Azure, you could still access them from AWS without any problems.
Is it possible to run one Google Container Engine cluster in the EU and one in the US, and load balance between the apps running on these Google Container Engine clusters?
Google Cloud HTTP(S) Load Balancing, TCP Proxy and SSL Proxy support cross-region load balancing. You can point it at multiple different GKE clusters by creating a backend service that forwards traffic to the instance groups for your node pools, and sends traffic on a NodePort for your service.
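To illustrate the NodePort piece: if each cluster exposes the service on the same NodePort, a single cross-region backend service can target the instance groups of both clusters. A sketch using the Kubernetes Python client; the app name and ports are made up:

```python
from kubernetes import client, config

config.load_kube_config()  # run once per cluster context (EU, then US)

# Expose the app on the same NodePort in every cluster so one cross-region
# backend service can target the instance groups of both node pools.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="my-app"),
    spec=client.V1ServiceSpec(
        type="NodePort",
        selector={"app": "my-app"},
        ports=[client.V1ServicePort(port=80, target_port=8080, node_port=30080)],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```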
However, it would be preferable to create the LB automatically, as Kubernetes does for an Ingress. One way to do this is with Cluster Federation, which has support for Federated Ingress.
Try kubemci for some help in getting this set up. GKE does not currently support or recommend Kubernetes cluster federation.
From their docs:
kubemci allows users to manage multicluster ingresses without having to enroll all the clusters in a federation first. This relieves them of the overhead of managing a federation control plane in exchange for having to run the kubemci command explicitly each time they want to add or remove a cluster.
Also since kubemci creates GCE resources (backend services, health checks, forwarding rules, etc) itself, it does not have the same problem of ingress controllers in each cluster competing with each other to program similar resources.
See https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress