I want to build a cell-based architecture on AWS, with a control plane and a data plane. I am clear about the control plane and data plane, but I'm a little confused about building the thinnest possible layer that routes traffic to the appropriate cell based on the client. Any leads or expertise? Can we have an ALB backed by compute instances and DynamoDB as that thinnest layer, which listens for requests and routes the traffic appropriately to cells via Route 53?
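For what it's worth, here is a rough sketch of the per-request lookup such a thin layer could perform, assuming a hypothetical DynamoDB table CellRouting that maps a client ID to a cell-specific endpoint (all names here are made up):

aws dynamodb get-item \
  --table-name CellRouting \
  --key '{"clientId": {"S": "client-123"}}' \
  --projection-expression "cellEndpoint"

The routing layer would then forward the request to the returned cellEndpoint, e.g. cell1.example.com, which Route 53 resolves to that cell's own load balancer.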
I see that for every Knative service, two VirtualService objects are created: ksvc-ingress, which has the knative-serving/knative-ingress-gateway and knative-serving/knative-local-gateway gateways configured, and ksvc-mesh, which has mesh as its gateway.
I can see the knative-serving/* gateways using kubectl, but I am unable to find the mesh gateway object in any namespace. I would like to understand whether mesh denotes some special object, or whether it is an Istio keyword representing something else.
The mesh name is a keyword, as you guessed. That keyword represents the East-West traffic between Pods in the Kubernetes cluster, as managed by the Istio sidecar. You can think of those VirtualServices as being programmed onto each sidecar to do the routing and traffic splitting next to the request sender, rather than needing to route to a central service / gateway.
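For illustration, a minimal VirtualService that targets the mesh gateway might look like this (the service and host names are hypothetical):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: example-mesh            # hypothetical name
  namespace: default
spec:
  gateways:
  - mesh                        # keyword: programmed onto the sidecars, not a Gateway object
  hosts:
  - example.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: example-v1.default.svc.cluster.local   # traffic split done at the sender's sidecar
      weight: 90
    - destination:
        host: example-v2.default.svc.cluster.local
      weight: 10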
As you noticed, Knative uses Istio as a service mesh.
In the Istio context, mesh is not an object (or resource) like, for example, a Service. The Istio About page explains what a service mesh is:
A service mesh is a dedicated infrastructure layer that you can add to your applications. It allows you to transparently add capabilities like observability, traffic management, and security, without adding them to your own code. The term “service mesh” describes both the type of software you use to implement this pattern, and the security or network domain that is created when you use that software.
So mesh is a term that encapsulates all the Istio objects (istio-proxy containers, VirtualServices, ingress gateways, etc.) that work together to provide traffic management inside the cluster.
A Gateway is a load balancer operating at the edge of the mesh receiving incoming or outgoing HTTP/TCP connections.
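Unlike mesh, a Gateway is a concrete resource you can list with kubectl. A minimal sketch, with a hypothetical hostname:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway         # hypothetical name
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway       # binds to the default Istio ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "example.com"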
So my plan is to secure a set of physical servers in a private network against the entire NSX-T workload domain, without buying an additional hardware firewall, since we have massive edge capacity left, but no money. :/
So intuitively I would just add an NSX gateway firewall, just like it's described in this blog post:
https://blogs.vmware.com/networkvirtualization/2020/08/the-nsx-t-gateway-firewall-secures-physical-servers.html/
But that covers the easy case, where the firewall is simply added to the default T0, which I can't do due to our active/active setup. So I would have to add an additional active/passive T0 and connect it to the existing T0.
But how do I now force traffic to my private network through the additional T0 including the gateway firewall?
Apparently this is impossible to achieve without bridging only the second T0 to the private network's VLAN and omitting the route via the physical BGP router. Or is there a chance?
If the private network is routed via the physical BGP uplink router, there is indeed no way other than to hide this route on the physical BGP router. That wouldn't make much sense anyway, so let's consider the case where it isn't.
Then there are apparently two solutions to this task, with the first probably being the more straightforward one:
1. Deploy an additional service edge (active/passive) and add a service gateway and the gateway firewall to any T1. The corresponding T1 service router will then be deployed on the service edge (you have to pick one in the deployment wizard). We might then only have to add a prefix filter on the NSX BGP uplink if we want to hide the private network from the external uplink network.
2. Configure L2 bridging (see the VMware docs): create an L2 bridge in any segment and add the gateway firewall to this segment's T1 uplink, or add a bridge firewall to the bridge's VDS. Then optionally apply the prefix filter for the bridged LAN on the BGP uplink.
An upcoming update will let you use A/A together with a stateful firewall; VMware is working on it. It will simply allow stateful services to be used even with an active/active routing setup.
We have many internet-facing services. What are the considerations for choosing between an ALB per service and a single ALB for all services, using listener rules pointing to target groups?
Each service has its own cluster/target group, with different functionality and a different URL.
Can a spike in one service impact other services?
Is it going to be a single point of failure?
Cost perspective?
Observability, monitoring, logs?
Ease of management?
Personally, I would normally use a single ALB with different listener rules for different services.
For example, with service1.domain.com and service2.domain.com, I would have two host-based rules on the same ALB listener that route to the two different services.
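A sketch of one such rule with the AWS CLI (the ARNs and hostname are placeholders):

# Route requests for service1.domain.com to service1's target group
aws elbv2 create-rule \
  --listener-arn <listener-arn> \
  --priority 10 \
  --conditions Field=host-header,Values=service1.domain.com \
  --actions Type=forward,TargetGroupArn=<service1-target-group-arn>

A second rule with a different host header and target group handles service2, and both services share the one ALB.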
In my experience the ALB is highly available and scales very nicely without any issues; I've never had a service become unreachable due to scaling issues. ALBs scale based on Load Balancer Capacity Units (LCU). As your load balancer requires more capacity, AWS automatically assigns more LCUs, which allows it to handle more traffic.
Source: my own experience working on an international system consisting of monoliths and microservices with a large degree of scaling across time zones.
A spike in service A won't impact service B, but identifying which service is having a bad time can be a bit of a pain.
From a monitoring perspective it is a bit harder, because it is not easy to quickly identify which service/target is suffering.
For management, as soon as different teams need to create and manage their own targets, it can create conflicts.
I wouldn't encourage you to use that monolithic architecture.
From a cost perspective you can use one load balancer with multiple forward rules, but using a single central load balancer for an entire application ecosystem essentially reproduces the standard monolith architecture while enormously increasing the number of instances served by one load balancer. Besides being a single point of failure for the entire system should it go down, this single load balancer can very quickly become a major bottleneck, since all traffic to every microservice has to pass through it.
Using a separate load balancer per microservice type adds some overhead, but it reduces each single point of failure to one microservice: in this model, incoming traffic for each type of microservice is sent to a different load balancer.
As far as I understand, Istio DestinationRules can define load-balancing policies to reach subsets of a service, e.g. subsets based on different versions of the service. So DestinationRules are the first level of load balancing.
The request will eventually reach a Kubernetes Service, which is generally implemented by kube-proxy. kube-proxy does simple load balancing across the pods in its backend. This is the second level of load balancing.
Is there a way to remove the second load balancer? For example, could we create many Service instances that offer the same service and can be load-balanced by DestinationRules, and then have only one pod per Service instance, so that kube-proxy does not apply load balancing?
According to the Istio documentation:
Istio’s traffic routing rules let you easily control the flow of traffic and API calls between services. Istio simplifies configuration of service-level properties like circuit breakers, timeouts, and retries, and makes it easy to set up important tasks like A/B testing, canary rollouts, and staged rollouts with percentage-based traffic splits. It also provides out-of-box failure recovery features that help make your application more robust against failures of dependent services or the network.
Istio’s traffic management model relies on the Envoy proxies that are deployed along with your services. All traffic that your mesh services send and receive (data plane traffic) is proxied through Envoy, making it easy to direct and control traffic around your mesh without making any changes to your services.
If you’re interested in the details of how the features described in this guide work, you can find out more about Istio’s traffic management implementation in the architecture overview. The rest of this guide introduces Istio’s traffic management features.
This means that the Istio service mesh communicates via the Envoy proxy, which in turn relies on Kubernetes networking.
Consider an example where a VirtualService using the Istio ingress gateway load-balances its traffic to two different services based on labels; each of those services can then have multiple pods.
Istio load balancing in this case works only at layer 7: it routes to a specific endpoint (one of the services) and relies on Kubernetes to handle the connections and everything else, including the Service's round-robin load balancing (layer 4) when there are multiple pods.
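A minimal sketch of that setup (all names are hypothetical):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: example-route                 # hypothetical name
spec:
  hosts:
  - "example.com"
  gateways:
  - istio-system/example-gateway      # an Istio ingress Gateway resource
  http:
  - route:
    - destination:
        host: service-a               # layer-7 choice made by Envoy
      weight: 50
    - destination:
        host: service-b
      weight: 50

service-a and service-b here are ordinary Kubernetes Services in the same namespace; if each one selects multiple pods, kube-proxy then spreads the connections across those pods at layer 4.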
The advantage of having a single Service with multiple pods is obviously easier configuration and management. With one pod per Service, each Service would need to be configured separately, and you would lose the ability to scale.
There is a great video on YouTube which partially covers this topic:
Life of a Packet through Istio by Matt Turner.
I highly recommend watching it, as it explains how Istio works at a fundamental level.
From the GCP console's perspective, a load balancer is a service, and related resources such as backend services, health checks, etc. come under it.
However, APIs are only available for the individual resources like backendService, address, healthCheck, etc.
In the UI we can see a direct relationship between a resource like a backend service and its load balancer, but the backend service API doesn't have a corresponding field.
Whereas the UI shows the backend service together with its load balancer, the supported fields from the backend service API are:
affinityCookieTtlSec,backends,cdnPolicy,connectionDraining,creationTimestamp,description,enableCDN,fingerprint,healthChecks,iap,id,kind,loadBalancingScheme,name,port,portName,protocol,region,selfLink,sessionAffinity,timeoutSec
I wanted to know whether there is a direct or indirect way to get a list of load balancers.
As mentioned by Patrick W, there is no direct entity 'load balancer'; it's just a collection of components. The list seen in the UI that appears to be the load balancer is actually the url-map component, which can be seen via the API with:
gcloud compute url-maps list
More information on the command is available in the gcloud reference documentation.
At the API level, there is no Load Balancer, only the components that make it up.
Your best bet to get a view similar to the UI is to list forwarding rules (global and regional). You can use gcloud compute forwarding-rules list which will show you all the forwarding rules in use (similar to the UI view), along with the IPs of each and the target (which may be a backend service or a target pool).
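For example, to tabulate roughly the columns the UI shows (the --format flag and these field names are standard gcloud forwarding-rule fields):

gcloud compute forwarding-rules list \
  --format="table(name, region, IPAddress, target)"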