I'm trying to make my app HA, so I created the following:
3 replicas
a PDB
liveness and readiness probes, and
pod anti-affinity
Is there anything else I'm missing?
This is the anti-affinity config:
...
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: ten
        topologyKey: kubernetes.io/hostname
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: tan
          topologyKey: topology.kubernetes.io/zone
Is there anything I'm missing?
Highly Available... I think these are the key practices for an application to be HA:
Never launch your app directly from a Pod - it won't survive a node crash. Even for single-pod applications, use a ReplicaSet or Deployment object, as they manage pods across the whole cluster and maintain a specified number of instances (even if it's only one).
Use an affinity configuration with custom rules to spread your pods based on your environment's architecture. A workload running in multiple instances spread across multiple nodes provides a second level of resilience to the app.
Define a livenessProbe for each container. Use a proper method: avoid ExecAction when your container can process HTTP requests. Remember to set a proper initialDelaySeconds parameter to give your app some time to initialize (especially for JVM-based apps like Spring Boot - they are slow to start their HTTP endpoints).
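As a minimal sketch of the first and last points combined (the name, image, ports, and probe paths here are assumptions for illustration, not taken from your setup):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                      # hypothetical name
spec:
  replicas: 3                       # the Deployment keeps 3 pods running, replacing any that die
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-app
    spec:
      containers:
        - name: app
          image: example/my-app:1.0 # placeholder image
          livenessProbe:
            httpGet:                # HTTP probe instead of ExecAction
              path: /healthz        # assumed health endpoint
              port: 8080
            initialDelaySeconds: 30 # give slow starters (e.g. JVM) time to boot
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready          # assumed readiness endpoint
              port: 8080
            initialDelaySeconds: 10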
You are seemingly following all these points, so you should be good.
However, if feasible, I would recommend trying to deploy the app on multiple clusters, or across multiple data centres, and running it in active-active mode. It can help add more nines to your availability.
Resource limits
You also need to add resource requests and limits to your workloads; it's necessary, otherwise CronJobs or other non-critical workloads may impact your business workloads.
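A rough sketch of what that looks like on a container (the numbers are arbitrary placeholders to tune for your app):

resources:
  requests:
    cpu: 250m        # guaranteed share, used for scheduling decisions
    memory: 256Mi
  limits:
    cpu: 500m        # container is throttled above this
    memory: 512Mi    # container is OOM-killed above this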
HPA - Pod autoscaling
There is also some chance of all three Pods getting killed by the liveness probe while the workload is under heavy traffic and the application cannot respond to the readiness & liveness checks. For this case, I would suggest implementing an HPA as well.
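A minimal sketch, assuming the Deployment is named my-app (a hypothetical name) and CPU metrics are available in the cluster:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # hypothetical Deployment name
  minReplicas: 3          # never drop below the HA baseline
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out before pods get saturated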
HA can be achieved by using multiple replicas; Kubernetes provides this feature exactly for HA. Furthermore, the Service object in Kubernetes helps load-balance traffic to one of the available replicas based on the liveness and readiness probes, which are responsible for identifying the pod as healthy and ready to receive requests, respectively.
Please refer to https://kubernetes.io/docs/concepts/services-networking/service/ and https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
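As a small illustration (the name and labels are assumed), a Service spreads traffic over whichever selected replicas are currently Ready:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app.kubernetes.io/name: my-app   # matches the Deployment's pod labels
  ports:
    - port: 80          # port exposed by the Service
      targetPort: 8080  # container port; only Ready pods receive traffic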
Related
As far as I understand, Istio Destination Rules can define load-balancing policies to reach a subset of a service, e.g. a subset based on different versions of the service. So the Destination Rules are the first level of load balancing.
The request will eventually reach a K8s Service, which is generally implemented by kube-proxy. Kube-proxy does simple load balancing with the pods in its backend. This is the second level of load balancing.
Is there a way to remove the second load balancer? For example, could we create a lot of service instances that offer the same service and can be load-balanced by Destination Rules, and then have only one pod per service instance, so that kube-proxy does not apply load balancing?
According to the Istio documentation:
Istio’s traffic routing rules let you easily control the flow of traffic and API calls between services. Istio simplifies configuration of service-level properties like circuit breakers, timeouts, and retries, and makes it easy to set up important tasks like A/B testing, canary rollouts, and staged rollouts with percentage-based traffic splits. It also provides out-of-box failure recovery features that help make your application more robust against failures of dependent services or the network.
Istio’s traffic management model relies on the Envoy proxies that are deployed along with your services. All traffic that your mesh services send and receive (data plane traffic) is proxied through Envoy, making it easy to direct and control traffic around your mesh without making any changes to your services.
If you’re interested in the details of how the features described in this guide work, you can find out more about Istio’s traffic management implementation in the architecture overview. The rest of this guide introduces Istio’s traffic management features.
This means that the Istio service mesh communicates via the Envoy proxy, which in turn relies on Kubernetes networking.
We can have an example where a VirtualService using the Istio ingress gateway load-balances its traffic to two different services based on labels. Then those services can have multiple pods.
Istio load balancing in this case works only at layer 7, which results in a route to a specific endpoint (one of the services), and relies on Kubernetes to handle connections and the rest, including the Service's round-robin load balancing (layer 4) in case of multiple pods.
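A rough sketch of that setup (the host, gateway, and service names here are made up for illustration):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example-routes
spec:
  hosts:
    - "app.example.com"      # hypothetical external host
  gateways:
    - example-gateway        # hypothetical Istio ingress gateway
  http:
    - route:                 # layer 7: Istio/Envoy picks one of the services
        - destination:
            host: service-a  # each Service selects its own pods via labels
          weight: 50
        - destination:
            host: service-b
          weight: 50
# layer 4: once a Service is chosen, kube-proxy balances across its pods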
The advantage of having a single service with multiple pods is obviously easier configuration and management. With one pod per service, each service would need to be reconfigured separately and would lose all of its scaling features.
There is a great video on YouTube which partially covers this topic:
Life of a Packet through Istio by Matt Turner.
I highly recommend watching it, as it explains how Istio works at a fundamental level.
We are looking to switch over to Kubernetes for our deployment on AWS. One area of concern is setting up the load balancer for the frontend application.
It appears recommended to use the "LoadBalancer" type Service in the cluster. However, I'm worried about this because there seems to be no way to specify the load balancer used, so any redeployment of the Service would necessarily change the DNS name used, resulting in downtime.
Is there a recommended practical way to stay on the same load balancer? Am I overthinking this, and is this acceptable for a generic SaaS application?
Well, the generic approach is this:
Use nginx or Traefik (L7 load balancers) as a static part of the architecture (rarely changed, except for upgrades).
You can add Ingress rules, which bind a DNS name to a service (say the frontend service in your case is bound to www.example-dns.com); the frontend service will have multiple Pods in the backend to which the traffic is sent.
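A minimal sketch of such an Ingress rule (the service name and port are assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
    - host: www.example-dns.com      # DNS name bound to the frontend service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend       # hypothetical Service name
                port:
                  number: 80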
Now there are multiple ways to do load balancing at the Pod level; a Horizontal Pod Autoscaler can be used for each service individually.
The nginx or Traefik instances will live inside the EKS boundary only.
I have three nodes, the master and two workers, inside my cluster. I want to know if it's possible with Istio to redirect all the traffic coming from one worker node directly to the other worker node (but not the traffic of Kubernetes).
Thanks for the help
Warok
Edit
Apparently, it's possible to route the traffic of one specific user to a specific version: https://istio.io/docs/tasks/traffic-management/request-routing/#route-based-on-user-identity. But the question is still open.
Edit 2
Assume that my nodes' names are node1 and node2; is the following YAML file right?
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: node1
  ...
spec:
  hosts:
    - node1
  tcp:
    - match:
        - port: 27017  # for now, I will just specify this port
      route:
        - destination:
            host: node2
I want to know if it's possible with Istio to redirect all the traffic coming from one worker node, directly to the other worker node (but not the traffic of Kubernetes).
Quick answer: no.
Istio works as a sidecar container that is injected into a pod. You can read more at What is Istio?
Istio lets you connect, secure, control, and observe services.
...
It is also a platform, including APIs that let it integrate into any logging platform, or telemetry or policy system. Istio’s diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.
...
You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices
I also recommend reading What is Istio? The Kubernetes service mesh explained.
It's also important to know why you would want to redirect traffic from one node to the other.
Without knowing that, I cannot advise any solution.
I have an application running on AWS EC2. In order to get high availability, we are planning to set up an AWS load balancer and host the application on a minimum of 2 EC2 instances.
We also heard about Docker Swarm, in which we can create a service with 2 managers on 2 separate EC2 instances and the swarm will take care of the rest (instead of using an ALB with 2 EC2 instances). It will balance the load across all the containers inside the cluster and also restart a container if anything makes it go down.
So I want to know which is the best choice for my case. The application won't get heavy load/traffic. The reason I chose a load balancer is high availability: if one instance goes down, the other will take over.
If there are any other choices that fulfill my requirements, they would be highly appreciated.
Thanks.
I presume it's a stateless application.
A combination of both Swarm & ALB is what you can go for, but you will need to incorporate autoscaling etc. sooner or later, which means you will need to manage and maintain the Swarm cluster yourself.
With an ALB you will get really good metrics, which you will surely miss while using Swarm.
BUT, you do have a few better options which will manage the cluster for you. You will just have to manage & maintain the Docker images:
Use ECS.
Use EKS (only in us-east-1 as of now).
Use Elastic Beanstalk Multicontainer Docker.
Ref -
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecs.html
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html
https://aws.amazon.com/eks/
Does anyone have any advice on how to minimize cross-AZ traffic for inter-pod communication when running Kubernetes in AWS? I want to keep my microservices pinned to the same availability zone, so that microservice-a residing in az-a will transmit its payload to microservice-b, also in az-a.
I know you can pin pods to a label and keep the traffic in the same AZ, but in addition to minimizing the cross-AZ traffic I also want to maintain HA by deploying to multiple AZs.
In case you're willing to use alpha features, you could use inter-pod affinity or node affinity rules to implement such behaviour without losing high availability.
You'll find details in the official documentation
Without that, you could just have one deployment pinned to one node and a second deployment pinned to another node, and one service which selects pods from both deployments - example code can be found here
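A rough sketch of the affinity approach on microservice-b's pod template, assuming made-up labels microservice-a and microservice-b (on older clusters the zone key was failure-domain.beta.kubernetes.io/zone):

# Prefer the zone(s) where microservice-a runs, to keep traffic in-zone,
# while still spreading microservice-b's own replicas across zones for HA.
affinity:
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: microservice-a        # hypothetical label
          topologyKey: topology.kubernetes.io/zone
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: microservice-b        # spread b's own replicas apart
          topologyKey: topology.kubernetes.io/zone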