Automatic restarts on GCP, GKE (Workloads, loadBalancer)

Once a week the load balancer stops communicating with the Workloads.
On the one hand, I can see that the Workload was restarted, since I can see its last restart time. To fix this, I restart the Workloads manually and the connection comes back.
My questions are: why does it restart? And why does a restart leave the load balancer without communication with the Workloads?
More info:
The load balancer is of type Internal HTTP(S); we provision it from a YAML file deployed in Kubernetes:
# internal-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ilb-cym-ingress
  namespace: k8s-cym
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
The problem is that every so often (a week, 5 days ...) we receive an alert we have configured to fire when our Workload is restarted, and after that restart the backend for this Workload stays in an Unhealthy state in the Ingress.
On the other hand, if we redeploy from the GCP console we do not run into any problems.
Regarding the health check configuration, we have an interval of 60 sec with a timeout of 30 sec.
As for the thresholds, the healthy threshold is 1 attempt and the unhealthy threshold is 5 attempts; we have been varying these parameters to see if we could solve it, but without success.
Finally, I wanted to mention that when the Workload starts we have an initial delay of 20 sec to give the database time to connect correctly. I don't know if this can interfere with our health checks.
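For reference, the 20-second initial delay is set on the container's readiness probe, roughly like the sketch below (the probe type, path and port are simplified assumptions rather than our exact manifest):
readinessProbe:
  httpGet:
    path: /healthz          # assumed health endpoint
    port: 8080              # assumed container port
  initialDelaySeconds: 20   # give the database time to connect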


Target health check fails - AWS Network Load Balancer

I deployed a web app on AWS using kOps.
I have two nodes and set up a Network Load Balancer.
The target group of the NLB has two nodes (each node is an instance made from the same template).
The load balancer actually seems to be working, judging by the ingress-nginx-controller logs.
Requests are being distributed over the pods correctly, and I can access the service via the ingress external address.
But when I go to AWS Console / Target Group, one of the two nodes is marked as unhealthy, and I am concerned about that.
Nodes are running correctly.
I exec'd a shell into the nginx-controller pod and tried curl against both nodes using their internal IP addresses.
For the healthy node I get an nginx response, and for the unhealthy node it times out.
I do not know why nginx responds on one of the nodes and not on the other.
Could anybody let me know the possible reasons?
I had exactly the same problem before, and this should be documented somewhere on AWS or Kubernetes. The answer below is copied from AWS Premium Support.
Short description
The NGINX Ingress Controller sets the spec.externalTrafficPolicy option to Local to preserve the client IP. Also, requests aren't routed to unhealthy worker nodes. The following troubleshooting assumes that you don't need to maintain the cluster IP address or preserve the client IP address.
Resolution
If you check the ingress controller service you will see the External Traffic Policy field set to Local.
$ kubectl -n ingress-nginx describe svc ingress-nginx-controller
Output:
Name: ingress-nginx-controller
Namespace: ingress-nginx
...
External Traffic Policy: Local
...
This Local setting drops packets that are sent to Kubernetes nodes that aren't running instances of the NGINX Ingress Controller. Assign NGINX pods to the nodes that you want to schedule the NGINX Ingress Controller on.
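One way to do that pinning (a sketch that is not part of the AWS answer; the ingress-ready label and the Deployment name are assumptions) is to label the chosen nodes and add a matching nodeSelector to the controller Deployment:
kubectl label node <node-name> ingress-ready=true
kubectl -n ingress-nginx patch deployment ingress-nginx-controller \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"ingress-ready":"true"}}}}}'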
Alternatively, update the spec.externalTrafficPolicy option to Cluster:
$ kubectl -n ingress-nginx patch service ingress-nginx-controller -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'
Output:
service/ingress-nginx-controller patched
By default, NodePort services perform source address translation. For NGINX, this means that the source IP of an HTTP request is always the IP address of the Kubernetes node that received the request. If you set the externalTrafficPolicy field in the ingress-nginx service specification to Cluster, then you can't maintain the source IP address.

Google loadbalancer health checkup fails

I installed the Kubernetes ingress controller on GKE following the official documentation:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.46.0/deploy/static/provider/cloud/deploy.yaml
The ingress controller runs fine.
ingress-nginx-admission-create-dvkgp 0/1 Completed 0 5h29m
ingress-nginx-admission-patch-58l4z 0/1 Completed 1 5h29m
ingress-nginx-controller-65d7564f46-2rtjs 1/1 Running 0 5h29m
It creates a TCP load balancer, a health check and firewall rules automatically. My Kubernetes cluster has 3 nodes. Interestingly, the health check fails for 2 instances; it passes only for the instance where the ingress controller is running. I debugged it but didn't find any clue. Could someone help me with this?
If you were to look into the deploy.yaml you applied you would see:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
Notice the externalTrafficPolicy: Local. It is used to preserve the client source IP.
It's even better explained here: Source IP for Services with Type=LoadBalancer
From k8s docs:
However, if you're running on Google Kubernetes Engine/GCE, setting the same service.spec.externalTrafficPolicy field to Local forces nodes without Service endpoints to remove themselves from the list of nodes eligible for loadbalanced traffic by deliberately failing health checks.
These health checks are designed to fail. It works that way so that client IPs can be preserved.
Notice that the one node listed as healthy is the one where the ingress-nginx-controller pod runs. Delete this pod and wait for it to be rescheduled on a different node; that other node should then become healthy. Run 3 pod replicas, one on every node, and all nodes will be healthy.
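A quick way to try that (a sketch; it assumes the Deployment keeps the default ingress-nginx-controller name, and scaling alone does not guarantee one replica per node, so you may also want pod anti-affinity or a DaemonSet):
kubectl -n ingress-nginx scale deployment ingress-nginx-controller --replicas=3
kubectl -n ingress-nginx get pods -o wide   # check which node each replica landed on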
One possible reason is firewall rules. Google has documented the IP ranges and port details of its health check probers. You have to configure an ingress allow rule so that the health check probes can reach your backends.
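For example, an allow rule for the documented health check source ranges could look roughly like this (the rule name, network and port 80 are assumptions for illustration; use the port your backends actually serve):
gcloud compute firewall-rules create allow-gcp-health-checks \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:80 \
  --source-ranges=130.211.0.0/22,35.191.0.0/16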
For additional debugging details check this Google Cloud Platform blog: Debugging Health Checks in Load Balancing on Google Compute Engine

Adding session affinity via BackendConfig results in service outage (502)

After adding session affinity via client IPs to my service, my page started returning 503s, and I do not yet understand why that happened.
The service itself did not throw any errors, but in the load balancer (LB) logs I can see that the LB could not connect to the service anymore.
I am quite sure the outage was a result of adding the backend config, because the moment I removed the annotation, the page recovered.
It would be really great if you could help me find out why that happened and how to prevent it going forward, as I still want to enable session affinity.
Service annotations:
apiVersion: v1
kind: Service
metadata:
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "SimonsBackendConf"}'
...
Backend config:
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: SimonsBackendConf
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"
Log entry which leads me to think the service might not have been reachable:
{
  "jsonPayload": {
    "@type": "type.googleapis.com/google.cloud.loadbalancing.type.LoadBalancerLogEntry",
    "statusDetails": "backend_connection_closed_before_data_sent_to_client"
  },
  "httpRequest": {
    "status": 502,
    ...
  },
  ...
}
Setup:
gke
L7 google managed load balancer
In order for session affinity to work, you need to be running a VPC-native cluster, as session affinity requires network endpoint groups. You will also need to create an Ingress resource for your service.
Assuming you have a VPC-native cluster, you'll need to add an additional annotation to your service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"default": "SimonsBackendConf"}'
    cloud.google.com/neg: '{"ingress": true}'
...
Note that the backend-config annotation is now GA as well (not sure which GKE version you are on)
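For completeness, a minimal Ingress in front of the Service could look roughly like this (a sketch; the Ingress name, Service name and port are placeholders, not taken from the question):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simons-ingress            # placeholder name
spec:
  defaultBackend:
    service:
      name: simons-service        # placeholder Service name
      port:
        number: 80                # placeholder Service port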
When a container doesn't explicitly handle SIGTERM, it immediately terminates and stops handling requests. The load balancer continues to send incoming traffic to the terminated container, leading to 502 errors.
The resolution for this issue is to configure containers to handle SIGTERM [2] and continue responding to requests throughout the termination grace period (30 seconds by default). Configure Pods to begin failing health checks when they receive SIGTERM. This signal lets the containers know that they are going to be shut down soon, and it also signals the load balancer to stop sending traffic to the Pod while endpoint deprogramming is in progress.
Your code should listen for this event and start shutting down cleanly at this point. This may include stopping any long-lived connections (such as WebSocket streams), saving the current state, or anything like that.
If your application does not shut down gracefully and instead stops responding to requests as soon as it receives a SIGTERM, the preStop hook [3] can be used to delay the SIGTERM and keep serving traffic while endpoint deprogramming is in progress:
lifecycle:
  preStop:
    exec:
      # if SIGTERM triggers a quick exit; keep serving traffic instead
      command: ["sleep","60"]
Refer to the documentation linked below for more detailed information on this [1].
[1] https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing#traffic_does_not_reach_endpoints
[2] SIGTERM: https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace
[3] preStop hook: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-details

Exposing a K8s TCP Service Endpoint to the Public Internet Without a Load Balancer

So I'm working on a project that involves managing many postgres instances inside of a k8s cluster. Each instance is managed using a Stateful Set with a Service for network communication. I need to expose each Service to the public internet via DNS on port 5432.
The most natural approach here is to use the k8s Load Balancer resource and something like external dns to dynamically map a DNS name to a load balancer endpoint. This is great for many types of services, but for databases there is one massive limitation: the idle connection timeout. AWS ELBs have a maximum idle timeout limit of 4000 seconds. There are many long running analytical queries/transactions that easily exceed that amount of time, not to mention potentially long-running operations like pg_restore.
So I need some kind of solution that allows me to work around the limitations of Load Balancers. Node IPs are out of the question since I will need port 5432 exposed for every single postgres instance in the cluster. Ingress also seems less than ideal since it's a layer 7 proxy that only supports HTTP/HTTPS. I've seen workarounds with nginx-ingress involving some configmap chicanery, but I'm a little worried about committing to hacks like that for a large project. ExternalName is intriguing but even if I can find better documentation on it I think it may end up having similar limitations as NodeIP.
Any suggestions would be greatly appreciated.
The Kubernetes ingress controller implementation Contour from Heptio can proxy TCP streams when they are encapsulated in TLS. This is required because the SNI field of the TLS handshake is used to direct the connection to the correct backend service.
Contour can handle Ingresses, but additionally introduces a new ingress API, IngressRoute, which is implemented via a CRD. The TLS connection can be terminated at your backend service. An IngressRoute might look like this:
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: postgres
  namespace: postgres-one
spec:
  virtualhost:
    fqdn: postgres-one.example.com
    tls:
      passthrough: true
  tcpproxy:
    services:
    - name: postgres
      port: 5432
  routes:
  - match: /
    services:
    - name: dummy
      port: 80
HAProxy supports TCP load balancing. You can look at HAProxy as a proxy and load balancer for the Postgres databases; it supports both TLS and non-TLS connections.
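As a rough sketch of that idea (the names, the backend address and the timeouts are illustrative assumptions, not a tested setup), a ConfigMap carrying a TCP-mode haproxy.cfg for one Postgres Service could look like this, mounted into an HAProxy pod:
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-postgres            # placeholder name
data:
  haproxy.cfg: |
    defaults
      mode tcp
      timeout connect 10s
      timeout client  8h            # generous timeouts for long-running queries
      timeout server  8h
    frontend pg_in
      bind *:5432
      default_backend pg_out
    backend pg_out
      server pg1 postgres-one.postgres-one.svc.cluster.local:5432 check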

GCP destination group instances not being checked

I'm trying to create a TCP/UDP load balancer on GCP to allow HA for my service, but I've noticed that when I create a destination group, all instances in that group are marked as unhealthy and are not being probed by Google at all (I checked the machines' logs to verify it). The firewall is open because this is for testing purposes, so I'm sure that is not the problem.
I've created an HTTP(S) load balancer using a backend with a similar check configuration, and there the same machine is marked as healthy, so it is not a problem with that machine (and in that case the logs do show Google actually checking the instance).
Both checks are HTTP on port 80, so I can't see where the problem is or what the difference is between the two kinds of load balancer checks.
I've also tried disabling the health check, but the instances are still marked as unhealthy and traffic is not being sent to any of them, so the load balancer is not useful at all.
Is any other configuration necessary to make it check the instances?
Thanks and greetings!!
Creating a TCP load balancer
When you're using any of the Google Cloud load balancers, you need not expose your VM's external ports to the internet, only your load balancer needs to be able to reach it.
The steps to create a TCP load balancer are described here. I find it convenient to use gcloud and run the commands, but you can also use the Cloud Console UI to achieve the same result.
I tried the steps below and they work for me (you can easily modify this to make it work with UDP as well - remember you still need HTTP health checks even when using UDP load balancing):
# Create 2 new instances
gcloud compute instances create vm1 --zone us-central1-f
gcloud compute instances create vm2 --zone us-central1-f
# Make sure you have some service running on port 80 on these VMs after creation.
# Create an address resource to act as the frontend VIP.
gcloud compute addresses create net-lb-ip-1 --region us-central1
# Create a HTTP health check (by default uses port 80).
$ gcloud compute http-health-checks create hc-1
# Create a target pool associated with the health check you just created.
gcloud compute target-pools create tp-1 --region us-central1 --http-health-check hc-1
# Add the instances to the target pool
gcloud compute target-pools add-instances tp-1 --instances vm1,vm2 --instances-zone us-central1-f
# Create a forwarding rule associated with the frontend VIP address we created earlier
# which will forward the traffic to the target pool.
$ gcloud compute forwarding-rules create fr-1 --region us-central1 --ports 80 --address net-lb-ip-1 --target-pool tp-1
# Describe the forwarding rule
gcloud compute forwarding-rules describe fr-1 --region us-central1
IPAddress: 1.2.3.4
IPProtocol: TCP
creationTimestamp: '2017-07-19T10:11:12.345-07:00'
description: ''
id: '1234567890'
kind: compute#forwardingRule
loadBalancingScheme: EXTERNAL
name: fr-1
portRange: 80-80
region: https://www.googleapis.com/compute/v1/projects/PROJECT_NAME/regions/us-central1
selfLink: https://www.googleapis.com/compute/v1/projects/PROJECT_NAME/regions/us-central1/forwardingRules/fr-1
target: https://www.googleapis.com/compute/v1/projects/PROJECT_NAME/regions/us-central1/targetPools/tp-1
# Check the health status of the target pool and verify that the
# target pool considers the backend instances to be healthy
$ gcloud compute target-pools get-health tp-1
---
healthStatus:
- healthState: HEALTHY
  instance: https://www.googleapis.com/compute/v1/projects/PROJECT_NAME/zones/us-central1-f/instances/vm1
  ipAddress: 1.2.3.4
kind: compute#targetPoolInstanceHealth
---
healthStatus:
- healthState: HEALTHY
  instance: https://www.googleapis.com/compute/v1/projects/PROJECT_NAME/zones/us-central1-f/instances/vm2
  ipAddress: 1.2.3.4
kind: compute#targetPoolInstanceHealth
HTTP Health Checks are required for non-proxy TCP/UDP load balancers
If you're using a UDP load balancer (which is considered Network Load Balancing in Google Cloud), you will need to spin up a basic HTTP server which can respond to HTTP health checks, in addition to your service which is listening on a UDP port for incoming traffic.
The same also applies to non-proxy based TCP load balancers (which is also considered Network Load balancing in Google Cloud).
This is documented here.
Health checking
Health checks ensure that Compute Engine forwards new connections only
to instances that are up and ready to receive them. Compute Engine
sends health check requests to each instance at the specified
frequency; once an instance exceeds its allowed number of health check
failures, it is no longer considered an eligible instance for
receiving new traffic. Existing connections will not be actively
terminated which allows instances to shut down gracefully and to close
TCP connections.
The health check continues to query unhealthy instances, and returns
an instance to the pool once the specified number of successful checks
is met.
Network load balancing relies on legacy HTTP Health checks for
determining instance health. Even if your service does not use HTTP,
you'll need to at least run a basic web server on each instance that
the health check system can query.
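If the backend service itself does not speak HTTP, a quick way to satisfy this legacy HTTP health check while testing (a sketch, assuming python3 is available and port 80 is free on the instance; for production you would run a proper health endpoint) is to start a trivial web server on each VM:
sudo python3 -m http.server 80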