Exposing Istio Ingress Gateway as NodePort on GKE and running a health check - google-cloud-platform

I'm running the Istio Ingress Gateway in a GKE cluster. The Service runs as a NodePort. I'd like to connect it to a Google backend service; however, we need a health check that runs against Istio. Does Istio expose any HTTP endpoint I can use to run a health check and verify its status?

Per this installation guide, "Istio requires no changes to the application itself. Note that the application must use HTTP/1.1 or HTTP/2.0 protocol for all its HTTP traffic because the Envoy proxy doesn't support HTTP/1.0: it relies on headers that aren't present in HTTP/1.0 for routing."

The health check doesn't necessarily run against Istio itself, but against the whole stack behind the IP addresses you configured for the load balancer's backend service. It simply requires a 200 response on / when invoked with no host name.
You can configure this by installing a small service like httpbin as the default path for your gateway.
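For example (a sketch, assuming your Gateway resource is named my-gateway and httpbin is deployed as a Service named httpbin on port 8000), a catch-all VirtualService can route unmatched requests, including the health checker's bare GET /, to httpbin:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: default-health-route
spec:
  hosts:
  - "*"                # match requests with any (or no) Host header
  gateways:
  - my-gateway         # placeholder: your existing Gateway resource
  http:
  - route:
    - destination:
        host: httpbin  # placeholder: the httpbin Service
        port:
          number: 8000
```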
You might also consider changing your Service to a LoadBalancer type, annotated to be internal to your network (no public IP). This will generate a Backend Service, complete with healthcheck, which you can borrow for your other load balancer. This method has worked for me with nesting load balancers (to migrate load) but not for a proxy like Google's IAP.
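A sketch of that internal LoadBalancer variant (the annotation is the GKE-specific one; the selector assumes the stock istio-ingressgateway labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway-internal
  annotations:
    cloud.google.com/load-balancer-type: "Internal"  # internal LB, no public IP
spec:
  type: LoadBalancer
  selector:
    istio: ingressgateway
  ports:
  - name: http2
    port: 80
```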

Related

How to set up a GCP Global External HTTP(S) LoadBalancer for gRPC?

I have created a Google Cloud load balancer: configuration.
The backend is an unmanaged instance group. For example, it consists of one VM. The gRPC service is deployed on the VM (port 443). gRPC health checks are successful. But the gRPC client cannot connect to the service. I can't find a solution to this problem.
The last thing I found in the documentation:
If you use HTTP/2, you must use TLS. HTTP/2 without encryption is not
supported.
Could this be a solution and I just need to secure the gRPC connection with SSL/TLS?
You need to enable TLS on the load balancer and also between the load balancer and your backend VM.
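Sketched with gcloud (all resource names here are placeholders, and the VM must itself serve TLS on its gRPC port):

```shell
# TLS on the frontend: an HTTPS proxy with a certificate
gcloud compute ssl-certificates create my-cert \
    --certificate=cert.pem --private-key=key.pem
gcloud compute target-https-proxies create my-https-proxy \
    --url-map=my-url-map --ssl-certificates=my-cert

# TLS to the backend: HTTP/2 from the load balancer to the VM
gcloud compute backend-services update my-backend-service \
    --global --protocol=HTTP2
```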

GCP Kubernetes - Health Check Fails in Load Balancer with NEG backends

Here is what exists and works OK:
Kubernetes cluster in Google Cloud with deployed 8 workloads - basically GraphQL microservices.
Each of the workloads has a Service that exposes port 80 via a NEG (Network Endpoint Group), so each workload has a ClusterIP in the 10.12.0.0/20 network. All of the services live in a custom namespace, "microservices".
One of the workloads (API gateway) is exposed to the Internet via Global HTTP(S) Load Balancer. Its purpose is to handle all requests and route them to the right microservice.
Now, I needed to expose all of the workloads to the outside world so they can be reached individually without going through the gateway.
For this, I have created:
a Global Load Balancer, added backends (which refer to the NEGs), configured routing (the URL path determines which workload the request goes to), and an external IP
a Health Check that is used by Load Balancer for each of the backends
a firewall rule that allows traffic on TCP port 80 from the Google Health Check services 35.191.0.0/16, 130.211.0.0/22 to all hosts in the network.
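For reference, such a firewall rule can be created like this (a sketch; the network name is a placeholder):

```shell
gcloud compute firewall-rules create allow-google-health-checks \
    --network=my-network \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=35.191.0.0/16,130.211.0.0/22
```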
The problem: Health Check fails and thus the load balancer does not work - it gives error 502.
What I checked:
logs show that the firewall rule allows traffic
logs for the Health Check show only the changes I made to it and no other activity, so I don't know what happens inside.
connected via SSH to the VM hosting the Kubernetes node and confirmed that the ClusterIPs (10.12.xx.xx) of each workload return HTTP status 200.
connected via SSH to a VM created for test purposes; from this VM I cannot reach any of the ClusterIPs (10.12.xx.xx).
It seems that for some reason traffic from the Health Check or my test VM does not get to the destination. What did I miss?

How to set up a GCP LoadBalancer for mixed HTTPS and gRPC traffic

I am trying to make sense of the GCP LoadBalancer for the use case of mixed HTTPS and gRPC backend. The LoadBalancer documentation seems to indicate that you can/should use the HTTP(S) LoadBalancer, as that "includes HTTP/2". For backend services I appear to be able to specify a named "grpc" port and set it to be number 7000, but if I use the gcloud command to view my backend services:
gcloud compute backend-services list --format=json
My service is shown to use portName "grpc" (correct) with port "80" (incorrect), even though I was prompted that the instance group had named ports, and I could (and did) choose "grpc:7000".
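For what it's worth, the named port can be set explicitly on both sides (a sketch; group and service names are placeholders):

```shell
# declare the named port on the instance group
gcloud compute instance-groups set-named-ports my-instance-group \
    --zone=us-central1-a --named-ports=grpc:7000

# make the backend service resolve its port through that name
gcloud compute backend-services update my-backend-service \
    --global --port-name=grpc
```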
On the frontend side, I can only select ports 80 and 8080 for HTTP, or 443 for HTTPS. No mention of HTTP/2, but I guess "HTTPS includes HTTP/2".
Am I right that I cannot use the layer 7 load balancer at all for my scenario? The documentation is not very explicit about ports, and if I search the web for gRPC I get loads of stories on load balancing Kubernetes-hosted apps.
In order to use gRPC you need to use HTTP/2
To use gRPC with your Google Cloud Platform applications, you must proxy requests end-to-end over HTTP/2. To do this with an HTTP(S) load balancer:
Configure an HTTPS load balancer.
Enable HTTP/2 as the protocol from the load balancer to the backends.
HTTP/2 and HTTPS are not one and the same; however, h2 (HTTP/2 over TLS) only works over HTTPS, and it is not enabled by default, so you need to enable it.
See: https://cloud.google.com/load-balancing/docs/https/ for further information.
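The second step corresponds to a single backend-service update (service name is a placeholder):

```shell
gcloud compute backend-services update my-backend-service \
    --global --protocol=HTTP2
```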

Gremlin-server health check endpoint for AWS ELB

Is there any HTTP/TCP endpoint for a gremlin-server health check? Currently, we are using the default TCP port but it doesn't seem to indicate the gremlin-server's health.
We noticed that gremlin-server crashed and was not running but the health check kept passing. We are using AWS Classic Load Balancer.
Have you enabled an HTTP endpoint for the Gremlin service? The Gremlin Server documentation explains:
While the default behavior for Gremlin Server is to provide a
WebSocket-based connection, it can also be configured to support plain
HTTP web service. The HTTP endpoint provides for a communication
protocol familiar to most developers, with a wide support of
programming languages, tools and libraries for accessing it.
If so, you can use an ELB HTTP health check to a target like this:
HTTP:8182/?gremlin=100-1
With a properly configured service, this query will return a 200 HTTP status code, which will indicate to the ELB that the service is healthy.
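Enabling that endpoint is a one-line change in gremlin-server.yaml (a sketch using the default port; WsAndHttpChannelizer serves both protocols on the same port):

```yaml
host: 0.0.0.0
port: 8182
# serve WebSocket and plain HTTP on the same port
channelizer: org.apache.tinkerpop.gremlin.server.channel.WsAndHttpChannelizer
```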

How to access client IP of an HTTP request from Google Container Engine?

I'm running a gunicorn+flask service in a docker container with Google Container Engine. I set up the cluster following the tutorial at http://kubernetes.io/docs/hellonode/
The REMOTE_ADDR environment variable always contains an internal address in the Kubernetes cluster. What I was looking for is HTTP_X_FORWARDED_FOR, but it's missing from the request headers. Is it possible to configure the service to retain the external client IP in the requests?
If anyone gets stuck on this there is a better approach.
You can use the following settings, depending on your Kubernetes version:
spec.externalTrafficPolicy: Local
on 1.7+
or the annotation
service.beta.kubernetes.io/external-traffic: OnlyLocal
on 1.5-1.6
Earlier versions do not support this.
source: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/
note that there are caveats:
https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#caveats-and-limitations-when-preserving-source-ips
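On 1.7+ the setting is a field on the Service spec rather than an annotation, e.g. (a sketch; name, selector, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-flask-service          # placeholder
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local    # preserve the client source IP
  selector:
    app: my-flask-app             # placeholder
  ports:
  - port: 80
    targetPort: 8000
```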
I assume you set up your service by setting the service's type to LoadBalancer? It's an unfortunate limitation of the way incoming network-load-balanced packets are routed through Kubernetes right now that the client IP gets lost.
Instead of using the service's LoadBalancer type, you could set up an Ingress object to integrate your service with a Google Cloud HTTP(s) Load Balancer, which will add the X-Forwarded-For header to incoming requests.
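A minimal Ingress sketch for that approach (service name and port are placeholders; the apiVersion matches the Kubernetes releases this answer dates from):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-flask-ingress
spec:
  backend:
    serviceName: my-flask-service   # placeholder
    servicePort: 80
```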