How to set up a GCP LoadBalancer for mixed HTTPS and gRPC traffic - google-cloud-platform

I am trying to make sense of the GCP LoadBalancer for the use case of a mixed HTTPS and gRPC backend. The LoadBalancer documentation seems to indicate that you can/should use the HTTP(S) LoadBalancer, as that "includes HTTP/2". For the backend service I appear to be able to specify a named "grpc" port and set it to 7000, but if I use the gcloud command to view my backend services:
gcloud compute backend-services list --format=json
My service is shown to use portName "grpc" (correct) with port "80" (incorrect), even though I was prompted that the instance group had named ports, and I could (and did) choose "grpc:7000".
On the frontend side, I can only select ports 80 and 8080 for HTTP, or 443 for HTTPS. There is no mention of HTTP/2, but I guess "HTTPS includes HTTP/2".
Am I right that I cannot use the layer 7 LoadBalancer at all for my scenario? The documentation is not very explicit on ports, and if I search the web for gRPC I get loads of stories about load balancing Kubernetes-hosted apps.

In order to use gRPC you need to use HTTP/2
To use gRPC with your Google Cloud Platform applications, you must proxy requests end-to-end over HTTP/2. To do this with an HTTP(S) load balancer:
Configure an HTTPS load balancer.
Enable HTTP/2 as the protocol from the load balancer to the backends.
HTTP/2 and HTTPS are not one and the same; however, h2 (HTTP/2 over TLS) only works over HTTPS. And h2 is not enabled by default, so you need to enable it.
See: https://cloud.google.com/load-balancing/docs/https/ for further information.
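As a minimal gcloud sketch (the instance group, backend service, and zone names are placeholders; the named port grpc:7000 is taken from the question):

# Declare the named port the backends actually listen on.
gcloud compute instance-groups set-named-ports my-instance-group \
    --named-ports=grpc:7000 --zone=us-central1-a
# Point the backend service at that named port and switch its protocol to HTTP/2.
gcloud compute backend-services update my-backend-service \
    --port-name=grpc --protocol=HTTP2 --global

The frontend stays an ordinary HTTPS forwarding rule on 443; clients negotiate HTTP/2 via ALPN during the TLS handshake.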

Related

How to set up a GCP Global External HTTP(S) LoadBalancer for gRPC?

I have created a Google Cloud load balancer with the following configuration.
The backend is an unmanaged instance group; for example, it consists of one VM. The gRPC service is deployed on the VM (port 443). gRPC health checks are successful, but the gRPC client cannot connect to the service. I can't find a solution to this problem.
The last thing I found in the documentation:
If you use HTTP/2, you must use TLS. HTTP/2 without encryption is not supported.
Could this be the solution, and do I just need to secure the gRPC connection with SSL/TLS?
You need to enable TLS on the load balancer and also between the load balancer and your backend VM.
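A minimal gcloud sketch of those two pieces (the domain, certificate, proxy, url-map, and backend service names are placeholders):

# TLS from clients to the load balancer: a Google-managed certificate on the HTTPS proxy.
gcloud compute ssl-certificates create grpc-cert \
    --domains=grpc.example.com --global
gcloud compute target-https-proxies create grpc-proxy \
    --url-map=grpc-url-map --ssl-certificates=grpc-cert
# TLS from the load balancer to the VM: HTTP/2 as the backend protocol requires it.
gcloud compute backend-services update grpc-backend \
    --protocol=HTTP2 --global

The gRPC server on the VM must then serve TLS itself on port 443; a self-signed certificate is enough there, since the load balancer does not validate backend certificates.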

Where to configure websockets with an application load balancer on AWS EC2?

According to the AWS documentation, "WebSockets and Secure WebSockets support is available natively and ready for use on an Application Load Balancer."
However, when I select Application Load Balancer in EC2, I don't see any option other than HTTP and HTTPS.
I would like to use the secure websocket protocol (wss://) which I believe would be over TLS:8888.
How can I input this option?
The solution was to use HTTPS for the listener protocol, even though the browser is making requests to wss://.
For the port number, configuring both the listener and the environment instance to port 8888 works.
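As a hedged AWS CLI sketch (all ARNs and the certificate are placeholders), such a listener would look like:

# HTTPS listener on 8888; the ALB upgrades wss:// connections arriving on the same port.
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:region:account:loadbalancer/app/my-alb/id \
    --protocol HTTPS --port 8888 \
    --certificates CertificateArn=arn:aws:acm:region:account:certificate/id \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:account:targetgroup/my-tg/id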

how to set ports for GCP load balancer

Three Node.js web servers are all listening on port 3000. How can I set the port configuration (backend and frontend) for the load balancer?
I set the backend port to 3000 and the frontend to 80, but it's not working. I tried to use iptables to redirect 80 to 3000 on the instance, and that didn't work either. How can I set the load balancer ports?
Did you set url_map to direct traffic to different backend services?
You mentioned that there were three web servers; are they serving an identical service? If not, you need to define them separately, one backend service per web server. For example, set web server A as backend service A, web server B as backend service B, etc.
You can define a port for each backend service; this is the port to which traffic is directed from the instance group to each instance.
Simply speaking, if the three web servers are all different, you need to:
Define three named ports for the three web servers on the instance group
Set corresponding firewall rules to open the required ports on each instance
Run your web server on each instance on the specified ports
Map the traffic to the correct backend service with a url-map (see the gcloud sketch below)
The default frontend on port 80 should work fine; if needed, you could add a 443 frontend to provide HTTPS with an SSL certificate automatically renewed by Google.
All of the steps mentioned above can be done in the Google Cloud Console.
If you would like to know how each component of a GCP load balancer works in detail, you could refer to this article; reading just the concepts of how instances, instance groups, and backend services connect to each other is enough.
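If the three servers are instead identical replicas, a minimal gcloud sketch (all resource names and the zone are placeholders) of wiring port 3000 behind a port-80 frontend:

# Name the port the Node.js servers actually listen on.
gcloud compute instance-groups set-named-ports node-group \
    --named-ports=http:3000 --zone=us-central1-a
# Backend service that sends traffic to that named port.
gcloud compute backend-services create node-backend \
    --protocol=HTTP --port-name=http --health-checks=node-hc --global
gcloud compute backend-services add-backend node-backend \
    --instance-group=node-group --instance-group-zone=us-central1-a --global
gcloud compute url-maps create node-map --default-service=node-backend
gcloud compute target-http-proxies create node-proxy --url-map=node-map
# Frontend: port 80 on a global external address.
gcloud compute forwarding-rules create node-fe \
    --target-http-proxy=node-proxy --ports=80 --global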

How to get HTTP/2 working in a Kubernetes cluster using ingress-nginx

We have a number of services behind an API gateway which is itself behind ingress-nginx. We're trying to use HTTP/2 to speed up data transfer to the front-end but all of our connections are still being done with HTTP/1.1.
The connection from client to nginx is over HTTPS, but nginx communicates with our API gateway using HTTP, and the gateway also uses HTTP to communicate with the backend services.
Do we need to use HTTPS from end-to-end to get HTTP/2 to work? If so, what's the best way to set this up re: using certificates? If not, what could be causing the connection to drop to HTTP/1.1?
We are using ingress-nginx version 0.21.0, which has nginx 1.15.6 and OpenSSL 1.1.1; this should be sufficient to support TLS 1.3, ALPN, and HTTP/2. Our nginx-configuration ConfigMap has use-http2 set to true, and I can see that the pod's /etc/nginx/nginx.conf has a listen ... http2; line.
Edit 10/05/2019:
Further to the comments of @Barry Pollard and @Rico, I've found out that AWS Elastic Load Balancer, which sits in front of our ingress-nginx controller, doesn't support HTTP/2. I've cut nginx out of the stack, and our API gateway is being provisioned its own Network Load Balancer. However, we're still on HTTP/1.1. It looks like ASP.NET Core 2.2's HTTP server, Kestrel, uses HTTP/2 by default, so I'm not sure why the connection is still dropping to 1.1.
Like @Barry Pollard said, you shouldn't need HTTP/2 end-to-end to establish HTTP/2 connections on your browser.
It sounds like whatever you are using as a client is dropping to HTTP/1.1, so make sure you try with one of the following:
Chrome 51
Firefox 53
Edge 12
Internet Explorer 11
Opera 38
You didn't specify what architecture is fronting your nginx. Is it connected directly to the internet, or is it going through a cloud load balancer or a CDN? You can also test with Wireshark, as described here.
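One quick client-side check (assuming curl 7.50 or newer, which added the http_version write-out variable) is to ask curl which protocol was actually negotiated:

# Prints "2" if the server negotiated HTTP/2 via ALPN, "1.1" otherwise.
curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://example.com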

Exposing Istio Ingress Gateway as NodePort to GKE and run health check

I'm running the Istio ingress gateway in a GKE cluster. The Service runs as a NodePort. I'd like to connect it to a Google backend service; however, we need a health check that runs against Istio. Do you know if Istio exposes any HTTP endpoint to run a health check and verify its status?
Per this installation guide, "Istio requires no changes to the application itself. Note that the application must use HTTP/1.1 or HTTP/2.0 protocol for all its HTTP traffic because the Envoy proxy doesn't support HTTP/1.0: it relies on headers that aren't present in HTTP/1.0 for routing."
The healthcheck doesn't necessarily run against Istio itself, but against the whole stack behind the IP addresses you configured for the load balancer backend service. It simply requires a 200 response on / when invoked with no host name.
You can configure this by installing a small service like httpbin as the default path for your gateway.
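As a sketch (the manifest URL assumes the upstream Istio samples layout, and the gateway name is a placeholder), deploying httpbin and routing / on any host to it:

# Deploy the httpbin sample into the current namespace.
kubectl apply -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
# Route / (any host, since the health check sends no host name) to httpbin.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: healthcheck-default
spec:
  hosts:
  - "*"
  gateways:
  - my-gateway   # placeholder: your Gateway resource
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF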
You might also consider changing your Service to the LoadBalancer type, annotated to be internal to your network (no public IP). This will generate a backend service, complete with a health check, which you can borrow for your other load balancer. This method has worked for me when nesting load balancers (to migrate load), but not for a proxy like Google's IAP.
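A minimal kubectl sketch of that change (the namespace and Service name assume a default Istio install; the annotation key is the legacy GKE one, and newer GKE versions use networking.gke.io/load-balancer-type instead):

# Mark the ingress gateway Service as an internal load balancer (no public IP).
kubectl -n istio-system annotate service istio-ingressgateway \
    cloud.google.com/load-balancer-type=Internal
# Switch the Service type from NodePort to LoadBalancer.
kubectl -n istio-system patch service istio-ingressgateway \
    -p '{"spec":{"type":"LoadBalancer"}}'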