How to set up a GCP Global External HTTP(S) LoadBalancer for gRPC? - google-cloud-platform

I have created a Google Cloud load balancer (see my configuration).
The backend is an unmanaged instance group; for example, it consists of one VM. The gRPC service is deployed on the VM on port 443. The gRPC health checks succeed, but the gRPC client cannot connect to the service. I can't find a solution to this problem.
The last thing I found in the documentation:
If you use HTTP/2, you must use TLS. HTTP/2 without encryption is not supported.
Could this be the solution? Do I just need to secure the gRPC connection with SSL/TLS?

You need to enable TLS on the load balancer and also between the load balancer and your backend VM.
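A rough gcloud sketch of what that looks like end to end (the resource names grpc-backend, grpc-url-map, grpc-https-proxy and the certificate files are placeholders for your own setup):

# Frontend leg: terminate TLS at the load balancer with your certificate
gcloud compute ssl-certificates create grpc-frontend-cert \
    --certificate=fullchain.pem --private-key=privkey.pem
gcloud compute target-https-proxies create grpc-https-proxy \
    --url-map=grpc-url-map --ssl-certificates=grpc-frontend-cert

# Backend leg: speak HTTP/2 (and therefore TLS) to the VM; the gRPC server
# on port 443 must present a certificate of its own. A self-signed one is
# enough, since the load balancer does not validate the backend certificate.
gcloud compute backend-services update grpc-backend \
    --global --protocol=HTTP2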

Related

Where to configure websockets with an application load balancer on AWS EC2?

According to the AWS documentation, "WebSockets and Secure WebSockets support is available natively and ready for use on an Application Load Balancer."
However, when I select Application Load Balancer in EC2, I don't have any option other than HTTP and HTTPS:
I would like to use the secure websocket protocol (wss://) which I believe would be over TLS:8888.
How can I input this option?
The solution was to use HTTPS for the listener protocol, even though the browser is making requests to wss://.
For port number, configuring both the listener and environment instance to port 8888 works.
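For reference, a hedged AWS CLI sketch of that listener (all ARNs below are placeholders; it assumes a target group already registered with the instance on port 8888):

# HTTPS listener on 8888; the browser's wss:// upgrade rides over this
# TLS connection, and the ALB forwards it to the target group.
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:region:account:loadbalancer/app/my-alb/id \
    --protocol HTTPS --port 8888 \
    --certificates CertificateArn=arn:aws:acm:region:account:certificate/id \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:account:targetgroup/my-tg/id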

502 Server Error after configuring the Load Balancer in Google Cloud Platform

I am using WordPress Certified by Bitnami and Automattic, and one VM Instance running in Google Cloud Compute Engine.
I configured a free SSL certificate from Let's Encrypt for my website and also configured the Certbot Auto-Renewal script.
I tried using Cloudflare and I was sometimes receiving 5xx errors, mostly 522 timeout errors. I stopped using the Cloudflare service and tried to configure a GCP load balancer for my VM instance instead.
I created an unmanaged instance group and configured the HTTP protocol for my backend service, with Cloud CDN enabled in the load balancer. For the frontend, I configured both HTTP and HTTPS protocols and created a Google-managed SSL certificate for the HTTPS frontend.
(The SSL certificate is ACTIVE)
I used this link to configure my load balancer in Google Cloud Platform:
https://docs.bitnami.com/google-templates/how-to/configure-lb-ssl-google-templates/
The problem is that I have 2 SSL Certificates and I get 502 Server Error:
"Error: Server Error. The server encountered a temporary error and could not complete your request. Please try again in 30 seconds."
I don't know how to solve this problem.
I just want to use a very basic and common configuration for my website.
I also want to know why I received a 522 timeout error from Cloudflare and how to solve it.
I need a quick response and appreciate your answers and help in advance.
I would advise you to follow [1] to create an HTTPS load balancer with the backend service [2]. Once you create that, you can enable Cloud CDN [3].
Regarding the errors, make sure that your backend instance is healthy and supports the HTTP/2 protocol. You can verify this by testing connectivity to the backend instance over HTTP/2.
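As a quick check (the internal IP and ports below are placeholders for your backend), curl can confirm whether the instance actually answers over HTTP/2:

# Backend serving TLS: negotiate HTTP/2 via ALPN
curl -vk --http2 https://10.128.0.2:443/
# Backend serving cleartext: force HTTP/2 without the upgrade handshake
curl -v --http2-prior-knowledge http://10.128.0.2:80/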
After you verify that the VM uses the HTTP/2 protocol, make sure your firewall setup allows the health checker and the load balancer to reach the instance.
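Google's health checkers and the load balancer proxies connect from the documented ranges 130.211.0.0/22 and 35.191.0.0/16, so a firewall rule along these lines (rule name, network, and ports are placeholders) is usually what is needed:

gcloud compute firewall-rules create allow-glb-health-checks \
    --network=default --direction=INGRESS --action=ALLOW \
    --rules=tcp:80,tcp:443 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16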
If there are no problems with the firewall setup, ensure that the load balancer is configured to talk to the correct port on the VM. I would also suggest walking through [4] for more steps you can take to troubleshoot this issue.
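Two commands that help confirm the port wiring and the resulting health state (the backend service name is a placeholder):

# Which named port and protocol the load balancer uses towards the VM
gcloud compute backend-services describe my-backend-service --global \
    --format="value(portName,protocol)"
# Whether the load balancer currently considers the instance healthy
gcloud compute backend-services get-health my-backend-service --global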

How to set up a GCP LoadBalancer for mixed HTTPS and gRPC traffic

I am trying to make sense of the GCP LoadBalancer for the use case of a mixed HTTPS and gRPC backend. The LoadBalancer documentation seems to indicate that you can/should use the HTTP(S) LoadBalancer, as that "includes HTTP/2". For backend services I appear to be able to specify a named "grpc" port and set it to number 7000, but if I use the gcloud command to view my backend services:
gcloud compute backend-services list --format=json
My service is shown to use portName "grpc" (correct) with port "80" (incorrect), even though I was prompted that the instance group had named ports and I could (and did) choose "grpc:7000".
On the frontend side, I can only select ports 80 and 8080 for HTTP, or 443 for HTTPS. No mention of HTTP/2, but I guess "HTTPS includes HTTP/2".
Am I right in that I cannot use the layer 7 LoadBalancer at all for my scenario? The documentation is not very explicit on ports, and if I search the Web for gRPC I get loads of stories on LoadBalancing Kubernetes-hosted apps.
In order to use gRPC you need to use HTTP/2.
To use gRPC with your Google Cloud Platform applications, you must proxy requests end-to-end over HTTP/2. To do this with an HTTP(S) load balancer:
Configure an HTTPS load balancer.
Enable HTTP/2 as the protocol from the load balancer to the backends.
HTTP/2 and HTTPS are not one and the same; however, on this load balancer HTTP/2 (h2) only works over TLS. HTTP/2 towards the backends is also not enabled by default; you need to enable it.
See: https://cloud.google.com/load-balancing/docs/https/ for further information.
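A hedged gcloud sketch of the two settings that matter for the scenario above, assuming an instance group named grpc-ig in zone us-central1-a and a backend service named grpc-backend:

# Make sure the named port on the instance group really maps grpc to 7000
gcloud compute instance-groups set-named-ports grpc-ig \
    --zone=us-central1-a --named-ports=grpc:7000
# Point the backend service at that named port and switch it to HTTP/2
gcloud compute backend-services update grpc-backend \
    --global --port-name=grpc --protocol=HTTP2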

Exposing Istio Ingress Gateway as NodePort to GKE and run health check

I'm running the Istio Ingress Gateway in a GKE cluster. The Service runs as a NodePort. I'd like to connect it to a Google backend service. However, we need a health check that runs against Istio. Do you know if Istio exposes any HTTP endpoint to run a health check and verify its status?
Per this installation guide, "Istio requires no changes to the application itself. Note that the application must use HTTP/1.1 or HTTP/2.0 protocol for all its HTTP traffic because the Envoy proxy doesn't support HTTP/1.0: it relies on headers that aren't present in HTTP/1.0 for routing."
The health check doesn't necessarily run against Istio itself, but against the whole stack behind the IP addresses you configured for the load balancer backend service. It simply requires a 200 response on / when invoked with no host name.
You can configure this by installing a small service like httpbin as the default path for your gateway.
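To sanity-check that requirement, a request like the following (node IP and NodePort are placeholders) should return 200 before you wire up the backend service:

# Mimics the GCP health checker: a plain GET / with no application host header
curl -s -o /dev/null -w "%{http_code}\n" http://10.128.0.5:31380/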
You might also consider changing your Service to a LoadBalancer type, annotated to be internal to your network (no public IP). This will generate a backend service, complete with a health check, which you can borrow for your other load balancer. This method has worked for me when nesting load balancers (to migrate load), but not for a proxy like Google's IAP.

Secure Web Socket (wss) using AWS Load Balancer

I have a small nodejs application containing a web socket server.
The app is hosted inside an ecs container so it is basically a docker image running on an ec2 instance.
The web socket works as expected over ws://. I use port 5000 for this.
In order to use it on my SSL-secured website (HTTPS), I need to use a secure WebSocket connection over wss://.
To achieve that, I've created a certificate on AWS (like many times before) and then created a load balancer.
I tried an application load balancer, a network load balancer and the classic load balancer (previous generation).
I read a few answers here on StackOverflow and followed the instructions as well as some tutorials found using google.
I have tried a lot without success, and each attempt takes a while because creating a load balancer and the other resources is slow.
How do I create a load balancer on AWS that points to my instance and supports wss://? Could someone please provide an example or instructions?
The solution posted at https://anandhub.wordpress.com/2016/10/06/websocket-ebs/ appears to work well.
Rather than selecting HTTPS and HTTP, select 'SSL' on port 443 and 'TCP' on your application's port (e.g. 5000).
You'll need to load your key/certificate via AWS, and the load balancer will handle the secure part. I suspect you cannot take advantage of the LB's 'sticky' features with this method.
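For illustration, a classic load balancer with that listener layout could be created with the AWS CLI roughly like this (names, zone, certificate ARN, and instance ID are placeholders):

# SSL listener on 443 terminating TLS, forwarding as plain TCP to port 5000
aws elb create-load-balancer \
    --load-balancer-name wss-elb \
    --availability-zones us-east-1a \
    --listeners "Protocol=SSL,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=5000,SSLCertificateId=arn:aws:acm:us-east-1:123456789012:certificate/example"
# Register the EC2 instance that hosts the ECS task
aws elb register-instances-with-load-balancer \
    --load-balancer-name wss-elb --instances i-0123456789abcdef0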