Which port does the GKE HTTPS load balancer use for health checks? - google-cloud-platform

I want to know which port GKE uses when performing health checks of the backend services.
Does it use the service port declared in the Service YAML, or some other specific port? I'm asking because I'm having trouble getting the backend services healthy.

Google Cloud has special routes for the load balancers and their associated health checks.
Routes that facilitate communication between Google Cloud health check probe systems and your backend VMs exist outside your VPC network, and cannot be removed. However, your VPC network must have ingress allow firewall rules to permit traffic from these systems.
For health checks to work you must create ingress allow firewall rules so that traffic from Google Cloud probers can connect to your backends. You can refer to this documentation.
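As a concrete sketch (the network name, tag, and port range are placeholders; the two source ranges are the documented Google Cloud health-check prober ranges), a rule like the following lets the probers reach GKE nodes. On the port question: with instance-group backends the probe typically targets the Service's nodePort (in the 30000-32767 range), while with container-native load balancing (NEGs) it targets the Pod's serving port directly.

```shell
# Hypothetical example: allow Google Cloud health-check probers to reach
# the cluster's nodes. 130.211.0.0/22 and 35.191.0.0/16 are the documented
# prober source ranges; network, tag, and port range are placeholders.
gcloud compute firewall-rules create allow-gke-health-checks \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:30000-32767 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=gke-node
```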

Related

Cannot call API with ALB's DNS

I have an API on AWS ECS, connected to an Application Load Balancer. It has two target groups for blue/green deployment with CodeDeploy. The deployment works and the targets are healthy, so I assume the app runs and the ports are configured correctly. The port I use is 3000, and the listener is set to HTTP:3000 as well.
The load balancer is assigned to the default VPC security group, and for testing purposes I added an inbound rule that accepts all traffic from 0.0.0.0/0, so in theory it should be accessible to anyone. When I try to call the health check endpoint at {alb_dns}/rest/health (the same path the health checker tests successfully), I get an ECONNREFUSED error. Why can't I access it?

Logs for GCP network load balancer

I'm working with a GKE cluster. In GKE I have microservices that use gRPC, and an ingress deployed behind a GCP network load balancer (NLB). I'm aware of the GCP ingress, but it's not the best option for my case.
Most of the services hold long-lived connections (15m+). During testing I found that all long-lived connections drop after 30 seconds or more. I suspect there may be timeouts at the LB level.
Is it possible to find NLB logs somewhere to confirm that the connections are dropped at the LB level?

How to restrict access to an external load balancer's endpoint

I am very new to GCP. For now I have deployed a hello-world container in GKE. This hello-world is backed by an external load balancer, meaning that it is accessible to everyone on the internet provided they have its IP address.
I would like to restrict the access to this endpoint only to authenticated users (through Google SSO) that are part of my project or my organization. Is there a way to do so?
You need to integrate IAP (Identity-Aware Proxy).
When to use IAP
Use IAP when you want to enforce access control policies for applications and resources. IAP works with signed headers or the App Engine standard environment Users API to secure your app. With IAP, you can set up group-based application access: a resource could be accessible for employees and inaccessible for contractors, or only accessible to a specific department.
Enabling IAP for GKE
IAP is integrated through Ingress for GKE. This integration enables you to control resource-level access for employees instead of using a VPN.
In a GKE cluster, incoming traffic is handled by HTTP(S) Load Balancing, a component of Cloud Load Balancing. The HTTP(S) load balancer is typically configured by the Kubernetes Ingress controller. The Ingress controller gets configuration information from a Kubernetes Ingress object that is associated with one or more Service objects. Each Service object holds routing information that is used to direct an incoming request to a particular Pod and port.
Beginning with Kubernetes version 1.10.5-gke.3, you can add configuration for the load balancer by associating a Service with a BackendConfig object. BackendConfig is a custom resource definition (CRD) that is defined in the kubernetes/ingress-gce repository.
The Kubernetes Ingress controller reads configuration information from the BackendConfig and sets up the load balancer accordingly. A BackendConfig holds configuration information that is specific to Cloud Load Balancing, and enables you to define a separate configuration for each HTTP(S) Load Balancing backend service.
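As a hedged sketch of what that wiring can look like (all resource names and the OAuth secret are placeholders, and the secret must already hold your OAuth client credentials): a BackendConfig that enables IAP, plus a Service annotation pointing at it.

```shell
# Hypothetical example: enable IAP on a load balancer backend service
# via a BackendConfig. "my-oauth-secret" must contain the OAuth client
# ID and secret; all names are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: iap-config
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: my-oauth-secret
EOF

# Point the Service at the BackendConfig so the Ingress controller
# applies it to the corresponding backend service.
kubectl annotate service hello-world \
    cloud.google.com/backend-config='{"default": "iap-config"}'
```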

GCP Firewall allow ingress traffic based on domain name

Is the GCP firewall able to allow ingress traffic based on a specific domain name?
I've googled it and didn't find any results on this.
All I know is that it can allow or deny based on IP address.
A network firewall typically acts at the packet level, and since network packets don't carry information about the domain, the standard GCP VPC firewall will not let you do that.
What you are looking for is an application firewall (layer 7 firewall). Google Cloud has another service, Cloud Armor, that has WAF (Web Application Firewall) capabilities. By using Cloud Armor together with a load balancer, you might be able to do what you want.

Is it possible to use Google Cloud NAT for VMs behind a TCP/Proxy LB so that all servers can utilize egress from a single IP?

I am trying to use Cloud NAT with a set of Compute Engine VMs in their own subnet, so that all of the servers make requests to customer websites from a single static IP address. Unfortunately, when I add these VMs to a TCP/SSL proxy LB, they don't appear to use the NAT, which I believe is configured correctly.
I have tried configuring both a TCP proxy LB and an HTTP(S) LB alongside the Cloud NAT, and when I make an egress HTTP request it times out. Ingress via the LB works properly. The VM instances do not have external IPs, which is a requirement for Cloud NAT.
I expect HTTP requests to hit the server, and the web server to make outbound HTTP requests via the Cloud NAT, so that other servers need only whitelist a single IP address (a static IP assigned to the Cloud NAT).
I'm trying to understand why you would need Cloud NAT in this scenario, since a TCP/SSL proxy load balancer connects to the backends over a private connection and the backends won't be exposed to the Internet. Configuring just a TCP/SSL proxy should be enough for your scenario, in my opinion.
The following official documentation explains my point:
Backend VMs for HTTP(S), SSL Proxy, and TCP Proxy load balancers do not need external IP addresses themselves, nor do they need Cloud NAT to send replies to the load balancer. HTTP(S), SSL Proxy, and TCP Proxy load balancers communicate with backend VMs using their primary internal IP addresses.