Logs for GCP network load balancer - google-cloud-platform

I'm working with a GKE cluster. In GKE I have microservices that use gRPC, and an ingress controller deployed behind a GCP network load balancer (NLB). I'm aware of the GCP ingress, but it's not the best option for my case.
Most of the services use long-lived connections (15m+). During testing I found that all long-lived connections drop after 30s or more. I suspect this may be caused by timeouts at the LB level.
Is it possible to find NLB logs somewhere to confirm that the connections are being dropped at the LB level?
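If it does turn out to be an idle timeout at the LB level, one common mitigation while you investigate is to enable client-side gRPC keepalives so the connection never looks idle to intermediaries. A minimal Python sketch, assuming grpcio is available; the interval values and target address are illustrative assumptions, not recommendations:

```python
# Sketch: client-side HTTP/2 keepalives as a workaround for idle-timeout
# drops on intermediaries such as load balancers. The option names are
# standard gRPC channel arguments; the values here are illustrative.
KEEPALIVE_OPTIONS = [
    # Send an HTTP/2 PING every 20 s, well inside a ~30 s idle timeout.
    ("grpc.keepalive_time_ms", 20_000),
    # Consider the connection dead if a PING ack takes longer than 10 s.
    ("grpc.keepalive_timeout_ms", 10_000),
    # Keep pinging even while no RPC is in flight (long-lived streams).
    ("grpc.keepalive_permit_without_calls", 1),
]

def make_channel(target: str):
    """Create an insecure gRPC channel with keepalives enabled (needs grpcio)."""
    import grpc
    return grpc.insecure_channel(target, options=KEEPALIVE_OPTIONS)
```

The server side (and any proxy in between) must tolerate pings at this rate, so keep the interval conservative.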

Related

Which port gke https loadbalancer use for health checks?

I want to know which port GKE uses when performing health checks on the backend services.
Does it use the service port declared in the service YAML, or other specific ports? I'm asking because I'm having trouble getting the backend services healthy.
Google Cloud has special routes for the load balancers and their associated health checks.
Routes that facilitate communication between Google Cloud health check probe systems and your backend VMs exist outside your VPC network, and cannot be removed. However, your VPC network must have ingress allow firewall rules to permit traffic from these systems.
For health checks to work you must create ingress allow firewall rules so that traffic from Google Cloud probers can connect to your backends. You can refer to this documentation.
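As a sketch, such a rule can be created with gcloud. The probe source ranges 35.191.0.0/16 and 130.211.0.0/22 are the ones Google documents for most load balancer health checks; the rule name, network, and port range here are placeholders (GKE health checks commonly target a NodePort in 30000-32767):

```shell
# Placeholder rule name, network, and port range; adjust to your setup.
gcloud compute firewall-rules create allow-google-health-checks \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:30000-32767 \
    --source-ranges=35.191.0.0/16,130.211.0.0/22
```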

How to allow NLB on ecs fargate service

I need to deploy an API on ECS Fargate behind an internet-facing network load balancer. After setting all this up, with the API deployed on ECS and passing health checks, I get a timeout error when I try to access the NLB DNS name in the browser. I think this may have to do with the firewall. The NLB doesn't have a security group I can modify, so I'm wondering whether I've set up the Fargate service security group correctly, or if there's anything else I can try. I've opened ports 443, 8443, 80, and 8080 in that security group, just to cover a few.
The NLB is set up as follows: a TCP 443 listener, with the container hosted on port 8443. My question is: does the timeout sound like a firewall issue, and if so, how do I allow the NLB in the ECS service security group? So far I'm unsure where to set the firewall configuration for something like this, since the NLB doesn't have a security group.
Ports 32768 to 61000 should be open in the security group for the Fargate tasks so that the NLB can reach them, according to this page, when dynamic ports are used with the NLB and the task definition.
Hope this helps.
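A sketch of opening that range with the AWS CLI; the security group ID and source CIDR are placeholders, and which CIDR you actually need to allow depends on whether the NLB preserves client IPs:

```shell
# Placeholders: security group ID and CIDR. With client IP preservation
# enabled on the NLB, you may need to allow client CIDRs instead of the
# VPC CIDR shown here.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 32768-61000 \
    --cidr 10.0.0.0/16
```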

Enable DDoS protection for GKE Autopilot with Traefik ingress (TCP load balancer)

Has anyone ever configured DDoS protection (with Cloud Armor) for a GKE Autopilot cluster where Traefik is used as the ingress controller? The thing is that Traefik uses a TCP load balancer, which is why it is not possible (as far as I can see) to apply Cloud Armor to its configuration. Are there any workarounds for this? Please share your experience or thoughts. Thanks.
You can only use HTTP(S) load balancer backend services as targets for Cloud Armor.
So, to get this setup working, you can use Traefik behind an HTTP(S) load balancer. For more details, refer here.
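For reference, when a service is fronted by the GKE Ingress / HTTP(S) load balancer, a Cloud Armor policy can be attached through a BackendConfig. A sketch, assuming a pre-created policy; all names and ports here are placeholders:

```yaml
# Sketch: attach an existing Cloud Armor policy to a GKE backend via
# BackendConfig (only works with the HTTP(S) load balancer / GKE Ingress).
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: traefik-backendconfig
spec:
  securityPolicy:
    name: my-cloud-armor-policy   # pre-created Cloud Armor policy
---
apiVersion: v1
kind: Service
metadata:
  name: traefik
  annotations:
    cloud.google.com/backend-config: '{"default": "traefik-backendconfig"}'
spec:
  type: NodePort
  selector:
    app: traefik
  ports:
    - port: 80
      targetPort: 8000
```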

AWS EKS - Create Load Balancer Service throws out of service

I have a quick question regarding AWS EKS: whenever I create a K8s Service of type LoadBalancer, it provisions a classic ELB backed by the EC2 instances where the services are running. Whenever I try to hit the ELB from the Internet, it returns an ERR_EMPTY_RESPONSE error. If I navigate back to the ELB and look at the instances behind it, the EC2 instances show a status of OutOfService.
This happens whether I use my own K8s deployments and services or the ones provided with the documentation. Can anyone help me with this? Moreover, is there any way to provision a different type of load balancer for a K8s Service? Thanks.
This is the default behavior of K8s on cloud providers: a Service of type LoadBalancer spins up a real load balancer, which affects cost.
As a best practice, it's better to use a K8s Ingress, which you can expose as an endpoint or place behind an external load balancer.
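On EKS specifically, you can also keep a Service of type LoadBalancer but ask the AWS cloud provider for an NLB instead of a classic ELB via an annotation. A sketch with placeholder names and ports:

```yaml
# Sketch: request an NLB rather than a classic ELB for this Service.
apiVersion: v1
kind: Service
metadata:
  name: my-api
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: my-api
  ports:
    - port: 80
      targetPort: 8080
```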

Google Container Engine Clusters in different regions with cloud load balancer

Is it possible to run one Google Container Engine cluster in the EU and one in the US, and load balance between the apps running on these clusters?
Google Cloud HTTP(S) Load Balancing, TCP Proxy and SSL Proxy support cross-region load balancing. You can point it at multiple different GKE clusters by creating a backend service that forwards traffic to the instance groups for your node pools, and sends traffic on a NodePort for your service.
However, it would be preferable to have the LB created automatically, as Kubernetes does for an Ingress. One way to do this is with Cluster Federation, which supports Federated Ingress.
Try kubemci for help getting this set up. GKE does not currently support or recommend Kubernetes cluster federation.
From their docs:
kubemci allows users to manage multicluster ingresses without having to enroll all the clusters in a federation first. This relieves them of the overhead of managing a federation control plane in exchange for having to run the kubemci command explicitly each time they want to add or remove a cluster.
Also since kubemci creates GCE resources (backend services, health checks, forwarding rules, etc) itself, it does not have the same problem of ingress controllers in each cluster competing with each other to program similar resources.
See https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress
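A sketch of the kubemci workflow from its README, with placeholder names; it assumes a reserved global static IP and a standard Ingress manifest that references it:

```shell
# Reserve a global static IP for the multi-cluster ingress (placeholder name).
gcloud compute addresses create my-mci-ip --global

# Create the multi-cluster ingress across two clusters (placeholder
# contexts); ingress.yaml is a standard Kubernetes Ingress manifest.
kubemci create my-mci \
    --ingress=ingress.yaml \
    --gke-cluster-contexts=cluster-us,cluster-eu
```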