Do load balancers run inside Compute Engine VMs in GCP? - google-cloud-platform

I have created 2 load balancers (for HTTP and HTTPS), and they are connected to backend storage. When I check the Infrastructure summary in the Monitoring tab, I can see 2 different VMs running. Is the cost of the load balancers related to these VMs?
I have read the load balancer documentation, but it was not clear how the LBs work internally.

It depends on what type of load balancer you're running, but in general it is all run and managed by Google's internal infrastructure.
HTTP(S) load balancers are handled by the GFE (Google Front End) and Andromeda, the software-defined networking stacks built by Google. You can read about this in the following documentation.
As for the cost of the load balancer, it depends on the architecture of your environment. For this reason, I suggest you check the following documentation, which explains the load balancer pricing in detail. In summary, you are charged for your forwarding rules and for the traffic that goes through the LB. This can be estimated using the GCP calculator.
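To get a feel for how these charges add up, here is a back-of-the-envelope sketch in Python. The rates are placeholders I've assumed purely for illustration; the real per-rule and per-GB rates vary by region and over time, so check the official pricing page before relying on any numbers.

```python
# Back-of-the-envelope sketch of how GCP load balancer charges add up.
# The rates below are ASSUMED placeholders for illustration only; real
# per-rule and per-GB rates vary by region and over time, so check the
# official pricing page.

HOURLY_RULE_RATE = 0.025  # assumed $/hour covering the first 5 forwarding rules
EXTRA_RULE_RATE = 0.01    # assumed $/hour for each forwarding rule beyond 5
PER_GB_RATE = 0.008       # assumed $/GB of traffic processed by the LB


def estimate_monthly_lb_cost(num_rules: int, gb_processed: float,
                             hours: float = 730) -> float:
    """Estimate one month of LB charges using the illustrative rates above."""
    extra_rules = max(0, num_rules - 5)
    rule_cost = (HOURLY_RULE_RATE + extra_rules * EXTRA_RULE_RATE) * hours
    traffic_cost = gb_processed * PER_GB_RATE
    return round(rule_cost + traffic_cost, 2)


# Two forwarding rules (one HTTP, one HTTPS) pushing 100 GB in a month:
print(estimate_monthly_lb_cost(num_rules=2, gb_processed=100))  # 19.05
```

The point of the sketch is the shape of the bill, not the exact figures: a flat hourly charge for your forwarding rules plus a per-GB charge for processed traffic.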

Related

Load balancer for kubernetes clusters

I need some help configuring my load balancer for my Kubernetes clusters. The internal load balancer works fine. Now, I'd like to expose the service via https and I'm stumped on the Ingress configuration.
First of all, please take into account that whenever an HTTP(S) load balancer is configured through Ingress, you must not manually change or update the configuration of the HTTP(S) load balancer. That is, you must not edit any of the load balancer's components, including target proxies, URL maps, and backend services. Any changes that you make are overwritten by GKE.
With that in mind, note that Ingress for Internal HTTP(S) Load Balancing deploys the Google Cloud Internal HTTP(S) Load Balancer. This private pool of load balancers is deployed in your network and provides internal load balancing for clients and backends within your VPC, as per this documentation.
Now we are ready to configure an Ingress for the internal load balancer. This is an example of how to configure a simple Ingress in order to expose a simple service.
My suggestion is to try the first configuration in order to learn how an Ingress works, and then configure an Ingress for GKE as per this documentation.
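As a rough sketch of what such an internal Ingress can look like (the names `my-ingress`, `my-service`, `my-tls-secret`, and `app: my-app` are placeholders, and the `gce-internal` annotation assumes GKE's internal Ingress class; adapt to the linked documentation):

```yaml
# Sketch of an internal HTTPS Ingress on GKE -- names are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # Ask GKE for the *internal* HTTP(S) load balancer.
    kubernetes.io/ingress.class: "gce-internal"
spec:
  tls:
    - secretName: my-tls-secret   # Secret holding your cert and key
  defaultBackend:
    service:
      name: my-service
      port:
        number: 80
---
# Internal Ingress requires container-native load balancing (NEGs),
# hence the cloud.google.com/neg annotation on the Service.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

The key differences from an ordinary (external) Ingress are the internal ingress class annotation and the NEG-backed Service.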
Let me know if you still have doubts about it or if you need more assistance.
Have a nice day, and stay safe.

GCP internal load balancer between two VMs (Compute instances)

Is it possible in GCP to create an internal load balancer that balances the traffic between two Compute Instances in different regions?
Two instances (running NAT on them) are in different regions (e.g. one in us-central1 and the other in asia-south1), serving something on the same ports, and the internal load balancer (e.g. with IP 170.0.0.4) accepts requests from clients and forwards them to these VMs.
This would help in creating a highly available service (NAT in this case) that will work even when one VM or the service or region is down.
EDIT:
Adding some more details here:
Both VMs and the Load Balancer have internal IPs.
Both VMs and the Load Balancer are in the same VPC network
I need a layer 7 (HTTP(S)) internal load balancer.
Internal load balancing is regional only, so since you want backends in different regions, it can still work, but you will have to set up a load balancer yourself.
It's not possible "out of the box".
You can have a look at the Internal Load Balancing documentation, which explains how this works and why. Here's also a table with the available load balancing options.
If you want to configure your own LB, then maybe try Envoy Proxy (or Nginx, or any solution you want).
In essence, unless you set up your own load balancer, GCP doesn't have this functionality.
You could also use an external load balancer (which is risky) to balance the traffic, and restrict external traffic to your instances (to just a set of known IPs or ranges).
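For illustration, a minimal self-managed cross-region balancer could be an Nginx proxy running on a VM in the same VPC. This is only a sketch; the internal IPs and ports below are assumed placeholders for the two NAT instances:

```nginx
# nginx.conf fragment -- a DIY internal balancer across two regions.
# 10.128.0.2 / 10.160.0.2 are placeholder internal IPs of the two VMs.
upstream nat_backends {
    server 10.128.0.2:80 max_fails=2 fail_timeout=10s;   # us-central1 VM
    server 10.160.0.2:80 max_fails=2 fail_timeout=10s;   # asia-south1 VM
}

server {
    listen 80;
    location / {
        # Round-robin by default; a backend marked failed is skipped,
        # which gives the failover behavior the question asks about.
        proxy_pass http://nat_backends;
    }
}
```

Note that the proxy VM itself then becomes a single point of failure unless you run more than one (e.g. in a managed instance group behind an internal forwarding rule in its own region).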
With only the two endpoints on their own it is impossible to use a load balancer, because there is nothing doing the balancing; you could only put both of them behind a load balancer, which could then balance their traffic. Moving both into the same region might be the only option for using the internal load balancer, but there, too, the servers need to be put behind it.

Is Google Cloud Load Balancer a single point of failure? Can we have a standby replica?

We are doing an HA deployment of our application in GCP for evaluation.
The rough architecture is similar to the following diagram (image courtesy: Google Cloud).
The global load balancer itself appears to be a single point of failure.
How can we make it highly available in GCP?
I couldn't find any HA configuration for the Cloud Load Balancer in the GCP console.
Googling takes me to provisioning backend services, which is not what I am looking for.
Can you please share your thoughts?
The global load balancer is not a device, and not a single point of failure.
It's a logical/virtual entity that delivers the service using globally-distributed hardware and anycast IP routing... so if my browser and your browser are connecting to "the same" balancer (at the same IP address), then there's a very high probability that we aren't communicating with the same physical hardware, and a good possibility that those two different physical devices we're talking to are not even in the same physical location.
Cloud Load Balancing is a fully distributed, software-defined, managed service for all your traffic. It is not an instance or device based solution, so you won’t be locked into physical load balancing infrastructure or face the HA, scale and management challenges inherent in instance based LBs.
https://cloud.google.com/load-balancing/
There isn't any configuration option for this, because it's an intrinsic part of the design of the service.

GCP, Autoscaling on internal load balancer

I managed to set up autoscaling based on an external load balancer, but I didn't find a way to do the same for an internal load balancer.
Is this feature supported? How do I go about autoscaling my instance group based on the internal load balancer?
The issue is that when you configure an instance group to scale by HTTP requests, you need an HTTP load balancer, which is internet-facing, so the UDP load balancer, which can be internal, doesn't work for that.
The Internal Load Balancer uses a backend service which can use a managed instance group. You can assign a managed instance group to the backend or target pools of both internal and network load balancers.
Keep in mind that the Network Load Balancer uses target pools instead of backend services, but target pools can use managed instance groups as well.
Take a look at the documentation for more details. Alternatively I found this and this posts that I believe can be useful to you.
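As a sketch of how the pieces fit together (the resource names, region, and zone below are hypothetical; verify the flags against the current gcloud reference): attach the managed instance group to the internal load balancer's backend service, then autoscale the group on a signal that works internally, such as CPU utilization, since HTTP-request-based scaling needs an HTTP(S) load balancer.

```shell
# Attach a managed instance group to an internal LB's backend service
# (all names, the region, and the zone are placeholders).
gcloud compute backend-services add-backend my-internal-backend \
    --instance-group=my-mig \
    --instance-group-zone=us-west1-a \
    --region=us-west1

# Autoscale the group on CPU utilization instead of HTTP request rate.
gcloud compute instance-groups managed set-autoscaling my-mig \
    --zone=us-west1-a \
    --max-num-replicas=5 \
    --target-cpu-utilization=0.6
```

The autoscaler and the load balancer are configured independently here: the LB spreads traffic over whatever instances the group currently has, and the autoscaler resizes the group based on CPU.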
From your last comment:
I'm not able to setup a TCP load balancer which has a backend service, I only get a REGIONAL backend service, which doesn't support http load balancing..
As stated in the Internal Load Balancing Concepts, "internal client requests stay internal to your VPC network and region", so there is no need for HTTP here, nor for a multi-regional setup.
On the same page, under section "About Internal Load Balancing", the schema shows a classic load balancing architecture, featuring one global (http) and multiple internal (tcp/udp) load balancers for each region.
Further on, under "Deploying Internal Load Balancing with clients across VPN or Interconnect", the following is stated in an "Important" note:
Internal Load Balancing is a regional product. [...] An internal load balancer cannot forward or receive traffic to and from VM instances in other regions.
Basically, if your managed instance group has instances across multiple regions, then you need an external load balancer, but if all your instances are within the same region (instances can be split across zones within this same region, e.g. us-west1-a/b/c), then you can rely on an internal load balancer.

Choosing of AWS Load Balancer for Kubernetes HA cluster

I am trying to implement a CI/CD pipeline for my set of microservices by using a Kubernetes HA cluster and Jenkins.
While exploring, I found that a load balancer is used for making an HA Kubernetes cluster. I am planning to host my application in AWS, so I want to create and use an AWS Elastic Load Balancer for my application to support the HA Kubernetes cluster.
My doubt here is that, when exploring AWS load balancers, I see "Application Load Balancer - HTTP/HTTPS" and "Network Load Balancer - TCP".
So I am confused about which AWS load balancer I need to create. From my exploration, I understood that I should create a network load balancer.
Can anyone confirm whether I am going in the right direction, i.e. creating a network load balancer? Will a network load balancer support forming an HA Kubernetes cluster or not?
There isn't a single right answer to your question; it really depends on what you need to do. You can achieve HA with a network load balancer, a classic load balancer (although this is the old generation and is not suggested), or an application load balancer.
The advantages of network load balancers relate mainly to performance and the possibility to attach Elastic IPs; they are ideal for balancing TCP traffic (layer 4 routing).
On the other side, application load balancers are ideal for advanced load balancing of HTTP and HTTPS traffic (layer 7 routing), with the possibility to do advanced request routing, suitable for complex application architectures (such as microservices).
This link might be useful: https://aws.amazon.com/elasticloadbalancing/details/#details
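For a concrete example of where this choice shows up: in Kubernetes you typically don't create the AWS load balancer by hand at all; a Service of type LoadBalancer provisions one for you, and an annotation selects an NLB instead of the default. This is just a sketch with placeholder names:

```yaml
# Service sketch: the aws-load-balancer-type annotation asks the AWS
# cloud provider integration for a Network Load Balancer (layer 4).
apiVersion: v1
kind: Service
metadata:
  name: my-service            # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: my-app               # placeholder selector
  ports:
    - port: 443
      targetPort: 8443
```

If you later need layer 7 features (path-based routing, HTTP header rules), you would instead front the services with an ALB, which in Kubernetes is usually managed through an Ingress controller rather than a plain Service.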