GCP API for getting a list of load balancers

From the GCP console's perspective, a load balancer is a service with related resources under it, such as backend services, health checks, etc.
However, APIs are only available for the individual resources like backendService, address, healthCheck, etc.
Using the UI we can see a direct relationship between a resource like a backend service and its load balancer, but the backend service API doesn't have a corresponding field.
The UI shows which load balancer a backend service belongs to, whereas the supported fields from the backend service API are:
affinityCookieTtlSec, backends, cdnPolicy, connectionDraining, creationTimestamp, description, enableCDN, fingerprint, healthChecks, iap, id, kind, loadBalancingScheme, name, port, portName, protocol, region, selfLink, sessionAffinity, timeoutSec
I wanted to know if there is a direct or indirect way to get a list of load balancers.

As mentioned by Patrick W, there is no direct entity 'load balancer'; it's just a collection of components. The list seen in the UI that appears to be the load balancer is actually the url-map component, which can be seen via the API with:
gcloud compute url-maps list
See the gcloud documentation for more information on the command.

At the API level, there is no Load Balancer, only the components that make it up.
Your best bet to get a view similar to the UI is to list forwarding rules (global and regional). You can use gcloud compute forwarding-rules list which will show you all the forwarding rules in use (similar to the UI view), along with the IPs of each and the target (which may be a backend service or a target pool).
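For example, a rough sketch of listing forwarding rules (the region names are just examples):

# List every forwarding rule in the project, regional and global,
# along with its IP address and target:
gcloud compute forwarding-rules list

# Restrict the listing to global forwarding rules only:
gcloud compute forwarding-rules list --global

# Or to specific regions:
gcloud compute forwarding-rules list --regions=us-central1,europe-west1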

Related

What's the best way to load balance between Cloud Run services across projects?

Consider a scenario with two identical Cloud Run services ("Real Time App") deployed in two different regions, let's say EU and US. These services use Firestore for real time communication, and minimizing latency is important. Since Firestore only allows specifying one region per project, each Cloud Run service is deployed in its own project and uses its regional Firestore instance. It is not needed for a service in the US to access Firestore data in the EU and vice versa.
Is there a way to deploy a global HTTPS load balancer to route requests to Cloud Run service closest to the user when services are defined in different projects?
I attempted a setup with a shared VPC between a Host "Global" project (in the US) and 2 service projects (EU and US). I created a Cloud Run Service, Network Endpoint Group (NEG), and Backend Service in each regional project. I then attempted to create a Global forwarding rule, Target HTTPS proxy, and URL Map in the host project. However, the URL Map cannot be fed a backend service in another project, complaining that:
Cross-project references for this resource are not allowed.
Indeed, per the Shared VPC Architecture and Cross-project service referencing section of the documentation it seems that:
Cross-project service referencing is not supported for the global external HTTP(S) load balancer
and that, if I understood correctly, the following rules apply:
The NEG must be defined in the same project as the Cloud Run Service
The Backend Service must be in the same project as the NEG
The Target HTTP(S) Proxy and associated URL map must be in the same project as the Backend Service
The Forwarding Rule must be in the same project as the Backend Service
essentially requiring the entire chain to be defined in one project (see the sketch below).
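To illustrate, a minimal sketch of that single-project chain, assuming hypothetical names (real-time-app, my-us-project, and the region are placeholders):

# Serverless NEG pointing at the Cloud Run service, in the same project:
gcloud compute network-endpoint-groups create rta-neg \
    --project=my-us-project \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --cloud-run-service=real-time-app

# Backend service in the same project as the NEG:
gcloud compute backend-services create rta-backend \
    --project=my-us-project \
    --global

gcloud compute backend-services add-backend rta-backend \
    --project=my-us-project \
    --global \
    --network-endpoint-group=rta-neg \
    --network-endpoint-group-region=us-central1

# The URL map, target proxy, and forwarding rule must then live in
# my-us-project as well; the chain cannot span projects.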
Are there recommended workarounds for this scenario?
One solution I can think of is to create a "Router" Cloud Run Service in the Host global project behind a load balancer, with a multi-region deployment. Its sole purpose would be to respond to the client with the regional URL endpoint of the closest "Real Time App" Cloud Run service.
I am wondering whether there is a more elegant solution, though.

Can an API Gateway point to multiple Application Load Balancers?

Having a hard time figuring out a microservices architecture.
Right now I have an ECS Cluster with two services (TodoService, CategoriesService) running in containers. Both of the services have their own Load Balancer. I'm trying to build an API Gateway where /todos would route to the Todo-app-load-balancer and /categories would route to the Categories-app-load-balancer.
First, is this a good approach to microservices? And second, question from the title.
First, is this a good approach to microservices?
Yes, there is nothing wrong with this approach.
Can an API Gateway point to multiple Application Load Balancers?
Yes, you can point each method from the API gateway to an entirely different backend resource.
In case of an Application Load Balancer, there are multiple ways of doing this. Probably the easiest is to have a public Application Load Balancer and create an HTTP integration for it. You have to specify the application load balancer's DNS name as the endpoint. For more information, see this support page.
Another option would be to use VPC links, which integrate with private load balancers. While this would be recommended for production, it is a bit more complex to set up.
Whether it is a good or bad approach is ultimately an architectural decision, but I can suggest that one ALB with different routing rules (ingress-style) can solve your problem. Also, API Gateway only allows an NLB to be attached directly (via a VPC link); an ALB cannot be, but there is still a workaround: use the ALB's DNS name directly. (Two screenshots were attached in the original answer for reference.)
Direct integration is not allowed on ALB, but you can use the DNS name manually.
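As a rough sketch of the DNS-name workaround for an HTTP API (the API ID, integration ID, and ALB DNS name are placeholders):

# Route /todos to the Todo service's ALB via an HTTP proxy integration:
aws apigatewayv2 create-integration \
    --api-id abc123 \
    --integration-type HTTP_PROXY \
    --integration-method ANY \
    --payload-format-version 1.0 \
    --integration-uri http://todo-alb-1234567890.us-east-1.elb.amazonaws.com/todos

# Attach the integration to a route:
aws apigatewayv2 create-route \
    --api-id abc123 \
    --route-key 'ANY /todos' \
    --target integrations/abcdef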

AWS Load Balancer Path Based Routing

I am running a microservice application off of AWS ECS. Each microservice currently has its own Load balancer.
There is one main public-facing service which the rest of the services communicate with via gateways. Having each service have its own ELB is currently too expensive. Is there some way to have only one ELB for the public-facing service that routes to the other services based on path? Is this possible without actually having the other service names in the URL? Could a reverse proxy work?
I know this is a broad question but any help would be appreciated.
In the EC2 console, go to the Load Balancers section, choose a load balancer, and in the Listeners tab click the view/edit rules button. There you can set conditions to use a single load balancer for different clusters/instances of your app. Note that each container needs its own target group.
You can configure the load balancer to route based on (see the CLI sketch after this list):
HTTP headers
Path, e.g. www.example.com/a or www.example.com/b
Host header (hostname)
Query strings
or even source IP.
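For instance, a minimal sketch of a path-based rule via the CLI (both ARNs are placeholders):

# Forward requests whose path starts with /b to that app's target group:
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/xxxx/yyyy \
    --priority 10 \
    --conditions Field=path-pattern,Values='/b/*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/app-b/zzzz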
That's it! cheers.

How to expose just one microservice from a Kubernetes cluster to an existing load balancer, keeping the other services available only within the cluster

Hello and thanks in advance.
First I want to provide some context to make answering my question easier.
We are using Google Cloud.
We got to a situation where our need to deploy updates easily for various parts of the application bumped into the limitations of our monolithic architecture.
Our app is not super big, but it already has 2 physical services: the backend (the scope being updated), and a caching server which caches data and provides mongo-like search over data from Google Datastore.
We have 2 options here.
"plugins" - like nanoservices running within same process which are developed in a way that these nanoservices do not know they are on the same process, all they know is a set of "plugins shell API" injected at activation of a nanoservice code. This shell gives the nanoservice access to a database, logging, configuration, routes registration, control events like refresh pages map and some metadata like website root url and root of static content deployed as supply stuff for a service version. Like https://static.server.com/deployments/foo/v2
standard microservices on kubernetes where same API mentioned exposed to each service via "shell client" package deployed as part of container image.
In short, this is a common "infrastructure vs library" dilemma, often mentioned in articles about microservices i read.
For library approach I have some vision already on how to implement all that including hot modules replacement without stopping server but the more i read about kubernetes the stronger I feel that I am inventing kubernetes (or similar) wheel.
How I imagine that:
1) There is a router service, which is the single service exposed outside the cluster, attached as a backend to the load balancer we already have.
It will handle authentication/authorization of outside requests and pick the page to be rendered or the API endpoint to be invoked. When a page is requested, the related template is loaded, and the data for pre-rendering is fetched by calling the related endpoint exposed by a module service. When a public API endpoint is picked, the matching service endpoint is called.
There are a few services, including:
a caching service (the service which is currently deployed on a separate group of servers);
an updates service, which processes module service version switches and provides an API to do so via some UI for admins;
module services (one per module). Each module exposes endpoints providing preloaded page data, an endpoint giving the list of page routes to be registered, the API endpoint implementations, and an endpoint listing the exposed API routes to be invoked through the router service;
a router service, which processes external requests and dispatches them across the other services as appropriate using a cached routes map, updated when one of the internal services (e.g. the updates service) broadcasts a pages map refresh event.
What is stopping me from starting to use Kubernetes right away is the lack of knowledge about how to implement the following scenarios:
1) only 1 microservice must be exposed outside the cluster, the "routing service";
2) reuse the built-in service discovery etc. to communicate with services within the cluster, like the caching server;
3) the cluster's router service would be attached as a backend to the cloud load balancer we already have.
In my opinion, you should have a look at the NGINX Ingress Controller to build your routing scheme; you can find more information here and here.
EDIT: In addition, you can try some other ingress controllers; among them, Istio and Traefik are definitely worth your attention as alternative solutions to the NGINX Ingress Controller.
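As a rough sketch of scenarios 1 and 2, assuming hypothetical deployment names and an NGINX Ingress Controller already installed: the internal services stay as ClusterIP services, and only the router is published externally.

# Internal-only service: ClusterIP (the default) is reachable from other
# pods via the cluster DNS, e.g. caching.default.svc.cluster.local
kubectl expose deployment caching --port=8080 --type=ClusterIP

# The router is also a plain ClusterIP service...
kubectl expose deployment router --port=80 --type=ClusterIP

# ...but it is the only one published outside the cluster, via an Ingress
# that the ingress controller wires up to the external load balancer:
kubectl create ingress router \
    --class=nginx \
    --rule="app.example.com/*=router:80"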

Running multiple web services on a single ECS cluster

If I have an ECS cluster with N distinct websites running as N services on said cluster - how do I go about setting up the load balancers?
The way I've done it currently is for each website X,
I create a new target group spanning all instances in the cluster
I create a new application load balancer
I attach the ALB to the service using the target group
It seems to work... but I want to make sure this is the correct way to do this.
Thanks!
The way you are doing it is of course one way to do it and how most people accomplish this.
Application load balancers also support two other types of routing: host-based and path-based.
http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#host-conditions
Host-based routing will allow you to route based on the incoming Host header. So for instance, if you have website1.com and website2.com, you could send them both through the same ALB and route accordingly.
http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#path-conditions
Similarly, you can do the same thing with the path. If your websites were website1.com/site1/index.html and website1.com/site2/index.html, you could put both of those on the same ALB and route accordingly.
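For example, a minimal sketch of a host-based rule on a shared ALB (the ARNs and hostnames are placeholders):

# Send requests for website2.com to its own target group:
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/shared-alb/xxxx/yyyy \
    --priority 20 \
    --conditions Field=host-header,Values=website2.com \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/website2/zzzz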