Routing traffic to specific VMs via a load balancer on GCP - google-cloud-platform

I am new to Google Cloud Platform and advanced networking in general, but I have been tasked with setting up an external HTTPS load balancer that can forward internet traffic to two separate virtual machines on the same VPC. I have created the load balancer, SSL certs, DNS, frontend, and a backend. I have also created an instance group containing the two VMs for use with the backend.
What I am failing to understand is how to determine which VM receives the traffic. Example:
I want test.com/login to go to instance1/some/path/login.php
I want test.com/download to go to instance2/some/path/file.script
Any help is greatly appreciated. Thanks.

To expand on what @John Hanley mentioned about configuring URL maps, you can follow these steps:
On your load balancer page, click the name of the load balancer, then click Edit.
Select Host and path rules, then click Add host and path rule.
In the Hosts field, enter test.com. In the Paths field, enter /login (the URL map matches on the request path; mapping that path to /some/path/login.php happens on the instance itself, e.g. via your web server's rewrite rules).
For Backends, select the backend service associated with instance1. Note that for per-path routing each VM needs to be behind its own backend service (its own instance group), not a single group containing both. Repeat the same steps for test.com with path /download, pointing it at the backend for instance2.
Click Update.
You can refer to this guide for more details.
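If you prefer the CLI, the same routing can be sketched with gcloud. The names below (web-map, be-login, be-download, path-rules) are placeholders, and this assumes each VM sits behind its own backend service, since a URL map routes between backend services rather than individual VMs:

```shell
# Placeholder names: web-map, be-login, be-download; adjust to your setup.
# Each VM must be behind its own backend service (own instance group)
# so the URL map can send /login and /download to different machines.

# Create the URL map with a default backend for unmatched paths
gcloud compute url-maps create web-map \
    --default-service=be-login

# Add a path matcher sending /login and /download to different backends
gcloud compute url-maps add-path-matcher web-map \
    --path-matcher-name=path-rules \
    --default-service=be-login \
    --path-rules="/login=be-login,/download=be-download" \
    --new-hosts=test.com
```

The path rule matches the incoming request path; rewriting /login to /some/path/login.php is then done on the instance itself.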

Related

How to create a LoadBalancer on GCP with two instances?

I have a situation here.
I made two environments, prod and preprod, each with two VMs (two nodes per environment).
Now I have to create a load balancer with those two nodes on the back end. One of the nodes has SSL configured with a domain name (say example.com).
It's a Pega app server with two nodes pointing to the same database on Google Cloud SQL. Now the client wants a load balancer in front which will share or balance the traffic between these two nodes.
Is that possible?
If yes: the domain name has been registered with the IP of Node1, but the load balancer will have a different IP, right?
So the Pega URL that was working before, https://example.com/prweb, will no longer work, won't it?
But the requirement is that users will just type the domain name and access the Pega app via the load balancer, regardless of which node the requests go to.
Is that possible at all?
Honestly, I am a noob in all these cloud things, so please help me out if possible. I would really appreciate it. Thanks.
I tried to create a classic HTTPS load balancer and added those two instances to the backend, but only 1 target pool was detected out of 2 instances, and it shows "instance xxxx is unhealthy for [the IP of the load balancer]".
So next I created an HTTPS load balancer with a network endpoint group, where I added those two nodes' private IPs. But I'm not sure how to do it. Please let me know if anybody knows how.

How to add Cloud CDN to a GCP VM? No load balancer is ever available

I have a running Web server on Google Cloud. It's a Debian VM serving a few sites with low-ish traffic, but I don't like Cloudflare. So, Cloud CDN it is.
I created a load balancer with static IP.
I did all the steps from the guides I've found. But when it comes time to add an origin to Cloud CDN, no load balancer is available because it's "unhealthy", as seen by hovering over the yellow triangle on the LB status page: "1 backend service is unhealthy".
At this point, the only option is to choose Create a Load Balancer.
I've created several load balancers with different attributes, thinking that might be it, but no luck. They all get the "1 backend service is unhealthy" tag, and thus are unavailable.
---Edit below---
During LB creation, I don't see anywhere that tells the LB about the VM, except during certificate issuance (see below). Nowhere does it ask for any field that would point to the VM.
I created another LB just now; here are its settings. It finishes, then it's marked unhealthy.
Type
HTTP(S) Load Balancing
Internet facing or internal only?
From Internet to my VMs
(my VM is not listed in backend services, so I create one... is this the problem?)
Create backend service
Backend type: Instance group
Port numbers: 80,443
Enable Cloud CDN: checked
Health check: create new: https, check /
Simple host and path rule: checked
New Frontend IP and port
Protocol: HTTPS
IP: v4, static reserved and issued
Port: 443
Certificate: Create New: Create Google-managed certificate, mydomain.com and www.mydomain.com
A load balancer's unhealthy state could mean that the LB's health check probe is unable to reach your backend service (your Debian VM in this case).
If your backend service itself looks good, there is likely a problem with your firewall configuration.
Check whether your firewall rules allow the health check probes' IP address ranges.
Refer to the document below for more detailed information.
Required firewall rule
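Concretely, that firewall rule can be created with gcloud. The rule name and target tag below are placeholders; the source ranges are Google's documented health-check probe ranges:

```shell
# Allow Google Cloud health-check probes to reach the backend VMs.
# 130.211.0.0/22 and 35.191.0.0/16 are Google's documented health-check
# source ranges; the rule name and target tag are placeholders.
gcloud compute firewall-rules create allow-health-check \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80,tcp:443 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=allow-health-check
```

The same target tag must then be attached to the backend VMs for the rule to apply to them.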

AWS Load Balancer Path Based Routing

I am running a microservice application on AWS ECS. Each microservice currently has its own load balancer.
There is one main public-facing service which the rest of the services communicate with via gateways. Having each service have its own ELB is currently too expensive. Is there some way to have only one ELB for the public-facing service that routes to the other services based on path? Is this possible without actually having the other service names in the URL? Could a reverse proxy work?
I know this is a broad question, but any help would be appreciated.
In your EC2 console, go to the Load Balancers section, choose a load balancer, and in the Listeners tab there is a button named view/edit rules. There you set the conditions to use a single load balancer for different clusters/instances of your app. Note that for each container you need a target group defined.
You can configure the load balancer to route based on:
HTTP headers
Path, e.g. www.example.com/a or www.example.com/b
Host header (hostname)
Query strings
or even source IP.
That's it! Cheers.
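The same rule can be created from the CLI with aws elbv2 create-rule. The ARNs, priority, and path pattern below are placeholders for your own listener and target group:

```shell
# Placeholder ARNs and path; replace with your listener and target group.
# Forwards /orders/* requests on the shared ALB to the orders service's
# target group; other paths fall through to the listener's default action.
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/50dc6c495c0c9188/f2f7dc8efc522ab2 \
    --priority 10 \
    --conditions Field=path-pattern,Values='/orders/*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/orders/73e2d6bc24d8a067
```

Lower priority numbers are evaluated first, so put more specific path patterns at lower numbers.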

How to make a specific port publicly available within AWS

I have my React website hosted in AWS on HTTPS using a classic load balancer and CloudFront, but I now need to have port 1234 opened as well. When I currently browse my domain with port 1234, the page cannot be displayed. The reason I want port 1234 opened is that this is where my Node.js web server is running, for React to communicate with.
I tried adding port 1234 to my load balancer listener settings, although it made no difference. Noticeably, the load balancer health check panel seems to have only one value, which is currently HTTP:80/index.html. I assume the load balancer can listen on both port 80 and 1234 (even though it can only perform a health check on one port number)?
Do I need to use action groups or something else to open up the port? Please help, any advice much appreciated.
Many thanks,
Load balancer settings
Infrastructure
I am using the following
EC2 (free tier) with the two code projects installed (React website and node server on the same machine in different directories)
Certificate created (using Certificate Manager)
I have created a CloudFront distribution and verified it using email. My certificate was selected in CloudFront as the custom SSL certificate
I have a classic load balancer (the instance points to my only EC2) and the status is InService. When I visit the load balancer's DNS name I see my React website. The load balancer listens on HTTP port 80. I've added port 1234, but this didn't help
Note:
Please note this project is to learn AWS, React and NodeJs so if things are strange please indicate
EC2 instance screenshot
Security group screenshot
Load balancer screenshot
Target group screenshot
An attempt to register a target group
Thank you for having clarified your architecture.
I would keep CloudFront out of the game for now and be sure your setup works with just the load balancer. Once everything is configured correctly, you can easily add CloudFront as a next step. In general, for all things in IT, it is easier to build a simple system that works and increase complexity one step at a time than to debug a complex system that does not work.
The idea is to have an Application Load Balancer with two listeners, one for the web (TCP 80) and one for the API (TCP 1234). The ALB will have two target groups (one for each port on your EC2 instance) and you will create listener rules to forward the correct port to the correct target group. Please read "Application Load Balancer components" to understand how ALBs work.
Here are a couple of things to check:
be sure you have two listeners and two target groups on your Application Load Balancer
the load balancer must be in a security group allowing TCP 80 and TCP 1234 from anywhere (0.0.0.0/0) (let's say SG-001)
the EC2 instance must be in a security group allowing TCP connections on port 1234 (for the API) and 80 (for the web site) only from source SG-001 (just the load balancer)
After having written all this, I realise you are using a Classic Load Balancer. This should work as well; just be sure your EC2 instance has the correct security group (two rules, one for each port)
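The two security-group checks above can be sketched with the AWS CLI. The group IDs are placeholders for the load balancer's security group (SG-001 in the text) and the EC2 instance's security group:

```shell
# Placeholder IDs: sg-0aaa111lb is the load balancer's SG (SG-001),
# sg-0bbb222ec2 is the EC2 instance's SG.

# Load balancer SG: allow web and API traffic from anywhere
aws ec2 authorize-security-group-ingress --group-id sg-0aaa111lb \
    --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0aaa111lb \
    --protocol tcp --port 1234 --cidr 0.0.0.0/0

# Instance SG: allow ports 80 and 1234 only from the load balancer's SG
aws ec2 authorize-security-group-ingress --group-id sg-0bbb222ec2 \
    --protocol tcp --port 80 --source-group sg-0aaa111lb
aws ec2 authorize-security-group-ingress --group-id sg-0bbb222ec2 \
    --protocol tcp --port 1234 --source-group sg-0aaa111lb
```

Using --source-group instead of a CIDR is what restricts the instance to traffic coming only from the load balancer.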

Loadbalancer for multiple web applications on single EC2 cluster

This may seem obvious to people who have worked with AWS, but I am having a lot of trouble figuring out how to set up a load balancer on two EC2 instances which are hosting multiple websites.
We have two Windows 2012 R2 machines set up. I have created one ELB, and from what I have read, I know you can point that ELB to one location (assuming it's the default site on the servers). How would I go about pointing other ELBs that I create to the other applications on the server? (Not sure if this info is relevant, but just to add: this whole setup is part of a VPC, domain controller environment, and the web servers are in a public subnet.)
One way to solve this is by running your applications as multiple IIS websites.
Each website should have a different site binding with a different host name. You could use the DNS name of the load balancer for each website.
Alternatively, you can use a domain name configured in Route 53 and an A record to point to the load balancer.
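As a sketch of the Route 53 option, an alias A record pointing one site's host name at the load balancer could look like this. The hosted zone ID, domain, ELB DNS name, and the ELB's canonical hosted zone ID are all placeholders you would look up for your own setup:

```shell
# All values below are placeholders; look them up for your own zone and ELB.
aws route53 change-resource-record-sets \
    --hosted-zone-id Z111111EXAMPLE \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "app1.example.com",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z222222EXAMPLE",
            "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'
```

Each IIS site binding would then match on its own host name (app1.example.com, app2.example.com, ...), all resolving to the same load balancer.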