GCP HTTP Load Balancer - google-cloud-platform

I have a service running on port 8088 on one of my VMs, and I want all traffic to be sent to this port via my HTTP load balancer:
Load-Balancer-IP:8088 -> Redirect to my VM port 8088
Load-Balancer-IP-> Redirect to my VM port 8088
How do I configure this in the GCP load balancer settings? Currently my configuration looks like this.

To forward a custom port to your backend, you need to use a TCP load balancer (single region only).
Keep in mind that this is not a proxy but port forwarding, so SSL certificates aren't managed on the load balancer. If you want to use one, you have to host and manage it on your VM.
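A rough gcloud sketch of that setup, assuming a VM named my-vm in us-central1-a (all names here are placeholders):

    # regional static IP for the load balancer frontend
    gcloud compute addresses create lb-ip --region us-central1

    # target pool containing the single VM (no instance group required)
    gcloud compute target-pools create my-pool --region us-central1
    gcloud compute target-pools add-instances my-pool \
        --instances my-vm --instances-zone us-central1-a

    # frontend rule forwarding port 8088 to the pool
    gcloud compute forwarding-rules create fr-8088 --region us-central1 \
        --address lb-ip --ports 8088 --target-pool my-pool

Because this is passthrough forwarding, the original destination port is preserved; to also have plain port-80 traffic reach the service on 8088, you would add a second forwarding rule for port 80 and a local redirect (for example an iptables rule) from 80 to 8088 on the VM.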

Related

AWS Lightsail Load Balancer: Change default ports from 80/443

So I've set up an AWS Lightsail load balancer and attached it to a single instance.
My instance is running a REST API on port 8080. I'd like to be able to route HTTP (and, down the track, HTTPS) requests hitting the front end of the load balancer to port 8080 on my instance. By default the load balancer routes to port 80 on the attached instance.
I'd also like to change the default ports on the load balancer. The load balancer listens on ports 80 & 443. It says these are 'defaults' in the AWS Lightsail console.
I'm struggling to find any settings related to changing default ports or port forwarding.
Any help would be much appreciated...
It seems it's not possible to change the default ports of an AWS Lightsail load balancer. Lightsail instances install a Bitnami package which includes the Apache httpd service. This httpd service listens on port 80 by default and is expected to receive traffic from the load balancer and forward it to your application. Therefore, an application running on a Lightsail instance should be configured to be proxied by this httpd service.
If you want the load balancer to direct traffic to your application directly, not via the httpd service, just stop the httpd service and then start your application on port 80.
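If you go that route, a rough sketch, assuming a standard Bitnami install under /opt/bitnami (the path may differ on your image):

    # stop the bundled Apache so port 80 becomes free
    sudo /opt/bitnami/ctlscript.sh stop apache

    # then start your own application bound to port 80
    # (binding to ports below 1024 requires root or CAP_NET_BIND_SERVICE)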

AWS NLB: forwarding requests to different ports of a single host based on path

With this flow:
external world --> AWS API Gateway ---> VPC Link ---> Network Load Balancer ---> my single EC2 instance
How can I configure the AWS Network Load Balancer such that:
Requests to https://myapp.com are routed to port 80 of my EC2 instance.
Requests to https://myapp.com/api/* are routed to port 3000 of my EC2 instance.
?
Currently I have only configured one listener on the NLB that listens on port 80, and all traffic from the API Gateway is routed to port 80 of my EC2 instance.
I have found that with an Application Load Balancer you can configure "Rules" that map paths to different ports: Path based routing in AWS ALB to single host with multiple ports
Is this available with NLB?
This is not possible with the Network Load Balancer, because it operates on a level of the network stack that has no concept of Paths.
The NLB operates on Layer 4 and supports the protocols TCP and UDP. These essentially create a connection between ports on two machines that allow data to flow between them.
Paths, as in HTTP(S) paths, are a Layer 5+ concept and belong to the HTTP protocol. They're not available to the NLB, because it can only work with data that's guaranteed to be available at its layer.
You can use an Application Load Balancer as the target for your Network Load Balancer and then configure the path-based rules there, because the ALB is a Layer 5+ load balancer and understands HTTP.
Here is a blog detailing this: Application Load Balancer-type Target Group for Network Load Balancer
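The rough AWS CLI shape of that setup, with placeholder names, ARNs and IDs, might look like this:

    # ALB-type target group for the NLB (the protocol must be TCP)
    aws elbv2 create-target-group --name alb-as-target --target-type alb \
        --protocol TCP --port 80 --vpc-id <vpc-id>

    # register the existing ALB as the target
    aws elbv2 register-targets --target-group-arn <alb-target-group-arn> \
        --targets Id=<alb-arn>,Port=80

    # NLB listener forwarding to that target group
    aws elbv2 create-listener --load-balancer-arn <nlb-arn> --protocol TCP --port 80 \
        --default-actions Type=forward,TargetGroupArn=<alb-target-group-arn>

    # path-based rule on the ALB listener: /api/* -> the port-3000 target group
    aws elbv2 create-rule --listener-arn <alb-listener-arn> --priority 10 \
        --conditions Field=path-pattern,Values='/api/*' \
        --actions Type=forward,TargetGroupArn=<port-3000-target-group-arn>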

GCP Load Balancer with IAP

Is there a way to set up the load balancer so that I can enable IAP without exposing port 443 of my application?
I would like to accept HTTPS requests at the load balancer (just to enable IAP) but serve only HTTP in my app. How can I add a forwarding rule that forwards from port 443 of the load balancer to port 80/8080 of the backend service?
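For what it's worth, with the GCP HTTPS load balancer the frontend port and the backend port are independent: the backend service targets a named port on the instance group, so the frontend can terminate HTTPS (and IAP) on 443 while the backend speaks plain HTTP. A hedged gcloud sketch, with my-ig and my-backend as placeholder names:

    # expose the app port as a named port on the instance group
    gcloud compute instance-groups set-named-ports my-ig \
        --zone us-central1-a --named-ports http:8080

    # point the backend service at that named port (the frontend stays on 443)
    gcloud compute backend-services update my-backend \
        --global --port-name http --protocol HTTP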

How to create an HTTP and HTTPS load balancer that will pass SSL through to my instances

I am trying to create a load balancer on GCP that will route HTTP and HTTPS traffic to my single instance (I'm just testing things out, so I have a single instance that serves HTTP traffic).
My instance will be serving many domains, and these domains are not owned by me but by my clients. I will simply manage the Let's Encrypt SSL certificates for these domains. They will point their domains at my service with a DNS record, e.g. service.example.com.
Can I still use GCP load balancers for HTTPS traffic with the above considerations? I essentially need the load balancers to pass all SSL traffic to my instances.
I can't seem to figure out how to create a load balancer that will pass SSL traffic to my instances. Is this possible?
If your goal is to create a load balancer that passes HTTPS (and HTTP) traffic straight through to backend instance(s), use the TCP Load Balancer.
Step 1. Create a "regional" static IP address before creating the load balancer. Create the IP address in the same region as your instance.
Step 2: Create a TCP Load Balancer. I will skip the minor details that are obvious.
Backend configuration:
Select Single region only. This will allow you to bypass having instance groups.
Select existing instances -> Select your vm.
Frontend configuration:
Protocol TCP. IP: select the static IP address that you created. Port: 80. Click Done.
Add another frontend. Protocol TCP. IP: same IP address. Port: 443. Click Done.
Once you create the load balancer, wait 5 or 10 minutes for everything to configure and start up.
Now your HTTP and HTTPS traffic will be passed directly to your backend instance(s). Note that this configuration does not use autoscaling, managed instance groups, health checks, etc.
You will manage your SSL certificates on your backend instance(s) (your Compute Engine VMs). The load balancer just passes traffic through with no SSL offload.
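Roughly the same steps with the gcloud CLI, assuming a VM named my-vm in us-central1-a (all names are placeholders):

    # Step 1: regional static IP in the VM's region
    gcloud compute addresses create lb-ip --region us-central1

    # Step 2: target pool with the single VM, plus one frontend rule per port
    gcloud compute target-pools create my-pool --region us-central1
    gcloud compute target-pools add-instances my-pool \
        --instances my-vm --instances-zone us-central1-a
    gcloud compute forwarding-rules create fr-http --region us-central1 \
        --address lb-ip --ports 80 --target-pool my-pool
    gcloud compute forwarding-rules create fr-https --region us-central1 \
        --address lb-ip --ports 443 --target-pool my-pool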

Issues with EC2 Load balancer, SNI, multiple SSL domains on the same server

I am having issues setting up an EC2 load balancer in front of an instance that has multiple domains protected by SSL.
Is it possible to make the load balancer pass the HTTPS request as is, and have it decrypted at the server level? If so, how do I set that up?
I have a standard LAMP setup on an EC2 instance.
On your Elastic Load Balancer, configure a TCP listener that listens on port 443 and forwards to port 443 on the instances. This will allow your EC2 instances to perform the SSL termination.
Note that you won't be able to use Sticky Sessions in this configuration.
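Assuming a Classic Load Balancer, the equivalent AWS CLI call (the load balancer name is a placeholder) would be something like:

    # TCP passthrough listener: 443 on the ELB -> 443 on the instances
    aws elb create-load-balancer-listeners --load-balancer-name my-elb \
        --listeners "Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=443"

With SSL terminated on the instances, Apache can then use SNI to pick the right certificate for each domain.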