Why does a GCP load balancer frontend have only two ports enabled? - google-cloud-platform

I am trying to create a load balancer in GCP. I have created two instance groups, and each instance group has a single VM attached to it. One VM has port 80 enabled and the other has port 86 enabled.
The moment I create a load balancer, I find the frontend IP configuration is always on port 80.
I am looking for something like ip:80 and ip:86. Since I am new to GCP, I am struggling with this part.

A forwarding rule and its corresponding IP address represent the frontend configuration of a Google Cloud load balancer. With Google Cloud you can create a single forwarding rule with a single IP by specifying two ports separated by a comma.
This port limitation applies to the TCP proxy load balancer and is due to the way TCP proxy load balancers are managed within GCP's internal infrastructure. It is not possible to use any port outside of this list.
For example:
Create a named port for the instance group.
gcloud compute instance-groups set-named-ports us-ig2 \
    --named-ports tcp110:110 \
    --zone us-east1-b
gcloud compute health-checks create tcp my-tcp-health-check --port 110
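As a sketch of the comma-separated ports mentioned above: a regional forwarding rule for a network load balancer accepts arbitrary ports, so a single frontend IP can listen on both 80 and 86. The rule name and target pool below are assumptions:

```shell
# Sketch: one frontend IP listening on ports 80 and 86.
# Assumes a target pool "my-pool" already exists in us-east1.
gcloud compute forwarding-rules create my-frontend \
    --region us-east1 \
    --ports 80,86 \
    --target-pool my-pool
```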

Related

AWS Load Balancer to route traffic to one target group with multiple microservices

I have an ALB which listens for HTTP traffic on port 80, and I have added a target group to the ALB listener which consists of a single EC2 machine. My EC2 instance runs multiple microservices on different ports, e.g. App1 runs on 8080, App2 on 8001, App3 on 8004, and so on. The EC2 instance again listens on port 80 for any incoming requests through the ALB. I want to do path-based routing of incoming traffic to the different app ports on EC2, for example,
"/users" -> app on 8080; "/get/info" -> 8001, etc.
Is there a way to achieve this? Or a better way to do what I'm trying? Right now I have set up iptables rules to route traffic from port 80 of the EC2 instance to a single port, i.e. 8080, but that serves only one of my many microservices. How can I configure it to serve all of them?
This is exactly what an Application Load Balancer is designed to do.
You can create multiple Target Groups. Each target group has:
A name
A target (e.g. HTTP on port 8080)
Health Check configuration to determine whether the target(s) are healthy
So, you would create one Target Group for each app you are running.
You can then associate Amazon EC2 instances with each Target Group. In your case, since everything is running on a single Amazon EC2 instance, you can associate the same instance with all Target Groups.
Then, create the Application Load Balancer (or associate the Target Groups to an existing Application Load Balancer).
In the Application Load Balancer configuration, go to the Listeners tab and add rules that send a particular path (eg /users) to a particular target group.
See:
Tutorial: Use Path-Based Routing with Your Application Load Balancer - Elastic Load Balancing
Listeners for Your Application Load Balancers - Elastic Load Balancing
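As a sketch, the target group and listener rule described above can also be created with the AWS CLI; the names, VPC ID, and ARNs below are placeholders:

```shell
# Create a target group for one app (App1 on port 8080).
aws elbv2 create-target-group \
    --name app1-tg \
    --protocol HTTP \
    --port 8080 \
    --vpc-id vpc-0123456789abcdef0

# Send requests whose path starts with /users to that target group.
aws elbv2 create-rule \
    --listener-arn <listener-arn> \
    --priority 10 \
    --conditions Field=path-pattern,Values='/users*' \
    --actions Type=forward,TargetGroupArn=<app1-tg-arn>
```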

What is the equivalent of AWS Classic Load Balancer in GCP

We have created an Instance Template with ubuntu operating system. Using the instance template, we have created instance group with 3 machines.
These 3 machines are behind a TCP Loadbalancer with 8080 port enabled.
We have run the below python command on first VM.
python -m SimpleHTTPServer 8000
We see that only one instance (1/3) is healthy, which we verified with the telnet command below. Since SimpleHTTPServer is running on only one instance, the load balancer reports 1/3 instances healthy.
telnet <Loadbalancer ip> 8000
However, when we run the above command from the 2nd VM in the same instance group, we see 'Connection refused'.
telnet XX.XX.XX.XX 8000
Trying XX.XX.XX.XX...
telnet: Unable to connect to remote host: Connection refused.
Also, the same service is accessible from VMs in other instance groups; it is not accessible from within the same instance group.
We have verified the firewall rules and tested with both the 'allow all' and 'Specified protocols and ports' options.
The above use case works fine with an AWS Classic Load Balancer, but fails on GCP.
I have created a firewall rule, 'cluster-firewall-rule', with 'master-cluster-ports' as a tag. This tag has been added to the instance's Network tags. The rule allows traffic on port 8080.
What is the equivalent of AWS Classic Load Balancer in GCP?
GCP does not have an equivalent of the AWS Classic Load Balancer (CLB).
AWS CLB was the first load balancer service from AWS and was built for EC2-Classic, with subsequent support for VPC. The AWS NLB and ALB services are the modern load balancers. If you can, I suggest using one of them. See https://aws.amazon.com/elasticloadbalancing/features/#compare for a comparison between them.
If you switch, then you could use GCP's corresponding load balancer services. See https://cloud.google.com/docs/compare/aws/networking.
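As a rough sketch, the closest GCP analogue to a CLB doing plain TCP forwarding is a regional network load balancer built from a target pool and a forwarding rule; all resource names below are illustrative:

```shell
# Create a target pool and add the three instances to it.
gcloud compute target-pools create web-pool --region us-east1
gcloud compute target-pools add-instances web-pool \
    --instances vm-1,vm-2,vm-3 \
    --instances-zone us-east1-b

# The forwarding rule is the frontend, listening on port 8080.
gcloud compute forwarding-rules create web-frontend \
    --region us-east1 \
    --ports 8080 \
    --target-pool web-pool
```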
For my benefit:
1) Are you migrating applications from AWS to GCP?
2) What is your use case for migrating applications from AWS to GCP?

Registering an ELB to an ECS service with random host port

I'm working with the ECS service on AWS and I have this problem: the Docker containers I need to run on ECS are web services, and each container should have its internal port 80 mapped to a random port on the container host. I don't want to specify the host port for container port 80 beforehand; I'd like to let the Docker daemon find a host port for the container.
But how does the ELB fit in here? It looks to me like I have to know the host port to be able to create the ELB for the service.
Is that so?
This is now possible using an Application Load Balancer.
However, if you need to open up inbound traffic in the security group, the security group's port rules are not updated automatically.
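The usual way to get dynamic host ports with an ALB is to set hostPort to 0 in the ECS task definition's port mapping; ECS then picks an ephemeral host port and registers it with the target group for you. A minimal container-definition fragment (the name and image are illustrative):

```json
{
  "name": "webservice",
  "image": "example/webservice:latest",
  "portMappings": [
    {
      "containerPort": 80,
      "hostPort": 0,
      "protocol": "tcp"
    }
  ]
}
```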
ELB does not allow binding to a random port.
We recently implemented service discovery with ECS and Consul. We had to introduce Zuul as an intermediate layer between the ELB and our apps.
The ELB maps to Zuul on a static port, but Zuul discovers the backend services dynamically and routes traffic to them.
You need a service discovery system, such as HashiCorp's Consul, and then you need to integrate it with AWS infrastructure: https://aws.amazon.com/blogs/compute/service-discovery-via-consul-with-amazon-ecs/

GCloud Firewall rules on network load balancer

I just deployed a container to Google Cloud with Kubernetes, and everything is working except that I can't figure out how to apply firewall rules to the network load balancer to restrict access by incoming IP address.
I see that the underlying instance group has the firewall rules applied, but the service does not.
Any help is appreciated.
It appears that creating a Kubernetes Service of type LoadBalancer automatically creates a firewall rule for 0.0.0.0/0 on the given port; that rule is attached to the instance template, which is used to spin up the GCE instances.
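If the goal is to restrict which client IPs can reach the load balancer, Kubernetes Services support a loadBalancerSourceRanges field, which on GCP is translated into the provisioned firewall rule. The service name, selector, and CIDR below are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
  loadBalancerSourceRanges:
  - 203.0.113.0/24   # only this CIDR may reach the load balancer
```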

Using an HTTP Load Balancer with a container cluster on Google Cloud

I want to put an HTTP load balancer in front of a cluster running a docker image on Google Container Engine, so that I can use HTTPS without the application needing to support it.
I've created a container cluster with the following command:
gcloud container clusters create test --zone europe-west1-b --machine-type f1-micro --num-nodes 3
I then created a replication controller to run an image on the cluster which is basically nginx with static files copied onto it.
If I create a network load balancer for this, everything works fine. I can go to my load balancer IP address and see the website. However, if I create an HTTP load balancer to use the instance group created when I created the cluster, I get an HTTP 502. I also noticed that if I try browsing to the external IP address of any of the individual instances in the cluster, it refuses the connection.
There is a firewall rule already for 0.0.0.0/0 on tcp:80, for the tag used by the cluster instances, which if I'm not mistaken should allow anything anywhere to connect to port 80 on those instances. It doesn't seem to be working though.
For your services to be exposed publicly on the individual instances' public IPs, they need to be specified as NodePort services. Otherwise, the service IPs are only reachable from within the cluster, which probably explains your 502. Being reachable on the instance's public IP is required for your HTTP load balancer to work.
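A minimal NodePort Service for the nginx pods described above might look like this; the names and label selector are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-static
spec:
  type: NodePort          # exposes the service on a port on every node
  selector:
    app: nginx-static     # must match the pods' labels
  ports:
  - port: 80
    targetPort: 80
```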
There's a walkthrough on using the Ingress object for HTTP load balancing on GKE that might be useful.
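With a NodePort Service in place, a minimal Ingress (which GKE turns into an HTTP load balancer) could look like this; the service name is an assumption:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-static
spec:
  defaultBackend:
    service:
      name: nginx-static   # must be a NodePort Service
      port:
        number: 80
```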