I want to put an HTTP load balancer in front of a cluster running a docker image on Google Container Engine, so that I can use HTTPS without the application needing to support it.
I've created a container cluster with the following command:
gcloud container clusters create test --zone europe-west1-b --machine-type f1-micro --num-nodes 3
I then created a replication controller to run an image on the cluster which is basically nginx with static files copied onto it.
If I create a network load balancer for this, everything works fine. I can go to my load balancer IP address and see the website. However, if I create an HTTP load balancer to use the instance group created when I created the cluster, I get an HTTP 502. I also noticed that if I try browsing to the external IP address of any of the individual instances in the cluster, it refuses the connection.
There is a firewall rule already for 0.0.0.0/0 on tcp:80, for the tag used by the cluster instances, which if I'm not mistaken should allow anything anywhere to connect to port 80 on those instances. It doesn't seem to be working though.
For your services to be exposed publicly on the individual instances' public IPs, they need to be specified as NodePort services. Otherwise, the service IPs are only reachable from within the cluster, which probably explains your 502. Being reachable on the instance's public IP is required for your HTTP load balancer to work.
There's a walkthrough on using the Ingress object for HTTP load balancing on GKE that might be useful.
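For illustration, a minimal NodePort Service plus Ingress for the nginx pods might look like the sketch below. The names, labels, and API versions are assumptions matching the Container Engine era of the question, not taken from the walkthrough:

# Sketch of a NodePort Service, assuming the replication controller's pods
# carry the label app: static-site and serve plain HTTP on port 80.
apiVersion: v1
kind: Service
metadata:
  name: static-site
spec:
  type: NodePort        # exposes the service on a port of every node, which the
                        # HTTP load balancer's health checks can then reach
  selector:
    app: static-site
  ports:
  - port: 80
    targetPort: 80
---
# An Ingress object then tells GKE to provision the HTTP(S) load balancer
# in front of that service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: static-site
spec:
  backend:
    serviceName: static-site
    servicePort: 80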
Related
I have a simple Java application listening on port 8443. I've deployed it as a Docker image into Fargate; it has a public IP address, and I can access it through the IP address just fine.
The problem is every time I redeploy the image, it gets a new IP address.
I would like to have a static hostname. For example, when I use Elastic Beanstalk and deploy a website, it will get a hostname. How do I get the same thing?
I've been following the documentation for one whole day and didn't make any progress. I've created load balancers, targets, listeners, accelerators; nothing seems to work. For example, when I create a load balancer, it doesn't tell me what the hostname is.
I'm pretty sure this is supposed to be something really easy, but I just cannot figure it out. What am I doing wrong?
You may want to create an Application Load Balancer and register your Fargate services into a Target Group for the load balancer. You only have to register your services once; if you redeploy newer versions afterwards, they will automatically be added to the Target Group.
The Application Load Balancer will provide a stable, publicly accessible hostname of the form <name>-<random-id>.<region>.elb.amazonaws.com.
For your load balancer to be reachable, it needs to be in a public subnet. It also needs to have a security group which allows traffic from the public internet and allows traffic to the registered targets.
Steps to create an ALB for your ECS cluster: AWS docs
Registering ECS services into a Target Group: AWS docs
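As a rough sketch with the AWS CLI (all names, IDs, and subnets below are placeholders; the application in the question listens on 8443):

# Create a target group for Fargate tasks; target-type must be "ip" for awsvpc/Fargate
aws elbv2 create-target-group \
    --name my-fargate-targets \
    --protocol HTTP \
    --port 8443 \
    --target-type ip \
    --vpc-id vpc-xxxxxxxx

# Create an internet-facing Application Load Balancer in public subnets
aws elbv2 create-load-balancer \
    --name my-fargate-alb \
    --scheme internet-facing \
    --type application \
    --subnets subnet-aaaa subnet-bbbb \
    --security-groups sg-xxxxxxxx

# Attach a listener that forwards incoming traffic to the target group
aws elbv2 create-listener \
    --load-balancer-arn <alb-arn> \
    --protocol HTTP \
    --port 80 \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>

The DNS name returned by create-load-balancer is the stable hostname; the ECS service is then pointed at the target group when the service is created (see the update below).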
Update:
The problem is that when I create a Target Group I cannot associate it with the service.
When you create the cluster, the AWS console asks you whether you want to deploy your containers in a VPC. You have to select yes and create a VPC.
Afterwards, get the ID of the VPC (for example, in my case: vpc-0e6...), go into your EC2 console, and create a new Application Load Balancer, placing it into that VPC.
Now, when you create a new Fargate service, you should see the Application Load Balancer available to select.
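If you prefer the CLI, the association is made when the service is created, roughly like this (cluster, task definition, container name, subnets, and ARNs are placeholders):

# Create the Fargate service and attach it to the ALB's target group;
# tasks from future deployments register into the same target group automatically.
aws ecs create-service \
    --cluster my-cluster \
    --service-name my-service \
    --task-definition my-task:1 \
    --desired-count 1 \
    --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-aaaa],securityGroups=[sg-xxxxxxxx],assignPublicIp=ENABLED}" \
    --load-balancers targetGroupArn=<target-group-arn>,containerName=my-container,containerPort=8443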
I am trying to create a load balancer in GCP. I have created two instance groups, and each instance group has a single VM attached to it. One VM has a service on port 80 and the other VM has a service on port 86.
The moment I create a load balancer, I find the frontend IP configuration is always set to port 80.
I am looking for something like this: ip:80 and ip:86. Since I am new to GCP, I am struggling with this part.
A forwarding rule and its corresponding IP address represent the frontend configuration of a Google Cloud load balancer. With Google Cloud you can create a single forwarding rule with a single IP by adding 2 ports separated by a comma.
This port limitation applies to the TCP proxy load balancer and is due to the way TCP proxy load balancers are managed within GCP's internal infrastructure. It is not possible to use a port outside of the supported list.
For example:
Create a named port for the instance group.
gcloud compute instance-groups set-named-ports us-ig2 \
    --named-ports tcp110:110 \
    --zone us-east1-b
gcloud compute health-checks create tcp my-tcp-health-check --port 110
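Continuing the sketch, the backend service then references the named port and the health check, and the instance group is added as a backend. The backend-service name and balancing mode below are illustrative, and the proxy and forwarding-rule steps are omitted:

# Create a global backend service that uses the named port and health check above
gcloud compute backend-services create my-tcp-backend-service \
    --global \
    --protocol TCP \
    --health-checks my-tcp-health-check \
    --port-name tcp110 \
    --timeout 5m

# Add the instance group as a backend
gcloud compute backend-services add-backend my-tcp-backend-service \
    --global \
    --instance-group us-ig2 \
    --instance-group-zone us-east1-b \
    --balancing-mode UTILIZATION \
    --max-utilization 0.8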
I am using EKS in my Amazon Web Services cluster and I want to deploy a service using a Load Balancer. The YAML I am using has the following annotations on it:
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
If I create the service using this YAML, it appears as created, the pods are detected, and I get a public URL to access it. If I check the Load Balancer in EC2, it appears as healthy and ready. However, when I try to reach the URL, it takes forever, literally: it keeps loading and loading, and I get no timeout, no DNS error, no message.
On the other hand, if I remove these annotations and update the service, a new Load Balancer is deployed and it works perfectly. The only difference is that the old one appears as a Network Load Balancer and this one appears as a Classic Load Balancer. Furthermore, if I manually "upgrade" the Classic Load Balancer from the AWS Web UI, a new NLB is created and it works perfectly.
Why can't I make it work directly? What should I check?
Thank you very much.
We have created an instance template with the Ubuntu operating system. Using the instance template, we have created an instance group with 3 machines.
These 3 machines are behind a TCP load balancer with port 8080 enabled.
We have run the below Python command on the first VM.
python -m SimpleHTTPServer 8000
We see that one instance's health check (1/3) is successful and have tested with the telnet command. Since SimpleHTTPServer is running on only one instance, it shows (1/3) instances as healthy.
telnet <Loadbalancer ip> 8000
However, when we run the above command from the 2nd VM in the same instance group, we see 'Connection refused'.
telnet XX.XX.XX.XX 8000
Trying XX.XX.XX.XX...
telnet: Unable to connect to remote host: Connection refused.
Also, the same service is accessible from VMs running in other instance groups. The service is not accessible from within the same instance group.
We have verified the firewall rules and have tested with both the 'Allow all' and 'Specified protocols and ports' options.
The above use case works fine with an AWS Classic Load Balancer; however, it fails on GCP.
I have created a firewall rule, 'cluster-firewall-rule', with 'master-cluster-ports' as the target tag. This tag has been added to the network tags on the instances. The rule allows traffic on port 8080.
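In gcloud terms, the rule looks roughly like this (reconstructed from the description above; the 0.0.0.0/0 source range corresponds to the 'allow all' test):

gcloud compute firewall-rules create cluster-firewall-rule \
    --allow tcp:8080 \
    --target-tags master-cluster-ports \
    --source-ranges 0.0.0.0/0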
What is the equivalent of AWS Classic Load Balancer in GCP?
GCP does not have an equivalent of the AWS Classic Load Balancer (CLB).
AWS CLB was the first load balancer service from AWS and was built for EC2-Classic, with subsequent support for VPC. The AWS NLB and ALB services are the modern LBs. If you can, I suggest using one of them. See https://aws.amazon.com/elasticloadbalancing/features/#compare for a comparison between them.
If you switch, then you could use GCP's corresponding load balancer services. See https://cloud.google.com/docs/compare/aws/networking.
For my benefit:
1) Are you migrating applications from AWS to GCP?
2) What is your use case for migrating applications from AWS to GCP?
I have a cluster in AWS which is set up as Topology=Private and has an internal load balancer. Now I'm trying to deploy an Nginx ingress load balancer for it to expose the application pods to the internet.
I am trying to understand, in such a setting, what the role of my internal load balancer will be (which I believe is an Elastic Load Balancer). Could I have this setup even without the internal load balancer? In fact, what functionality would the cluster lose without it?
It is good to have the load balancer (ELB) for HA purposes, but place the public-facing ELB in front of the nginx controller instead of behind it. You can also do custom path routing in an ALB (Layer 7). The ideal setup would be:
ELB (public, with SSL termination) --> 2 Nginx ingress controllers (for HA, have 2 instances in different subnets) --> application pods.
Apart from the ELB, the remaining components can be placed in private subnets.
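A minimal sketch of exposing the nginx ingress controller through a public ELB that terminates SSL could look like the Service below. The namespace, labels, and certificate ARN are placeholders, not values from the question:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  annotations:
    # Terminate SSL at the ELB using an ACM certificate (placeholder ARN)
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:111122223333:certificate/example
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer          # provisions a public (internet-facing) ELB
  selector:
    app: nginx-ingress        # placeholder label for the ingress controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 80            # SSL ends at the ELB; plain HTTP is sent to nginx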