We have created an instance template with the Ubuntu operating system. Using the instance template, we have created an instance group with 3 machines.
These 3 machines are behind a TCP load balancer with port 8080 enabled.
We have run the below Python command on the first VM.
python -m SimpleHTTPServer 8000
We see that one instance's health check (1/3) is successful, and we have tested it with the telnet command. Since SimpleHTTPServer is running on only one instance, the load balancer shows (1/3) instances as healthy.
telnet <load balancer IP> 8000
However, when we run the above command from the 2nd VM in the same instance group, we see 'Connection refused'.
telnet XX.XX.XX.XX 8000
Trying XX.XX.XX.XX...
telnet: Unable to connect to remote host: Connection refused.
Also, the same service is accessible from VMs in other instance groups; it is not accessible from within the same instance group.
We have verified the firewall rules and have tested with both the 'Allow all' and 'Specified protocols and ports' options.
The above use case works fine with an AWS Classic Load Balancer, but it fails on GCP.
I have created a firewall rule, 'cluster-firewall-rule', with 'master-cluster-ports' as a tag. This tag has been added to the instance's network tags. The rule allows traffic on port 8080.
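A sketch of that rule as it would be created from the CLI; the network name and source range below are assumptions, not taken from the setup described above:

# Placeholder network/source range; rule name, tag, and port are from the description.
gcloud compute firewall-rules create cluster-firewall-rule \
    --network default \
    --allow tcp:8080 \
    --target-tags master-cluster-ports \
    --source-ranges 0.0.0.0/0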
What is the equivalent of AWS Classic Load Balancer in GCP?
GCP does not have an equivalent of the AWS Classic Load Balancer (CLB).
AWS CLB was the first load balancer service from AWS and was built for EC2-Classic, with subsequent support for VPC. The AWS NLB and ALB services are the modern load balancers. If you can, I suggest using one of them. See https://aws.amazon.com/elasticloadbalancing/features/#compare for a comparison between them.
If you switch, then you could use GCP's corresponding load balancer services. See https://cloud.google.com/docs/compare/aws/networking.
For my benefit:
1) Are you migrating applications from AWS to GCP?
2) What is your use case for migrating applications from AWS to GCP?
As the title says, how do I set a Network ACL on both EC2 instances to allow curl from one instance to another?
In my case, the user data did not work for some reason when I created VM 2, which would host the Nginx server, so I installed Nginx manually and ran it. For the NACL:
VM 1:
  Inbound/Outbound | Custom TCP | 1024-65535 | IP of VM 2 | Allow
VM 2:
  Inbound  | Custom TCP | port of the Nginx service | IP of VM 1 | Allow
  Outbound | Custom TCP | 1024-65535 | IP of VM 1 | Allow
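Here is a minimal sketch of VM 2's inbound rule with the AWS CLI; the ACL ID, rule number, Nginx port, and VM 1 IP are all placeholders:

# Allow VM 1 to reach the Nginx port on VM 2 (placeholder values throughout).
aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --ingress \
    --rule-number 100 \
    --protocol tcp \
    --port-range From=80,To=80 \
    --cidr-block 10.0.1.10/32 \
    --rule-action allow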
I hope this helps.
I am trying to create a load balancer in GCP. I have created two instance groups, and each instance group has a single VM attached to it. One VM has port 80 enabled and the other has port 86 enabled.
The moment I create a load balancer, I find the frontend IP configuration is always enabled on port 80.
I am looking for something like this: ip:80 and ip:86. Since I am new to GCP, I am struggling with this part.
A forwarding rule and its corresponding IP address represent the frontend configuration of a Google Cloud load balancer. With Google Cloud, you can create a single forwarding rule with a single IP by specifying two ports separated by a comma.
This port limitation applies to the TCP proxy load balancer and is due to the way TCP proxy load balancers are managed within GCP's internal infrastructure. It is not possible to use any port outside of this list.
For example:
Create a named port for the instance group.
gcloud compute instance-groups set-named-ports us-ig2 \
    --named-ports tcp110:110 \
    --zone us-east1-b
gcloud compute health-checks create tcp my-tcp-health-check --port 110
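To sketch the remaining steps under the same setup (the backend service, proxy, and frontend names below are placeholders):

# Attach the instance group to a backend service that uses the named port
# and the health check created above.
gcloud compute backend-services create my-tcp-backend \
    --global \
    --protocol TCP \
    --port-name tcp110 \
    --health-checks my-tcp-health-check

gcloud compute backend-services add-backend my-tcp-backend \
    --global \
    --instance-group us-ig2 \
    --instance-group-zone us-east1-b

gcloud compute target-tcp-proxies create my-tcp-proxy \
    --backend-service my-tcp-backend

# The forwarding rule (frontend) listens on the chosen port.
gcloud compute forwarding-rules create my-tcp-frontend \
    --global \
    --target-tcp-proxy my-tcp-proxy \
    --ports 110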
I've got a web application running as an AWS ECS Fargate task. The task consists of 2 Docker containers: nginx exposing port 80, running as a reverse proxy and forwarding requests to an ASP.NET Core web application exposing port 5000. The URL configured in nginx.conf for the upstream server is 127.0.0.1:5000, and the task is set up with container networking (awsvpc).
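A minimal sketch of the nginx.conf described above, reconstructed from the description rather than taken from the actual file:

events {}
http {
    server {
        listen 80;
        location / {
            # In awsvpc mode both containers share the task ENI,
            # so the app container is reachable on localhost.
            proxy_pass http://127.0.0.1:5000;
        }
    }
}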
The ECS Service is defined as an autoscaling group of 1 task. When I run the service, AWS sets up an elastic ENI with a public and private ip. I can hit that public ip in a browser and get back a response from my web app, so it seems the ECS part is setup properly.
Next, I've defined an ALB with an HTTP port 80 listener forwarding to a target group for the ECS service. The target group shows the private IP of the task ENI, so it appears to be set up correctly. Health checks are configured as a simple "/", and both the task and the ALB target group report them as healthy.
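The same status can be checked from the CLI; the target group ARN below is a placeholder:

aws elbv2 describe-target-health \
    --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef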
However - when I navigate to the DNS name for the LB, I'm unable to get a response.
Additionally - this is running in a non-default VPC. Route table includes an IGW.
Not sure what else I should be checking, so would appreciate some help in troubleshooting further.
I have one CentOS instance in AWS and another instance in Hybris Cloud.
The AWS instance is running a Jenkins server, and I want to install a slave for it on the Hybris Cloud instance.
I have followed the steps to establish an SSH connection between the two machines but still can't get them to connect.
What am I missing? Is there any special SSH configuration for establishing connection between different cloud providers?
I can't speak for Hybris, but AWS has a security group for your EC2 instance. The security group for your AWS instance must allow port 22 from the IP address of your Hybris server (or a range of IP addresses). In addition, the host firewall on the EC2 Jenkins server must allow this as well.
Likewise, the Hybris server must have the same ports opened up.
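For example, a sketch of that security group rule with the AWS CLI (the group ID and the Hybris server IP are placeholders):

# Allow SSH from the Hybris server's public IP (placeholder values).
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 22 \
    --cidr 203.0.113.25/32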
If you continue having issues after checking security groups and host firewalls, check the Network ACL in AWS. If you are in your default VPC and there have been no alterations, the Network ACL should allow for your use case. However, if you are in a non-default VPC, whoever created it may have adjusted the Network ACL.
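To inspect those NACL entries, something like this works (the VPC ID is a placeholder):

aws ec2 describe-network-acls \
    --filters Name=vpc-id,Values=vpc-0123456789abcdef0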
I want to put an HTTP load balancer in front of a cluster running a docker image on Google Container Engine, so that I can use HTTPS without the application needing to support it.
I've created a container cluster with the following command:
gcloud container clusters create test --zone europe-west1-b --machine-type f1-micro --num-nodes 3
I then created a replication controller to run an image on the cluster which is basically nginx with static files copied onto it.
If I create a network load balancer for this, everything works fine. I can go to my load balancer IP address and see the website. However, if I create an HTTP load balancer to use the instance group created when I created the cluster, I get an HTTP 502. I also noticed that if I try browsing to the external IP address of any of the individual instances in the cluster, it refuses the connection.
There is already a firewall rule for 0.0.0.0/0 on tcp:80, for the tag used by the cluster instances, which, if I'm not mistaken, should allow anything anywhere to connect to port 80 on those instances. It doesn't seem to be working, though.
For your services to be exposed publicly on the individual instances' public IPs, they need to be specified as NodePort services. Otherwise, the service IPs are only reachable from within the cluster, which probably explains your 502. Being reachable on the instance's public IP is required for your HTTP load balancer to work.
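A minimal sketch of what that looks like with kubectl, assuming the replication controller is named my-nginx (a placeholder):

# Expose the rc's pods on a port of every node so the HTTP load balancer can reach them.
kubectl expose rc my-nginx --port=80 --target-port=80 --type=NodePort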
There's a walkthrough on using the Ingress object for HTTP load balancing on GKE that might be useful.