I just deployed a container with Kubernetes to Google Cloud, and everything works except that I couldn't figure out how to apply firewall rules to the network load balancer to restrict access by source IP address.
I see that the underlying instance group has the firewall rules applied, but not the service.
Any help is appreciated.
It appears that creating a Kubernetes Service of type LoadBalancer automatically creates a firewall rule for 0.0.0.0/0 on the given port; the rule is attached to the instance template, and that template is used to spin up the GCE instances.
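As a possible workaround (a sketch only; the rule name and CIDR below are placeholders, and the `k8s-fw-` naming convention is an assumption about how GKE names these rules), you could locate the auto-created rule and narrow its source ranges with gcloud:

```shell
# List the firewall rules Kubernetes created for the cluster's load balancers;
# on GKE these are typically named "k8s-fw-<generated id>" (naming may vary)
gcloud compute firewall-rules list --filter="name~^k8s-fw-"

# Narrow the auto-created 0.0.0.0/0 rule to a specific source range
# (replace the rule name and CIDR with your own values)
gcloud compute firewall-rules update k8s-fw-a1b2c3 \
    --source-ranges 203.0.113.0/24
```

Be aware that Kubernetes manages this rule and may overwrite manual changes when the service is updated, so verify the behavior in your cluster.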
I want to know which port GKE uses when performing health checks on the backend services.
Does it use the service port declared in the service YAML, or some other specific port? I'm having trouble getting the backend services healthy.
Google Cloud has special routes for the load balancers and their associated health checks.
Routes that facilitate communication between Google Cloud health check probe systems and your backend VMs exist outside your VPC network, and cannot be removed. However, your VPC network must have ingress allow firewall rules to permit traffic from these systems.
For health checks to work you must create ingress allow firewall rules so that traffic from Google Cloud probers can connect to your backends. You can refer to this documentation.
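For example (a sketch; the rule name and target tag are placeholders), an ingress allow rule for Google's health check probe ranges might look like:

```shell
# Allow Google Cloud health check probes (130.211.0.0/22 and 35.191.0.0/16)
# to reach backends tagged "allow-health-checks" (placeholder tag name)
gcloud compute firewall-rules create allow-health-checks \
    --direction INGRESS \
    --action ALLOW \
    --rules tcp \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 \
    --target-tags allow-health-checks
```

The tag then needs to be applied to the backend instances (or the instance template) for the rule to take effect.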
I am able to make the backend service work as an instance group if I enable "Allow HTTP access" while creating the members of the instance group.
However, I want to disable this and allow traffic to reach the instances only through the load balancer's external IP. It isn't working. What I did was define a firewall rule in the subnet containing the instance group, with the destination set to the network tags of the instance group members and the source defined as the load balancer's IP as a range (see the attached screenshot of the rule settings).
Create VPC Firewall rules that only allow traffic from the load balancer and health check service.
130.211.0.0/22
35.191.0.0/16
34.96.0.0/20
34.127.192.0/18
References:
Firewall rules
Configuring a firewall rule
What you're looking at is fine; you can do it.
The steps are as I suggested in my comment, expanded a little. I will summarize them in the list below and leave you the link to a Qwiklab, where you can check the steps along with the code to do it yourself.
Basically:
Create the instances or instance group with the corresponding health check.
Configure the load balancer.
Set the traffic to the new load balancer and build the proxy.
Create the HTTPS load balancer and send the traffic to the proxy.
https://www.qwiklabs.com/focuses/12007?catalog_rank=%7B%22rank%22%3A1%2C%22num_filters%22%3A0%2C%22has_search%22%3Atrue%7D&parent=catalog&search_id=15082883
I think the link creates instances one by one, but the steps should be the same for an instance group.
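The steps above can be sketched with gcloud roughly like this (all resource names are illustrative; the instance group, health check, and SSL certificate are assumed to already exist):

```shell
# Backend service wired to an existing instance group and health check
gcloud compute backend-services create web-backend \
    --protocol HTTP --health-checks http-basic-check --global
gcloud compute backend-services add-backend web-backend \
    --instance-group my-ig --instance-group-zone europe-west1-b --global

# URL map and HTTPS proxy (terminates TLS with an existing certificate)
gcloud compute url-maps create web-map --default-service web-backend
gcloud compute target-https-proxies create https-proxy \
    --url-map web-map --ssl-certificates my-cert

# Global forwarding rule: the load balancer's frontend on port 443
gcloud compute forwarding-rules create https-rule \
    --global --target-https-proxy https-proxy --ports 443
```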
I am trying to create a load balancer in GCP. I have created two instance groups, each with a single VM attached. One VM serves on port 80 and the other serves on port 86.
The moment I create a load balancer, I find the frontend IP configuration is always set to port 80.
I am looking for something like ip:80 and ip:86. Since I am new to GCP, I am struggling with this part.
A forwarding rule and its corresponding IP address represent the frontend configuration of a Google Cloud load balancer. With Google Cloud, you can create a single forwarding rule with a single IP address by specifying the two ports separated by a comma.
This port limitation applies to the TCP proxy load balancer and is due to the way TCP proxy load balancers are managed within GCP's internal infrastructure. It is not possible to use any port outside of this list.
For example:
Create a named port for the instance group.
gcloud compute instance-groups set-named-ports us-ig2 \
    --named-ports tcp110:110 \
    --zone us-east1-b
gcloud compute health-checks create tcp my-tcp-health-check --port 110
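Continuing that sketch (resource names beyond those above are illustrative), the remaining pieces for a TCP proxy load balancer on port 110 would be roughly:

```shell
# Backend service using the named port and health check created above
gcloud compute backend-services create my-tcp-backend \
    --protocol TCP --health-checks my-tcp-health-check \
    --port-name tcp110 --global
gcloud compute backend-services add-backend my-tcp-backend \
    --instance-group us-ig2 --instance-group-zone us-east1-b --global

# TCP proxy and its frontend forwarding rule
gcloud compute target-tcp-proxies create my-tcp-proxy \
    --backend-service my-tcp-backend
gcloud compute forwarding-rules create my-tcp-rule \
    --global --target-tcp-proxy my-tcp-proxy --ports 110
```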
Today, I tried to set up a blog on Google Cloud Platform.
So, I created a Compute Engine instance and installed Apache2 on Ubuntu 16.
And then I clicked the external IP address, but it shows me "connection denied..".
Why does this happen?
I allowed HTTPS & HTTP traffic as well.
And I can't find a menu like AWS's Security Groups...
So, this problem is irritating me...
(I'm not a native English speaker, so the documentation is hard to read. Please give me a tip on this matter.)
TL;DR - You need to open up ports using firewall rules to allow ingress traffic into your VMs.
Google Compute Engine (GCE) blocks all traffic to your VMs by default to keep your infrastructure secure. You can open up ports as needed and manage the security yourself. The automatically created default network has a few exceptions that allow traffic from other VMs in the network, but it still does not allow traffic from outside the network.
Firewalls
Each VPC network has its own firewall controlling access to the instances.

All traffic to instances, even from other instances, is blocked by the firewall unless firewall rules are created to allow it. The exception is the default VPC network that is created automatically with each project. This network has certain automatically created default firewall rules.

For all VPC networks except the automatically created default VPC network, you must create any firewall rules you need. To allow incoming network connections on a manually created VPC network, you need to set up firewall rules to permit these connections. Each firewall rule represents a single rule that determines what connections are permitted to enter or leave instances. It is possible to have many rules and to be as general or specific with these rules as you need. For example, you can create a firewall rule that allows all traffic through port 80 to all instances, or you can create a rule that only allows traffic from one specific IP or IP range to one specific instance.

Firewall rules are connection tracking, and therefore only regulate the initial connection. Once a connection has been established with an instance, traffic is permitted in both directions over that connection.
Since you mention the apache2 package on Ubuntu, the instructions here will guide you through opening up port 80 on your VM and making it accessible through the VM's public IP. You can do the same for any additional ports as needed.
Using gcloud to allow ingress traffic for tcp:80 into your VM
# Create a new firewall rule that allows INGRESS tcp:80 with VMs containing tag 'allow-tcp-80'
gcloud compute firewall-rules create rule-allow-tcp-80 --source-ranges 0.0.0.0/0 --target-tags allow-tcp-80 --allow tcp:80
# Add the 'allow-tcp-80' tag to a VM named VM_NAME
gcloud compute instances add-tags VM_NAME --tags allow-tcp-80
# If you want to list all the GCE firewall rules
gcloud compute firewall-rules list
Using Cloud Console to allow ingress traffic for tcp:80 into your VM
Menu -> Networking -> Firewall Rules
Create Firewall Rule
Choose the following settings for the firewall rule:
Name for the rule - rule-allow-tcp-80 or any other name you prefer for this firewall rule.
Direction is ingress
Action on match is Allow
Targets is Specified target tags
Target tags is allow-tcp-80
Source IP ranges is 0.0.0.0/0 (or if you have a set of IP ranges you know will be the only ones accessing this, use them instead for stronger restriction)
Protocols and ports is tcp:80
Select Create button to create this firewall rule.
Once you've created the above firewall rule you will need to add the tag allow-tcp-80 to all the instances where this rule needs to be applied. In your case:
Open up the GCE VM Instances page
Select the instance where Apache is running
In the VM instance details page, select the Edit link on the very top.
In the Network Tags box, enter allow-tcp-80 to apply the tag to this instance.
Select Save to save the changes.
Now give it a few seconds to a few minutes for the changes to take effect, and you will be able to access the web server through the VM's external IP.
You can also go through the documentation for Firewall rules to get a better understanding of how they work and how to configure them.
WARNING: By using a source range of 0.0.0.0/0, you're opening up the port on the VM to the entire internet. This lets clients anywhere in the world connect to the application running on this port. Be fully aware of the security implications of doing this.
I want to put an HTTP load balancer in front of a cluster running a docker image on Google Container Engine, so that I can use HTTPS without the application needing to support it.
I've created a container cluster with the following command:
gcloud container clusters create test --zone europe-west1-b --machine-type f1-micro --num-nodes 3
I then created a replication controller to run an image on the cluster which is basically nginx with static files copied onto it.
If I create a network load balancer for this, everything works fine. I can go to my load balancer IP address and see the website. However, if I create an HTTP load balancer to use the instance group created when I created the cluster, I get an HTTP 502. I also noticed that if I try browsing to the external IP address of any of the individual instances in the cluster, it refuses the connection.
There is a firewall rule already for 0.0.0.0/0 on tcp:80, for the tag used by the cluster instances, which if I'm not mistaken should allow anything anywhere to connect to port 80 on those instances. It doesn't seem to be working though.
For your services to be exposed publicly on the individual instances' public IPs, they need to be specified as NodePort services. Otherwise, the service IPs are only reachable from within the cluster, which probably explains your 502. Being reachable on the instance's public IP is required for your HTTP load balancer to work.
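As a sketch (assuming, hypothetically, that your replication controller is named my-nginx; adjust the names to your setup), exposing it as a NodePort service could look like:

```shell
# Expose the replication controller as a NodePort service, so every node
# serves the app on an allocated port in the 30000-32767 range
kubectl expose rc my-nginx --type=NodePort --port=80 --name=my-nginx-svc

# Check which node port was allocated
kubectl get svc my-nginx-svc
```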
There's a walkthrough on using the Ingress object for HTTP load balancing on GKE that might be useful.