Remove default firewall rule from single Google Compute Engine - google-cloud-platform

There are 4 "default" firewall rules defined.
I want to disable one particular rule, default-allow-ssh, but only for a specific host.
For some reason I don't see a default-allow-ssh tag in the output of gcloud compute instances describe $VM:
tags:
  fingerprint: ioTF8nBLmIk=
  items:
  - allow-tcp-443
  - allow-tcp-80
I checked the rule definition:
gcloud compute firewall-rules describe default-allow-ssh
allowed:
- IPProtocol: tcp
  ports:
  - '22'
description: Allow SSH from anywhere
direction: INGRESS
disabled: false
kind: compute#firewall
name: default-allow-ssh
network: https://www.googleapis.com/compute/v1/projects/.../global/networks/default
priority: 65534
selfLink: https://www.googleapis.com/compute/v1/projects/.../global/firewalls/default-allow-ssh
sourceRanges:
- 0.0.0.0/0
I see no targetTags or sourceTags in the definition. Does that mean the rule applies to the entire project and can't be disabled per host?

I see no targetTags or sourceTags in definition. Does that mean that
rule is applied to entire project and can't be disabled per host?
Yes, exactly; you can find more about the default firewall rules here.
It's best practice to make this rule less permissive by using tags or source IPs. However, you could also create another rule that denies SSH traffic to those specific VMs using a tag, perhaps allowing SSH only from a bastion host.
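A minimal sketch of that deny-by-tag approach, assuming a hypothetical rule name deny-ssh-tagged and tag no-ssh; the lower priority number takes precedence over the default rule's 65534:
gcloud compute firewall-rules create deny-ssh-tagged --network=default --direction=INGRESS --action=DENY --rules=tcp:22 --priority=900 --target-tags=no-ssh --source-ranges=0.0.0.0/0
gcloud compute instances add-tags $VM --zone=$ZONE --tags=no-ssh
SSH is then blocked on VMs carrying the no-ssh tag, while default-allow-ssh keeps working for every other instance in the network.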

Related

Unable to SSH into my Compute Engine VM instance on Google Cloud

I am trying to SSH into my compute engine VM instance on Google Cloud.
I am following the instructions to set up a regional external HTTP(S) load balancer with VM instance group backends
I have created a firewall rule to allow SSH traffic.
gcloud compute firewall-rules describe fw-allow-ssh returns:
allowed:
- IPProtocol: tcp
  ports:
  - '22'
creationTimestamp: '2022-09-13T07:55:49.187-07:00'
description: ''
direction: INGRESS
disabled: false
id: '3158638846670612250'
kind: compute#firewall
logConfig:
  enable: false
name: fw-allow-ssh
network: https://www.googleapis.com/compute/v1/projects/possible-post-360304/global/networks/default
priority: 1000
selfLink: https://www.googleapis.com/compute/v1/projects/possible-post-360304/global/firewalls/fw-allow-ssh
sourceRanges:
- 0.0.0.0/0
targetTags:
- load-balanced-backend
Apart from that, I have two more firewall rules: fw-allow-health-check and fw-allow-proxies.
gcloud compute firewall-rules describe fw-allow-health-check returns:
allowed:
- IPProtocol: tcp
  ports:
  - '80'
creationTimestamp: '2022-09-12T21:29:49.688-07:00'
description: ''
direction: INGRESS
disabled: false
id: '2007525931317311954'
kind: compute#firewall
logConfig:
  enable: false
name: fw-allow-health-check
network: https://www.googleapis.com/compute/v1/projects/possible-post-360304/global/networks/lb-network
priority: 1000
selfLink: https://www.googleapis.com/compute/v1/projects/possible-post-360304/global/firewalls/fw-allow-health-check
sourceRanges:
- 130.211.0.0/22
- 35.191.0.0/16
targetTags:
- load-balanced-backend
gcloud compute firewall-rules describe fw-allow-proxies returns:
allowed:
- IPProtocol: tcp
  ports:
  - '80'
  - '443'
  - '8080'
creationTimestamp: '2022-09-12T21:33:19.582-07:00'
description: ''
direction: INGRESS
disabled: false
id: '3828652160003716832'
kind: compute#firewall
logConfig:
  enable: false
name: fw-allow-proxies
network: https://www.googleapis.com/compute/v1/projects/possible-post-360304/global/networks/lb-network
priority: 1000
selfLink: https://www.googleapis.com/compute/v1/projects/possible-post-360304/global/firewalls/fw-allow-proxies
sourceRanges:
- 10.129.0.0/23
targetTags:
- load-balanced-backend
When I try to SSH into my VM instance from the browser, I get the following error:
Cloud IAP for TCP forwarding is not currently supported for google.com projects; attempting to use the legacy relays instead. If you are connecting to a non google.com project, continue reading. Please consider adding a firewall rule to allow ingress from the Cloud IAP for TCP forwarding netblock to the SSH port of your machine to start using Cloud IAP for TCP forwarding for better performance.
and in due course:
We are unable to connect to the VM on port 22.
What am I doing wrong here, please? Any guidance would be of great help.
Thank you!
I might not know the context and all your details, but in my personal experience -
If your firewalls are configured correctly, you should be able to make an SSH connection from some host over the 'internet', i.e. from your local machine. Identity-Aware Proxy is not required at all.
If you would like to make an SSH connection from the UI console (from the SSH 'button' in the browser), you might need to
1/ make sure that the relevant API is enabled and that you are prepared to pay for such access - see the Identity-Aware Proxy overview and Identity-Aware Proxy (API) in the console.
2/ make sure the firewalls are configured to allow SSH access from the relevant Google IP range (i.e. 35.235.240.0/20) and that those who need such access have the relevant IAM roles - see Using IAP for TCP forwarding (a sketch of such a rule follows this list).
3/ check that the VM you would like to connect to has a 'tag' mentioned in the firewall rules (if tags are used).
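A minimal sketch of the rule item 2/ refers to, assuming the default network and a hypothetical rule name allow-iap-ssh:
gcloud compute firewall-rules create allow-iap-ssh --network=default --direction=INGRESS --allow=tcp:22 --source-ranges=35.235.240.0/20
Add --target-tags if you want to limit it to specific VMs rather than the whole network.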

How to setup Kubernetes NLB Load Balancer with target group "IP" based [AWS]?

Currently, I'm exposing a k8s service using a network load balancer. It creates a network load balancer with an instance-based target group, and everything works fine. As we know, the
port on the nodes is always in the range 30000-32767.
There is a difference between the target group types: an instance-based target group preserves the client IP, whereas an IP-based one doesn't.
Now there is a problem with the security group: I want to restrict the node ports so they are only accessible from the CIDR of the load balancer. Since it is an instance-based target group, the inbound IP is always the client IP, so it is difficult to restrict access to only certain IPs.
So my plan is to switch the target group to "IP" based, so that I can restrict access to only the CIDR of the load balancer.
Is there any way to create the NLB with an IP-based target type? Could you please help me with some suggestions?
apiVersion: v1
kind: Service
metadata:
  name: nginx-router
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-path: "/healthz"
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: [tes]
    # service.beta.kubernetes.io/healthcheck-path: /healthz
spec:
  selector:
    app: nginx-router
  externalTrafficPolicy: Local
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  type: LoadBalancer
I wonder whether you really need to solve this through the Network Load Balancer, or whether a solution in Kubernetes would be preferable.
The easiest way to remove a NodePort from the cluster is simply not to define any Services of type NodePort. If some already exist, you can change them to type ClusterIP and the NodePort should be removed.
Since you wish to prevent any access to NodePorts, you can consider using a ResourceQuota to prevent the creation of any Services of type NodePort at all. This way the cluster tells the user up front that their service won't work, instead of just blocking the traffic from reaching the application, which most likely results in a hard-to-understand timeout if you don't know the specifics of the load balancer configuration. (See here for reference: https://kubernetes.io/docs/concepts/policy/resource-quotas/#object-count-quota)
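A minimal sketch of such a quota, assuming a hypothetical namespace my-namespace; services.nodeports is the object-count quota key documented at the link above:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: disable-nodeports
  namespace: my-namespace
spec:
  hard:
    services.nodeports: "0"
With this in place, any attempt to create a Service that allocates a NodePort in that namespace is rejected at admission time.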

Problem creating regional TCP load balancer

I'm trying to create a load balancer that balances traffic between 3 different zones in a given region. If I create a "global" load balancer with an external IP, everything works fine, but if I try to create a load balancer that works only with a particular subnet, the health checks consistently fail because they go to port 80 instead of the port I've specified.
Note the following output of gcloud compute backend-services get-health xx-redacted-central-lb --region=us-central1:
---
backend: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-a/instanceGroups/xx-redacted-central-a
status:
  healthStatus:
  - healthState: UNHEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-a/instances/yb-1-xx-redacted-lb-test-n2
    ipAddress: 10.152.0.90
    port: 80
  kind: compute#backendServiceGroupHealth
---
backend: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-b/instanceGroups/ac-kroger-central-b
status:
  healthStatus:
  - healthState: UNHEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-b/instances/yb-1-xx-redacted-lb-test-n1
    ipAddress: 10.152.0.92
    port: 80
  kind: compute#backendServiceGroupHealth
---
backend: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-c/instanceGroups/xx-redacted-central-c
status:
  healthStatus:
  - healthState: UNHEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-c/instances/yb-1-xx-redacted-lb-test-n3
    ipAddress: 10.152.0.4
    port: 80
  kind: compute#backendServiceGroupHealth
The health-check for this load-balancer was created with the following command:
gcloud compute health-checks create tcp xx-redacted-central-hc4 --port=5433
The backend was created like this:
gcloud compute backend-services create xx-redacted-central-lb --protocol=TCP --health-checks=xx-redacted-central-hc4 --region=us-central1 --load-balancing-scheme=INTERNAL
Full description of the backend:
gcloud compute backend-services describe xx-redacted-central-lb --region=us-central1
backends:
- balancingMode: CONNECTION
  group: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-a/instanceGroups/xx-redacted-central-a
- balancingMode: CONNECTION
  group: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-b/instanceGroups/xx-redacted-central-b
- balancingMode: CONNECTION
  group: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-c/instanceGroups/xx-redacted-central-c
connectionDraining:
  drainingTimeoutSec: 0
creationTimestamp: '2020-04-01T19:16:44.405-07:00'
description: ''
fingerprint: aOB7iT47XCk=
healthChecks:
- https://www.googleapis.com/compute/v1/projects/yugabyte/global/healthChecks/xx-redacted-central-hc4
id: '1151478560954316259'
kind: compute#backendService
loadBalancingScheme: INTERNAL
name: xx-redacted-central-lb
protocol: TCP
region: https://www.googleapis.com/compute/v1/projects/yugabyte/regions/us-central1
selfLink: https://www.googleapis.com/compute/v1/projects/yugabyte/regions/us-central1/backendServices/xx-redacted-central-lb
sessionAffinity: NONE
timeoutSec: 30
If I try to edit the backend and add a port or port name annotation, it fails to save because it considers that an invalid operation for INTERNAL load balancers.
Any ideas?
--Alan
As per the GCP documentation[1], for health checks to work you must create ingress allow firewall rules so that traffic from the Google Cloud probers can reach your backends.
You can review this documentation[2] to understand the success criteria for SSL and TCP health checks.
[1]Probe IP ranges and firewall rules
https://cloud.google.com/load-balancing/docs/health-check-concepts#ip-ranges
[2]Success Criteria
https://cloud.google.com/load-balancing/docs/health-check-concepts#criteria-protocol-ssl-tcp
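A minimal sketch of the kind of rule [1] describes, assuming the default network and the health-check port 5433 from the question; 130.211.0.0/22 and 35.191.0.0/16 are the prober ranges listed in that document:
gcloud compute firewall-rules create allow-google-health-checks --network=default --direction=INGRESS --allow=tcp:5433 --source-ranges=130.211.0.0/22,35.191.0.0/16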
Backend services must have an associated named port if their backends are instance groups. Named ports are used by load balancing services to direct traffic to specific ports on individual instances. You can assign a port name mapping to the instance group to tell the load balancer which port to use to reach the backend running the service.
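A sketch of setting such a named port on one of the instance groups from the question, assuming a hypothetical port name pgsql for port 5433 (repeat for each zone's group):
gcloud compute instance-groups set-named-ports xx-redacted-central-a --zone=us-central1-a --named-ports=pgsql:5433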
Thanks for providing the information. I can reproduce this issue at my end and find it strange that the backend health checks still point to port 80 even though the LB health check is configured for a different port. The product engineering team has been made aware of this issue; however, I don't have any ETA on the fix and implementation. You may follow thread[1] for further updates.
[1]https://issuetracker.google.com/153600927

How to open a port or all ports for Google Cloud Compute Engine

I have an IOTA IRI instance running on a VM in GCP compute engine.
The instance is using port 14265 to communicate, and checking it locally with something like curl http://localhost:14265 does get a response.
I want to open this port to the outside of the VM, so I set up a static IP and a firewall rule to allow tcp:14265; udp:14265, and still the port is not responding.
I even tried allowing all by doing:
But no luck. There is no port open except for 22 for SSH (checked with a port scanner).
I am aware this feels like a duplicate of How to open a specific port such as 9090 in Google Compute Engine, but I did try those answers and they didn't solve it for me.
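For reference, a minimal sketch of the kind of rule described above, assuming the default network and a hypothetical rule name allow-iri:
gcloud compute firewall-rules create allow-iri --network=default --direction=INGRESS --allow=tcp:14265,udp:14265 --source-ranges=0.0.0.0/0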
EDIT:
Running the two commands I was asked to run in an answer:
D:\Downloads> gcloud compute networks list
NAME MODE IPV4_RANGE GATEWAY_IPV4
default auto
D:\Downloads>gcloud compute instances describe instance-1 --zone europe-west1-b
canIpForward: false
cpuPlatform: Intel Sandy Bridge
creationTimestamp: '2017-08-22T09:33:12.240-07:00'
description: ''
disks:
- autoDelete: true
  boot: true
  deviceName: instance-1
  index: 0
  interface: SCSI
  kind: compute#attachedDisk
  licenses:
  - https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/licenses/ubuntu-1604-xenial
  mode: READ_WRITE
  source: https://www.googleapis.com/compute/v1/projects/iota-177616/zones/europe-west1-b/disks/instance-1
  type: PERSISTENT
id: '8895209582493819432'
kind: compute#instance
labelFingerprint: 42WmSpB8rSM=
machineType: https://www.googleapis.com/compute/v1/projects/iota-177616/zones/europe-west1-b/machineTypes/f1-micro
metadata:
  fingerprint: -pkE3KaIzLU=
  kind: compute#metadata
name: instance-1
networkInterfaces:
- accessConfigs:
  - kind: compute#accessConfig
    name: External NAT
    natIP: 35.187.9.204
    type: ONE_TO_ONE_NAT
  kind: compute#networkInterface
  name: nic0
  network: https://www.googleapis.com/compute/v1/projects/iota-177616/global/networks/default
  networkIP: 10.132.0.2
  subnetwork: https://www.googleapis.com/compute/v1/projects/iota-177616/regions/europe-west1/subnetworks/default
scheduling:
  automaticRestart: true
  onHostMaintenance: MIGRATE
  preemptible: false
selfLink: https://www.googleapis.com/compute/v1/projects/iota-177616/zones/europe-west1-b/instances/instance-1
serviceAccounts:
- email: 59105716861-compute#developer.gserviceaccount.com
  scopes:
  - https://www.googleapis.com/auth/devstorage.read_only
  - https://www.googleapis.com/auth/logging.write
  - https://www.googleapis.com/auth/monitoring.write
  - https://www.googleapis.com/auth/servicecontrol
  - https://www.googleapis.com/auth/service.management.readonly
  - https://www.googleapis.com/auth/trace.append
startRestricted: false
status: RUNNING
tags:
  fingerprint: 6smc4R4d39I=
  items:
  - http-server
  - https-server
zone: https://www.googleapis.com/compute/v1/projects/iota-177616/zones/europe-west1-b
It is difficult to give an exact answer without some diagnostics.
It could be that the rules are being created for one network while your instance is in a different network.
So, first of all, check the networks available in your project:
gcloud compute networks list
Secondly, check in which network your instance is located:
gcloud compute instances describe [Instance Name] --zone [Zone]
Check the firewall rules being applied to the network used by your instance:
gcloud compute firewall-rules list
Also check that the target tags are the appropriate ones.
As you can see, there are no relevant tags applied to the VM. Although the rule would apply if you targeted it to all VMs, it is good practice to use tags.
Edit your VM and add a tag (e.g. frontserver):
gcloud compute instances add-tags [INSTANCE NAME] --zone [ZONE] --tags frontserver
Now create the firewall rule and apply it to the tag you created:
gcloud beta compute firewall-rules create [NAME_OF_THE_RULE] --direction=INGRESS --priority=1000 --network=default --allow=all --source-ranges=0.0.0.0/0 --target-tags=frontserver
Check whether this works; if it does, you can run an update to restrict the rule to the desired port, protocol, and your source IP:
gcloud beta compute firewall-rules update [NAME_OF_THE_RULE] --allow=tcp:14265 --source-ranges=[your_source_IP] --target-tags=frontserver
Hope this helps; further info with examples can be found here.

Kubernetes Cluster on AWS with Kops - NodePort Service Unavailable

I am having difficulties accessing a NodePort service on my Kubernetes cluster.
Goal
set up the ALB Ingress Controller so that I can use WebSockets and HTTP/2
set up a NodePort service as required by that controller
Steps taken
Previously a Kops (version 1.6.2) cluster was created on AWS eu-west-1. The kops addon for nginx ingress was added, as well as kube-lego. ELB ingress is working fine.
Set up the ALB Ingress Controller with custom AWS keys, using the IAM profile specified by that project.
Changed service type from LoadBalancer to NodePort using kubectl replace --force
> kubectl describe svc my-nodeport-service
Name: my-node-port-service
Namespace: default
Labels: <none>
Selector: service=my-selector
Type: NodePort
IP: 100.71.211.249
Port: <unset> 80/TCP
NodePort: <unset> 30176/TCP
Endpoints: 100.96.2.11:3000
Session Affinity: None
Events: <none>
> kubectl describe pods my-nodeport-pod
Name: my-nodeport-pod
Node: <ip>.eu-west-1.compute.internal/<ip>
Labels: service=my-selector
Status: Running
IP: 100.96.2.11
Containers:
update-center:
Port: 3000/TCP
Ready: True
Restart Count: 0
(ssh into node)
$ sudo netstat -nap | grep 30176
tcp6 0 0 :::30176 :::* LISTEN 2093/kube-proxy
Results
Curl from ALB hangs
Curl from <public ip address of all nodes>:<node port for service> hangs
Expected
Curl from both ALB and directly to the node:node-port should return 200 "Ok" (the service's http response to the root)
Update:
Issues created on github referencing above with some further details in some cases:
https://github.com/kubernetes/kubernetes/issues/50261
https://github.com/coreos/alb-ingress-controller/issues/169
https://github.com/kubernetes/kops/issues/3146
By default Kops does not configure the EC2 instances to allow NodePort traffic from outside.
In order for traffic outside of the cluster to reach the NodePort, you must edit the configuration for the EC2 instances that are your Kubernetes nodes in the EC2 Console on AWS.
Once in the EC2 console, click "Security Groups." Kops should have named the original security groups it made for your cluster nodes.<your cluster name> and master.<your cluster name>.
We need to modify these security groups to allow traffic to the instances on the default NodePort range.
Click on the security group, click on rules and add the following rule.
Port range to open on the nodes and master: 30000-32767
This will allow anyone on the internet to access a NodePort on your cluster, so make sure you want these exposed.
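If you prefer the CLI, a sketch of the same rule, assuming a hypothetical security group ID sg-0123456789abcdef0 for the nodes group:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 30000-32767 --cidr 0.0.0.0/0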
Alternatively, instead of allowing it from any origin, you can allow it only from the security group created for the ALB by the alb-ingress-controller. However, since these can be re-created, it will likely be necessary to modify the rule whenever the Kubernetes service changes. I suggest specifying the NodePort explicitly so it is a predetermined, known NodePort rather than a randomly assigned one; see the sketch below.
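A minimal sketch of pinning the NodePort, using the values already shown in the question (service port 80, container port 3000, NodePort 30176); treat the names as placeholders:
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    service: my-selector
  ports:
  - port: 80
    targetPort: 3000
    nodePort: 30176
The security group rule can then target just this known port instead of the whole 30000-32767 range.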
The master's SG does not need the NodePort range opened for this to work.
So only the workers' SG needs the port range opened.