Set up Cloud NAT for public GKE clusters

I'd like to set up a NAT gateway, using Cloud NAT, so that VMs/Pods in a public GKE cluster use static IP addresses.
The issue I'm facing is that the NAT gateway seems to be used only when VMs have no other option, i.e.:
GCP forwards traffic using Cloud NAT only when there are no other matching routes or paths for the traffic.
But in the case of a public GKE cluster, VMs have ephemeral external IPs, so they don't use the gateway.
According to the doc:
If you configure an external IP on a VM's interface [...] NAT will not be performed on such packets. However, alias IP ranges assigned to the interface can still use NAT because they cannot use the external IP to reach the Internet.
And
With this configuration, you can connect directly to a GKE VM via SSH, and yet have the GKE pods/containers use Cloud NAT to reach the Internet.
That's what I want, but I fail to see what precisely to set up here.
What is meant by "alias IP ranges assigned to the interface can still use NAT", and how do I set this up?

"Unfortunately, this is not currently the case. While Cloud NAT is still in Beta, certain settings are not fully in place and thus the pods are still using SNAT even with IP aliasing. Because of the SNAT to the node's IP, the pods will not use Cloud NAT."
Indeed, as Patrick W says above, it's not currently working as documented. I tried as well, and spoke with folks on the GCP Slack group in the Kubernetes Engine channel. They also confirmed in testing that it only works with a GKE private cluster.

We haven't started playing with private clusters yet, and I can't find solid documentation on this simple question: if I create a private cluster, can I still have public K8s services (i.e., load balancers) in that cluster? All of the docs about private GKE clusters indicate you don't want any external traffic coming in, but we're running production Internet-facing services on our GKE clusters.
I filed a ticket with GCP support about the Cloud NAT issue, and here's what they said:
"I have been reviewing your configuration and the reason that Cloud NAT is not working is because your cluster is not private.
To use Cloud NAT with GKE you have to create a private cluster. In the non-private cluster the public IP addresses of the cluster are used for communication between the master and the nodes. That’s why GKE is not taking into consideration the Cloud NAT configuration you have.
Creating a private cluster will allow you to combine Cloud NAT and GKE.
I understand this is not very clear from our documentation and I have reported this to be clarified and explained exactly how it is supposed to work."
I responded asking them to please make it work as documented, rather than changing their documentation. I'm waiting for an update from them...

Using Google's Cloud NAT with public GKE clusters works!
First, a Cloud NAT gateway and router need to be set up using a reserved external IP.
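That first step can be sketched with gcloud; the resource names, region, and network below are assumptions to adapt to your environment:

```shell
# Reserve a static external IP for the NAT gateway
gcloud compute addresses create nat-ip --region=us-central1

# Create a Cloud Router in the cluster's VPC and region
gcloud compute routers create nat-router \
    --network=default --region=us-central1

# Create the NAT gateway, using the reserved IP instead of auto-allocation
gcloud compute routers nats create nat-gateway \
    --router=nat-router --region=us-central1 \
    --nat-external-ip-pool=nat-ip \
    --nat-all-subnet-ip-ranges
```

Using a reserved address (rather than `--auto-allocate-nat-external-ips`) is what gives you a stable egress IP you can hand to third parties for allow-listing.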
Once that's done, the ip-masq-agent configuration needs to be changed so that pod IPs are not masqueraded for the external IPs that are the target of requests from inside the cluster. The change is made to the nonMasqueradeCIDRs list in the ConfigMap for the ip-masq-agent.
The way this works: for every outgoing request to an IP in the nonMasqueradeCIDRs list, IP masquerading is skipped, so the request appears to originate from the pod IP rather than the node IP. This internal IP is then automatically NATed by the Cloud NAT gateway/router. The result is that the request appears to originate from the (stable) IP of the Cloud NAT router.
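A minimal sketch of that ConfigMap, assuming the default agent name and namespace (the CIDR below is a placeholder for the external range that should see the NAT IP):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      # External destinations that should see the Cloud NAT IP
      # instead of the node IP (placeholder range)
      - 203.0.113.0/24
    resyncInterval: 60s
```

Apply it with `kubectl apply -f ip-masq-agent-configmap.yaml`; the agent re-reads its config on the resync interval.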
Sources:
https://rajathithanrajasekar.medium.com/google-cloud-public-gke-clusters-egress-traffic-via-cloud-nat-for-ip-whitelisting-7fdc5656284a
https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent

The idea here is that if you use native VPC (Ip alias) for the cluster, your pods will not use SNAT when routing out of the cluster. With no SNAT, the pods will not use the node's external IP and thus should use the Cloud NAT.
Unfortunately, this is not currently the case. While Cloud NAT is still in Beta, certain settings are not fully in place and thus the pods are still using SNAT even with IP aliasing. Because of the SNAT to the node's IP, the pods will not use Cloud NAT.
That being said, why not use a private cluster? It's more secure and will work with Cloud NAT. You can't SSH directly into a node, but (a) you can create a bastion VM instance in your project and SSH from it using the internal IP flag, and (b) you generally do not need to SSH into a node on most occasions.
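The bastion approach in (a) can be sketched like this (the node name and zone are placeholders):

```shell
# From a bastion VM in the same VPC: SSH to a private node over its
# internal IP, skipping the (non-existent) external IP
gcloud compute ssh gke-node-name --zone=us-central1-a --internal-ip
```

Alternatively, `gcloud compute ssh --tunnel-through-iap` reaches private nodes without any bastion at all.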

Related

How to route all outgoing traffic of a Pod in GKE

We have a GKE Autopilot Cluster and an external Address/Cloud NAT set up. For certain Pods we want to ensure that all their outgoing traffic (layer 4) is routed through that external address.
The only possibilities I can think of are to make the whole cluster private (and thus enforce use of the Cloud NAT) or to use a service mesh solution which could perhaps intercept all packets via eBPF?
Are there other solutions to enforcing a routing to one external Address?
At the time of writing, there is no way to do this for a GKE Autopilot cluster.
But by the end of October, there will likely be an upgrade to the Egress NAT policy that will enable users to set up SNAT based on pod labels, namespaces, and even the destination IP address.
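If that upgrade follows the shape of the existing EgressNATPolicy resource, a policy disabling SNAT toward a given destination might look roughly like this. This is a hedged sketch: the API group, kind, and fields are assumptions based on the current EgressNATPolicy CRD and may differ in the Autopilot release, and the CIDR is a placeholder:

```yaml
apiVersion: networking.gke.io/v1
kind: EgressNATPolicy
metadata:
  name: no-snat-to-partner
spec:
  action: NoSNAT
  destinations:
    # Traffic to this range keeps the pod source IP,
    # so it egresses via Cloud NAT (placeholder range)
    - cidr: 203.0.113.0/24
```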

GCP: How to access a L2 VM (qemu) running in a pod in gcp by IP from internet?

I have a cluster of 2 nodes created in GCP. The worker node (an L1 VM) has nested virtualization enabled. I have created a pod on this L1 VM, and I have launched an L2 VM using QEMU inside this pod.
My objective is to access this L2 VM by IP address only, from the external world (the Internet). There are many services running in my VM (the L2 VM) and I need to access it only by IP.
I created a tunnel from the node to the L2 VM (which is inside the pod) to get a DHCP address for my VM, but it seems the DHCP offer and ack messages are blocked by Google Cloud.
I have a public IP in the cluster through which I can reach the private IP of the node. Most probably there is a NAT configured in the cloud for the node's private IP.
Can I configure the node as a NAT gateway so that I can push packets onward from the Internet to the L2 VM?
Any other suggestions are welcome!
I think you are trying to implement something like a bastion host. However, this is something you shouldn't do with Kubernetes. Although you 'can' implement it with Kubernetes, it is simply not made for it.
I see two viable options for you:
A. Create another virtual machine (GCE instance) inside the same VPC as the cluster and set it up as a bastion host or as an endpoint for a VPN.
B. You can use Identity-Aware Proxy (IAP) to tunnel the traffic to your machine inside the VPC, as described here.
IAP is probably the best solution for your use case.
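The IAP option can be sketched like this (the VM name, port, and zone are placeholders):

```shell
# Forward local port 8022 through IAP to port 22 on a VM
# that has no external IP
gcloud compute start-iap-tunnel my-vm 22 \
    --local-host-port=localhost:8022 --zone=us-central1-a

# Then, in another terminal:
#   ssh -p 8022 user@localhost
```

This requires the `roles/iap.tunnelResourceAccessor` role and a firewall rule allowing ingress from IAP's range (35.235.240.0/20).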
Also consider using simple GCE instances as opposed to a Kubernetes cluster. A Kubernetes cluster is very useful if you have more workload than one node can handle, or if you need to scale out and in, etc. Your use case looks to me more like you're still thinking in the traditional server world, and less about cattle vs. pets.

GCP traffic on pod being routed to internet via VMs IP address, not cloud NAT

I have a GKE cluster; I've set up Cloud NAT and selected All subnets' primary and secondary IP ranges for the cluster's VPC.
But when I go to www.ifconfig.me, it shows me the IP address of the VM the pod is running on.
Any idea how I get traffic to route to my cloud NAT address?
Check your node; it will have a public IP. Remove that public IP from your node to direct traffic through Cloud NAT.
To send traffic through Cloud NAT you need a private cluster, and when creating it make sure you clear the “Access master using its external IP address” checkbox; otherwise traffic will use the cluster IP, not the Cloud NAT IP.
To create the private cluster, go to the Cloud Console → click Kubernetes clusters → Create cluster → choose a location (zone) → click Networking → select Private cluster → clear the Access master using its external IP address checkbox → enter a master IP range → select the network → click Create.
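The equivalent with gcloud might look like this (cluster name, zone, and master CIDR are placeholders; `--enable-private-endpoint` corresponds to clearing the "Access master using its external IP address" checkbox and requires master authorized networks):

```shell
gcloud container clusters create private-cluster \
    --zone=us-central1-a \
    --enable-ip-alias \
    --enable-private-nodes \
    --enable-private-endpoint \
    --enable-master-authorized-networks \
    --master-ipv4-cidr=172.16.0.0/28
```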
Hope this information may help you.

Compute Engine in VPC can't connect to Internet & Cloud Storage after establishing Cloud VPN

Assuming I have a custom VPC with IP ranges 10.148.0.0/20
This custom VPC has firewall rules to allow-internal so the service inside those IP ranges can communicate to each other.
After the system grew, I needed to connect to an on-premises network using Classic Cloud VPN. I've already created the Cloud VPN (the on-premises side was configured by someone else) and the VPN tunnel is established (with green checkmarks).
I can also ping an on-premises IP right now (say 10.xxx.xxx.xxx, which is not a GCP internal/private IP but an on-premises private IP) from a Compute Engine instance created in the custom VPC network.
The problem is that all the Compute Engine instances spawned in the custom VPC network can't communicate with the Internet now (e.g., running sudo apt update) or even with Google Cloud Storage (using gsutil), though they can communicate using private IPs.
I also can't spawn a Dataproc cluster on that custom VPC (I guess because it can't connect to GCS, since Dataproc needs GCS for staging buckets).
Since I don't really know much about networking and am relatively new to GCP, how can the instances I created inside the custom VPC connect to the Internet?
After checking my custom VPC and Cloud VPN more in depth, I realized there was a misconfiguration when I established the Cloud VPN: I had chosen route-based as the routing option and entered 0.0.0.0/0 in Remote network IP ranges. I guess this routed all traffic through the VPN, as @John Hanley said.
Solved it by using policy-based routing and adding only specific IP ranges in Remote network IP ranges.
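The fix can be sketched with gcloud for a Classic (policy-based) VPN tunnel; the tunnel name, peer address, shared secret, and remote range are placeholders, while 10.148.0.0/20 is the custom VPC range from the question:

```shell
# Policy-based Classic VPN tunnel: only the on-prem range is sent
# through the tunnel, so the default Internet route stays intact
gcloud compute vpn-tunnels create on-prem-tunnel \
    --region=us-central1 \
    --peer-address=PEER_VPN_IP \
    --shared-secret=SECRET \
    --target-vpn-gateway=my-vpn-gateway \
    --local-traffic-selector=10.148.0.0/20 \
    --remote-traffic-selector=10.200.0.0/24
```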
Thank you @John Hanley and @guillaume blaquiere for pointing this out.

How to use Cloud NAT in public GKE cluster pods using the pod source IP

As per GCP documentation on Cloud NAT,
Regular (non-private) GKE clusters assign each node an external IP address, so such clusters cannot use Cloud NAT to send packets from the node's primary interface. Pods can still use Cloud NAT if they send packets with source IP addresses set to the pod IP
Question: How do I configure pods to set source IP to pod IP while sending packets to some external service?
Cloud NAT is used to permit GCE instances or GKE clusters that only have internal IP addresses to access public resources on the internet. If you want to use Cloud NAT, you will need to follow the guidelines from the public docs or you can build your own NAT gateway using a GCE Instance which does not require you to use a private cluster.
Muhammad's answer is mostly accurate and it is the supported method for GCP. Though there is one addition to address the quoted text.
GKE uses IP masquerading (SNAT) when routing traffic between nodes or outside the cluster. As long as pods send traffic to destinations within the masquerade range, SNAT occurs and the traffic uses the node's external (or internal) IP address. You'll want to disable SNAT by extending the non-masquerade range to include all IPs (0.0.0.0/0). You can do this using the ip-masq-agent which, if not present, you can install.
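A minimal sketch of that configuration, assuming the default agent name and namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    # Never masquerade: every destination sees the pod source IP,
    # so egress from pods goes through Cloud NAT
    nonMasqueradeCIDRs:
      - 0.0.0.0/0
    masqLinkLocal: false
    resyncInterval: 60s
```

Note that with 0.0.0.0/0, node-sourced traffic (e.g., from hostNetwork pods) still uses the node's external IP; only pod-IP-sourced packets pick up the Cloud NAT address.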