Per the GCP documentation on Cloud NAT:
Regular (non-private) GKE clusters assign each node an external IP address, so such clusters cannot use Cloud NAT to send packets from the node's primary interface. Pods can still use Cloud NAT if they send packets with source IP addresses set to the pod IP.
Question: How do I configure pods to set the source IP to the pod IP when sending packets to an external service?
Cloud NAT is used to let GCE instances or GKE clusters that only have internal IP addresses access public resources on the internet. If you want to use Cloud NAT, you will need to follow the guidelines from the public docs, or you can build your own NAT gateway using a GCE instance, which does not require a private cluster.
Muhammad's answer is mostly accurate and describes the supported method on GCP, though one addition is needed to address the quoted text.
GKE uses IP masquerading (SNAT) when routing traffic between nodes or out of the cluster. As long as pods send traffic to destinations outside the non-masquerade range, SNAT occurs and the packets use the node's external (or internal) IP address. You'll want to disable SNAT by extending the non-masquerade range to include all IPs (0.0.0.0/0). You can do this with the ip-masq-agent, which you can install if it is not already present.
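As a minimal sketch (assuming the default ip-masq-agent ConfigMap name and the kube-system namespace, and that you want to stop masquerading for all destinations), the configuration could look like this:

    # ip-masq-agent-config.yaml -- hypothetical file name
    # With 0.0.0.0/0 in the non-masquerade list, SNAT is skipped for all
    # destinations, so packets leave the node carrying the pod IP.
    nonMasqueradeCIDRs:
      - 0.0.0.0/0
    resyncInterval: 60s

    # Create or update the ConfigMap the agent reads (key must be "config"):
    kubectl create configmap ip-masq-agent --namespace kube-system \
        --from-file=config=ip-masq-agent-config.yaml \
        --dry-run=client -o yaml | kubectl apply -f -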
Related
We have a GKE Autopilot cluster and an external address/Cloud NAT set up. For certain pods we want to ensure that all their outgoing traffic (layer 4) is routed through that external address.
The only possibilities I can think of are to make the whole cluster private (and thus enforce use of the Cloud NAT) or to use a service mesh solution which could perhaps intercept all packets via eBPF?
Are there other solutions for enforcing routing through one external address?
For the time being, there is no way to do this for a GKE Autopilot cluster.
But by the end of October there will likely be an update to the Egress NAT policy that will enable users to set up SNAT based on pod labels, namespaces, and even the destination IP address.
We need to add only the external IP of the bastion host to the "authorized networks" to access the control plane of a GKE private cluster. It does not work if we add the internal IP of the VM in the same VPC that serves as the bastion. Is there any specific reason for this?
We add IP ranges to this field
The connection works when you add the external IP of the VM to the master authorized networks, but not when you use the internal IP. That indicates the GKE cluster and the bastion VM are in different networks which are not connected to one another. I need to know the private, internal network path between the bastion VM and the GKE cluster (if there is one) in order to investigate the connectivity further and help you fix this issue.
The general guidance is that you should have an architecture like the one below:
GKE control plane VPC --VPC Peering--> GKE cluster VPC --Cloud Interconnect / VPN--> Network where the bastion VM is
To the best of my understanding, there should be no limitation regarding adding external or internal ranges to a cluster's authorized networks, as long as there is a way to connect to these addresses (connectivity with VPN/Interconnect for internal ranges and Cloud NAT for external ranges).
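For illustration, authorized networks can be managed with gcloud, and internal ranges are supplied the same way as external ones (the cluster name, zone, and CIDRs below are placeholders):

    gcloud container clusters update my-private-cluster \
        --zone us-central1-a \
        --enable-master-authorized-networks \
        --master-authorized-networks 203.0.113.5/32,10.128.0.0/20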
I have a GKE cluster. I've set up Cloud NAT and selected "All subnets' primary and secondary IP ranges" for the cluster's VPC.
But when I go to www.ifconfig.me, it shows me the IP address of the VM the pod is running on.
Any idea how I get traffic to route to my cloud NAT address?
Check your node; it should have a public IP. Remove that public IP from your node to direct traffic through Cloud NAT.
By following this document you can send traffic through Cloud NAT. When creating a private cluster to use with Cloud NAT, make sure you clear the "Access master using its external IP address" checkbox; otherwise traffic will use the cluster IP rather than the Cloud NAT IP.
To create the private cluster, go to the Cloud Console → click Kubernetes clusters → Create cluster → choose a location (zone) → click Networking → select Private cluster → clear the "Access master using its external IP address" checkbox → enter a master IP range → select the network → click Create.
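If you prefer the CLI, a roughly equivalent command might look like the following sketch (the cluster name, zone, subnet, and ranges are placeholders; --enable-private-endpoint corresponds to clearing the "Access master using its external IP address" checkbox):

    gcloud container clusters create my-private-cluster \
        --zone us-central1-a \
        --enable-ip-alias \
        --create-subnetwork name=my-private-subnet \
        --enable-private-nodes \
        --enable-private-endpoint \
        --master-ipv4-cidr 172.16.0.32/28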
Hope this information may help you.
I'd like to set up a NAT gateway, using Cloud NAT, so that VMs/pods in a public GKE cluster use static IP addresses.
The issue I'm facing is that the NAT gateway seems to only be used if VMs have no other options, i.e.:
GCP forwards traffic using Cloud NAT only when there are no other matching routes or paths for the traffic.
But in the case of a public GKE cluster, VMs have ephemeral external IPs, so they don't use the gateway.
According to the doc:
If you configure an external IP on a VM's interface [...] NAT will not be performed on such packets. However, alias IP ranges assigned to the interface can still use NAT because they cannot use the external IP to reach the Internet.
And
With this configuration, you can connect directly to a GKE VM via SSH, and yet have the GKE pods/containers use Cloud NAT to reach the Internet.
That's what I want, but I fail to see what precisely to setup here.
What is implied by "alias IP ranges assigned to the interface can still use NAT", and how do I set this up?
"Unfortunately, this is not currently the case. While Cloud NAT is still in Beta, certain settings are not fully in place and thus the pods are still using SNAT even with IP aliasing. Because of the SNAT to the node's IP, the pods will not use Cloud NAT."
Indeed, as Patrick W says above, it's not currently working as documented. I tried as well, and spoke with folks on the GCP Slack group in the Kubernetes Engine channel. They also confirmed in testing that it only works with a GKE private cluster. We haven't started playing with private clusters yet. I can't find solid documentation on this simple question: If I create a private cluster, can I still have public K8S services (aka load balancers) in that cluster? All of the docs about private GKE clusters indicate you don't want any external traffic coming in, but we're running production Internet-facing services on our GKE clusters.
I filed a ticket with GCP support about the Cloud NAT issue, and here's what they said:
"I have been reviewing your configuration and the reason that Cloud NAT is not working is because your cluster is not private.
To use Cloud NAT with GKE you have to create a private cluster. In the non-private cluster the public IP addresses of the cluster are used for communication between the master and the nodes. That’s why GKE is not taking into consideration the Cloud NAT configuration you have.
Creating a private cluster will allow you to combine Cloud NAT and GKE.
I understand this is not very clear from our documentation and I have reported this to be clarified and explained exactly how it is supposed to work."
I responded asking them to please make it work as documented, rather than changing their documentation. I'm waiting for an update from them...
Using Google's Cloud NAT with public GKE clusters works!
First, a Cloud NAT gateway and router need to be set up using a reserved external IP.
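For example (the address name, router name, region, and network below are placeholders):

    # Reserve a static external IP to use for egress
    gcloud compute addresses create nat-egress-ip --region us-central1

    # Create a Cloud Router and a Cloud NAT gateway that uses the reserved IP
    gcloud compute routers create nat-router --network my-vpc --region us-central1
    gcloud compute routers nats create nat-gateway \
        --router nat-router \
        --region us-central1 \
        --nat-external-ip-pool nat-egress-ip \
        --nat-all-subnet-ip-ranges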
Once that's done, the ip-masq-agent configuration needs to be changed so that pod IPs are not masqueraded for the external IPs that are the targets of requests from inside the cluster. Changing the configuration is done in the nonMasqueradeCIDRs list in the ConfigMap for the ip-masq-agent.
The way this works is that IP masquerading is skipped for every outgoing request to an IP in the nonMasqueradeCIDRs list, so the requests appear to originate from the pod IP rather than the node IP. This internal IP is then automatically NATed by the Cloud NAT gateway/router. The result is that the request appears to originate from the (stable) IP of the Cloud NAT gateway.
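One way to verify this (the test pod name and image are arbitrary choices) is to compare the egress IP seen by an external echo service with the reserved Cloud NAT IP:

    # Should print the reserved Cloud NAT IP once the ConfigMap change has propagated
    kubectl run nat-test --rm -i --restart=Never --image=curlimages/curl \
        --command -- curl -s https://ifconfig.me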
Sources:
https://rajathithanrajasekar.medium.com/google-cloud-public-gke-clusters-egress-traffic-via-cloud-nat-for-ip-whitelisting-7fdc5656284a
https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent
The idea here is that if you use a VPC-native cluster (IP aliases), your pods will not use SNAT when routing out of the cluster. With no SNAT, the pods will not use the node's external IP and thus should use Cloud NAT.
Unfortunately, this is not currently the case. While Cloud NAT is still in Beta, certain settings are not fully in place and thus the pods are still using SNAT even with IP aliasing. Because of the SNAT to the node's IP, the pods will not use Cloud NAT.
This being said, why not use a private cluster? It's more secure and will work with Cloud NAT. You can't SSH directly into a node, but A) you can create a bastion VM instance in your project and SSH using the internal IP flag, and B) you generally do not need to SSH into the nodes on most occasions.
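As a rough example of the internal IP flag (the node name and zone are hypothetical), once you have connectivity to the node's internal address, for instance from the bastion itself or over a VPN/Interconnect:

    # --internal-ip connects over the VPC instead of an external address
    gcloud compute ssh gke-my-cluster-node-abc123 \
        --zone us-central1-a \
        --internal-ip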
We want to be able to connect to my on-premises database from our Google Cloud Kubernetes cluster.
We are currently attempting to do so by using "Create a VPN connection" from within the Google Cloud Console.
In the IP address field, I am forced to create (or pick from existing) "External IP Addresses".
I am able to link a single VM instance to this external IP address. But I want my VPN connection/tunnel to be between my on-premises network and EVERYTHING within my Google Cloud network.
This IP should not just serve as the external IP address of a single instance. I need to make it a gateway to the network as a whole. What am I missing?
Thanks in advance.
Another way to frame the question:
How do I find the IP Address of the gateway to my Google cloud network (VPC) and how do I supply that IP to the VPN Connection creation ?
Cloud VPN connects your on-premises network to the VPC; that means every instance, cluster, or other product that uses Google Compute Engine (GCE).
As mentioned in a previous answer from avinoam-meir, the VPN has at least two components, gateway and tunnel, but I will add a third one: type of routing.
a) Gateway: This is where you can add an existing static IP address or reserve a new one (from the Google pool of external IP addresses).
b) Tunnel: Where the encapsulated and encrypted traffic will flow to reach the local IP ranges.
c) Type of routing: Cloud VPN has three possibilities:
Tunnel using Dynamic Routing
Route Based VPN
Policy based VPN
Depending on the type you choose, the routing happens in a different way but in general terms, it will propagate your subnetwork(s) to your on-premises network and receive the routes from it.
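As a rough sketch of what a policy-based Classic VPN setup looks like with gcloud (every name, the peer address, the shared secret, and the CIDRs below are placeholders; a route-based or dynamic-routing setup would differ):

    # Reserve the gateway's external IP and create the Classic VPN gateway
    gcloud compute addresses create vpn-gateway-ip --region us-central1
    gcloud compute target-vpn-gateways create onprem-gateway \
        --network my-vpc --region us-central1

    # Forwarding rules for IPsec traffic (ESP, UDP 500, UDP 4500)
    gcloud compute forwarding-rules create fr-esp --region us-central1 \
        --ip-protocol ESP --address vpn-gateway-ip --target-vpn-gateway onprem-gateway
    gcloud compute forwarding-rules create fr-udp500 --region us-central1 \
        --ip-protocol UDP --ports 500 --address vpn-gateway-ip --target-vpn-gateway onprem-gateway
    gcloud compute forwarding-rules create fr-udp4500 --region us-central1 \
        --ip-protocol UDP --ports 4500 --address vpn-gateway-ip --target-vpn-gateway onprem-gateway

    # The tunnel itself, plus a route so the VPC knows how to reach the on-prem range
    gcloud compute vpn-tunnels create onprem-tunnel --region us-central1 \
        --peer-address 203.0.113.10 --shared-secret MY_SHARED_SECRET --ike-version 2 \
        --target-vpn-gateway onprem-gateway \
        --local-traffic-selector 10.128.0.0/20 --remote-traffic-selector 192.168.0.0/16
    gcloud compute routes create route-to-onprem --network my-vpc \
        --destination-range 192.168.0.0/16 \
        --next-hop-vpn-tunnel onprem-tunnel --next-hop-vpn-tunnel-region us-central1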
Important: Remember to open your firewall on your GCP VPC to receive traffic from your on-premises IP ranges, as the default and implied rule for ingress will block it (an example rule is sketched after the implied-rule descriptions below).
The implied allow egress rule: An egress rule whose action is allow, destination is 0.0.0.0/0, and priority is the lowest possible (65535) lets any instance send traffic to any destination.
The implied deny ingress rule: An ingress rule whose action is deny, source is 0.0.0.0/0, and priority is the lowest possible (65535) protects all instances by blocking incoming traffic to them.
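To override that implied deny for traffic arriving from on-premises, a rule of roughly this shape can be added (the rule name, network, protocols, and source range are placeholders):

    gcloud compute firewall-rules create allow-from-onprem \
        --network my-vpc \
        --direction INGRESS \
        --action ALLOW \
        --rules tcp,udp,icmp \
        --source-ranges 192.168.0.0/16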
The answer was simpler than I thought.
My question was:
How do I find the IP Address of the gateway to my Google cloud network
(VPC) and how do I supply that IP to the VPN Connection creation ?
The answer is simply to fill out the "Create a VPN connection" page. It automatically sets up whatever IP you get/choose in the "IP Address" field as the gateway. I did NOT need to configure this IP address to work as a gateway. Simply getting it assigned in this step is enough. Google does the rest behind the scenes.
You need to distinguish between the gateway IP address and the local IP range of the VPN tunnel.
The gateway IP address is the IP of the gateway where all the packets from your on-premises arrive encapsulated and encrypted.
The local IP range of the VPN tunnel is the range of IPs that can be reached through the VPN tunnel. By default, this is all the private IP addresses of your GCP network.
Create a NAT gateway [1] with Kubernetes Engine and Compute Engine Network Routes to route outbound traffic from an existing GKE cluster through the NAT Gateway instance.
Use that NAT gateway IP address to create a VPN connection to the remote peer gateway.
[1] https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine
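As an illustrative sketch based on that guide (the instance name, zone, network, node tag, and priority are placeholders, and the NAT VM still needs iptables masquerading configured, for example via a startup script):

    # A small VM that is allowed to forward packets for other instances
    gcloud compute instances create nat-gateway-vm \
        --zone us-central1-a \
        --can-ip-forward \
        --tags nat-gateway

    # Send default-route traffic from tagged GKE nodes through the NAT VM
    gcloud compute routes create gke-egress-via-nat \
        --network my-vpc \
        --destination-range 0.0.0.0/0 \
        --next-hop-instance nat-gateway-vm \
        --next-hop-instance-zone us-central1-a \
        --tags gke-my-cluster-node \
        --priority 800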