My co-workers are launching GKE clusters and managing them from a pair of centralized VMs. The VMs are in us-east4.
When they launch GKE clusters in the same region (us-east4), all is well. They can access both the worker nodes and also the GKE Master addresses via the peering connection. However, they could not access the master nodes of a GKE cluster built in europe-west3. I built a VM in that region, and was successfully able to connect to port 443 of the master node IPs. Global routing is enabled for the VPC network and inter-region access of VMs and other services is no problem.
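(For reference, the check from the europe-west3 VM was just a plain TCP test along these lines, with the IP as a placeholder:)
nc -vz MASTER_PRIVATE_IP 443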
It seems very clear that GKE master nodes can only be accessed from the same region. But is this documented somewhere? I did open a support case on Monday, but I'm having little luck getting any reasonable information back.
It seems like this is expected behavior. From what I have reviewed here, I understood the following about it, although you are right that it is not stated explicitly:
The private IP address of the master in a regional cluster is only reachable from subnetworks in the same region, or from on-premises devices connected to the same region.
Based on this, I would recommend setting up a proxy in the same region as your GKE master, so that requests coming from a different region appear to originate from the reachable region.
Please review this example of how to reach your master from a cluster in another region.
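If it helps, here is a minimal sketch of that idea, assuming a small proxy VM in europe-west3 and placeholder names throughout:
gcloud compute instances create master-proxy --zone=europe-west3-b --subnet=EUROPE_WEST3_SUBNET --machine-type=e2-small
# from a management VM in us-east4, forward a local port through the proxy to the master endpoint
ssh -L 8443:MASTER_PRIVATE_IP:443 USER@MASTER_PROXY_IP
# then point kubectl at https://localhost:8443 (TLS verification may need adjusting, or use an SNI-aware proxy such as nginx in stream mode instead of a plain tunnel)
The linked example is the more complete approach; the SSH tunnel above is just a quick way to test the idea.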
I've created:
1 GKE Autopilot cluster
1 Classic VPN tunnel to the on-premise network
Network Connectivity Tests suggest that my VPN is working (I suppose):
[screenshot: Result of Connectivity Test]
When trying to traceroute from a pod, however:
[screenshot: traceroute from the pod]
The pod IP is 10.4.0.85.
I have a host project (network) which also contains the VPN tunnel and routing. This network is shared with another project, which contains the GKE Autopilot cluster.
The VPN tunnel shows as working from both ends. The GKE nodes are pingable from the on-premise network.
Since it is an Autopilot cluster, I cannot confirm whether the connection from the nodes to on-premise is working.
I expected my traceroute to show a successful connection to the on-premise IP, or at least to the VPN endpoint of the on-premise network.
It only shows one hop, to 10.4.0.65 (this is within the CIDR of my GKE cluster, but I do not know what it belongs to).
I've taken a look at IP masquerading as described here, without success.
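For reference, this is roughly how I inspected the masquerade configuration (assuming the standard ip-masq-agent ConfigMap/DaemonSet names in kube-system):
kubectl -n kube-system get daemonset ip-masq-agent
kubectl -n kube-system get configmap ip-masq-agent -o yaml
# as I understand it, destinations listed under nonMasqueradeCIDRs keep the pod IP (so the VPN and the on-premise side must route the pod CIDR back), while everything else is SNAT'ed to the node IP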
And now I am lost. I suppose my packets (from traceroute) are not even leaving the GKE cluster, but I cannot tell why.
I'd be grateful for a hint in the right direction.
I have a cluster of 2 nodes created in GCP. The worker node (an L1 VM) has nested virtualization enabled. I have created a pod on this L1 VM, and I have launched an L2 VM using QEMU inside this pod.
My objective is to access this L2 VM from the external world (the internet) by IP address only. There are many services running in the L2 VM and I need to reach it purely by IP.
I created a tunnel from the node to the L2 VM (which is inside the pod) to get a DHCP address to the VM, but it seems the DHCP offer and ACK messages are blocked by Google Cloud.
I have a public IP in the cluster through which I can reach the private IP of the node; most probably there is a NAT configured in the cloud for the node's private IP.
Can I configure the node as a NAT gateway so that I can push these packets onward, from the internet to the L2 VM?
Any other suggestions are welcome!
I think you are trying to implement something like a bastion host. However, this is something that you shouldn't do with Kubernetes. Although you 'can' implement it with Kubernetes, it is simply not made for this.
I see two viable options for you:
A. Create another virtual machine (GCE instance) inside the same VPC as the cluster and set it up as a bastion host or as an endpoint for a VPN.
B. You can use Identity-Aware Proxy (IAP) to tunnel the traffic to your machine inside the VPC, as described here.
IAP is probably the best solution for your use case.
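For option B, a rough sketch with placeholder instance and zone names:
# open an IAP tunnel from your workstation to a port on the VM inside the VPC
gcloud compute start-iap-tunnel my-instance 22 --local-host-port=localhost:2222 --zone=us-central1-a
# or simply SSH through IAP
gcloud compute ssh my-instance --zone=us-central1-a --tunnel-through-iap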
Also consider using plain GCE instances instead of a Kubernetes cluster. A Kubernetes cluster is very useful if you have more workload than a single node can handle, or if you need to scale out and in, etc. Your use case looks to me like you are still thinking in the traditional server world, in terms of pets rather than cattle.
My company gave us a very small subnet (around 30 IP addresses). By default in Kubernetes every node gets a ton of IPs assigned to it, and every pod gets an IP, so having only 30 IPs to draw from isn't nearly enough to run a Kubernetes cluster. I need hundreds, specifically around 400 or more, to stand up this cluster. I have never used EKS, and that is what we will be using. After some research I saw that AKS in Azure can do virtual networks with kubenet (so you can have all the IPs you need), meaning Kubernetes can still function even with a small subnet. This doc explains it pretty well from the Azure side: https://learn.microsoft.com/en-us/azure/aks/configure-kubenet.
I am still digging into whether EKS supports kubenet, and haven't found anything yet. I would appreciate any feedback on a virtual network setup or plugin I can use in EKS to get more IP space.
You are having trouble finding information on EKS utilizing kubenet because kubenet has a significant limitation in AWS that is not present in Azure. With kubenet, EKS is limited to a theoretical maximum of 50 cluster nodes, because kubenet relies on the VPC routing table for access to other EKS nodes. Since AWS limits the number of routes in a VPC routing table to 50, the number of EKS nodes is theoretically capped at 50 (less if you count the default route for internet traffic or routes to other EC2 instances).
In AKS (Azure) the UDR limit for a subnet is 250 (254 - 4 reserved by Azure).
EKS deploys with the VPC CNI plugin enabled by default. You can get around the small-subnet limitation by enabling CNI custom networking, which gives you the same kind of IP masquerade you get with kubenet (i.e. a separate internal Kubernetes network for pods that is NAT'd to the EKS EC2 node's interface).
Here is a good description:
https://medium.com/elotl-blog/kubernetes-networking-on-aws-part-i-99012e938a40
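If you want to try it, the rough shape of enabling custom networking on the VPC CNI looks like this (the subnet, security group and zone values are placeholders; check the current AWS docs for the exact procedure):
# tell the aws-node (VPC CNI) daemonset to use ENIConfig-based custom networking
kubectl set env daemonset/aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
kubectl set env daemonset/aws-node -n kube-system ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone
# create one ENIConfig per availability zone, pointing pod ENIs at the larger secondary subnet
cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-1a
spec:
  subnet: subnet-0123456789abcdef0
  securityGroups:
    - sg-0123456789abcdef0
EOF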
Background: I have a Kubernetes cluster set up in one AWS account that needs to access data in an RDS MySQL instance in a different account, and I can't seem to get the settings right to allow traffic to flow.
What I've tried so far:
Set up a peering connection between the two VPCs. They are in the same region, us-east-1.
Created route table entries in each account to point traffic for the corresponding subnets at the peering connection.
Created a security group in the RDS VPC to allow traffic from the Kubernetes subnets to access MySQL.
Made sure DNS resolution is enabled on both VPCs (rough CLI equivalents of these steps are sketched below).
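Roughly the AWS CLI equivalents of those steps (all IDs and CIDRs here are placeholders, not the exact values I used):
aws ec2 create-vpc-peering-connection --vpc-id vpc-K8S --peer-vpc-id vpc-RDS --peer-owner-id RDS_ACCOUNT_ID
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-XXXX
aws ec2 create-route --route-table-id rtb-K8S --destination-cidr-block RDS_VPC_CIDR --vpc-peering-connection-id pcx-XXXX
aws ec2 create-route --route-table-id rtb-RDS --destination-cidr-block K8S_VPC_CIDR --vpc-peering-connection-id pcx-XXXX
aws ec2 authorize-security-group-ingress --group-id sg-RDS --protocol tcp --port 3306 --cidr K8S_VPC_CIDR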
Kubernetes VPC details (Requester)
This contains 3 EC2 instances (it looks like each has its own subnet) that house my Kubernetes cluster. I used EKS to set this up.
The route table rules I set up have the 3 subnets associated, and point the RDS VPC CIDR block at the peering connection.
RDS VPC details (Accepter)
This VPC contains the MySQL RDS instance, as well as some other resources. The RDS instance has quite a few VPC security groups assigned to it for access from our office IPs, etc. It has Public Accessibility set to true.
I repeated the route table setup (in reverse), pointing the K8s VPC CIDR at the peering connection.
Testing
To test the connection, I've tried two different ways. The application that needs to access MySQL is written in Node, so I just wrote a test connector with an example query, and it times out.
I also tried netcat from a terminal in the pod running in the kubernetes cluster.
nc -v {{myclustername}}.us-east-1.rds.amazonaws.com 3306
This also times out. It seems to be resolving to the correct MySQL instance IP though, so I'm not sure whether that means my routing rules are working right from the K8s VPC side.
DNS fwd/rev mismatch: ec2-XXX.compute-1.amazonaws.com != ip-{{IP OF MY MYSQL}}.ec2.internal
I'm not sure what steps to take next. Any direction would be greatly appreciated.
Side note: I've read through this: Kubernetes container connection to RDS instance in separate VPC.
I think I understand what's going on there. My CIDR blocks do not conflict with the default K8s IPs (10.0...), so my problem seems to be different.
I know this was asked a long time ago, but I just ran into this problem as well.
It turns out I was editing the wrong AWS routing table! When I ran kops to create my cluster, it created a new VPC with its own routing table, in addition to the VPC's Main routing table. I needed to add the peering connection route to the cluster's routing table instead of the Main routing table.
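In case it helps, this is roughly how I found and fixed it (IDs are placeholders):
# list every route table attached to the cluster's VPC, not just the Main one
aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-CLUSTER
# add the peering route to the table the cluster subnets are actually associated with
aws ec2 create-route --route-table-id rtb-CLUSTER --destination-cidr-block RDS_VPC_CIDR --vpc-peering-connection-id pcx-XXXX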
The default in-use IP address quota is only 8, but I would like to create a Dataproc cluster with more than 8 nodes. I tried to request a quota increase but was rejected xD
I tried creating a cluster on my VPC network (with internal IP addresses only) as described in https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/network#create_a_cloud_dataproc_cluster_with_internal_ip_address_only
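(The create command was roughly the following; the cluster, region and subnet names are placeholders:)
gcloud dataproc clusters create my-cluster --region=us-central1 --subnet=my-subnet --no-address --num-workers=16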
But the problem is that the master node cannot be SSHed into. Is there a way to create a cluster with more nodes than the quota allows, while still being able to SSH into the master node?
Much appreciated! :)
If you're looking for SSH access to a VM with private IPs then there are a few options detailed here.
One option is to create what it calls a "bastion" or "jump" host: a VM that's connected both to the private network and to a network that's reachable from your computer. You SSH into the bastion and then from there to the private machine.
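A minimal sketch of that pattern, with placeholder names (the Dataproc master node is usually named CLUSTER_NAME-m, but check yours):
# jump through the bastion's external IP to the private master in one step
ssh -J USER@BASTION_EXTERNAL_IP USER@MASTER_INTERNAL_IP
# or, from the bastion itself, use gcloud with the internal IP
gcloud compute ssh my-cluster-m --internal-ip --zone=us-central1-a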
You can also use VPN as described here.