I have a private cluster with a pod running which prints a file to a network printer. The printer is accessible from VMs outside the cluster, but it is not accessible from the pod; I am unable to ping its IP either. Do we need some additional configuration in GKE to access the printer, or any other external application?
Here are the common things to check and do when you can't access services outside of your GKE cluster:
Check if you can access the service from the GKE VM directly by SSHing into the VM and then doing ping/curl to verify connectivity
Verify that the ip-masq-agent is running on the GKE cluster and that it's configured correctly
Configure Cloud NAT for the network when you're using Private Clusters so you can still access internet resources
See more details for each of these steps below:
Verifying connectivity from the GKE node itself
SSH into the node and run ping / curl by running:
gcloud compute ssh my-gke-node-name --tunnel-through-iap
curl https://facebook.com
ping my-network-printer
Verify ip-masq-agent configuration
Check if the ip-masq-agent DaemonSet is running:
kubectl get ds ip-masq-agent -n kube-system
Verify that the ip-masq-agent configuration is set to ensure all RFC1918 addresses get masqueraded except the GKE node CIDR and pod CIDR:
kubectl describe configmaps/ip-masq-agent -n kube-system
Note that the default ip-masq-agent configuration usually includes too many RFC1918 ranges in the nonMasqueradeCIDRs setting. You need to ensure your external network printer's address isn't covered by any of the nonMasqueradeCIDRs ranges.
If it is covered, or no CIDRs are set at all, set nonMasqueradeCIDRs to include only the GKE node CIDR and the GKE pod CIDR of your cluster. You can edit the ConfigMap by running:
kubectl edit configmap ip-masq-agent --namespace=kube-system
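For reference, a minimal sketch of what that ConfigMap could look like, assuming a node CIDR of 10.128.0.0/20 and a pod CIDR of 10.4.0.0/14 (placeholders, substitute your cluster's actual ranges):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
    - 10.128.0.0/20   # GKE node CIDR (placeholder)
    - 10.4.0.0/14     # GKE pod CIDR (placeholder)
    masqLinkLocal: false
    resyncInterval: 60s
With this in place, pod traffic to any destination outside those two ranges, including the printer's RFC1918 address, is SNAT'd to the node IP, which the rest of your network already knows how to reach.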
More docs on GKE ip-masq-agent here: https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent#edit-ip-masq-agent-configmap
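Setting up Cloud NAT for a private cluster
This last step only matters if the pods also need to reach the public internet (a printer on an RFC1918 address is reached over normal VPC routing, not NAT). A rough sketch with placeholder names:
# Create a Cloud Router in the cluster's VPC and region (names are placeholders)
gcloud compute routers create my-nat-router --network <vpc-name> --region <cluster-region>
# Add a NAT configuration covering all subnet ranges, with auto-allocated external IPs
gcloud compute routers nats create my-nat-config --router my-nat-router --region <cluster-region> --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges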
Related
In this question, the author says that the GKE cluster is not available from other subnets in the VPC.
But that is exactly what I need to do. I've added details below; all suggestions welcome.
I created a VPC in Google Cloud with custom subnets. I have a subnet in us-east1 and another in us-east4. Then, I created a VPC-native private GKE cluster in the same VPC in the us-east4 subnet.
[added details]
GKE cluster in us-east4:
  endpoint: 10.181.15.2
  control plane range: 10.181.15.0/28
  pod address range: 10.16.0.0/16
  service address range: 10.17.0.0/22
VPC subnet in us-east4: 10.181.11.0/24
VPC subnet in us-east1: 10.171.1.0/24
I added 10.171.1.0/24 as a Control Plane authorized network, and I added 10.171.1.0/24 to the automatically created firewall rule.
But I still can't use kubectl from the instance in the 10.171.1.0/24 subnet.
What I see when trying to use kubectl from a VM in us-east4 (10.181.11.7):
On this VM, I set the context with kubectl config use-context <correct gke context> and I have gcloud configured correctly. Then,
kubectl get pods correctly gives a list of pods in the GKE cluster.
From a VM in the us-east1 10.171.1.0/24 subnet, which is set up in the same way, kubectl get pods times out because it's unable to reach the endpoint. The message is:
kubectl get pods
Unable to connect to the server: dial tcp 10.181.15.2:443: i/o timeout
This seems like a firewall problem, but I've been unable to find a solution despite the abundance of GKE documentation out there. It could be a routing problem, but I thought a VPC-native GKE cluster would take care of the routes automatically?
By default, the private endpoint for the control plane is accessible from clients in the same region as the cluster. If you want clients in the same VPC but located in different regions to access the control plane, you'll need to enable global access using the --enable-master-global-access option. You can do this when you create the cluster or you can update an existing cluster.
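A minimal sketch of the update variant for this cluster (the cluster name is a placeholder; the same flag can be passed to gcloud container clusters create):
# Enable control-plane global access on an existing private cluster
gcloud container clusters update my-private-cluster --region us-east4 --enable-master-global-access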
My question here may be simple, but I'm missing something.
I have a GKE cluster and I want to connect it to an on-premise database.
GKE - VPC-native cluster
GKE version - 1.17.15-gke.800
Pod IP range - 10.0.0.0/14
Services IP range - 10.4.0.0/20
I have a Cloud VPN working (policy-based connection), and I have connectivity from Google's network to the on-premise network. I've tested it from a test instance and from the GKE cluster's nodes.
The only place I don't have connectivity from is the pods.
What am I missing here?
I managed to find the right answer:
Egress traffic from GKE Pod through VPN
Got it from there: I needed to enable Network Policy for the master and the nodes, then use the ip-masq-agent config to create a ConfigMap, and then delete the ip-masq-agent pods. When they come back up with the new config, everything works fine.
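For reference, a rough sketch of those last two steps (the CIDRs below are the pod and services ranges from the question; the pod label may differ on your cluster, so check with kubectl get pods -n kube-system first):
# Write the agent config: destinations in these ranges keep the pod IP,
# everything else (e.g. the on-premise network) is masqueraded to the node IP,
# which the VPN knows how to route back to.
cat <<EOF > config
nonMasqueradeCIDRs:
  - 10.0.0.0/14    # pod range
  - 10.4.0.0/20    # services range
masqLinkLocal: false
resyncInterval: 60s
EOF
# Create the ConfigMap the agent reads
kubectl create configmap ip-masq-agent --namespace kube-system --from-file config
# Delete the agent pods so they restart with the new config
kubectl delete pods -n kube-system -l k8s-app=ip-masq-agent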
So I've been struggling with the fact that I'm unable to expose any deployment in my EKS cluster.
I got down to this:
My LoadBalancer service public IP never responds
Went to the load balancer section in my AWS console
The load balancer is not working because my cluster node is not passing the health checks
SSH'd into my cluster node and found out that containers do not have ports associated with them.
This makes the cluster node fail the health checks, so no traffic is forwarded that way.
I tried running a simple nginx container manually, without kubectl, directly on my cluster node:
docker run -p 80:80 nginx
and pasting the node public IP in my browser. No luck.
Then I tried curling the nginx container directly from the cluster node via SSH:
curl localhost
And I'm getting this response: "curl: (7) Failed to connect to localhost port 80: Connection refused"
Why are containers in the cluster node not showing ports?
How can I make the cluster node pass the load balancer health checks?
Could it have something to do with the fact that I created a single node cluster with eksctl?
What other options do I have to easily run a kubernetes cluster in AWS?
This is somewhere in the middle between an answer and a question, but I hope it will help you.
I've been using the Deploying a Kubernetes Cluster with Amazon EKS guide for years when it comes to creating EKS clusters.
For test purposes, I just spun up a new cluster and it works as expected, including accessing a test application through the external LB IP and passing health checks...
In short, you need to:
1. Create an EKS role
2. Create a VPC to use in EKS
3. Create a stack (CloudFormation) from https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/amazon-eks-vpc-sample.yaml
4. Export variables to simplify further CLI command usage:
export EKS_CLUSTER_REGION=
export EKS_CLUSTER_NAME=
export EKS_ROLE_ARN=
export EKS_SUBNETS_ID=
export EKS_SECURITY_GROUP_ID=
5. Create the cluster, verify its creation, and generate the appropriate kubeconfig:
#Create
aws eks --region ${EKS_CLUSTER_REGION} create-cluster --name ${EKS_CLUSTER_NAME} --role-arn ${EKS_ROLE_ARN} --resources-vpc-config subnetIds=${EKS_SUBNETS_ID},securityGroupIds=${EKS_SECURITY_GROUP_ID}
#Verify
watch aws eks --region ${EKS_CLUSTER_REGION} describe-cluster --name ${EKS_CLUSTER_NAME} --query cluster.status
#Create .kube/config entry
aws eks --region ${EKS_CLUSTER_REGION} update-kubeconfig --name ${EKS_CLUSTER_NAME}
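Once worker nodes have joined the cluster, a quick way to sanity-check the load balancer path is something like the following (the deployment and service names are just examples I'm making up here):
# Deploy a test nginx and expose it through an AWS load balancer
kubectl create deployment nginx-test --image nginx
kubectl expose deployment nginx-test --type LoadBalancer --port 80
# Wait for an EXTERNAL-IP / hostname to appear, then curl it from outside
kubectl get svc nginx-test -w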
Can you please check the article and confirm you haven't missed any steps during installation?
I'm using a managed AWS EKS Kubernetes cluster. For the worker nodes, I have set up a node group within the EKS cluster with 2 worker nodes.
These worker nodes get a public IP assigned automatically by EKS:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-0-129.eu-central-1.compute.internal Ready <none> 6d v1.14.7-eks-1861c5 10.0.0.129 1.2.3.4 Amazon Linux 2 4.14.146-119.123.amzn2.x86_64 docker://18.6.1
ip-10-0-1-218.eu-central-1.compute.internal Ready <none> 6d v1.14.7-eks-1861c5 10.0.1.218 5.6.7.8 Amazon Linux 2 4.14.146-119.123.amzn2.x86_64 docker://18.6.1
For this example let's assume that the values assigned automatically by AWS are 1.2.3.4 and 5.6.7.8.
When running a command from inside a pod running on the first node I can also see that this is the IP address with which external requests are being made:
$ curl 'https://api.ipify.org'
1.2.3.4
The issue that I'm facing now is that I would like to configure this IP address. Let's assume there is a service I use from within the pod that I'm not in control of and that requires whitelisting by IP address.
I haven't found any way to specify a range of IP addresses for the node group (or for the subnets set up in the VPC in which the cluster is located) from which AWS will pick an IP address.
Is there any other way to configure the worker nodes to use fixed IP addresses?
Currently it is not possible to associate Elastic IPs with instances running as part of an EKS node group. However, I will offer a much better alternative to your current setup, which is essentially all public.
Firstly, run your worker nodes or node groups inside private subnets. This gives you the ability to route out to the internet through a static IP.
The way to get a static IP that you can whitelist on the desired service is to use a NAT Gateway. Setup instructions. The NAT gateway is associated with an Elastic IP, which won't change.
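If you prefer the CLI over the console, the gist is roughly this (all IDs below are placeholders; the NAT gateway itself lives in a public subnet, and the route goes into the private subnets' route table):
# Allocate an Elastic IP for the NAT gateway
aws ec2 allocate-address --domain vpc
# Create the NAT gateway in a public subnet, using the allocation id returned above
aws ec2 create-nat-gateway --subnet-id subnet-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0
# Point the private subnets' default route at the NAT gateway
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0123456789abcdef0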
Since you are running EKS, don't forget to modify the aws-vpc-cni configuration with AWS_VPC_K8S_CNI_EXTERNALSNAT = true. This is essential for pods to work correctly and route out to the internet. If set to true, the SNAT iptables rule and off-VPC IP rule are not applied, and these rules are removed if they have already been applied. For this, your nodes must be running in a private subnet and connected to the internet through an AWS NAT Gateway or another external NAT device. More info on aws-vpc-cni.
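A quick sketch of one way to set that (the env var lives on the aws-node DaemonSet that the CNI runs as):
kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_EXTERNALSNAT=true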
I have a Kubernetes Master and Minions running on EC2 instances and was able to successfully deploy an example app with the commands below:
kubectl run hello-world --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello-world --type=NodePort
Which is now available externally from port 30013:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-world 10.43.95.16 <nodes> 8080:30013/TCP 1h
I'm now trying to access this by visiting the private IP of this Kubernetes Minion's EC2 instance on port 30013, but I am not able to connect at all.
I've checked the AWS security group, and this port is open and the group is attached to the EC2 instance. I cannot think of anything else that would block access to the application.
Is there any known issues with AWS networking with Kubernetes exposed services?
It should work (and it works on my cluster on AWS). Are you sure you are using the IP address of the eth0 interface and not cbr0 or something else? EC2 instances have just one interface and the public address is mapped to it, so from inside the EC2 instance there is not much difference.
Also, you should be able to contact 10.43.95.16 on port 8080, or just use the DNS name. If you want to connect to this service from another app in the cluster, you should use that instead (no node crash will affect the communication, etc.).
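For example, from another pod in the cluster (assuming the service lives in the default namespace), something like this should work:
# By the service's cluster DNS name
curl http://hello-world.default.svc.cluster.local:8080
# Or, from a pod in the same namespace, just the service name
curl http://hello-world:8080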