Trouble mounting an EBS volume to a Pod in a Kubernetes cluster

The cluster that I use is bootstrapped using kubeadm and it's deployed on AWS.
sudo kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:51:33Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
I am trying to configure a pod to mount a persistent volume (I am not using PV and PVC objects for the moment); this is the manifest I used:
apiVersion: v1
kind: Pod
metadata:
  name: mongodb-aws
spec:
  volumes:
  - name: mongodb-data
    awsElasticBlockStore:
      volumeID: vol-xxxxxx
      fsType: ext4
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
At first I had this error from the logs of the pod:
"mount: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-xxxx does not exist"
After some research, I discovered that I have to set a cloud provider, and that is what I have been trying to do for the past 10 hours. I tested many suggestions, but none worked. I tried to tag all the resources used by the cluster, as mentioned in https://github.com/kubernetes/kubernetes/issues/53538#issuecomment-345942305, and I also tried the official way to run the in-tree cloud provider with kubeadm: https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/
kubeadm_config.yml file:
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "aws"
    cloud-config: "/etc/kubernetes/cloud.conf"
---
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1alpha3
kubernetesVersion: v1.12.0
apiServerExtraArgs:
  cloud-provider: "aws"
  cloud-config: "/etc/kubernetes/cloud.conf"
apiServerExtraVolumes:
- name: cloud
  hostPath: "/etc/kubernetes/cloud.conf"
  mountPath: "/etc/kubernetes/cloud.conf"
controllerManagerExtraArgs:
  cloud-provider: "aws"
  cloud-config: "/etc/kubernetes/cloud.conf"
controllerManagerExtraVolumes:
- name: cloud
  hostPath: "/etc/kubernetes/cloud.conf"
  mountPath: "/etc/kubernetes/cloud.conf"
In /etc/kubernetes/cloud.conf I put:
[Global]
KubernetesClusterTag=kubernetes
KubernetesClusterID=kubernetes
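(For reference, the GitHub issue linked above is about tagging the AWS resources so the cloud provider can find them, and the tag is normally expected to match this cluster ID. A hedged AWS CLI sketch, with every ID below as a placeholder:)

# Tag the instances, subnets, security groups and EBS volumes used by the cluster
# with both the legacy and the newer cluster tags (IDs are placeholders):
aws ec2 create-tags \
  --resources i-0xxxxxxxxxxxxxxxx vol-xxxxxx subnet-xxxxxxxx sg-xxxxxxxx \
  --tags Key=KubernetesCluster,Value=kubernetes Key=kubernetes.io/cluster/kubernetes,Value=owned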
After running kubeadm init --config kubeadm_config.yml I had these errors:
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
So the control plane was not created.
When I removed:
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "aws"
    cloud-config: "/etc/kubernetes/cloud.conf"
from kubeadm_config.yml and ran kubeadm init --config kubeadm_config.yml, the Kubernetes master initialized successfully. But when I executed kubectl get pods --all-namespaces, I got:
NAMESPACE     NAME                                       READY   STATUS             RESTARTS   AGE
kube-system   etcd-ip-172-31-31-160                      1/1     Running            0          11m
kube-system   kube-apiserver-ip-172-31-31-160            1/1     Running            0          11m
kube-system   kube-controller-manager-ip-172-31-31-160   0/1     CrashLoopBackOff   6          11m
kube-system   kube-scheduler-ip-172-31-31-160            1/1     Running            0          10m
The controller manager didn't run. However, the --cloud-provider=aws command-line flag is present for the API server (in /etc/kubernetes/manifests/kube-apiserver.yaml) and also for the controller manager (/etc/kubernetes/manifests/kube-controller-manager.yaml).
When I ran sudo kubectl logs kube-controller-manager-ip-172-31-13-85 -n kube-system, I got:
Flag --address has been deprecated, see --bind-address instead.
I1126 11:27:35.006433 1 serving.go:293] Generated self-signed cert (/var/run/kubernetes/kube-controller-manager.crt, /var/run/kubernetes/kube-controller-manager.key)
I1126 11:27:35.811493 1 controllermanager.go:143] Version: v1.12.0
I1126 11:27:35.812091 1 secure_serving.go:116] Serving securely on [::]:10257
I1126 11:27:35.812605 1 deprecated_insecure_serving.go:50] Serving insecurely on 127.0.0.1:10252
I1126 11:27:35.812760 1 leaderelection.go:187] attempting to acquire leader lease kube-system/kube-controller-manager...
I1126 11:27:53.260484 1 leaderelection.go:196] successfully acquired lease kube-system/kube-controller-manager
I1126 11:27:53.261474 1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"b0da1291-f16d-11e8-baeb-02a38a37cfd6", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-172-31-13-85_4603714e-f16e-11e8-8d9d-02a38a37cfd6 became leader
I1126 11:27:53.290493 1 aws.go:1042] Building AWS cloudprovider
I1126 11:27:53.290642 1 aws.go:1004] Zone not specified in configuration file; querying AWS metadata service
F1126 11:27:53.296760 1 controllermanager.go:192] error building controller context: cloud provider could not be initialized: could not init cloud provider "aws": error finding instance i-0b063e2a3c9797398: "error listing AWS instances: \"NoCredentialProviders: no valid providers in chain. Deprecated.\\n\\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors\""
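(Side note on that last fatal line: NoCredentialProviders usually means the EC2 instance running the controller manager has no IAM instance profile attached, or the attached role lacks EC2 permissions. A hedged sketch of how this can be checked with the AWS CLI; the role name below is a placeholder:)

# Does the control-plane instance have an instance profile at all?
aws ec2 describe-instances --instance-ids i-0b063e2a3c9797398 \
  --query 'Reservations[].Instances[].IamInstanceProfile'

# If it does, list the policies attached to the corresponding role (name is a placeholder)
aws iam list-attached-role-policies --role-name k8s-master-role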
I haven't tried downgrading kubeadm (to be able to use a manifest with only kind: MasterConfiguration).
If you need more information, please feel free to ask.

Related

AWS EKS Fargate - Unable to mount EFS volume with statefulset

I want to run a StatefulSet on AWS EKS Fargate and attach an EFS volume to it, but I am getting errors when mounting the volume in the pod.
These are the errors I am getting from describe pod:
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal LoggingEnabled 114s fargate-scheduler Successfully enabled logging for pod
Normal Scheduled 75s fargate-scheduler Successfully assigned default/app1 to fargate-10.0.2.123
Warning FailedMount 43s (x7 over 75s) kubelet MountVolume.SetUp failed for volume "efs-pv" : rpc error: code = Internal desc = Could not mount "fs-xxxxxxxxxxxxxxxxx:/" at "/var/lib/kubelet/pods/b799a6d6-fe9e-4f80-ac2d-8ccf8834d7c4/volumes/kubernetes.io~csi/efs-pv/mount": mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs -o tls fs-xxxxxxxxxxxxxxxxx:/ /var/lib/kubelet/pods/b799a6d6-fe9e-4f80-ac2d-8ccf8834d7c4/volumes/kubernetes.io~csi/efs-pv/mount
Output: Failed to resolve "fs-xxxxxxxxxxxxxxxxx.efs.us-east-1.amazonaws.com" - check that your file system ID is correct, and ensure that the VPC has an EFS mount target for this file system ID.
See https://docs.aws.amazon.com/console/efs/mount-dns-name for more detail.
Attempting to lookup mount target ip address using botocore. Failed to import necessary dependency botocore, please install botocore first.
Warning: config file does not have fall_back_to_mount_target_ip_address_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fall_back_to_mount_target_ip_address_enabled = True].
If anyone has set up an EFS volume with an EKS Fargate cluster, please have a look at this. I have been stuck on it for a long time.
What I have set up:
Created an EFS volume
CSIDriver Object
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: efs.csi.aws.com
spec:
  attachRequired: false
Storage Class
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <EFS filesystem ID>
PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
Pod Configuration
apiVersion: v1
kind: Pod
metadata:
  name: app1
spec:
  containers:
  - name: app1
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out1.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim
I had the same question as you literally a day later and have been working on the error nonstop since then! Did you check to make sure your VPC has DNS hostnames enabled? That is what fixed it for me.
Just an FYI, if you are using Fargate and you want to change this: I had to go as far as deleting the entire cluster after changing the DNS hostnames flag in order for the change to propagate. I'm not sure if you're familiar with the DHCP options of a normal EC2 instance, but usually it takes something like renewing the IP configuration to force the flag to propagate; since Fargate is a managed system, though, I was unable to find a way to do that from the node itself. I have created another post here attempting to answer that question.
Another quick FYI: if your pod execution role doesn't have access to EFS, you will need to attach a policy that allows access (I just used the default AmazonElasticFileSystemFullAccess managed policy for the time being in order to get things working). Once again, you will have to relaunch your whole cluster for this role change to propagate if you haven't already done so!
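For completeness, a hedged AWS CLI sketch of the checks described above (the VPC ID, file system ID and role name are placeholders):

# Enable DNS hostnames on the cluster's VPC, needed to resolve fs-xxxx.efs.<region>.amazonaws.com
aws ec2 modify-vpc-attribute --vpc-id vpc-xxxxxxxx --enable-dns-hostnames

# Verify the file system actually has a mount target in the subnets used by Fargate
aws efs describe-mount-targets --file-system-id fs-xxxxxxxxxxxxxxxxx

# Give the pod execution role access to EFS (role name is a placeholder)
aws iam attach-role-policy \
  --role-name my-fargate-pod-execution-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonElasticFileSystemFullAccess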

pod-identity-webhook missing after EKS 1.16 upgrade

After upgrading to EKS 1.16, IAM Roles for Service Accounts stopped working.
It was configured as described in the article on configuring and assigning service accounts to pods, and worked with EKS 1.14 and 1.15.
Running service-account.yaml and test-pod.yaml on EKS 1.15 (qa env) does inject the following env variables:
AWS_ROLE_ARN=arn:aws:iam::xxx:role/oidc-my-service-api-qa
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
While running the same resources on EKS 1.16 (test env), they are not added.
service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::xxxxxxxxx:role/oidc-my-service-test
  name: oidc-my-service-service-account
test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: busybox
    command: ["/bin/sh", "-c", "env | grep AWS"]
  securityContext:
    fsGroup: 1000
  serviceAccountName: "oidc-my-service-service-account"
UPDATE
Turns out I'm missing Amazon EKS Pod Identity Webhook, but where did it go?
EKS 1.15
kubectl get mutatingwebhookconfigurations pod-identity-webhook
NAME CREATED AT
pod-identity-webhook 2020-01-11T17:01:52Z
EKS 1.16
kubectl get mutatingwebhookconfigurations pod-identity-webhook
Error from server (NotFound): mutatingwebhookconfigurations.admissionregistration.k8s.io "pod-identity-webhook" not found
I was able to re-install amazon-eks-pod-identity-webhook in my EKS 1.16 cluster by cloning the code from GitHub, building the Docker image, pushing it to my ECR repo, and running
make cluster-up IMAGE=my-account-id.dkr.ecr.my-region.amazonaws.com/pod-identity-webhook:latest
It is still an open question where it went, as it's part of the managed service, as stated in this comment.
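A quick way to confirm the webhook is mutating pods again, reusing the test pod from the question (just a sketch; the pod prints its AWS_* variables and exits):

# The webhook configuration should exist again
kubectl get mutatingwebhookconfigurations pod-identity-webhook

# Re-run the test pod and check that AWS_ROLE_ARN / AWS_WEB_IDENTITY_TOKEN_FILE show up
kubectl delete pod test --ignore-not-found
kubectl apply -f test-pod.yaml
sleep 10
kubectl logs test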

AWS EKS Kubernetes and DockerHub

I have a cluster and nodes created in AWS EKS. I applied the deployment to that cluster as follows:
kubectl apply -f deployment.yaml
where deployment.yaml contains the container specification along with the DockerHub repo and image.
However, I made a mistake in deployment.yaml and I need to re-apply it.
My questions are:
1 - How do I re-apply a deployment.yaml to the AWS EKS cluster using kubectl? Just running the above command (kubectl apply -f deployment.yaml) is not working.
2 - After I re-apply the deployment.yaml, will the node go and pick up the DockerHub image, or do I still need to do something else (supposing all the other details are OK)?
Some outputs below:
>> kubectl get pods
my-app-786dc95d8f-b6w4h 0/1 ImagePullBackOff 0 9h
my-app-786dc95d8f-w8hkg 0/1 ImagePullBackOff 0 9h
kubectl describe pod my-app-786dc95d8f-b6w4h
Name: my-app-786dc95d8f-b6w4h
Namespace: default
Priority: 0
Node: ip-192-168-24-13.ec2.internal/192.168.24.13
Start Time: Fri, 10 Jul 2020 12:54:38 -0400
Labels: app=my-app
pod-template-hash=786dc95d8f
Annotations: kubernetes.io/psp: eks.privileged
Status: Pending
IP: 192.168.7.235
IPs:
IP: 192.168.7.235
Controlled By: ReplicaSet/my-app-786dc95d8f
Containers:
simple-node:
Container ID:
Image: BAD_REPO/simple-node
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mwwvl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-mwwvl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mwwvl
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 17m (x2570 over 9h) kubelet, ip-192-168-24-13.ec2.internal Back-off pulling image "BAD_REPO/simple-node"
Warning Failed 2m48s (x2634 over 9h) kubelet, ip-192-168-24-13.ec2.internal Error: ImagePullBackOff
If you need to change the image:
kubectl set image deployment.v1.apps/{your_deployment_name} {container_name}={image_name}:{tag}
but you can always do
kubectl delete -f deployment.yaml
kubectl create -f deployment.yaml
Since your image is in ImagePullBackOff, it doesn't work anyway and you can just recreate the deployment. Usually you don't do drop/create in prod; that is why I use the image change all the time. You just have to change the tag for every new image.
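With the names from the question, that would look something like this (assuming the Deployment is named my-app, as the ReplicaSet name suggests; the corrected repository and tag are placeholders):

# deployment "my-app", container "simple-node", pointing at the fixed image
kubectl set image deployment/my-app simple-node=GOOD_REPO/simple-node:v1

# then watch the rollout
kubectl rollout status deployment/my-app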
ImagePullBackOff means that Kubernetes is not able to pull the image.
Specifically, the service account "default" is not able to pull the image.
To fix this issue, you need two checks:
Check that you don't have a typo in the image name or tag, and that the image is available publicly.
If the Docker registry is private, make sure to create a docker-registry secret, and then patch the service account "default" with this secret.
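A minimal sketch of that second check, assuming a private DockerHub repository (the secret name and credentials are placeholders):

# Create a docker-registry secret with the DockerHub credentials
kubectl create secret docker-registry dockerhub-creds \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> --docker-password=<password> --docker-email=<email>

# Attach it to the "default" service account so its pods can pull private images
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'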

Cannot create deployment of nginx tasks definitions on AWS Fargate using virtual-kubelet

I am unable to deploy nginx containers to AWS Fargate using kubectl and virtual-kubelet. I am following this guide: https://aws.amazon.com/blogs/opensource/aws-fargate-virtual-kubelet/.
I am having an issue with Step 6: Create Kubernetes objects.
I would like to know why the nginx containers are PENDING and why the AWS Fargate task definitions have not been created.
The following are some of the commands I used. I can give more detail upon request.
# ./virtual-kubelet --provider aws --provider-config fargate.toml
...
2019/05/16 06:50:24 Received NodeDaemonEndpoints request.
ERRO[0000] TLS certificates not provided, not setting up pod http server certPath= keyPath= node=virtual-kubelet operatingSystem=Linux provider=aws watchedNamespace=
INFO[0000] Initialized node=virtual-kubelet operatingSystem=Linux provider=aws watchedNamespace=
INFO[0000] Created node node=virtual-kubelet operatingSystem=Linux provider=aws watchedNamespace=
INFO[0000] Node leases not supported, falling back to only node status updates node=virtual-kubelet operatingSystem=Linux provider=aws watchedNamespace=
INFO[0000] Pod cache in-sync node=virtual-kubelet operatingSystem=Linux provider=aws watchedNamespace=
2019/05/16 06:50:25 Received GetPods request.
2019/05/16 06:50:25 Responding to GetPods: [].
INFO[0000] starting workers node=virtual-kubelet operatingSystem=Linux provider=aws watchedNamespace=
INFO[0000] started workers node=virtual-kubelet operatingSystem=Linux provider=aws watchedNamespace=
# kubectl describe node virtual-kubelet
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedToCreateRoute 98s (x951 over 160m) route_controller (combined from similar events): Could not create route e1e32758-77a6-11e9-a68e-0a95bb07bfa2 100.96.4.0/24 for node virtual-kubelet after 47.871544ms: instance not found
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-20-47-10.eu-west-2.compute.internal Ready master 30h v1.14.1
ip-172-20-47-242.eu-west-2.compute.internal Ready node 30h v1.14.1
ip-172-20-59-102.eu-west-2.compute.internal Ready node 30h v1.14.1
virtual-kubelet Ready agent 33m v1.13.1-vk-v0.9.0-40-g5b3190ac-dev
kubectl create -f nginx-deployment.yaml
# kubectl get deployments -o wide
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-c6695csfc-5f7bh 0/1 Pending 0 21m <none> <none> <none> <none>
nginx-deployment-c6695csfc-bwfb8 0/1 Pending 0 21m <none> <none> <none> <none>
nginx-deployment-c6695csfc-mcfvw 0/1 Pending 0 21m <none> <none> <none> <none>
# kubectl describe pod nginx-deployment-c6695csfc-5f7bh
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 2m11s (x191 over 22m) default-scheduler 0/4 nodes are available: 1 Insufficient cpu, 1 node(s) had taints that the pod didn't tolerate, 3 node(s) didn't match node selector.
Update:
I then added the nodeSelector label to my nodes using the following command for each node:
kubectl label nodes ip-172-20-47-15.eu-west-2.compute.internal type=virtual-kubelet
type=virtual-kubelet is the nodeSelector specified in the manifest file, nginx-deployment.yaml.
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-c6695csfc-5f7bh 1/1 Running 0 4m59s 100.96.2.7 ip-172-20-47-242.eu-west-2.compute.internal <none> <none>
nginx-deployment-c6695csfc-bwfb8 1/1 Running 0 4m59s 100.96.1.6 ip-172-20-59-102.eu-west-2.compute.internal <none> <none>
nginx-deployment-c6695csfc-mcfvw 1/1 Running 0 4m59s 100.96.2.8 ip-172-20-47-242.eu-west-2.compute.internal <none>
Now when I go to the AWS Fargate dashboard, the associated task definitions have not been created as shown in the tutorial.
This issue is resolved. I was able to create the AWS Fargate task definitions by adding the ALB security group to the fargate.toml file and by adding tolerations to the nginx-deployment.yaml file, as shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Equal
        value: azure
        effect: NoSchedule

Kubernetes Cluster-IP service not working as expected

OK, so currently I've got a Kubernetes master up and running on an AWS EC2 instance, and a single worker running on my laptop:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 34d v1.9.2
worker Ready <none> 20d v1.9.2
I have created a Deployment using the following configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostnames
  labels:
    app: hostnames-deployment
spec:
  selector:
    matchLabels:
      app: hostnames
  replicas: 1
  template:
    metadata:
      labels:
        app: hostnames
    spec:
      containers:
      - name: hostnames
        image: k8s.gcr.io/serve_hostname
        ports:
        - containerPort: 9376
          protocol: TCP
The deployment is running:
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hostnames 1 1 1 1 1m
A single pod has been created on the worker node:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hostnames-86b6bcdfbc-v8s8l 1/1 Running 0 2m
From the worker node, I can curl the pod and get the information:
$ curl 10.244.8.5:9376
hostnames-86b6bcdfbc-v8s8l
I have created a service using the following configuration:
kind: Service
apiVersion: v1
metadata:
  name: hostnames-service
spec:
  selector:
    app: hostnames
  ports:
  - port: 80
    targetPort: 9376
The service is up and running:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hostnames-service ClusterIP 10.97.21.18 <none> 80/TCP 1m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 34d
As I understand it, the service should expose the pod cluster-wide and I should be able to use the service IP to get the information the pod is serving from any node in the cluster.
If I curl the service from the worker node it works just as expected:
$ curl 10.97.21.18:80
hostnames-86b6bcdfbc-v8s8l
But if I try to curl the service from the master node located on the AWS EC2 instance, the request hangs and gets timed out eventually:
$ curl -v 10.97.21.18:80
* Rebuilt URL to: 10.97.21.18:80/
* Trying 10.97.21.18...
* connect to 10.97.21.18 port 80 failed: Connection timed out
* Failed to connect to 10.97.21.18 port 80: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to 10.97.21.18 port 80: Connection timed out
Why can't the request from the master node reach the pod on the worker node by using the Cluster-IP service?
I have read quite a few articles regarding Kubernetes networking, as well as the official Kubernetes Services documentation, and couldn't find a solution.
Depending on which proxy mode you are using, the details differ, but conceptually it works the same way.
You are trying to connect to two different types of addresses: the pod IP address, which is accessible from the node, and the virtual IP address, which is accessible from pods in the Kubernetes cluster.
The IP address of the service is not the IP address of some pod or any other interface; it is a virtual address which is mapped to pod IP addresses based on the rules you define in the service, and it is managed by the kube-proxy daemon, which is part of Kubernetes.
That address is intended for communication inside the cluster, so that you can reach the pods behind a service without caring about how many replicas there are or where they are actually running, because the service IP is static, unlike a pod's IP.
So, the service IP address is meant to be reachable from other pods, not from the nodes.
You can read in the official documentation about how Service virtual IPs work.
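One way to see that mapping for the service above is to look at its Endpoints object; it should list the pod IP and target port the virtual IP forwards to (output is indicative only):

kubectl get endpoints hostnames-service
# NAME                ENDPOINTS         AGE
# hostnames-service   10.244.8.5:9376   1m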
kube-proxy is responsible for setting up the iptables rules (by default) that route cluster IPs. The Service's cluster IP should be routable from anywhere running kube-proxy. My first guess would be that kube-proxy is not running on the master.
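A hedged way to check that guess, assuming a kubeadm-style cluster where kube-proxy runs as a DaemonSet in kube-system:

# Is there a kube-proxy pod scheduled on the master node?
kubectl get pods -n kube-system -o wide | grep kube-proxy

# If there is, its logs may show why the cluster IP rules are not being programmed
kubectl logs -n kube-system <kube-proxy-pod-on-master>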