I have a Minikube cluster, and when I run kubectl apply ./application.yaml it responds with the error: open ./application.yaml: operation not permitted. How can I solve it?
You can resolve the error by granting your terminal the required file permissions in your operating system.
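For example, on Linux you could first make sure the file itself is readable and then pass it to kubectl with -f (a rough sketch; adjust the path to your setup, and on macOS you may additionally need to grant your terminal access to the folder under System Preferences -> Security & Privacy):
$ ls -l ./application.yaml         # confirm the file exists and is readable by your user
$ chmod u+r ./application.yaml     # add read permission if it is missing
$ kubectl apply -f ./application.yaml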
Related GitHub repo
After I configured kubectl for the AWS EKS cluster, I deployed the services using these commands:
kubectl apply -f env-configmap.yaml
kubectl apply -f env-secret.yaml
kubectl apply -f aws-secret.yaml
# this is repeated for all services
kubectl apply -f svcname-deployment.yaml
kubectl apply -f svcname-service.yaml
The other services ran successfully, but the reverse proxy returned an error. When I investigated by running the command kubectl describe pod reverseproxy...
I got this info:
https://pastebin.com/GaREMuyj
[Edited]
After running the command kubectl logs -f reverseproxy-667b78569b-qg7p I get this:
As David Maze very rightly pointed out, your problem is not reproducible. You haven't provided all the configuration files, for example. However, the error you received clearly points to the problem:
host not found in upstream "udagram-users: 8080" in /etc/nginx/nginx.conf:11
This error makes it clear that you are trying to connect to host udagram-users: 8080 as defined in file /etc/nginx/nginx.conf on line 11.
And how can I solve it please?
You need to check the connection. (It is also possible that you entered the wrong hostname or port in the config.) You mentioned that you are using multiple subnets:
it is using 5 subnets
In such a situation, it is very likely that there is no connection because the individual components operate on different networks and will never be able to communicate with each other. If you run all your containers on one network, it should work. If, on the other hand, you want to use multiple subnets, you need to ensure container-to-container communication across multiple subnets.
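For example, you could first confirm that a Service with that exact name exists and resolves from inside the cluster (a rough sketch; udagram-users and port 8080 are taken from the error above):
$ kubectl get svc udagram-users -o wide     # does a Service with this exact name exist, and does it expose port 8080?
$ kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup udagram-users
# if it resolves, also make sure nginx.conf line 11 has no stray space in the hostname, i.e. udagram-users:8080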
See also this similar problem with many possible solutions.
I am trying to create an application load balancer controller on my EKS cluster by following this link.
When I run these steps (after making the necessary changes to the downloaded YAML file)
curl -o v2_1_2_full.yaml https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.1.2/docs/install/v2_1_2_full.yaml
kubectl apply -f v2_1_2_full.yaml
I get this output
customresourcedefinition.apiextensions.k8s.io/targetgroupbindings.elbv2.k8s.aws configured
mutatingwebhookconfiguration.admissionregistration.k8s.io/aws-load-balancer-webhook configured
role.rbac.authorization.k8s.io/aws-load-balancer-controller-leader-election-role unchanged
clusterrole.rbac.authorization.k8s.io/aws-load-balancer-controller-role configured
rolebinding.rbac.authorization.k8s.io/aws-load-balancer-controller-leader-election-rolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/aws-load-balancer-controller-rolebinding unchanged
service/aws-load-balancer-webhook-service unchanged
deployment.apps/aws-load-balancer-controller unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/aws-load-balancer-webhook configured
Error from server (InternalError): error when creating "v2_1_2_full.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s: no endpoints available for service "cert-manager-webhook"
Error from server (InternalError): error when creating "v2_1_2_full.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s: no endpoints available for service "cert-manager-webhook"
The load balancer controller doesn't appear to start up because of this and never reaches the ready state.
Has anyone any suggestions on how to resolve this issue?
It turns out the taints on my nodegroup prevented the cert-manager pods from being scheduled on any node.
These commands helped debug and led me to a fix for this issue:
kubectl get po -n cert-manager
kubectl describe po <pod id> -n cert-manager
My solution was to create another nodeGroup with no taints specified. This allowed the cert-manager to run.
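In addition to the commands above, these can help confirm that taints are what is blocking the scheduling (a rough sketch, not tied to any particular nodegroup):
$ kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints    # show the taints on every node
$ kubectl get events -n cert-manager --field-selector reason=FailedScheduling    # scheduling failures usually mention the untolerated taints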
While trying to get the pod or node states from Google Cloud Platform Cloud Shell, I'm facing this error. Can someone please help me? I can see the output of "kubectl config view".
Posting this answer as a community wiki for better visibility, and because a possible solution was posted in the comments:
Does this answer your question? Unable to connect to the server: dial tcp i/o time out
Adding to that:
The command below:
$ kubectl config view
is used to show the configuration stored in your ~/.kube/config file. The fact that you can see the output of this command doesn't mean you have the correct cluster configured for use with kubectl.
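To see which cluster kubectl would actually talk to, you can also run, for example:
$ kubectl config current-context    # shows the context (and therefore the cluster) kubectl is currently pointed at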
From the perspective of Google Cloud Platform and Cloud Shell
There is an official documentation regarding troubleshooting issues with GKE:
Cloud.google.com: Kubernetes Engine: Docs: Troubleshooting
There could be several reasons why you are getting the following error:
You are referencing the wrong cluster in your ~/.kube/config file.
$ gcloud container clusters get-credentials CLUSTER_NAME --zone=ZONE - you will need to run this command to fetch the correct configuration
You can also get the above command from the Kubernetes Engine page (Connect button).
You are referencing a cluster in your ~/.kube/config file that was deleted.
You created a Private GKE cluster.
For more information you can look in the Cloud Console -> Kubernetes Engine -> CLUSTER_NAME
You can also run:
$ gcloud container clusters list - this command will show your clusters and the state (status) they are in
$ gcloud container clusters describe CLUSTER_NAME --zone=ZONE - this command will show you the configuration of the cluster
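Putting those together, a typical flow looks like this (an example only; my-cluster and us-central1-a are placeholders for your own cluster name and zone):
$ gcloud container clusters list                                               # check the cluster name, zone and STATUS
$ gcloud container clusters get-credentials my-cluster --zone=us-central1-a    # writes the correct entry into ~/.kube/config (placeholder name/zone)
$ kubectl get nodes                                                            # should now reach the cluster instead of timing out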
I've checked almost all of the answers on here, but nothing has resolved this yet.
When running kubectl, I will consistently get error: You must be logged in to the server (Unauthorized).
I have tried editing the config file via kubectl config --kubeconfig=config view, but I still receive the same error, even when running kubectl edit -n kube-system configmap/aws-auth.
Even when I just try to analyze my clusters and run aws eks list-clusters, I receive a different error: An error occurred (UnrecognizedClientException) when calling the ListClusters operation: The security token included in the request is invalid.
I have completely torn down my clusters on EKS and rebuilt them, but I keep encountering these same errors. This is my first time attempting to use AWS EKS, and I've been trying different things for a few days.
I've set my credentials with aws configure:
λ aws configure
AWS Access Key ID [****************Q]: *****
AWS Secret Access Key [****************5]: *****
Default region name [us-west-2]: us-west-2
Default output format [json]: json
Even when trying to look at the config map, I receive the same error:
λ kubectl describe configmap -n kube-system aws-auth
error: You must be logged in to the server (Unauthorized)
For me the problem was caused by the system time; the commands below solved the issue for me:
sudo apt install ntp
service ntp restart
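If you want to check both suspects, clock skew and credentials, before tearing anything down, something like this helps (a rough sketch; timedatectl assumes a systemd-based system):
$ timedatectl status              # look for "System clock synchronized: yes" - AWS request signing is time-sensitive
$ aws sts get-caller-identity     # should return the IAM identity that was used to create the EKS cluster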
With my cluster up and running on AWS EKS, I'm having trouble running helm init, which fails with the following error:
$ helm init --service-account tiller --upgrade
Error: error installing: deployments.extensions is forbidden: User "system:anonymous" cannot create deployments.extensions in the namespace "kube-system"
kubectl works properly (object retrieval, creation and cluster administration), authenticating and authorizing correctly by running heptio-authenticator-aws at connection time (with an exec section in the kubectl config).
In order to prepare the cluster for helm, I created the service account and role binding as specified in the helm docs.
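For reference, the service account and role binding can be created with something like the following (a rough equivalent of the docs' cluster-admin example, not necessarily my exact manifests):
$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller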
I've heard of people having helm running on EKS, and I'm guessing they're skipping the exec section of the kubectl config by hardcoding the token... I'd like to avoid that!
Any ideas on how to fix this? My guess is that it is related to helm not being able to execute heptio-authenticator-aws properly
I was running helm version 2.8.2 when I got this error; upgrading to v2.9.1 fixed it!
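If you hit the same thing, checking the client version and re-running the init after upgrading is a quick way to confirm (a small sketch):
$ helm version --client                           # should report v2.9.1 or later
$ helm init --service-account tiller --upgrade    # re-run once the newer helm binary is in place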