kubectl Error: You must be logged in to the server

I've checked almost all of the answers on here, but nothing has resolved this yet.
When running kubectl, I consistently get error: You must be logged in to the server (Unauthorized).
I have tried editing the config file via kubectl config --kubeconfig=config view, but I still receive the same error, even when running kubectl edit -n kube-system configmap/aws-auth.
Even when I just try to list my clusters by running aws eks list-clusters, I receive a different error: An error occurred (UnrecognizedClientException) when calling the ListClusters operation: The security token included in the request is invalid.
I have completely torn down my clusters on EKS and rebuilt them, but I keep encountering these same errors. This is my first time attempting to use AWS EKS, and I've been trying different things for a few days.
I've set up my credentials with aws configure:
λ aws configure
AWS Access Key ID [****************Q]: *****
AWS Secret Access Key [****************5]: *****
Default region name [us-west-2]: us-west-2
Default output format [json]: json
Even when trying to look at the config map, I receive the same error:
λ kubectl describe configmap -n kube-system aws-auth
error: You must be logged in to the server (Unauthorized)

For me, the problem was the system clock being out of sync (AWS request signing is time-sensitive); the commands below solved the issue for me.
sudo apt install ntp
sudo service ntp restart
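If the clock is the culprit, you can confirm it by comparing the local time with actual UTC; AWS rejects request signatures from hosts whose clocks drift too far. A minimal check, assuming a systemd-based host (chrony or systemd-timesyncd work as alternatives to the ntp package):
date -u                        # should be within a few seconds of real UTC time
timedatectl status             # shows whether NTP synchronization is active
sudo timedatectl set-ntp true  # enable time synchronization via systemd-timesyncd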

Related

Istio Ingress not showing address (Kubeflow on AWS)

I'm trying to set up Kubeflow on AWS, and I followed this tutorial to do so.
I used Dex instead of Cognito with the following policy.
Then, at the step kfctl apply -V -f kfctl_aws.yaml, I first received this error:
IAM for Service Account is not supported on non-EKS cluster
So to fix this I set the property enablePodIamPolicy: false.
I then retried and it successfully deployed Kubeflow. On checking the service status using kubectl -n kubeflow get all, I found all services ready except the MPI operator.
Ignoring this, I tried to run kubectl get ingress -n istio-system and got the following result.
Upon investigating with kubectl -n kubeflow logs $(kubectl get pods -n kubeflow --selector=app=aws-alb-ingress-controller --output=jsonpath={.items..metadata.name}),
I found the following error:
E1104 12:09:37.446342 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile LB managed SecurityGroup: failed to reconcile managed LoadBalancer securityGroup: UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: Lsvzm7f4rthL4Wxn6O8wiQL1iYXQUES_9Az_231BV7fyjgs7CHrwgUOVTNTf4334_C4voUogjSuCoF8GTOKhc5A7zAFzvcGUKT_FBs6if06KMQCLiCoujgfoqKJbG75pPsHHDFARIAdxNYZeIr4klmaUaxbQiFFxpvQsfT4ZkLMD7jmuQQcrEIw_U0MlpCQGkcvC69NRVVKjynIifxPBySubw_O81zifDp0Dk8ciRysaN1SbF85i8V3LoUkrtwROhUI9aQYJgYgSJ1CzWpfNLplbbr0X7YIrTDKb9sMhmlVicj_Yng0qFka_OVmBjHTnpojbKUSN96uBjGYZqC2VQXM1svLAHDTU1yRruFt5myqjhJ0fVh8Imhsk1Iqh0ytoO6eFoiLTWK4_Crb8XPS5tptBBzpEtgwgyk4QwOmzySUwkvNdDB-EIsTJcg5RQJl8ds4STNwqYV7XXeWxYQsmL1vGPVFY2lh_MX6q1jA9n8smxITE7F6AXsuRHTMP5q0jk58lbrUe-ZvuaD1b0kUTvpO3JtwWwxRd7jTKF7xde2InNOXwXxYCxHOw0sMX56Y1wLkvEDTLrNLZWOACS-T5o7mXDip43U0sSoUtMccu7lpfQzH3c7lNdr9s2Wgz4OqYaQYWsxNxRlRBdR11TRMweZt4Ta6K-7si5Z-rrcGmjG44NodT0O14Gzj-S4i6bK-qPYvUEsVeUl51ev_MsnBKtCXcMF8W6j9D7Oe3iGj13uvlVJEtq3OIoRjBXIuQQ012H0b3nQqlkoKEvsPAA_txAjgHXVzEVcM301_NDQikujTHdnxHNdzMcCfY7DQeeOE_2FT_hxYGlbuIg5vonRTT7MfSP8_LUuoIICGS81O-hDXvCLoomltb1fqCBBU2jpjIvNALMwNdJmMnwQOcIMI_QonRKoe5W43v\n\tstatus code: 403, request id: a9be63bd-2a3a-4a21-bb87-93532923ffd2" "controller"="alb-ingress-controller" "request"={"Namespace":"istio-system","Name":"istio-ingress"}
I don't understand what exactly went wrong with the security permissions.
The alb-ingress-controller doesn't have permission to create an ALB.
By setting enablePodIamPolicy: false, I assume you went with option 2 of the guide.
The alb-ingress-controller uses the kf-admin role, and the installer needs to attach to that role the policy found in aws-config/iam-alb-ingress-policy.json. Most probably it wasn't attached, so you'll have to add it in IAM and attach it to the role.
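As a rough sketch (the exact role name depends on your deployment, so kf-admin-<region>-<cluster-name> below is a placeholder), the policy can be attached as an inline policy with the AWS CLI:
aws iam put-role-policy \
  --role-name kf-admin-<region>-<cluster-name> \
  --policy-name iam-alb-ingress-policy \
  --policy-document file://aws-config/iam-alb-ingress-policy.json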
After doing that, observe the reconciler logs of the alb-ingress-controller to see if it's able to create the ALB.
It's likely the cluster-name in the aws-alb-ingress-controller-config is not correctly configured.
If that's the case, you should edit the ConfigMap to set the right cluster name using kubectl edit cm aws-alb-ingress-controller-config -n kubeflow.
After that you should delete the pod so it restarts (kubectl -n kubeflow delete pod $(kubectl get pods -n kubeflow --selector=app=aws-alb-ingress-controller --output=jsonpath={.items..metadata.name})).
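To double-check the value before restarting the pod, you can dump the ConfigMap and compare it against the cluster name EKS reports (the exact data key may vary by Kubeflow version, so treat this as a sketch):
kubectl -n kubeflow get cm aws-alb-ingress-controller-config -o yaml
aws eks list-clusters --region <region-code>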

How can I get authed to run kubectl commands in EKS when I'm not the creator of the cluster && the creator/admin isn't available?

I have an EKS cluster that's not created by me. I want to operate the cluster by running kubectl commands, but I keep getting "error: You must be logged in to the server (Unauthorized)".
e.g.,
$ kubectl get pod
error: You must be logged in to the server (Unauthorized)
My IAM user account has AdminFullAccess, so I believe I'm being blocked by Kubernetes-level permissions. According to the AWS docs, someone who didn't create the cluster needs to ask the owner or an admin to modify the aws-auth ConfigMap, but they have already left the company. Is there any way to solve this problem?
Perhaps the creator's account still exists in IAM. If so, you can reset their API access keys and generate the kubeconfig anew using that IAM account. Assuming you've already configured the AWS CLI with their credentials:
aws eks --region <region-code> update-kubeconfig --name <cluster_name>
Once you've done that, you will have the same full access to the cluster that the person who left had. I recommend using this opportunity to patch aws-auth to enable access from your own account.
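A minimal sketch of that patch, assuming your own IAM user should get cluster-admin rights (the ARN and username are placeholders for your own values):
kubectl edit -n kube-system configmap/aws-auth
# then add your user under mapUsers, e.g.:
#   mapUsers: |
#     - userarn: arn:aws:iam::<your-account-id>:user/<your-iam-user>
#       username: <your-iam-user>
#       groups:
#         - system:masters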

AWS EKS - Failure creating load balancer controller

I am trying to create an application load balancer controller on my EKS cluster by following this link.
When I run these steps (after making the necessary changes to the downloaded YAML file):
curl -o v2_1_2_full.yaml https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.1.2/docs/install/v2_1_2_full.yaml
kubectl apply -f v2_1_2_full.yaml
I get this output
customresourcedefinition.apiextensions.k8s.io/targetgroupbindings.elbv2.k8s.aws configured
mutatingwebhookconfiguration.admissionregistration.k8s.io/aws-load-balancer-webhook configured
role.rbac.authorization.k8s.io/aws-load-balancer-controller-leader-election-role unchanged
clusterrole.rbac.authorization.k8s.io/aws-load-balancer-controller-role configured
rolebinding.rbac.authorization.k8s.io/aws-load-balancer-controller-leader-election-rolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/aws-load-balancer-controller-rolebinding unchanged
service/aws-load-balancer-webhook-service unchanged
deployment.apps/aws-load-balancer-controller unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/aws-load-balancer-webhook configured
Error from server (InternalError): error when creating "v2_1_2_full.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s: no endpoints available for service "cert-manager-webhook"
Error from server (InternalError): error when creating "v2_1_2_full.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s: no endpoints available for service "cert-manager-webhook"
The load balancer controller doesn't appear to start up because of this and never reaches the ready state.
Does anyone have any suggestions on how to resolve this issue?
It turns out the taints on my node group prevented the cert-manager pods from starting on any node.
These commands helped me debug and led me to a fix for this issue:
kubectl get po -n cert-manager
kubectl describe po <pod id> -n cert-manager
My solution was to create another nodeGroup with no taints specified. This allowed the cert-manager to run.
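For reference, a minimal sketch of creating an untainted managed node group with eksctl (all names and sizes are placeholders); alternatively, adding a matching toleration to the cert-manager pods would also let them schedule:
eksctl create nodegroup \
  --cluster <cluster-name> \
  --name untainted-ng \
  --node-type t3.medium \
  --nodes 2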

kubectl : error: You must be logged in to the server (Unauthorized)

I have created a kops cluster and am getting the error below when logging in to the cluster.
Error log:
INFO! KUBECONFIG env var set to /home/user/scripts/kube/kubeconfig.yaml
INFO! Testing kubectl connection....
error: You must be logged in to the server (Unauthorized)
ERROR! Test Failed, AWS role might not be recongized by cluster
I'm using a script for IAM authentication and logged in to the server with the proper role before connecting.
I am able to log in to another cluster in the same environment. I've tried a different Kubernetes version and a different configuration.
The KUBECONFIG doesn't have any problem; it has the same entries and token details as the other cluster.
I can see the token with the aws-iam-authenticator command.
I've gone through most of the related articles, and nothing helped.
With kops v1.19 and later you need to add --admin or --user when exporting your cluster's kubeconfig, and each time you log out of your server you have to export the cluster name and the state-store bucket again and then re-export the kubeconfig. This will work.
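As a sketch (the cluster name and state-store bucket below are placeholders for your own values):
export KOPS_CLUSTER_NAME=<cluster-name>
export KOPS_STATE_STORE=s3://<state-store-bucket>
kops export kubecfg --admin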
It seems to be an AWS authorization issue. At cluster creation, only the IAM user who created the cluster has admin rights on it, so you may need to add your own IAM user first.
1- Start by verifying the IAM user identity used implicitly in all commands: aws sts get-caller-identity
If your AWS CLI is set up correctly, you will get output similar to this:
{
    "UserId": "ABCDEFGHIJK",
    "Account": "12344455555",
    "Arn": "arn:aws:iam::12344455555:user/Toto"
}
We will refer to the value of Account as YOUR_AWS_ACCOUNT_ID in step 3 (in this example, YOUR_AWS_ACCOUNT_ID="12344455555").
2- Once you have this identity, you have to add it to the cluster's aws-auth mapping to get EKS permissions.
3- You will need to edit the ConfigMap used by the cluster to add your user: kubectl edit -n kube-system configmap/aws-auth
In the editor that opens, pick a username you want to use to refer to yourself within the cluster, YOUR_USER_NAME (for simplicity you may use the same name as your AWS user, e.g. Toto from step 1); you will need it in step 4. Also use the AWS account ID you found in your identity info in step 1, YOUR_AWS_ACCOUNT_ID (don't forget to keep the quotes ""), as follows in the mapUsers and mapAccounts sections.
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/ops-user
      username: YOUR_USER_NAME
      groups:
        - system:masters
  mapAccounts: |
    - "YOUR_AWS_ACCOUNT_ID"
4- Finally, you need to create a role binding on the Kubernetes cluster for the user specified in the ConfigMap:
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user YOUR_USER_NAME
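After saving the ConfigMap and creating the binding, a quick way to verify the new permissions from your own session (a sketch, assuming your kubeconfig already points at the cluster):
kubectl auth can-i '*' '*'
kubectl get nodes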

aws kops create cluster errors out as InvalidClientTokenId

I am trying to deploy my application on Kubernetes using kops on AWS. For this I followed the steps given in the AWS workshop tutorial:
https://github.com/aws-samples/aws-workshop-for-kubernetes/tree/master/01-path-basics/101-start-here
I created an AWS Cloud9 environment by logging in as an IAM user and installed kops and the other required software as well. When I try to create the cluster using the following command
kops create cluster --name cs.cluster.k8s.local --zones $AWS_AVAILABILITY_ZONES --yes
I get an error like the one below in the Cloud9 IDE:
error running tasks: deadline exceeded executing task IAMRole/nodes.cs.cluster.k8s.local. Example error: error creating IAMRole: InvalidClientTokenId: The security token included in the request is invalid
status code: 403, request id: 30fe2a97-0fc4-11e8-8c48-0f8441e73bc3
I am not able to find a way to solve this issue. Any help would be appreciated.
I found the issue and fixed it.
I had not exported the following two environment variables in the terminal where I was running kops create cluster. These two variables are required when creating a cluster using kops:
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
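A quick sanity check before re-running kops create cluster is to confirm the exported credentials are picked up and valid:
aws sts get-caller-identity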