my first EKS - error: You must be logged in to the server (Unauthorized) - kubectl

I've seen this error in many previous questions, but as a complete beginner I'm not sure if those questions were relevant to my situation.
I've created a cluster called wiz_try using the AWS EKS GUI.
When it asked for a role, I created a new one with the AmazonEKSClusterPolicy EKS policy attached.
I downloaded the AWS CLI and configured it, and also downloaded kubectl.
Checked that I can see the cluster:
aws eks --region eu-central-1 describe-cluster --name wiz_try --query cluster.status
ACTIVE
ran
aws eks update-kubeconfig --name wiz_try --region eu-central-1
this has created a config file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: *******
    server: https://***.gr7.eu-central-1.eks.amazonaws.com
  name: arn:aws:eks:eu-central-1:399222411859:cluster/wiz_try
contexts:
- context:
    cluster: arn:aws:eks:eu-central-1:399222411859:cluster/wiz_try
    user: arn:aws:eks:eu-central-1:399222411859:cluster/wiz_try
  name: arn:aws:eks:eu-central-1:399222411859:cluster/wiz_try
current-context: arn:aws:eks:eu-central-1:399222411859:cluster/wiz_try
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-central-1:399222411859:cluster/wiz_try
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - eu-central-1
      - eks
      - get-token
      - --cluster-name
      - wiz_try
      command: aws
kubectl get svc
error: You must be logged in to the server (Unauthorized)
I tried solving by running
aws eks update-kubeconfig --name wiz_try --region eu-central-1 --role-arn arn:aws:iam::***:role/EKS_cluster_role
as this is the role I created, but this is not helping.
what am I missing?
Thanks

Related

Where can I view service account created by `eksctl`?

I created an EKS cluster in AWS and used this command to create a service account: eksctl create iamserviceaccount --name alb-ingress-controller --cluster $componentName --attach-policy-arn $serviceRoleArn --approve --override-existing-serviceaccounts.
The output of the command is:
[ℹ] using region ap-southeast-2
[ℹ] 1 existing iamserviceaccount(s) (default/alb-ingress-controller) will be excluded
[ℹ] 1 iamserviceaccount (default/alb-ingress-controller) was excluded (based on the include/exclude rules)
[!] metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
[ℹ] no tasks
I am not sure whether it is created successfully or not.
I use this command eksctl get iamserviceaccount to verify the result but get an error response:
Error: getting iamserviceaccounts: no output "Role1" in stack "eksctl-monitor-addon-iamserviceaccount-default-alb-ingress-controller"
I also tried to run kubectl get serviceaccount but I got the error: Error from server (NotFound): serviceaccounts "alb-ingress-controller" not found.
Does this mean the service account failed to be created? Where can I view the service account in the AWS console? Or where can I view the error?
As per the error, the service account already exists.
To get the service account, use kubectl:
kubectl get serviceaccount <SERVICE_ACCOUNT_NAME> -n kube-system -o yaml
The order is: create the IAM role first, and after that the RBAC Role and binding.
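If you still need that Kubernetes-side binding, here is a minimal sketch; the edit ClusterRole is only a placeholder for whatever permissions your controller actually needs:
kubectl create rolebinding alb-ingress-controller-binding \
  --clusterrole=edit \
  --serviceaccount=kube-system:alb-ingress-controller \
  --namespace kube-system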
Below is the command in case you want to override the existing service account:
eksctl --profile <PROFILE_NAME> \
--region=ap-northeast-2 \
create iamserviceaccount \
--name alb-ingress-controller \
--namespace kube-system \
--override-existing-serviceaccounts \
--approve --cluster <CLUSTER_NAME> \
--attach-policy-arn \
arn:aws:iam::ACCOUNT_ID:policy/ALBIngressControllerIAMPolicy
I found this workshop Amazon EKS Workshop very helpful during my venture into EKS.
More information pertaining to ALB can be found here
EDIT
From this line of the output:
[ℹ] 1 existing iamserviceaccount(s) (default/alb-ingress-controller) will be excluded
it seems the service account was created inside the default namespace,
so the command to check the service account is:
kubectl get serviceaccount <SERVICE_ACCOUNT_NAME> -n default -o yaml
eksctl uses CloudFormation to create the resources, so you will probably find the cause of the error there:
Go to the CloudFormation console in AWS.
Find the stack named eksctl-[CLUSTER NAME]-addon-iamserviceaccount-default-[SERVICE ACCOUNT NAME]; it should have the ROLLBACK_COMPLETE status.
Select the "Events" tab and scroll to the first error.
In my case, the cause was a missing policy that I was attaching to the role.
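If you prefer the CLI to the console, the same failure reason can be pulled with describe-stack-events; the stack name below is just the pattern from above, so adjust it to yours:
aws cloudformation describe-stack-events \
  --stack-name eksctl-[CLUSTER NAME]-addon-iamserviceaccount-default-[SERVICE ACCOUNT NAME] \
  --query 'StackEvents[?ResourceStatus==`CREATE_FAILED`].[LogicalResourceId,ResourceStatusReason]' \
  --output table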
Works as expected, thanks @samtoddler! 😎
1 Create the IAM policy for the IAM Role 👏
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.3.0/docs/install/iam_policy.json
aws-vault exec Spryker-Humanetic-POC -- aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json
2 Create the IAM role and attach it to the newly created ServiceAccount 👏
eksctl create iamserviceaccount \
--cluster education-eks-7yby62S7 \
--namespace kube-system \
--name aws-load-balancer-controller \
--attach-policy-arn arn:aws:iam::ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy \
--approve
3.1 Verify # 1 that ServiceAccount lives in --namespace kube-system 👏
kubectl get sa aws-load-balancer-controller -n kube-system -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/eksctl-education-eks-7yby62S7-addon-iamservi-Role1-126OXTBKF3WBM
  creationTimestamp: "2021-12-12T17:38:48Z"
  labels:
    app.kubernetes.io/managed-by: eksctl
  name: aws-load-balancer-controller
  namespace: kube-system
  resourceVersion: "686442"
  uid: 895f6f34-ab04-4bca-aeac-1b6b75766546
secrets:
- name: aws-load-balancer-controller-token-gcd5c
3.2 Verify #2 👏
kubectl get sa aws-load-balancer-controller -n kube-system
NAME SECRETS AGE
aws-load-balancer-controller 1 123m
Hope it will help! 🧗🏼‍♀️

Access remote EKS cluster from an EKS pod, using an assumed IAM role

I've gone over this guide to allow one of the pods running on my EKS cluster to access a remote EKS cluster using kubectl.
I'm currently running a pod using amazon/aws-cli inside my cluster, mounting a service account token which allows me to assume an IAM role configured with kubernetes RBAC according to the guide above. I've made sure that the role is correctly assumed by running aws sts get-caller-identity and this is indeed the case.
I've now installed kubectl and configured kube/config like so -
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <redacted>
    server: <redacted>
  name: <cluster-arn>
contexts:
- context:
    cluster: <cluster-arn>
    user: <cluster-arn>
  name: <cluster-arn>
current-context: <cluster-arn>
kind: Config
preferences: {}
users:
- name: <cluster-arn>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      - --role
      - <role-arn>
      command: aws
      env: null
However, every operation I try to carry out using kubectl results in this error -
error: You must be logged in to the server (Unauthorized)
I've no idea what I misconfigured, and would appreciate any ideas on how to get a more verbose error message.
If the AWS CLI is already using the identity of the role you want, then there is no need to specify --role and <role-arn> in the kubeconfig args.
By leaving them in, the role from aws sts get-caller-identity will need sts:AssumeRole permissions for the role <role-arn>. If they are the same, then the role needs to be able to assume itself, which is redundant.
So I'd try removing those args from the kubeconfig and see if that helps.
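As a sketch of that, regenerate the kubeconfig without the role; the region is the one from the config above and the cluster name is a placeholder:
aws eks update-kubeconfig --region us-east-2 --name <cluster-name>
kubectl get ns    # retry using the pod's already-assumed identity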

kubectl error You must be logged in to the server (Unauthorized) - EKS cluster

I am new to EKS and Kubernetes -
Here is what happened
An EKS cluster was created with a specific IAM role
When trying to connect to the cluster with kubectl commands it was throwing
error You must be logged in to the server (Unauthorized)
I followed the steps detailed here
https://aws.amazon.com/premiumsupport/knowledge-center/amazon-eks-cluster-access/
Assumed the role that created the EKS cluster
Exported them to new profile dev in aws credentials
Ran AWS_PROFILE=dev kubectl get nodes. It was able to list all my nodes.
Note: I had already run aws eks --region <region> update-kubeconfig --name <cluster-name>
Now I tried to add the role/SAML User that is trying to access the EKS cluster by applying the configmap as below and ran AWS_PROFILE=dev kubectl apply -f aws-auth.yaml
aws-auth.yaml being
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:sts::******:assumed-role/aws_dev/abc@def.com
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
Notice the role ARN is a SAML user that has assumed the aws_dev role and is trying to connect to the cluster.
Once this was applied, the response was configmap/aws-auth configured
I now tried to execute kubectl get nodes without the AWS_PROFILE=dev and it fails again with error You must be logged in to the server (Unauthorized).
I also executed AWS_PROFILE=dev kubectl get nodes which previously worked but fails now.
I am guessing the aws-auth information got messed up. Is there a way to revert the kubectl apply that was done above?
Any kubectl command fails now. What might be happening? How can I rectify this?
You get an authorization error when your AWS Identity and Access Management (IAM) entity isn't authorized by the role-based access control (RBAC) configuration of the Amazon EKS cluster. This happens when the Amazon EKS cluster is created by an IAM user or role that's different from the one used by aws-iam-authenticator.
Check the resolution here.
kubectl error You must be logged in to the server (Unauthorized) when accessing EKS cluster
Recreate the cluster and when you get to step 6 in the link add a second role (or user) to your aws-auth.yaml, like this:
Get ConfigMap with kubectl get cm -n kube-system aws-auth -o yaml
Add your role as a second item to the ConfigMap (don't change the first one):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:sts::******:assumed-role/aws_dev/abc@def.com
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    ### Add only this (assuming you're using a role)
    - rolearn: <ARN of your IAM role>
      username: <any name>
      groups:
        - system:masters
Run AWS_PROFILE=dev kubectl apply -f aws-auth.yaml
Then get the kubeconfig with your temporary IAM role credentials with aws eks --region <region> update-kubeconfig --name <cluster-name>
You probably changed the aws-auth config.
Generally, when you create a cluster, the user (or role) who created it has admin rights; when you switch users, you need to add them to the config (done as the user who created the cluster).
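A quick sanity check, assuming the dev profile from the question still maps to the cluster creator:
AWS_PROFILE=dev aws sts get-caller-identity                           # identity kubectl will authenticate as
AWS_PROFILE=dev kubectl describe configmap -n kube-system aws-auth    # what the cluster currently maps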

EKS AWS: Can't connect Worker Node

I am very stuck on the Launching worker nodes step of the AWS EKS guide. And to be honest, at this point, I don't know what's wrong.
When I do kubectl get svc, I get my cluster so that's good news.
I have this in my aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::Account:role/rolename
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
Here is my config in .kube
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERTIFICATE
    server: server
  name: arn:aws:eks:region:account:cluster/clustername
contexts:
- context:
    cluster: arn:aws:eks:region:account:cluster/clustername
    user: arn:aws:eks:region:account:cluster/clustername
  name: arn:aws:eks:region:account:cluster/clustername
current-context: arn:aws:eks:region:account:cluster/clustername
kind: Config
preferences: {}
users:
- name: arn:aws:eks:region:account:cluster/clustername
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - clustername
      command: aws-iam-authenticator.exe
I have launched an EC2 instance with the advised AMI.
Some things to note :
I launched my cluster with the CLI,
I created some Key Pair,
I am not using the Cloudformation Stack,
I attached these policies to my EC2 instance's role: AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly, AmazonEKSWorkerNodePolicy.
It is my first attempt at kubernetes and EKS, so please keep that in mind :). Thanks for your help!
Your config file and auth file look right. Maybe there is some issue with the security group assignments? Can you share the exact steps that you followed to create the cluster and the worker nodes?
And any special reason why you had to use the CLI instead of the console? I mean if it's your first attempt at EKS, then you should probably try to set up a cluster using the console at least once.
Sometimes, for whatever reason, the aws-auth ConfigMap is not applied automatically, so we need to add it manually. I had this issue, so I'm leaving this here in case it helps someone.
Check to see if you have already applied the aws-auth ConfigMap.
kubectl describe configmap -n kube-system aws-auth
If you receive an error stating "Error from server (NotFound): configmaps "aws-auth" not found", then proceed
Download the configuration map.
curl -o aws-auth-cm.yaml https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/aws-auth-cm.yaml
Open the file with your favorite text editor. Replace <ARN of instance role (not instance profile)> with the Amazon Resource Name (ARN) of the IAM role associated with your nodes, and save the file.
Apply the configuration.
kubectl apply -f aws-auth-cm.yaml
Watch the status of your nodes and wait for them to reach the Ready status.
kubectl get nodes --watch
You can also go to your aws console and find the worker node being added.
Find more info here
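If the nodes never show up, it is also worth confirming that the role ARN in aws-auth-cm.yaml is really the one attached to the worker instance; a rough check against a hypothetical instance ID:
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].IamInstanceProfile.Arn' --output text
# the above returns the instance *profile* ARN; the ConfigMap needs the role it wraps
aws iam get-instance-profile --instance-profile-name <profile-name> \
  --query 'InstanceProfile.Roles[].Arn' --output text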

kubectl error You must be logged in to the server (Unauthorized) when accessing EKS cluster

I have been trying to follow the getting started guide to EKS.
When I tried to call kubectl get service I got the message: error: You must be logged in to the server (Unauthorized)
Here is what I did:
1. Created the EKS cluster.
2. Created the config file as follows:
apiVersion: v1
clusters:
- cluster:
    server: https://*********.yl4.us-west-2.eks.amazonaws.com
    certificate-authority-data: *********
  name: *********
contexts:
- context:
    cluster: *********
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
      - "token"
      - "-i"
      - "*********"
      - "-r"
      - "arn:aws:iam::*****:role/******"
Downloaded and installed the latest AWS CLI
Ran aws configure and set the credentials for my IAM user and the region as us-west-2
Added a policy to the IAM user for sts:AssumeRole for the EKS role and set it up as a trusted relationship
Set up kubectl to use the config file
I can get a token when I run heptio-authenticator-aws token -r arn:aws:iam::**********:role/********* -i my-cluster-name
However when I try to access the cluster I keep receiving error: You must be logged in to the server (Unauthorized)
Any idea how to fix this issue?
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator. Initially, only that IAM user can make calls to the Kubernetes API server using kubectl.
eks-docs
So, to give other AWS users access, first
you must edit the ConfigMap to add the IAM user or role to the Amazon EKS cluster.
You can edit the ConfigMap by executing:
kubectl edit -n kube-system configmap/aws-auth, after which you will be presented with an editor in which you map the new users.
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/ops-user
      username: ops-user
      groups:
        - system:masters
  mapAccounts: |
    - "111122223333"
Pay close attention to mapUsers, where you're adding ops-user, together with the mapAccounts label, which maps the AWS user account to a username on the Kubernetes cluster.
However, no permissions are provided in RBAC by this action alone; you must still create role bindings in your cluster to provide these entities permissions.
As the Amazon documentation (iam-docs) states, you need to create a role binding on the Kubernetes cluster for the user specified in the ConfigMap. You can do that by executing the following command (kub-docs):
kubectl create clusterrolebinding ops-user-cluster-admin-binding --clusterrole=cluster-admin --user=ops-user
which grants the cluster-admin ClusterRole to a user named ops-user across the entire cluster.
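If you prefer to keep that binding as a manifest under version control, an equivalent sketch:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ops-user-cluster-admin-binding
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ops-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin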
I am sure the issue has been resolved by now, but I will put more information here so that anyone still facing an issue related to any of the setups below can use these steps.
When we create an EKS cluster by any method (CloudFormation, CLI, eksctl), the IAM role/user who created the cluster is automatically bound to the default Kubernetes RBAC group system:masters (https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles), and in this way the creator of the cluster gets admin access to it. We can always give access to other IAM users/roles using the aws-auth file, but to do that we must use the IAM user/role that created the cluster.
To verify which role/user created the EKS cluster, we can search for the "CreateCluster" API call in CloudTrail; it will show the creator of the cluster in the sessionIssuer section, in the arn field (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events.html).
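For example, a rough CLI equivalent of that CloudTrail search (it only works within CloudTrail's 90-day event history):
aws cloudtrail lookup-events --region <cluster-region> \
  --lookup-attributes AttributeKey=EventName,AttributeValue=CreateCluster \
  --max-items 5 --query 'Events[].CloudTrailEvent' --output text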
Setting up access to the EKS cluster is a little trickier when the cluster was created using an IAM role rather than an IAM user.
Below are the steps to follow for each method of setting up access to the EKS cluster.
Scenario-1: Cluster was Created using the IAM user (For example "eks-user")
Confirm that the credentials of the IAM user who created the cluster are set properly on the AWS CLI by running the command aws sts get-caller-identity:
$ aws sts get-caller-identity
{
"Account": "xxxxxxxxxxxx",
"UserId": "xxxxxxxxxxxxxxxxxxxxx",
"Arn": "arn:aws:iam::xxxxxxxxxxx:user/eks-user"
}
After that, update the kubeconfig file using the command below:
aws eks --region region-code update-kubeconfig --name cluster_name
Here is how the config file looks once updated via the above command. Please do not edit this file directly unless necessary.
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERT
    server: https://xxxxxxx.sk1.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
    user: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
  name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
current-context: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - eks-cluster
      command: aws
Once the above setup is done, you should be able to run kubectl commands.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP xxx.xx.x.x <none> 443/TCP 12d
Scenario-2: Cluster was Created using the IAM Role (For example "eks-role")
There are mainly four different ways to set up access via the CLI when the cluster was created via an IAM role.
1. Setting up the role directly in the kubeconfig file.
In this case we do not have to make any assume-role API call via the CLI manually before running kubectl, because that is done automatically by aws/aws-iam-authenticator as configured in the kubeconfig file.
Let's say we are trying to set up access for the user eks-user. First, make sure the user has permission to assume the role eks-role.
Add the assume-role permission to eks-user:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::xxxxxxxxxxx:role/eks-role"
    }
  ]
}
Edit the trust relationship on the role so that it will allow the eks-user to assume the role.
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::xxxxxxxxxxx:user/eks-user"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Confirm that the IAM user's credentials are set properly on the AWS CLI by running the command aws sts get-caller-identity. An important thing to remember: it should show the IAM user ARN, not the assumed-role ARN.
$ aws sts get-caller-identity
{
"Account": "xxxxxxxxxxxx",
"UserId": "xxxxxxxxxxxxxxxxxxxxx",
"Arn": "arn:aws:iam::xxxxxxxxxxx:user/eks-user"
}
After that, update the kubeconfig file using the command below:
aws eks --region region-code update-kubeconfig --name cluster_name --role-arn arn:aws:iam::xxxxxxxxxxx:role/eks-role
Here is how the config file looks once updated via the above command. Please do not edit this file directly unless necessary.
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERT
    server: https://xxxxxxx.sk1.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
    user: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
  name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
current-context: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - eks-cluster
      - --role
      - arn:aws:iam::xxxxxxx:role/eks-role
      command: aws
Once the above setup is done, you should be able to run kubectl commands.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP xxx.xx.x.x <none> 443/TCP 12d
2. If you have set up an AWS profile (https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html) on the CLI and want to use it with the kubeconfig.
Confirm that the profile is set properly so that it uses the credentials of eks-user:
$ cat ~/.aws/config
[default]
output = json
region = us-east-1
[eks]
output = json
region = us-east-1
[profile adminrole]
role_arn = arn:aws:iam::############:role/eks-role
source_profile = eks
$ cat ~/.aws/credentials
[default]
aws_access_key_id = xxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[eks]
aws_access_key_id = xxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Once this profile configuration is done, confirm that it is working by running the command aws sts get-caller-identity --profile eks:
$ aws sts get-caller-identity --profile eks
{
"Account": "xxxxxxxxxxxx",
"UserId": "xxxxxxxxxxxxxxxxxxxxx",
"Arn": "arn:aws:iam::xxxxxxxxxxx:user/eks-user"
}
After that, update the kubeconfig file using the command below with the profile; note that we are not using the role here.
aws eks update-kubeconfig --name devel --profile eks
Here is how the config file looks once updated via the above command. Please do not edit this file directly unless necessary.
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERT
    server: https://xxxxx.sk1.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
    user: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
  name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
current-context: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - eks-cluster
      command: aws
      env:
      - name: AWS_PROFILE
        value: eks
Once the above setup is done, you should be able to run kubectl commands.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP xxx.xx.x.x <none> 443/TCP 12d
3. Assume the role in any other way; for example, we can attach the IAM role directly to the instance.
If the role is attached directly to the instance profile, then we can follow steps similar to those for setting up access for an IAM user in Scenario 1.
Verify that we have attached the correct role to the EC2 instance; as the instance profile has the lowest precedence, this step also verifies that no other credentials are set up on the instance.
[ec2-user@ip-xx-xxx-xx-252 ~]$ aws sts get-caller-identity
{
"Account": "xxxxxxxxxxxx",
"UserId": "xxxxxxxxxxxxxxxxxxxxx:i-xxxxxxxxxxx",
"Arn": "arn:aws:sts::xxxxxxxxxxxx:assumed-role/eks-role/i-xxxxxxxxxxx"
}
After that, update the kubeconfig file using the command below:
aws eks --region region-code update-kubeconfig --name cluster_name
Here is how the config file looks once updated via the above command. Please do not edit this file directly unless necessary.
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: CERT
server: https://xxxxxxx.sk1.us-east-1.eks.amazonaws.com
name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
contexts:
- context:
cluster: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
user: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
current-context: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- --region
- us-east-1
- eks
- get-token
- --cluster-name
- eks-cluster
command: aws
Once the above setup is done, you should be able to run kubectl commands.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP xxx.xx.x.x <none> 443/TCP 12d
4. Manually assuming the IAM role via the aws sts assume-role command.
Assume the role eks-role manually by running the CLI command:
aws sts assume-role --role-arn arn:aws:iam::xxxxxxxxxxx:role/eks-role --role-session-name test
{
"AssumedRoleUser": {
"AssumedRoleId": "xxxxxxxxxxxxxxxxxxxx:test",
"Arn": "arn:aws:sts::xxxxxxxxxxx:assumed-role/eks-role/test"
},
"Credentials": {
"SecretAccessKey": "xxxxxxxxxx",
"SessionToken": xxxxxxxxxxx",
"Expiration": "xxxxxxxxx",
"AccessKeyId": "xxxxxxxxxx"
}
}
After that, set the required environment variables using the values from the above output so that we use the credentials generated for the session.
export AWS_ACCESS_KEY_ID=xxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxx
export AWS_SESSION_TOKEN=xxxxxxxxxx
After that, verify that we have assumed the IAM role by running the command aws sts get-caller-identity.
aws sts get-caller-identity
{
"Account": "xxxxxxxxxx",
"UserId": "xxxxxxxxxx:test",
"Arn": "arn:aws:sts::xxxxxxxxxx:assumed-role/eks-role/test"
}
After that, update the kubeconfig file using the command below:
aws eks --region region-code update-kubeconfig --name cluster_name
Here is how the config file looks once updated via the above command. Please do not edit this file directly unless necessary.
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: CERT
server: https://xxxxxxx.sk1.us-east-1.eks.amazonaws.com
name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
contexts:
- context:
cluster: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
user: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
current-context: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- --region
- us-east-1
- eks
- get-token
- --cluster-name
- eks-cluster
command: aws
Once the above setup is done, you should be able to run kubectl commands.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP xxx.xx.x.x <none> 443/TCP 12d
NOTE:
I have tried to cover the major use cases here, but there might be other cases where we need to set up access to the cluster.
Also, the above steps mainly target the first-time setup of the EKS cluster, and none of the methods above touch the aws-auth ConfigMap yet.
But once you have given other IAM users/roles access to the EKS cluster via the aws-auth file (https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html), you can use the same set of commands for those users too, as mentioned in the answer above.
Update: If you are using SSO, the setup will be pretty much the same. One thing to consider, either with SSO or when using a role directly: if you are mapping a path-based role in the ConfigMap, you have to strip the path from the role ARN. For example, instead of arn:aws:iam::xxxxxxxxxxx:role/path-1/subpath-1/eks-role, use arn:aws:iam::xxxxxxxxxxx:role/eks-role. We strip /path-1/subpath-1 because when we run a kubectl command it first makes an AssumeRole API call, and the assumed-role ARN does not contain the path; if we include the path, EKS will deny those requests.
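As a sketch of that ConfigMap entry with the path stripped (role name, account, and username are placeholders):
mapRoles: |
  - rolearn: arn:aws:iam::xxxxxxxxxxx:role/eks-role     # not .../role/path-1/subpath-1/eks-role
    username: eks-admin
    groups:
      - system:masters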
If you are using eksctl to manage your AWS EKS deployments, you can add the user to the ConfigMap with one command:
eksctl create iamidentitymapping --cluster <cluster-name> --arn arn:aws:iam::<id>:user/<user-name> --group system:masters --username ops-user
I commented out the last two lines of the config file
# - "-r"
# - "arn:aws:iam::**********:role/**********"
and it worked though I have no idea why
You need to create the cluster under the same IAM profile that you are accessing it from via the AWS CLI.
Said another way, inside ~/.aws/credentials, the profile that is accessing kubectl must match exactly the IAM identity that was used to create the cluster.
My recommendation is to use the AWS CLI to create your clusters, as creating them from the GUI may be more confusing than helpful. The Getting Started guide is your best bet to get up and running.
Also, make sure your users are in the aws-auth k8s ConfigMap:
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
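A quick check before digging further: confirm which identity the CLI (and therefore kubectl's exec credential plugin) is actually using, and retry with the creator's profile if it differs (the profile name here is hypothetical):
aws sts get-caller-identity
AWS_PROFILE=<cluster-creator-profile> kubectl get svc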
I had the same problem. It's likely that you are using a root account. It appears root accounts are blocked from assuming the required roles. This error can sometimes be cloaked if you are using expired keys.
I had the same problem; my AWS credentials for the CLI change frequently. These steps fixed the problem:
export AWS_ACCESS_KEY_ID="***************"
export AWS_SECRET_ACCESS_KEY="*************"
export AWS_SESSION_TOKEN="************************"
If you have created the EKS cluster with kops, then all you need to do is update your kubeconfig with the following kops command:
kops export kubecfg --admin
If you have exhausted all of the above solutions and are still getting the same error, make sure kubectl is actually using the AWS credentials you think it is.
I made a very stupid mistake: I had environment variables set up for different accounts, and kubectl always seems to pick up the ones from AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY. The precedence of options on this page may help: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
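A minimal way to check and clear that, assuming the stray credentials live only in the environment:
env | grep '^AWS_'                     # see which AWS_* variables are set
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
aws sts get-caller-identity            # should now reflect ~/.aws/credentials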
This also happened to me with a local minikube environment, independently of EKS. My problem is related to this issue: https://github.com/kubernetes/kubernetes/issues/76774
The solution I adopted was to remove kubectl's cache directories: rm -rf ~/.kube/{cache,http-cache}. I guess it's the only workaround at the time of writing.
In my case it was an AWS profile issue; be sure to use aws sts get-caller-identity to verify the IAM user.
I just debugged this issue. I have a question. Are you running this on a corporate wifi network? If yes, could you create an EC2 instance and then test if you are able to do kubectl get svc?
Also, try if this command works
kubectl get svc --insecure-skip-tls-verify
The issue for me was that I had set up environment variables with different, invalid AWS credentials (maybe a long time ago, and forgot). I realised this after running aws configure list and seeing that the credentials were different from what I expected with aws configure list --profile default. Finding and deleting those invalid environment variables fixed the issue; now I can run kubectl get svc.
The issue is with the policies added to the roles that were created. You must have the CNI policy (AmazonEKS_CNI_Policy) attached.
It is better to create the EKS cluster using the command below:
eksctl create cluster --name ekscluster --version 1.19 --with-oidc \
  --vpc-public-subnets=subnet-08c6b0b0166abc1d1,subnet-02822a142bb5a802a \
  --vpc-private-subnets=subnet-09bbf4871902ee64c,subnet-0926c224909b5a811
This will create and assign the policy automatically using cloudformation.
In my case, I had replaced a person who worked as a DevOps engineer and had created all the infrastructure on AWS. He had his own IAM user (so no, not the root-account case, though that could also apply) with all AWS permissions, and I had my own IAM user, also with all available AWS permissions.
He left the company, and that was the problem: I didn't have any access to the cluster, which by default is granted to the creator only... and in fact all of my attempts to get access to the cluster failed, despite the fact that I had all permissions.
The good thing was that the person I replaced still had his IAM user available (not removed). What I did was simply generate a new AWS access key pair under his account and set it as my default AWS credentials on my Ubuntu host (from which I was trying to access the cluster). The important part was to make sure that after running aws sts get-caller-identity, it was HIS account that appeared in the output. After that I was able to run all the kubectl commands I wanted without the error You must be logged in to the server (Unauthorized) message.
So the solution is: be lucky enough to find the cluster creator's credentials and use them! (Sounds like a crime, however...)
I had the same issue.
Refer the answers:
Cannot create namespaces in AWS Elastic Kubernetes Service - Forbidden
User cannot log into EKS Cluster using kubectl
The correct way is to:
Create an IAM Group with the following permissions.
1. AmazonEKSClusterPolicy
2. AmazonEKSWorkerNodePolicy
3. AmazonEC2ContainerRegistryReadOnly
4. AmazonEKS_CNI_Policy
5. AmazonElasticContainerRegistryPublicReadOnly
6. EC2InstanceProfileForImageBuilderECRContainerBuilds
7. AmazonElasticContainerRegistryPublicFullAccess
8. AWSAppRunnerServicePolicyForECRAccess
9. AmazonElasticContainerRegistryPublicPowerUser
10. SecretsManagerReadWrite
Create an IAM user (an AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY will be provided) and add the user to the IAM Group created above.
Next, log in to the AWS Console as the IAM user and create the EKS Cluster.
Next, use the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to set up the AWS CLI on the local machine.
Next, run the following commands on the local machine:
1. aws sts get-caller-identity
2. aws eks describe-cluster --name [cluster-name] --region [aws-region] --query cluster.status (To check the status of the Cluster)
3. aws eks update-kubeconfig --name [cluster-name] --region [aws-region]
After this, you will be able to run the kubectl commands.
If you need to add additional users to the EKS Cluster, create the additional IAM user, add the user to the IAM Group in AWS. Next, log into the EKS cluster as the original IAM user and run: kubectl edit -n kube-system configmap/aws-auth.
Add the following block of code to the existing configuration:
(Make sure to add changes to the ARN and username)
mapUsers: |
  - userarn: arn:aws:iam::111122223333:user/admin
    username: admin
    groups:
      - system:masters
Refer the following link to understand this better: https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
Hope this becomes useful.
Happy learning!
In addition to the great answers that have already been given, I would like to add a good way to troubleshoot issues. In my case, I was trying to run kubectl in an ECS task as part of an AWS pipeline, and kubectl version was failing with the "You must be logged in to the server" message.
What was happening was that the ECS task was assuming a service role, and then aws eks update-kubeconfig --name $EKS_CLUSTER_NAME --region $AWS_DEFAULT_REGION was being run. It turns out the service role, CodeBuildServiceRole, was not mapped to an RBAC user via a clusterrolebinding in the EKS cluster, and the aws-iam-authenticator EKS service was denying access to the AWS service account (or something like that).
What helped me figure out what was going on in this case was to go to CloudWatch → Log Insights and run the following query against the /aws/eks/<cluster-name>/cluster log group:
fields @timestamp, @message
| sort @timestamp desc
| filter @logStream like /authenticator/
| filter @message like /access denied/
| limit 50
It was clear from the log output that the authenticator service was expecting an ARN in configmap/aws-auth that was lowercase: arn:aws:iam::123456789:role/codebuildservicerole. Once I fixed the case of the ARN in the configmap, kubectl version started working in the ECS task.
$ kubectl edit -n kube-system configmap/aws-auth
I had the same problem and the fix was to remove an alias I had added when setting up minikube. Their docs say to add this alias:
alias kubectl="minikube kubectl --"
Which breaks some things like Azure integrations, as you would expect.
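If you hit the same thing, a quick check and fix for the current shell (assuming the alias is the only problem):
type kubectl          # shows whether kubectl is an alias or the real binary
unalias kubectl
kubectl version --client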
I got this error when I created the EKS cluster as the root user from the EKS console. I recreated the EKS cluster using an IAM user and used its access keys in aws configure. It worked. Now you can add additional IAM users to issue kubectl commands.
I was trying to create an EKS cluster with a private endpoint. Read this thread several times and the thing that worked for me:
Created a user: admin (with access to the console)
Logged into the console using that user
Created the cluster using the admin user
Created my kube/config file using aws eks (aws eks --region update-kubeconfig --name )
Done (kubectl get ns)
The last 2 commands were executed from an EC2 instance that is part of the same VPC.
I faced the same issue. I configured the AWS CLI directly with an access key and secret key and it worked.
There seems to be a bug in not being able to assume the role. Try setting up the CLI directly and test.
I followed these docs. It took some time to understand things,
but I finally implemented everything easily and with full understanding.
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
When invoking eksctl with an assumed role through SSO, the following steps got me access to the cluster:
eksctl.exe create cluster --name test --without-nodegroup --profile <your_profile>
Update the kubeconfig:
eksctl utils write-kubeconfig --cluster=test --profile=<your_profile>
Enable CloudWatch logging to get the IAM role ARN:
eksctl utils update-cluster-logging --enable-types=all --cluster=test --approve --profile=<your_profile>
Run "kubectl get pods -A". The server responds with the "error: You must be logged in to the server (Unauthorized)" message.
Switch to the console and get the role ARN from the CloudWatch log group's audit log.
Update the aws-auth ConfigMap by mapping the derived ARN to the system:masters group. The username is the name field under users in your current kubeconfig context, extracted as below:
user_id=$(kubectl config view --minify -o json | jq -r '.users[0].name')
eksctl create iamidentitymapping --cluster test --group system:masters --username ${user_id} --arn arn:aws:iam::<account_id>:role/<assumed_role> --profile <your_profile>
For me, adding the user in a single line like below worked:
kubectl edit -n kube-system configmap/aws-auth
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::******:role/eksctl-atoa-microservices-nodegro-NodeInstanceRole-346C48Q1W7OB
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: "- groups:\n  - system:masters\n  userarn: arn:aws:iam::*****:user/<username>
    \ \n  username: <username>\n"