I created an EKS cluster from an EC2 instance, with my-cluster-role added to the instance profile, using the AWS CLI:
aws eks create-cluster --name my-cluster --role-arn arn:aws:iam::012345678910:role/my-cluster-role --resources-vpc-config subnetIds=subnet-abcd123,subnet-wxyz345,securityGroupIds=sg-123456,endpointPublicAccess=false,endpointPrivateAccess=true
Kubeconfig file:
aws eks --region us-east-1 update-kubeconfig --name my-cluster
But while trying to access Kubernetes resources, I get the error below:
[root@k8s-mgr ~]# kubectl get deployments --all-namespaces
Error from server (Forbidden): deployments.apps is forbidden: User "system:node:i-xxxxxxxx" cannot list resource "deployments" in API group "apps" at the cluster scope
Except for pods and services, no other resource is accessible.
Note that the cluster was created using the role my-cluster-role; as per the documentation, this role should have permission to access the resources.
[root@k8s-mgr ~]# aws sts get-caller-identity
{
    "Account": "012345678910",
    "UserId": "ABCDEFGHIJKKLMNO12PQR:i-xxxxxxxx",
    "Arn": "arn:aws:sts::012345678910:assumed-role/my-cluster-role/i-xxxxxxxx"
}
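For anyone debugging the same symptom, a quick way to see exactly what the identity kubectl presents is allowed to do (standard kubectl, nothing specific to this cluster is assumed):

kubectl auth can-i --list                                 # everything RBAC allows for the current identity
kubectl auth can-i list deployments --all-namespaces      # expected to print "no" given the error above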
Edit:
I tried creating a ClusterRole and ClusterRoleBinding as suggested here: https://stackoverflow.com/a/70125670/7654693
Error:
[root@k8s-mgr]# kubectl apply -f access.yaml
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterroles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRole"
Name: "eks-console-dashboard-full-access-clusterrole", Namespace: ""
from server for: "access.yaml": clusterroles.rbac.authorization.k8s.io "eks-console-dashboard-full-access-clusterrole" is forbidden: User "system:node:i-xxxxxxxx" cannot get resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
Name: "eks-console-dashboard-full-access-binding", Namespace: ""
Below is my Kubeconfig:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERT
    server: SERVER ENDPOINT
  name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
    user: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
  name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
current-context: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      command: aws
Create a cluster role and cluster role binding, or a role and role binding:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks-console-dashboard-full-access-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - namespaces
  - pods
  verbs:
  - get
  - list
- apiGroups:
  - apps
  resources:
  - deployments
  - daemonsets
  - statefulsets
  - replicasets
  verbs:
  - get
  - list
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-console-dashboard-full-access-binding
subjects:
- kind: Group
  name: eks-console-dashboard-full-access-group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: eks-console-dashboard-full-access-clusterrole
  apiGroup: rbac.authorization.k8s.io
You can read more at: https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-object-access-error/
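Note that the ClusterRoleBinding above grants access to the group eks-console-dashboard-full-access-group, so the IAM role you authenticate with must also be mapped to that group in the aws-auth ConfigMap. A minimal sketch with eksctl (the role ARN is taken from the question; the username is an arbitrary placeholder):

eksctl create iamidentitymapping \
  --cluster my-cluster \
  --arn arn:aws:iam::012345678910:role/my-cluster-role \
  --username my-cluster-role \
  --group eks-console-dashboard-full-access-group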
Update role
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERT
    server: SERVER ENDPOINT
  name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
    user: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
  name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
current-context: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      - --role
      - arn:aws:iam::1023456789:role/prod-role-iam-user-EksUserRole-992Y0S0BSVNT
      command: aws
Add the role details to the config:
      - --role
      - arn:aws:iam::1023456789:role/prod-role-iam-user-EksUserRole-992Y0S0BSVNT
      command: aws
      env:
      - name: AWS_PROFILE
        value: my-prod
or else:
      - --role-arn
      - arn:aws:iam::1213:role/eks-cluster-admin-role-dfasf
      command: aws-vault
      env: null
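Whichever variant you use, the credentials running the exec plugin must be allowed to assume the role you pass; a quick sanity check before pointing kubectl at it (role ARN taken from the example above):

aws sts assume-role \
  --role-arn arn:aws:iam::1023456789:role/prod-role-iam-user-EksUserRole-992Y0S0BSVNT \
  --role-session-name kubeconfig-test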
There is apparently a mismatch between the IAM identity that created the cluster and the one taken from your kubeconfig file while authenticating to your EKS cluster. You can tell it from RBAC's error output.
A quote from the aws eks CLI reference:
--role-arn (string) To assume a role for cluster authentication, specify an IAM role ARN with this option. For example, if you created a cluster while assuming an IAM role, then you must also assume that role to connect to the cluster the first time.
Probable solution: update your kubeconfig file accordingly with the command:
aws eks update-kubeconfig --name my-cluster --role-arn arn:aws:iam::012345678910:role/my-cluster-role
Related
I was trying to get started with AWS EKS Kubernetes cluster provisioning using Terraform. I was following the tutorial and got stuck at the step with the kubeconfig file.
After the command terraform output kubeconfig > ~/.kube/config, I should be able to communicate with the cluster. Unfortunately, when I try to run any kubectl command, for example cluster-info, I get the error: error loading config file, yaml: line 4: mapping values are not allowed in this context
This is the code of the outputs.tf file:
# Outputs
#
locals {
  config_map_aws_auth = <<CONFIGMAPAWSAUTH
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${aws_iam_role.demo-node.arn}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
CONFIGMAPAWSAUTH

  kubeconfig = <<KUBECONFIG
apiVersion: v1
clusters:
- cluster:
    server: ${aws_eks_cluster.demo.endpoint}
    certificate-authority-data: ${aws_eks_cluster.demo.certificate_authority[0].data}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "${var.cluster-name}"
KUBECONFIG
}

output "config_map_aws_auth" {
  value = local.config_map_aws_auth
}

output "kubeconfig" {
  value = local.kubeconfig
}
I already saw the post "kubectl error: You must be logged in to the server (Unauthorized) when accessing EKS cluster" and followed some guides from AWS, but still no success.
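For what it's worth, a common cause of that exact parse error is the way newer Terraform versions print multi-line outputs: plain terraform output wraps the string in quotes or <<EOT markers, which is not valid YAML once redirected to a file. On Terraform 0.14 and later, printing the raw value avoids this (a sketch, assuming kubeconfig is a string output as in the snippet above):

terraform output -raw kubeconfig > ~/.kube/config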
I'm creating a CI/CD pipeline. But CodeBuild is apparently not authorized to access the EKS cluster. I went to the specific CodeBuild role and added the following policies:
AWSCodeCommitFullAccess
AmazonEC2ContainerRegistryFullAccess
AmazonS3FullAccess
CloudWatchLogsFullAccess
AWSCodeBuildAdminAccess
Also created and added the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:*",
      "Resource": "*"
    }
  ]
}
Afterwards I executed the following command in the terminal where I created the EKS cluster: eksctl create iamidentitymapping --cluster <my_cluster_name> --arn <arn_from_the_codebuild_role> --group system:masters --username admin
And checked if it successfully added to aws-auth by running the command kubectl get configmaps aws-auth -n kube-system -o yaml. It returned:
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::********:role/*********
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - system:masters
      rolearn: arn:aws:iam::*****:role/service-role/*******
      username: ******
  mapUsers: |
    []
kind: ConfigMap
metadata:
  creationTimestamp: "2021-11-10T07:37:06Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: *******
  uid: *********
Still I get the error that it's unauthorized. Below is the buildspec.yml file:
version: 0.2
run-as: root
phases:
  install:
    commands:
      - echo Installing app dependencies...
      - chmod +x prereqs.sh
      - sh prereqs.sh
      - source ~/.bashrc
      - echo Check kubectl version
      - kubectl version --short --client
  pre_build:
    commands:
      - echo Logging in to Amazon EKS...
      - aws eks --region eu-west-2 update-kubeconfig --name <my-cluster-name>
      - echo Check config
      - kubectl config view
      - echo Check kubectl access
      - kubectl get svc
  post_build:
    commands:
      - echo Push the latest image to cluster
      - kubectl apply -n mattermost-operator -f mattermost-operator.yml
      - kubectl rollout restart -n mattermost-operator -f mattermost-operator.yml
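One thing that helps when debugging this kind of failure is printing which IAM principal CodeBuild actually authenticates as; a sketch of an extra pre_build command (purely diagnostic, not required by the pipeline):

      - aws sts get-caller-identity    # compare this ARN with the entries in aws-auth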
EDIT:
Running the command kubectl config view in CodeBuild returns:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://**********eu-west-2.eks.amazonaws.com
  name: arn:aws:eks:eu-west-2:**********:cluster/<cluster_name>
contexts:
- context:
    cluster: arn:aws:eks:eu-west-2:**********:cluster/<cluster_name>
    user: arn:aws:eks:eu-west-2:**********:cluster/<cluster_name>
  name: arn:aws:eks:eu-west-2:**********:cluster/<cluster_name>
current-context: arn:aws:eks:eu-west-2:**********:cluster/<cluster_name>
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-west-2:**********:cluster/<cluster_name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-west-2
      - eks
      - get-token
      - --cluster-name
      - <cluster_name>
      - --role
      - arn:aws:iam::*********:role/service-role/<codebuild_role>
      command: aws
      env: null
Running the command kubectl config view in the terminal where I created the EKS cluster returns:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: ***********eu-west-2.eks.amazonaws.com
  name: arn:aws:eks:eu-west-2:*******:cluster/<cluster_name>
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: *********eu-west-2.eks.amazonaws.com
  name: <cluster_name>.eu-west-2.eksctl.io
contexts:
- context:
    cluster: arn:aws:eks:eu-west-2:*******:cluster/<cluster_name>
    user: arn:aws:eks:eu-west-2:*******:cluster/<cluster_name>
  name: arn:aws:eks:eu-west-2:*******:cluster/<cluster_name>
- context:
    cluster: <cluster_name>.eu-west-2.eksctl.io
    user: ******@<cluster_name>.eu-west-2.eksctl.io
  name: ******@<cluster_name>.eu-west-2.eksctl.io
current-context: arn:aws:eks:eu-west-2:********:cluster/<cluster_name>
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-west-2:*******:cluster/<cluster_name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-west-2
      - eks
      - get-token
      - --cluster-name
      - <cluster_name>
      command: aws
      env: null
      interactiveMode: IfAvailable
      provideClusterInfo: false
- name: ******@******.eu-west-2.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - <cluster_name>
      command: aws-iam-authenticator
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      - name: AWS_DEFAULT_REGION
        value: eu-west-2
      interactiveMode: IfAvailable
      provideClusterInfo: false
Any ideas, please? :D
GOT IT!
I used the role that CodeBuild created automatically. By creating a new role with the mandatory policies and setting that role on the CodeBuild project, the steps above succeed.
If anyone can further explain this, that would be great!
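A likely explanation, based on the aws-auth documentation: role ARNs in mapRoles cannot include a path, and the role CodeBuild creates automatically lives under /service-role/ (visible in the kubectl config view output above), so that identity mapping never matched. A role created by hand sits directly under role/ and therefore works. A quick way to inspect a role's full ARN (the role name is a placeholder):

aws iam get-role --role-name <codebuild_role> --query 'Role.Arn' --output text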
Please help me understand why I am receiving
error: You must be logged in to the server (Unauthorized)
I have created an empty EKS cluster and manually created the identity mapping for a role:
eksctl create iamidentitymapping \
  --cluster eks-cluster \
  --arn arn:aws:iam::account_number:role/eks_test_full_role \
  --username admin \
  --group system:masters
and the mapping is there:
eksctl get iamidentitymapping --cluster eks-cluster
ARN USERNAME GROUPS
arn:aws:iam::account_number:role/eks_test_full_role admin system:masters
arn:aws:iam::account_number:role/eks-cluster-eks-workers-iam-role system:node:{{EC2PrivateDNSName}} system:bootstrappers,system:nodes
After that, I created an IAM user which can assume the created role, with all required permissions:
aws sts assume-role --role-arn arn:aws:iam::account_number:role/eks_test_full_role --role-session-name test
{
    "Credentials": {
        "AccessKeyId": "",
        "SecretAccessKey": "",
        "SessionToken": "",
        "Expiration": "2021-02-04T10:24:59Z"
    },
    "AssumedRoleUser": {
        "AssumedRoleId": "Role_id:test",
        "Arn": "arn:aws:sts::account_number:assumed-role/eks_test_full_role/test"
    }
}
What step have I missed to make it work?
P.S. My Role and RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: get-pods
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: get-pods-bind
  namespace: dev
subjects:
- kind: User
  name: admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: get-pods
  apiGroup: rbac.authorization.k8s.io
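If the identity mapping is attached to the role but kubectl still authenticates as the base IAM user, the token has to be requested while assuming that role. A minimal sketch (the region is an assumption; cluster and role names are taken from the question):

aws eks update-kubeconfig \
  --region <your-region> \
  --name eks-cluster \
  --role-arn arn:aws:iam::account_number:role/eks_test_full_role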
You will need to edit the aws-auth ConfigMap and allow this different IAM user to access the cluster. Something like below:
1. kubectl edit cm aws-auth -n kube-system
2. Add the config below, editing the username and account-id placeholders. If mapUsers already exists, just add a new entry to it starting with - on the next line:
  mapUsers: |
    - userarn: arn:aws:iam::<account-id>:user/<name of IAM user>
      username: <name of IAM user>
      groups:
        - system:masters
The aim is to create a read-only user for the production namespace of my EKS cluster. However, I see the user has full access. As I am new to EKS and Kubernetes, please let me know the mistake I have made.
I have created an IAM user without any permissions attached. The ARN is: arn:aws:iam::460764xxxxxx:user/eks-prod-readonly-user. Also, I have noted down the access key ID and secret access key -
aws_access_key_id= AKIAWWxxxxxxxxxxxxx
aws_secret_access_key= lAbtyv3zlbAMPxym8Jktal+xxxxxxxxxxxxxxxxxxxx
Then, I have created the production namespace, role, and role binding as follows –
ubuntu@ip-172-16-0-252:~$ sudo kubectl create namespace production
ubuntu@ip-172-16-0-252:~$ cat role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: production
  name: prod-viewer-role
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
ubuntu@ip-172-16-0-252:~$ sudo kubectl apply -f role.yaml
ubuntu@ip-172-16-0-252:~$ cat rolebinding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: prod-viewer-binding
  namespace: production
subjects:
- kind: User
  name: eks-prod-readonly-user
  apiGroup: ""
roleRef:
  kind: Role
  name: prod-viewer-role
  apiGroup: ""
ubuntu@ip-172-16-0-252:~$ sudo kubectl apply -f rolebinding.yaml
Then, we have added the newly created user to the aws-auth configuration map and applied the changes -
ubuntu@ip-172-16-0-252:~$ sudo kubectl -n kube-system get configmap aws-auth -o yaml > aws-auth-configmap.yaml
ubuntu@ip-172-16-0-252:~$ vi aws-auth-configmap.yaml
The following section is added under ‘mapUsers’ –
    - userarn: arn:aws:iam::460764xxxxxx:user/eks-prod-readonly-user
      username: eks-prod-readonly-user
      groups:
        - prod-viewer-role
ubuntu@ip-172-16-0-252:~$ sudo kubectl apply -f aws-auth-configmap.yaml
Now, I include this user's details as a new section inside the AWS credentials file (~/.aws/credentials) so that this user can be authenticated to the Kubernetes API server -
[eksprodreadonlyuser]
aws_access_key_id= AKIAWWxxxxxxxxxxxxx
aws_secret_access_key= lAbtyv3zlbAMPxym8Jktal+xxxxxxxxxxxxxxxxxxxx
region=eu-west-2
output=json
I activate this AWS profile -
ubuntu@ip-172-16-0-252:~$ export AWS_PROFILE="eksprodreadonlyuser"
ubuntu@ip-172-16-0-252:~$ aws sts get-caller-identity
We see the correct user ARN in the output of the get-caller-identity command.
Trying to list pods in the default namespace works. Ideally it should not, as the user is given access to the production namespace only -
ubuntu@ip-172-16-0-252:~$ sudo kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
test-autoscaler-697b95d8b-5wl5d   1/1     Running   0          7d20h
ubuntu@ip-172-16-0-252:~$
Please share pointers to resolve this. Thanks in advance!
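One thing worth checking before touching RBAC (a debugging sketch, not specific to this setup): sudo normally strips environment variables, so AWS_PROFILE never reaches the aws CLI that kubectl's exec plugin runs, and sudo kubectl may still present the cluster creator's identity. Comparing the two callers makes this visible:

aws sts get-caller-identity        # as the exported read-only profile
sudo aws sts get-caller-identity   # as whatever credentials root resolves
kubectl get pods -n production     # run without sudo, with AWS_PROFILE still exported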
Please try first to export all your credentials to the terminal as environment variables instead of using profiles:
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=XXX
export AWS_DEFAULT_REGION=us-east-2
This is just for debugging and making sure that the problem is not in your configuration.
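Once kubectl really authenticates as the read-only user, the namespace scoping can be confirmed without listing anything (a small sketch):

kubectl auth can-i list pods --namespace production    # expected: yes
kubectl auth can-i list pods --namespace default       # expected: no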
If this doesn't work - try using the configuration below.
ClusterRoleBinding and ClusterRole:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: eks-ro-user-binding
subjects:
- kind: User
  name: eks-ro-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: eks-ro-user-cluster-role
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: eks-ro-user-cluster-role
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
AWS auth config map (after you created an IAM user):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<account-id>:role/eks-node-group-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::<account-id>:user/eks-ro-user
      username: eks-ro-user
I am finding it impossible to connect to my k8s cluster on EKS. I am getting the error
I am getting the error
error: You must be logged in to the server (Unauthorized)
However,
I have multiple clusters:
$ kubectl config get-clusters
NAME
hilarious-unicorn-1568659474.eu-west-1.eksctl.io
arn:aws:eks:eu-west-1:<redacted>:cluster/derp
These also appear when I do
$ kubectl config get-contexts
My issue is: if I switch to my eu-west cluster/context by running kubectl config use-context <my context>
and then run kubectl cluster-info, I get
error: You must be logged in to the server (Unauthorized)
I have run
$ aws eks update-kubeconfig --name myCluster
And this has updated my ~/.kube/config file, but to no avail.
I am literally at a loss as to why this is not working because it was previously working, and I can authenticate on another cluster.
Edits due to Comments
@Eduardo Baitello:
I have aws-iam-authenticator installed. Although we also use awsmfa.
Here are the contents of my .kube/config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: redacted
    server: redacted
  name: arn:aws:eks:eu-west-1:redacted:cluster/savvy
- cluster:
    certificate-authority-data: redacted
    server: redacted
  name: hilarious-unicorn-1568659474.eu-west-1.eksctl.io
contexts:
- context:
    cluster: arn:aws:eks:eu-west-1:redacted:cluster/savvy
    user: arn:aws:eks:eu-west-1:redacted:cluster/savvy
  name: arn:aws:eks:eu-west-1:redacted:cluster/savvy
- context:
    cluster: hilarious-unicorn-1568659474.eu-west-1.eksctl.io
    user: karl@hilarious-unicorn-1568659474.eu-west-1.eksctl.io
  name: karl@hilarious-unicorn-1568659474.eu-west-1.eksctl.io
current-context: arn:aws:eks:eu-west-1:redacted:cluster/savvy
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-west-1:redacted:cluster/savvy
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-west-1
      - eks
      - get-token
      - --cluster-name
      - savvy
      command: aws
- name: karl@hilarious-unicorn-1568659474.eu-west-1.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - hilarious-unicorn-1568659474
      command: aws-iam-authenticator
      env: null
@shogan: Running kubectl describe configmap -n kube-system aws-auth says I am (Unauthorized).
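Since both user entries in the kubeconfig are exec plugins, the token kubectl sends is signed by whatever credentials the aws CLI (or aws-iam-authenticator) resolves at that moment, which matters when awsmfa or profiles are involved. A quick comparison sketch (cluster name and region taken from the kubeconfig above):

aws sts get-caller-identity                                 # the identity the exec plugin will use
aws eks get-token --cluster-name savvy --region eu-west-1   # the token kubectl actually sends

If that identity is neither the one that created the cluster nor mapped in aws-auth, the API server answers with Unauthorized.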