I'm trying to provide access to an EKS cluster.
From what I understood, I should be doing it with the aws-auth configmap.
My aws-auth config map looks like this:
Name:         aws-auth
Namespace:    kube-system
Labels:       app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: auth
              meta.helm.sh/release-namespace: default

Data
====
mapRoles:
----
- rolearn: arn:aws:iam::xxxxxxx:role/xxxxxxxxx
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
- rolearn: arn:aws:iam::xxxxxxx:role/AWSReservedSSO_DevAccountAdmin_xxxxxx
  username: admin
  groups:
    - system:masters
    - system:nodes
    - system:bootstrappers
- rolearn: arn:aws:iam::xxxx:role/AWSReservedSSO_DevAccountAccess_xxxx
  username: admin
  groups:
    - system:masters
    - system:nodes
    - system:bootstrappers
Note that this is the output of kubectl describe cm -n kube-system aws-auth
This usually works, but for some reason the same setup is not working for one of my developers.
They get the error: You must be logged in to the server (Unauthorized)
This has to be the most complicated subject I've come across in Kubernetes, despite how simple the topic looks. Please help.
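When one person gets Unauthorized while the same mapping works for everyone else, a useful first step is to check which IAM principal their kubectl is actually presenting and whether it matches a rolearn in aws-auth exactly (for SSO roles, a common gotcha is that the mapped ARN must be the plain role ARN without the aws-reserved/sso.amazonaws.com path). A rough diagnostic sketch, run as the affected developer; the cluster name and region are placeholders:
aws sts get-caller-identity                               # which role/user is kubectl really using?
aws eks update-kubeconfig --name <cluster-name> --region <region>
kubectl auth can-i list pods --all-namespaces             # returns yes/no once authentication works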
Related
I have two namespaces: n1 (for running EC2 instances) and fargate (connected to a Fargate profile).
There is a data-processor service account in n1.
I'd like to allow the data-processor account to run pods in the fargate namespace.
Now, I'm getting the following error:
Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://<cluster-id>.gr7.us-west-2.eks.amazonaws.com/api/v1/namespaces/fargate/pods. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods is forbidden: User "system:serviceaccount:n1:data-processor" cannot create resource "pods" in API group "" in the namespace "fargate".
You haven't provided any of the Roles or RoleBindings, so we can't see what permissions you have set already, but if you apply the following manifest it should work:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: data-processor-role
rules:
  - apiGroups: ['']
    resources: ['pods']
    verbs: ['get', 'watch', 'list', 'create', 'patch', 'update', 'delete']
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: data-processor-rolebinding
  namespace: fargate
subjects:
  - kind: ServiceAccount
    name: data-processor
    namespace: n1
roleRef:
  kind: ClusterRole
  name: data-processor-role
  apiGroup: rbac.authorization.k8s.io
That should allow your data-processor service account read/write permissions to pods in the fargate namespace.
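If you want to confirm the binding before wiring it into the application, you can impersonate the service account with kubectl's built-in access check, for example:
kubectl auth can-i create pods -n fargate --as system:serviceaccount:n1:data-processor   # expect: yes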
I have an aws-auth ConfigMap like this. In the mapUsers section I wanted to add another user with master permissions:
apiVersion: v1
data:
  mapRoles: |
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::XXX:role/eks-nodegroup-service-role
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::XXX:user/k8s-admin
      username: k8s-admin
    groups:
      - system:masters
  mapAccounts: |
    - "XXX"
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
But I still get an error when using k8s-admin user: error: You must be logged in to the server (Unauthorized).
[EDIT]
I've added mapAccounts. Now the error is:
Error from server (Forbidden): pods is forbidden: User "arn:aws:iam::XXX:user/k8s-admin" cannot list resource "pods" in API group "" in the namespace "default"
mapUsers: |
  - userarn: arn:aws:iam::XXX:user/k8s-admin
    username: k8s-admin
  groups:
    - system:masters
Your mapUsers statement above is incorrect (the groups list is not nested under the user entry); try:
...
mapUsers: |
  - userarn: arn:aws:iam::XXX:user/k8s-admin
    username: k8s-admin
    groups:
      - system:masters
...
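Once the ConfigMap is corrected and re-applied, a quick way to confirm the mapping is to call the API as the k8s-admin user; the profile and cluster names below are placeholders:
aws sts get-caller-identity --profile <k8s-admin-profile>       # should show the k8s-admin user ARN
aws eks update-kubeconfig --name <cluster-name> --profile <k8s-admin-profile>
kubectl get ns                                                  # should now succeed for a system:masters user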
I installed the EFS CSI driver to mount EFS on EKS, following the Amazon EFS CSI driver documentation and the aws-efs-csi-driver repository.
I faced the error below while deploying the PersistentVolumeClaim.
Error from server (Forbidden): error when creating "claim.yml": persistentvolumeclaims "efs-claim" is forbidden: may only update PVC status
StorageClass.yaml -->
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
mountOptions:
  - tls
pv.yaml -->
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxxxxxxxxx
pvclaim.yaml -->
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      name: production-environment
      role: prod
Kindly help me to resolve this
I fixed the issue with AWS support.
Posting the resolution in case it helps someone.
We removed the system:nodes and system:bootstrappers groups for the controller (bastion) server's role from the aws-auth ConfigMap, and that fixed the issue.
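For reference, changes like this are usually made either by editing the ConfigMap in place or by reviewing the mappings with eksctl; a sketch where the cluster name is a placeholder:
kubectl edit configmap aws-auth -n kube-system            # edit the mappings directly
eksctl get iamidentitymapping --cluster <cluster-name>    # or list the current identity mappings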
Previous configmap/aws-auth -->
apiVersion: v1
data:
  mapRoles: |
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::xxxxxxxxxx:role/eksctl-sc-prod-eks-cluster-NodeInstanceRole-T3B32A19KBZB
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
        - system:bootstrappers
        - system:nodes
        - system:masters
      rolearn: arn:aws:iam::xxxxxxxxx:role/sc-prod-iam-ec2-instance-profile-bastion
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::xxxxxxxxxx:user/jawad846
      username: admin
      groups:
        - system:masters
Current configmap/aws-auth -->
apiVersion: v1
data:
  mapRoles: |
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::xxxxxxxxx:role/eksctl-sc-prod-eks-cluster-NodeInstanceRole-T3B32A19KBZB
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
        - system:masters
      rolearn: arn:aws:iam::xxxxxxxxxx:role/sc-prod-iam-ec2-instance-profile-bastion
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::xxxxxxxxxx:user/jawad846
      username: admin
      groups:
        - system:masters
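After this change, re-applying the claim should go through; a quick check, using the file name from the question:
kubectl apply -f claim.yml
kubectl get pvc efs-claim    # STATUS should move from Pending to Bound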
Thanks, all.
The aim is to create a read-only user for the production namespace of my EKS cluster. However, I see the user has full access. As I am new to EKS and Kubernetes, please point out the mistake I have made.
I have created an IAM user without any permissions attached. The ARN is arn:aws:iam::460764xxxxxx:user/eks-prod-readonly-user. I have also noted down the access key ID and secret access key:
aws_access_key_id= AKIAWWxxxxxxxxxxxxx
aws_secret_access_key= lAbtyv3zlbAMPxym8Jktal+xxxxxxxxxxxxxxxxxxxx
Then, I have created the production namespace, role, and role binding as follows:
ubuntu@ip-172-16-0-252:~$ sudo kubectl create namespace production
ubuntu@ip-172-16-0-252:~$ cat role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: production
  name: prod-viewer-role
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
ubuntu@ip-172-16-0-252:~$ sudo kubectl apply -f role.yaml
ubuntu@ip-172-16-0-252:~$ cat rolebinding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: prod-viewer-binding
  namespace: production
subjects:
  - kind: User
    name: eks-prod-readonly-user
    apiGroup: ""
roleRef:
  kind: Role
  name: prod-viewer-role
  apiGroup: ""
ubuntu@ip-172-16-0-252:~$ sudo kubectl apply -f rolebinding.yaml
Then, we added the newly created user to the aws-auth ConfigMap and applied the changes:
ubuntu@ip-172-16-0-252:~$ sudo kubectl -n kube-system get configmap aws-auth -o yaml > aws-auth-configmap.yaml
ubuntu@ip-172-16-0-252:~$ vi aws-auth-configmap.yaml
The following section is added under 'mapUsers':
  - userarn: arn:aws:iam::460764xxxxxx:user/eks-prod-readonly-user
    username: eks-prod-readonly-user
    groups:
      - prod-viewer-role
ubuntu@ip-172-16-0-252:~$ sudo kubectl apply -f aws-auth-configmap.yaml
Now, I include this user's details as a new section inside the AWS credentials file (~/.aws/credentials) so that this user can be authenticated to the Kubernetes API server:
[eksprodreadonlyuser]
aws_access_key_id= AKIAWWxxxxxxxxxxxxx
aws_secret_access_key= lAbtyv3zlbAMPxym8Jktal+xxxxxxxxxxxxxxxxxxxx
region=eu-west-2
output=json
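For reference, the kubeconfig that aws eks update-kubeconfig generates authenticates through an exec plugin, and the profile can be pinned there so kubectl always picks it up; a sketch where the cluster name is a placeholder and the user entry name is illustrative:
users:
- name: eks-prod-readonly-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - eks
        - get-token
        - --cluster-name
        - <cluster-name>
      env:
        - name: AWS_PROFILE
          value: eksprodreadonlyuser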
I activate this AWS profile -
ubuntu@ip-172-16-0-252:~$ export AWS_PROFILE="eksprodreadonlyuser"
ubuntu@ip-172-16-0-252:~$ aws sts get-caller-identity
We see the correct user ARN in the output of get-caller-identity command.
While trying to see pods in the default namespace, it works. Ideally it should not, as the user has been given access to the production namespace only:
ubuntu@ip-172-16-0-252:~$ sudo kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
test-autoscaler-697b95d8b-5wl5d   1/1     Running   0          7d20h
ubuntu@ip-172-16-0-252:~$
Please share any pointers to resolve this. Thanks in advance!
First, try exporting all your credentials in the terminal as environment variables instead of using profiles:
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=XXX
export AWS_DEFAULT_REGION=us-east-2
This is just for debugging and making sure that the problem is not in your configuration.
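With the variables exported, a quick sanity check looks like this:
aws sts get-caller-identity                     # confirm the read-only user's ARN is the active identity
kubectl auth can-i list pods -n production      # check the namespaced RBAC grant directly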
If this doesn't work, try using the configuration below.
ClusterRoleBinding and ClusterRole:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: eks-ro-user-binding
subjects:
  - kind: User
    name: eks-ro-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: eks-ro-user-cluster-role
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: eks-ro-user-cluster-role
rules:
  - apiGroups:
      - ""
    resources:
      - '*'
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - '*'
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
    resources:
      - '*'
    verbs:
      - get
      - list
      - watch
AWS auth config map (after you created an IAM user):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<account-id>:role/eks-node-group-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::<account-id>:user/eks-ro-user
      username: eks-ro-user
I have created an AWS EKS cluster. Since I created it, my AWS user ID has been added to the system:masters group. But when I check the aws-auth ConfigMap I don't see my user ID. Why?
I had to give access to another user, so I assigned the appropriate AWS policies to their IAM user, then edited the aws-auth ConfigMap with the following mapping:
mapUsers:
----
- userarn: arn:aws:iam::573504862059:user/abc-user
  username: abc-user
  groups:
    - system:masters
As far as I understand, when a user is part of the system:masters group, that user has admin privileges on the cluster.
How can I add a new user who will have restricted privileges to a specific namespace? Do I have to do the same thing I did for the above user? If so, what group should I add the new user to?
I would familiarize yourself with Kubernetes RBAC concepts.
You can create a Role, since Roles are limited to a specific namespace.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: my-namespace
  name: full-namespace
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
Then create a RoleBinding:
$ kubectl create rolebinding my-namespace-binding --role=full-namespace --group=namespacegroup --namespace=my-namespace
Or kubectl create -f this:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-namespace-binding
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: full-namespace
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: namespacegroup
Then on your ConfigMap:
mapUsers:
----
- userarn: arn:aws:iam::573504862059:user/abc-user
  username: abc-user
  groups:
    - namespacegroup
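You can verify the result without switching credentials by impersonating the mapped user and group, for example:
kubectl auth can-i get pods -n my-namespace --as abc-user --as-group namespacegroup   # expect: yes
kubectl auth can-i get pods -n default --as abc-user --as-group namespacegroup        # expect: no, assuming no other bindings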