My aim is to create a read-only user for the production namespace of my EKS cluster. However, I see that the user has full access. As I am new to EKS and Kubernetes, please help me find the mistake I have made.
I have created an IAM user without any permissions attached. Its ARN is arn:aws:iam::460764xxxxxx:user/eks-prod-readonly-user. I have also noted down the access key ID and secret access key -
aws_access_key_id= AKIAWWxxxxxxxxxxxxx
aws_secret_access_key= lAbtyv3zlbAMPxym8Jktal+xxxxxxxxxxxxxxxxxxxx
Then, I have created the production namespace, role, and role binding as follows –
ubuntu@ip-172-16-0-252:~$ sudo kubectl create namespace production
ubuntu@ip-172-16-0-252:~$ cat role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: production
  name: prod-viewer-role
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
ubuntu@ip-172-16-0-252:~$ sudo kubectl apply -f role.yaml
ubuntu@ip-172-16-0-252:~$ cat rolebinding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: prod-viewer-binding
  namespace: production
subjects:
- kind: User
  name: eks-prod-readonly-user
  apiGroup: ""
roleRef:
  kind: Role
  name: prod-viewer-role
  apiGroup: ""
ubuntu@ip-172-16-0-252:~$ sudo kubectl apply -f rolebinding.yaml
Then, I added the newly created user to the aws-auth ConfigMap and applied the changes -
ubuntu@ip-172-16-0-252:~$ sudo kubectl -n kube-system get configmap aws-auth -o yaml > aws-auth-configmap.yaml
ubuntu@ip-172-16-0-252:~$ vi aws-auth-configmap.yaml
The following section is added under ‘mapUsers’ –
- userarn: arn:aws:iam::460764xxxxxx:user/eks-prod-readonly-user
  username: eks-prod-readonly-user
  groups:
    - prod-viewer-role
ubuntu@ip-172-16-0-252:~$ sudo kubectl apply -f aws-auth-configmap.yaml
Now, I add this user's details as a new profile in the AWS credentials file (~/.aws/credentials) so that this user can authenticate to the Kubernetes API server -
[eksprodreadonlyuser]
aws_access_key_id= AKIAWWxxxxxxxxxxxxx
aws_secret_access_key= lAbtyv3zlbAMPxym8Jktal+xxxxxxxxxxxxxxxxxxxx
region=eu-west-2
output=json
I activate this AWS profile -
ubuntu@ip-172-16-0-252:~$ export AWS_PROFILE="eksprodreadonlyuser"
ubuntu@ip-172-16-0-252:~$ aws sts get-caller-identity
We see the correct user ARN in the output of get-caller-identity command.
Listing pods in the default namespace works, although it should not, as the user was given access to the production namespace only -
ubuntu@ip-172-16-0-252:~$ sudo kubectl get pods
NAME READY STATUS RESTARTS AGE
test-autoscaler-697b95d8b-5wl5d 1/1 Running 0 7d20h
ubuntu@ip-172-16-0-252:~$
Please share any pointers to resolve this. Thanks in advance!
First, please try exporting your credentials in the terminal as environment variables instead of using profiles:
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=XXX
export AWS_DEFAULT_REGION=us-east-2
This is just for debugging and making sure that the problem is not in your configuration.
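As an additional check, you can ask the API server what the mapped identity is actually allowed to do (assuming kubectl is run without sudo, since sudo usually strips the exported AWS variables from the environment):
aws sts get-caller-identity
kubectl auth can-i list pods --namespace production
kubectl auth can-i list pods --namespace default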
If this doesn't work - try using the configuration below.
ClusterRoleBinding and ClusterRole:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: eks-ro-user-binding
subjects:
- kind: User
  name: eks-ro-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: eks-ro-user-cluster-role
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: eks-ro-user-cluster-role
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
AWS auth config map (after you created an IAM user):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<account-id>:role/eks-node-group-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::<account-id>:user/eks-ro-user
      username: eks-ro-user
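Note that the username you set in mapUsers has to match the subject name in the ClusterRoleBinding (eks-ro-user here). From an admin context you can verify the resulting permissions with impersonation, for example:
kubectl auth can-i get pods --as eks-ro-user
kubectl auth can-i create deployments --as eks-ro-user
The first should return yes and the second no for a read-only user.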
Related
I have two namespaces: n1 (for running EC2 instances) and fargate (connected to a Fargate profile).
There is a data-processor service account in n1.
I'd like to allow the data-processor account to run pods in the fargate namespace.
Now, I'm getting the following error:
Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://<cluster-id>.gr7.us-west-2.eks.amazonaws.com/api/v1/namespaces/fargate/pods. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods is forbidden: User "system:serviceaccount:n1:data-processor" cannot create resource "pods" in API group "" in the namespace "fargate".
You haven't provided any of the roles or rolebindings so we can't see what permissions you have set already, but if you apply the following manifest it should work:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: data-processor-role
rules:
- apiGroups: ['']
  resources: ['pods']
  verbs: ['get', 'watch', 'list', 'create', 'patch', 'update', 'delete']
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: data-processor-rolebinding
  namespace: fargate
subjects:
- kind: ServiceAccount
  name: data-processor
  namespace: n1
roleRef:
  kind: ClusterRole
  name: data-processor-role
  apiGroup: rbac.authorization.k8s.io
That should allow your data-processor service account read/write permissions to pods in the fargate namespace.
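If you want to verify it from an admin context, impersonating the service account is a quick check (names taken from the manifest above):
kubectl auth can-i create pods --namespace fargate --as system:serviceaccount:n1:data-processor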
I was trying to get started with AWS EKS Kubernetes cluster provisioning using Terraform. I was following the tutorial and got stuck at the step involving the kubeconfig file.
After running terraform output kubeconfig > ~/.kube/config, I should be able to communicate with the cluster. Unfortunately, when I run any kubectl command, for example cluster-info, I get the error: error loading config file, yaml: line 4: mapping values are not allowed in this context
This is the code of the outputs.tf file:
# Outputs
#
locals {
  config_map_aws_auth = <<CONFIGMAPAWSAUTH
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${aws_iam_role.demo-node.arn}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
CONFIGMAPAWSAUTH

  kubeconfig = <<KUBECONFIG
apiVersion: v1
clusters:
- cluster:
    server: ${aws_eks_cluster.demo.endpoint}
    certificate-authority-data: ${aws_eks_cluster.demo.certificate_authority[0].data}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "${var.cluster-name}"
KUBECONFIG
}

output "config_map_aws_auth" {
  value = local.config_map_aws_auth
}

output "kubeconfig" {
  value = local.kubeconfig
}
I created an EKS cluster from an EC2 instance that has my-cluster-role attached to its instance profile, using the AWS CLI:
aws eks create-cluster --name my-cluster --role-arn arn:aws:iam::012345678910:role/my-cluster-role --resources-vpc-config subnetIds=subnet-abcd123,subnet-wxyz345,securityGroupIds=sg-123456,endpointPublicAccess=false,endpointPrivateAccess=true
Kubeconfig file:
aws eks --region us-east-1 update-kubeconfig --name my-cluster
But while trying to access Kubernetes resources, I get the error below:
[root@k8s-mgr ~]# kubectl get deployments --all-namespaces
Error from server (Forbidden): deployments.apps is forbidden: User "system:node:i-xxxxxxxx" cannot list resource "deployments" in API group "apps" at the cluster scope
Except for pods and services, no other resource is accessible.
Note that the cluster was created using the role my-cluster-role; as per the documentation, this role should have permission to access the resources.
[root@k8s-mgr ~]# aws sts get-caller-identity
{
    "Account": "012345678910",
    "UserId": "ABCDEFGHIJKKLMNO12PQR:i-xxxxxxxx",
    "Arn": "arn:aws:sts::012345678910:assumed-role/my-cluster-role/i-xxxxxxxx"
}
Edit:
Tried creating ClusterRole and ClusterRoleBinding as suggested here: https://stackoverflow.com/a/70125670/7654693
Error:
[root@k8s-mgr]# kubectl apply -f access.yaml
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterroles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRole"
Name: "eks-console-dashboard-full-access-clusterrole", Namespace: ""
from server for: "access.yaml": clusterroles.rbac.authorization.k8s.io "eks-console-dashboard-full-access-clusterrole" is forbidden: User "system:node:i-xxxxxxxx" cannot get resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
Name: "eks-console-dashboard-full-access-binding", Namespace: ""
Below is my Kubeconfig:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERT
    server: SERVER ENDPOINT
  name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
    user: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
  name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
current-context: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      command: aws
Create a cluster role and cluster role binding, or a role and role binding
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks-console-dashboard-full-access-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - namespaces
  - pods
  verbs:
  - get
  - list
- apiGroups:
  - apps
  resources:
  - deployments
  - daemonsets
  - statefulsets
  - replicasets
  verbs:
  - get
  - list
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-console-dashboard-full-access-binding
subjects:
- kind: Group
  name: eks-console-dashboard-full-access-group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: eks-console-dashboard-full-access-clusterrole
  apiGroup: rbac.authorization.k8s.io
You can read more at: https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-object-access-error/
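For the Group subject in the binding to take effect, the IAM role (or user) you authenticate with also needs to be mapped to that group in the aws-auth ConfigMap, roughly like this (the role ARN and username below are placeholders):
mapRoles: |
  - rolearn: arn:aws:iam::ACCOUNT_ID:role/my-cluster-role
    username: my-cluster-role
    groups:
      - eks-console-dashboard-full-access-group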
Update role
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERT
    server: SERVER ENDPOINT
  name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
    user: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
  name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
current-context: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      - --role
      - arn:aws:iam::1023456789:role/prod-role-iam-user-EksUserRole-992Y0S0BSVNT
      command: aws
Add the role details to the config:
      - --role
      - arn:aws:iam::1023456789:role/prod-role-iam-user-EksUserRole-992Y0S0BSVNT
      command: aws
      env:
      - name: AWS_PROFILE
        value: my-prod
or else
      - --role-arn
      - arn:aws:iam::1213:role/eks-cluster-admin-role-dfasf
      command: aws-vault
      env: null
There is apparently a mismatch between the IAM identity that created the cluster and the one taken from your kubeconfig file while authenticating to your EKS cluster. You can tell by the RBAC error output.
A quote from the aws eks CLI reference:
--role-arn (string) To assume a role for cluster authentication, specify an IAM role ARN with this option. For example, if you created
a cluster while assuming an IAM role, then you must also assume that
role to connect to the cluster the first time.
A probable solution: update your kubeconfig file accordingly with this command:
aws eks update-kubeconfig --name my-cluster --role-arn arn:aws:iam::012345678910:role/my-cluster-role
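After updating the kubeconfig, you can confirm which identity the cluster sees and retry the original command:
aws sts get-caller-identity
kubectl get deployments --all-namespaces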
I have created an AWS EKS cluster. Since I created it with my AWS user ID, that user has been added to the system:masters group. But when I checked the ConfigMap aws-auth, I don't see my user ID in it. Why?
I had to give access to another user, so I assigned the appropriate AWS policies to that IAM user and then edited the ConfigMap aws-auth with the following mapping:
mapUsers: |
  - userarn: arn:aws:iam::573504862059:user/abc-user
    username: abc-user
    groups:
      - system:masters
So far I have understood that when a user is part of the system:masters group, this user has admin privileges on the cluster.
How can I add a new user who will have restricted privileges to a specific namespace? Do I have to do the same thing that I did for the above user? If so, which group should I add the new user to?
I would familiarize yourself with Kubernetes RBAC concepts.
You can create a Role, since Roles are limited to a specific namespace.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: my-namespace
  name: full-namespace
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
Then create a RoleBinding:
$ kubectl create rolebinding my-namespace-binding --role=full-namespace --group=namespacegroup --namespace=my-namespace
Or kubectl create -f this:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-namespace-binding
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: full-namespace
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: namespacegroup
Then on your ConfigMap:
mapUsers: |
  - userarn: arn:aws:iam::573504862059:user/abc-user
    username: abc-user
    groups:
      - namespacegroup
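You can then test the mapping from your admin session by impersonating the new user and group, for example:
kubectl auth can-i get pods --namespace my-namespace --as abc-user --as-group namespacegroup
kubectl auth can-i get pods --namespace default --as abc-user --as-group namespacegroup
The first should return yes and the second no, since the Role and RoleBinding exist only in my-namespace.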
I want to have TLS termination enabled on my ingress (on top of Kubernetes) on Google Cloud Platform.
My ingress is working, but my cert manager is failing with the error message:
textPayload: "2018/07/05 22:04:00 Error while processing certificate during sync: Error while creating ACME client for 'domain': Error while initializing challenge provider googlecloud: Unable to get Google Cloud client: google: error getting credentials using GOOGLE_APPLICATION_CREDENTIALS environment variable: open /opt/google/kube-cert-manager.json: no such file or directory
"
This is what I did in order to get into the current state:
created cluster, deployment, service, ingress
executed:
gcloud --project 'project' iam service-accounts create kube-cert-manager-sv-security --display-name "kube-cert-manager-sv-security"
gcloud --project 'project' iam service-accounts keys create ~/.config/gcloud/kube-cert-manager-sv-security.json --iam-account kube-cert-manager-sv-security@'project'.iam.gserviceaccount.com
gcloud --project 'project' projects add-iam-policy-binding --member serviceAccount:kube-cert-manager-sv-security@'project'.iam.gserviceaccount.com --role roles/dns.admin
kubectl create secret generic kube-cert-manager-sv-security-secret --from-file=/home/perre/.config/gcloud/kube-cert-manager-sv-security.json
and created the following resources:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kube-cert-manager-sv-security-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-cert-manager-sv-security
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-cert-manager-sv-security
rules:
- apiGroups: ["*"]
  resources: ["certificates", "ingresses"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["*"]
  resources: ["secrets"]
  verbs: ["get", "list", "create", "update", "delete"]
- apiGroups: ["*"]
  resources: ["events"]
  verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-cert-manager-sv-security-service-account
subjects:
- kind: ServiceAccount
  namespace: default
  name: kube-cert-manager-sv-security
roleRef:
  kind: ClusterRole
  name: kube-cert-manager-sv-security
  apiGroup: rbac.authorization.k8s.io
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: certificates.stable.k8s.psg.io
spec:
  scope: Namespaced
  group: stable.k8s.psg.io
  version: v1
  names:
    kind: Certificate
    plural: certificates
    singular: certificate
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: kube-cert-manager-sv-security
  name: kube-cert-manager-sv-security
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kube-cert-manager-sv-security
      name: kube-cert-manager-sv-security
    spec:
      serviceAccount: kube-cert-manager-sv-security
      containers:
      - name: kube-cert-manager
        env:
        - name: GCE_PROJECT
          value: solidair-vlaanderen-207315
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /opt/google/kube-cert-manager.json
        image: bcawthra/kube-cert-manager:2017-12-10
        args:
        - "-data-dir=/var/lib/cert-manager-sv-security"
        #- "-acme-url=https://acme-staging.api.letsencrypt.org/directory"
        # NOTE: the URL above points to the staging server, where you won't get real certs.
        # Uncomment the line below to use the production LetsEncrypt server:
        - "-acme-url=https://acme-v01.api.letsencrypt.org/directory"
        # You can run multiple instances of kube-cert-manager for the same namespace(s),
        # each watching for a different value for the 'class' label
        - "-class=kube-cert-manager"
        # You can choose to monitor only some namespaces, otherwise all namespaces will be monitored
        #- "-namespaces=default,test"
        # If you set a default email, you can omit the field/annotation from Certificates/Ingresses
        - "-default-email=viae.it@gmail.com"
        # If you set a default provider, you can omit the field/annotation from Certificates/Ingresses
        - "-default-provider=googlecloud"
        volumeMounts:
        - name: data-sv-security
          mountPath: /var/lib/cert-manager-sv-security
        - name: google-application-credentials
          mountPath: /opt/google
      volumes:
      - name: data-sv-security
        persistentVolumeClaim:
          claimName: kube-cert-manager-sv-security-data
      - name: google-application-credentials
        secret:
          secretName: kube-cert-manager-sv-security-secret
Does anyone know what I'm missing?
Your secret resource kube-cert-manager-sv-security-secret probably contains a JSON file named kube-cert-manager-sv-security.json, which does not match the GOOGLE_APPLICATION_CREDENTIALS value. You can confirm the file name inside the secret with kubectl get secret -o yaml YOUR-SECRET-NAME.
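For example:
kubectl get secret kube-cert-manager-sv-security-secret -o yaml
The key names listed under data: are the file names that the secret volume mounts under /opt/google in the pod.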
Once you change the file path to match the actual file name, cert-manager should work fine:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          # value: /opt/google/kube-cert-manager.json
          value: /opt/google/kube-cert-manager-sv-security.json