AWS EKS kubectl - No resources found in default namespace - amazon-web-services

Trying to set up an EKS cluster.
An error occurred (AccessDeniedException) when calling the DescribeCluster operation: Account xxx is not authorized to use this service. This error came from the CLI; on the console I was able to create the cluster and everything else successfully.
I am logged in as the root user (it's just my personal account).
It says Account, so it sounds like it's not a user/permissions issue?
Do I have to enable my account for this service? I don't see any such option.
Also, if I log in as a user (rather than root), will I be able to see everything that was earlier created as root? I have now created a user and assigned admin and eks* permissions. I checked this when I signed in as the user - I can see everything.
The aws cli was set up with root credentials (I think) - so do I have to go back, undo/fix all this, and just use this user?
Update 1
I redid/restarted everything, including the user and aws configure, just to make sure, but the issue still did not get resolved.
There is an option to create the file manually - that finally worked.
And I was able to run: kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   48m
KUBECONFIG: I had set up the $env:KUBECONFIG variable:
$env:KUBECONFIG="C:\Users\sbaha\.kube\config-EKS-nginixClstr"
$Env:KUBECONFIG
C:\Users\sbaha\.kube\config-EKS-nginixClstr
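The PowerShell lines above set the variable for the current session only; for anyone following along on Linux/macOS, a minimal POSIX-shell equivalent (the file name is the one from the question, with $HOME standing in for the Windows profile directory):

```shell
# Point kubectl at the custom kubeconfig file for this shell session,
# then print the variable to confirm it is set.
KUBECONFIG="$HOME/.kube/config-EKS-nginixClstr"
export KUBECONFIG
echo "$KUBECONFIG"
```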
kubectl config get-contexts
CURRENT   NAME   CLUSTER      AUTHINFO   NAMESPACE
*         aws    kubernetes   aws
kubectl config current-context
aws
My understanding is that I should see both the aws and my EKS-nginixClstr contexts, but I only see aws - is this (also) an issue?
Next step is to create and add worker nodes. I updated the node ARN correctly in the .yaml file: kubectl apply -f ~\.kube\aws-auth-cm.yaml
configmap/aws-auth configured
So this perhaps worked.
But the next step fails:
kubectl get nodes
No resources found in default namespace.
On the AWS Console the node group shows "Create Completed". Also, on the CLI, kubectl get nodes --watch does not even return.
So this has to be debugged next (it never ends).
aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::xxxxx:role/Nginix-NodeGrpClstr-NodeInstanceRole-1SK61JHT0JE4
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
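One easy mistake in this file is pasting a doubled arn:aws:iam:: prefix into rolearn, which silently prevents nodes from joining. A quick local sanity check before kubectl apply, sketched here with the file written to /tmp (the account ID and role name are placeholders):

```shell
# Write a sketch of aws-auth-cm.yaml and check the rolearn for the
# doubled-prefix mistake. A valid rolearn contains exactly one
# "arn:aws:iam::" prefix.
cat > /tmp/aws-auth-cm.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::xxxxx:role/Nginix-NodeGrpClstr-NodeInstanceRole-1SK61JHT0JE4
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOF
if grep -q 'rolearn: arn:aws:iam::arn:aws:iam::' /tmp/aws-auth-cm.yaml; then
  echo "malformed rolearn"
else
  echo "rolearn looks ok"
fi
```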

This problem was related to not having the correct version of eksctl - it must be at least 0.7.0. The documentation states this and I knew it, but initially nothing I did could get me beyond 0.6.0. The way to get it is to configure your AWS CLI to a region that supports EKS. Once you get 0.7.0, this issue gets resolved.
Overall, to make EKS work: you must use the same user on both the console and the CLI, you must work in a region that supports EKS, and you need eksctl version 0.7.0 or later.
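The version requirement can be checked in plain shell before going further; a minimal sketch, with "current" hard-coded for illustration (in practice substitute current=$(eksctl version)):

```shell
# Compare the installed eksctl version against the 0.7.0 minimum using
# sort -V (version sort). "current" is hard-coded here for illustration.
required="0.7.0"
current="0.6.0"
lowest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n 1)
if [ "$lowest" = "$required" ]; then
  echo "eksctl version ok"
else
  echo "upgrade eksctl (have $current, need >= $required)"
fi
# prints: upgrade eksctl (have 0.6.0, need >= 0.7.0)
```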

Related

Terraform EKS configmaps is forbidden

I am trying to deploy a Kubernetes cluster on AWS EKS using Terraform, run from a Gitlab CI pipeline. My code currently gets a full cluster up and running, except there is a step in which it tries to add the nodes (which are created separately) into the cluster.
When it tries to do this, this is the error I receive:
│ Error: configmaps is forbidden: User "system:serviceaccount:gitlab-managed-apps:default" cannot create resource "configmaps" in API group "" in the namespace "kube-system"
│
│ with module.mastercluster.kubernetes_config_map.aws_auth[0],
│ on .terraform/modules/mastercluster/aws_auth.tf line 63, in resource "kubernetes_config_map" "aws_auth":
│ 63: resource "kubernetes_config_map" "aws_auth" {
│
Terraform I believe is trying to edit the configmap aws_auth in the kube-system namespace, but for whatever reason, it doesn't have permission to do so?
I have found an older answer on Stack Overflow that matches what the documentation has to say about adding an aws_eks_cluster_auth data source and passing it to the kubernetes provider.
My configuration of this currently looks like this:
data "aws_eks_cluster" "mastercluster" {
  name = module.mastercluster.cluster_id
}

data "aws_eks_cluster_auth" "mastercluster" {
  name = module.mastercluster.cluster_id
}

provider "kubernetes" {
  alias                  = "mastercluster"
  host                   = data.aws_eks_cluster.mastercluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.mastercluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.mastercluster.token
  load_config_file       = false
}
The weird thing is, this has worked for me before. I have successfully deployed multiple clusters using this method. This configuration is an almost identical copy to another one I had before, only the names of the clusters are different. I am totally lost as to why this can possibly go wrong.
Use semver to lock hashicorp provider versions
That's why it is so important to use semver in Terraform manifests.
As per Terraform documentation:
Terraform providers manage resources by communicating between Terraform and target APIs. Whenever the target APIs change or add functionality, provider maintainers may update and version the provider.
When multiple users or automation tools run the same Terraform configuration, they should all use the same versions of their required providers.
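A minimal sketch of what such a provider version pin looks like, written to /tmp so its shape can be inspected; the file name and the "~> 2.0" constraint are illustrative, not recommendations:

```shell
# Write an illustrative required_providers block with a semver constraint
# and confirm the constraint line is present.
cat > /tmp/versions.tf <<'EOF'
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
  }
}
EOF
grep 'version = ' /tmp/versions.tf
```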
Use RBAC rules for Kubernetes
There is a GitHub issue filed about this: v2.0.1: Resources cannot be created. Does kubectl reference to kube config properly? · Issue #1127 · hashicorp/terraform-provider-kubernetes, with the same error message as in your case.
And one of the comments answers:
Offhand, this looks related to RBAC rules in the cluster (which may have been installed by the helm chart). This command might help diagnose the permissions issues relating to the service account in the error message.
$ kubectl auth can-i create namespace --as=system:serviceaccount:gitlab-prod:default
$ kubectl auth can-i --list --as=system:serviceaccount:gitlab-prod:default
You might be able to compare that list with other users on the cluster:
kubectl auth can-i --list --namespace=default --as=system:serviceaccount:default:default
$ kubectl auth can-i create configmaps
yes
$ kubectl auth can-i create configmaps --namespace=nginx-ingress --as=system:serviceaccount:gitlab-prod:default
no
And investigate related clusterroles:
$ kubectl describe clusterrolebinding system:basic-user
Name:         system:basic-user
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
Role:
  Kind:  ClusterRole
  Name:  system:basic-user
Subjects:
  Kind   Name                  Namespace
  ----   ----                  ---------
  Group  system:authenticated
$ kubectl describe clusterrole system:basic-user
Name:         system:basic-user
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                                      Non-Resource URLs  Resource Names  Verbs
  ---------                                      -----------------  --------------  -----
  selfsubjectaccessreviews.authorization.k8s.io  []                 []              [create]
  selfsubjectrulesreviews.authorization.k8s.io   []                 []              [create]
My guess is that the chart or Terraform config in question is responsible for creating the service account, and the [cluster] roles and rolebindings, but it might be doing so in the wrong order, or not idempotently (so you get different results on re-install vs the initial install). But we would need to see a configuration that reproduces this error. In my testing of version 2 of the providers on AKS, EKS, GKE, and minikube, I haven't seen this issue come up.
Feel free to browse these working examples of building specific clusters and using them with Kubernetes and Helm providers. Giving the config a skim might give you some ideas for troubleshooting further.
How to solve RBAC issues
As for the error
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list
There is a great explanation by #m-abramovich:
First, some information for newbies.
In Kubernetes there are:
Account - something like your ID. Example: john
Role - some group in the project permitted to do something. Examples: cluster-admin, it-support, ...
Binding - joining Account to Role. "John in it-support" - is a binding.
Thus, in our message above, we see that our Tiller acts as account "default" registered at namespace "kube-system". Most likely you didn't bind him to a sufficient role.
Now back to the problem.
How do we track it:
check if you have a specific account for tiller. Usually it has the same name, "tiller":
kubectl [--namespace kube-system] get serviceaccount
create if not:
kubectl [--namespace kube-system] create serviceaccount tiller
check if you have a role or clusterrole (a clusterrole is "better" for newbies: it is cluster-wide, unlike a namespace-wide role). If this is not production, you can use the highly privileged role "cluster-admin":
kubectl [--namespace kube-system] get clusterrole
you can check role content via:
kubectl [--namespace kube-system] get clusterrole cluster-admin -o yaml
check if account "tiller" in first clause has a binding to clusterrole "cluster-admin" that you deem sufficient:
kubectl [--namespace kube-system] get clusterrolebinding
if it is hard to figure out based on names, you can simply create a new one:
kubectl [--namespace kube-system] create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
finally, when you have the account, the role and the binding between them, you can check if you really act as this account:
kubectl [--namespace kube-system] get deploy tiller-deploy -o yaml
I suspect that your output will not have settings "serviceAccount" and "serviceAccountName":
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
if so, then add the account you want tiller to use:
kubectl [--namespace kube-system] patch deploy tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
(if you use PowerShell, then check below for post from #snpdev)
Now repeat the previous check command and see the difference:
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: tiller <-- new line
serviceAccountName: tiller <-- new line
terminationGracePeriodSeconds: 30
Resources:
Using RBAC Authorization | Kubernetes
Demystifying RBAC in Kubernetes | Cloud Native Computing Foundation
Helm | Role-based Access Control
Lock and Upgrade Provider Versions | Terraform - HashiCorp Learn

Messed up with configmap aws-auth

I was trying to add permission to view nodes to my admin IAM using information in this article (https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-object-access-error/) and ended up saving the configmap with a malformed mapUsers section (didn't include the username at all)
Now every kubectl command returns an error like this: Error from server (Forbidden): nodes is forbidden: User "" cannot list resource "nodes" in API group "" at the cluster scope
How can I circumvent the corrupted configmap and regain access to the cluster? I found two questions on Stack Overflow, but as I am very new to Kubernetes I am still baffled as to what exactly I need to do.
Mistakenly updated configmap aws-auth with rbac & lost access to the cluster
I have access to the root user, but kubectl doesn't work for this user either.
Is there another way to authenticate to the cluster?
Update 1
Yesterday I recreated this problem on a new cluster: I still got this error even though I am the root user.
The structure of the configmap goes like this:
apiVersion: v1
data:
  mapRoles: <default options>
  mapUsers: |
    - userarn: arn:aws:iam::<root id>:root
      username: # there should be a username value on this line, but it's missing in my configmap; presumably this is the cause
      groups:
        - system:bootstrappers
        - system:nodes
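For comparison, a well-formed mapUsers entry carries a value on the username line; a sketch follows, where the account ID is a placeholder and the username/groups mirror the admin mapping shown later in this page (not necessarily what this cluster needs):

```shell
# Write and print a well-formed mapUsers entry; every value here is a
# placeholder for illustration only.
cat > /tmp/mapusers-fixed.yaml <<'EOF'
mapUsers: |
  - userarn: arn:aws:iam::111122223333:root
    username: admin
    groups:
      - system:masters
EOF
cat /tmp/mapusers-fixed.yaml
```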
Update 2
Tried to use serviceAccount token, got an error:
Error from server (Forbidden): configmaps "aws-auth" is forbidden: User "system:serviceaccount:kube-system:aws-node" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
How did you create your cluster? The IAM user, or IAM role, that you used to actually create it, is grandfathered in as a sysadmin. So long as you use the same credentials that you used for
aws eks create-cluster
You can do
aws eks update-kubeconfig, followed by using kubectl to modify the configmap and give other entities the permissions.
You haven't said what you actually tried. Let's do some more troubleshooting:
system:serviceaccount:kube-system:aws-node - this is saying that THIS Kubernetes user does not have permission to modify configmaps. But that is completely correct: it SHOULDN'T. What command did you run to get that error? What were the contents of your kubeconfig context that generated that message? Did you run the command from a worker node, maybe?
You said "I have access to the root user". Access in what way? Through the console? With an AWS_SECRET_ACCESS_KEY? You'll need the second; assuming that's the case, run aws iam get-caller-identity and post the results.
Root user or not, the only user that has guaranteed access to the cluster is the one that created it. Are there ANY OTHER IAM users or roles in your account? Do you have CloudTrail enabled? If so, you could go back and check the logs and verify that it was the root user that issued the create-cluster command.
After running get-caller-identity, remove your .kube/config file and run aws eks update-kubeconfig. Tell us the output from the command, and the contents of the new config file.
Run kubectl auth can-i '*' '*' with the new config and let us know the result.

Error loading Namespaces. Unauthorized: Verify you have access to the Kubernetes cluster

I have created an EKS cluster using the command-line tool eksctl and verified that the application is working fine.
But I am noticing a strange issue: when I try to access the nodes in the cluster in the web browser, I see the following error
Error loading Namespaces
Unauthorized: Verify you have access to the Kubernetes cluster
I am able to see the nodes using kubectl get nodes
I am logged in as the admin user. Any help on how to work around this would be really great. Thanks.
You will need to add your IAM role/user to your cluster's aws-auth config map
Basic steps to follow taken from https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
kubectl edit -n kube-system configmap/aws-auth
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mapRoles: |
    - rolearn: <arn:aws:iam::111122223333:role/eksctl-my-cluster-nodegroup-standard-wo-NodeInstanceRole-1WP3NUE3O6UCF>
      username: <system:node:{{EC2PrivateDNSName}}>
      groups:
        - <system:bootstrappers>
        - <system:nodes>
  mapUsers: |
    - userarn: <arn:aws:iam::111122223333:user/admin>
      username: <admin>
      groups:
        - <system:masters>
    - userarn: <arn:aws:iam::111122223333:user/ops-user>
      username: <ops-user>
      groups:
        - <system:masters>
I am also seeing this error, and it was introduced by the latest addition to EKS; see https://aws.amazon.com/blogs/containers/introducing-the-new-amazon-eks-console/
Since then, the console makes requests to EKS on behalf of the user or role you are logged in as.
So make sure the kube-system:aws-auth configmap has that user or role added.
This user/role might not be the same one you are using locally with the AWS CLI, hence kubectl might work while you still see that error!
Amazon recently (December 2020) added a new feature that allows you to browse workloads inside the cluster from the AWS Console.
If you are missing permissions, you will get that error.
The permissions needed are described here:
https://docs.aws.amazon.com/eks/latest/userguide/security_iam_id-based-policy-examples.html#policy_example3
This might as well be because you created the AWS EKS cluster using a different IAM user than the one currently logged into the AWS Management Console, in which case the logged-in IAM user does not have permission to view the namespaces on the AWS EKS cluster.
Try logging in to the AWS Management Console using the credentials of the IAM user who created the AWS EKS cluster; the issue should be fixed.

"kubectl" not connecting to aws EKS cluster from my local windows workstation

I am trying to set up an AWS EKS cluster and want to connect to that cluster from my local Windows workstation, but I am not able to connect to it. Here are the steps I did:
Create an AWS service role (AWS console -> IAM -> Roles -> click "Create role" -> select AWS service role "EKS" -> give role name "eks-role-1").
Create another user in IAM named "eks" for programmatic access. This will help me connect to my EKS cluster from my local Windows workstation. The policies I added to it are "AmazonEKSClusterPolicy", "AmazonEKSWorkerNodePolicy", "AmazonEKSServicePolicy", and "AmazonEKS_CNI_Policy".
Next, the EKS cluster was created with the role ARN created in step 1. Finally the EKS cluster was created in the AWS console.
On my local Windows workstation, I downloaded "kubectl.exe" and "aws-iam-authenticator.exe" and ran 'aws configure' using the access key and secret from step 2 for the user "eks". After configuring "~/.kube/config", I ran the command below and got this error:
Command: kubectl.exe get svc
output:
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
(the two lines above are repeated five times in the output)
Unable to connect to the server: getting credentials: exec: exit status 1
Not sure what is wrong with the setup here. Can someone please help? I know some places say you have to use the same AWS user to connect to the cluster (EKS). But how can I get an access key and secret for the AWS service role (step #1: eks-role-1)?
For people who ran into this: maybe you provisioned EKS with a named profile.
EKS does not add the profile inside kubeconfig.
Solution:
export AWS credential
$ export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxx
$ export AWS_SECRET_ACCESS_KEY=ssssssssss
If you've already configured AWS credentials, try exporting AWS_PROFILE:
$ export AWS_PROFILE=ppppp
Similar to 2, but you only need to do it once: edit your kubeconfig
users:
- name: eks # This depends on your config.
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "general"
      env:
        - name: AWS_PROFILE
          value: "<YOUR_PROFILE_HERE>"
I think I got the answer for this issue; I want to write it down here so people will benefit from it.
When you are creating an EKS cluster for the first time, check which user you are creating it as (check your AWS web console user settings). Even if you are creating it from a CFN script, a different role can be assigned to create the cluster. You have to get CLI access for that user to start accessing your cluster from the kubectl tool. Once you get first-time access (that user will have admin access by default), you may need to add other IAM users to the cluster admin (or another) role using the configMap; only then can you switch to or use an alternative IAM user to access the cluster from the kubectl command line.
Make sure the file ~/.aws/credentials has an AWS access key and secret key for an IAM account that can manage the cluster.
Alternatively you can set the AWS env parameters:
export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=ssssssssss
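The expected shape of that credentials file can be sketched as follows; the key values are the placeholders from above, and the file is written to /tmp here (rather than ~/.aws) so the sketch is harmless to run:

```shell
# Illustrative ~/.aws/credentials layout under the default profile; the
# key values are placeholders, not real credentials.
cat > /tmp/credentials-example <<'EOF'
[default]
aws_access_key_id = xxxxxxxxxxxxx
aws_secret_access_key = ssssssssss
EOF
# Count the two credential lines as a quick shape check:
grep -c '^aws_' /tmp/credentials-example
# prints: 2
```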
Adding another option.
Instead of working with aws-iam-authenticator you can change the command to aws and replace the args as below:
- name: my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args: # <--- Change the args
        - --region
        - <YOUR_REGION>
        - eks
        - get-token
        - --cluster-name
        - my-cluster
      command: aws # <--- Change the command to aws
      env:
        - name: AWS_PROFILE
          value: <YOUR_PROFILE_HERE>

Attempting to share cluster access results in creating new cluster

I have deleted the config file I used when experimenting with Kubernetes on my AWS (using this tutorial) and replaced it with another dev's config file from when they set up Kubernetes on a shared AWS (using this). When I run kubectl config view I see the following above the users section:
- cluster:
    certificate-authority-data: REDACTED
    server: <removed>
  name: aws_kubernetes
contexts:
- context:
    cluster: aws_kubernetes
    user: aws_kubernetes
  name: aws_kubernetes
current-context: aws_kubernetes
This leads me to believe that my config should be pointing at our shared AWS, but whenever I run cluster/kube-up.sh it creates a new GCE cluster, so I'm thinking I'm using the wrong command to spin up the cluster on AWS.
Am I using the wrong command/missing a flag/etc.? Additionally, I'm thinking kube-up creates a new cluster instead of reconnecting to a previously instantiated one.
If you are sharing the cluster, you shouldn't need to run kube-up.sh (that script only needs to be run once to initially create a cluster). Once a cluster exists, you can use the standard kubectl commands to interact with it. Try starting with kubectl get nodes to verify that your configuration file has valid credentials and you see the expected AWS nodes printed in the output.